Input image size must be greater than [64 1856] error in lidar labeler app
I have an error in my machine learning algorithm where it displays an issue with the image size. I am using a .pcd file different from the PandaSet data, and I am wondering why there is an error with the network model. I can't determine what sets the netSize, or how I would adjust things for a new set of .pcd data.
The error is:
Error using
Input image size must be greater than [64 1856]. The minimum input image size must be equal to or
greater than the input size in image input layer of the network.
Error in ()
iCheckImage(I, netSize);
params = iParseInputs(I, net, varargin{:});
predictedResult = semanticseg(I, algObj.PretrainedNetwork);
videoLabels = run(this, frame);
Error in lidar.internal.lidarLabeler.tool.TemporalLabelingTool/runAlgorithm
Error in vision.internal.labeler.tool.AlgorithmTab/setAlgorithmModeAndExecute
Error in vision.internal.labeler.tool.AlgorithmTab
feval(callback, src, event);
internal.Callback.execute(this.PushPerformedFcn, this, eventdata);
Error in matlab.ui.internal.toolstrip.base.ActionInterface>@(event,data)PeerEventCallback(this,event,data) (line 57)
this.PeerEventListener = addlistener(this.Peer, 'peerEvent', @(event, data) PeerEventCallback(this, event, data));
feval(fcn{1},varargin{:},fcn{2:end});
hgfeval(response, java(o), e.JavaEvent)
@(o,e) cbBridge(o,e,response));
classdef LidarSemanticSegmentation < lidar.labeler.AutomationAlgorithm
% LidarSemanticSegmentation Automation algorithm performs semantic
% segmentation in the point cloud.
% LidarSemanticSegmentation is an automation algorithm for segmenting
% a point cloud using SqueezeSegV2 semantic segmentation network
% which is trained on Pandaset data set.
%
% See also lidarLabeler, groundTruthLabeler
% lidar.labeler.AutomationAlgorithm.
% Copyright 2021 The MathWorks, Inc.
% ----------------------------------------------------------------------
% Step 1: Define the required properties describing the algorithm. This
% includes Name, Description, and UserDirections.
properties(Constant)
% Name Algorithm Name
% Character vector specifying the name of the algorithm.
Name = 'Lidar Semantic Segmentation';
% Description Algorithm Description
% Character vector specifying the short description of the algorithm.
Description = 'Segment the point cloud using SqueezeSegV2 network.';
% UserDirections Algorithm Usage Directions
% Cell array of character vectors specifying directions for
% algorithm users to follow to use the algorithm.
UserDirections = {['ROI Label Definition Selection: select one of ' ...
'the ROI definitions to be labeled'], ...
'Run: Press RUN to run the automation algorithm. ', ...
['Review and Modify: Review automated labels over the interval ', ...
'using playback controls. Modify/delete/add ROIs that were not ' ...
'satisfactorily automated at this stage. If the results are ' ...
'satisfactory, click Accept to accept the automated labels.'], ...
['Accept/Cancel: If the results of automation are satisfactory, ' ...
'click Accept to accept all automated labels and return to ' ...
'manual labeling. If the results of automation are not ' ...
'satisfactory, click Cancel to return to manual labeling ' ...
'without saving the automated labels.']};
end
% ---------------------------------------------------------------------
% Step 2: Define properties you want to use during the algorithm
% execution.
properties
% AllCategories
% AllCategories holds the default 'unlabelled', 'Vegetation',
% 'Ground', 'Road', 'RoadMarkings', 'SideWalk', 'Car', 'Truck',
% 'OtherVehicle', 'Pedestrian', 'RoadBarriers', 'Signs',
% 'Buildings' categorical types.
AllCategories = {'unlabelled'};
% PretrainedNetwork
% PretrainedNetwork saves the pretrained SqueezeSegV2 network.
PretrainedNetwork
end
%----------------------------------------------------------------------
% Note: this method needs to be included for the lidarLabeler app to
% recognize the algorithm as operating on point clouds.
methods (Static)
% This method is static to allow the apps to call it and check the
% signal type before instantiation. When users refresh the
% algorithm list, we can quickly check and discard algorithms for
% any signal that is not supported in a given app.
function isValid = checkSignalType(signalType)
isValid = (signalType == vision.labeler.loading.SignalType.PointCloud);
end
end
%----------------------------------------------------------------------
% Step 3: Define methods used for setting up the algorithm.
methods
function isValid = checkLabelDefinition(algObj, labelDef)
% Only Voxel ROI label definitions are valid for the Lidar
% semantic segmentation algorithm.
isValid = labelDef.Type == lidarLabelType.Voxel;
if isValid
algObj.AllCategories{end+1} = labelDef.Name;
end
end
function isReady = checkSetup(algObj)
% Is there one selected ROI Label definition to automate.
isReady = ~isempty(algObj.SelectedLabelDefinitions);
end
end
%----------------------------------------------------------------------
% Step 4: Specify algorithm execution. This controls what happens when
% the user presses RUN. Algorithm execution proceeds by first
% executing initialize on the first frame, followed by run on
% every frame, and terminate on the last frame.
methods
function initialize(algObj,~)
% Load the pretrained SqueezeSegV2 semantic segmentation network.
outputFolder = fullfile(tempdir, 'Pandaset');
pretrainedSqueezeSeg = load(fullfile(outputFolder,'trainedSqueezeSegV2PandasetNet.mat'));
% Store the network in the 'PretrainedNetwork' property of this object.
algObj.PretrainedNetwork = pretrainedSqueezeSeg.net;
end
function autoLabels = run(algObj, pointCloud)
% Setup categorical matrix with categories including
% 'Vegetation', 'Ground', 'Road', 'RoadMarkings', 'SideWalk',
% 'Car', 'Truck', 'OtherVehicle', 'Pedestrian', 'RoadBarriers',
% and 'Signs'.
autoLabels = categorical(zeros(size(pointCloud.Location,1), size(pointCloud.Location,2)), ...
0:12,algObj.AllCategories);
% Convert the input point cloud to five channel image.
I = helperPointCloudToImage(pointCloud);
% Predict the segmentation result.
predictedResult = semanticseg(I, algObj.PretrainedNetwork);
autoLabels(:) = predictedResult;
% TODO: from here we could continuously publish the latest result,
% e.g. send the output to the CAN network, or at least make sure the
% segmentation result is available to other components.
end
end
end
function image = helperPointCloudToImage(ptcloud)
% helperPointCloudToImage converts the point cloud to 5 channel image
image = ptcloud.Location;
image(:,5) = ptcloud.Intensity;
rangeData = iComputeRangeData(image(:,1),image(:,2),image(:,3));
image(:,4) = rangeData;
index = isnan(image);
image(index) = 0;
end
function rangeData = iComputeRangeData(xChannel,yChannel,zChannel)
rangeData = sqrt(xChannel.*xChannel+yChannel.*yChannel+zChannel.*zChannel);
end
1 Comment
zouhair
on 24 May 2024
Hi sir, I have the same problem. Can you please share the solution with me if you found it?
Answers (1)
Prasanna
on 28 Feb 2024
Hi Kevin,
It is my understanding that you are performing semantic segmentation on a point cloud using a pretrained SqueezeSegV2 semantic segmentation network and are facing an error with respect to the input size in the image input layer of the network.
Based on the error message you've provided, the root cause of the issue is the size of the input image being fed into the neural network for semantic segmentation. The error message means that the network expects the input image to be at least 64 pixels in height and 1856 pixels in width. The 'netSize' variable mentioned in the error message refers to these minimum dimensions required by the image input layer of the network.
To further clarify, the pretrained network you are using, as referred to in the MATLAB documentation at https://www.mathworks.com/help/lidar/ug/semantic-segmentation-using-squeezesegv2-network.html, employs a network architecture where the expected input size is `[64 1856 5]`. This means that the network is designed to process images with a height of 64 pixels, a width of 1856 pixels, and 5 channels. Therefore, when preparing your pcd data for segmentation with this network, you must ensure that the images you use match these exact dimensions. If your converted images do not meet these specifications, the network will not be able to process them, resulting in the error you are encountering.
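To see exactly what size the network demands on your machine, you can inspect its image input layer directly. This is only a sketch: it assumes the same MAT-file location and variable name used in the `initialize` method of the automation class above, so adjust the path if you stored the network elsewhere.
```matlab
% Inspect the minimum input size of the pretrained network.
% (Path and variable name assumed from the automation class above.)
outputFolder = fullfile(tempdir, 'Pandaset');
data = load(fullfile(outputFolder, 'trainedSqueezeSegV2PandasetNet.mat'));
net  = data.net;

% The image input layer is typically the first layer of the network.
netSize = net.Layers(1).InputSize;
disp(netSize)   % expected to be [64 1856 5] for the Pandaset model
```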
If the images from your .pcd files are smaller than the required size, you may need to apply padding or resizing to the images to meet the network's input size requirements. Padding involves adding extra pixels around the image to increase its size without changing the original content, while resizing changes the dimensions of the image, which could potentially alter the aspect ratio and the data within.
Here are some steps you can take to resolve the issue:
- Verify the size of the images after conversion from point cloud to image using ‘helperPointCloudToImage’.
- If the images are smaller than the required ‘netSize’, apply appropriate image processing techniques to adjust the size.
- Ensure that the preprocessing steps do not significantly alter the meaningful content of the images, as this could impact the performance of the semantic segmentation.
- Once the image size is adjusted, feed the processed images into the network.
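As a rough sketch of the padding approach from the steps above, the five-channel image returned by `helperPointCloudToImage` can be zero-padded up to the network's minimum size before calling `semanticseg`. The helper name `padToNetSize` is illustrative, not part of any toolbox; it uses only basic indexing, so no extra toolboxes are required:
```matlab
function Ipadded = padToNetSize(I, netSize)
% padToNetSize zero-pads an H-by-W-by-C image so that its first two
% dimensions are at least netSize(1)-by-netSize(2). Channels are kept.
% Example: padToNetSize(I, [64 1856]) for the Pandaset SqueezeSegV2 net.
    newH = max(size(I,1), netSize(1));
    newW = max(size(I,2), netSize(2));
    Ipadded = zeros(newH, newW, size(I,3), 'like', I);
    Ipadded(1:size(I,1), 1:size(I,2), :) = I;  % original data in top-left
end
```
Inside `run`, you would pad the converted image before prediction, then crop the predicted label matrix back to the original point cloud size before assigning it to `autoLabels`, since `semanticseg` returns labels for the padded image.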
Hope this helps.
Regards,
Prasanna