Can someone explain the if statement and the code below it, and what is taggedCars doing?
Kartikey Rai
on 15 Sep 2019
% Detecting light-coloured cars in a traffic video
% Accessing the video and getting basic info about it
trafficVid = VideoReader('traffic.mj2');
% Playing the video
implay('traffic.mj2');
%% 1st stage processing
% Selecting a frame of the video and applying the algorithm on it
% Then the algorithm can be applied to all frames of the video
% Regional maxima
darkCarValue = 50;
% Converting RGB video to grey-scale video
% read - returns the specified frame(s) from the video file represented by
% the VideoReader object; read(trafficVid, 71) returns frame 71 as an RGB
% image
darkCar = rgb2gray(read(trafficVid,71));
% imextendedmax - computes the extended-maxima transform: a binary image
% marking the regional maxima after suppressing maxima shallower than
% darkCarValue
noDarkCar = imextendedmax(darkCar, darkCarValue);
%Displaying above results
%figure
%imshow(darkCar);
%imshow(noDarkCar);
%% 2nd stage processing
% In this stage, we will be using morphological processing to remove
% objects like lane dividers and lane markings
% Structuring element approximating the shape and size of the markings and
% dividers
sedisk = strel('disk',2);
% imopen - remove small objects while preserving large objects
noSmallStructures = imopen(noDarkCar, sedisk);
% Displaying above results
%imshow(noSmallStructures)
%% 3rd stage processing
% In this stage, we will be applying the above algorithm to every frame of
% the video via a loop
% Calculating number of frames in video
nframes = trafficVid.NumberOfFrames;
% See the comment on read() above
I = read(trafficVid, 1);
taggedCars = zeros([size(I,1) size(I,2) 3 nframes], class(I));
% Applying algorithm at every frame of video
for k = 1 : nframes
    % See the comments above for details on the functions used
    singleFrame = read(trafficVid, k);
    I = rgb2gray(singleFrame);
    noDarkCars = imextendedmax(I, darkCarValue);
    noSmallStructures = imopen(noDarkCars, sedisk);
    noSmallStructures = bwareaopen(noSmallStructures, 150);
    % Get the area and centroid of each remaining object in the frame. The
    % object with the largest area is the light-colored car. Create a copy
    % of the original frame and tag the car by changing the centroid pixel
    % value to red.
    taggedCars(:,:,:,k) = singleFrame;
    stats = regionprops(noSmallStructures, {'Centroid','Area'});
    if ~isempty([stats.Area])
        areaArray = [stats.Area];
        [junk,idx] = max(areaArray);
        c = stats(idx).Centroid;
        c = floor(fliplr(c));
        width = 2;
        row = c(1)-width:c(1)+width;
        col = c(2)-width:c(2)+width;
        taggedCars(row,col,1,k) = 255;
        taggedCars(row,col,2,k) = 0;
        taggedCars(row,col,3,k) = 0;
    end
end
%% Displaying final results after applying the above algorithm
frameRate = trafficVid.FrameRate;
implay(taggedCars,frameRate);
Accepted Answer
Walter Roberson
on 15 Sep 2019
If no blobs are detected then regionprops() would return an empty struct, and [stats.Area] would be empty. In such a case you do not wish to mark any blob as being associated with a car.
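A minimal sketch (with a made-up mask size) of what that check looks like when a frame contains no blobs:
bw = false(120, 160);                         % hypothetical all-background mask
stats = regionprops(bw, {'Centroid','Area'}); % 0-by-1 struct array
isempty([stats.Area])                         % true -> the tagging block is skipped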
taggedCars is an array of multiple RGB images, in which the centroid of each detected car is replaced with a single red pixel.
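A minimal sketch (with made-up, smaller dimensions) of how that 4-D array is laid out:
taggedCars = zeros(120, 160, 3, 10, 'uint8'); % 10 small RGB frames: height x width x 3 x frame
frame5 = taggedCars(:,:,:,5);                 % the full RGB image for frame 5
size(frame5)                                  % 120 160 3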
3 Comments
Walter Roberson
on 15 Sep 2019
[junk, idx] = max(areaArray);
finds the maximum value of areaArray, assigns it to the variable named junk, and returns the index at which the maximum occurred in the variable idx.
The variable junk is then not used again in the code: the code does not care what the maximum area is, it only wants to know which one has the maximum area.
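A small sketch with made-up areas, showing that only the second output of max() is actually used:
areaArray = [120 480 95];            % hypothetical blob areas
[largestArea, idx] = max(areaArray); % largestArea = 480, idx = 2
% In newer MATLAB the unused first output can be discarded with a tilde:
% [~, idx] = max(areaArray);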
c = stats(idx).Centroid;
regionprops() returns a struct array, one element per blob that it found. stats(idx) is indexing into that struct array to return one particular struct -- the one associated with the blob with the largest area. The Centroid property of that blob is assigned to c.
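A sketch with a hypothetical two-blob mask, showing that stats has one element per blob and stats(idx) picks out a single blob's struct:
bw = false(10, 10);
bw(2:3, 2:3) = true;                          % small blob, area 4
bw(6:9, 6:9) = true;                          % larger blob, area 16
stats = regionprops(bw, {'Centroid','Area'}); % 2-by-1 struct array
[~, idx] = max([stats.Area]);                 % idx = 2, the larger blob
c = stats(idx).Centroid                       % [7.5 7.5], in [x y] order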
c = floor(fliplr(c));
regionprops() returns Centroid information in the order [x, y, third_dimension, fourth_dimension, fifth_dimension, ...]. In the particular case of 2D input, this is just [x, y]. fliplr() of that makes the order [y, x]. floor() of that rounds the values down to integers.
Why do this? MATLAB arrays are indexed first by row number and then by column number. Row corresponds to vertical position down the image (height) and column to horizontal position across it (width). Height corresponds to the cartesian y coordinate and width to the cartesian x coordinate, so MATLAB arrays are indexed by [y, x], not [x, y]. To use the centroid coordinates as row and column indices, you therefore exchange the [x, y] order to [y, x] and truncate the values to integers.
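A sketch with a made-up centroid, showing the flip from [x, y] to [row, col]:
c = [12.7, 5.3];        % hypothetical centroid: x = 12.7, y = 5.3
rc = floor(fliplr(c));  % rc = [5 12] -> row 5, column 12
img = zeros(20, 20);
img(rc(1), rc(2)) = 1;  % marks the pixel at row 5, column 12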
row = c(1)-width:c(1)+width;
col = c(2)-width:c(2)+width;
If you only mark a single pixel in an image, then it is easy to overlook. So instead of marking only a single pixel, the code marks an area that is width pixels to the left, right, above, and below the center, creating a (2*width+1) x (2*width+1) block of pixels.
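A sketch with a made-up centre, showing how the two ranges cover a 5-by-5 neighbourhood when width is 2:
c = [10, 15];                   % hypothetical [row, col] after fliplr/floor
width = 2;
row = c(1)-width : c(1)+width   % 8 9 10 11 12
col = c(2)-width : c(2)+width   % 13 14 15 16 17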
taggedCars(row,col,1,k) = 255;
taggedCars(row,col,2,k) = 0;
taggedCars(row,col,3,k) = 0;
row and col are both vectors here. When you index an array with vectors, the locations accessed are all combinations of the indices. For example, if width were 1, then c(1)-width:c(1)+width would be [c(1)-1, c(1), c(1)+1] and c(2)-width:c(2)+width would be [c(2)-1, c(2), c(2)+1], so taggedCars([c(1)-1, c(1), c(1)+1], [c(2)-1, c(2), c(2)+1], 1, k) refers to all 9 combinations of those row and column indices, and all 9 locations are assigned the value 255. Likewise, the (2*width+1) * (2*width+1) locations in taggedCars(row,col,2,k) and taggedCars(row,col,3,k) are all assigned 0.
Why 255 and 0 and 0? Because in RGB representation, 255 is full red component, and 0 is no green component, and the second 0 is no blue component, and full red + no green + no blue is the combination for red. So the code is creating a (2*width+1) x (2*width+1) patch of red pixels centered on the centroid of the detected blob.
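A self-contained sketch (made-up image size and location) of the same red-patch assignment on a flat grey test image:
img = uint8(128 * ones(20, 20, 3)); % flat grey RGB test image
row = 8:12;
col = 13:17;
img(row, col, 1) = 255;             % red channel full on
img(row, col, 2) = 0;               % green channel off
img(row, col, 3) = 0;               % blue channel off
imshow(img)                         % shows a 5x5 red patch on grey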