How do I detect or identify the middle row of crops using hough()?

[Attached image: 456.jpg]
  1 Comment
MAHDI JAVADI on 5 Jan 2019
Edited: MAHDI JAVADI on 5 Jan 2019
Actually, I have used this code to binarize this picture.
clear; clc;
close all;
rgb_img = imread('456.jpg');   % the attached image
figure(1)
imshow(rgb_img);
% Extract the color channels.
r = rgb_img(:, :, 1);
g = rgb_img(:, :, 2);
b = rgb_img(:, :, 3);
% Emphasize green vegetation: green minus half of red and blue.
imageGreyed = g - r/2 - b/2;
% Threshold to get a binary mask of the plants.
BW = imageGreyed > 10;
% Alternative: BW = edge(imageGreyed, 'canny');
% Hough transform to find straight crop rows.
[H, T, R] = hough(BW);
P = houghpeaks(H, 100);
lines = houghlines(BW, T, R, P, 'FillGap', 400, 'MinLength', 400);
figure(2)
imshow(BW);
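To see what the detected Hough lines look like, here is a minimal sketch (not tested on this image, assuming the code above has run and lines is nonempty) that overlays them on the binary mask:
% Overlay the detected Hough lines on the binary image.
figure(3)
imshow(BW);
hold on;
for k = 1 : length(lines)
    xy = [lines(k).point1; lines(k).point2];   % endpoints of line k
    plot(xy(:, 1), xy(:, 2), 'LineWidth', 2, 'Color', 'green');
end
hold off;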


Accepted Answer

Image Analyst on 5 Jan 2019
I don't think you need hough(). You can simply use regionprops() and ask for the centroid and orientation. You can then use ismember() on the labeled image to extract either
  1. the blob with the most vertical orientation, OR
  2. the blob with the x centroid closest to the middle of the image.
Hopefully those are the same blob - I think they will be for the example you posted. Starting with your segmented, binary image, it goes something like this:
labeledImage = bwlabel(binaryImage);
props = regionprops(labeledImage, 'Centroid', 'Orientation');
% Find the blob whose orientation is closest to vertical.
% (Orientation from regionprops is in the range (-90, 90] degrees.)
diffAngles = abs(abs([props.Orientation]) - 90);
[minAngleDiff, index] = min(diffAngles);
centerBlob = ismember(labeledImage, index);
% Find the blob whose x centroid is closest to the middle of the image.
[rows, columns] = size(binaryImage);
xyCentroids = vertcat(props.Centroid);
xCentroids = xyCentroids(:, 1);
diffx = abs(xCentroids - columns/2);
[minXDiff, index] = min(diffx);
centerBlob2 = ismember(labeledImage, index);
I haven't tested that. It's just off the top of my head. If you run into trouble, write back.
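If it helps, here is a rough, equally untested sketch of how the selected blob could be turned into a guide line: draw a line through its centroid along its orientation. It uses centerBlob2 and the index left over from the centroid-based search above.
% Sketch only: draw a guide line through the chosen blob.
% Assumes props, index, and centerBlob2 from the code above still exist.
figure;
imshow(centerBlob2);
hold on;
cx = props(index).Centroid(1);      % centroid x of the chosen blob
cy = props(index).Centroid(2);      % centroid y of the chosen blob
theta = props(index).Orientation;   % degrees, counterclockwise from the x axis
halfLen = 500;                      % half-length of the drawn line, in pixels
dx = halfLen * cosd(theta);
dy = -halfLen * sind(theta);        % image y axis points down, so negate
plot([cx - dx, cx + dx], [cy - dy, cy + dy], 'r-', 'LineWidth', 2);
hold off;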
  3 Comments
MAHDI JAVADI on 6 Jan 2019
Sorry, I phrased my question badly. Basically, I want to detect a line down the middle of each crop row to use as a guide line for a robot, but I cannot manage it. Would you please help me? This is the only problem I have left.
Image Analyst on 7 Jan 2019
Edited: Image Analyst on 7 Jan 2019
Why do you want my crummy answer when there are published papers by people who have worked on this for years and whose algorithms are proven and robust? I gave you the link to those papers. Did you read any of them? If not, why not? They will have better algorithms than I or anyone here could suggest.
I could give you something that works for one image but not for the next; then I get it working for those two and you come along with a third where it doesn't work. Then I get it working for those three images, and along you come with a fourth where it doesn't work, and so on. Why not just start right from the beginning with a proven algorithm that works robustly for all images? If you don't want to code it up yourself, you might even be able to buy the code from the authors to speed things along, or hire them to do your project for you. If you're working on a project to have a robot move along a farm and automatically identify rows, crops, and weeds, and "take care of" the weeds, you must have a budget for software development - that might even be the cheapest part of your project.
And also, you had better take some images from your actual robot, since these images look like random ones taken off the internet, and the problem is that their viewpoint, magnification, and vanishing point are all over the place. We can't have that. Just post the images your specific robot captures. If you don't have them yet, then wait until it can generate some, because developing an algorithm that works for your images is much easier than developing one that works for all possible images taken from any viewpoint, magnification, and vanishing point.


More Answers (0)
