Hello, it's my first time posting here, so I apologize if I'm not doing something properly.
I'm currently working on a project where I need to know a drone's position and attitude relative to its landing pad using only a camera. The landing pad has 11 LEDs of different colors. My first step was creating a color mask to extract the LEDs. The image below shows the result of the masked points.
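To make the masking step concrete, here is a simplified sketch of the kind of per-channel color thresholding I mean (plain numpy, not my actual code; the threshold values are made up for illustration):

```python
import numpy as np

def mask_leds(img, lo, hi):
    """Keep pixels whose RGB values fall inside [lo, hi] per channel.

    img : H x W x 3 uint8 array
    lo, hi : length-3 lower/upper bounds for (R, G, B)
    Returns an H x W boolean mask.
    """
    lo = np.asarray(lo)
    hi = np.asarray(hi)
    return np.all((img >= lo) & (img <= hi), axis=-1)

# Tiny synthetic frame: one bright-red "LED" pixel on a dark background.
frame = np.zeros((4, 4, 3), dtype=np.uint8)
frame[1, 2] = (250, 30, 20)

red = mask_leds(frame, lo=(200, 0, 0), hi=(255, 80, 80))
print(np.argwhere(red))  # -> [[1 2]]
```

In the real pipeline one mask per LED color would be combined, and the centroid of each connected blob taken as that LED's image position.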
The mask does a pretty good job of getting rid of noise, but sometimes it does not identify all 11 points. My understanding is that I should use SURF feature matching to track where the points moved in the next frame, so I can then use the relativeCameraPose function to get the relative pose and attitude with respect to the landing pad. However, I can't get good matches. The image below shows the SURF matches between two consecutive frames, where the red points are the LEDs detected in one frame and the blue ones those detected in the frame after it.
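For what it's worth, what I'm ultimately trying to achieve with the matching step is something like the following sketch: pair each LED in one frame with the nearest LED of the same color in the next frame (plain Python/numpy, a hypothetical illustration of the correspondence I want, not what SURF gives me):

```python
import numpy as np

def match_by_color(pts_a, pts_b):
    """pts_* : lists of (x, y, color_id) tuples, one list per frame.

    Pairs each point in frame A with the nearest point in frame B
    that carries the same color label. Returns (index_a, index_b) pairs.
    """
    matches = []
    for i, (xa, ya, ca) in enumerate(pts_a):
        best, best_d = None, np.inf
        for j, (xb, yb, cb) in enumerate(pts_b):
            if cb != ca:
                continue  # only match LEDs of the same color
            d = (xa - xb) ** 2 + (ya - yb) ** 2
            if d < best_d:
                best, best_d = j, d
        if best is not None:
            matches.append((i, best))
    return matches

frame_a = [(10, 12, "red"), (40, 41, "green")]
frame_b = [(39, 43, "green"), (11, 13, "red")]
print(match_by_color(frame_a, frame_b))  # -> [(0, 1), (1, 0)]
```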
What am I doing wrong? Is there a better way to get the relative pose?