How to convert a 3D point cloud (extracted from sparse 3D reconstruction) from pixels to millimeters?
I have found a 3D point cloud using sparse 3D reconstruction (like this example: http://www.mathworks.com/help/vision/ug/sparse-3-d-reconstruction-from-multiple-views.html )
Now I was wondering how I can convert the (X, Y, Z) coordinates in this point cloud to actual real-world measurements in millimeters.
Can I use something like this ( http://www.mathworks.com/matlabcentral/answers/103990-how-to-convert-pixels-to-cm )? Thanks
Answers (1)
Dima Lisin
on 25 Aug 2014
Hi Kijo,
In this example the (X, Y, Z) world coordinates are already in millimeters.
7 Comments
Dima Lisin
on 21 Oct 2014
Hi Luca,
The world coordinates in this example are relative to the checkerboard, not relative to the camera. The coordinate system is right-handed. If you look straight at the checkerboard, then the X-axis points to the right, the Y-axis points down, and the Z-axis points into the board. That is why you see negative Z values. Positive Z is on the other side of the checkerboard, below the table.
There is a doc page explaining the coordinate systems. This particular situation is explained in the last section: Calibration Pattern-Based Coordinate System.
Luca
on 22 Oct 2014
Thanks Dima. I understand that now. I have another question that I'd appreciate if you could answer. In the example above, we need an object of known size in the scene (the checkerboard) to compute the extrinsic camera calibration matrix, and from that build the camera matrix and finally the 3D point cloud in mm using triangulation.
Now my question is: what if I do NOT have an object of known size in the scene? How can I reconstruct the 3D point cloud of the scene in metric units? Assume I have my intrinsic camera calibration matrix.
Thanks
Dima Lisin
on 24 Oct 2014
If you only know the intrinsic camera parameters, then you can only get a 3-D reconstruction up to scale, i.e., in unknown units. Also, you would have to write some code. You would need to estimate the fundamental matrix, then from that compute the essential matrix, and from that get the rotation and translation (up to scale) between the two views. Then you can create a camera matrix for each view and use the triangulate function.
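A minimal sketch of that pipeline, assuming a recent Computer Vision Toolbox release (function names such as estimateEssentialMatrix, relativeCameraPose, and cameraPoseToExtrinsics were added after this thread was written; matchedPoints1/matchedPoints2 and cameraParams are placeholders for your own matched features and intrinsics):

```matlab
% Estimate the essential matrix from matched points (up-to-scale pipeline).
[E, inliers] = estimateEssentialMatrix(matchedPoints1, matchedPoints2, cameraParams);

% Recover relative rotation and translation (translation has unit norm,
% i.e. the reconstruction scale is unknown).
[relOrient, relLoc] = relativeCameraPose(E, cameraParams, ...
    matchedPoints1(inliers, :), matchedPoints2(inliers, :));

% Camera 1 sits at the origin; convert camera 2's pose to extrinsics.
camMatrix1 = cameraMatrix(cameraParams, eye(3), [0 0 0]);
[R2, t2]   = cameraPoseToExtrinsics(relOrient, relLoc);
camMatrix2 = cameraMatrix(cameraParams, R2, t2);

% Triangulate; worldPoints are in unknown units (up to scale).
worldPoints = triangulate(matchedPoints1(inliers, :), ...
                          matchedPoints2(inliers, :), ...
                          camMatrix1, camMatrix2);
```

In older releases you would instead estimate the fundamental matrix with estimateFundamentalMatrix and compute the essential matrix and pose decomposition yourself.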
If you want points and distances in actual units, and you don't have a reference object in the scene, then you would need a calibrated stereo pair of cameras. See this example.
Luca
on 24 Oct 2014
Thanks for explaining that to me. I appreciate it. The example you linked is for DENSE reconstruction. Is there any SPARSE 3D reconstruction example in MATLAB that reconstructs the scene in metric units without having known-sized objects in the scene?
Dima Lisin
on 25 Oct 2014
Unfortunately, not. The sparse reconstruction example uses a single calibrated camera and a checkerboard in the scene, and the example that uses a calibrated stereo pair of cameras does dense reconstruction. But you should be able to take these two examples and implement sparse reconstruction using a calibrated stereo pair. The steps are as follows:
- Calibrate your stereo cameras. If you have R2014b, use the Stereo Camera Calibrator app.
- Take a pair of stereo images.
- Undistort each image. You do not need to rectify them.
- Detect, extract, and match point features.
- Use the triangulate function to get the 3D coordinates of the matched points. You would need to pass the stereoParameters object into triangulate, and the resulting 3D coordinates will be relative to the optical center of camera 1.
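The steps above can be sketched as follows, assuming a stereoParams object produced by the Stereo Camera Calibrator app and a pair of image files (the filenames here are placeholders):

```matlab
% Load a stereo pair and undistort each image (no rectification needed).
I1 = undistortImage(imread('left.png'),  stereoParams.CameraParameters1);
I2 = undistortImage(imread('right.png'), stereoParams.CameraParameters2);

% Detect, extract, and match point features.
pts1 = detectSURFFeatures(rgb2gray(I1));
pts2 = detectSURFFeatures(rgb2gray(I2));
[f1, vpts1] = extractFeatures(rgb2gray(I1), pts1);
[f2, vpts2] = extractFeatures(rgb2gray(I2), pts2);
idx = matchFeatures(f1, f2);
m1  = vpts1(idx(:, 1));
m2  = vpts2(idx(:, 2));

% Triangulate with the calibrated stereoParameters object: the returned
% 3D points are in world units (mm, if the calibration square size was
% given in mm), relative to camera 1's optical center.
worldPoints = triangulate(m1, m2, stereoParams);
```

Because the calibration fixes the baseline between the two cameras, the triangulated points come out in the same units as the checkerboard square size you entered during calibration.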
Luca
on 29 Oct 2014
Hi Dima, Thanks for the step by step guide. So I followed that and I am happy with the results.
The only problem I have is that the Z values in point cloud are negative!
Might that be coming from the fact that I am using a different checkerboard pattern? I am getting this warning: "Warning: The checkerboard must be asymmetric: one side should be even, and the other should be odd. Otherwise, the orientation of the board may be detected incorrectly." I am using Bouguet's pattern from his Caltech toolbox.
I also realized that if I change the order of images when reading and extracting features, then I get positive Z values in the point cloud, but the results are way off and don't match the scene! If I read the right images first and then the left images when I stereo-calibrate the cameras, then I have to read them in the same order when extracting features, right?