How does the "Simulation 3D Camera" block generate the depth map?

I am using the UAV Toolbox to generate a 3D point cloud by using the depth map output of the "Simulation 3D Camera" block. Could you please provide some information on how this depth map is generated?

In particular, I would like to know:
1) How is this measurement calculated?
2) Why does the depth map contain exactly one measurement per pixel?
3) How is the distance calculated?

 Accepted Answer

1) How is this measurement calculated?
The virtual camera in Unreal Engine returns a depth value for each rendered pixel, so the block simply uses the data provided by the virtual camera.
2) Why does the depth map contain exactly one measurement per pixel?
This follows from the camera model, in which there is a one-to-one mapping between a 2D image pixel and a 3D world point. Under this model the camera produces exactly one output per pixel, unlike lidar sensors, which can return multiple measurements per beam. For more information, please refer to the documentation below:
https://www.mathworks.com/help/vision/ug/camera-calibration.html
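The one-to-one mapping can be sketched with a standard pinhole back-projection: each pixel defines a single viewing ray, so one depth value fixes exactly one 3D point on that ray. This is the same operation used to turn a depth map into a point cloud. The intrinsics below (`fx`, `fy`, `cx`, `cy`) are illustrative values, not the actual parameters of the Simulation 3D Camera block.

```python
import numpy as np

# Hypothetical intrinsics for a 640x480 virtual camera (illustrative only).
fx, fy = 500.0, 500.0   # focal lengths in pixels
cx, cy = 320.0, 240.0   # principal point

def pixel_to_world(u, v, depth):
    """Back-project one pixel with one depth value to one 3D point
    in the camera frame. The pinhole model maps the pixel to a single
    ray, so the depth value picks out exactly one point on that ray.
    """
    x = (u - cx) / fx * depth
    y = (v - cy) / fy * depth
    return np.array([x, y, depth])

# The principal point back-projects straight down the optical axis.
p = pixel_to_world(320.0, 240.0, 10.0)
print(p)  # [ 0.  0. 10.]
```

Applying `pixel_to_world` to every pixel of the depth map yields the 3D point cloud.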
3) How is the distance calculated?
The distance is obtained from the depth using the following formula, where alpha is the angle between the pixel's viewing ray and the camera's optical axis:
distance = depth / cos(alpha)
A key detail to note is that the virtual camera in Unreal Engine returns the distance of scene-object surfaces from the sensor plane (i.e., depth), not the distance of those surfaces from the camera viewpoint (i.e., range/distance). [The original answer included an image illustrating this distinction.]
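The formula above can be sketched as follows. For a pinhole camera with square pixels and focal length `f` (an assumed value here, not the block's actual parameter), cos(alpha) for a pixel (u, v) is f divided by the length of the ray vector (u - cx, v - cy, f), so range exceeds depth everywhere except on the optical axis.

```python
import math

# Hypothetical intrinsics (illustrative only).
f = 500.0               # focal length in pixels, assuming fx == fy
cx, cy = 320.0, 240.0   # principal point

def depth_to_range(u, v, depth):
    """Convert plane-parallel depth to Euclidean range (distance from
    the camera viewpoint) for the pixel (u, v)."""
    # cos(alpha): angle between this pixel's viewing ray and the optical axis.
    cos_alpha = f / math.sqrt((u - cx) ** 2 + (v - cy) ** 2 + f ** 2)
    return depth / cos_alpha

# On the optical axis, depth and range coincide.
print(depth_to_range(320, 240, 10.0))        # 10.0

# Off-axis, the range is strictly larger than the depth.
print(depth_to_range(620, 240, 10.0) > 10.0) # True
```

This is why a depth map cannot be used directly as a range image: the farther a pixel lies from the principal point, the more its depth understates the true distance to the surface.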


Release: R2021a
