
Visualize, Label, and Fuse Sensor Data for Automated Driving

By Avinash Nehemiah and Mark Corless, MathWorks


Engineers developing perception algorithms for ADAS or fully automated driving generally have a wealth of input data to work with. But this data, coming from many different types of sensors, including radar, LiDAR, cameras, and lane detectors, initially raises more questions than it answers. How do you make sense of the data and bring it to life? How do you reconcile conflicting inputs from different sensors? And once you’ve developed an algorithm, how do you evaluate the results it’s producing?

Here’s a guide to features and capabilities in MATLAB® and Automated Driving Toolbox™ that can help you address these questions. We’ll focus on four key tasks: visualizing vehicle sensor data, labeling ground truth, fusing data from multiple sensors, and synthesizing sensor data to test tracking and fusion algorithms.

Visualizing Vehicle Sensor Data

Making sense of the data is the primary challenge in the early stages of perception system development. Sensors deliver their output in different formats and at different rates. For example, cameras provide images as 3D matrices, LiDAR provides a list of points, and embedded or intelligent cameras provide object lists with details on vehicles, lanes, and other objects. The disparate nature of these outputs makes it difficult to see the overall picture (Figure 1).

Figure 1. Examples of vehicle sensor data.

At this early stage, we need to know exactly how the sensors are representing the environment around the vehicle. The best type of visualization to use for this is a bird’s-eye plot because it allows us to visualize all the data from the different sensors in one place.

To create bird’s-eye plots, we use the visualization tools in MATLAB and Automated Driving Toolbox. We then add more detail to the views with these plotter objects:

  • The coverageAreaPlotter, which displays the sensor coverage area
  • The detectionPlotter, which displays lists of objects detected by vision, radar, and LiDAR sensors
  • The laneBoundaryPlotter, which displays detected lane boundaries on the bird’s-eye plot

We now have accurate visualizations of sensor coverage, detections, and lane boundaries (Figure 2).

Figure 2. (Clockwise from top left) Plotting the sensor coverage area, transforming vehicle coordinates to image coordinates, plotting lanes and radar detections, and plotting the LiDAR point cloud.
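
To make this concrete, here is a minimal sketch of how these plotters fit together. The sensor mounting position, range, and field of view are illustrative values, and radarPositions and laneBoundaries are placeholders for data recorded from your own sensors:

    % Create a bird's-eye plot covering the area ahead of the ego vehicle
    bep = birdsEyePlot('XLim', [0 90], 'YLim', [-35 35]);

    % Show the radar coverage area (mounting position, range, orientation, field of view)
    caPlotter = coverageAreaPlotter(bep, 'DisplayName', 'Radar coverage', 'FaceColor', 'red');
    plotCoverageArea(caPlotter, [2.8 0], 160, 0, 20);

    % Plot radar detections given as [x y] positions in vehicle coordinates
    radarPlotter = detectionPlotter(bep, 'DisplayName', 'Radar detections', 'Marker', 'o');
    plotDetection(radarPlotter, radarPositions);

    % Plot detected lane boundaries, for example parabolicLaneBoundary objects
    lbPlotter = laneBoundaryPlotter(bep, 'DisplayName', 'Lane boundaries');
    plotLaneBoundary(lbPlotter, laneBoundaries);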

Automating Ground Truth Labeling

Ground truth is required to train object detectors using machine learning or deep learning techniques. It is also essential for evaluating existing detection algorithms. Establishing ground truth is often a labor-intensive process, requiring labels to be inserted into a video manually, frame by frame. The Ground Truth Labeler app in Automated Driving Toolbox includes computer vision algorithms to accelerate the process of labeling ground truth. The app has four key features (Figure 3):

  • The vehicle detector automates the detection and labeling of vehicles in keyframes by using an aggregate channel features (ACF) detector.
  • The temporal interpolator labels the objects detected in all frames between selected keyframes.
  • The point tracker uses a version of the Kanade-Lucas-Tomasi (KLT) algorithm to track regions of interest across frames.
  • The add algorithm option lets you add custom automation algorithms and facilitates iterative development of object detectors.

Figure 3. The Ground Truth Labeler app.
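
A closely related pretrained ACF vehicle detector is also available programmatically in Automated Driving Toolbox, which is handy for a quick check of detection quality before labeling a full video. In this sketch, 'highway.png' is a placeholder image file name:

    % Load a pretrained ACF vehicle detector
    detector = vehicleDetectorACF();

    % Run it on a single frame
    I = imread('highway.png');
    [bboxes, scores] = detect(detector, I);

    % Overlay the detections for a quick visual check
    annotated = insertObjectAnnotation(I, 'rectangle', bboxes, scores);
    imshow(annotated)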

Fusing Data from Multiple Sensors

Virtually every perception system uses input from several complementary sensors. Reconciling data from these sensors can be challenging because each one is likely to give a slightly different result—for example, the vision detector might report that a vehicle is in one location, while the radar detector shows that same vehicle in a nearby but distinctly different location.

The multiObjectTracker in Automated Driving Toolbox tracks and fuses detections. A common application is to fuse radar and vision detections to improve the estimated position of surrounding vehicles (Figure 4).

Figure 4. The multi-object tracker, used here to fuse radar data (red circle) and vision detection data (blue triangle) to produce a more accurate estimate of the vehicle’s location (black oval).
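
Here is a minimal sketch of setting up such a tracker and feeding it one pair of detections. The filter initialization function, thresholds, and measurement values are illustrative; in practice, the detections come from the recorded radar and vision object lists:

    % Configure a multi-object tracker with a constant-velocity Kalman filter
    tracker = multiObjectTracker('FilterInitializationFcn', @initcvkf, ...
        'AssignmentThreshold', 30);

    % Wrap one radar and one vision measurement of the same vehicle as
    % objectDetection objects ([x; y; z] positions in vehicle coordinates)
    time = 0.05;
    detections = {objectDetection(time, [28.2; -1.1; 0], 'SensorIndex', 1), ...
                  objectDetection(time, [27.6; -0.9; 0], 'SensorIndex', 2)};

    % Update the tracker; confirmed tracks fuse the detections into a
    % single, more accurate position estimate
    confirmedTracks = updateTracks(tracker, detections, time);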

Synthesizing Sensor Data to Generate Test Scenarios

Some test scenarios, such as imminent collisions, are too dangerous to execute in a real vehicle, while others may require overcast skies or other specific weather conditions. We can address this challenge by synthesizing object-level sensor data to generate scenarios that include roads, vehicles, and pedestrians as virtual objects. We can then use this synthetic data to test our tracking and sensor fusion algorithms (Figure 5).

Figure 5. Top view, chase camera view, and bird’s-eye plot of a synthesized test scenario.
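
As a rough sketch (road geometry, speeds, and sensor parameters are all illustrative), a simple two-vehicle scenario with a synthetic radar can be set up like this:

    % Define a simple two-vehicle scenario on a straight road
    scenario = drivingScenario;
    road(scenario, [0 0; 100 0], 6);        % 100 m straight road, 6 m wide

    egoCar = vehicle(scenario, 'ClassID', 1);
    trajectory(egoCar, [1 0; 99 0], 20);    % ego vehicle at 20 m/s

    leadCar = vehicle(scenario, 'ClassID', 1);
    trajectory(leadCar, [20 0; 99 0], 25);  % lead vehicle pulling away ahead

    % Synthetic radar mounted at the front of the ego vehicle
    radar = radarDetectionGenerator('SensorIndex', 1, 'SensorLocation', [3.4 0]);

    % Step the scenario and generate object-level detections
    while advance(scenario)
        dets = radar(targetPoses(egoCar), scenario.SimulationTime);
        % ...feed dets into updateTracks, as in the fusion example above
    end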

Using Vehicle Data in Perception Systems

Visualizing, fusing, and synthesizing vehicle data lays the groundwork for developing object detection algorithms. When we are ready to deploy the MATLAB algorithms, we can use MATLAB Coder™ to generate portable, ANSI/ISO-compliant C/C++ code for integration into our embedded environment.
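
For example, if the tracking and fusion logic is wrapped in a MATLAB function (here a hypothetical trackVehicles.m that takes a detection list and a timestamp), generating a C library is a short step. The entry-point name and example inputs below are placeholders:

    % Generate C code for a static library from a hypothetical entry-point function;
    % exampleDetections is a representative input defined in the workspace
    cfg = coder.config('lib');
    codegen -config cfg trackVehicles -args {exampleDetections, 0.05}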

Published 2017 - 93165v00
