Main Content

Pothole Detection with Zynq-Based Hardware

This example shows how to target a pothole detection algorithm to the Zynq® hardware using the SoC Blockset™ Support Package for Xilinx® Devices.

Setup Prerequisites

This example follows the algorithm development workflow detailed in the Developing Vision Algorithms for Zynq-Based Hardware example. If you have not already done so, work through that example to gain a better understanding of the required workflow.

This algorithm corresponds to the Vision HDL Toolbox™ example, Pothole Detection (Vision HDL Toolbox). With the SoC Blockset Support Package for Xilinx Devices, you get a hardware reference design that allows for easy integration of your targeted algorithm in the context of a vision system.

If you have not yet done so, run through the guided setup wizard portion of the SoC Blockset Support Package for Xilinx Devices installation. You might have already completed this step when you installed this support package.

On the MATLAB® Home tab, in the Environment section of the Toolstrip, click Add-Ons > Manage Add-Ons. Locate SoC Blockset Support Package for Xilinx Devices, and click Setup.

The guided setup wizard performs a number of initial setup steps, and confirms that the target can boot and that the host and target can communicate.

For more information, see Set Up Xilinx Devices.

Pixel-Stream Model

This model provides a pixel-stream implementation of the algorithm for targeting HDL. Instead of working on full images, the HDL-ready algorithm works on a pixel-streaming interface.

The algorithm in this example performs pothole detection to identify potential holes in the road and highlight them accordingly. It is an extension of the Bilateral Filtering with Zynq-Based Hardware example, with the addition of centroid calculation and overlay, as well as text labeling. To determine the largest pothole in each frame, the entire frame must be processed. A frame buffer stores the frame output from the morphological closing while the largest pothole is found. Once the full frame has been processed, a trigger signal releases the stored frame from the frame buffer.


Video Source

The source video for this example comes either from the From Multimedia File block, which reads video data from a multimedia file, or from the Video Capture HDMI block, which captures live video frames from an HDMI source connected to the Zynq-based hardware. To configure the source, right-click the variant selection icon in the lower-left corner of the Image Source block, choose Label mode active choice, and select either File or HW.
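If you prefer to script the source selection instead of using the context menu, the variant label can also be set with set_param. This is a sketch only; the model and block names here ('vzPotholeDetection', 'Image Source') are assumptions, so replace them with the actual names in your model:

```matlab
% Programmatically select the video source variant.
% Assumes a model named 'vzPotholeDetection' (hypothetical) whose
% Image Source block is a variant subsystem using label mode.
open_system('vzPotholeDetection');
set_param('vzPotholeDetection/Image Source', ...
    'LabelModeActiveChoice', 'File');   % use 'HW' for live HDMI capture
```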

For this algorithm, the model is configured as listed:

  • A pixel format of RGB. This algorithm is written to work on an RGB pixel format, and both the From Multimedia File and Video Capture HDMI blocks are configured to deliver video frames in this format. Other supported pixel formats are YCbCr 4:2:2 and Y only.

Algorithm Configuration

In addition to processing the image for pothole detection, the algorithm provides several control ports:

  • GradThresh controls the edge detection part of the algorithm.

  • LineRGB changes the color of the fiducial marker and text overlays.

  • AreaThresh sets the minimum number of marked pixels in the detection window for it to be classified as a pothole. If this value is too low, linear cracks and other defects that are not road hazards are detected. If it is too high, only the largest hazards are detected.

  • ShowRaw toggles the displayed image on which the overlays are drawn between the RGB input video and the binary image that the detector sees.

  • TextString changes the text that is displayed on the overlay when a road hazard is detected.

The ShowRaw control port is a pure hardware connection in the targeted design. This port can run at any desired rate, including the pixel clock rate.

The model features a Video Frame Buffer block that provides a simplified simulation model of a frame buffer implemented in external memory. The Video Frame Buffer block is configured for a pixel format of RGB and a video resolution of 640x480p. In the targeted design, the frame buffer connections interface with the external memory on the chosen Zynq platform.

The Video Frame Buffer block input interface features the video pixel ports {R, G, B}, a corresponding pixel control bus port {pixelCtrl}, and a frame buffer trigger port {pop}. The video stream to be stored in the frame buffer, and the corresponding video timing signals, are provided on the pixel and control bus ports. The pop port schedules the release of the stored video frame from the frame buffer. The release is controlled from within the Pothole Detection Algorithm subsystem, and the pop signal should be asserted high for a single clock cycle. Once triggered, the frame buffer releases the stored frame on the video pixel output ports {R, G, B}, with corresponding video timing signals on the pixel control bus port {pixelCtrl}.

NOTE: During the first frame of simulation output, the Video Display scope displays a black image. This condition indicates that no image data is available yet, because the output of the pixel-streaming algorithm must be buffered into a full frame before it can be displayed.

Target the Algorithm

After you are satisfied with the pixel-streaming algorithm simulation, you can target the algorithm to the FPGA on the Zynq board.

Start the targeting workflow by right-clicking the Pothole Detection Algorithm subsystem and selecting HDL Code > HDL Workflow Advisor.

  • In Step 1.1, select IP Core Generation workflow and select your target platform from the list.

  • In Step 1.2, select RGB reference design to match the pixel format of the Pothole Detection Algorithm subsystem. Set Source Video Resolution to 640x480p.

  • In Step 1.3, map the target platform interfaces to the input and output ports of your design. As this example uses the frame buffer interface, the relevant ports of the Pothole Detection Algorithm subsystem must be mapped accordingly.

The video stream from the algorithm subsystem to the frame buffer is mapped to the Frame Buffer Master interface, and the video stream from the frame buffer to the algorithm subsystem is mapped to the Frame Buffer Slave interface. The frame buffer pop signal is also mapped as part of the Frame Buffer Master interface.

With reference to the target platform interface table, map the RFromFrameBuf port to the Frame Buffer Slave interface, and select R from the dropdown menu in the Bit Range / Address / FPGA Pin column. Similarly, select Frame Buffer Slave as the interface for the GFromFrameBuf, BFromFrameBuf, and ctrlFromFrameBuf ports, and select G, B, and Pixel Control Bus in the Bit Range / Address / FPGA Pin column, respectively.

Map the RToFrameBuf, GToFrameBuf, BToFrameBuf, ctrlToFrameBuf, and framePop ports to the Frame Buffer Master interface, and select R, G, B, Pixel Control Bus, and Frame Trigger from the dropdown menu in the Bit Range / Address / FPGA Pin column, respectively.

Also, map the GradThresh, LineRGB, AreaThresh, and TextString ports to AXI4-Lite for software interaction, the ShowRaw port to push button 0, and the LED port to LED 0.

  • Step 2 prepares the design for generation by running design checks.

  • Step 3 generates HDL code for the IP core.

  • Step 4 integrates the newly generated IP core into the larger Vision Zynq reference design.

Execute each step in sequence to experience the full workflow, or, if you are already familiar with the preparation and HDL code generation phases, right-click Step 4.1 in the table of contents on the left-hand side and select Run to selected task.

  • In Step 4.2, the workflow generates a targeted hardware interface model and, if the Embedded Coder® Support Package for Xilinx® Zynq Platform is installed, a Zynq software interface model. Click the Run this task button with the default settings.

Steps 4.3 and 4.4

The rest of the workflow generates a bitstream for the FPGA, downloads it to the target, and reboots the board.

Because this process can take 20-40 minutes, you can bypass these steps by using a pre-generated bitstream for this example that ships with the product and was placed on the SD card during setup.

To use this pre-generated bitstream, execute the following commands:

>> vz = visionzynq();
>> changeFPGAImage(vz,'visionzynq-zedboard-hdmicam-pothole_detection.bit');

To use a bitstream for another platform, replace 'zedboard' with the platform name.
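For instance, assuming a ZC706-based setup (substitute the name of your own platform), the call would look like this:

```matlab
% Hypothetical example for a ZC706 board; the exact bitstream file
% name depends on your platform and reference design.
vz = visionzynq();
changeFPGAImage(vz, 'visionzynq-zc706-hdmicam-pothole_detection.bit');
```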

Alternatively, you can continue with Steps 4.3 and 4.4.

Using the Generated Models from the HDL Workflow Advisor

Step 4.2 generates either two or four models, depending on whether Embedded Coder® is installed: a 'targeted hardware interface' model and associated library model, and a 'software interface' model and associated library model. The 'targeted hardware interface' model can be used to control the reference design from the Simulink model without Embedded Coder. The 'software interface' model supports full software targeting to the Zynq when Embedded Coder and the Embedded Coder Support Package for Xilinx Zynq Platform are installed, enabling External mode simulation, processor-in-the-loop simulation, and full deployment.

The library models are created so that any changes to the hardware generation model are propagated to any custom targeted hardware simulation or software interface models that exist.

Set Up Video Playback

When running either of the generated models, which execute the targeted portions of the algorithm on the board, you must provide an HDMI input source. For instance, replay the provided 640x480 front-facing vehicle camera source video, vzPotholeDetection640.avi, by connecting the HDMI input of the board to the Simulink host computer as a secondary display. To configure your secondary display to use 640x480 resolution, see Configure Display for VGA Playback.
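As one way to replay the source video from MATLAB, you can open it in the Movie Player. This assumes vzPotholeDetection640.avi is on the MATLAB path; looping and full-screen display on the secondary monitor are set interactively in the player itself:

```matlab
% Open the provided source video in the MATLAB Movie Player (implay).
% Configure loop playback and full-screen display on the secondary
% monitor from the player's own controls.
implay('vzPotholeDetection640.avi');
```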

In this example, the video timing after inverse perspective mapping is not compliant with the HDMI standard, so the output is displayed by using external memory as an output display frame buffer. By default, the frame buffer is configured for the YCbCr 4:2:2 pixel format. To view the output of the pothole detection algorithm on the HDMI output, the frame buffer must be configured for the RGB pixel format. To configure the frame buffer, open the Getting Started example model.

In the Getting Started model, configure these Video Capture HDMI block parameters:

  • Video source - HDMI input

  • Frame size - 640x480p

  • Pixel format - RGB

On the To Video Display block, set these parameters:

  • Input Color Format - RGB

  • Input Signal - Separate color signals

Run the Getting Started model to configure the frame buffer. Once you can see output on the external monitor, play the source video using full-screen mode and set to repeat.

Alternatively, check the Bypass FPGA user logic option on the Video Capture block. This option reroutes the input video directly to the output HDMI display.

Leave the source video running and close the Getting Started model.

Targeted Hardware Interface Model: In this model, you can adjust the configuration of the reference design and read or drive control ports of the hardware user logic. These configuration changes affect the design while it is running on the target. You can also display captured video from the target device.

Software Interface Model: In this model, you can run in External mode to control the configuration of the reference design, and read or drive any control ports of the hardware user logic that you connected to AXI-Lite registers. These configuration changes affect the design while it is running on the target. You can use this model to fully deploy a software design. (This model is generated only if Embedded Coder and the Embedded Coder Support Package for Xilinx Zynq Platform are installed.)