Technical Articles

Accelerating the Verification of Signal Processing Integrated Circuits with HDL Verifier

By Steffen Löbel and Jan Hahlbeck, NXP


“One clear advantage of our new workflow based on HDL Verifier is the ability to quickly identify the source of defects.”

The verification of signal processing integrated circuit (IC) designs poses several unique challenges that can strain conventional testing methods. The algorithmic complexity of filters, mixers, and other advanced signal processing functions requires rigorous validation to ensure the implemented IC behaves as intended with bit-true precision. Further, because ICs often operate across a wide range of possible inputs and configurations, it is essential to assess corner cases—rare but critical scenarios that can slip past test plans focused on predefined, predictable sequences.

Our team at NXP has adopted a new workflow for IC verification to address these challenges. Based on MATLAB®, Simulink®, and HDL Verifier™, this workflow incorporates constrained-random verification and Universal Verification Methodology (UVM) techniques to validate edge cases and explore the state space with randomized inputs while maintaining control through constraints (Figure 1). In this workflow, which we recently used to verify a radio tuner IC for the automotive industry, MATLAB and Simulink models are exported as SystemVerilog DPI-C components using HDL Verifier and integrated as reference models in the testbench of our verification environment, which is based on the Cadence® Xcelium™ simulator. This approach not only enabled us to reduce verification time by 20 to 30 percent but also allowed us to increase test coverage and find more implementation defects earlier in development.


Figure 1. The IC verification workflow incorporates constrained-random verification and UVM techniques.

Comparing Old and New Workflows

When testing similar IC designs in the past, we would typically use MATLAB to generate input stimuli for our complete system. We would then run simulations in MATLAB or Simulink and capture the results as a golden reference pattern. Once the RTL implementation was complete, we would apply the same stimuli to the DUT and check its results against the golden reference. While this approach worked, it had a few drawbacks. First, the verification was, for the most part, end to end, making it difficult to identify the root cause of defects since all components were tested together. Second, it was not easy to perform constrained-random verification. As a result, while common scenarios and use cases were verified, many edge cases were not. Third, it did not adhere to UVM, which has since become our standard framework for implementing testbenches.

In contrast, the new workflow enables direct reuse of our existing MATLAB and Simulink reference models in our HDL simulation environment (Cadence Xcelium). Each component in the reference model corresponds to its counterpart in the DUT. The example signal processing chain shown in Figure 2 includes a filter modeled in Simulink, followed by a mixer and a second filter modeled in MATLAB. We use HDL Verifier to generate C code for the model, along with a SystemVerilog DPI-C wrapper, enabling us to integrate each component into the testbench.
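To make the idea concrete, the C side of a DPI-C component typically exposes an initialize function and a per-sample step function that the SystemVerilog wrapper calls each cycle. The sketch below is illustrative only, assuming a hypothetical bit-true integer filter; the function names and the filter itself are our invention, not the code HDL Verifier actually generates.

```c
/* Hypothetical sketch of the C side of a DPI-C component.
   Names and the filter algorithm are illustrative, not generated code. */
#include <stdint.h>

/* Internal state of a one-tap averaging "filter" component. */
static int32_t filter_state = 0;

/* Reset the component state (called by the SV wrapper at reset). */
void filter_initialize(void)
{
    filter_state = 0;
}

/* One simulation step: bit-true integer filter
   y[n] = (x[n] + y[n-1]) >> 1, called once per sample by the wrapper. */
int32_t filter_step(int32_t x_in)
{
    filter_state = (x_in + filter_state) >> 1;
    return filter_state;
}
```

Because the arithmetic is pure integer, the C reference produces exactly the bit pattern the RTL must match, which is what makes the bit-accurate comparison meaningful.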

The reference model components and the DUT components run in parallel in the HDL simulation environment. Their outputs are evaluated on the fly by a checker that acts as a UVM scoreboard, performing bit-accurate comparisons of the output of each associated component pair (for example, the reference model mixer and the DUT mixer) as well as of the complete end-to-end chain.
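The core of such a checker is simple: per component, count every comparison and every bit-level mismatch. The following is a minimal sketch in C, not NXP's UVM scoreboard (which would be SystemVerilog); the `scoreboard_t` type and function names are hypothetical.

```c
/* Minimal sketch of a bit-accurate scoreboard check.
   Illustrative only; the real checker is a UVM scoreboard in SystemVerilog. */
#include <stdint.h>
#include <stdio.h>

typedef struct {
    const char *name;  /* component name, e.g. "mixer" */
    uint64_t checks;   /* comparisons performed */
    uint64_t errors;   /* bit-level mismatches found */
} scoreboard_t;

/* Compare one DUT output sample against the reference model, bit for bit. */
void scoreboard_check(scoreboard_t *sb, int32_t dut_out, int32_t ref_out)
{
    sb->checks++;
    if (dut_out != ref_out) {
        sb->errors++;
        fprintf(stderr, "%s: mismatch dut=0x%08x ref=0x%08x\n",
                sb->name, (uint32_t)dut_out, (uint32_t)ref_out);
    }
}
```

Keeping one such counter pair per component is what later makes it possible to report checks and errors per stage rather than only for the end-to-end chain.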


Figure 2. Parallel structure for comparing the results of reference model components (top row) generated from MATLAB and Simulink against the results of corresponding DUT components (bottom row).

Randomizing Inputs and Visualizing Results

After running preliminary tests—in this instance with a set of predefined AM, FM, and Digital Audio Broadcasting (DAB) radio streams—on the testbench to verify the basic functionality of the signal processing algorithms, the next step in the workflow is constrained-random verification. This stage involves extensive simulations in which all configuration settings for the design are assigned random values within constrained ranges. For example, we vary mixer settings, filter settings, delays, gains, and other key configuration parameters and run simulations to assess the design’s performance for each set of randomized configuration options.

For each test, we can review detailed results, including the specific settings that were used, the inputs used as stimuli to the IP, the results from the reference model implementation, the results from the RTL implementation, and the result of the checker comparison (Figure 3).


Figure 3. Waveform display showing randomized IP register settings, IP input, RTL output, reference model output, and checker statistics.

We also review reports showing results in aggregate for a complete series of components (Figure 4). These reports show the number of checks performed for each component in the chain, and the number of errors—that is, the number of discrepancies identified between RTL and reference model outputs.


Figure 4. Summary report showing test results for multiple components. Here, tests on the H6 component identified 45 errors.

When an error is identified, we check both the reference model implementation in MATLAB or Simulink and the RTL implementation. In some instances, we’ve traced the source of the discrepancy to the original reference design, but the problem more often stems from an RTL implementation error. In either case, once the defect is diagnosed and remedied, we rerun the test simulations to verify that the fix has completely resolved any differences between the reference model and the RTL implementation.

Key Improvements and Next Steps

One clear advantage of our new workflow based on HDL Verifier is the ability to quickly identify the source of defects. Compared to an approach that relies on end-to-end testing, a UVM-oriented approach that enables both component-level and system-level testing—like the one we have applied—makes it much easier to pinpoint the subsystem with the defect as well as the specific stimuli for that component that can be used to replicate the defect.

Further, because randomized settings often exercise the system in ways the design engineers may not have anticipated, the new workflow uncovers implementation defects much earlier in the development process than conventional test plans focused on well-established use cases. In short, we can find defects without manual checks and without spending time dreaming up unusual scenarios and edge cases to test.

We are able to reuse our existing MATLAB and Simulink models in HDL simulations, and the benefits of this reuse continue to compound on each subsequent spin or revision of the IC. Taken together, these advantages contributed to the significant reduction in verification time—up to 30 percent—that we achieved on the radio signal processing IC. Based on this metric and the other advantages we have realized, other NXP teams are looking to adopt the same workflow for the development of a radio front end for a radar IC and other IC designs.

Published 2025
