
Implement Hardware-Efficient Real Partial-Systolic Matrix Solve Using QR Decomposition with Tikhonov Regularization

This example shows how to use the Real Partial-Systolic Matrix Solve Using QR Decomposition block to solve the regularized least-squares matrix equation

$$\left[\begin{array}{c}\lambda I_n\\A\end{array}\right]X =
\left[\begin{array}{c}0_{n,p}\\B\end{array}\right],$$

where A is an m-by-n matrix with m >= n, B is m-by-p, X is n-by-p, $I_n=$ eye(n), $0_{n,p}=$ zeros(n,p), and $\lambda$ is a regularization parameter.

The least-squares solution is

$$X_\textrm{ls} = (\lambda^2I_n + A^\mathrm{T}A)^{-1}A^\mathrm{T}B$$

but the block computes it without explicitly squaring the matrix (forming $A^\mathrm{T}A$) or computing a matrix inverse.
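
For illustration, the following double-precision sketch shows how a QR-based solve avoids those quantities: factor the stacked matrix once, then back-substitute. This sketch is not the block's implementation (the block streams rows through a partial-systolic array), and the data and names A0, B0, and lambda are illustrative only.

% Double-precision sketch only; not the block's streaming implementation.
lambda = 0.01;                          % illustrative regularization parameter
A0 = randn(300,10); B0 = randn(300,1);  % illustrative data
[Q,R] = qr([lambda*eye(10); A0],0);     % economy-size QR of the stacked matrix
X_sketch = R\(Q'*[zeros(10,1); B0]);    % back-substitution; no A0'*A0, no inv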

Define Matrix Dimensions

Specify the number of rows in matrices A and B, the number of columns in matrix A, and the number of columns in matrix B.

m = 300; % Number of rows in matrices A and B
n = 10;  % Number of columns in matrix A
p = 1;   % Number of columns in matrix B

Define Tikhonov Regularization Parameter

Small, positive values of the regularization parameter can improve the conditioning of the problem and reduce the variance of the estimates. While biased, the reduced variance of the estimate often results in a smaller mean squared error when compared to least-squares estimates.

regularizationParameter = 0.01;
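
To see why the stacked formulation improves conditioning, you can run the following optional double-precision check. It uses a hypothetical rank-deficient matrix A0, not the example matrices generated in the next section, and is for illustration only.

% Stacking lambda*I on top of a rank-deficient matrix bounds its smallest
% singular value below by lambda, so the stacked matrix is well conditioned
% even when A0 is numerically rank deficient.
A0 = randn(300,3)*randn(3,10);                          % rank-3, 300-by-10 matrix
condWithoutRegularization = cond(A0)                    % very large
condWithRegularization = cond([regularizationParameter*eye(10); A0])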

Generate Random Least-Squares Matrices

For this example, use the helper function realRandomLeastSquaresMatrices to generate random matrices A and B for the least-squares problem AX=B. The matrices are generated such that the elements of A and B are between -1 and +1, and A has rank r.

rng('default')
r = 3;  % Rank of A
[A,B] = fixed.example.realRandomLeastSquaresMatrices(m,n,p,r);
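
As an optional sanity check, you can confirm the properties stated above: the entries of A and B are bounded by 1 in magnitude, and A has rank r. Outputs are not shown here.

% Optional checks of the generated matrices.
assert(max(abs(A(:))) <= 1 && max(abs(B(:))) <= 1)  % entries lie in [-1, 1]
assert(rank(A) == r)                                 % A has rank r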

Select Fixed-Point Data Types

Use the helper function realQRMatrixSolveFixedpointTypes to select fixed-point data types for input matrices A and B, and output X such that there is a low probability of overflow during the computation.

max_abs_A = 1;  % Upper bound on max(abs(A(:)))
max_abs_B = 1;  % Upper bound on max(abs(B(:)))
precisionBits = 32;   % Number of bits of precision
T = fixed.realQRMatrixSolveFixedpointTypes(m,n,max_abs_A,max_abs_B,...
    precisionBits,[],[],regularizationParameter);
A = cast(A,'like',T.A);
B = cast(B,'like',T.B);
OutputType = fixed.extractNumericType(T.X);
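
The word lengths that the helper selects follow from a bound on how much elements can grow during the QR factorization. The following is a rough sketch of that sizing argument, stated here as an assumption for illustration and not necessarily the exact formula the helper uses: each element of the R factor of the (m+n)-by-n stacked matrix is bounded by the 2-norm of the corresponding column.

% Sketch of the sizing argument only; the helper encapsulates the exact rules.
upperBoundR = sqrt(m + n)*max(max_abs_A,regularizationParameter); % column-norm bound on R
integerBits = ceil(log2(upperBoundR)) + 1;                        % +1 for the sign bit
estimatedWordLength = integerBits + precisionBits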

Open the Model

model = 'RealPartialSystolicQRMatrixSolveModel';
open_system(model);

The Data Handler subsystem in this model takes real matrices A and B as inputs. It sends the rows of A and B to the QR block using the AMBA AXI handshake protocol. The validIn signal indicates when data is available, and the ready signal indicates that the block can accept the data. Data transfers only when both the validIn and ready signals are high. In the Data Handler, you can set a delay between feeding in successive rows of A and B to emulate the processing time of an upstream block. When rowDelay is set to 0, validIn remains high because the Data Handler always has data available.
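
The following snippet is a conceptual MATLAB model of that valid/ready handshake. It is for illustration only: the actual Data Handler is a Simulink subsystem, and the ready pattern shown is hypothetical.

% A row is transferred only on cycles where both validIn and ready are high.
ready   = logical([1 0 1 1 0 1]);        % hypothetical ready pattern from the block
validIn = true(size(ready));             % Data Handler has data on every cycle
rowsTransferred = nnz(validIn & ready)   % one row moves per cycle where both are high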

Set Variables in the Model Workspace

Use the helper function setModelWorkspace to add the variables defined above to the model workspace. These variables correspond to the block parameters for the Real Partial-Systolic Matrix Solve Using QR Decomposition block.

numSamples = 1; % Number of sample matrices
rowDelay = 1; % Delay of clock cycles between feeding in rows of A and B
fixed.example.setModelWorkspace(model,'A',A,'B',B,'m',m,'n',n,'p',p,...
    'regularizationParameter',regularizationParameter,...
    'numSamples',numSamples,'rowDelay',rowDelay,'OutputType',OutputType);

Simulate the Model

out = sim(model);

Construct the Solution from the Output Data

The Real Partial-Systolic Matrix Solve Using QR Decomposition block outputs data one row at a time. When a result row is output, the block sets validOut to true. The rows of X are output in the order they are computed, last row first, so you must reconstruct the data to interpret the results. To reconstruct the matrix X from the output data, use the helper function matrixSolveModelOutputToArray.

X = fixed.example.matrixSolveModelOutputToArray(out.X,n,p,numSamples);
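
Conceptually, the reconstruction keeps only the rows emitted while validOut was high and then flips their order, because the block emits the last row of X first. The following standalone sketch uses hypothetical captured rows; the helper function handles the actual logged-signal format.

% Hypothetical captured rows for an n = 3, p = 2 solve, in arrival order
% (last row of X arrives first).
capturedRows = [30 31; 20 21; 10 11];
X_sketch = flipud(capturedRows)          % rows restored to natural order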

Verify the Accuracy of the Output

Verify that the relative error between the fixed-point output and the solution computed by built-in MATLAB® functions in double-precision floating-point is small.

$$X_\textrm{double} = \left[\begin{array}{c}\lambda I_n\\A\end{array}\right] \backslash
\left[\begin{array}{c}0_{n,p}\\B\end{array}\right]$$

A_lambda = double([regularizationParameter*eye(n);A]);
B_0 = [zeros(n,p);double(B)];
X_double = A_lambda\B_0;
relativeError = norm(X_double - double(X))/norm(X_double)
relativeError =

   3.3594e-05
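
As an additional cross-check, the backslash solution agrees with the closed-form regularized least-squares formula from the beginning of this example. This comparison is computed in double precision for reference only; forming $A^\mathrm{T}A$ in this way is exactly what the block avoids.

X_formula = (regularizationParameter^2*eye(n) + double(A)'*double(A))\...
    (double(A)'*double(B));
norm(X_formula - X_double)/norm(X_double)  % expected to be small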

Suppress mlint warnings in this file.

%#ok<*NOPTS>
%#ok<*NASGU>
%#ok<*ASGLU>
