# fixed.realSingularValueLowerBound

Estimate lower bound for smallest singular value of real-valued matrix

*Since R2021b*

## Syntax

`s_n = fixed.realSingularValueLowerBound(m,n,noiseStandardDeviation,p_s_n)`

`s_n = fixed.realSingularValueLowerBound(m,n,noiseStandardDeviation,p_s_n,regularizationParameter)`

## Description

`s_n = fixed.realSingularValueLowerBound(m,n,noiseStandardDeviation,p_s_n)` returns an estimate of a lower bound for the smallest singular value of a real-valued matrix with `m` rows and `n` columns, where *m* ≥ *n*.

`s_n = fixed.realSingularValueLowerBound(m,n,noiseStandardDeviation,p_s_n,regularizationParameter)` returns an estimate of a lower bound for the smallest singular value of the real-valued matrix $$\left[\begin{array}{c}\lambda {I}_{n}\\ A\end{array}\right]$$, where *λ* is the `regularizationParameter`, *A* is an *m*-by-*n* matrix with *m* ≥ *n*, and *I*<sub>*n*</sub> = `eye(n)`.

`p_s_n` and `regularizationParameter` are optional parameters. If they are not supplied or are empty, their default values are used.

## Examples

### Algorithms to Determine Fixed-Point Types for Real Q-less QR Matrix Solve A'AX=B

This example shows the algorithms that the `fixed.realQlessQRMatrixSolveFixedpointTypes` function uses to analytically determine fixed-point types for the solution of the real matrix equation $${A}^{\prime}AX=B$$, where $$A$$ is an $$m$$-by-$$n$$ matrix with $$m>n$$, $$B$$ is $$n$$-by-$$p$$, and $$X$$ is $$n$$-by-$$p$$.

**Overview**

You can solve the fixed-point matrix equation $${A}^{\prime}AX=B$$ using QR decomposition. Using a sequence of orthogonal transformations, QR decomposition transforms matrix $$A$$ in-place to upper triangular $$R$$, where $$QR=A$$ is the economy-size QR decomposition. This reduces the equation to an upper-triangular system of equations $${R}^{\prime}RX=B$$. To solve for $$X$$, compute $$X=R\backslash ({R}^{\prime}\backslash B)$$ through forward- and backward-substitution of $$R$$ into $$B$$.
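As an illustration of this two-step substitution, the following sketch solves $${A}^{\prime}AX=B$$ from the economy-size QR factors and checks the result against a direct solve. It is written in Python/NumPy purely for reference alongside the MATLAB workflow; the matrix sizes and seed are arbitrary choices for the illustration.

```python
import numpy as np
from scipy.linalg import solve_triangular

rng = np.random.default_rng(0)
m, n, p = 50, 6, 2
A = rng.standard_normal((m, n))
B = rng.standard_normal((n, p))

# Economy-size QR decomposition: Q is m-by-n, R is n-by-n upper triangular.
Q, R = np.linalg.qr(A, mode="reduced")

# A'AX = B reduces to R'RX = B; solve by forward- then back-substitution.
Y = solve_triangular(R.T, B, lower=True)   # forward-substitution: R'Y = B
X = solve_triangular(R, Y, lower=False)    # back-substitution:    RX = Y

# Compare with a direct solve of (A'A)X = B.
X_direct = np.linalg.solve(A.T @ A, B)
print(np.max(np.abs(X - X_direct)))  # small (floating-point roundoff)
```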

You can determine appropriate fixed-point types for the matrix equation $${A}^{\prime}AX=B$$ by selecting the fraction length based on the number of bits of precision defined by your requirements. The `fixed.realQlessQRMatrixSolveFixedpointTypes` function analytically computes the following upper bounds on $$R$$ and $$X$$ to determine the number of integer bits required to avoid overflow [1,2,3].

The upper bound for the magnitude of the elements of $$R={Q}^{\prime}A$$ is

$$\mathrm{max}(|R(:)|)\le \sqrt{m}\mathrm{max}(|A(:)|)$$.

The upper bound for the magnitude of the elements of $$X=({A}^{\prime}A)\backslash B$$ is

$$\mathrm{max}(|X(:)|)\le \frac{\sqrt{n}\mathrm{max}(|B(:)|)}{\mathrm{min}(\text{svd}(A){)}^{2}}$$.

Since computing $$\text{svd}(A)$$ is more computationally expensive than solving the system of equations, the `fixed.realQlessQRMatrixSolveFixedpointTypes` function estimates a lower bound of $$\mathrm{min}(\text{svd}(A))$$.

Fixed-point types for the solution of the matrix equation $$({A}^{\prime}A)X=B$$ are generally well-bounded if the number of rows, $$m$$, of $$A$$ is much greater than the number of columns, $$n$$ (that is, $$m\gg n$$), and $$A$$ is full rank. If $$A$$ is not inherently full rank, then it can be made so by adding random noise. Random noise naturally occurs in physical systems, such as thermal noise in radar or communications systems. If $$m=n$$, then the dynamic range of the system can be unbounded. For example, in the scalar equation $$x={a}^{2}/b$$ with $$a,b\in [-1,1]$$, $$x$$ can be arbitrarily large if $$b$$ is close to $$0$$.
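The two bounds above can be spot-checked numerically. The following sketch (Python/NumPy, used here purely as an illustration alongside the MATLAB workflow; sizes and seed are arbitrary) draws a random matrix with entries in $$[-1,1]$$ and verifies both inequalities:

```python
import numpy as np

rng = np.random.default_rng(1)
m, n, p = 100, 8, 3
A = rng.uniform(-1, 1, (m, n))  # full rank with probability 1
B = rng.uniform(-1, 1, (n, p))

Q, R = np.linalg.qr(A, mode="reduced")
smin = np.linalg.svd(A, compute_uv=False).min()
X = np.linalg.solve(A.T @ A, B)

# max|R| <= sqrt(m) * max|A|
assert np.max(np.abs(R)) <= np.sqrt(m) * np.max(np.abs(A))
# max|X| <= sqrt(n) * max|B| / min(svd(A))^2
assert np.max(np.abs(X)) <= np.sqrt(n) * np.max(np.abs(B)) / smin**2
```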

**Proofs of the Bounds**

**Properties and Definitions of Vector and Matrix Norms**

The proofs of the bounds use the following properties and definitions of matrix and vector norms, where $$Q$$ is an orthogonal matrix, and $$v$$ is a vector of length $$m$$ [6].

$$\begin{array}{lcl}||Av|{|}_{2}& \le & ||A|{|}_{2}||v|{|}_{2}\\ ||Q|{|}_{2}& =& 1\\ ||v|{|}_{\infty}& =& \mathrm{max}(|v(:)|)\\ ||v|{|}_{\infty}& \le & ||v|{|}_{2}\phantom{\rule{0.2777777777777778em}{0ex}}\le \phantom{\rule{0.2777777777777778em}{0ex}}\sqrt{m}||v|{|}_{\infty}\end{array}$$

If $$A$$ is an $$m$$-by-$$n$$ matrix and $$QR=A$$ is the economy-size QR decomposition of $$A$$, where $$Q$$ is orthogonal and $$m$$-by-$$n$$ and $$R$$ is upper-triangular and $$n$$-by-$$n$$, then the singular values of $$R$$ are equal to the singular values of $$A$$. If $$A$$ is nonsingular, then

$$||{R}^{-1}|{|}_{2}=||({R}^{\prime}{)}^{-1}|{|}_{2}=\frac{1}{\mathrm{min}(\text{svd}(R))}=\frac{1}{\mathrm{min}(\text{svd}(A))}$$

**Upper Bound for R = Q'A**

The upper bound for the magnitude of the elements of $$R$$ is

$$\mathrm{max}(|R(:)|)\le \sqrt{m}\mathrm{max}(|A(:)|)$$.

**Proof of Upper Bound for R = Q'A**

The $$j$$th column of $$R$$ is equal to $$R(:,j)={Q}^{\prime}A(:,j)$$, so

$$\begin{array}{rcl}\mathrm{max}(|R(:,j)|)& =& ||R(:,j)|{|}_{\infty}\\ & \le & ||R(:,j)|{|}_{2}\\ & =& ||{Q}^{\prime}A(:,j)|{|}_{2}\\ & \le & ||{Q}^{\prime}|{|}_{2}||A(:,j)|{|}_{2}\\ & =& ||A(:,j)|{|}_{2}\\ & \le & \sqrt{m}||A(:,j)|{|}_{\infty}\\ & =& \sqrt{m}\mathrm{max}(|A(:,j)|)\\ & \le & \sqrt{m}\mathrm{max}(|A(:)|).\end{array}$$

Since $$\mathrm{max}(|R(:,j)|)\le \sqrt{m}\mathrm{max}(|A(:)|)$$ for all columns $$1\le j\le n$$, then

$$\mathrm{max}(|R(:)|)\le \sqrt{m}\mathrm{max}(|A(:)|).$$

**Upper Bound for X = (A'A)\B**

The upper bound for the magnitude of the elements of $$X=({A}^{\prime}A)\backslash B$$ is

$$\mathrm{max}(|X(:)|)\le \frac{\sqrt{n}\mathrm{max}(|B(:)|)}{\mathrm{min}(\text{svd}(A){)}^{2}}$$.

**Proof of Upper Bound for X = (A'A)\B**

If $$A$$ is not full rank, then $$\mathrm{min}(\text{svd}(A))=0$$, and if $$B$$ is not equal to zero, then $$\sqrt{n}\mathrm{max}(|B(:)|)/\mathrm{min}(\text{svd}(A){)}^{2}=\infty$$, and so the inequality is true.

If $${A}^{\prime}Ax=b$$ and $$QR=A$$ is the economy-size QR decomposition of $$A$$, then $${A}^{\prime}Ax={R}^{\prime}{Q}^{\prime}QRx={R}^{\prime}Rx=b$$. If $$A$$ is full rank then $$x={R}^{-1}\cdot (({R}^{\prime}{)}^{-1}b)$$. Let $$x=X(:,j)$$ be the $$j$$th column of $$X$$, and $$b=B(:,j)$$ be the $$j$$th column of $$B$$. Then

$$\begin{array}{rcl}\mathrm{max}(|x(:)|)& =& ||x|{|}_{\infty}\\ & \le & ||x|{|}_{2}\\ & =& ||{R}^{-1}\cdot (({R}^{\prime}{)}^{-1}b)|{|}_{2}\\ & \le & ||{R}^{-1}|{|}_{2}||({R}^{\prime}{)}^{-1}|{|}_{2}||b|{|}_{2}\\ & =& (1/\mathrm{min}(\text{svd}(A){)}^{2})\cdot ||b|{|}_{2}\\ & =& ||b|{|}_{2}/\mathrm{min}(\text{svd}(A){)}^{2}\\ & \le & \sqrt{n}||b|{|}_{\infty}/\mathrm{min}(\text{svd}(A){)}^{2}\\ & =& \sqrt{n}\mathrm{max}(|b(:)|)/\mathrm{min}(\text{svd}(A){)}^{2}.\end{array}$$

Since $$\mathrm{max}(|x(:)|)\le \sqrt{n}\mathrm{max}(|b(:)|)/\mathrm{min}(\text{svd}(A){)}^{2}$$ for all rows and columns of $$B$$ and $$X$$, then

$$\mathrm{max}(|X(:)|)\le \frac{\sqrt{n}\mathrm{max}(|B(:)|)}{\mathrm{min}(\text{svd}(A){)}^{2}}$$.

**Lower Bound for min(svd(A))**

You can estimate a lower bound $$s$$ of $$\mathrm{min}(\text{svd}(A))$$ for real-valued $$A$$ using the following formula,

$$s={\sigma}_{N}\sqrt{2{\gamma}^{-1}(\frac{{p}_{s}\phantom{\rule{0.16666666666666666em}{0ex}}\Gamma (m-n+1)\Gamma (n/2)}{{2}^{m-n}\Gamma \left(\frac{m+1}{2}\right)\Gamma \left(\frac{m-n+1}{2}\right)},\phantom{\rule{0.2777777777777778em}{0ex}}\frac{m-n+1}{2})}$$

where $${\sigma}_{N}$$ is the standard deviation of random noise added to the elements of $$A$$, $$1-{p}_{s}$$ is the probability that $$s\le \mathrm{min}(\text{svd}(A))$$, $$\Gamma$$ is the `gamma` function, and $${\gamma}^{-1}$$ is the inverse incomplete gamma function `gammaincinv`.

The proof is found in [1]. It is derived by integrating the formula in Lemma 3.3 from [3] and rearranging terms.

Since $$s\le \mathrm{min}(\text{svd}(A))$$ with probability $$1-{p}_{s}$$, then you can bound the magnitude of the elements of $$X$$ without computing $$\text{svd}(A)$$,

$$\mathrm{max}(|X(:)|)\le \frac{\sqrt{n}\mathrm{max}(|B(:)|)}{\mathrm{min}(\text{svd}(A){)}^{2}}\le \frac{\sqrt{n}\mathrm{max}(|B(:)|)}{{s}^{2}}$$ with probability $$1-{p}_{s}$$.

You can compute $$s$$ using the `fixed.realSingularValueLowerBound` function, which uses a default probability of 5 standard deviations below the mean, $${p}_{s}=(1+\text{erf}(-5/\sqrt{2}))/2\approx 2.8665\cdot 1{0}^{-7}$$, so the probability that the estimated bound for the smallest singular value $$s$$ is less than the actual smallest singular value of $$A$$ is $$1-{p}_{s}\approx 0.9999997$$.
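The formula can be evaluated directly. The following sketch is a hypothetical re-implementation in Python/SciPy (not the shipped MATLAB function); SciPy's `gammaincinv(a, y)` inverts the regularized lower incomplete gamma function, matching MATLAB's `gammaincinv(y, a)` with the arguments swapped, and evaluating the Gamma-function ratio in log space is an implementation choice to avoid overflow for large $$m$$.

```python
import numpy as np
from math import erf, sqrt
from scipy.special import gammaln, gammaincinv

def real_singular_value_lower_bound(m, n, noise_std, p_s=None):
    """Estimate s such that s <= min(svd(A)) with probability 1 - p_s."""
    if p_s is None:
        # Default probability: 5 standard deviations below the mean.
        p_s = (1 + erf(-5 / sqrt(2))) / 2
    # Ratio of Gamma functions, evaluated in log space to avoid overflow.
    log_ratio = (gammaln(m - n + 1) + gammaln(n / 2)
                 - (m - n) * np.log(2.0)
                 - gammaln((m + 1) / 2)
                 - gammaln((m - n + 1) / 2))
    a = (m - n + 1) / 2
    return noise_std * np.sqrt(2.0 * gammaincinv(a, p_s * np.exp(log_ratio)))

# Parameters used later in this example: m = 300, n = 10, sigma_N = sqrt(10^(-50/10)).
s = real_singular_value_lower_bound(300, 10, np.sqrt(10 ** (-50 / 10)))
print(s)  # compare with estimatedSingularValueLowerBound = 0.0371 in this example
```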

**Example**

This example runs a simulation with many random matrices and compares the analytical bounds with the actual singular values of $$A$$ and the actual largest elements of $$R={Q}^{\prime}A$$, and $$X=({A}^{\prime}A)\backslash B$$.

**Define System Parameters**

Define the matrix attributes and system parameters for this example.

`m` is the number of rows in matrix `A`. In a problem such as beamforming or direction finding, `m` corresponds to the number of samples that are integrated over.

m = 300;

`n` is the number of columns in matrix `A` and rows in matrices `B` and `X`. In a least-squares problem, `m` is greater than `n`, and usually `m` is much larger than `n`. In a problem such as beamforming or direction finding, `n` corresponds to the number of sensors.

n = 10;

`p` is the number of columns in matrices `B` and `X`. It corresponds to simultaneously solving a system with `p` right-hand sides.

p = 1;

In this example, set the rank of matrix `A` to be less than the number of columns. In a problem such as beamforming or direction finding, $$\text{rank}(A)$$ corresponds to the number of signals impinging on the sensor array.

rankA = 3;

`precisionBits` defines the number of bits of precision required for the matrix solve. Set this value according to system requirements.

precisionBits = 24;

In this example, real-valued matrices `A` and `B` are constructed such that the magnitude of their elements is less than or equal to one. Your own system requirements will define what those values are. If you don't know what they are, and `A` and `B` are fixed-point inputs to the system, then you can use the `upperbound` function to determine the upper bounds of the fixed-point types of `A` and `B`.

`max_abs_A` is an upper bound on the maximum magnitude element of `A`.

max_abs_A = 1;

`max_abs_B` is an upper bound on the maximum magnitude element of `B`.

max_abs_B = 1;

Thermal noise standard deviation is the square root of thermal noise power, which is a system parameter. A well-designed system has the quantization level lower than the thermal noise. Here, set `thermalNoiseStandardDeviation` to the equivalent of $$-50$$ dB noise power.

thermalNoiseStandardDeviation = sqrt(10^(-50/10))

thermalNoiseStandardDeviation = 0.0032

The standard deviation of the noise from quantizing a real signal is $${2}^{-\text{precisionBits}}/\sqrt{12}$$ [4,5]. Use the `fixed.realQuantizationNoiseStandardDeviation` function to compute this. See that it is less than `thermalNoiseStandardDeviation`.

quantizationNoiseStandardDeviation = fixed.realQuantizationNoiseStandardDeviation(precisionBits)

quantizationNoiseStandardDeviation = 1.7206e-08
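The quantization-noise formula above is simple to evaluate directly. This sketch (in Python, as an illustration of the formula rather than the MATLAB function itself) reproduces the value:

```python
from math import sqrt

def real_quantization_noise_std(precision_bits):
    # A real quantization error is uniformly distributed over one
    # quantization level of width 2^-precision_bits, so its standard
    # deviation is the level width divided by sqrt(12).
    return 2.0 ** -precision_bits / sqrt(12)

print(real_quantization_noise_std(24))  # approximately 1.7206e-08
```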

**Compute Fixed-Point Types**

In this example, assume that the designed system matrix $$A$$ does not have full rank (there are fewer signals of interest than number of columns of matrix $$A$$), and the measured system matrix $$A$$ has additive thermal noise that is larger than the quantization noise. The additive noise makes the measured matrix $$A$$ have full rank.

Set $${\sigma}_{\text{noise}}={\sigma}_{\text{thermal}\text{}\text{noise}}$$.

noiseStandardDeviation = thermalNoiseStandardDeviation;

Use `fixed.realQlessQRMatrixSolveFixedpointTypes` to compute fixed-point types.

```
T = fixed.realQlessQRMatrixSolveFixedpointTypes(m,n,max_abs_A,max_abs_B,...
precisionBits,noiseStandardDeviation)
```

`T = `*struct with fields:*
A: [0x0 embedded.fi]
B: [0x0 embedded.fi]
X: [0x0 embedded.fi]

`T.A` is the type computed for transforming $$A$$ to $$R$$ in-place so that it does not overflow.

T.A

ans = []
        DataTypeMode: Fixed-point: binary point scaling
          Signedness: Signed
          WordLength: 31
      FractionLength: 24

`T.B` is the type computed for `B` so that it does not overflow.

T.B

ans = []
        DataTypeMode: Fixed-point: binary point scaling
          Signedness: Signed
          WordLength: 27
      FractionLength: 24

`T.X` is the type computed for the solution $$X=({A}^{\prime}A)\backslash B$$ so that there is a low probability that it overflows.

T.X

ans = []
        DataTypeMode: Fixed-point: binary point scaling
          Signedness: Signed
          WordLength: 40
      FractionLength: 24

**Upper Bound for R**

The upper bound for $$R$$ is computed using the formula $$\mathrm{max}(|R(:)|)\le \sqrt{m}\mathrm{max}(|A(:)|)$$, where $$m$$ is the number of rows of matrix $$A$$. This upper bound is used to select a fixed-point type with the required number of bits of precision to avoid an overflow in the upper bound.

upperBoundR = sqrt(m)*max_abs_A

upperBoundR = 17.3205

**Lower Bound for min(svd(A)) for Real A**

A lower bound for $$\mathrm{min}(\text{svd}(A))$$ is estimated by the `fixed.realSingularValueLowerBound` function using a probability that the estimate $$s$$ is not greater than the actual smallest singular value. The default probability is 5 standard deviations below the mean. You can change this probability by specifying it as the last input parameter to the `fixed.realSingularValueLowerBound` function.

estimatedSingularValueLowerBound = fixed.realSingularValueLowerBound(m,n,noiseStandardDeviation)

estimatedSingularValueLowerBound = 0.0371

**Simulate and Compare to the Computed Bounds**

The bounds are within an order of magnitude of the simulated results. This is sufficient because the number of bits translates to a logarithmic scale relative to the range of values. Being within a factor of 10 is between 3 and 4 bits. This is a good starting point for specifying a fixed-point type. If you run the simulation for more samples, then it is more likely that the simulated results will be closer to the bound. This example uses a limited number of simulations so it doesn't take too long to run. For real-world system design, you should run additional simulations.

Define the number of samples, `numSamples`, over which to run the simulation.

numSamples = 1e4;

Run the simulation.

```
[actualMaxR,singularValues,X_values] = runSimulations(m,n,p,rankA,max_abs_A,max_abs_B,numSamples,...
noiseStandardDeviation,T);
```

You can see that the upper bound on $$R$$ compared to the measured simulation results of the maximum value of $$R$$ over all runs is within an order of magnitude.

upperBoundR

upperBoundR = 17.3205

max(actualMaxR)

ans = 8.1682

Finally, see that the estimated lower bound of $$\mathrm{min}(\text{svd}(A))$$ compared to the measured simulation results of $$\mathrm{min}(\text{svd}(A))$$ over all runs is also within an order of magnitude.

estimatedSingularValueLowerBound

estimatedSingularValueLowerBound = 0.0371

`actualSmallestSingularValue = min(singularValues,[],'all')`

actualSmallestSingularValue = 0.0421

Plot the distribution of the singular values over all simulation runs. The distributions of the largest singular values correspond to the signals that determine the rank of the matrix. The distributions of the smallest singular values correspond to the noise. The derivation of the estimated bound of the smallest singular value makes use of the random nature of the noise.

```
clf
fixed.example.plot.singularValueDistribution(m,n,rankA,...
    noiseStandardDeviation,singularValues,...
    estimatedSingularValueLowerBound,"real");
```

Zoom in to the smallest singular value to see that the estimated bound is close to it.

xlim([estimatedSingularValueLowerBound*0.9, max(singularValues(n,:))]);

Estimate the largest value of the solution, X, and compare it to the largest value of X found during the simulation runs. The estimation is within an order of magnitude of the actual value, which is sufficient for estimating a fixed-point data type, because it is between 3 and 4 bits.

This example uses a limited number of simulation runs. With additional simulation runs, the actual largest value of X will approach the estimated largest value of X.

estimated_largest_X = fixed.realQlessQRMatrixSolveUpperBoundX(m,n,max_abs_B,noiseStandardDeviation)

estimated_largest_X = 7.2565e+03

`actual_largest_X = max(abs(X_values),[],'all')`

actual_largest_X = 582.6761

Plot the distribution of X values and compare it to the estimated upper bound for X.

```
clf
fixed.example.plot.xValueDistribution(m,n,rankA,noiseStandardDeviation,...
    X_values,estimated_largest_X,"real normally distributed random");
```

**Supporting Functions**

The `runSimulations` function creates a series of random matrices $$A$$ and $$B$$ of a given size and rank, quantizes them according to the computed types, computes the QR decomposition of $$A$$, and solves the equation $${A}^{\prime}AX=B$$. It returns the maximum values of $$R={Q}^{\prime}A$$, the singular values of $$A$$, and the values of $$X$$ so their distributions can be plotted and compared to the bounds.

```
function [actualMaxR,singularValues,X_values] = runSimulations(m,n,p,rankA,max_abs_A,max_abs_B,...
        numSamples,noiseStandardDeviation,T)
    precisionBits = T.A.FractionLength;
    A_WordLength = T.A.WordLength;
    B_WordLength = T.B.WordLength;
    actualMaxR = zeros(1,numSamples);
    singularValues = zeros(n,numSamples);
    X_values = zeros(n,numSamples);
    for j = 1:numSamples
        A = max_abs_A*fixed.example.realRandomLowRankMatrix(m,n,rankA);
        % Adding random noise makes A non-singular.
        A = A + fixed.example.realNormalRandomArray(0,noiseStandardDeviation,m,n);
        A = quantizenumeric(A,1,A_WordLength,precisionBits);
        B = fixed.example.realUniformRandomArray(-max_abs_B,max_abs_B,n,p);
        B = quantizenumeric(B,1,B_WordLength,precisionBits);
        [~,R] = qr(A,0);
        X = R\(R'\B);
        actualMaxR(j) = max(abs(R(:)));
        singularValues(:,j) = svd(A);
        X_values(:,j) = X;
    end
end
```

**References**

1. Thomas A. Bryan and Jenna L. Warren. "Systems and Methods for Design Parameter Selection". Patent pending. U.S. Patent Application No. 16/947,130. 2020.

2. Perform QR Factorization Using CORDIC. Derivation of the bound on growth when computing QR. MathWorks. 2010.

3. Zizhong Chen and Jack J. Dongarra. "Condition Numbers of Gaussian Random Matrices". In: SIAM J. Matrix Anal. Appl. 27.3 (July 2005), pp. 603–620. ISSN: 0895-4798. DOI: 10.1137/040616413.

4. Bernard Widrow. "A Study of Rough Amplitude Quantization by Means of Nyquist Sampling Theory". In: IRE Transactions on Circuit Theory 3.4 (Dec. 1956), pp. 266–276.

5. Bernard Widrow and István Kollár. Quantization Noise – Roundoff Error in Digital Computation, Signal Processing, Control, and Communications. Cambridge, UK: Cambridge University Press, 2008.

6. Gene H. Golub and Charles F. Van Loan. Matrix Computations. Second edition. Baltimore: Johns Hopkins University Press, 1989.


### Algorithms to Determine Fixed-Point Types for Real Least-Squares Matrix Solve AX=B

This example shows the algorithms that the `fixed.realQRMatrixSolveFixedpointTypes` function uses to analytically determine fixed-point types for the solution of the real least-squares matrix equation $$AX=B$$, where $$A$$ is an $$m$$-by-$$n$$ matrix with $$m\ge n$$, $$B$$ is $$m$$-by-$$p$$, and $$X$$ is $$n$$-by-$$p$$.

**Overview**

You can solve the fixed-point least-squares matrix equation $$AX=B$$ using QR decomposition. Using a sequence of orthogonal transformations, QR decomposition transforms matrix $$A$$ in-place to upper triangular $$R$$, and transforms matrix $$B$$ in-place to $$C={Q}^{\prime}B$$, where $$QR=A$$ is the economy-size QR decomposition. This reduces the equation to an upper-triangular system of equations $$RX=C$$. To solve for $$X$$, compute $$X=R\backslash C$$ through back-substitution of $$R$$ into $$C$$.
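The transformation described above can be sketched directly. The following illustration (Python/NumPy, used here for reference alongside the MATLAB workflow; sizes and seed are arbitrary) forms $$C={Q}^{\prime}B$$, back-substitutes, and checks the result against a standard least-squares solve:

```python
import numpy as np
from scipy.linalg import solve_triangular

rng = np.random.default_rng(2)
m, n, p = 40, 5, 2
A = rng.standard_normal((m, n))
B = rng.standard_normal((m, p))

# Economy-size QR: A becomes R, and B becomes C = Q'B.
Q, R = np.linalg.qr(A, mode="reduced")
C = Q.T @ B

# Back-substitution solves the upper-triangular system RX = C.
X = solve_triangular(R, C, lower=False)

# Agrees with the least-squares solution of AX = B.
X_lstsq = np.linalg.lstsq(A, B, rcond=None)[0]
print(np.max(np.abs(X - X_lstsq)))  # small (floating-point roundoff)
```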

You can determine appropriate fixed-point types for the least-squares matrix equation $$AX=B$$ by selecting the fraction length based on the number of bits of precision defined by your requirements. The `fixed.realQRMatrixSolveFixedpointTypes` function analytically computes the following upper bounds on $$R$$, $$C={Q}^{\prime}B$$, and $$X$$ to determine the number of integer bits required to avoid overflow [1,2,3].

The upper bound for the magnitude of the elements of $$R$$ is

$$\mathrm{max}(|R(:)|)\le \sqrt{m}\mathrm{max}(|A(:)|)$$.

The upper bound for the magnitude of the elements of $$C={Q}^{\prime}B$$ is

$$\mathrm{max}(|C(:)|)\le \sqrt{m}\mathrm{max}(|B(:)|)$$.

The upper bound for the magnitude of the elements of $$X=A\backslash B$$ is

$$\mathrm{max}(|X(:)|)\le \frac{\sqrt{m}\mathrm{max}(|B(:)|)}{\mathrm{min}(\text{svd}(A))}$$.

Since computing $$\text{svd}(A)$$ is more computationally expensive than solving the system of equations, the `fixed.realQRMatrixSolveFixedpointTypes` function estimates a lower bound of $$\mathrm{min}(\text{svd}(A))$$.

Fixed-point types for the solution of the matrix equation $$AX=B$$ are generally well-bounded if the number of rows, $$m$$, of $$A$$ is much greater than the number of columns, $$n$$ (that is, $$m\gg n$$), and $$A$$ is full rank. If $$A$$ is not inherently full rank, then it can be made so by adding random noise. Random noise naturally occurs in physical systems, such as thermal noise in radar or communications systems. If $$m=n$$, then the dynamic range of the system can be unbounded. For example, in the scalar equation $$x=a/b$$ with $$a,b\in [-1,1]$$, $$x$$ can be arbitrarily large if $$b$$ is close to $$0$$.
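The three bounds above can be spot-checked numerically. The following sketch (Python/NumPy, as an illustration alongside the MATLAB workflow; sizes and seed are arbitrary) verifies the $$C={Q}^{\prime}B$$ and $$X=A\backslash B$$ inequalities for a random matrix with entries in $$[-1,1]$$:

```python
import numpy as np

rng = np.random.default_rng(3)
m, n, p = 200, 6, 2
A = rng.uniform(-1, 1, (m, n))  # full rank with probability 1
B = rng.uniform(-1, 1, (m, p))

Q, R = np.linalg.qr(A, mode="reduced")
C = Q.T @ B
smin = np.linalg.svd(A, compute_uv=False).min()
X = np.linalg.lstsq(A, B, rcond=None)[0]

# max|C| <= sqrt(m) * max|B|
assert np.max(np.abs(C)) <= np.sqrt(m) * np.max(np.abs(B))
# max|X| <= sqrt(m) * max|B| / min(svd(A))
assert np.max(np.abs(X)) <= np.sqrt(m) * np.max(np.abs(B)) / smin
```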

**Proofs of the Bounds**

**Properties and Definitions of Vector and Matrix Norms**

The proofs of the bounds use the following properties and definitions of matrix and vector norms, where $$Q$$ is an orthogonal matrix, and $$v$$ is a vector of length $$m$$ [6].

$$\begin{array}{lcl}||Av|{|}_{2}& \le & ||A|{|}_{2}||v|{|}_{2}\\ ||Q|{|}_{2}& =& 1\\ ||v|{|}_{\infty}& =& \mathrm{max}(|v(:)|)\\ ||v|{|}_{\infty}& \le & ||v|{|}_{2}\phantom{\rule{0.2777777777777778em}{0ex}}\le \phantom{\rule{0.2777777777777778em}{0ex}}\sqrt{m}||v|{|}_{\infty}\end{array}$$

If $$A$$ is an $$m$$-by-$$n$$ matrix and $$QR=A$$ is the economy-size QR decomposition of $$A$$, where $$Q$$ is orthogonal and $$m$$-by-$$n$$ and $$R$$ is upper-triangular and $$n$$-by-$$n$$, then the singular values of $$R$$ are equal to the singular values of $$A$$. If $$A$$ is nonsingular, then

$$||{R}^{-1}|{|}_{2}=||({R}^{\prime}{)}^{-1}|{|}_{2}=\frac{1}{\mathrm{min}(\text{svd}(R))}=\frac{1}{\mathrm{min}(\text{svd}(A))}$$

**Upper Bound for R = Q'A**

The upper bound for the magnitude of the elements of $$R$$ is

$$\mathrm{max}(|R(:)|)\le \sqrt{m}\mathrm{max}(|A(:)|)$$.

**Proof of Upper Bound for R = Q'A**

The $$j$$th column of $$R$$ is equal to $$R(:,j)={Q}^{\prime}A(:,j)$$, so

$$\begin{array}{rcl}\mathrm{max}(|R(:,j)|)& =& ||R(:,j)|{|}_{\infty}\\ & \le & ||R(:,j)|{|}_{2}\\ & =& ||{Q}^{\prime}A(:,j)|{|}_{2}\\ & \le & ||{Q}^{\prime}|{|}_{2}||A(:,j)|{|}_{2}\\ & =& ||A(:,j)|{|}_{2}\\ & \le & \sqrt{m}||A(:,j)|{|}_{\infty}\\ & =& \sqrt{m}\mathrm{max}(|A(:,j)|)\\ & \le & \sqrt{m}\mathrm{max}(|A(:)|).\end{array}$$

Since $$\mathrm{max}(|R(:,j)|)\le \sqrt{m}\mathrm{max}(|A(:)|)$$ for all columns $$1\le j\le n$$, then

$$\mathrm{max}(|R(:)|)\le \sqrt{m}\mathrm{max}(|A(:)|).$$

**Upper Bound for C = Q'B**

The upper bound for the magnitude of the elements of $$C={Q}^{\prime}B$$ is

$$\mathrm{max}(|C(:)|)\le \sqrt{m}\mathrm{max}(|B(:)|)$$.

**Proof of Upper Bound for C = Q'B**

The proof of the upper bound for $$C={Q}^{\prime}B$$ is the same as the proof of the upper bound for $$R={Q}^{\prime}A$$ by substituting $$C$$ for $$R$$ and $$B$$ for $$A$$.

**Upper Bound for X = A\B**

The upper bound for the magnitude of the elements of $$X=A\backslash B$$ is

$$\mathrm{max}(|X(:)|)\le \frac{\sqrt{m}\mathrm{max}(|B(:)|)}{\mathrm{min}(\text{svd}(A))}$$.

**Proof of Upper Bound for X = A\B**

If $$A$$ is not full rank, then $$\mathrm{min}(\text{svd}(A))=0$$, and if $$B$$ is not equal to zero, then $$\sqrt{m}\mathrm{max}(|B(:)|)/\mathrm{min}(\text{svd}(A))=\infty $$ and so the inequality is true.

If $$A$$ is full rank and $$QR=A$$ is the economy-size QR decomposition of $$A$$, then the least-squares solution of $$Ax=b$$ is $$x={R}^{-1}({Q}^{\prime}b)$$. Let $$x=X(:,j)$$ be the $$j$$th column of $$X$$, and $$b=B(:,j)$$ be the $$j$$th column of $$B$$. Then

$$\begin{array}{rcl}\mathrm{max}(|x(:)|)& =& ||x|{|}_{\infty}\\ & \le & ||x|{|}_{2}\\ & =& ||{R}^{-1}\cdot ({Q}^{\prime}b)|{|}_{2}\\ & \le & ||{R}^{-1}|{|}_{2}||{Q}^{\prime}|{|}_{2}||b|{|}_{2}\\ & =& (1/\mathrm{min}(\text{svd}(A)))\cdot 1\cdot ||b|{|}_{2}\\ & =& ||b|{|}_{2}/\mathrm{min}(\text{svd}(A))\\ & \le & \sqrt{m}||b|{|}_{\infty}/\mathrm{min}(\text{svd}(A))\\ & =& \sqrt{m}\mathrm{max}(|b(:)|)/\mathrm{min}(\text{svd}(A)).\end{array}$$

Since $$\mathrm{max}(|x(:)|)\le \sqrt{m}\mathrm{max}(|b(:)|)/\mathrm{min}(\text{svd}(A))$$ for all rows and columns of $$B$$ and $$X$$, then

$$\mathrm{max}(|X(:)|)\le \frac{\sqrt{m}\mathrm{max}(|B(:)|)}{\mathrm{min}(\text{svd}(A))}$$.

**Lower Bound for min(svd(A))**

You can estimate a lower bound $$s$$ of $$\mathrm{min}(\text{svd}(A))$$ for real-valued $$A$$ using the following formula,

$$s={\sigma}_{N}\sqrt{2{\gamma}^{-1}(\frac{{p}_{s}\phantom{\rule{0.16666666666666666em}{0ex}}\Gamma (m-n+1)\Gamma (n/2)}{{2}^{m-n}\Gamma \left(\frac{m+1}{2}\right)\Gamma \left(\frac{m-n+1}{2}\right)},\phantom{\rule{0.2777777777777778em}{0ex}}\frac{m-n+1}{2})}$$

where $${\sigma}_{N}$$ is the standard deviation of random noise added to the elements of $$A$$, $$1-{p}_{s}$$ is the probability that $$s\le \mathrm{min}(\text{svd}(A))$$, $$\Gamma$$ is the `gamma` function, and $${\gamma}^{-1}$$ is the inverse incomplete gamma function `gammaincinv`.

The proof is found in [1]. It is derived by integrating the formula in Lemma 3.3 from [3] and rearranging terms.

Since $$s\le \mathrm{min}(\text{svd}(A))$$ with probability $$1-{p}_{s}$$, then you can bound the magnitude of the elements of $$X$$ without computing $$\text{svd}(A)$$,

$$\mathrm{max}(|X(:)|)\le \frac{\sqrt{m}\mathrm{max}(|B(:)|)}{\mathrm{min}(\text{svd}(A))}\le \frac{\sqrt{m}\mathrm{max}(|B(:)|)}{s}$$ with probability $$1-{p}_{s}$$.

You can compute $$s$$ using the `fixed.realSingularValueLowerBound` function, which uses a default probability of 5 standard deviations below the mean, $${p}_{s}=(1+\text{erf}(-5/\sqrt{2}))/2\approx 2.8665\cdot 1{0}^{-7}$$, so the probability that the estimated bound for the smallest singular value $$s$$ is less than the actual smallest singular value of $$A$$ is $$1-{p}_{s}\approx 0.9999997$$.

**Example**

This example runs a simulation with many random matrices and compares the analytical bounds with the actual singular values of $$A$$ and the actual largest elements of $$R={Q}^{\prime}A$$, $$C={Q}^{\prime}B$$, and $$X=A\backslash B$$.

**Define System Parameters**

Define the matrix attributes and system parameters for this example.

`m` is the number of rows in matrices `A` and `B`. In a problem such as beamforming or direction finding, `m` corresponds to the number of samples that are integrated over.

m = 300;

`n` is the number of columns in matrix `A` and rows in matrix `X`. In a least-squares problem, `m` is greater than `n`, and usually `m` is much larger than `n`. In a problem such as beamforming or direction finding, `n` corresponds to the number of sensors.

n = 10;

`p` is the number of columns in matrices `B` and `X`. It corresponds to simultaneously solving a system with `p` right-hand sides.

p = 1;

In this example, set the rank of matrix `A` to be less than the number of columns. In a problem such as beamforming or direction finding, $$\text{rank}(A)$$ corresponds to the number of signals impinging on the sensor array.

rankA = 3;

`precisionBits` defines the number of bits of precision required for the matrix solve. Set this value according to system requirements.

precisionBits = 24;

In this example, real-valued matrices `A` and `B` are constructed such that the magnitude of their elements is less than or equal to one. Your own system requirements will define what those values are. If you don't know what they are, and `A` and `B` are fixed-point inputs to the system, then you can use the `upperbound` function to determine the upper bounds of the fixed-point types of `A` and `B`.

`max_abs_A` is an upper bound on the maximum magnitude element of `A`.

max_abs_A = 1;

`max_abs_B` is an upper bound on the maximum magnitude element of `B`.

max_abs_B = 1;

Thermal noise standard deviation is the square root of thermal noise power, which is a system parameter. A well-designed system has the quantization level lower than the thermal noise. Here, set `thermalNoiseStandardDeviation` to the equivalent of $$-50$$ dB noise power.

thermalNoiseStandardDeviation = sqrt(10^(-50/10))

thermalNoiseStandardDeviation = 0.0032

The standard deviation of the noise from quantizing the elements of a real signal is $${2}^{-\text{precisionBits}}/\sqrt{12}$$ [4,5]. Use the `fixed.realQuantizationNoiseStandardDeviation` function to compute this. See that it is less than `thermalNoiseStandardDeviation`.

quantizationNoiseStandardDeviation = fixed.realQuantizationNoiseStandardDeviation(precisionBits)

quantizationNoiseStandardDeviation = 1.7206e-08

**Compute Fixed-Point Types**

In this example, assume that the designed system matrix $$A$$ does not have full rank (there are fewer signals of interest than number of columns of matrix $$A$$), and the measured system matrix $$A$$ has additive thermal noise that is larger than the quantization noise. The additive noise makes the measured matrix $$A$$ have full rank.

Set $${\sigma}_{\text{noise}}={\sigma}_{\text{thermal}\text{}\text{noise}}$$.

noiseStandardDeviation = thermalNoiseStandardDeviation;

Use `fixed.realQRMatrixSolveFixedpointTypes` to compute fixed-point types.

```
T = fixed.realQRMatrixSolveFixedpointTypes(m,n,max_abs_A,max_abs_B,...
precisionBits,noiseStandardDeviation)
```

`T = `*struct with fields:*
A: [0x0 embedded.fi]
B: [0x0 embedded.fi]
X: [0x0 embedded.fi]

`T.A` is the type computed for transforming $$A$$ to $$R$$ in-place so that it does not overflow.

T.A

ans = []
        DataTypeMode: Fixed-point: binary point scaling
          Signedness: Signed
          WordLength: 31
      FractionLength: 24

`T.B` is the type computed for transforming $$B$$ to $${Q}^{\prime}B$$ in-place so that it does not overflow.

T.B

ans = []
        DataTypeMode: Fixed-point: binary point scaling
          Signedness: Signed
          WordLength: 31
      FractionLength: 24

`T.X` is the type computed for the solution $$X=A\backslash B$$ so that there is a low probability that it overflows.

T.X

ans = []
        DataTypeMode: Fixed-point: binary point scaling
          Signedness: Signed
          WordLength: 36
      FractionLength: 24

**Upper Bounds for R and C=Q'B**

The upper bounds for $$R$$ and $$C={Q}^{\prime}B$$ are computed using the following formulas, where $$m$$ is the number of rows of matrices $$A$$ and $$B$$.

$$\mathrm{max}(|R(:)|)\le \sqrt{m}\mathrm{max}(|A(:)|)$$

$$\mathrm{max}(|C(:)|)\le \sqrt{m}\mathrm{max}(|B(:)|)$$

These upper bounds are used to select a fixed-point type with the required number of bits of precision to avoid overflows.

upperBoundR = sqrt(m)*max_abs_A

upperBoundR = 17.3205

upperBoundQB = sqrt(m)*max_abs_B

upperBoundQB = 17.3205

**Lower Bound for min(svd(A)) for Real A**

A lower bound for $$\mathrm{min}(\text{svd}(A))$$ is estimated by the `fixed.realSingularValueLowerBound` function using a probability that the estimate $$s$$ is not greater than the actual smallest singular value. The default probability is 5 standard deviations below the mean. You can change this probability by specifying it as the last input parameter to the `fixed.realSingularValueLowerBound` function.

estimatedSingularValueLowerBound = fixed.realSingularValueLowerBound(m,n,noiseStandardDeviation)

estimatedSingularValueLowerBound = 0.0371

**Simulate and Compare to the Computed Bounds**

The bounds are within an order of magnitude of the simulated results. This is sufficient because the number of bits translates to a logarithmic scale relative to the range of values. Being within a factor of 10 is between 3 and 4 bits. This is a good starting point for specifying a fixed-point type. If you run the simulation for more samples, then it is more likely that the simulated results will be closer to the bound. This example uses a limited number of simulations so it doesn't take too long to run. For real-world system design, you should run additional simulations.

Define the number of samples, `numSamples`, over which to run the simulation.

numSamples = 1e4;

Run the simulation.

```
[actualMaxR,actualMaxQB,singularValues,X_values] = runSimulations(m,n,p,rankA,max_abs_A,max_abs_B,...
numSamples,noiseStandardDeviation,T);
```

The upper bound on $$R$$ is within an order of magnitude of the maximum value of $$R$$ measured over all simulation runs.

upperBoundR

upperBoundR = 17.3205

max(actualMaxR)

ans = 8.3029

The upper bound on $$C={Q}^{\prime}B$$ is also within an order of magnitude of the maximum value of $$C$$ measured over all runs.

upperBoundQB

upperBoundQB = 17.3205

max(actualMaxQB)

ans = 2.5707

Finally, the estimated lower bound of $$\mathrm{min}(\text{svd}(A))$$ is also within an order of magnitude of the smallest value of $$\mathrm{min}(\text{svd}(A))$$ measured over all runs.

estimatedSingularValueLowerBound

estimatedSingularValueLowerBound = 0.0371

actualSmallestSingularValue = min(singularValues,[],'all')

actualSmallestSingularValue = 0.0420

Plot the distribution of the singular values over all simulation runs. The distributions of the largest singular values correspond to the signals that determine the rank of the matrix. The distributions of the smallest singular values correspond to the noise. The derivation of the estimated bound of the smallest singular value makes use of the random nature of the noise.

```
clf
fixed.example.plot.singularValueDistribution(m,n,rankA,noiseStandardDeviation,...
    singularValues,estimatedSingularValueLowerBound,"real");
```

Zoom in to the smallest singular value to see that the estimated bound is close to it.

xlim([estimatedSingularValueLowerBound*0.9, max(singularValues(n,:))]);

Estimate the largest value of the solution, X, and compare it to the largest value of X found during the simulation runs. The estimation is within an order of magnitude of the actual value, which is sufficient for estimating a fixed-point data type, because it is between 3 and 4 bits.

This example uses a limited number of simulation runs. With additional simulation runs, the actual largest value of X will approach the estimated largest value of X.

estimated_largest_X = fixed.realMatrixSolveUpperBoundX(m,n,max_abs_B,noiseStandardDeviation)

estimated_largest_X = 466.5772

actual_largest_X = max(abs(X_values),[],'all')

actual_largest_X = 44.8056

Plot the distribution of X values and compare it to the estimated upper bound for X.

```
clf
fixed.example.plot.xValueDistribution(m,n,rankA,noiseStandardDeviation,...
    X_values,estimated_largest_X,"real normally distributed random");
```

**Supporting Functions**

The `runSimulations` function creates a series of random matrices $$A$$ and $$B$$ of a given size and rank, quantizes them according to the computed types, computes the QR decomposition of $$A$$, and solves the equation $$AX=B$$. It returns the maximum values of $$R={Q}^{\prime}A$$ and $$C={Q}^{\prime}B$$, the singular values of $$A$$, and the values of $$X$$ so their distributions can be plotted and compared to the bounds.

```
function [actualMaxR,actualMaxQB,singularValues,X_values] = runSimulations(m,n,p,rankA,max_abs_A,max_abs_B,...
        numSamples,noiseStandardDeviation,T)
    precisionBits = T.A.FractionLength;
    A_WordLength = T.A.WordLength;
    B_WordLength = T.B.WordLength;
    actualMaxR = zeros(1,numSamples);
    actualMaxQB = zeros(1,numSamples);
    singularValues = zeros(n,numSamples);
    X_values = zeros(n,numSamples);
    for j = 1:numSamples
        A = max_abs_A*fixed.example.realRandomLowRankMatrix(m,n,rankA);
        % Adding normally distributed random noise makes A non-singular.
        A = A + fixed.example.realNormalRandomArray(0,noiseStandardDeviation,m,n);
        A = quantizenumeric(A,1,A_WordLength,precisionBits);
        B = fixed.example.realUniformRandomArray(-max_abs_B,max_abs_B,m,p);
        B = quantizenumeric(B,1,B_WordLength,precisionBits);
        [Q,R] = qr(A,0);
        C = Q'*B;
        X = R\C;
        actualMaxR(j) = max(abs(R(:)));
        actualMaxQB(j) = max(abs(C(:)));
        singularValues(:,j) = svd(A);
        X_values(:,j) = X;
    end
end
```

**References**

Thomas A. Bryan and Jenna L. Warren. “Systems and Methods for Design Parameter Selection”. Patent pending. U.S. Patent Application No. 16/947,130. 2020.

Perform QR Factorization Using CORDIC. Derivation of the bound on growth when computing QR. MathWorks. 2010.

Zizhong Chen and Jack J. Dongarra. “Condition Numbers of Gaussian Random Matrices”. In: SIAM J. Matrix Anal. Appl. 27.3 (July 2005), pp. 603–620. issn: 0895-4798. doi: 10.1137/040616413. url: https://dx.doi.org/10.1137/040616413.

Bernard Widrow. “A Study of Rough Amplitude Quantization by Means of Nyquist Sampling Theory”. In: IRE Transactions on Circuit Theory 3.4 (Dec. 1956), pp. 266–276.

Bernard Widrow and István Kollár. Quantization Noise – Roundoff Error in Digital Computation, Signal Processing, Control, and Communications. Cambridge, UK: Cambridge University Press, 2008.

Gene H. Golub and Charles F. Van Loan. Matrix Computations. Second edition. Baltimore: Johns Hopkins University Press, 1989.

Suppress `mlint` warnings in this file.

```
%#ok<*NASGU> %#ok<*ASGLU>
```

## Input Arguments

`m`

— Number of rows in matrix

positive integer-valued scalar

Number of rows in matrix, specified as a positive integer-valued scalar. The number of rows, `m`, must be greater than or equal to the number of columns, `n`.

**Data Types: **`double`

`n`

— Number of columns in matrix

positive integer-valued scalar

Number of columns in matrix, specified as a positive integer-valued scalar. The number of rows, `m`, must be greater than or equal to the number of columns, `n`.

**Data Types: **`double`

`noiseStandardDeviation`

— Standard deviation of additive random noise in matrix

scalar

Standard deviation of additive random noise in matrix, specified as a scalar.

**Data Types: **`double`

`p_s_n`

— Probability that estimate of lower bound is larger than actual smallest singular value of matrix

`2.8665e-07`

(default) | scalar

Probability that estimate of lower bound is larger than actual smallest singular value of matrix, specified as a scalar.

If `p_s_n` is not supplied or empty, then the default value of `p_s_n = (1/2)*(1+erf(-5/sqrt(2))) = 2.8665e-07` is used, which is 5 standard deviations below the mean, so the probability that the estimated lower bound for the smallest singular value is less than the actual smallest singular value is `1 - p_s_n = 0.9999997`.

**Data Types: **`double`
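The default value is the lower tail probability of a standard normal distribution at 5 standard deviations below the mean. It can be reproduced with the standard error function, shown here in Python for illustration:

```python
import math

# Default p_s_n: (1/2)*(1 + erf(-5/sqrt(2))), i.e. the standard normal
# CDF evaluated at -5 (5 standard deviations below the mean).
p_s_n = 0.5 * (1.0 + math.erf(-5.0 / math.sqrt(2.0)))
print(f"{p_s_n:.4e}")  # → 2.8665e-07
```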

`regularizationParameter`

— Regularization parameter

`0`

(default) | nonnegative scalar

Regularization parameter, specified as a nonnegative scalar. Small, positive values of the regularization parameter can improve the conditioning of the problem and reduce the variance of the estimates. While biased, the reduced variance of the estimate often results in a smaller mean squared error when compared to least-squares estimates.

`regularizationParameter` is the Tikhonov regularization parameter of the matrix $$\left[\begin{array}{c}\lambda {I}_{n}\\ A\end{array}\right]$$ where *λ* is the `regularizationParameter`, *A* is an *m*-by-*n* matrix with *m* >= *n*, and *I _{n}* = `eye(n)`.

**Data Types: **`single` | `double` | `int8` | `int16` | `int32` | `int64` | `uint8` | `uint16` | `uint32` | `uint64` | `fi`

## Output Arguments

`s_n`

— Estimate of lower bound for smallest singular value of real-valued matrix

scalar

Estimate of lower bound for smallest singular value of real-valued matrix, returned as a scalar.

## Tips

Use `fixed.realSingularValueLowerBound` to estimate the smallest singular value of a matrix, which in turn provides a bound for max(|*X*(:)|). For example, in `fixed.realQRMatrixSolveFixedpointTypes`, the elements of *X* = *R*\(*Q*'*B*) are bounded in magnitude by

$$\mathrm{max}\left(\left|X(:)\right|\right)\le \frac{\sqrt{m}\mathrm{max}\left(\left|B(:)\right|\right)}{\mathrm{min}\left(\text{svd}\left(A\right)\right)}\le \frac{\sqrt{m}\mathrm{max}\left(\left|B(:)\right|\right)}{s}$$

with probability 1 − *p _{s}*. max(|*X*(:)|) is smaller when the denominator in the above equation is larger.

If nothing else is known about a matrix, then generally, the smallest singular value will be larger if:

- there is additive random noise.
- the number of rows, *m*, is much larger than the number of columns, *n*.
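For a single right-hand side, the chain of inequalities bounding max(|*X*(:)|) is deterministic: max(|*X*(:)|) ≤ ‖*X*‖₂ ≤ ‖*B*‖₂/min(svd(*A*)) ≤ √*m*·max(|*B*(:)|)/min(svd(*A*)). A quick numerical sanity check, sketched in Python with NumPy for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)
m, n = 50, 5  # example dimensions, chosen for illustration

A = rng.normal(size=(m, n))
B = rng.uniform(-1.0, 1.0, size=(m, 1))

# Least-squares solve X = R \ (Q'*B) via economy-size QR.
Q, R = np.linalg.qr(A, mode="reduced")
X = np.linalg.solve(R, Q.T @ B)

s_min = np.linalg.svd(A, compute_uv=False).min()
bound = np.sqrt(m) * np.max(np.abs(B)) / s_min

# max|X(:)| never exceeds sqrt(m)*max|B(:)| / min(svd(A)).
assert np.max(np.abs(X)) <= bound
```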

If the noise standard deviation is not known, you can approximate it as the standard deviation of the quantization error, which you can compute using `fixed.realQuantizationNoiseStandardDeviation`.

For *s* to be a useful bound on the smallest singular value of *A*, the probability that *s* is greater than the smallest singular value of *A* should be small. A practical value to use is

$${p}_{s}=\left(1/2\right)\cdot \left(1+\text{erf}\left(-5/\sqrt{2}\right)\right)\approx 3\cdot {10}^{-7}$$

which is 5 standard deviations below the mean, so the probability that the estimated bound for the smallest singular value is less than the actual smallest singular value is 1 − *p _{s}* ≈ 0.9999997.

`fixed.realSingularValueLowerBound` is used in these functions.
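The quantization-error approximation follows the classic Widrow model: rounding to a fixed-point type with `precisionBits` fractional bits produces an error that is approximately uniform over one quantization step *q* = 2^(−precisionBits), with standard deviation *q*/√12. A sketch of that relationship under the uniform-error assumption, in Python with NumPy for illustration (`fixed.realQuantizationNoiseStandardDeviation` computes the theoretical value in MATLAB):

```python
import numpy as np

precision_bits = 10                   # example value, chosen for illustration
q = 2.0 ** (-precision_bits)          # quantization step
sigma_theory = q / np.sqrt(12.0)      # std of error uniform on [-q/2, q/2]

# Empirical check: round random values to precision_bits fractional bits
# and measure the standard deviation of the rounding error.
rng = np.random.default_rng(2)
x = rng.uniform(-1.0, 1.0, size=200_000)
err = x - np.round(x / q) * q
sigma_measured = err.std()

assert abs(sigma_measured - sigma_theory) / sigma_theory < 0.05
```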

## Algorithms

Given an *m*-by-*n* real-valued matrix *A* and standard deviation *σ _{N}* of additive random noise on the elements of *A*, you can compute an estimate of a lower bound for the smallest singular value of *A*, *s*, such that the probability of *s* being greater than the smallest singular value of *A* is *p _{s}*, using this formula [1][2].

$$s={\sigma}_{N}\sqrt{2{\gamma}^{-1}\left(\frac{{p}_{s}\Gamma \left(m-n+1\right)\Gamma (n/2)}{{2}^{m-n}\Gamma \left(\frac{m+1}{2}\right)\Gamma \left(\frac{m-n+1}{2}\right)},\frac{m-n+1}{2}\right)}$$
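The formula can be evaluated with standard special functions. The following Python sketch is for illustration only (the shipped MATLAB function is the authoritative implementation); it assumes γ⁻¹(*x*, *a*) denotes the inverse of the regularized lower incomplete gamma function with shape *a*, available as `scipy.special.gammaincinv`, and evaluates the gamma-function ratio in log space to avoid overflow for large *m* and *n*. The example arguments (300, 10, 0.001) are hypothetical values, not taken from this page.

```python
import math
from scipy.special import gammaincinv  # inverse of regularized lower incomplete gamma

def real_singular_value_lower_bound(m, n, noise_std, p_s_n=None):
    """Estimated lower bound for min(svd(A)) of a real m-by-n matrix (m >= n)
    with additive noise of standard deviation noise_std.
    Sketch under the stated assumptions; not the MathWorks implementation."""
    if p_s_n is None:
        # Default probability: 5 standard deviations below the mean.
        p_s_n = 0.5 * (1.0 + math.erf(-5.0 / math.sqrt(2.0)))
    a = (m - n + 1) / 2.0
    # log of: p_s * Gamma(m-n+1) * Gamma(n/2)
    #         / (2^(m-n) * Gamma((m+1)/2) * Gamma((m-n+1)/2))
    log_arg = (math.log(p_s_n) + math.lgamma(m - n + 1) + math.lgamma(n / 2.0)
               - (m - n) * math.log(2.0)
               - math.lgamma((m + 1) / 2.0) - math.lgamma(a))
    return noise_std * math.sqrt(2.0 * gammaincinv(a, math.exp(log_arg)))

s = real_singular_value_lower_bound(300, 10, 0.001)
```

Note that the bound scales linearly with the noise standard deviation, and a smaller tail probability `p_s_n` yields a smaller (more conservative) bound.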

## References

[1] Bryan, Thomas A. and Jenna L. Warren. "Systems and Methods for Design Parameter Selection." U.S. Patent Application No. 16/947,130. 2020.

[2] Chen, Zizhong and Jack J.
Dongarra. "Condition Numbers of Gaussian Random Matrices." *SIAM Journal on Matrix
Analysis and Applications* 27, no. 3 (July 2005): 603-620.
https://doi.org/10.1137/040616413.

## Extended Capabilities

### C/C++ Code Generation

Generate C and C++ code using MATLAB® Coder™.

## Version History

**Introduced in R2021b**

### R2022a: Support for Tikhonov regularization parameter

The `fixed.realSingularValueLowerBound` function now supports the Tikhonov regularization parameter, `regularizationParameter`.
