In addition to estimating model parameters, the toolbox algorithms also estimate the variability of the model parameters that results from random disturbances in the output.

Understanding model variability helps you to understand how different your model parameters would be if you repeated the estimation using a different data set (with the same input sequence as the original data set) and the same model structure.

When validating your parametric models, check the uncertainty values. Large uncertainties in the parameters might be caused by high model orders, inadequate excitation, or a poor signal-to-noise ratio in the data.

Uncertainty in the model is called *model covariance*.

When you estimate a model, the covariance matrix of the estimated parameters is stored with the model. Use `getcov` to fetch the covariance matrix. Use `getpvec` to fetch the list of parameters and their individual uncertainties, which are computed using the covariance matrix. The covariance matrix is used to compute all uncertainties in model output, Bode plots, residual plots, and pole-zero plots.
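
As a minimal sketch, assuming the `iddata1` example data set shipped with the toolbox and an ARX structure chosen only for illustration, fetching this information might look like:

```matlab
% Estimate a model, then fetch its parameter covariance and the
% parameter values with their individual uncertainties.
load iddata1 z1                    % example estimation data shipped with the toolbox
model = arx(z1, [2 2 1]);          % 2nd-order ARX model (illustrative choice)
P = getcov(model);                 % covariance matrix of the estimated parameters
[pvec, pvec_sd] = getpvec(model);  % parameter values and their standard deviations
```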

Computing the covariance matrix is based on the assumption that
the model structure gives the correct description of the system dynamics.
For models that include a disturbance model *H*,
a correct uncertainty estimate assumes that the model produces white
residuals. To determine whether you can trust the estimated model
uncertainty values, perform residual analysis tests on your model,
as described in Residual Analysis.
If your model passes residual analysis tests, there is a good chance
that the true system lies within the confidence interval and that any
parameter uncertainty results from random disturbances in the output.
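
For example, a quick residual check might look like the following sketch (the data set and model orders are illustrative):

```matlab
% Residual analysis: the plot shows the residual autocorrelation and the
% cross-correlation between the residuals and the input, with confidence bounds.
load iddata1 z1
model = arx(z1, [2 2 1]);
resid(z1, model)    % residuals should stay within the confidence region
```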

For output-error models, such as transfer function models, state-space models with *K* = 0, and polynomial models of output-error form, where the noise model *H* is fixed to `1`, the covariance matrix computation does not assume white residuals. Instead, the covariance is estimated based on the estimated color of the residual correlations. This estimation of the noise color is also performed for state-space models with *K* = `0`, which is equivalent to an output-error model.
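
For instance, a sketch of estimating models of output-error form, whose covariance computation accounts for colored residuals (the data set and model orders here are illustrative):

```matlab
% Output-error polynomial model: H is fixed to 1, so the covariance
% computation estimates the residual color instead of assuming whiteness.
load iddata1 z1
sysOE = oe(z1, [2 2 1]);                           % output-error polynomial model
sysSS = ssest(z1, 2, 'DisturbanceModel', 'none');  % state-space model with K = 0
P = getcov(sysOE);                                 % covariance of the OE model
```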

You can view the following uncertainty information from linear and nonlinear grey-box models:

- Uncertainties of estimated parameters. Type `present(model)` at the prompt, where `model` represents the name of a linear or nonlinear model.
- Confidence intervals on the linear model plots, including step-response, impulse-response, Bode, Nyquist, noise-spectrum, and pole-zero plots. Confidence intervals are computed based on the variability in the model parameters. For information about displaying confidence intervals, see the corresponding plot section.
- Covariance matrix of the estimated parameters in linear models and nonlinear grey-box models. Use `getcov`.
- Estimated standard deviations of polynomial coefficients, poles/zeros, or state-space matrices using `idssdata`, `tfdata`, `zpkdata`, and `polydata`.
- Simulated output values for linear models with standard deviations using the `sim` command. Call the `sim` command with output arguments, where the second output argument is the estimated standard deviation of each output value. For example, type `[ysim,ysimsd] = sim(model,data)`, where `ysim` is the simulated output, `ysimsd` contains the standard deviations on the simulated output, and `data` is the simulation data.
- To perform Monte-Carlo analysis, use `rsample` to generate a random sampling of an identified model in a given confidence region. An array of identified systems of the same structure as the input system is returned. The parameters of the returned models are perturbed about their nominal values in a way that is consistent with the parameter covariance.
- To simulate the effect of parameter uncertainties on a model's response, use `simsd`.
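
Taken together, a sketch that exercises several of these commands might look like the following (the data set and model orders are chosen only for illustration):

```matlab
% View uncertainty information for an estimated model in several ways.
load iddata1 z1
model = armax(z1, [2 2 2 1]);     % illustrative ARMAX model
present(model)                    % parameters with uncertainty information
[ysim, ysimsd] = sim(model, z1);  % simulated output and its standard deviation
msamp = rsample(model, 10);       % 10 random models from the confidence region
simsd(model, z1)                  % plot response spread due to parameter uncertainty
```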
