# recursiveARX

Online parameter estimation of ARX model

## Description

Use the `recursiveARX` System object™ for parameter estimation with real-time data using an ARX model structure. If all the data you need for estimation is available at once and you are estimating a time-invariant model, use the offline estimation function `arx`.

To perform parameter estimation with real-time data:

1. Create the `recursiveARX` object and set its properties.

2. Call the object with arguments, as if it were a function.
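These two steps can be sketched as follows, using illustrative model orders and placeholder signal data (not from a real system):

```matlab
% Step 1: create the estimator with na = 2, nb = 2, nk = 1
obj = recursiveARX([2 2 1]);

% Step 2: stream data one sample at a time, calling the object like a function
y = sin(0.1*(1:50))';   % placeholder output data
u = randn(50,1);        % placeholder input data
for k = 1:numel(u)
    [A,B,yhat] = obj(y(k),u(k));
end
```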

## Creation

### Syntax

``arxobj = recursiveARX``
``arxobj = recursiveARX([na,nb,nk])``
``arxobj = recursiveARX([na,nb,nk],A0,B0)``
``arxobj = recursiveARX(___,Name,Value)``

### Description

`arxobj = recursiveARX` creates a System object for online parameter estimation of a default single-output ARX model. The default model structure has polynomials of order 1 and initial polynomial coefficient values `eps`.

`arxobj = recursiveARX([na,nb,nk])` sets the orders of polynomials A(q) and B(q) to `na` and `nb`, respectively. This syntax also sets the input-output delay to `nk`.

`arxobj = recursiveARX([na,nb,nk],A0,B0)` specifies the initial coefficient values of polynomials A(q) and B(q) by setting the `InitialA` property to `A0` and the `InitialB` property to `B0`. Specify initial values to potentially avoid local minima during estimation.

`arxobj = recursiveARX(___,Name,Value)` specifies one or more properties of the model structure or recursive estimation algorithm using name-value arguments. For example, `arxobj = recursiveARX([2 2 1],EstimationMethod="NormalizedGradient")` creates an estimation object that uses a normalized gradient estimation method.

Before R2021a, use commas to separate each name and value, and enclose `Name` in quotes. For example, `arxobj = recursiveARX([2 2 1],"EstimationMethod","NormalizedGradient")` creates an estimation object that uses a normalized gradient estimation method.

### Input Arguments

Order of polynomial A(q), specified as a nonnegative integer.

Order of polynomial B(q) plus 1, specified as a 1-by-Nu vector of positive integers, where Nu is the number of inputs.

For MISO models, there are as many B(q) polynomials as the number of inputs. The ith element of `nb` is the order of the ith B(q) polynomial plus 1.

Input-output delay, specified as a 1-by-Nu vector of nonnegative integers, where Nu is the number of inputs. The ith element of `nk` is the input-output delay for the ith input. `nk` is the number of input samples that occur before the input affects the output.

## Properties

Unless otherwise indicated, properties are nontunable, which means you cannot change their values after calling the object. Objects lock when you call them, and the `release` function unlocks them.

If a property is tunable, you can change its value at any time.

Estimated coefficients of the polynomial A(q), returned as a row vector. The elements of this vector appear in order of ascending powers of q^-1.

`A` is initially empty when you create the object and is populated after you run the online parameter estimation.

Estimated coefficients of polynomial B(q), stored as an Nu-by-`max(nb+nk)` matrix, where Nu is the number of inputs.

The ith row of `B` corresponds to the ith input and contains `nk(i)` leading zeros, followed by `nb(i)` estimated parameters, specified in order of ascending powers of q^-1. Ignore zero entries beyond `nk(i)+nb(i)`.

For MISO models, there are as many B(q) polynomials as the number of inputs. `nb(i)` is the order of the ith polynomial Bi(q) plus 1.

`B` is initially empty when you create the object and is populated after you run the online parameter estimation.

Initial coefficients of the polynomial A(q), specified as a row vector. The length of this vector must be `na` + 1, where `na` is the order of A(q). The first element of this vector must be `1`. Specify the coefficients in ascending powers of q^-1.

If the initial guesses are much smaller than the default `InitialParameterCovariance` value of 10000, the software accords less importance to the initial guesses during estimation. In that case, specify a smaller initial parameter covariance.

Tunable: Yes

Initial values for the coefficients of polynomial B(q), specified as an Nu-by-`max(nb+nk)` matrix. Nu is the number of inputs.

For MISO models, there are as many B(q) polynomials as the number of inputs. The ith row of `InitialB` corresponds to the ith input and must contain `nk(i)` zeros, followed by `nb(i)` initial parameter values. Entries beyond `nk(i)+nb(i)` are ignored.

If the initial guesses are much smaller than the default `InitialParameterCovariance`, 10000, the initial guesses are given less importance during estimation. In that case, specify a smaller initial parameter covariance.

Initial values of the output buffer in finite-history estimation, specified as `0` or a (W + `na`)-by-1 vector, where W is equal to `WindowLength`.

Use this property to control the initial behavior of the algorithm.

When you set `InitialOutputs` to `0`, the object populates the buffer with zeros.

If you set the initial buffer to `0` or if the buffer does not contain enough information, you see a warning message during the initial phase of your estimation. The warning usually clears after a few cycles. The number of cycles required to buffer sufficient information depends on the order of your polynomials and your input delays. If the warning persists, evaluate the content of your signals.

Tunable: Yes

#### Dependencies

To enable this property, set `History` to `'Finite'`.

Initial values of the inputs in the finite history window, specified as `0` or as a (W-1+max(`nb`)+max(`nk`))-by-Nu matrix, where W is equal to `WindowLength` and Nu is the number of inputs.

Use this property to control the initial behavior of the algorithm.

When you set `InitialInputs` to `0`, the object populates the buffer with zeros.

If the initial buffer is set to `0` or does not contain enough information, you see a warning message during the initial phase of your estimation. The warning should clear after a few cycles. The number of cycles it takes for sufficient information to be buffered depends upon the order of your polynomials and your input delays. If the warning persists, you should evaluate the content of your signals.

Tunable: Yes

#### Dependencies

To enable this property, set `History` to `'Finite'`.
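As a sketch, the buffer sizing rules for `InitialOutputs` and `InitialInputs` might look like this for a SISO model, assuming a window length of 20 and `na` = 2 (the zero warm-start values are placeholders):

```matlab
na = 2; nb = 2; nk = 1;
W = 20;                                  % WindowLength
obj = recursiveARX([na nb nk], ...
    History="Finite",WindowLength=W);
obj.InitialOutputs = zeros(W+na,1);      % (W + na)-by-1 output buffer
obj.InitialInputs = zeros(W-1+nb+nk,1);  % (W-1+max(nb)+max(nk))-by-Nu input buffer
```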

Estimated covariance P of the parameters, stored as an Np-by-Np symmetric positive-definite matrix, where Np is the number of parameters to be estimated. The software computes P assuming that the residuals (difference between estimated and measured outputs) are white noise and the variance of these residuals is 1.

The interpretation of P depends on your settings for the `History` and `EstimationMethod` properties.

• If you set `History` to `'Infinite'` and `EstimationMethod` to:

• `'ForgettingFactor'` — R2 * P is approximately equal to twice the covariance matrix of the estimated parameters, where R2 is the true variance of the residuals.

• `'KalmanFilter'` — R2 * P is the covariance matrix of the estimated parameters, and R1/R2 is the covariance matrix of the parameter changes. Here, R1 is the covariance matrix that you specify in `ProcessNoiseCovariance`.

• If `History` is `'Finite'` (sliding-window estimation) — R2 * P is the covariance of the estimated parameters. The sliding-window algorithm does not use this covariance in the parameter-estimation process. However, the algorithm does compute the covariance for output so that you can use it for statistical evaluation.

`ParameterCovariance` is initially empty when you create the object and is populated after you run the online parameter estimation.

#### Dependencies

To enable this property, use one of the following configurations:

• Set `History` to `'Finite'`.

• Set `History` to `'Infinite'` and set `EstimationMethod` to either `'ForgettingFactor'` or `'KalmanFilter'`.

Covariance of the initial parameter estimates, specified as one of these values:

• Real positive scalar α — Covariance matrix is an N-by-N diagonal matrix in which α is each diagonal element. N is the number of parameters to be estimated.

• Vector of real positive scalars [α1,...,αN] — Covariance matrix is an N-by-N diagonal matrix in which α1 through αN are the diagonal elements.

• N-by-N symmetric positive-definite matrix.

`InitialParameterCovariance` represents the uncertainty in the initial parameter estimates. For large values of `InitialParameterCovariance`, the software accords less importance to the initial parameter values and more importance to the measured data during the beginning of estimation.

Tunable: Yes

#### Dependency

To enable this property, set `History` to `'Infinite'` and set `EstimationMethod` to either `'ForgettingFactor'` or `'KalmanFilter'`.

Recursive estimation algorithm used for online estimation of model parameters, specified as one of the following:

• `'ForgettingFactor'` — Use forgetting factor algorithm for parameter estimation.

• `'KalmanFilter'` — Use Kalman filter algorithm for parameter estimation.

• `'NormalizedGradient'` — Use normalized gradient algorithm for parameter estimation.

• `'Gradient'` — Use unnormalized gradient algorithm for parameter estimation.

The forgetting factor and Kalman filter algorithms are more computationally intensive than the gradient and normalized gradient methods. However, the former algorithms have better convergence properties. For information about these algorithms, see Recursive Algorithms for Online Parameter Estimation.

#### Dependencies

To enable this property, set `History` to `'Infinite'`.

Forgetting factor λ for parameter estimation, specified as a scalar in the range (0, 1].

Suppose that the system remains approximately constant over T0 samples. You can choose λ to satisfy this condition:

`${T}_{0}=\frac{1}{1-\lambda }$`

• Setting λ to 1 corresponds to "no forgetting" and estimating constant coefficients.

• Setting λ to a value less than 1 implies that past measurements are less significant for parameter estimation and can be "forgotten". Set λ to a value less than 1 to estimate time-varying coefficients.

Typical choices of λ are in the range [0.98, 0.995].

Tunable: Yes

#### Dependencies

To enable this property, set `History` to `'Infinite'` and set `EstimationMethod` to `'ForgettingFactor'`.
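For example, if the system stays approximately constant over T0 = 100 samples, the relationship above gives λ = 1 − 1/T0 = 0.99, which falls in the typical range (the model orders here are illustrative):

```matlab
T0 = 100;              % samples over which the system is roughly constant
lambda = 1 - 1/T0;     % 0.99, within the typical range [0.98, 0.995]
obj = recursiveARX([1 2 1], ...
    EstimationMethod="ForgettingFactor",ForgettingFactor=lambda);
```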

Option to enable or disable parameter estimation, specified as one of the following:

• `true` — The `step` function estimates the parameter values for that time step and updates the parameter values.

• `false` — The `step` function does not update the parameters for that time step and instead outputs the last estimated value. You can use this option when your system enters a mode where the parameter values do not vary with time.

Note

If you set `EnableAdaptation` to `false`, you must still execute the `step` command. Do not skip `step` to keep parameter values constant, because parameter estimation depends on current and past I/O measurements. `step` ensures that past I/O data is stored, even when it does not update the parameters.

Tunable: Yes
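A sketch of toggling adaptation while keeping the data buffer current (the data values and model orders here are placeholders):

```matlab
obj = recursiveARX([1 2 1]);
obj.EnableAdaptation = false;   % hold parameter values constant
[A,B,yhat] = obj(0.5,1.0);      % still call the object so past I/O data is stored
obj.EnableAdaptation = true;    % resume estimation on the next call
```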

Floating point precision of parameters, specified as one of the following values:

• `'double'` — Double-precision floating point

• `'single'` — Single-precision floating point

Setting `DataType` to `'single'` saves memory but leads to loss of precision. Specify `DataType` based on the precision required by the target processor where you will deploy generated code.

You must set `DataType` during object creation using a name-value argument.

Covariance matrix of parameter variations, specified as one of the following:

• Real nonnegative scalar, α — Covariance matrix is an N-by-N diagonal matrix, with α as the diagonal elements.

• Vector of real nonnegative scalars, [α1,...,αN] — Covariance matrix is an N-by-N diagonal matrix, with [α1,...,αN] as the diagonal elements.

• N-by-N symmetric positive semidefinite matrix.

N is the number of parameters to be estimated.

The Kalman filter algorithm treats the parameters as states of a dynamic system and estimates these parameters using a Kalman filter. `ProcessNoiseCovariance` is the covariance of the process noise acting on these parameters. Zero values in the noise covariance matrix correspond to estimating constant coefficients. Values larger than 0 correspond to time-varying parameters. Use large values for rapidly changing parameters. However, the larger values result in noisier parameter estimates.

Tunable: Yes

#### Dependencies

To enable this property, set `History` to `'Infinite'` and set `EstimationMethod` to `'KalmanFilter'`.
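As a sketch, Kalman filter estimation with a small process noise covariance suits slowly varying parameters; the scalar value 0.01 and the model orders here are illustrative:

```matlab
obj = recursiveARX([1 1 1], ...
    EstimationMethod="KalmanFilter", ...
    ProcessNoiseCovariance=0.01);   % larger values track faster parameter changes
```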

Adaptation gain, γ, used in gradient recursive estimation algorithms, specified as a positive scalar.

Specify a large value for `AdaptationGain` when your measurements have a high signal-to-noise ratio.

Tunable: Yes

#### Dependencies

To enable this property, set `History` to `'Infinite'` and set `EstimationMethod` to either `'Gradient'` or `'NormalizedGradient'`.

Bias in adaptation gain scaling used in the `'NormalizedGradient'` method, specified as a nonnegative scalar.

The normalized gradient algorithm divides the adaptation gain at each step by the square of the two-norm of the gradient vector. If the gradient is close to zero, this division can cause jumps in the estimated parameters. `NormalizationBias` is the term introduced in the denominator to prevent such jumps. If you observe jumps in estimated parameters, increase `NormalizationBias`.

Tunable: Yes

#### Dependencies

To enable this property, set `History` to `'Infinite'` and set `EstimationMethod` to `'NormalizedGradient'`.

Data history type, which defines the type of recursive algorithm to use, specified as one of the following:

• `'Infinite'` — Use an algorithm that aims to minimize the error between the observed and predicted outputs for all time steps from the beginning of the simulation.

• `'Finite'` — Use an algorithm that aims to minimize the error between the observed and predicted outputs for a finite number of past time steps.

Algorithms with infinite history aim to produce parameter estimates that explain all data since the start of the simulation. These algorithms still use a fixed amount of memory that does not grow over time. To select an infinite-history algorithm, use `EstimationMethod`.

Algorithms with finite history aim to produce parameter estimates that explain only a finite number of past data samples. This method is also called sliding-window estimation. The object provides one finite-history algorithm. To define the window size, specify the `WindowLength` property.

For more information on recursive estimation methods, see Recursive Algorithms for Online Parameter Estimation.

You must set `History` during object creation using a name-value argument.

Window size for finite-history estimation, specified as a positive integer indicating the number of samples.

Choose a window size that balances estimation performance with computational and memory burden. Sizing factors include the number and time variance of the parameters in your model. `WindowLength` must be greater than or equal to the number of estimated parameters.

A suitable window length is independent of whether you use sample-based or frame-based input processing (see `InputProcessing`). However, when you use frame-based processing, the window length must be greater than or equal to the number of samples (time steps) in a frame.

You must set `WindowLength` during object creation using a name-value argument.

#### Dependencies

To enable this property, set `History` to `'Finite'`.

Input processing method, specified as one of the following:

• `'Sample-based'` — Process streamed signals one sample at a time.

• `'Frame-based'` — Process streamed signals in frames that contain samples from multiple time steps. Many machine sensor interfaces package multiple samples and transmit these samples together in frames. `'Frame-based'` processing allows you to input this data directly without having to first unpack it.

The `InputProcessing` property impacts the dimensions for the input and output signals when using the recursive estimator object.

• `Sample-based`

• `y` and `estimatedOutput` are scalars.

• `u` is a 1-by-Nu vector, where Nu is the number of inputs.

• `Frame-based` with M samples per frame

• `y` and `estimatedOutput` are M-by-1 vectors.

• `u` is an M-by-Nu matrix.

You must set `InputProcessing` during object creation using a name-value argument.
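The dimension rules above can be sketched for frame-based processing with M = 10 samples per frame and one input (the random data here is placeholder data, not from a real system):

```matlab
M = 10;
obj = recursiveARX([2 2 1],InputProcessing="Frame-based");
yFrame = randn(M,1);                    % M-by-1 output frame
uFrame = randn(M,1);                    % M-by-Nu input frame, Nu = 1
[A,B,yhatFrame] = obj(yFrame,uFrame);   % yhatFrame is M-by-1
```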

## Usage

### Syntax

``[A,B,estimatedOutput] = arxobj(y,u)``

### Description

`[A,B,estimatedOutput] = arxobj(y,u)` updates and returns the coefficients and output of `recursiveARX` model `arxobj` based on real-time output data `y` and input data `u`.

### Input Arguments

Output data acquired in real time, specified as a real scalar for sample-based input processing, or as an M-by-1 vector of real values for frame-based processing with M samples per frame.

Input data acquired in real time, specified as one of the following:

• Real scalar for a SISO ARX model

• Column vector of real values of length Nu for a MISO ARX model with Nu inputs

### Output Arguments

Estimated output, returned as a real scalar. The output is estimated using input-output estimation data, current parameter values, and the recursive estimation algorithm specified in the `recursiveARX` System object.

## Object Functions

To use an object function, specify the System object as the first input argument. For example, to release system resources of a System object named `obj`, use this syntax:

`release(obj)`

• `step` — Run System object algorithm

• `release` — Release resources and allow changes to System object property values and input characteristics

• `reset` — Reset internal states of System object

• `clone` — Create duplicate System object

• `isLocked` — Determine if System object is in use

## Examples

Create a System object for online parameter estimation of a SISO ARX model.

`obj = recursiveARX;`

The ARX model has a default structure with polynomials of order 1 and initial polynomial coefficient values, `eps`.

Load the estimation data. In this example, use a static data set for illustration.

```
load iddata1 z1;
output = z1.y;
input = z1.u;
```

Estimate ARX model parameters online using `step`.

```
for i = 1:numel(input)
    [A,B,EstimatedOutput] = step(obj,output(i),input(i));
end
```

View the current estimated values of polynomial `B` coefficients.

`obj.B`
```
ans = 1×2

         0    0.7974
```

View the current covariance estimate of the parameters.

`obj.ParameterCovariance`
```
ans = 2×2

    0.0002    0.0001
    0.0001    0.0034
```

View the current estimated output.

`EstimatedOutput`
```
EstimatedOutput = -4.7766
```

Specify ARX model orders and delays.

```
na = 1;
nb = 2;
nk = 1;
```

Create a System object for online estimation of SISO ARX model with known initial polynomial coefficients.

```
A0 = [1 0.5];
B0 = [0 1 1];
obj = recursiveARX([na nb nk],A0,B0);
```

Specify the initial parameter covariance.

`obj.InitialParameterCovariance = 0.1;`

`InitialParameterCovariance` represents the uncertainty in your guess for the initial parameters. Typically, the default `InitialParameterCovariance` (10000) is too large relative to the parameter values. This results in initial guesses being given less importance during estimation. If you have confidence in the initial parameter guesses, specify a smaller initial parameter covariance.

Specify orders and delays for ARX model with two inputs and one output.

```
na = 1;
nb = [2 1];
nk = [1 3];
```

`nb` and `nk` are specified as row vectors with length equal to the number of inputs, Nu.

Specify initial polynomial coefficients.

```
A0 = [1 0.5];
B0 = [0 1 1 0; 0 0 0 0.8];
```

`B0` has Nu rows and `max(nb+nk)` columns. The ith row corresponds to the ith input and contains `nk(i)` zeros, followed by `nb(i)` initial values. Values after `nb(i)+nk(i)` are ignored.

Create a System object for online estimation of ARX model with known initial polynomial coefficients.

`obj = recursiveARX([na nb nk],A0,B0);`

Create a System object that uses the normalized gradient algorithm for online parameter estimation of an ARX model.

`obj = recursiveARX([1 2 1],'EstimationMethod','NormalizedGradient');`