Using tapped delays to train an artificial neural network for dynamic system modeling

Hello, I'm trying to use an artificial neural network to create a model of the following system:
From what I have gathered so far, I should first record the response of the system to an arbitrary input and use that data to train my network. After discretisation, this second-order difference equation () would describe the dynamic behaviour of the system.
Now, what is the right way to implement these delays in my neural network? Should I just feed the delayed outputs and inputs of the plant to the network as inputs, or is there an easier way to achieve this?
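For concreteness, here is a rough sketch of what I mean by feeding delayed samples directly to a static network, where u and y are the recorded input/output row vectors and the two delays match the second-order difference equation (the network size is chosen arbitrarily):
N = numel(y);
% regressors [y(k-1); y(k-2); u(k-1); u(k-2)] for k = 3..N
X = [y(2:N-1); y(1:N-2); u(2:N-1); u(1:N-2)];
T = y(3:N);                    % target: y(k)
net = feedforwardnet(10);      % static net with 10 hidden neurons
net = train(net, X, T);
yhat = net(X);                 % one-step-ahead prediction on training data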

Answers (1)

Harsh on 11 Jan 2025
Hi Alirea,
You can use a time delay neural network (TDNN), which is specifically designed to handle temporal sequences: at each time step it feeds a fixed-size window of past input samples into the network through a tapped delay line, which allows the network to learn dependencies across time.
You can use the “timedelaynet” function in MATLAB for this functionality. Please refer to the following documentation to understand how to implement the function in MATLAB - https://www.mathworks.com/help/releases/R2022b/deeplearning/ref/timedelaynet.html
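A minimal sketch of how this might look, assuming u and y are row vectors holding the recorded plant input and output, and using input delays 1:2 to match a second-order difference equation:
net = timedelaynet(1:2, 10);       % tapped delays 1 and 2, 10 hidden neurons
U = con2seq(u);                    % convert vectors to cell-array sequences
Y = con2seq(y);
[Xs, Xi, Ai, Ts] = preparets(net, U, Y);  % shift data for initial delay states
net = train(net, Xs, Ts, Xi, Ai);
Yhat = net(Xs, Xi, Ai);            % response of the trained network
Since your difference equation also contains delayed plant outputs, a NARX network may fit even more naturally: “narxnet” additionally feeds the delayed outputs back as regressors. A sketch under the same assumptions:
net = narxnet(1:2, 1:2, 10);       % input delays, feedback (output) delays, hidden size
[Xs, Xi, Ai, Ts] = preparets(net, U, {}, Y);
net = train(net, Xs, Ts, Xi, Ai);
netc = closeloop(net);             % closed-loop form for multi-step simulation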
