Neural ODE for dynamic systems with input signals

Hi, community!
MathWorks provides a nice example here for modeling dynamic systems with a neural ODE.
Is it possible to consider input signals in training? That is, to define the differential equation to be
dx/dt = f(t, x, θ, e(t)),
where e(t) is the input signal.
However, dlode45 does not allow the ODE function to have more than three inputs.
So is there any other possible approach to incorporate the input signal?
Thanks a lot!

 Accepted Answer

Hi Bowei,
You should be able to create a new ODE function that has only three inputs as required. Let me show a few cases.
Case 1 - dx/dt = f(t, x, θ, e(t))
In this case you can define g(t, x, θ) = f(t, x, θ, e(t)). Assuming you have f as a function handle, you can define g in code with:
g = @(t,x,theta) f(t,x,theta,e(t));
Then solve using g in dlode45.
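As a minimal end-to-end sketch of Case 1 (the linear dynamics theta*x and the forcing e(t) = sin(t) here are illustrative assumptions, not part of the original question):

```matlab
% Case 1 sketch: dx/dt = f(t, x, theta, e(t)) with an assumed e(t) = sin(t)
e = @(t) sin(t);                       % external input signal (assumed form)
f = @(t,x,theta,et) theta.*x + et;     % example 4-input dynamics (assumed form)
g = @(t,x,theta) f(t,x,theta,e(t));    % wrap into the 3-input form dlode45 expects
x0 = dlarray(randn());
theta = dlarray(randn());
tspan = [0, 1];
x = dlode45(g, tspan, x0, theta, DataFormat="CB");
```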
Case 2 - dx/dt = f(t, x, θ) + e(t)
This is a special case of Case 1:
g = @(t,x,theta) f(t,x,theta) + e(t);
Call dlode45 with g.
Case 3 - dx/dt = f(t, x, θ, e(t, i)) for a hyperparameter i
In this case you have an extra hyperparameter i, for which you just have to select a specific value. For example, let e(t, i) = cos(i·t) and f(t, x, A, i) = A·x + e(t, i). You could write this in code as:
e = @(t,i) cos(i*t);
f = @(t,x,A,i) A*x + e(t,i);
x0 = dlarray(randn());
tspan = [0,1];
A = dlarray(randn());
i = 3;
x = dlode45(@(t,x,A) f(t,x,A,i), tspan, x0, A, DataFormat="CB");
Note that in this case you can loop over the values you want for i.
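For example, looping over i could look like this (reusing e, f, x0, tspan, and A from the snippet above):

```matlab
% Solve the ODE once per value of the hyperparameter i.
solutions = cell(1,3);
for i = 1:3
    solutions{i} = dlode45(@(t,x,A) f(t,x,A,i), tspan, x0, A, DataFormat="CB");
end
```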
Hope that helps,
Ben

7 Comments

Thanks Ben, this works well!
Hello Ben,
Would you please explain how to model ODEs with external input signals using the neuralODELayer in reference to the new functionalities introduced in the recent release (R2023a)?
Thanks,
Shubham
Firstly, neuralODELayer is only available from R2023b. In R2023a you could use a custom layer that calls dlode45.
To use neuralODELayer with an external input signal, you need to implement that external input signal in the dlnetwork that is passed to neuralODELayer. You create layer = neuralODELayer(odenet, tspan); if odenet has one input, this corresponds to integrating the ODE dx/dt = f(x). If odenet has two inputs, it is dx/dt = f(t, x). So if you're modelling an ODE of the form dx/dt = f(t, x, u(t)), you need to implement u(t) within the odenet.
As an example, here's how you could create an odenet that implements u(t) = sin(t) internally.
x0 = dlarray(1,"CB");
t0 = dlarray(0,"CB");
sinLayer = functionLayer(@sin, Acceleratable=true);
hiddenSize = 10;
odenet = [
    sinLayer
    concatenationLayer(1,2)
    fullyConnectedLayer(hiddenSize)
    tanhLayer
    fullyConnectedLayer(1)];
odenet = dlnetwork(odenet,t0,x0);
tspan = [0,0.1];
odeLayer = neuralODELayer(odenet,tspan);
odeLayer =
  NeuralODELayer with properties:

                 Name: ''
         TimeInterval: [0 0.1000]
         GradientMode: 'direct'
    RelativeTolerance: 1.0000e-03
    AbsoluteTolerance: 1.0000e-06

   Learnable Parameters
              Network: [1×1 dlnetwork]

   State Parameters
    No properties.

  Use properties method to see a list of all properties.
Here the odenet's first input is t, which becomes sin(t) via the sinLayer. Then I concatenate with the second input to odenet, x, to create [sin(t); x], and pass that through standard neural network layers.
A more difficult case is when you don't know a functional form for u(t), but only have samples (ti, ui). In this case, instead of sinLayer = functionLayer(@sin) you should use a layer that interpolates from the samples (ti, ui). One way to do this is with interp1. Here's some code demonstrating that; I'll use u(t) = sin(t) again just to demonstrate.
% create toy data
ti = dlarray(linspace(0,1,10));
ui = sin(ti); % in practice you aren't aware of the functional form of u(t).
interpLayer = functionLayer(@(t) dlarray(interp1(ti,ui,t),"CB"), Acceleratable=true);
Next you use interpLayer just like sinLayer above. Part of what this does is store the sample data ti, ui on the interpLayer (actually on the function_handle held by that layer). A more flexible approach would be to implement a custom layer that performs this interpolation.
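As a sketch of how the pieces fit together, interpLayer drops into the same odenet structure used for sinLayer above (the architecture and hiddenSize simply mirror the earlier example):

```matlab
% Same odenet architecture as before, with interpLayer supplying u(t) from samples.
hiddenSize = 10;
odenet = [
    interpLayer
    concatenationLayer(1,2)
    fullyConnectedLayer(hiddenSize)
    tanhLayer
    fullyConnectedLayer(1)];
odenet = dlnetwork(odenet, dlarray(0,"CB"), dlarray(1,"CB"));
odeLayer = neuralODELayer(odenet, [0, 0.1]);
```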
If you have multiple different external signals corresponding to different observations/batch elements, you may need a more intricate approach, since you will have to align each u(t) with the corresponding batch element input to odenet.
Hello Ben,
Thank you for providing this useful explanation however there are some aspects in the application of NeuralODELayer I do not completely understand and I would appreciate your help with them.
The examples illustrated above create an odeLayer which must be included within a larger neural network in order to be applied. For instance:
Layers = [
    featureInputLayer(2,'Name','Input')
    neuralODELayer(odeLayer, timesteps, 'GradientMode', 'adjoint', 'Name', 'neuralODE')
    ];
net = dlnetwork(Layers);
And to train the model we need:
net = trainnet(net_input, targets, net, "huber", options);
Here is where my doubt lies: to train the model, net_input and targets must have the same dimensions. What would happen if u(t) and the targets match in their length and time steps, but we only have a single value of x0 (the initialization of x)?
Additionally, is there a way to reproduce the example provided by MathWorks using neuralODELayer? In the aforementioned example the dlode45 function is applied and its inputs are: timesteps, the initial value of y (y0), and theta (the weights and biases of the NN). If I intend to use neuralODELayer and trainnet, the dimensions of the inputs and targets do not match, because y0 is obtained from the targets.
Ben on 11 Dec 2024 (edited 11 Dec 2024)
On the second question regarding the example, neuralODELayer can be used here as follows:
% https://uk.mathworks.com/help/deeplearning/ug/dynamical-system-modeling-using-neural-ode.html
% using neuralODELayer
% Generate training data
x0 = [2; 0];
A = [-0.1 -1; 1 -0.1];
trueModel = @(t,y) A*y;
numTimeSteps = 2000;
T = 15;
odeOptions = odeset(RelTol=1.e-7);
t = linspace(0, T, numTimeSteps);
[~, xTrain] = ode45(trueModel, t, x0, odeOptions);
xTrain = xTrain';
% Rearrange the single solution [x(1),x(2),...,x(end)] into subsequences
% x(t) and [x(t+1), ..., x(t+40)].
neuralOdeTimesteps = 40;
dt = t(2);
timesteps = (0:neuralOdeTimesteps)*dt;
input = xTrain(:,1:end-neuralOdeTimesteps);
numObs = numTimeSteps - neuralOdeTimesteps;
targets = cell(numObs, 1);
for i = 1:numObs
    targets{i} = xTrain(:,i + (1:neuralOdeTimesteps));
end
% Design neural ODE.
stateSize = size(xTrain,1);
hiddenSize = 20;
odeNet = [
    fullyConnectedLayer(hiddenSize)
    tanhLayer
    fullyConnectedLayer(stateSize)];
odeNet = dlnetwork(odeNet,Initialize=false);
odeLayer = neuralODELayer(odeNet,timesteps,GradientMode="adjoint");
net = dlnetwork([featureInputLayer(stateSize); odeLayer]);
% Train neural ODE.
opts = trainingOptions("adam", ...
    Plots="training-progress", ...
    ExecutionEnvironment="cpu", ...
    InputDataFormats="CB", ...
    TargetDataFormats="CTB");
trainednet = trainnet(input,targets,net,"l2loss",opts);
% Use neural ODE to predict on a longer time interval.
% This requires extracting the layer, setting the new time interval, and
% calling replaceLayer to put the layer back into the network with the new
% time interval.
% Alternatively you can extract odeNet = trainednet.Layers(2).Network and use
% dlode45(odeNet, t, x0)
layer = trainednet.Layers(2);
layer.TimeInterval = t;
inferencenet = replaceLayer(trainednet,layer.Name,layer);
inferencenet = initialize(inferencenet);
x0Pred1 = sqrt([2,2]);
xPred1 = predict(inferencenet,x0Pred1);
plot(xPred1(:,1),xPred1(:,2))
Regarding trainnet: the inputs and targets don't have to have the same dimensions. A neuralODELayer(net, ts) takes an initial condition as input, corresponding to the state at time ts(1), and outputs the solution as a sequence (with a T dimension) at times ts(2:end).
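As a concrete shape check (this reuses trainednet and stateSize = 2 from the code above; the sizes in the comments follow from the layer's stated input/output formats):

```matlab
% neuralODELayer(odeNet, ts) maps the state at ts(1) to the states at ts(2:end):
%   input : stateSize-by-batchSize       (format "CB")
%   output: stateSize-by-batchSize-by-T  (format "CTB"), T = numel(ts) - 1
x0 = dlarray(randn(2,1), "CB");    % a single initial condition
xSeq = predict(trainednet, x0);    % trajectory over timesteps(2:end)
size(xSeq)
```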
Hope that helps.
Hi again @Ben,
After reading your suggestions regarding the application of neuralODELayer with external input signals, I created a custom layer to work with multiple input signals. The approach I followed works when the input signals are not split into minibatches manually, like the variable targets in the example you provided:
for i = 1:numObs
targets{i} = xTrain(:,i + (1:neuralOdeTimesteps));
end
If I split the input signals in a similar fashion, the script does not run. I posted a new question here providing more details.
My question is whether splitting the targets and the input signals manually (as in your example) should do the same as setting the minibatches within trainingOptions before using trainnet to train the model.
I appreciate your help in advance
Sergio


More Answers (1)

Hi Bowei,
Thanks for the feedback on our neural ODE example! For your request, can you elaborate on what type of signal e(t) might be, and what use cases you're looking to apply neural ODEs to?
David

1 Comment

Thanks David!
I was thinking about training a neural ODE to predict the states of a dynamic system given some arbitrary input signal, such as
dx/dt = f(x(t), e(t)),
where x is the system state vector, a function of time t; the input signal e(t) is also a function of time t.
For example, e(t) can be a simple deterministic signal such as a sinusoid, or a random process.
The example here corresponds to the case of e(t) = 0 and trains a neural ODE for a set of initial conditions.
Is it possible to train a neural ODE for a fixed initial condition, but for a set of input signals e(t)?



Release: R2021b
Asked on 24 Jan 2022
Edited on 29 Apr 2025
