LSTM architecture for a sequence-to-sequence model
I have built an LSTM sequence-to-sequence model which I would like to convert to ONNX and use in a Python script.
My input data is as follows, 11307 sequences of 24 time steps and 6 features:
XTrain = 11307x1 cell array
{6x24 double}
YTrain = 11307x1 cell array
{1x24 categorical}
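For reference, a rough sketch of how data in this shape could be assembled (the loop and the random values below are purely illustrative, not my actual data):
numObservations = 11307;
numFeatures = 6;
sequenceLength = 24;
XTrain = cell(numObservations,1);
YTrain = cell(numObservations,1);
for i = 1:numObservations
    XTrain{i} = rand(numFeatures,sequenceLength);            % 6x24 double
    YTrain{i} = categorical(randi([0 1],1,sequenceLength));  % 1x24 categorical
end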
The layers are:
numFeatures = 6
numHiddenUnits = 50
numClasses = 2
layers = [ ...
sequenceInputLayer(numFeatures)
lstmLayer(numHiddenUnits,'OutputMode','sequence')
fullyConnectedLayer(numClasses)
softmaxLayer
classificationLayer];
netlstm=trainNetwork(XTrain,YTrain,layers,options)
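I have omitted the options definition above; a minimal sketch of a typical trainingOptions call (the solver and values here are placeholders, not my actual settings):
options = trainingOptions('adam', ...
    'MaxEpochs',30, ...
    'MiniBatchSize',64, ...
    'Shuffle','every-epoch', ...
    'Plots','training-progress');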
When I run the model on test data in MATLAB, with numFeatures = 6 and sequence_length = 24, the results are good.
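For example, the MATLAB test-time call is roughly of this form (XTest and YTest are assumed to be cell arrays with the same 6x24 and 1x24 layout as the training data):
YPred = classify(netlstm,XTest);
% per-time-step accuracy over all test sequences
acc = mean(cellfun(@(p,t) mean(p == t),YPred,YTest));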
Exporting netlstm to ONNX using
exportONNXNetwork(netlstm, 'file.onnx')
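If a particular opset is required by the target runtime, exportONNXNetwork also accepts an 'OpsetVersion' name-value argument; the value 9 below is just an example, and supported values depend on the converter release:
exportONNXNetwork(netlstm,'file.onnx','OpsetVersion',9)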
Viewing file.onnx in a model checker shows that initial_h is 1x1x50, not 1x24x50 as expected.
Loading the ONNX model in Python, I can run the model with a 1x1x6 input, but not with the full sequence of 24 time steps.
Why is the sequence length missing from initial_h after exporting to ONNX?
The example Sequence-to-Sequence Classification Using Deep Learning - MATLAB & Simulink - MathWorks Nordic
shows the same issue when the network is exported after running the example.