Error using trainnet (line 46)

Bahadir on 15 Oct 2025 at 8:58
Commented: Bahadir on 17 Oct 2025 at 20:46
Dear sir,
My XTrain is a 48941x1 cell array and TTrain a 48941x1 categorical, as shown below.
Why do I get this error?
Error using trainnet (line 46)
Number of observations in predictors (48941) and targets (1) must match. Check that
the data and network are consistent.
layers = [
    sequenceInputLayer([30 30 1],'Name','input') % 2-D image sequence input: InputSize is [h w c] = [30 30 1]
    convolution2dLayer(5,8,'Stride',1,'Padding','same','WeightsInitializer','he','Name','conv','DilationFactor',1)
    batchNormalizationLayer('Name','bn') % normalizes each mini-batch per channel; placed between conv and ReLU layers to speed up training
    reluLayer('Name','Relu') % ReLU: identity on positive inputs, zero on negative inputs
    convolution2dLayer(5,8,'Stride',2,'Padding','same','WeightsInitializer','he','Name','conv','DilationFactor',1)
    batchNormalizationLayer('Name','bn')
    reluLayer('Name','Relu')
    convolution2dLayer(5,8,'Stride',1,'Padding','same','WeightsInitializer','he','Name','conv','DilationFactor',1)
    batchNormalizationLayer('Name','bn')
    reluLayer('Name','Relu')
    convolution2dLayer(5,16,'Stride',2,'Padding','same','WeightsInitializer','he','Name','conv','DilationFactor',1)
    batchNormalizationLayer('Name','bn')
    reluLayer('Name','Relu')
    convolution2dLayer(5,16,'Stride',1,'Padding','same','WeightsInitializer','he','Name','conv','DilationFactor',1)
    batchNormalizationLayer('Name','bn')
    reluLayer('Name','Relu')
    convolution2dLayer(5,32,'Stride',2,'Padding','same','WeightsInitializer','he','Name','conv','DilationFactor',1)
    batchNormalizationLayer('Name','bn')
    reluLayer('Name','Relu')
    globalAveragePooling2dLayer(Name="gap1")
    fullyConnectedLayer(7)
    softmaxLayer];
options = trainingOptions("adam", ...
MaxEpochs=4, ...
InitialLearnRate=0.002,...
MiniBatchSize=128,...
GradientThreshold=1, ...
LearnRateSchedule="piecewise", ...
LearnRateDropPeriod=20, ...
LearnRateDropFactor=0.8, ...
L2Regularization=1e-3,...
Shuffle="every-epoch", ...
Plots="training-progress", ...
ObjectiveMetricName="loss", ...
OutputNetwork="best-validation", ...
ValidationPatience=5, ... % stop training if the validation loss has not improved in five validations
ValidationFrequency=50, ...
Verbose=false, ...
Metrics="accuracy", ...
ValidationData={XValidation,TValidation});
net = trainnet(XTrain,TTrain,layers,"crossentropy",options);
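For reference, a quick shape check confirms the counts above (it assumes each cell holds one 30x30 grayscale image, matching the input layer):
whos XTrain TTrain   % XTrain: 48941x1 cell, TTrain: 48941x1 categorical
size(XTrain{1})      % expected: 30 30 (one image per cell)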
  3 Comments
Bahadir on 15 Oct 2025 at 20:44
Dear @Matt J
The XTrain (48941x1 cell) file is 355 MB, so I have attached XTrain (700x1 cell) and TTrain (700x1 categorical) containing the first 700 rows.
I get the same error:
Error using trainnet (line 46)
Number of observations in predictors (700) and targets (1) must match. Check that the data and
network are consistent.
Walter Roberson on 15 Oct 2025 at 21:28
In order to test, we would need corresponding XValidation and TValidation.
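(For a quick test they can be faked from the attached training data; a sketch, with an arbitrary split:)
idx = randperm(numel(XTrain),100);   % arbitrary subset, for testing only
XValidation = XTrain(idx);
TValidation = TTrain(idx);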


Accepted Answer

Matt J on 16 Oct 2025 at 1:16
Edited: Matt J on 16 Oct 2025 at 2:15
It appears that if your XTrain is in cell array form (so trainnet counts one observation per cell), you need to put your TTrain data in cell form as well, so that each target is likewise counted as one observation:
load('attachedData.mat'); clear ans; whos %Inventory
  Name         Size      Bytes    Class                           Attributes

  TTrain      200x1         1154  categorical
  XTrain      200x1      1464000  cell
  layers       22x1        23222  nnet.cnn.layer.Layer
  options       1x1      1442212  nnet.cnn.TrainingOptionsADAM
TTrain=num2cell(TTrain);
options.Plots='none'; %Online environment doesn't support plots
options.Verbose=true;
options.ValidationData={XTrain,TTrain}; %Fake validation data
testPrediction=minibatchpredict(dlnetwork(layers), XTrain(1:3)) %test
testPrediction = 1×7×3 single array
testPrediction(:,:,1) =
    0.0716    0.1252    0.2103    0.1289    0.1714    0.1264    0.1662
testPrediction(:,:,2) =
    0.0835    0.1331    0.1854    0.1318    0.1799    0.1212    0.1652
testPrediction(:,:,3) =
    0.0935    0.1419    0.1846    0.1289    0.1629    0.1257    0.1625
net = trainnet(XTrain,TTrain,layers,"crossentropy",options);
    Iteration    Epoch    TimeElapsed    LearnRate    TrainingLoss    ValidationLoss    TrainingAccuracy    ValidationAccuracy
    _________    _____    ___________    _________    ____________    ______________    ________________    __________________
            0        0       00:00:02        0.002                            2.0279                                        15
            1        1       00:00:02        0.002          2.0175                                  22.656
            4        4       00:00:04        0.002          1.6088            1.8219              52.344                  44.5
Training stopped: Max epochs completed
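With this fix, real validation data would need the same conversion; a minimal sketch, assuming XValidation and TValidation exist as in the original call:
options.ValidationData = {XValidation, num2cell(TValidation)}; % targets in cell form, matching TTrain
net = trainnet(XTrain,TTrain,layers,"crossentropy",options);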
  2 Comments
Matt J on 16 Oct 2025 at 2:15
Edited: Matt J on 16 Oct 2025 at 13:32
You are using a sequenceInputLayer, but your training inputs appear to just be 30x30 images. An imageInputLayer might be more appropriate...
load('attachedData.mat');
XTrain=cat(4,XTrain{:});
layers(1)=imageInputLayer([30,30,1],Name="input");
options.Plots='none'; %Online environment doesn't support plots
options.Verbose=true;
options.ValidationData={XTrain,TTrain}; %Fake validation data
testPrediction=minibatchpredict(dlnetwork(layers), XTrain(:,:,:,1:3)) %test
testPrediction = 3×7
    0.2280    0.2614    0.1365    0.0465    0.0868    0.0722    0.1687
    0.2249    0.2505    0.1357    0.0480    0.1050    0.0783    0.1576
    0.2297    0.2142    0.1242    0.0550    0.1048    0.0976    0.1744
net = trainnet(XTrain,TTrain,layers,"crossentropy",options);
    Iteration    Epoch    TimeElapsed    LearnRate    TrainingLoss    ValidationLoss    TrainingAccuracy    ValidationAccuracy
    _________    _____    ___________    _________    ____________    ______________    ________________    __________________
            0        0       00:00:02        0.002                            2.0273                                        20
            1        1       00:00:02        0.002          1.9511                                  28.906
            4        4       00:00:03        0.002          1.4855            1.7872              73.438                    28
Training stopped: Max epochs completed
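Putting both pieces together for the full dataset, a minimal sketch (it assumes XValidation is a cell array of 30x30 images and TValidation a categorical vector, prepared like the training data):
XTrain4D = cat(4,XTrain{:});           % stack images into a 30x30x1xN numeric array
XValidation4D = cat(4,XValidation{:}); % same layout for validation (assumed variables)
layers(1) = imageInputLayer([30 30 1],Name="input"); % replace the sequence input
options.ValidationData = {XValidation4D,TValidation};
net = trainnet(XTrain4D,TTrain,layers,"crossentropy",options);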
Bahadir on 17 Oct 2025 at 20:46
Thanks, it is OK now.


More Answers (0)

Release

R2024a
