How to change the input size of Input layer in actor-critic
I am trying to write reinforcement learning code using the Reinforcement Learning Toolbox.
I want to apply an actor-critic method to a simple pendulum that throws a ball.
% DDPG agent training
% setting environment
env = Environment;
obsInfo = env.getObservationInfo;
actInfo = env.getActionInfo;
numObs = obsInfo.Dimension(1); % numObs = 2
numAct = numel(actInfo); % numAct = 1
% CRITIC
statePath =[
featureInputLayer(numObs, 'Normalization','none','Name','observation')
fullyConnectedLayer(128, 'Name','CriticStateFC1')
reluLayer('Name','CriticRelu1')
fullyConnectedLayer(200,'Name','CriticStateFC2')];
actionPath = [
featureInputLayer(numAct,'Normalization','none','Name','action')
fullyConnectedLayer(200,'Name','CriticActionFC1','BiasLearnRateFactor', 0)];
commonPath = [
additionLayer(2,'Name','add')
reluLayer('Name','CriticCommonRelu')
fullyConnectedLayer(1,'Name','CriticOutput')];
criticNetwork = layerGraph(statePath);
criticNetwork = addLayers(criticNetwork, actionPath);
criticNetwork = addLayers(criticNetwork, commonPath);
criticNetwork = connectLayers(criticNetwork,'CriticStateFC2','add/in1');
criticNetwork = connectLayers(criticNetwork,'CriticStateFC1','add/in2');
criticOptions = rlRepresentationOptions('LearnRate',1e-03,'GradientThreshold',1);
critic = rlQValueRepresentation(criticNetwork,obsInfo,actInfo,'Observation',{'observation'},'Action',{'action'},criticOptions);
% ACTOR
actorNetwork = [
featureInputLayer(numObs,'Normalization','none','Name','observation')
fullyConnectedLayer(128,'Name','ActorFC1')
reluLayer('Name','ActorRelu1')
fullyConnectedLayer(200,'Name','ActorFC2')
reluLayer('Name','ActorRelu2')
fullyConnectedLayer(1,'Name','ActorFC3')
tanhLayer('Name','ActorTanh1')
scalingLayer('Name','ActorScaling','Scale',max(actInfo.UpperLimit))];
actorOptions = rlRepresentationOptions('LearnRate',5e-04,'GradientThreshold',1);
actor = rlDeterministicActorRepresentation(actorNetwork,obsInfo,actInfo,'Observation',{'observation'},'Action',{'ActorScaling'},actorOptions);
% agent options
agentOptions = rlDDPGAgentOptions(...
'SampleTime',env.t_span,...
'TargetSmoothFactor',1e-3,...
'ExperienceBufferLength',1e6,...
'MiniBatchSize',128);
% Noise
agentOptions.NoiseOptions.Variance = 0.4;
agentOptions.NoiseOptions.VarianceDecayRate = 1e-5;
agent = rlDDPGAgent(actor,critic,agentOptions);
% training options
maxepisodes = 20000;
maxsteps = 1e8;
trainingOptions = rlTrainingOptions(...
'MaxEpisodes',maxepisodes,...
'MaxStepsPerEpisode',maxsteps,...
'StopOnError','on',...
'Verbose',false,...
'Plots','training-progress',...
'StopTrainingCriteria','AverageReward',...
'StopTrainingValue',Inf,...
'ScoreAveragingWindowLength',10);
% drawing environment
plot(env);
% training agent
trainingStats = train(agent,env,trainingOptions);
% simulation
simOptions = rlSimulationOptions('MaxSteps',maxsteps);
experience = sim(env,agent,simOptions);
However, an error occurred about the input size. There are two inputs, angle and angular velocity, but the input size of the network layers does not match. Does anyone know how to fix this mismatch?
Error: rl.representation.model.rlLayerModel/createInternalNeuralNetwork (line 670)
Invalid network.
Error: rl.representation.model.rlLayerModel/buildNetwork (line 650)
[this.Assembler, this.AnalyzedLayers, this.NetworkInfo] = createInternalNeuralNetwork(this);
Error: rl.representation.model.rlLayerModel (line 57)
this = buildNetwork(this);
Error: rl.util.createInternalModelFactory (line 18)
Model = rl.representation.model.rlLayerModel(Model, UseDevice, ObservationNames, ActionNames);
Error: rlQValueRepresentation (line 116)
Model = rl.util.createInternalModelFactory(Model, Options, ObservationNames, ActionNames, InputSize, OutputSize);
Error: train_agent (line 38)
critic = rlQValueRepresentation(criticNetwork, obsInfo, actInfo, 'Observation', {'observation'}, 'Action', {'action'}, criticOptions);
Cause:
Network: Multiple graph components. The layer graph must consist of a single connected component.
Initial layer of each isolated component:
Layer 'observation' (8-layer component)
Layer 'action' (3-layer component)
Layer 'add': Input sizes do not match. The input size for this layer is different from the required input size.
Inputs to this layer:
From layer 'CriticStateFC2' (size 200 (C) x 1 (B))
From layer 'CriticStateFC1' (size 128 (C) x 1 (B))
Answers (1)
Ayush Aniket
on 20 Jun 2024
The error message indicates that there is a mismatch in the input sizes of the layers within your network, specifically at the addition layer where the state and action paths converge. Additionally, the error points out that the network consists of multiple disconnected components, which is not allowed; the network must be a single, connected graph.
Both problems come from the same incorrect connectLayers call: 'add/in2' is wired to 'CriticStateFC1' (output size 128) instead of 'CriticActionFC1' (output size 200, matching 'CriticStateFC2'), which also leaves the action path disconnected from the rest of the graph. You should modify your code as shown below:
criticNetwork = connectLayers(criticNetwork,'CriticStateFC2','add/in1');
criticNetwork = connectLayers(criticNetwork,'CriticActionFC1','add/in2'); % Corrected connection
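With this change, both inputs of the 'add' layer receive size-200 vectors and the graph becomes a single connected component. As a quick sanity check before creating the representation, you can rebuild the graph and inspect it; this is a minimal sketch, and analyzeNetwork assumes the Deep Learning Toolbox is available:
% Rebuild the critic graph with the corrected connection
criticNetwork = layerGraph(statePath);
criticNetwork = addLayers(criticNetwork, actionPath);
criticNetwork = addLayers(criticNetwork, commonPath);
criticNetwork = connectLayers(criticNetwork,'CriticStateFC2','add/in1');
criticNetwork = connectLayers(criticNetwork,'CriticActionFC1','add/in2');
% Visualize the layer graph; it should show one connected component
plot(criticNetwork);
% analyzeNetwork flags disconnected layers and size mismatches before
% rlQValueRepresentation is called (requires Deep Learning Toolbox)
analyzeNetwork(criticNetwork);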