The reward gets stuck on a single value during training or randomly fluctuates (Reinforcement Learning)

I am training a reinforcement learning system, and the reward plot shows long stretches during which the reward does not change at all. This doesn't look normal, especially compared with the shipped examples (Biped Robot, etc.). I suspected that some rlDDPGAgentOptions settings were responsible, but I have tried changing every setting I could find, and even after several thousand episodes the system does not learn. What could cause the training plot to behave this way?

Accepted Answer

Ari Biswas
Ari Biswas on 5 May 2020
It could mean that the training is stuck in a local minimum. You can try a few things:
1. Change the OU noise options to favor more exploration so that the robot can explore more states and get new rewards.
2. Design a different reward function that does not depend so heavily on sparse rewards. From the flatlines in your graph, it looks like the agent is repeatedly visiting a state that yields the same sparse reward.
In most cases, designing a better reward function will improve training. That said, 350 episodes may be too early to expect good results. I would let it run for at least a few thousand episodes before concluding that something needs to change.
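To make point 1 concrete, here is a minimal sketch of how the Ornstein-Uhlenbeck noise options can be adjusted to favor exploration. The property names (Variance, VarianceDecayRate) match R2020-era releases of Reinforcement Learning Toolbox; newer releases rename these (e.g. StandardDeviation), so check the rlDDPGAgentOptions documentation for your version. The specific values below are illustrative, not recommendations:

```matlab
% Sketch: bias a DDPG agent toward more exploration via its OU noise.
opt = rlDDPGAgentOptions;

% Larger noise variance means larger random perturbations of the actions,
% so the agent visits a wider range of states early in training.
opt.NoiseOptions.Variance = 0.6;

% A small decay rate keeps exploration high for longer; the variance
% shrinks by a factor of (1 - VarianceDecayRate) each sample time.
opt.NoiseOptions.VarianceDecayRate = 1e-5;

% Pass the options when constructing the agent, e.g.:
% agent = rlDDPGAgent(actor, critic, opt);
```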
  4 Comments
Abd Al-Rahman Al-Remal
Abd Al-Rahman Al-Remal on 12 Jun 2021
Hi,
When you say to change the noise options to favour more exploration, how would this be implemented? That is, which parameters should be changed, and in what way?
My case is slightly different from the OP's, though: my agent stays at the same reward value consistently (although I have never trained it for more than about 100 episodes).
Many thanks!


More Answers (0)
