Training a DDPG agent and observation values are zero. How do I initialize the actions for the first episode?

Hello,
I am training a DDPG agent with four actions. My observations have stayed at zero for more than 1000 episodes. I suspect that the action values have been zero and that this is affecting the observations. How do I set the action values for the first episode to some non-zero values at the start?
The actions are torque inputs with a minimum and maximum of 200, and they are later multiplied by a gain of 100. Is there something I need to do so that the observations do not stay at zero?
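In case a sketch helps: two common reasons for all-zero observations early in DDPG training are too little exploration noise (so the actor's near-zero initial output is barely perturbed) and an initial plant state that produces no motion without input. Below is a minimal sketch of both fixes, assuming a Simulink environment; the model name, agent block path, observation size, initial-condition variable x0, sample time, and the +/-200 action limits are assumptions, not from the original post.

% Hedged sketch: encourage non-zero actions from the first episode.
% Model name, block path, signal sizes, and Ts are placeholders.
mdl = 'myModel';
obsInfo = rlNumericSpec([6 1]);                      % hypothetical observation size
actInfo = rlNumericSpec([4 1], ...
    'LowerLimit', -200, 'UpperLimit', 200);          % four torques, assumed +/-200
env = rlSimulinkEnv(mdl, [mdl '/RL Agent'], obsInfo, actInfo);

% Randomize the plant's initial state each episode so observations
% vary even while early actions are small ('x0' is illustrative).
env.ResetFcn = @(in) setVariable(in, 'x0', 0.1*randn);

Ts = 0.01;                                           % hypothetical sample time
agentOpts = rlDDPGAgentOptions('SampleTime', Ts);

% Larger Ornstein-Uhlenbeck exploration noise produces non-zero torques
% immediately; scale the standard deviation to the action range.
agentOpts.NoiseOptions.StandardDeviation = 40;
agentOpts.NoiseOptions.StandardDeviationDecayRate = 1e-5;

agent = rlDDPGAgent(obsInfo, actInfo, agentOpts);    % default actor/critic

With this setup, there is no need to force specific action values in the first episode; the exploration noise alone keeps the applied torques away from zero while the actor is still untrained.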
  4 Comments
Bay Jay on 2 Jul 2023
I have a follow-up question.
This is what I know: during training, the episode ends at the end of the simulation time, tf.
Suppose you have an RL problem with no isdone condition, because you just want the agent to learn the "optimal" solution that maximizes the reward, but you want the agent to know that the only termination condition is a specific fixed time tf (tf = 5, which does not change). How do you set the isdone condition? Do you connect a clock to the isDone port, or do you just leave it unconnected? If it is left unconnected, how does the agent know that that time is the terminating condition? Any recommendation to ensure I am properly training the agent would be appreciated.
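In case it helps: for a fixed-horizon task like this, you typically do not need a separate isdone condition at all. You can hold the RL Agent block's isdone input at a constant false and cap the episode length through the training options instead. A minimal sketch, assuming a hypothetical sample time Ts; Tf = 5 is taken from the comment above, and the episode budget and stop values are placeholders.

% Hedged sketch: fixed 5 s episodes without a custom isdone condition.
% Ts is a hypothetical sample time; Tf = 5 is from the comment above.
Ts = 0.01;
Tf = 5;

trainOpts = rlTrainingOptions( ...
    'MaxEpisodes', 2000, ...                 % illustrative training budget
    'MaxStepsPerEpisode', ceil(Tf/Ts), ...   % episode ends exactly at Tf
    'StopTrainingCriteria', 'AverageReward', ...
    'StopTrainingValue', 1e6);               % placeholder stop value

% In Simulink, the isDone input of the RL Agent block can simply be a
% Constant block set to false; the step cap above ends each episode.
trainingStats = train(agent, env, trainOpts);

Note that the agent only "sees" the time limit if time is part of its observations; if the best action genuinely depends on how much time remains, add the simulation clock (or tf - t) as an extra observation.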
Emmanouil Tzorakoleftherakis
It's not very clear why you would want the agent to learn the termination time of the episode. After training, you can always choose to 'unplug' the agent as you see fit.


Answers (0)
