Proximal Policy Optimization Agents

Proximal policy optimization (PPO) is a model-free, online, on-policy, policy gradient reinforcement learning method. This algorithm is a type of policy gradient training that alternates between sampling data through environmental interaction and optimizing a clipped surrogate objective function using stochastic gradient descent. The clipped surrogate objective function improves training stability by limiting the size of the policy change at each step [1].

PPO is a simplified version of trust region policy optimization (TRPO). TRPO is more computationally expensive than PPO, but TRPO tends to be more robust than PPO when the environment dynamics are deterministic and the observation is low dimensional. For more information on TRPO agents, see Trust Region Policy Optimization Agents.

For more information on the different types of reinforcement learning agents, see Reinforcement Learning Agents.

PPO agents can be trained in environments with the following observation and action spaces.

Observation Space: Discrete or continuous
Action Space: Discrete or continuous

PPO agents use the following actor and critic representations.

  • Critic: Value function critic V(S), which you create using rlValueRepresentation

  • Actor: Stochastic policy actor π(S), which you create using rlStochasticActorRepresentation

During training, a PPO agent:

  • Estimates probabilities of taking each action in the action space and randomly selects actions based on the probability distribution.

  • Interacts with the environment for multiple steps using the current policy before using mini-batches to update the actor and critic properties over multiple epochs.

If the UseDeterministicExploitation option in rlPPOAgentOptions is set to true, the action with maximum likelihood is always used in sim and generatePolicyFunction. As a result, the simulated agent and the generated policy behave deterministically.
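
For a discrete action space, the two behaviors amount to sampling from the actor's probability distribution versus always taking its most probable action. A minimal sketch, using made-up probability values:

    % Example action probabilities output by the actor for one observation.
    p = [0.1; 0.6; 0.3];

    % During training (and by default in sim): sample an action from the distribution.
    stochasticActionIndex = find(rand <= cumsum(p),1);

    % With UseDeterministicExploitation = true in rlPPOAgentOptions, sim and
    % generatePolicyFunction always take the maximum-likelihood action instead.
    [~,deterministicActionIndex] = max(p);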

Actor and Critic Functions

To estimate the policy and value function, a PPO agent maintains two function approximators:

  • Actor π(S|θ) — The actor, with parameters θ, takes observation S and returns:

    • The probabilities of taking each action in the action space when in state S (for discrete action spaces)

    • The mean and standard deviation of the Gaussian probability distribution for each action (for continuous action spaces)

  • Critic V(S|ϕ) — The critic, with parameters ϕ, takes observation S and returns the corresponding expectation of the discounted long-term reward.

When training is complete, the trained optimal policy is stored in actor π(S).

For more information on creating actors and critics for function approximation, see Create Policy and Value Function Representations.

Agent Creation

You can create and train PPO agents at the MATLAB® command line or using the Reinforcement Learning Designer app.

For more information on creating agents using Reinforcement Learning Designer, see Create Agents Using Reinforcement Learning Designer.

At the command line, you can create a PPO agent with default actor and critic representations based on the observation and action specifications from the environment. To do so, perform the following steps; an illustrative code sketch follows the list.

  1. Create observation specifications for your environment. If you already have an environment interface object, you can obtain these specifications using getObservationInfo.

  2. Create action specifications for your environment. If you already have an environment interface object, you can obtain these specifications using getActionInfo.

  3. If needed, specify the number of neurons in each learnable layer or whether to use an LSTM layer. To do so, create an agent initialization option object using rlAgentInitializationOptions.

  4. Specify agent options using an rlPPOAgentOptions object.

  5. Create the agent using an rlPPOAgent object.
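
For example, the following sketch performs these steps for a hypothetical, already-created environment interface env; the hidden-layer size and option values are illustrative, not recommendations.

    % Assumes env is an existing environment interface object.
    obsInfo = getObservationInfo(env);     % step 1: observation specifications
    actInfo = getActionInfo(env);          % step 2: action specifications

    % Step 3 (optional): request 128 units in each hidden layer of the default networks.
    initOpts = rlAgentInitializationOptions('NumHiddenUnit',128);

    % Step 4: agent options (illustrative values).
    agentOpts = rlPPOAgentOptions( ...
        'ExperienceHorizon',512, ...
        'MiniBatchSize',64, ...
        'ClipFactor',0.2, ...
        'DiscountFactor',0.99);

    % Step 5: create the agent from the specifications.
    agent = rlPPOAgent(obsInfo,actInfo,initOpts,agentOpts);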

Alternatively, you can create actor and critic representations and use these representations to create your agent. In this case, ensure that the input and output dimensions of the actor and critic representations match the corresponding action and observation specifications of the environment. A code sketch of this workflow follows the steps below.

  1. Create an actor using an rlStochasticActorRepresentation object.

  2. Create a critic using an rlValueRepresentation object.

  3. If needed, specify agent options using an rlPPOAgentOptions object.

  4. Create the agent using the rlPPOAgent function.
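
As a sketch of this second workflow, assuming an existing environment interface env with a vector observation and a discrete action set (the network sizes and option values are illustrative):

    obsInfo = getObservationInfo(env);
    actInfo = getActionInfo(env);
    numObs = obsInfo.Dimension(1);         % length of the observation vector
    numAct = numel(actInfo.Elements);      % number of discrete actions

    % Critic network: maps the observation to the scalar value estimate V(S).
    criticNet = [
        featureInputLayer(numObs,'Normalization','none','Name','obs')
        fullyConnectedLayer(64,'Name','fc')
        reluLayer('Name','relu')
        fullyConnectedLayer(1,'Name','value')];
    critic = rlValueRepresentation(criticNet,obsInfo,'Observation',{'obs'});

    % Actor network: maps the observation to a preference for each discrete action.
    actorNet = [
        featureInputLayer(numObs,'Normalization','none','Name','obs')
        fullyConnectedLayer(64,'Name','fc')
        reluLayer('Name','relu')
        fullyConnectedLayer(numAct,'Name','action')];
    actor = rlStochasticActorRepresentation(actorNet,obsInfo,actInfo, ...
        'Observation',{'obs'});

    % Create the PPO agent from the actor and critic.
    agent = rlPPOAgent(actor,critic,rlPPOAgentOptions('ClipFactor',0.2));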

PPO agents support actors and critics that use recurrent deep neural networks as function approximators.

For more information on creating actors and critics for function approximation, see Create Policy and Value Function Representations.

Training Algorithm

PPO agents use the following training algorithm. To configure the training algorithm, specify options using an rlPPOAgentOptions object. A code sketch of the advantage and loss computations appears after the algorithm.

  1. Initialize the actor π(S) with random parameter values θ.

  2. Initialize the critic V(S) with random parameter values ϕ.

  3. Generate N experiences by following the current policy. The experience sequence is

    S_{t_s},\, A_{t_s},\, R_{t_s+1},\, S_{t_s+1},\, \ldots,\, S_{t_s+N-1},\, A_{t_s+N-1},\, R_{t_s+N},\, S_{t_s+N}

    Here, St is a state observation, At is an action taken from that state, St+1 is the next state, and Rt+1 is the reward received for moving from St to St+1.

    When in state St, the agent computes the probability of taking each action in the action space using π(St) and randomly selects action At based on the probability distribution.

    ts is the starting time step of the current set of N experiences. At the beginning of the training episode, ts = 1. For each subsequent set of N experiences in the same training episode, ts ← ts + N.

    For each experience sequence that does not contain a terminal state, N is equal to the ExperienceHorizon option value. Otherwise, N is less than ExperienceHorizon and Sts+N is the terminal state.

  4. For each episode step t = ts+1, ts+2, …, ts+N, compute the return and advantage function using the method specified by the AdvantageEstimateMethod option.

    • Finite Horizon (AdvantageEstimateMethod = "finite-horizon") — Compute the return Gt, which is the sum of the reward for that step and the discounted future reward [2].

      G_t = \sum_{k=t}^{t_s+N} \left(\gamma^{\,k-t} R_k\right) + b\,\gamma^{\,N-t+1}\, V(S_{t_s+N}\,|\,\phi)

      Here, b is 0 if Sts+N is a terminal state and 1 otherwise. That is, if Sts+N is not a terminal state, the discounted future reward includes the discounted state value function, computed using the critic network V.

      Compute the advantage function Dt.

      D_t = G_t - V(S_t\,|\,\phi)

    • Generalized Advantage Estimator (AdvantageEstimateMethod = "gae") — Compute the advantage function Dt, which is the discounted sum of temporal difference errors [3].

      D_t = \sum_{k=t}^{t_s+N-1} (\gamma\lambda)^{\,k-t}\,\delta_k, \qquad
      \delta_k = R_{k+1} + b\,\gamma\, V(S_{k+1}\,|\,\phi) - V(S_k\,|\,\phi)

      Here, b is 0 if Sts+N is a terminal state and 1 otherwise. λ is a smoothing factor specified using the GAEFactor option.

      Compute the return Gt.

      G_t = D_t + V(S_t\,|\,\phi)

    To specify the discount factor γ for either method, use the DiscountFactor option.

  5. Learn from mini-batches of experiences over K epochs. To specify K, use the NumEpoch option. For each learning epoch:

    1. Sample a random mini-batch data set of size M from the current set of experiences. To specify M, use the MiniBatchSize option. Each element of the mini-batch data set contains a current experience and the corresponding return and advantage function values.

    2. Update the critic parameters by minimizing the loss Lcritic across all sampled mini-batch data.

      L_{critic}(\phi) = \frac{1}{M} \sum_{i=1}^{M} \left(G_i - V(S_i\,|\,\phi)\right)^2

    3. Normalize the advantage values Di based on recent unnormalized advantage values.

      • If the NormalizedAdvantageMethod option is 'none', do not normalize the advantage values.

        \hat{D}_i \leftarrow D_i

      • If the NormalizedAdvantageMethod option is 'current', normalize the advantage values based on the unnormalized advantages in the current mini-batch.

        \hat{D}_i \leftarrow \frac{D_i - \mathrm{mean}(D_1,D_2,\ldots,D_M)}{\mathrm{std}(D_1,D_2,\ldots,D_M)}

      • If the NormalizedAdvantageMethod option is 'moving', normalize the advantage values based on the unnormalized advantages for the N most recent advantages, including the current advantage value. To specify the window size N, use the AdvantageNormalizingWindow option.

        \hat{D}_i \leftarrow \frac{D_i - \mathrm{mean}(D_1,D_2,\ldots,D_N)}{\mathrm{std}(D_1,D_2,\ldots,D_N)}

    4. Update the actor parameters by minimizing the actor loss function Lactor across all sampled mini-batch data.

      L_{actor}(\theta) = -\frac{1}{M} \sum_{i=1}^{M} \Big( \min\big(r_i(\theta)\, D_i,\; c_i(\theta)\, D_i\big) + w\,\mathcal{H}_i(\theta,S_i) \Big)

      r_i(\theta) = \frac{\pi_i(S_i\,|\,\theta)}{\pi_i(S_i\,|\,\theta_{old})}

      c_i(\theta) = \max\big(\min\big(r_i(\theta),\, 1+\varepsilon\big),\, 1-\varepsilon\big)

      Here:

      • Di and Gi are the advantage function and return value for the ith element of the mini-batch, respectively.

      • πi(Si|θ) is the probability of taking action Ai when in state Si, given the updated policy parameters θ.

      • πi(Si|θold) is the probability of taking action Ai when in state Si, given the previous policy parameters θold from before the current learning epoch.

      • ε is the clip factor specified using the ClipFactor option.

      • ℋi(θ,Si) is the entropy and w is the entropy loss weight factor, specified using the EntropyLossWeight option. For more information on entropy loss, see Entropy Loss.

  6. Repeat steps 3 through 5 until the training episode reaches a terminal state.
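
To make the data flow of steps 4 and 5 concrete, the following minimal sketch computes the returns, advantages, and losses for one set of experiences, treating all N experiences as a single mini-batch and using the 'current' normalization method. The function name and interface are hypothetical, the probability ratios and entropy values are assumed to be computed elsewhere, and the gradient-based parameter updates are omitted; this is not the toolbox implementation.

    function [actorLoss,criticLoss] = ppoUpdateSketch(R,V,Vend,rho,H,opts,isTerminal)
    % Hypothetical helper illustrating steps 4 and 5 for one set of N experiences.
    %   R          - N-by-1 rewards received after each step
    %   V          - N-by-1 critic values V(S_t|phi) of the visited states
    %   Vend       - critic value of the final state S_ts+N
    %   rho        - N-by-1 probability ratios r_i(theta)
    %   H          - N-by-1 entropy values H_i(theta,S_i)
    %   opts       - struct with DiscountFactor, GAEFactor, ClipFactor,
    %                EntropyLossWeight, and AdvantageEstimateMethod fields
    %   isTerminal - true if the final state is terminal
    N = numel(R);
    gamma = opts.DiscountFactor;
    b = double(~isTerminal);                    % 0 if terminal, 1 otherwise

    % Step 4: returns G and advantages D.
    if strcmp(opts.AdvantageEstimateMethod,'finite-horizon')
        G = zeros(N,1);
        running = b*Vend;                       % bootstrap with the critic if not terminal
        for t = N:-1:1
            running = R(t) + gamma*running;     % discounted future reward
            G(t) = running;
        end
        D = G - V;                              % D_t = G_t - V(S_t|phi)
    else                                        % generalized advantage estimator
        Vnext = [V(2:end); b*Vend];
        delta = R + gamma*Vnext - V;            % temporal difference errors
        D = zeros(N,1);
        adv = 0;
        for t = N:-1:1
            adv = delta(t) + gamma*opts.GAEFactor*adv;
            D(t) = adv;
        end
        G = D + V;                              % G_t = D_t + V(S_t|phi)
    end

    % Critic loss: mean squared error between returns and value estimates.
    criticLoss = mean((G - V).^2);

    % Advantage normalization ('current' method shown).
    Dhat = (D - mean(D))/std(D);

    % Actor loss: clipped surrogate with an entropy bonus. Minimizing this loss
    % maximizes the clipped surrogate objective and the entropy.
    epsClip = opts.ClipFactor;
    clippedRho = max(min(rho,1+epsClip),1-epsClip);
    surrogate = min(rho.*Dhat, clippedRho.*Dhat);
    actorLoss = -mean(surrogate + opts.EntropyLossWeight*H);
    end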

Entropy Loss

To promote agent exploration, you can add an entropy loss term wℋi(θ,Si) to the actor loss function, where w is the entropy loss weight and ℋi(θ,Si) is the entropy.

The entropy value is higher when the agent is more uncertain about which action to take next. Therefore, maximizing the entropy loss term (minimizing the negative entropy loss) increases the agent uncertainty, thus encouraging exploration. To promote additional exploration, which can help the agent move out of local optima, you can specify a larger entropy loss weight.

For a discrete action space, the agent uses the following entropy value. In this case, the actor outputs the probability of taking each possible discrete action.

\mathcal{H}_i(\theta,S_i) = -\sum_{k=1}^{P} \pi_k(S_i\,|\,\theta) \ln \pi_k(S_i\,|\,\theta)

Here:

  • P is the number of possible discrete actions.

  • πk(Si|θ) is the probability of taking action Ak when in state Si following the current policy.

For a continuous action space, the agent uses the following entropy value. In this case, the actor outputs the mean and standard deviation of the Gaussian distribution for each continuous action.

\mathcal{H}_i(\theta,S_i) = \frac{1}{2} \sum_{k=1}^{C} \ln\!\left(2\pi e\,\sigma_{k,i}^2\right)

Here:

  • C is the number of continuous actions output by the actor.

  • σk,i is the standard deviation for action k when in state Si following the current policy.
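
As a numerical illustration of the two entropy formulas (the probability and standard-deviation values are made up):

    % Entropy for a discrete action space with P action probabilities p.
    p = [0.7; 0.2; 0.1];                        % example probabilities from the actor
    Hdiscrete = -sum(p .* log(p));

    % Entropy for a continuous action space with C standard deviations sigma.
    sigma = [0.5; 1.2];                         % example standard deviations from the actor
    Hcontinuous = 0.5*sum(log(2*pi*exp(1)*sigma.^2));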

References

[1] Schulman, John, Filip Wolski, Prafulla Dhariwal, Alec Radford, and Oleg Klimov. “Proximal Policy Optimization Algorithms.” ArXiv:1707.06347 [Cs], July 19, 2017. https://arxiv.org/abs/1707.06347.

[2] Mnih, Volodymyr, Adrià Puigdomènech Badia, Mehdi Mirza, Alex Graves, Timothy P. Lillicrap, Tim Harley, David Silver, and Koray Kavukcuoglu. “Asynchronous Methods for Deep Reinforcement Learning.” ArXiv:1602.01783 [Cs], February 4, 2016. https://arxiv.org/abs/1602.01783.

[3] Schulman, John, Philipp Moritz, Sergey Levine, Michael Jordan, and Pieter Abbeel. “High-Dimensional Continuous Control Using Generalized Advantage Estimation.” ArXiv:1506.02438 [Cs], October 20, 2018. https://arxiv.org/abs/1506.02438.
