
rlACAgentOptions

Options for AC agent

Description

Use an rlACAgentOptions object to specify options for creating actor-critic (AC) agents. To create an actor-critic agent, use rlACAgent.

For more information, see Actor-Critic Agents.

For more information on the different types of reinforcement learning agents, see Reinforcement Learning Agents.

Creation

Description

opt = rlACAgentOptions creates a default option set for an AC agent. You can modify the object properties using dot notation.


opt = rlACAgentOptions(Name,Value) sets option properties using name-value pairs. For example, rlACAgentOptions('DiscountFactor',0.95) creates an option set with a discount factor of 0.95. You can specify multiple name-value pairs. Enclose each property name in quotes.

Properties


NumStepsToLookAhead

Number of steps the agent interacts with the environment before learning from its experience, specified as a positive integer. When the agent uses a recurrent neural network, NumStepsToLookAhead is treated as the training trajectory length.
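For example, a minimal sketch that sets a 64-step look-ahead when creating the option set (the value is illustrative, not the default):

opt = rlACAgentOptions('NumStepsToLookAhead',64);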

EntropyLossWeight

Entropy loss weight, specified as a scalar value between 0 and 1. A higher loss weight value promotes agent exploration by applying a penalty for being too certain about which action to take. Doing so can help the agent move out of local optima.

For episode step t, the entropy loss function, which is added to the loss function for actor updates, is:

H_t = E \sum_{k=1}^{M} \mu_k(S_t|\theta_\mu) \ln \mu_k(S_t|\theta_\mu)

Here:

  • E is the entropy loss weight.

  • M is the number of possible actions.

  • μ_k(S_t|θ_μ) is the probability of taking action A_k when in state S_t, following the current policy.

When gradients are computed during training, an additional gradient component is computed for minimizing this loss function.
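As a minimal sketch, the entropy loss term for one step of a discrete policy can be evaluated as follows (the weight and probability values are hypothetical):

% Hypothetical values: entropy loss weight and action probabilities
E  = 0.01;                    % entropy loss weight
mu = [0.7 0.2 0.1];           % mu_k(S_t|theta_mu) for M = 3 actions
Ht = E * sum(mu .* log(mu));  % entropy loss added to the actor loss function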

UseDeterministicExploitation

Option to return the action with maximum likelihood for simulation and policy generation, specified as a logical value. When UseDeterministicExploitation is set to true, the action with maximum likelihood is always used in sim and generatePolicyFunction, which causes the agent to behave deterministically.

When UseDeterministicExploitation is set to false, the agent samples actions from probability distributions, which causes the agent to behave stochastically.
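For example, a minimal sketch that switches a default option set to deterministic exploitation:

opt = rlACAgentOptions;
opt.UseDeterministicExploitation = true; % sim and generatePolicyFunction pick the most likely action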

SampleTime

Sample time of agent, specified as a positive scalar.

Within a Simulink® environment, the agent is executed every SampleTime seconds of simulation time.

Within a MATLAB® environment, the agent is executed every time the environment advances. In this case, SampleTime is the time interval between consecutive elements in the output experience returned by sim or train.
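For example, a minimal sketch that sets a 0.1 s sample time (the value is illustrative):

opt = rlACAgentOptions('SampleTime',0.1);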

DiscountFactor

Discount factor applied to future rewards during training, specified as a positive scalar less than or equal to 1.
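As a minimal sketch with a hypothetical reward sequence, the discount factor weights future rewards as follows:

gamma   = 0.95;                                    % DiscountFactor
rewards = [1 0 0 2];                               % hypothetical per-step rewards
G = sum(gamma.^(0:numel(rewards)-1) .* rewards);   % discounted return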

Object Functions

rlACAgent - Actor-critic reinforcement learning agent

Examples


Create an AC agent options object, specifying the discount factor.

opt = rlACAgentOptions('DiscountFactor',0.95)
opt = 
  rlACAgentOptions with properties:

             NumStepsToLookAhead: 32
               EntropyLossWeight: 0
    UseDeterministicExploitation: 0
                      SampleTime: 1
                  DiscountFactor: 0.9500

You can modify options using dot notation. For example, set the agent sample time to 0.5.

opt.SampleTime = 0.5;

Compatibility Considerations


Behavior change in future release

See Also

Introduced in R2019a