Q-Learning Agent
The Q-learning algorithm is an off-policy reinforcement learning method for environments with a discrete action space. A Q-learning agent trains a Q-value function critic to estimate the value of the optimal policy, while following an epsilon-greedy policy based on the value estimated by the critic (it does not try to directly learn an optimal policy). For more information on the different types of reinforcement learning agents, see Reinforcement Learning Agents.
In Reinforcement Learning Toolbox™, a Q-learning agent is implemented by an rlQAgent
object.
Note
Q-learning agents do not support recurrent networks.
Q-learning agents can be trained in environments with the following observation and action spaces.
| Observation Space | Action Space |
| --- | --- |
| Continuous or discrete | Discrete |
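For example, you might pair a continuous observation vector with a small finite action set. The following sketch creates such specifications directly; the dimensions and action values are illustrative assumptions, not requirements of the agent.

```matlab
% Continuous observation: a 3-element column vector (illustrative size).
obsInfo = rlNumericSpec([3 1]);

% Discrete action: one of four possible values (illustrative set).
actInfo = rlFiniteSetSpec([-2 -1 1 2]);
```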
Q agents use the following critic.
| Critic | Actor |
| --- | --- |
| Q-value function critic Q(S,A), which you create using rlQValueFunction or rlVectorQValueFunction | Q agents do not use an actor |
During training, the agent explores the action space using epsilon-greedy exploration. During each control interval, the agent selects a random action with probability ϵ; otherwise, with probability 1–ϵ, it selects the action for which the action-value function is greatest.
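Conceptually, the epsilon-greedy selection performed at each control interval looks like the following sketch. Here, qValues, epsilon, and numActions are hypothetical variables used only for illustration; they are not part of the toolbox API.

```matlab
% Epsilon-greedy action selection (conceptual sketch, not toolbox internals).
if rand < epsilon
    % Explore: pick a random action index.
    actionIdx = randi(numActions);
else
    % Exploit: pick the action with the largest estimated Q-value.
    [~, actionIdx] = max(qValues);
end
```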
Critic Used by the Q-Learning Agent
To estimate the value of the optimal policy, a Q-learning agent uses a critic. The critic is a function approximator object that implements the parametrized action-value function Q(S,A;ϕ), using parameters ϕ. For a given observation S and action A, the critic stores the corresponding estimate of the expected discounted cumulative long-term reward when following the optimal policy (this is the value of the optimal policy). During training, the critic tunes the parameters in ϕ to improve its action-value function estimation. After training, the parameters remain at their tuned values in the critic internal to the trained agent.
For critics that use table-based value functions, the parameters in ϕ are the actual Q(S,A) values in the table.
For more information on critics, see Create Policies and Value Functions.
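For instance, with finite observation and action specifications you can build a table-based model whose entries are the critic parameters. The specification sizes below are illustrative assumptions.

```matlab
% Illustrative finite specifications: 25 discrete states and 4 discrete actions.
obsInfo = rlFiniteSetSpec(1:25);
actInfo = rlFiniteSetSpec(1:4);

% Create a Q-table; its entries are the parameters phi, initialized to zero.
qTable = rlTable(obsInfo, actInfo);
qTable.Table   % 25-by-4 array of Q(S,A) values
```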
Q-Learning Agent Creation
To create a Q-learning agent:
1. Create observation specifications for your environment. If you already have an environment object, you can obtain these specifications using getObservationInfo.
2. Create action specifications for your environment. If you already have an environment object, you can obtain these specifications using getActionInfo.
3. Create an approximation model for your critic. Depending on the type of problem and on the specific critic you use in the next step, this model can be an rlTable object (only for discrete observation spaces), a custom basis function with initial parameter values, or a neural network object. The inputs and outputs of the model you create depend on the type of critic you use in the next step.
4. Create a critic using rlQValueFunction or rlVectorQValueFunction. Use the model you created in the previous step as the first input argument.
5. Specify agent options using an rlQAgentOptions object. Alternatively, you can skip this step and modify the agent options later using dot notation.
6. Create the agent using rlQAgent. A complete creation sketch follows this list.
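Putting the steps together, a minimal creation sketch might look like the following. The predefined grid world environment is an assumption used here only to supply discrete observation and action specifications.

```matlab
% Example environment with discrete observations and actions (assumed here).
env = rlPredefinedEnv("BasicGridWorld");

% Steps 1-2: observation and action specifications.
obsInfo = getObservationInfo(env);
actInfo = getActionInfo(env);

% Step 3: approximation model -- a Q-table for this discrete problem.
qTable = rlTable(obsInfo, actInfo);

% Step 4: create the critic from the table model.
critic = rlQValueFunction(qTable, obsInfo, actInfo);

% Step 5: agent options (defaults here; see the training algorithm section).
opt = rlQAgentOptions;

% Step 6: create the agent.
agent = rlQAgent(critic, opt);
```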
Q-Learning Training Algorithm
Q-learning agents use the following training algorithm. To configure the training
algorithm, specify options using an rlQAgentOptions
object.
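As an illustration, the options referenced in the steps below can be set with dot notation on an rlQAgentOptions object. The numeric values here are arbitrary examples, not recommendations.

```matlab
opt = rlQAgentOptions;

% Exploration: initial epsilon, its decay rate, and its lower bound.
opt.EpsilonGreedyExploration.Epsilon      = 1.0;
opt.EpsilonGreedyExploration.EpsilonDecay = 0.005;
opt.EpsilonGreedyExploration.EpsilonMin   = 0.01;

% Discount factor gamma used in the value function target.
opt.DiscountFactor = 0.99;

% Critic learning rate alpha.
opt.CriticOptimizerOptions.LearnRate = 0.1;
```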
Initialize the critic Q(S,A;ϕ) with random parameter values in ϕ.
For each training episode:
At the beginning of the episode, get the initial observation S from the environment.
Repeat the following operations for each step of the episode until S is a terminal state.
For the current observation S, select a random action A with probability ϵ. Otherwise, select the action for which the critic value function is greatest.
To specify ϵ and its decay rate, use the EpsilonGreedyExploration option.
Execute action A. Observe the reward R and the next observation S'.
If S' is a terminal state, set the value function target y to R. Otherwise, set it to
y = R + γ·max_A' Q(S',A';ϕ)
To set the discount factor γ, use the DiscountFactor option.
Compute the difference ΔQ between the value function target and the current Q(S,A;ϕ) value:
ΔQ = y − Q(S,A;ϕ)
Update the critic using the learning rate α. Specify the learning rate by setting the LearnRate option of the rlOptimizerOptions object stored in the CriticOptimizerOptions property of the agent options object.
For table-based critics, update the corresponding Q(S,A) value in the table:
Q(S,A) = Q(S,A) + α·ΔQ
For all other types of critics, compute the gradients Δϕ of the loss function with respect to the parameters ϕ. Then, update the parameters based on the computed gradients. In this case, the loss function is the square of ΔQ.
Set the observation S to S'.
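For a table-based critic, the full update sequence can be summarized by the following plain-MATLAB sketch. The resetEnvironment and stepEnvironment helpers, and the state and action counts, are hypothetical placeholders for an environment of your own; the sketch is conceptual and does not use the toolbox training loop.

```matlab
% Tabular Q-learning sketch (conceptual; sizes and helpers are assumptions).
numStates  = 25;
numActions = 4;
alpha   = 0.1;    % learning rate
gamma   = 0.99;   % discount factor
epsilon = 0.1;    % exploration probability
Q = zeros(numStates, numActions);    % critic parameters (the Q-table)

for episode = 1:500
    S = resetEnvironment();          % hypothetical helper: initial state index
    done = false;
    while ~done
        % Epsilon-greedy action selection.
        if rand < epsilon
            A = randi(numActions);
        else
            [~, A] = max(Q(S,:));
        end

        % Hypothetical helper: apply action, get reward and next state.
        [Snext, R, done] = stepEnvironment(S, A);

        % Value function target y.
        if done
            y = R;
        else
            y = R + gamma * max(Q(Snext,:));
        end

        % Update the table entry with learning rate alpha.
        Q(S,A) = Q(S,A) + alpha * (y - Q(S,A));

        S = Snext;
    end
end
```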
See Also
Objects
rlQAgent | rlQAgentOptions | rlQValueFunction | rlVectorQValueFunction | rlSARSAAgent | rlLSPIAgent | rlDQNAgent