Using a Q-learning agent for a continuous observation space
Hello,
I have a reinforcement learning problem where the observation is the error of a closed-loop feedback system, which is continuous, while the action space is discrete.
However, I am a little confused about building the critic with rlQValueRepresentation, since its documented syntax mostly uses either a table or a deep neural network,
and neither seems appropriate for my work. I could not find an example like this on the MathWorks website. Can anyone help me with this?
Answers (1)
Stephan
on 16 Jun 2020
You can also write a custom critic based on your own basis function:
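As a sketch of that idea: rlQValueRepresentation accepts a custom basis function together with an initial weight matrix, which handles a continuous observation and a discrete action set without a table or a deep network. The specific error range, action set, and basis features below are illustrative assumptions, not taken from your problem.

```matlab
% Assumed setup: a 1-D continuous error signal and 3 discrete actions
obsInfo = rlNumericSpec([1 1]);          % continuous observation (the feedback error)
actInfo = rlFiniteSetSpec([-1 0 1]);     % discrete action set (example values)

% Custom basis function: maps the observation to a feature vector.
% The critic computes W'*myBasis(obs), giving one Q-value per action.
myBasis = @(obs) [obs; obs.^2; abs(obs); 1];   % hand-picked features (illustrative)

% Initial weights: numFeatures-by-numActions
W0 = rand(4, numel(actInfo.Elements));

% Q-value critic from the custom basis function and initial weights
critic = rlQValueRepresentation({myBasis, W0}, obsInfo, actInfo);

% A Q-learning agent can then be built on this critic
agent = rlQAgent(critic, rlQAgentOptions('EpsilonGreedyExploration', ...
    rl.option.EpsilonGreedyExploration));
```

The choice of basis features determines how well the linear-in-the-weights critic can approximate the true Q-function over the error range, so they are worth tuning for your plant.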