Reinforcement Learning Toolbox: experience buffer size when training in parallel
When parallelisation is used to train a DDPG agent with the following settings:
trainOpts.UseParallel = true;
trainOpts.ParallelizationOptions.Mode = 'async';
trainOpts.ParallelizationOptions.StepsUntilDataIsSent = -1;
trainOpts.ParallelizationOptions.DataToSendFromWorkers = 'Experiences';
Do the parallel simulations each have their own experience buffer? That could use considerably more memory, so I am hoping only one experience buffer is stored and used to update the critic network.
Accepted Answer
From the documentation, it appears there is only one experience buffer, since with 'Experiences' mode the workers send their experiences back to the host, which maintains the buffer and performs the learning updates.
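For reference, a minimal sketch of the full training-options setup the question describes (option names follow the question's own snippet; `maxEpisodes` and `maxSteps` are placeholder values, not from the original post). Note that `StepsUntilDataIsSent = -1` means each worker sends its collected experiences to the host only at the end of an episode:

```matlab
% Hypothetical sketch: async parallel DDPG training options where workers
% send experiences to the host, which owns the single experience buffer.
trainOpts = rlTrainingOptions( ...
    'MaxEpisodes', 1000, ...          % placeholder budget
    'MaxStepsPerEpisode', 500);       % placeholder budget

trainOpts.UseParallel = true;
trainOpts.ParallelizationOptions.Mode = 'async';

% -1: send data at the end of each episode rather than every N steps
trainOpts.ParallelizationOptions.StepsUntilDataIsSent = -1;

% 'Experiences': workers simulate only; the host stores experiences in
% one buffer and updates the actor/critic networks centrally
trainOpts.ParallelizationOptions.DataToSendFromWorkers = 'Experiences';

% trainingStats = train(agent, env, trainOpts);
```

With this configuration the memory cost of the replay buffer is paid once on the host; the workers only hold the experiences accumulated since their last send.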