I'm working on an RL project using the Reinforcement Learning Designer, a PPO agent, and a custom environment. To help with diagnostics during training, I set up some extra plots that track statistics about my agent's performance (separate from the RL Designer's built-in plot showing episode reward, average reward, and Q0). These plots capture per-episode statistics about the final state of the environment as well as important moves made during the episode. The data is accumulated in environment properties inside the step() function during an episode, then written to the plots at the end of the episode in the reset() function.
classdef custEnv < rl.env.MATLABEnvironment
    properties
        numKeyMoves = 0
        numTotalMoves = 0
        episodeNum = 0
        Figure1    % created once in the constructor:
                   % this.Figure1 = figure('Visible', 'on', 'HandleVisibility', 'off');
    end
    methods
        function [Observation, Reward, IsDone, LoggedSignals] = step(this, Action)
            % ... environment dynamics ...
            this.numKeyMoves = this.numKeyMoves + 1;       % incremented only when a key move occurs
            this.numTotalMoves = this.numTotalMoves + 1;
            % ...
        end
        function InitialObservation = reset(this)
            % plot the finished episode's statistics, then clear the counters
            ha = get(this.Figure1, 'CurrentAxes');
            if isempty(ha)
                ha = axes('Parent', this.Figure1);
                hold(ha, 'on')
            end
            plot(ha, this.episodeNum, this.numKeyMoves / this.numTotalMoves, '.')
            this.numKeyMoves = 0;
            this.numTotalMoves = 0;
            this.episodeNum = this.episodeNum + 1;
            % ...
        end
    end
end
This has worked well to date, but now I'd like to use parallel computing to speed up future training sessions. However, the parallel workers don't seem to interact with step() and reset() the same way: each worker apparently gets its own copy of the environment, so the plots in my client session never get populated with any data once parallel training is enabled. Is there a way to recreate the functionality above with parallel training, or any other stable way to track environment properties as episodes progress?
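One idea I've considered, but haven't confirmed works under parallel training, is returning the statistics through the LoggedSignals output of step() so the data travels back with the experiences rather than living only in each worker's copy of the environment. A rough sketch of what I mean (the field names here are just my own placeholders):

```matlab
function [Observation, Reward, IsDone, LoggedSignals] = step(this, Action)
    % ... environment dynamics ...
    this.numKeyMoves = this.numKeyMoves + 1;       % incremented only when a key move occurs
    this.numTotalMoves = this.numTotalMoves + 1;

    % Pack the per-step statistics into LoggedSignals so they leave the
    % worker alongside the experience, instead of staying in worker-local
    % environment properties that the client never sees.
    LoggedSignals.numKeyMoves   = this.numKeyMoves;
    LoggedSignals.numTotalMoves = this.numTotalMoves;
end
```

I'm not sure whether LoggedSignals is actually surfaced on the client side when UseParallel is enabled, or whether there's a supported logging mechanism meant for this, so pointers either way would be appreciated.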