How can I continue training my network from the previous state?

I recently trained a YOLOv2 model for 10 epochs, but the results weren't as good as I had hoped. To improve the model's performance, I plan to increase the total number of training epochs to 20. However, I don't want to start the training process from the beginning again, as this would mean losing the progress made during the initial 10 epochs.
Instead, I would like to continue the training from the point where it left off after the first 10 epochs. This way, I can resume the process and train the model for an additional 10 epochs, building on the progress already made.
Could anyone guide me on how to continue training from the saved state of the model, rather than starting anew?

Accepted Answer

Pavan Sahith on 12 Aug 2024
Hello Harvey,
To resume training your YOLOv2 model from where you left off, you can utilize the previously trained 'detector' object as the starting point for further training. This approach allows you to resume training from the last saved state, retaining all learned weights and biases, and incrementally enhancing your model without starting from scratch.
When you initially train the model and obtain the detector object, you can save this object to a file, which can then be used as an input for subsequent training sessions.
Here's a sample workflow:
Initially, after training the first 10 epochs, save the trained `detector` object to a MAT-file:
% Train for the first 10 epochs, then save the resulting detector
[detector, info] = trainYOLOv2ObjectDetector(dsTrain, lgraph, opts);
save('trainedDetector.mat', 'detector');
Then load the `detector` object from the file and pass its network as the network input to resume training for an additional 10 epochs:
% Reload the saved detector and continue training from its learned weights
load('trainedDetector.mat', 'detector');
[detector, info] = trainYOLOv2ObjectDetector(dsTrain, detector.Network, opts);
Make sure to replace `'trainedDetector.mat'` with the actual path to your saved detector file.
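A quick note on `opts`: when you resume, the `MaxEpochs` value in your `trainingOptions` controls how many additional epochs the second call runs. Below is a minimal sketch of what such options might look like; the solver and all numeric values are placeholders rather than settings from this post, so adapt them to your setup:
% Example training options for the resumed run (illustrative values only)
opts = trainingOptions('sgdm', ...
    'InitialLearnRate', 1e-4, ...   % often lowered when continuing from a trained detector
    'MaxEpochs', 10, ...            % number of additional epochs for this run
    'MiniBatchSize', 16, ...
    'Shuffle', 'every-epoch', ...
    'VerboseFrequency', 30);
With these options in place, the resumed `trainYOLOv2ObjectDetector` call above runs 10 more epochs starting from the loaded detector's weights.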
Consider referring to this very similar MATLAB Answers post, which walks through the same idea.
Hope this helps you move forward.

More Answers (1)

Matt J on 12 Aug 2024
Edited: Matt J on 12 Aug 2024
Just call trainnet() again, feeding it your net in its current state, rather than an untrained layer graph.
You may wish to adjust the learning rate to account for the LearnRateDropPeriod schedule that was in force during the first 10 epochs.
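For illustration, here is a minimal sketch of that pattern; `net`, `dsTrain`, the loss name, and the lowered learning rate are all assumptions for a generic `dlnetwork`, not specifics from this thread (for a YOLOv2 detector you would typically keep using `trainYOLOv2ObjectDetector` as in the accepted answer):
% Resume training by passing the already-trained dlnetwork back into trainnet.
% 'net' and 'dsTrain' are placeholders for your existing network and datastore.
optsResume = trainingOptions('sgdm', ...
    'MaxEpochs', 10, ...            % the additional epochs
    'InitialLearnRate', 1e-4);      % adjusted to reflect the earlier learning-rate schedule
net = trainnet(dsTrain, net, "crossentropy", optsResume);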
