Overhead when using optimization: parfor loops or UseParallel 'always' for fmincon?
I am currently running an optimization problem using fmincon, and I have access to a cluster. I am unsure about the best way to parallelize my code, and it seems I have two options.
First, a bit more detail: the objective function that I feed into fmincon performs some numerical integration, and is thus rather costly to run. In fact, the objective performs this numerical integration for each of about 600 observations. It seems that I might achieve a significant speed increase if I run these loops in parallel using parfor.
However, all the documentation I can currently find recommends using the UseParallel 'always' option to parallelize the computation of the numerical gradient used in the optimization. Currently, my parameter space is relatively small (8 dimensions), but future iterations of the problem will have a 20-30 dimensional parameter space.
It seems that the first option (parfor) actually requires a lot of overhead to run, and this really bogs down performance. The UseParallel option seems to offer only marginal improvements over the non-parallelized version of the code, but I want more. Is the overhead of parfor just prohibitively costly?
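For reference, here is how I am setting up the UseParallel route (a minimal sketch; the pool is opened once up front so that neither approach pays the worker-startup cost inside the optimization itself — newer releases take true/false for UseParallel in place of the older optimset-style 'always'):

```matlab
% Open the pool once, before calling fmincon, so pool startup
% overhead is paid only once
if isempty(gcp('nocreate'))
    parpool;   % uses the default cluster profile
end

% Tell fmincon to estimate the finite-difference gradient in parallel
options = optimoptions('fmincon', ...
    'UseParallel', true, ...
    'Display', 'iter');
```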
My code takes the form:
[x, ...] = fmincon(@(p) zambia_SL(p, params), initial, constraints, options)
...
function ML = zambia_SL(p, params)
    (par)for i = 1:obs   % obs = 600 or so
        % costly independent executions here
    end
    ML = g(computed vectors from above)
end
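Fleshed out, the parfor version of the objective would look something like the sketch below (costlyIntegration and g are stand-ins for my actual per-observation integration and the function that combines the results; the preallocated vals is a sliced output variable, so per-iteration overhead is mainly the broadcast of p and params to the workers):

```matlab
function ML = zambia_SL(p, params)
    obs = 600;                     % number of observations
    vals = zeros(obs, 1);          % preallocate a sliced output variable
    parfor i = 1:obs
        % costlyIntegration: placeholder for the per-observation
        % numerical integration
        vals(i) = costlyIntegration(p, params, i);
    end
    ML = g(vals);                  % combine per-observation results
end
```

Note that, as I understand it, the two approaches do not compose: if UseParallel is on and the objective is evaluated on a worker, a parfor inside it runs as an ordinary for loop, since workers cannot open their own pools. So it really is a choice between the two.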