fmincon has five algorithm options:

'interior-point' (default)
'trust-region-reflective'
'sqp'
'sqp-legacy'
'active-set'

Use optimoptions to set the Algorithm option at the command line.
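For example, to select the sqp algorithm (the objective and bound below are purely illustrative):

```matlab
% Minimize (x-2)^2 subject to x >= 0, using the 'sqp' algorithm
options = optimoptions(@fmincon,'Algorithm','sqp');
x = fmincon(@(x)(x-2)^2,1,[],[],[],[],0,[],[],options)
```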
Recommendations 


'interior-point' handles large, sparse problems, as well as small dense problems. The algorithm satisfies bounds at all iterations, and can recover from NaN or Inf results. It is a large-scale algorithm; see Large-Scale vs. Medium-Scale Algorithms. The algorithm can use special techniques for large-scale problems. For details, see Interior-Point Algorithm in fmincon options.
'sqp' satisfies bounds at all iterations. The algorithm can recover from NaN or Inf results. It is not a large-scale algorithm; see Large-Scale vs. Medium-Scale Algorithms.
'sqp-legacy' is similar to 'sqp', but usually is slower and uses more memory.
'active-set' can take large steps, which adds speed. The algorithm is effective on some problems with nonsmooth constraints. It is not a large-scale algorithm; see Large-Scale vs. Medium-Scale Algorithms.
'trust-region-reflective' requires you to provide a gradient, and allows only bounds or linear equality constraints, but not both. Within these limitations, the algorithm handles both large sparse problems and small dense problems efficiently. It is a large-scale algorithm; see Large-Scale vs. Medium-Scale Algorithms. The algorithm can use special techniques to save memory, such as a Hessian multiply function. For details, see Trust-Region-Reflective Algorithm in fmincon options.
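As a hedged sketch of the Hessian multiply technique (the quadratic problem and identity Hessian below are illustrative; consult the fmincon options reference for the exact HessianMultiplyFcn contract), the objective returns Hessian information as a third output, and the multiply function returns H*Y without ever forming H:

```matlab
% Minimize 0.5*||x||^2 with lower bounds, supplying the gradient and a
% Hessian multiply function instead of an explicit Hessian matrix.
n = 10;
options = optimoptions(@fmincon, ...
    'Algorithm','trust-region-reflective', ...
    'SpecifyObjectiveGradient',true, ...
    'HessianMultiplyFcn',@(Hinfo,Y) Y);   % Hessian is the identity, so H*Y = Y
x = fmincon(@quadObj,2*ones(n,1),[],[],[],[],ones(n,1),[],[],options);

function [f,g,Hinfo] = quadObj(x)
f = 0.5*(x.'*x);   % objective value
g = x;             % gradient
Hinfo = [];        % data forwarded to the Hessian multiply function
end
```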
For descriptions of the algorithms, see Constrained Nonlinear Optimization Algorithms.
fsolve has three algorithms:

'trust-region-dogleg' (default)
'trust-region'
'levenberg-marquardt'

Use optimoptions to set the Algorithm option at the command line.
Recommendations 


'trust-region-dogleg' is the only algorithm that is specially designed to solve nonlinear equations. The others attempt to minimize the sum of squares of the function.
The 'trust-region' algorithm is effective on sparse problems. It can use special techniques, such as a Jacobian multiply function, for large-scale problems.
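For instance, a sparse tridiagonal system (the system below is illustrative) can be solved with the 'trust-region' algorithm and a user-supplied sparse Jacobian:

```matlab
% Solve the tridiagonal nonlinear system
%   F_i = 3*x_i - 2*x_i^2 - x_(i-1) - 2*x_(i+1) + 1 = 0
n = 1000;
options = optimoptions(@fsolve, ...
    'Algorithm','trust-region','SpecifyObjectiveGradient',true);
x = fsolve(@tridiagSystem,-ones(n,1),options);

function [F,J] = tridiagSystem(x)
n = numel(x);
F = 3*x - 2*x.^2 + 1;
F(2:n)   = F(2:n)   - x(1:n-1);
F(1:n-1) = F(1:n-1) - 2*x(2:n);
if nargout > 1   % build the sparse Jacobian only when requested
    J = sparse(1:n,1:n,3-4*x,n,n) ...
        + sparse(2:n,1:n-1,-1,n,n) ...
        + sparse(1:n-1,2:n,-2,n,n);
end
end
```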
For descriptions of the algorithms, see Equation Solving Algorithms.
fminunc has two algorithms:

'quasi-newton' (default)
'trust-region'

Use optimoptions to set the Algorithm option at the command line.
Recommendations 

For help if the minimization fails, see When the Solver Fails or When the Solver Might Have Succeeded. 
For descriptions of the algorithms, see Unconstrained Nonlinear Optimization Algorithms.
lsqlin has three algorithms:

'interior-point' (default)
'trust-region-reflective'
'active-set'

Use optimoptions to set the Algorithm option at the command line.
Recommendations 

For help if the minimization fails, see When the Solver Fails or When the Solver Might Have Succeeded. 
For descriptions of the algorithms, see Least-Squares (Model Fitting) Algorithms.
lsqcurvefit and lsqnonlin have two algorithms:

'trust-region-reflective' (default)
'levenberg-marquardt'

Use optimoptions to set the Algorithm option at the command line.
Recommendations 

For help if the minimization fails, see When the Solver Fails or When the Solver Might Have Succeeded. 
For descriptions of the algorithms, see Least-Squares (Model Fitting) Algorithms.
linprog has three algorithms:

'dual-simplex' (default)
'interior-point-legacy'
'interior-point'

Use optimoptions to set the Algorithm option at the command line.
Recommendations 

For help if the minimization fails, see When the Solver Fails or When the Solver Might Have Succeeded.
Often, the 'dual-simplex' and 'interior-point' algorithms are fast and use the least memory.
The 'interior-point-legacy' algorithm is similar to 'interior-point', but 'interior-point-legacy' can be slower, less robust, or use more memory.
For descriptions of the algorithms, see Linear Programming Algorithms.
quadprog has three algorithms:

'interior-point-convex' (default)
'trust-region-reflective'
'active-set'

Use optimoptions to set the Algorithm option at the command line.
Recommendations 

For help if the minimization fails, see When the Solver Fails or When the Solver Might Have Succeeded. 
For descriptions of the algorithms, see Quadratic Programming Algorithms.
An optimization algorithm is large scale when it uses linear algebra that does not need to store, or operate on, full matrices. This may be done internally by storing sparse matrices, and by using sparse linear algebra for computations whenever possible. Furthermore, the internal algorithms either preserve sparsity, such as a sparse Cholesky decomposition, or do not generate matrices, such as a conjugate gradient method.
In contrast, medium-scale methods internally create full matrices and use dense linear algebra. If a problem is sufficiently large, full matrices take up a significant amount of memory, and the dense linear algebra may require a long time to execute.
Don't let the name “large scale” mislead you; you can use a large-scale algorithm on a small problem. Furthermore, you do not need to specify any sparse matrices to use a large-scale algorithm. Choose a medium-scale algorithm to access extra functionality, such as additional constraint types, or possibly for better performance.
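To see the storage difference between sparse and full matrices in MATLAB (the size below is illustrative):

```matlab
% Compare storage of a sparse identity matrix and its full counterpart
n = 2000;
S = speye(n);   % stores only the n nonzero entries (a few tens of KB)
F = full(S);    % stores all n^2 doubles (about 32 MB here)
whos S F
```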
Interior-point algorithms in fmincon, quadprog, lsqlin, and linprog have many good characteristics, such as low memory usage and the ability to solve large problems quickly. However, their solutions can be slightly less accurate than those from other algorithms. The reason for this potential inaccuracy is that the (internally calculated) barrier function keeps iterates away from inequality constraint boundaries.
For most practical purposes, this inaccuracy is usually quite small.
To reduce the inaccuracy, try to:
Rerun the solver with smaller StepTolerance, OptimalityTolerance, and possibly ConstraintTolerance tolerances (but keep the tolerances sensible). See Tolerances and Stopping Criteria.
Run a different algorithm, starting from the interior-point solution. This can fail, because some algorithms can use excessive memory or time, and all linprog and some quadprog algorithms do not accept an initial point.
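Both suggestions can be sketched for fmincon as follows (the tolerance values and the bound-constrained problem are illustrative):

```matlab
% 1) Rerun the interior-point algorithm with smaller tolerances
options = optimoptions(@fmincon,'Algorithm','interior-point', ...
    'StepTolerance',1e-12,'OptimalityTolerance',1e-10, ...
    'ConstraintTolerance',1e-8);
x = fmincon(@(x)x,1,[],[],[],[],0,[],[],options);

% 2) Polish the result by restarting a different algorithm from x
options.Algorithm = 'sqp';
x2 = fmincon(@(x)x,x,[],[],[],[],0,[],[],options);
```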
For example, try to minimize the function x when bounded below by 0. Using the fmincon default interior-point algorithm:
options = optimoptions(@fmincon,'Algorithm','interior-point','Display','off');
x = fmincon(@(x)x,1,[],[],[],[],0,[],[],options)
x = 2.0000e-08
Using the fmincon 'sqp' algorithm:
options.Algorithm = 'sqp';
x2 = fmincon(@(x)x,1,[],[],[],[],0,[],[],options)
x2 = 0
Similarly, solve the same problem using the linprog 'interior-point-legacy' algorithm:
opts = optimoptions(@linprog,'Display','off','Algorithm','interior-point-legacy');
x = linprog(1,[],[],[],[],0,[],1,opts)
x = 2.0833e-13
Using the linprog 'dual-simplex' algorithm:
opts.Algorithm = 'dual-simplex';
x2 = linprog(1,[],[],[],[],0,[],1,opts)
x2 = 0
In these cases, the interior-point algorithms are less accurate, but their answers are quite close to the correct answer.