When the Solver Fails
Too Many Iterations or Function Evaluations
The solver stopped because it reached a limit on the number of iterations or function evaluations before it minimized the objective to the requested tolerance. To proceed, try one or more of the following.
1. Enable Iterative Display
Set the Display option to 'iter'. This setting shows the results of the solver iterations.
To enable iterative display at the MATLAB® command line, enter
options = optimoptions('solvername','Display','iter');
Call the solver using the options structure.
For an example of iterative display, see Interpret Result.
What to Look for in Iterative Display
- See if the objective function (Fval, f(x), or Resnorm) decreases. Decrease indicates progress.
- Examine constraint violation (Max constraint) to ensure that it decreases towards 0. Decrease indicates progress.
- See if the first-order optimality decreases towards 0. Decrease indicates progress.
- See if the Trust-region radius decreases to a small value. This decrease indicates that the objective might not be smooth.
What to Do
If the solver seemed to progress:
- Set MaxIterations and/or MaxFunctionEvaluations to values larger than the defaults, as in the sketch after this list. You can see the default values in the Options table in the solver's function reference pages.
- Start the solver from its last calculated point.
If the solver is not progressing, try the other listed suggestions.
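As a minimal sketch of both suggestions, the following code raises the limits and then restarts from the last calculated point if a limit is still reached. The objective function, initial point, and option values here are illustrative assumptions, not part of this page.

fun = @(x) 100*(x(2) - x(1)^2)^2 + (1 - x(1))^2;  % illustrative objective
x0 = [-1.9; 2];
opts = optimoptions('fminunc','Algorithm','quasi-newton', ...
    'MaxIterations',1e4,'MaxFunctionEvaluations',1e5);  % above the defaults
[x,fval,exitflag] = fminunc(fun,x0,opts);
if exitflag == 0  % an iteration or function evaluation limit was reached
    [x,fval] = fminunc(fun,x,opts);  % restart from the last calculated point
end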
2. Relax Tolerances
If StepTolerance or OptimalityTolerance, for example, are too small, the solver might not recognize when it has reached a minimum; it can make futile iterations indefinitely.
To change tolerances at the command line, use optimoptions
as described in Set and Change Optimization Options.
The FiniteDifferenceStepSize option (or DiffMaxChange and DiffMinChange options) can affect a solver's progress. These options control the step size in finite differencing for derivative estimation.
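As a minimal sketch, the following options setting loosens the two tolerances and sets an explicit finite-difference step size. The solver choice and all values are illustrative assumptions, not recommendations from this page.

opts = optimoptions('fmincon', ...
    'StepTolerance',1e-5, ...            % looser than typical defaults
    'OptimalityTolerance',1e-4, ...      % looser than typical defaults
    'FiniteDifferenceStepSize',1e-5);    % explicit finite-difference step
% Pass opts as the last argument in your solver call.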
3. Start the Solver from Different Points
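Solvers return local solutions, so starting from several points increases the chance of finding a better one. Here is a minimal sketch with an illustrative objective that has two local minima; none of these names or values come from this page.

fun = @(x) x(1)^4 - 2*x(1)^2 + 0.1*x(1) + x(2)^2;  % has two local minima
opts = optimoptions('fminunc','Display','none');
rng default                      % for reproducibility
bestfval = Inf;
for k = 1:10
    x0 = 4*rand(2,1) - 2;        % random start in [-2,2]^2
    [x,fval] = fminunc(fun,x0,opts);
    if fval < bestfval           % keep the best local solution found
        bestfval = fval;
        bestx = x;
    end
end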
4. Check Objective and Constraint Function Definitions
For example, check that your objective and nonlinear constraint functions return the correct values at some points. See Check your Objective and Constraint Functions. Check that an infeasible point does not cause an error in your functions; see Iterations Can Violate Constraints.
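A quick spot check can catch many definition errors. In this sketch, fun, nonlcon, and x0 are placeholders for your own objective function, nonlinear constraint function, and test point:

fval = fun(x0)            % should be a finite real scalar
[c,ceq] = nonlcon(x0)     % should be finite real vectors, returned without error
if ~isfinite(fval) || any(~isfinite([c(:); ceq(:)]))
    disp('The functions return NaN or Inf values at this point.')
end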
5. Center and Scale Your Problem
Solvers run more reliably when each coordinate has about the same effect on the objective and constraint functions. Multiply your coordinate directions with appropriate scalars to equalize the effect of each coordinate. Add appropriate values to certain coordinates to equalize their size.
Example: Centering and Scaling. Consider minimizing 1e6*x(1)^2 + 1e-6*x(2)^2:
f = @(x) 10^6*x(1)^2 + 10^-6*x(2)^2;
Minimize f using the fminunc 'quasi-newton' algorithm:
opts = optimoptions('fminunc','Display','none','Algorithm','quasi-newton');
x = fminunc(f,[0.5;0.5],opts)

x =

         0
    0.5000
The result is incorrect; poor scaling interfered with obtaining a good solution.
Scale the problem. Set

D = diag([1e-3,1e3]);
fr = @(y) f(D*y);
y = fminunc(fr, [0.5;0.5], opts)

y =

     0
     0     % the correct answer
Similarly, poor centering can interfere with a solution.
fc = @(z)fr([z(1)-1e6;z(2)+1e6]); % poor centering
z = fminunc(fc,[.5 .5],opts)

z =

  1.0e+005 *
   10.0000  -10.0000   % looks good, but...

z - [1e6 -1e6] % checking how close z is to 1e6

ans =

   -0.0071    0.0078   % reveals a distance

fcc = @(w)fc([w(1)+1e6;w(2)-1e6]); % centered
w = fminunc(fcc,[.5 .5],opts)

w =

     0     0   % the correct answer
6. Provide Gradient or Jacobian
If you do not provide gradients or Jacobians, solvers estimate gradients and Jacobians by finite differences. Therefore, providing these derivatives can save computational time, and can lead to increased accuracy. The problem-based approach can provide gradients automatically; see Automatic Differentiation in Optimization Toolbox.
For constrained problems, providing a gradient has another advantage. A solver can reach a point x such that x is feasible, but finite differences around x always lead to an infeasible point. In this case, a solver can fail or halt prematurely. Providing a gradient allows a solver to proceed.
Provide gradients or Jacobians in the files for your objective function and nonlinear constraint functions. For details of the syntax, see Writing Scalar Objective Functions, Writing Vector and Matrix Objective Functions, and Nonlinear Constraints.
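For instance, here is a minimal sketch of a scalar objective that also returns its gradient; the Rosenbrock-style function and the file name rosenboth.m are illustrative assumptions, not from this page.

function [f,g] = rosenboth(x)
% Objective value and, when requested, its gradient
f = 100*(x(2) - x(1)^2)^2 + (1 - x(1))^2;
if nargout > 1  % gradient required
    g = [-400*(x(2) - x(1)^2)*x(1) - 2*(1 - x(1));
          200*(x(2) - x(1)^2)];
end

Tell the solver that the objective supplies a gradient:

opts = optimoptions('fminunc','SpecifyObjectiveGradient',true);
x = fminunc(@rosenboth,[-1;2],opts);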
To check that your gradient or Jacobian function is correct, use the checkGradients function, as described in Checking Validity of Gradients or Jacobians.
If you have a Symbolic Math Toolbox™ license, you can calculate gradients and Hessians programmatically. For an example, see Calculate Gradients and Hessians Using Symbolic Math Toolbox.
For examples using gradients and Jacobians, see Minimization with Gradient and Hessian, Nonlinear Constraints with Gradients, Calculate Gradients and Hessians Using Symbolic Math Toolbox, Solve Nonlinear System Without and Including Jacobian, and Large Sparse System of Nonlinear Equations with Jacobian. For automatic differentiation in the problem-based approach, see Effect of Automatic Differentiation in Problem-Based Optimization.
7. Provide Hessian
Solvers often run more reliably and with fewer iterations when you supply a Hessian.
The following solvers and algorithms accept Hessians:
- fmincon interior-point. Write the Hessian as a separate function. For an example, see fmincon Interior-Point Algorithm with Analytic Hessian.
- fmincon trust-region-reflective. Give the Hessian as the third output of the objective function. For an example, see Minimization with Dense Structured Hessian, Linear Equalities.
- fminunc trust-region. Give the Hessian as the third output of the objective function. For an example, see Minimization with Gradient and Hessian.
If you have a Symbolic Math Toolbox license, you can calculate gradients and Hessians programmatically. For an example, see Calculate Gradients and Hessians Using Symbolic Math Toolbox. To provide a Hessian in the problem-based approach, see Supply Derivatives in Problem-Based Workflow.
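As a sketch of the third-output pattern, extend the illustrative gradient example shown earlier to return the Hessian as well; the function and file name rosenall.m are assumptions, not from this page.

function [f,g,H] = rosenall(x)
% Objective value, gradient, and Hessian
f = 100*(x(2) - x(1)^2)^2 + (1 - x(1))^2;
if nargout > 1  % gradient required
    g = [-400*(x(2) - x(1)^2)*x(1) - 2*(1 - x(1));
          200*(x(2) - x(1)^2)];
end
if nargout > 2  % Hessian required
    H = [1200*x(1)^2 - 400*x(2) + 2, -400*x(1);
         -400*x(1),                   200];
end

For the fminunc trust-region algorithm, enable the Hessian with options such as:

opts = optimoptions('fminunc','Algorithm','trust-region', ...
    'SpecifyObjectiveGradient',true,'HessianFcn','objective');
x = fminunc(@rosenall,[-1;2],opts);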
Converged to an Infeasible Point
Usually, you get this result because the solver was unable to find a point satisfying all constraints to within the ConstraintTolerance tolerance. However, the solver might have located or started at a feasible point, and converged to an infeasible point. If the solver lost feasibility, see Solver Lost Feasibility. If quadprog returns this result, see quadprog Converges to an Infeasible Point.
To proceed when the solver found no feasible point, try one or more of the following.
1. Check Linear Constraints
Try finding a point that satisfies the bounds and linear constraints by solving a linear programming problem.
Define a linear programming problem with an objective function that is always zero:
f = zeros(size(x0)); % assumes x0 is the initial point
Solve the linear programming problem to see if there is a feasible point:
xnew = linprog(f,A,b,Aeq,beq,lb,ub);
- If there is a feasible point xnew, use xnew as the initial point and rerun your original problem; the sketch after this list shows one way to automate this check.
- If there is no feasible point, your problem is not well-formulated. Check the definitions of your bounds and linear constraints. For details on checking linear constraints, see Investigate Linear Infeasibilities.
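As a minimal sketch, you can check the linprog exit flag before reusing the point; the variable names follow the code above.

f = zeros(size(x0));
[xnew,~,exitflag] = linprog(f,A,b,Aeq,beq,lb,ub);
if exitflag == 1      % linprog found a feasible point
    x0 = xnew;        % rerun your original problem starting from x0
else                  % for example, exitflag -2 means no feasible point found
    disp('Bounds and linear constraints appear to be infeasible.')
end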
2. Check Nonlinear Constraints
After ensuring that your bounds and linear constraints are feasible (contain a point satisfying all constraints), check your nonlinear constraints.
Set your objective function to zero:
@(x)0
Run your optimization with all constraints and with the zero objective.

- If you find a feasible point xnew, set x0 = xnew and rerun your original problem.
- If you do not find a feasible point using a zero objective function, use the zero objective function with several initial points.
- If you find a feasible point xnew, set x0 = xnew and rerun your original problem.
- If you do not find a feasible point, try using fmincon with the EnableFeasibilityMode option set to true and the SubproblemAlgorithm option set to 'cg', as in Obtain Solution Using Feasibility Mode and in the sketch after this list. Try several initial points with these options.
- If you still do not find a feasible point, try relaxing the constraints, discussed below.
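A minimal sketch of the feasibility-mode suggestion, with the constraint data and initial point as placeholders for your own:

opts = optimoptions('fmincon','EnableFeasibilityMode',true, ...
    'SubproblemAlgorithm','cg');
xnew = fmincon(@(x)0,x0,A,b,Aeq,beq,lb,ub,nonlcon,opts);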
Try relaxing your nonlinear inequality constraints, then tightening them.
1. Change the nonlinear constraint function c to return c-Δ, where Δ is a positive number. This change makes your nonlinear constraints easier to satisfy (see the sketch below).
2. Look for a feasible point for the new constraint function, using either your original objective function or the zero objective function.
   - If you find a feasible point, reduce Δ, and look for a feasible point for the new constraint function, starting at the previously found point.
   - If you do not find a feasible point, try increasing Δ and looking again.
If you find no feasible point, your problem might be truly infeasible, meaning that no solution exists. Check all your constraint definitions again.
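As a minimal sketch of step 1 of the relaxation procedure, wrap a hypothetical original constraint function mycon; the names mycon and relaxedcon are placeholders, not from this page.

function [c,ceq] = relaxedcon(x,Delta)
[c,ceq] = mycon(x);   % your original nonlinear constraints
c = c - Delta;        % relaxes c <= 0 to c <= Delta

Then look for a feasible point with the relaxed constraints, for example:

Delta = 0.1;   % illustrative value
xnew = fmincon(@(x)0,x0,A,b,Aeq,beq,lb,ub,@(x)relaxedcon(x,Delta));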
Solver Lost Feasibility
If the solver started at a feasible point, but converged to an infeasible point, try the following techniques.
- Try a different algorithm. The fmincon 'sqp' and 'interior-point' algorithms are usually the most robust, so try one or both of them first (see the sketch after this list).
- Tighten the bounds. Give the highest lb and lowest ub vectors that you can. This can help the solver to maintain feasibility. The fmincon 'sqp' and 'interior-point' algorithms obey bounds at every iteration, so tight bounds help throughout the optimization.
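A minimal sketch of switching algorithms; fun, x0, and the constraint arguments are placeholders for your own problem data:

opts = optimoptions('fmincon','Algorithm','sqp');   % or 'interior-point'
x = fmincon(fun,x0,A,b,Aeq,beq,lb,ub,nonlcon,opts);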
quadprog Converges to an Infeasible Point
Usually, you get this message because the linear constraints are inconsistent, or are nearly singular. To check whether a feasible point exists, create a linear programming problem with the same constraints and with a zero objective function vector f. Solve using the linprog 'dual-simplex' algorithm:
options = optimoptions('linprog','Algorithm','dual-simplex');
x = linprog(f,A,b,Aeq,beq,lb,ub,options)
If linprog finds no feasible point, then your problem is truly infeasible.

If linprog finds a feasible point, then try a different quadprog algorithm. Alternatively, change some tolerances, such as StepTolerance or ConstraintTolerance, and solve the problem again.
Problem Unbounded
The solver reached a point whose objective function was less than the objective limit tolerance.
Your problem might be truly unbounded. In other words, there is a sequence of points xi, all satisfying the problem constraints, such that lim f(xi) = –∞.
Check that your problem is formulated correctly. Solvers try to minimize objective functions; if you want a maximum, change your objective function to its negative. For an example, see Maximizing an Objective.
Try scaling or centering your problem. See Center and Scale Your Problem.
Relax the objective limit tolerance by using optimoptions to reduce the value of the ObjectiveLimit tolerance.
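For example, assuming fminunc with placeholders fun and x0; the value shown is illustrative.

opts = optimoptions('fminunc','ObjectiveLimit',-1e50);  % lower than the default
x = fminunc(fun,x0,opts);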
fsolve Could Not Solve Equation
fsolve can fail to solve an equation for various reasons. Here are some suggestions for how to proceed:
- Try changing the initial point. fsolve relies on an initial point. By giving it different initial points, you increase the chances of success.
- Check the definition of the equation to make sure that it is smooth. fsolve might fail to converge for equations with discontinuous gradients, such as absolute value. fsolve can fail to converge for functions with discontinuities.
- Check that the equation is “square,” meaning equal dimensions for input and output (has the same number of unknowns as values of the equation).
- Change tolerances, especially OptimalityTolerance and StepTolerance. If you attempt to get high accuracy by setting tolerances to very small values, fsolve can fail to converge. If you set tolerances that are too high, fsolve can fail to solve an equation accurately.
- Check the problem definition. Some problems have no real solution, such as x^2 + 1 = 0. If you can accept a complex solution, try setting your initial point to a complex value; fsolve does not attempt to find a complex solution when the initial point is real (see the sketch after this list).
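As a minimal sketch of the last suggestion: x^2 + 1 = 0 has no real solution, but a complex initial point lets fsolve search for a complex root. The choice of the 'levenberg-marquardt' algorithm here is an assumption about handling complex steps, not a statement from this page.

fun = @(x) x^2 + 1;   % no real solution
opts = optimoptions('fsolve','Algorithm','levenberg-marquardt','Display','off');
x = fsolve(fun,1 + 1i,opts)   % complex initial point; fsolve can return a root near 1i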