Setting options in fmincon

Seems like a simple question, but I have tried all kinds of variants and nothing works. The documentation on this is poor; all kinds of examples are given that produce errors.
I just want to lower the optimality tolerance, because my fmincon is not finding the minimum (which is zero) even though it is getting to within the default OptimalityTolerance. I need it closer to zero than that. I will probably need to do the same with the StepTolerance.
I have tried using OptimalityTolerance instead of TolFun as well; neither works.
If I run the code below and then call optimoptions('fmincon') at the command line, the OptimalityTolerance stays at 1e-6, and my results are no different no matter what I use.
x0=log(1e-20)*ones(5,1)'; lb=log(1e-20)*ones(5,1)'; ub=log(1e-10)*ones(5,1)';
fun = @(x)abs(log(A)-log(cosT)+log(alp(1))+x(1)+log(sum(exp(x-x(1)+log(alp)-log(alp(1)))))-log(ro3^(-5/3)));
options = optimoptions(@fmincon,'TolFun',1e-7);
[x,fval] = fmincon(fun,x0,[],[],[],[],lb,ub,[],options)
optimoptions('fmincon')

Answers (2)

TolFun is to be used with optimset. With optimoptions, use OptimalityTolerance:
opts = optimoptions('fmincon','OptimalityTolerance',1e-12);
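A minimal sketch of the full pattern (using the variable names from the question): creating the options object alone does nothing; it must also be passed into the fmincon call.

```matlab
% Create the options object, then pass it as the last argument to fmincon.
opts = optimoptions('fmincon','OptimalityTolerance',1e-12);
[x,fval] = fmincon(fun,x0,[],[],[],[],lb,ub,[],opts);
```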
Matt J on 23 Sep 2019
Edited: Matt J on 23 Sep 2019
If I run this as below, and then do a optimoptions('fmincon') at the command line the OptimalityTolerance stays at 1e-6 and my results are no different no matter what I use.
Doing optimoptions('fmincon') at the command line does not give any information about what settings were used in a previous optimization. It will just display the default fmincon settings at the command line. The real option settings seen by fmincon are those contained in the options object which you created and passed to fmincon. Those settings were correctly made:
>> options
options =
fmincon options:
Options used by current Algorithm ('interior-point'):
(Other available algorithms: 'active-set', 'sqp', 'sqp-legacy', 'trust-region-reflective')
Set properties:
OptimalityTolerance: 1.0000e-07
Default properties:
Algorithm: 'interior-point'
CheckGradients: 0
ConstraintTolerance: 1.0000e-06
Display: 'final'
FiniteDifferenceStepSize: 'sqrt(eps)'
FiniteDifferenceType: 'forward'
HessianApproximation: 'bfgs'
HessianFcn: []
HessianMultiplyFcn: []
HonorBounds: 1
MaxFunctionEvaluations: 3000
MaxIterations: 1000
ObjectiveLimit: -1.0000e+20
OutputFcn: []
PlotFcn: []
ScaleProblem: 0
SpecifyConstraintGradient: 0
SpecifyObjectiveGradient: 0
StepTolerance: 1.0000e-10
SubproblemAlgorithm: 'factorization'
TypicalX: 'ones(numberOfVariables,1)'
UseParallel: 0
As for why your results are not changing, we cannot see what is happening because you have not provided enough of the input variables to run the code. However, it may be that another stopping criterion was reached before the OptimalityTolerance criterion.
You should also be mindful that OptimalityTolerance is not a way of specifying how close to the minimum value you would like the algorithm to stop; no option setting can specify that directly. OptimalityTolerance sets a tolerance on the first-order optimality measure, as described in the documentation.
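One way to see which stopping criterion actually fired is to request the extra outputs from fmincon; the output-structure fields below are as documented for fmincon (a sketch, using the variable names from the question):

```matlab
% exitflag encodes why fmincon stopped; output holds diagnostics.
[x,fval,exitflag,output] = fmincon(fun,x0,[],[],[],[],lb,ub,[],options);
output.firstorderopt   % first-order optimality measure at the final point
output.message         % full exit message, naming the criterion that was met
```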

6 Comments

Thanks. I see now that it does change the options, but my results are not changing. Here is the full code. If I evaluate fun(xtrue) I get zero, whereas fmincon is stopping at fun(x)=1.77e-7, which is not close enough to the actual min.
close all; clear;
lam=635e-9;
k=2*pi/lam;
Theta=60*pi/180;
cosT=cos(Theta);
delZ=100;
Hmax=600;
zk=[delZ:delZ:Hmax-delZ];
cn2=[1e-12 1e-13 1e-14 1e-14 1e-14];
xtrue=log(cn2);
alp=(1-(zk/Hmax)).^(5/3);
ak=cn2.*alp;
A=0.423*(k^2)*delZ;
ro=((A/cosT)*sum(ak))^-0.6
logrom53=log(A)-log(cosT)+log(sum(ak));
ro2=(exp(logrom53))^-0.6
MaxTerm=log(ak(3)); %max(log(ak));
logro3m53=log(A)-log(cosT)+MaxTerm+log(sum(exp(log(ak)-MaxTerm)));
ro3=(exp(logro3m53))^-0.6
x0=log(1e-20)*ones(5,1)';
lb=log(1e-20)*ones(5,1)';
ub=log(1e-10)*ones(5,1)';
fun = @(x)abs(log(A)-log(cosT)+log(alp(1))+x(1)+log(sum(exp(x-x(1)+log(alp)-log(alp(1)))))-log(ro3^(-5/3)));
options = optimoptions(@fmincon,'OptimalityTolerance',1e-12,'StepTolerance',1e-14);
[x,fval] = fmincon(fun,x0,[],[],[],[],lb,ub,[],options)
semilogy(exp(x))
hold on
semilogy(cn2)
Activity feed says I got another answer from you, but there is nothing here except my comment.
Perhaps I am trying to do something beyond what fmincon is supposed to do. Instead of minimizing my function as a function of a single scalar variable x, I am trying to minimize it using a vector x. Is this outside the scope of fmincon?
Well, you have two problems. The first is that fun(x) is not differentiable at points where fun(x)=0, because the abs() operation in your objective function is not differentiable there. fmincon is a derivative-based solver, so this violates one of its assumptions.
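One common workaround for the abs() issue, offered here only as a sketch and not something discussed in the original thread, is to square the residual instead of taking its absolute value: the minimizer is unchanged, but the objective becomes differentiable at its zeros.

```matlab
% Same residual as fun, but squared so the objective is smooth at fun(x)=0
% (A, cosT, alp, ro3 are the variables defined in the posted code).
fun2 = @(x) (log(A)-log(cosT)+log(alp(1))+x(1) ...
             + log(sum(exp(x-x(1)+log(alp)-log(alp(1))))) ...
             - log(ro3^(-5/3)))^2;
```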
The second problem is that you have multiple local minima, which you can see by plotting a line connecting the fmincon output x with your xtrue,
L=@(alpha) fun(x+alpha*(xtrue-x));
fplot(@(t) arrayfun(L,t),[-1.2,1.2])
[Image: plot of L(t) over [-1.2, 1.2], showing x and xtrue lying in separate basins of the objective]
Let's take a look at the message fmincon displayed (I'm running in release R2019a.)
Local minimum possible. Constraints satisfied.
fmincon stopped because the size of the current step is less than
the value of the step size tolerance and constraints are
satisfied to within the value of the constraint tolerance.
<stopping criteria details>
Certain sections of the exit message are hyperlinks to the documentation, including the entire first line. The page that first line links to in turn links to another documentation page listing several potential reasons why fmincon may have returned a local minimum.
As for optimoptions returning an options object that does not reflect the changes made by a previous optimoptions call: as Matt J indicated, that is the correct behavior. optimoptions creates an object that you pass into an optimization function. It does not change any "global options", though you are not the first person to assume that is how it works.
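In other words, each optimoptions call returns a fresh object; to adjust settings incrementally, you pass the existing object back in. A small sketch of that pattern:

```matlab
opts = optimoptions('fmincon','OptimalityTolerance',1e-12);
opts = optimoptions(opts,'StepTolerance',1e-14);  % modify the existing object
% opts only takes effect when passed to fmincon; it is not global state.
```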
Sorry, I am new to defining functions with the @ symbol so I am having a hard time following what you did with
L=@(alpha) fun(x+alpha*(xtrue-x));
fplot(@(t) arrayfun(L,t),[-1.2,1.2])
but it looks like you just scaled xtrue-x by -1.2 to 1.2 and plotted my fun for those values of x?
Are all of the elements of x just scaled by the same amount for this? That may point back to my last question of whether fmincon is even suited to my problem. I am trying to find the best set of x values where the 5 elements of x are independent of one another (not just a scaled version of the starting guess). Said another way, I am trying to find the best fit of x=[x(1) x(2) x(3) x(4) x(5)] so that fun(x)=0, with no relationship or scaling defining the different elements. This requires fmincon to find the 5-dimensional minimum. I think all of the examples I have seen are 1-dimensional, where x is a single scalar value.
Matt J on 24 Sep 2019
Edited: Matt J on 24 Sep 2019
Yes, fmincon will search for the 5-dimensional minimum (up to any constraints you specify, of course).
The plot of L(t) is deliberately a restriction of your 5-dimensional objective function to a 1-dimensional cross-sectional line passing through x and xtrue: one-dimensional plots are easier to visualize than 5-dimensional plots. The point was just to show that x and xtrue lie in different basins of the graph of your objective function. Because of the initial guess that you chose, fmincon is caught in the basin of x and cannot reach xtrue. Changing the optimoptions will not help with that.
Also, the plot further demonstrates that your objective has no derivative at either minimum. That is likely why fmincon is having trouble converging to better than 1e-7.
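Since the difficulty is the local basin rather than the tolerances, one sketch of a workaround (my assumption, not something proposed in the thread) is to rerun fmincon from several random starting points within the bounds and keep the best result:

```matlab
% Crude multistart: random starts drawn uniformly between lb and ub.
bestFval = inf;
for k = 1:20
    xk = lb + rand(size(lb)).*(ub - lb);
    [xs,fs] = fmincon(fun,xk,[],[],[],[],lb,ub,[],options);
    if fs < bestFval
        bestFval = fs;
        xBest = xs;
    end
end
```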


