Nonlinear Constraints with Gradients

This example shows how to solve a nonlinear minimization problem with nonlinear constraints when you supply derivative information.

Ordinarily, minimization routines use numerical gradients calculated by finite-difference approximation. This procedure systematically perturbs each variable in order to calculate function and constraint partial derivatives. Alternatively, you can provide a function to compute partial derivatives analytically. Typically, when you provide derivative information, solvers work more accurately and efficiently.
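
For intuition, the following sketch shows how such a forward-difference gradient estimate works. The helper name fdgrad and its details are illustrative assumptions for this sketch, not part of fmincon:

function g = fdgrad(fun,x)
% Illustrative forward-difference gradient estimate (hypothetical helper).
% fun is a function handle returning a scalar; x is a vector.
h = sqrt(eps);                 % typical perturbation size
f0 = fun(x);
g = zeros(numel(x),1);
for k = 1:numel(x)
    xp = x;
    xp(k) = xp(k) + h;         % perturb one variable at a time
    g(k) = (fun(xp) - f0)/h;   % one extra function evaluation per variable
end
end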

Objective Function and Nonlinear Constraint

The problem is to solve

$$\min_x f(x) = e^{x_1}\left(4x_1^2 + 2x_2^2 + 4x_1 x_2 + 2x_2 + 1\right),$$

subject to the constraints

$$\begin{aligned} x_1 x_2 - x_1 - x_2 &\le -1.5,\\ x_1 x_2 &\ge -10. \end{aligned}$$

Because the fmincon solver expects the constraints to be written in the form c(x) ≤ 0, write your constraint function to return the following value:

$$c(x) = \begin{bmatrix} x_1 x_2 - x_1 - x_2 + 1.5 \\ -10 - x_1 x_2 \end{bmatrix}.$$

Objective Function with Gradient

The objective function is

$$f(x) = e^{x_1}\left(4x_1^2 + 2x_2^2 + 4x_1 x_2 + 2x_2 + 1\right).$$

Compute the gradient of f(x) with respect to the variables x1 and x2. By the product rule, the partial derivative with respect to x1 contains a copy of f(x) itself:

$$\nabla f(x) = \begin{bmatrix} f(x) + e^{x_1}\left(8x_1 + 4x_2\right) \\ e^{x_1}\left(4x_1 + 4x_2 + 2\right) \end{bmatrix}.$$

The objfungrad helper function at the end of this example returns both the objective function f(x) and its gradient in the second output gradf. Set @objfungrad as the objective.

fun = @objfungrad;
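
As an optional sanity check, which is not part of the original example, you can compare the analytic gradient against a central-difference estimate at a test point:

% Optional check: analytic vs. central-difference gradient at a test point
xt = [-1,1];
[~,gradf] = objfungrad(xt);
h = 1e-6;
g1 = (objfungrad([xt(1)+h,xt(2)]) - objfungrad([xt(1)-h,xt(2)]))/(2*h);
g2 = (objfungrad([xt(1),xt(2)+h]) - objfungrad([xt(1),xt(2)-h]))/(2*h);
disp([gradf(:),[g1;g2]])  % the two columns should agree closely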

Constraint Function with Gradient

The helper function confungrad is the nonlinear constraint function; it appears at the end of this example.

In the derivative information for the inequality constraints, each column corresponds to one constraint. In other words, the gradient of the constraints is in the following format:

$$\begin{bmatrix} \dfrac{\partial c_1}{\partial x_1} & \dfrac{\partial c_2}{\partial x_1} \\[4pt] \dfrac{\partial c_1}{\partial x_2} & \dfrac{\partial c_2}{\partial x_2} \end{bmatrix} = \begin{bmatrix} x_2 - 1 & -x_2 \\ x_1 - 1 & -x_1 \end{bmatrix}.$$

Set @confungrad as the nonlinear constraint function.

nonlcon = @confungrad;
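
To confirm the expected shapes, you can evaluate confungrad at a test point; this check is not part of the original example. With two constraints and two variables, DC is a 2-by-2 matrix whose jth column is the gradient of c(j):

% Optional check: constraint values and gradient shapes at a test point
[c,ceq,DC,DCeq] = confungrad([-1,1]);
disp(c)   % values of the two inequality constraints
disp(DC)  % one column per constraint, one row per variable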

Set Options to Use Derivative Information

Indicate to the fmincon solver that the objective and constraint functions provide derivative information. To do so, use optimoptions to set the SpecifyObjectiveGradient and SpecifyConstraintGradient option values to true.

options = optimoptions('fmincon',...
    'SpecifyObjectiveGradient',true,'SpecifyConstraintGradient',true);
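
Optionally, you can have fmincon itself compare the supplied derivatives against finite-difference estimates by also setting the CheckGradients option, for example in a separate options object; the solver stops with an error if the derivatives disagree:

% Optional: have the solver verify the supplied gradients
optionsCheck = optimoptions(options,'CheckGradients',true);

Because this check adds finite-difference evaluations, leave it off in production runs.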

Solve Problem

Set the initial point to [-1,1].

x0 = [-1,1];

The problem has no bounds or linear constraints, so set those argument values to [].

A = [];
b = [];
Aeq = [];
beq = [];
lb = [];
ub = [];

Call fmincon to solve the problem.

[x,fval] = fmincon(fun,x0,A,b,Aeq,beq,lb,ub,nonlcon,options)
Local minimum found that satisfies the constraints.

Optimization completed because the objective function is non-decreasing in 
feasible directions, to within the value of the optimality tolerance,
and constraints are satisfied to within the value of the constraint tolerance.
x = 1×2

   -9.5473    1.0474

fval = 0.0236

The solution is the same as in the example Nonlinear Inequality Constraints, which solves the problem without using derivative information. The advantage of supplying derivatives is that the solver takes fewer function evaluations and can be more robust, although the benefit is not obvious in this small example. Using even more derivative information, as in fmincon Interior-Point Algorithm with Analytic Hessian, brings further benefits, such as fewer solver iterations.
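
To quantify the difference yourself, a quick comparison sketch (exact counts depend on the MATLAB release) is to request the output structure and compare its funcCount field with and without the gradient options:

% Optional: compare function evaluation counts with and without gradients
[~,~,~,outGrad] = fmincon(fun,x0,A,b,Aeq,beq,lb,ub,nonlcon,options);
[~,~,~,outNoGrad] = fmincon(fun,x0,A,b,Aeq,beq,lb,ub,nonlcon, ...
    optimoptions('fmincon'));
fprintf('funcCount with gradients: %d, without: %d\n', ...
    outGrad.funcCount,outNoGrad.funcCount)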

Helper Functions

This code creates the objfungrad helper function.

function [f,gradf] = objfungrad(x)
f = exp(x(1))*(4*x(1)^2 + 2*x(2)^2 + 4*x(1)*x(2) + 2*x(2) + 1);
% Gradient of the objective function
if nargout > 1
    gradf = [f + exp(x(1))*(8*x(1) + 4*x(2));
             exp(x(1))*(4*x(1) + 4*x(2) + 2)];
end
end

This code creates the confungrad helper function.

function [c,ceq,DC,DCeq] = confungrad(x)
% Nonlinear inequality constraints c(x) <= 0
c(1) = 1.5 + x(1)*x(2) - x(1) - x(2);
c(2) = -x(1)*x(2) - 10;
% No nonlinear equality constraints
ceq = [];
% Gradients of the constraints: one column per constraint
if nargout > 2
    DC = [x(2)-1, -x(2);
          x(1)-1, -x(1)];
    DCeq = [];
end
end
