Second-Order Cone Programming Algorithm

Definition of Second-Order Cone Programming

A second-order cone programming problem has the form

$$\min_x f^T x$$

subject to the constraints

$$\begin{aligned}
\left\| A_{sc}(i)\,x - b_{sc}(i) \right\| &\le d_{sc}^T(i)\,x - \gamma(i) \\
A\,x &\le b \\
A_{eq}\,x &= b_{eq} \\
lb \le x &\le ub.
\end{aligned}$$

f, x, b, beq, lb, and ub are vectors, and A and Aeq are matrices. For each i, the matrix $A_{sc}(i)$, the vectors $b_{sc}(i)$ and $d_{sc}(i)$, and the scalar $\gamma(i)$ define a second-order cone constraint that you create using secondordercone.

In other words, the problem has a linear objective function and linear constraints, as well as a set of second-order cone constraints of the form $\left\| A_{sc}(i)\,x - b_{sc}(i) \right\| \le d_{sc}^T(i)\,x - \gamma(i)$.
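
The following MATLAB sketch shows how the pieces fit together. The problem data is made up for illustration; secondordercone and coneprog are the toolbox functions this topic describes.

% Minimize x(3) subject to ||[x(1)-1; x(2)-2]|| <= x(3).
% The data here is made up for illustration.
Asc = [1 0 0; 0 1 0];
bsc = [1; 2];
d = [0; 0; 1];
gamma = 0;
socConstraint = secondordercone(Asc,bsc,d,gamma); % ||Asc*x - bsc|| <= d'*x - gamma

f = [0; 0; 1];                      % linear objective f'*x
[x,fval] = coneprog(f,socConstraint)
% Expected solution: x is approximately [1; 2; 0] with fval near 0.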

coneprog Algorithm

The coneprog solver uses the algorithm described in Andersen, Roos, and Terlaky [1]. This method is an interior-point algorithm similar to the Interior-Point linprog Algorithm.

Standard Form

The algorithm starts by placing the problem in standard form. The algorithm adds nonnegative slack variables so that the problem has the form

$$\min_x f^T x$$

subject to the constraints

$$A\,x = b, \qquad x \in K.$$

The solver expands the sizes of the linear coefficient vector f and linear constraint matrix A to account for the slack variables.

The region K is the cross product of Lorentz cones (Equation 1) and the nonnegative orthant. To convert each convex cone

$$\left\| A_{sc}(i)\,x - b_{sc}(i) \right\| \le d_{sc}^T(i)\,x - \gamma(i)$$

to a Lorentz cone (Equation 1), create a column vector of variables $t_1$, $t_2$, …, $t_{n+1}$:

$$\begin{aligned}
t_1 &= d^T x - \gamma \\
t_{2:(n+1)} &= A_{sc}\,x - b_{sc}.
\end{aligned}$$

Here, the number of variables n for each cone i is the number of rows in $A_{sc}(i)$. By its definition, the variable vector t satisfies the inequality

$$\left\| t_{2:(n+1)} \right\| \le t_1. \tag{1}$$

Equation 1 is the definition of a Lorentz cone in (n+1) variables. The variables t appear in the problem in place of the variables x in the convex region K.
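
A quick numeric check of this conversion, using made-up data, confirms that the resulting vector t lies in the Lorentz cone:

% Made-up cone data for one constraint ||Asc*x - bsc|| <= d'*x - gamma.
Asc = [1 0; 0 1];
bsc = [0; 0];
d = [3; 0];
gamma = -1;
x = [1; 2];                  % a point satisfying the cone constraint

t1 = d'*x - gamma;           % t1 = 4
t2 = Asc*x - bsc;            % t_{2:(n+1)} = [1; 2]
norm(t2) <= t1               % returns true: ||t_{2:(n+1)}|| <= t1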

Internally, the algorithm also uses a rotated Lorentz cone in the reformulation of cone constraints, but this topic does not address that case. For details, see Andersen, Roos, and Terlaky [1].

When adding slack variables, the algorithm negates variables, as needed, and adds appropriate constants so that:

  • Variables with only one bound have a lower bound of zero.

  • Variables with two bounds have a lower bound of zero and, using a slack variable, have no upper bound (see the example after this list).

  • Variables without bounds are placed in a Lorentz cone with a slack variable as the constrained variable. This slack variable does not appear in any other expression, objective, or constraint.
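
As a concrete instance of the second item, a variable with the two bounds $l \le x \le u$ is shifted and given a slack variable s:

$$x' = x - l \ge 0, \qquad x' + s = u - l, \qquad s \ge 0.$$

The shifted variable $x'$ has a lower bound of zero and no upper bound, and the original upper bound becomes a linear equality constraint.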

Dual Problem

The dual cone is

$$K^* = \{\, s : s^T x \ge 0 \;\; \forall x \in K \,\}.$$

The dual problem is

$$\max_y b^T y$$

such that

$$A^T y + s = f$$

for some

$$s \in K^*.$$

A dual optimal solution is a point (y,s) that satisfies the dual constraints and maximizes the dual objective.
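
These definitions give weak duality, which is worth making explicit. For any x that is feasible for the primal problem ($A x = b$, $x \in K$) and any (y, s) that is feasible for the dual problem,

$$f^T x - b^T y = (A^T y + s)^T x - (A x)^T y = s^T x \ge 0,$$

where the inequality follows from the definition of the dual cone. Therefore, the dual objective never exceeds the primal objective, and a feasible pair with zero gap is optimal.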

Homogeneous Self-Dual Formulation

To handle potentially infeasible or unbounded problems, the algorithm adds two more variables τ and κ and formulates the problem as homogeneous (equal to zero) and self-dual.

$$\begin{aligned}
A\,x - b\,\tau &= 0 \\
A^T y + s - f\,\tau &= 0 \\
-f^T x + b^T y - \kappa &= 0
\end{aligned} \tag{2}$$

along with the constraints

$$(x;\tau) \in \bar{K}, \qquad (s;\kappa) \in \bar{K}^*. \tag{3}$$

Here, $\bar{K}$ is the cone K adjoined with the nonnegative real line, which is the space for $(x;\tau)$. Similarly, $\bar{K}^*$ is the cone $K^*$ adjoined with the nonnegative real line, which is the space for $(s;\kappa)$. In this formulation, the following lemma shows that τ is the scaling for feasible solutions, and κ is the indicator of an infeasible problem.

Lemma ([1] Lemma 2.1)

Let (x, τ, y, s, κ) be a feasible solution of Equation 2 along with the constraints in Equation 3.

  • $x^T s + \tau\kappa = 0$.

  • If τ > 0, then (x, y, s)/τ is a primal-dual optimal solution of the standard form second-order cone problem.

  • If κ > 0, then at least one of these strict inequalities holds:

    $b^T y > 0$

    $f^T x < 0$.

    If the first inequality holds, then the standard form, primal second-order cone problem is infeasible. If the second inequality holds, then the standard form, dual second-order cone problem is infeasible.

In summary, for feasible problems, the variable τ scales the solution between the original standard form problem and the homogeneous self-dual problem. For infeasible problems, the final iterate (x, y, s, τ, κ) provides a certificate of infeasibility for the original standard form problem.

Start Point

The start point for the iterations is the following feasible point (see the sketch after this list):

  • x = 1 for each nonnegative variable, 1 for the first variable in each Lorentz cone, and 0 otherwise.

  • y = 0.

  • s = (1,0,…,0) for each cone, 1 for each nonnegative variable.

  • τ = 1.

  • κ = 1.
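
A minimal MATLAB sketch of this start point, assuming a made-up variable layout of nNonneg nonnegative variables followed by one Lorentz cone of size nCone, with mEq equality constraints:

nNonneg = 3; nCone = 4; mEq = 2;             % made-up problem dimensions
x0 = [ones(nNonneg,1); 1; zeros(nCone-1,1)]; % 1 for nonnegative variables and the cone's first entry
y0 = zeros(mEq,1);                           % y = 0
s0 = [ones(nNonneg,1); 1; zeros(nCone-1,1)]; % 1 per nonnegative variable, (1,0,...,0) for the cone
tau0 = 1;
kappa0 = 1;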

Central Path

The algorithm attempts to follow the central path, which is the parameterized solution to the following equations for γ decreasing from 1 toward 0.

$$\begin{aligned}
A\,x - b\,\tau &= \gamma\,(A\,x_0 - b\,\tau_0) \\
A^T y + s - f\,\tau &= \gamma\,(A^T y_0 + s_0 - f\,\tau_0) \\
-f^T x + b^T y - \kappa &= \gamma\,(-f^T x_0 + b^T y_0 - \kappa_0) \\
X S e &= \gamma \mu_0 e \\
\tau \kappa &= \gamma \mu_0.
\end{aligned} \tag{4}$$

  • A variable with a 0 subscript denotes the value of that variable at the start point.

  • The variables X and S are arrowhead matrices formed from the x and s vectors, respectively (see the sketch after this list). For a vector $x = [x_1, x_2, \ldots, x_n]$, the arrowhead matrix X has the definition

    $$X = \operatorname{mat}(x) = \begin{bmatrix} x_1 & x_{2:n}^T \\ x_{2:n} & x_1 I \end{bmatrix}.$$

    By its definition, X is symmetric.

  • The variable e is the vector with a 1 in each coordinate corresponding to the $x_1$ coordinate of a Lorentz cone.

  • The variable μ0 has the definition

    $$\mu_0 = \frac{x_0^T s_0 + \tau_0 \kappa_0}{k+1},$$

    where k is the number of nonzero elements in $x_0$.
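
The following MATLAB sketch builds an arrowhead matrix from this definition. The helper name arrowhead is hypothetical, not part of the toolbox:

function X = arrowhead(x)
% Build the arrowhead matrix X = mat(x) for a cone variable x.
x = x(:);                        % force a column vector
n = numel(x);
X = [x(1),   x(2:n)';            % first row: x1 and x_{2:n}'
     x(2:n), x(1)*eye(n-1)];     % remaining block: x_{2:n} and x1*I
end

For example, arrowhead([2;1;0]) returns the symmetric matrix [2 1 0; 1 2 0; 0 0 2].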

The central path begins at the start point and ends at an optimal solution to the homogeneous self-dual problem.

Andersen, Roos, and Terlaky [1] show in Lemma 3.1 that the complementarity condition $x^T s = 0$, where x and s are in a product of Lorentz cones L, is equivalent to the condition

$$X_i S_i e_i = S_i X_i e_i = 0$$

for every cone i. Here, $X_i = \operatorname{mat}(x_i)$, where $x_i$ is the variable associated with Lorentz cone i, $S_i = \operatorname{mat}(s_i)$, and $e_i$ is the unit vector $[1,0,0,\ldots,0]$ of the appropriate dimension. This discussion shows that the central path satisfies the complementarity condition at its end point.

Search Direction

To obtain points near the central path as the parameter γ decreases from 1 toward 0, the algorithm uses Newton's method. The variables to find are labeled (x, τ, y, s, κ). Let dx represent the search direction for the x variables, and so on. Then the Newton step solves the following linear system, derived from Equation 4.

$$\begin{aligned}
A\,d_x - b\,d_\tau &= (\gamma - 1)\,(A\,x_0 - b\,\tau_0) \\
A^T d_y + d_s - f\,d_\tau &= (\gamma - 1)\,(A^T y_0 + s_0 - f\,\tau_0) \\
-f^T d_x + b^T d_y - d_\kappa &= (\gamma - 1)\,(-f^T x_0 + b^T y_0 - \kappa_0) \\
X_0\,d_s + S_0\,d_x &= -X_0 S_0 e + \gamma \mu_0 e \\
\tau_0\,d_\kappa + \kappa_0\,d_\tau &= -\tau_0 \kappa_0 + \gamma \mu_0.
\end{aligned}$$

The algorithm obtains its next point by taking a step in the d direction.

$$\begin{bmatrix} x_1 \\ \tau_1 \\ y_1 \\ s_1 \\ \kappa_1 \end{bmatrix} =
\begin{bmatrix} x_0 \\ \tau_0 \\ y_0 \\ s_0 \\ \kappa_0 \end{bmatrix} +
\alpha \begin{bmatrix} d_x \\ d_\tau \\ d_y \\ d_s \\ d_\kappa \end{bmatrix}$$

for some step length $\alpha \in [0,1]$.

For both numerical stability and accelerated convergence, the algorithm scales the step according to a suggestion in Nesterov and Todd [8]. Also, the algorithm corrects the step according to a variant of Mehrotra's predictor-corrector [7]. (For further details, see Andersen, Roos, and Terlaky [1].)

Step Solver Variations

The preceding discussion applies when the LinearSolver option has the value 'augmented'. The option has other values that change the step calculation to suit different types of problems; an example of setting this option follows the list.

  • 'auto' (default) — coneprog chooses the step solver:

    • If the problem is sparse, the step solver is 'prodchol'.

    • Otherwise, the step solver is 'augmented'.

  • 'normal' — The solver uses a variant of the 'augmented' step that is suitable when the problem is sparse. See Andersen, Roos, and Terlaky [1].

  • 'schur' — The solver uses a modified Schur complement method for handling a sparse problem with a few dense columns. This method is also suitable for large cones. See Andersen [2].

  • 'prodchol' — The solver uses the methods described in Goldfarb and Scheinberg ([4] and [5]) for handling a sparse problem with a few dense columns. This method is also suitable for large cones.
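
For example, to request the Schur complement step solver explicitly (the problem data refers to the earlier sketch; the choice of value is illustrative):

% Request a specific step solver; 'schur' suits sparse problems with a few dense columns.
options = optimoptions('coneprog','LinearSolver','schur');
[x,fval] = coneprog(f,socConstraint,[],[],[],[],[],[],options);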

Iterative Display and Stopping Conditions

At each iteration k, the algorithm computes three relative convergence measures:

  • Primal infeasibility

    $$\text{InfeasPrimal}_k = \frac{\left\| A\,x_k - b\,\tau_k \right\|}{\max\left(1, \left\| A\,x_0 - b\,\tau_0 \right\|\right)}.$$

  • Dual infeasibility

    $$\text{InfeasDual}_k = \frac{\left\| A^T y_k + s_k - f\,\tau_k \right\|}{\max\left(1, \left\| A^T y_0 + s_0 - f\,\tau_0 \right\|\right)}.$$

  • Gap infeasibility

    $$\text{InfeasGap}_k = \frac{\left| -f^T x_k + b^T y_k - \kappa_k \right|}{\max\left(1, \left| -f^T x_0 + b^T y_0 - \kappa_0 \right|\right)}.$$

You can view these three statistics at the command line by specifying iterative display.

options = optimoptions('coneprog','Display','iter');

All three should approach zero when the problem is feasible and the solver converges. For a feasible problem, the variable $\kappa_k$ approaches zero, and the variable $\tau_k$ approaches a positive constant.

One stopping condition is related to the gap infeasibility measure: the solver stops when the following optimality measure decreases below the optimality tolerance (the OptimalityTolerance option).

$$\text{Optimality}_k = \frac{\left| f^T x_k - b^T y_k \right|}{\tau_k + \left| b^T y_k \right|} = \frac{\left| f^T x_k/\tau_k - b^T y_k/\tau_k \right|}{1 + \left| b^T y_k/\tau_k \right|}.$$

This statistic measures the precision of the objective value.

The solver also stops and declares the problem to be infeasible under the following conditions. The three relative infeasibility measures are less than c = ConstraintTolerance, and

$$\tau_k \le c\,\max(1, \kappa_k).$$

If $b^T y_k > 0$, then the solver declares that the primal problem is infeasible. If $f^T x_k < 0$, then the solver declares that the dual problem is infeasible.

The algorithm also stops when

$$\mu_k \le c\,\mu_0$$

and

$$\tau_k \le c\,\max(1, \kappa_k).$$

In this case, coneprog reports that the problem is numerically unstable (exit flag -10).

The remaining stopping condition occurs when at least one infeasibility measure is greater than ConstraintTolerance and the computed step size is too small. In this case, coneprog reports that the search direction became too small and no further progress could be made (exit flag -7).
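
To tighten or loosen these stopping criteria, set the corresponding tolerances. OptimalityTolerance and ConstraintTolerance are coneprog options; the values shown are illustrative.

% Tighten the stopping tolerances; the values shown are illustrative.
options = optimoptions('coneprog', ...
    'OptimalityTolerance',1e-9, ...   % threshold for the Optimality measure
    'ConstraintTolerance',1e-9);      % threshold c in the infeasibility tests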

References

[1] Andersen, E. D., C. Roos, and T. Terlaky. "On Implementing a Primal-Dual Interior-Point Method for Conic Quadratic Optimization." Mathematical Programming, Series B, 95, pp. 249–277 (2003). https://doi.org/10.1007/s10107-002-0349-3

[2] Andersen, K. D. "A Modified Schur-Complement Method for Handling Dense Columns in Interior-Point Methods for Linear Programming." ACM Transactions on Mathematical Software, 22(3), pp. 348–356 (1996).

[3] Ben-Tal, Aharon, and Arkadi Nemirovski. Convex Optimization in Engineering: Modeling, Analysis, Algorithms. (1998).

[4] Goldfarb, D., and K. Scheinberg. "A Product-Form Cholesky Factorization Method for Handling Dense Columns in Interior Point Methods for Linear Programming." Mathematical Programming, 99(1), pp. 1–34 (2004).

[5] Goldfarb, D., and K. Scheinberg. "Product-Form Cholesky Factorization in Interior Point Methods for Second-Order Cone Programming." Mathematical Programming, 103(1), pp. 153–179 (2005).

[6] Luo, Zhi-Quan, Jos F. Sturm, and Shuzhong Zhang. Duality and Self-Duality for Conic Convex Programming. (1996).

[7] Mehrotra, Sanjay. "On the Implementation of a Primal-Dual Interior Point Method." SIAM Journal on Optimization, 2(4), pp. 575–601 (1992). https://doi.org/10.1137/0802028

[8] Nesterov, Yu. E., and M. J. Todd. "Self-Scaled Barriers and Interior-Point Methods for Convex Programming." Mathematics of Operations Research, 22(1), pp. 1–42 (1997). https://doi.org/10.1287/moor.22.1.1
