Quadratic Programming Algorithms

Quadratic Programming Definition

Quadratic programming is the problem of finding a vector x that minimizes a quadratic function, possibly subject to linear constraints:

$$\min_x \; \tfrac{1}{2}x^T H x + c^T x \tag{1}$$

such that A·x ≤ b, Aeq·x = beq, l ≤ x ≤ u.
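For example, the following call solves a small instance of Equation 1 with quadprog. The data here is made up for illustration; quadprog refers to the linear term as f rather than c.

```matlab
% Minimize 1/2*x'*H*x + c'*x subject to A*x <= b and bounds (illustrative data).
H  = [2 0; 0 2];              % positive definite quadratic term
c  = [-2; -5];                % linear term (the second input to quadprog)
A  = [1 1];  b = 2;           % one linear inequality constraint
lb = [0; 0]; ub = [10; 10];   % bound constraints
x  = quadprog(H, c, A, b, [], [], lb, ub);
```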

interior-point-convex quadprog Algorithm

The interior-point-convex algorithm performs the following steps:

Note

The algorithm has two code paths. It takes one when the Hessian matrix H is an ordinary (full) matrix of doubles, and it takes the other when H is a sparse matrix. For details of the sparse data type, see Sparse Matrices. Generally, the algorithm is faster for large problems that have relatively few nonzero terms when you specify H as sparse. Similarly, the algorithm is faster for small or relatively dense problems when you specify H as full.
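For example, the following calls select the two code paths by passing H as a sparse or a full matrix; the problem data is made up for illustration.

```matlab
% Large problem with a sparse Hessian: pass H as a sparse matrix.
n  = 5000;
Hs = spdiags((1:n)', 0, n, n);     % sparse, positive definite, diagonal Hessian
cs = -ones(n, 1);
xs = quadprog(Hs, cs, [], [], [], [], zeros(n,1), ones(n,1));

% Small, relatively dense problem: pass H as an ordinary full matrix.
Hd = [2 1; 1 2];
xd = quadprog(Hd, [-1; -1], [], [], [], [], [0; 0], [1; 1]);
```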

Presolve/Postsolve

The algorithm first tries to simplify the problem by removing redundancies and simplifying constraints. The tasks performed during the presolve step can include the following:

  • Check if any variables have equal upper and lower bounds. If so, check for feasibility, and then fix and remove the variables.

  • Check if any linear inequality constraint involves only one variable. If so, check for feasibility, and then change the linear constraint to a bound.

  • Check if any linear equality constraint involves only one variable. If so, check for feasibility, and then fix and remove the variable.

  • Check if any linear constraint matrix has zero rows. If so, check for feasibility, and then delete the rows.

  • Determine if the bounds and linear constraints are consistent.

  • Check if any variables appear only as linear terms in the objective function and do not appear in any linear constraint. If so, check for feasibility and boundedness, and then fix the variables at their appropriate bounds.

  • Change any linear inequality constraints to linear equality constraints by adding slack variables.

If the algorithm detects an infeasible or unbounded problem, it halts and issues an appropriate exit message.

The algorithm might arrive at a single feasible point, which represents the solution.

If the algorithm does not detect an infeasible or unbounded problem in the presolve step, and if the presolve has not produced the solution, the algorithm continues to its next steps. After reaching a stopping criterion, the algorithm reconstructs the original problem, undoing any presolve transformations. This final step is the postsolve step.

For details, see Gould and Toint [63].
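The following sketch illustrates two of these reductions (fixing variables with equal bounds, and turning a single-variable inequality into a bound). It is a simplified illustration of the idea, not quadprog's internal presolve code, and assumes A, b, lb, and ub are already defined.

```matlab
% Hedged sketch of two presolve-style reductions (not quadprog's implementation).
tol = 1e-12;

% Fix and remove variables whose lower and upper bounds coincide.
fixed = abs(ub - lb) <= tol;      % logical index of fixed variables
xfix  = lb(fixed);                % their fixed values; the solver would substitute
                                  % these into the objective and constraints

% Convert a linear inequality that involves only one variable into a bound.
for i = 1:size(A, 1)
    nz = find(A(i, :));
    if numel(nz) == 1             % row i touches a single variable j
        j = nz;
        if A(i, j) > 0            % A(i,j)*x(j) <= b(i)  =>  x(j) <= b(i)/A(i,j)
            ub(j) = min(ub(j), b(i)/A(i, j));
        else                      % negative coefficient gives a lower bound
            lb(j) = max(lb(j), b(i)/A(i, j));
        end
    end
end
```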

Generate Initial Point

The initial point x0 for the algorithm is:

  1. Initialize x0 to ones(n,1), where n is the number of rows in H.

  2. For components that have both an upper bound ub and a lower bound lb, if a component of x0 is not strictly inside the bounds, the component is set to (ub + lb)/2.

  3. For components that have only one bound, modify the component if necessary to lie strictly inside the bound.

  4. Take a predictor step (see Predictor-Corrector), with minor corrections for feasibility, not a full predictor-corrector step. This places the initial point closer to the central path without entailing the overhead of a full predictor-corrector step. For details of the central path, see Nocedal and Wright [7], page 397.
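A simplified sketch of steps 1–3 follows; the shift used for single-bound components is an assumption made for illustration, and step 4 (the modified predictor step) is omitted.

```matlab
% Hedged sketch of the initial-point construction (steps 1-3 only).
n  = size(H, 1);
x0 = ones(n, 1);                                  % step 1

both = isfinite(lb) & isfinite(ub);               % step 2: doubly bounded components
bad  = both & (x0 <= lb | x0 >= ub);              % not strictly inside the bounds
x0(bad) = (ub(bad) + lb(bad)) / 2;

onlyLb = isfinite(lb) & ~isfinite(ub);            % step 3: single-bound components
x0(onlyLb) = max(x0(onlyLb), lb(onlyLb) + 1);     % assumed shift strictly above lb
onlyUb = ~isfinite(lb) & isfinite(ub);
x0(onlyUb) = min(x0(onlyUb), ub(onlyUb) - 1);     % assumed shift strictly below ub
```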

Predictor-Corrector

The sparse and full interior-point-convex algorithms differ mainly in the predictor-corrector phase. The algorithms are similar, but differ in some details. For the basic algorithm description, see Mehrotra [47].

The algorithm begins by turning the linear inequalities Ax ≤ b into inequalities of the form Ax ≥ b by multiplying A and b by –1. This has no bearing on the solution, but makes the problem of the same form found in some literature.

Sparse Predictor-Corrector. Similar to the fmincon interior-point algorithm, the sparse interior-point-convex algorithm tries to find a point where the Karush-Kuhn-Tucker (KKT) conditions hold. For the quadratic programming problem described in Quadratic Programming Definition, these conditions are:

$$\begin{aligned}
Hx + c - A_{eq}^T y - \bar{A}^T z &= 0\\
\bar{A}x - \bar{b} - s &= 0\\
A_{eq}x - b_{eq} &= 0\\
s_i z_i &= 0, \quad i = 1,2,\ldots,m\\
s &\ge 0\\
z &\ge 0.
\end{aligned}$$

Here

  • A̅ is the extended linear inequality matrix that includes bounds written as linear inequalities. b̅ is the corresponding linear inequality vector, including bounds.

  • s is the vector of slacks that convert inequality constraints to equalities. s has length m, the number of linear inequalities and bounds.

  • z is the vector of Lagrange multipliers corresponding to s.

  • y is the vector of Lagrange multipliers associated with the equality constraints.

The algorithm first predicts a step from the Newton-Raphson formula, and then computes a corrector step. The corrector attempts to better enforce the nonlinear constraint si zi = 0.

Definitions for the predictor step:

  • rd, the dual residual:

    $$r_d = Hx + c - A_{eq}^T y - \bar{A}^T z.$$

  • req, the primal equality constraint residual:

    $$r_{eq} = A_{eq}x - b_{eq}.$$

  • rineq, the primal inequality constraint residual, which includes bounds and slacks:

    $$r_{ineq} = \bar{A}x - \bar{b} - s.$$

  • rsz, the complementarity residual:

    $$r_{sz} = Sz.$$

    S is the diagonal matrix of slack terms, and z is the column matrix of Lagrange multipliers.

  • rc, the average complementarity:

    $$r_c = \frac{s^T z}{m}.$$
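In code, these residuals take the following form. This is a sketch under the assumption that Abar and bbar already contain the bound constraints written as linear inequalities.

```matlab
% Hedged sketch of the predictor-step residuals for the sparse formulation.
rd    = H*x + c - Aeq'*y - Abar'*z;   % dual residual
req   = Aeq*x - beq;                  % primal equality residual
rineq = Abar*x - bbar - s;            % primal inequality residual (with slacks)
rsz   = s .* z;                       % complementarity residual, S*z elementwise
rc    = (s' * z) / numel(s);          % average complementarity
```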

In a Newton step, the changes in x, s, y, and z are given by:

$$\begin{pmatrix} H & 0 & -A_{eq}^T & -\bar{A}^T \\ A_{eq} & 0 & 0 & 0 \\ \bar{A} & -I & 0 & 0 \\ 0 & Z & 0 & S \end{pmatrix} \begin{pmatrix} \Delta x\\ \Delta s\\ \Delta y\\ \Delta z \end{pmatrix} = -\begin{pmatrix} r_d \\ r_{eq} \\ r_{ineq} \\ r_{sz} \end{pmatrix}. \tag{2}$$

However, a full Newton step might be infeasible because of the positivity constraints on s and z. Therefore, quadprog shortens the step, if necessary, to maintain positivity.
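One common way to shorten such a step is a fraction-to-boundary rule, sketched below; the factor 0.995 is an assumption for illustration, and quadprog's exact rule may differ.

```matlab
% Hedged sketch: shorten the Newton step so that s and z stay strictly positive.
tau   = 0.995;                                    % stay a fraction inside the boundary
maxS  = min([Inf; -s(ds < 0) ./ ds(ds < 0)]);     % largest step keeping s > 0
maxZ  = min([Inf; -z(dz < 0) ./ dz(dz < 0)]);     % largest step keeping z > 0
alpha = min(1, tau * min(maxS, maxZ));            % shortened step length
x = x + alpha*dx;   s = s + alpha*ds;
y = y + alpha*dy;   z = z + alpha*dz;
```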

Additionally, to maintain a “centered” position in the interior, instead of trying to solve si zi = 0, the algorithm takes a positive parameter σ and tries to solve

$$s_i z_i = \sigma r_c.$$

quadprog replaces rsz in the Newton step equation with rsz + ΔsΔz – σrc1, where 1 is the vector of ones. Also, quadprog reorders the Newton equations to obtain a symmetric, more numerically stable system for the predictor step calculation.

After calculating the corrected Newton step, the algorithm performs more calculations to get both a longer current step and to prepare for better subsequent steps. These multiple correction calculations can improve both performance and robustness. For details, see Gondzio [4].

Full Predictor-Corrector. The full predictor-corrector algorithm does not combine bounds into linear constraints, so it has another set of slack variables corresponding to the bounds. The algorithm shifts lower bounds to zero. If a variable has only an upper bound, the algorithm converts that bound into a lower bound of zero by negating the inequality.

A̅ is the extended linear matrix that includes both linear inequalities and linear equalities. b̅ is the corresponding linear equality vector. A̅ also includes terms for extending the vector x with slack variables s that turn the inequality constraints into equality constraints:

$$\bar{A}x = \begin{pmatrix} A_{eq} & 0\\ A & I \end{pmatrix} \begin{pmatrix} x_0\\ s \end{pmatrix},$$

where x0 means the original x vector.

The KKT conditions are

$$\begin{aligned}
Hx + c - \bar{A}^T y - v + w &= 0\\
\bar{A}x &= \bar{b}\\
x + t &= u\\
v_i x_i &= 0, \quad i = 1,2,\ldots,m\\
w_i t_i &= 0, \quad i = 1,2,\ldots,n\\
x, v, w, t &\ge 0.
\end{aligned} \tag{3}$$

To find the solution x, slack variables, and dual variables for Equation 3, the algorithm basically considers a Newton-Raphson step:

$$\begin{pmatrix} H & -\bar{A}^T & 0 & -I & I\\ \bar{A} & 0 & 0 & 0 & 0\\ -I & 0 & -I & 0 & 0\\ V & 0 & 0 & X & 0\\ 0 & 0 & W & 0 & T \end{pmatrix} \begin{pmatrix} \Delta x\\ \Delta y\\ \Delta t\\ \Delta v\\ \Delta w \end{pmatrix} = -\begin{pmatrix} Hx + c - \bar{A}^T y - v + w\\ \bar{A}x - \bar{b}\\ u - x - t\\ VX\\ WT \end{pmatrix} = -\begin{pmatrix} r_d\\ r_p\\ r_{ub}\\ r_{vx}\\ r_{wt} \end{pmatrix}, \tag{4}$$

where X, V, W, and T are diagonal matrices corresponding to the vectors x, v, w, and t, respectively. The residual vectors on the far right side of the equation are:

  • rd, the dual residual

  • rp, the primal residual

  • rub, the upper bound residual

  • rvx, the lower bound complementarity residual

  • rwt, the upper bound complementarity residual

The algorithm solves Equation 4 by first converting it to the symmetric matrix form

$$\begin{pmatrix} -D & \bar{A}^T\\ \bar{A} & 0 \end{pmatrix} \begin{pmatrix} \Delta x\\ \Delta y \end{pmatrix} = \begin{pmatrix} -R\\ -r_p \end{pmatrix}, \tag{5}$$

where

$$\begin{aligned}
D &= H + X^{-1}V + T^{-1}W\\
R &= -r_d - X^{-1}r_{vx} + T^{-1}r_{wt} + T^{-1}W r_{ub}.
\end{aligned}$$

All the matrix inverses in the definitions of D and R are simple to compute because the matrices are diagonal.

To derive Equation 5 from Equation 4, notice that the second row of Equation 5 is the same as the second matrix row of Equation 4. The first row of Equation 5 comes from solving the last two rows of Equation 4 for Δv and Δw, and then solving for Δt.

To solve Equation 5, the algorithm follows the essential elements of Altman and Gondzio [1]. The algorithm solves the symmetric system by an LDL decomposition. As pointed out by authors such as Vanderbei and Carpenter [2], this decomposition is numerically stable without any pivoting, so it can be fast.
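A sketch of this reduction and solve, using MATLAB's ldl factorization on the symmetric indefinite matrix of Equation 5, is shown below. It is an illustration of the idea, not quadprog's implementation, and assumes the residuals and the diagonal vectors x, v, t, w are already available.

```matlab
% Hedged sketch: form D and R (Equation 5) and solve with an LDL' factorization.
% X, V, T, W are diagonal, so X^{-1}V and T^{-1}W are cheap elementwise ratios.
D   = H + diag(v./x) + diag(w./t);          % D = H + X^{-1}V + T^{-1}W
R   = -rd - rvx./x + rwt./t + (w./t).*rub;

K   = [-D, Abar'; Abar, zeros(size(Abar,1))];
rhs = [-R; -rp];
[L, Dblk, P] = ldl(K);                      % symmetric indefinite factorization
sol = P * (L' \ (Dblk \ (L \ (P'*rhs))));   % K = P*L*Dblk*L'*P'
dx  = sol(1:numel(x));
dy  = sol(numel(x)+1:end);
```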

After calculating the corrected Newton step, the algorithm performs more calculations to get both a longer current step and to prepare for better subsequent steps. These multiple correction calculations can improve both performance and robustness. For details, see Gondzio [4].

The full quadprog predictor-corrector algorithm is largely the same as the linprog 'interior-point' algorithm, but includes quadratic terms as well. See Predictor-Corrector.

References

[1] Altman, Anna, and J. Gondzio. "Regularized symmetric indefinite systems in interior point methods for linear and quadratic optimization." Optimization Methods and Software, 1999.

[2] Vanderbei, R. J., and T. J. Carpenter. "Symmetric indefinite systems for interior point methods." Mathematical Programming 58, 1993, pp. 1–32.

Stopping Conditions

The predictor-corrector algorithm iterates until it reaches a point that is feasible (satisfies the constraints to within tolerances) and where the relative step sizes are small. Specifically, define

$$\rho = \max\left(1, \|H\|, \|\bar{A}\|, \|A_{eq}\|, \|c\|, \|\bar{b}\|, \|b_{eq}\|\right).$$

The algorithm stops when all of these conditions are satisfied:

$$\begin{aligned}
\|r_p\|_1 + \|r_{ub}\|_1 &\le \rho \cdot \text{TolCon}\\
\|r_d\|_\infty &\le \rho \cdot \text{TolFun}\\
r_c &\le \text{TolFun},
\end{aligned}$$

where

$$r_c = \max_i \left( \min\left(|x_i v_i|, |x_i|, |v_i|\right),\ \min\left(|t_i w_i|, |t_i|, |w_i|\right) \right).$$

rc essentially measures the size of the complementarity residuals xv and tw, which are each vectors of zeros at a solution.
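In code form, the stopping test looks roughly like the following sketch; the norm used inside ρ is not specified in the text above, so the choice of the infinity norm here is an assumption.

```matlab
% Hedged sketch of the stopping test for the predictor-corrector iterations.
rho  = max([1, norm(H,Inf), norm(Abar,Inf), norm(Aeq,Inf), ...
            norm(c,Inf), norm(bbar,Inf), norm(beq,Inf)]);
rc   = max(max(min([abs(x.*v), abs(x), abs(v)], [], 2)), ...
           max(min([abs(t.*w), abs(t), abs(w)], [], 2)));
done = (norm(rp,1) + norm(rub,1) <= rho*TolCon) && ...
       (norm(rd,Inf) <= rho*TolFun) && ...
       (rc <= TolFun);
```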

Infeasibility Detection

quadprog calculates a merit function φ at every iteration. The merit function is a measure of feasibility. quadprog stops if the merit function grows too large. In this case, quadprog declares the problem to be infeasible.

The merit function is related to the KKT conditions for the problem (see Predictor-Corrector). Use the following definitions:

$$\begin{aligned}
\rho &= \max\left(1, \|H\|, \|\bar{A}\|, \|A_{eq}\|, \|c\|, \|\bar{b}\|, \|b_{eq}\|\right)\\
r_{eq} &= A_{eq}x - b_{eq}\\
r_{ineq} &= \bar{A}x - \bar{b} - s\\
r_d &= Hx + c + A_{eq}^T \lambda_{eq} + \bar{A}^T \bar{\lambda}_{ineq}\\
g &= \tfrac{1}{2}x^T H x + c^T x - \bar{b}^T \bar{\lambda}_{ineq} - b_{eq}^T \lambda_{eq}.
\end{aligned}$$

The notation A̅ and b̅ means the linear inequality coefficients, augmented with terms to represent bounds for the sparse algorithm. The notation λ̄ineq similarly represents the Lagrange multipliers for the linear inequality constraints, including bound constraints. This quantity was called z in Predictor-Corrector, and λeq was called y.

The merit function φ is

$$\varphi = \frac{1}{\rho}\left( \max\left(\|r_{eq}\|_\infty, \|r_{ineq}\|_\infty, \|r_d\|_\infty\right) + g \right).$$

If this merit function becomes too large, quadprog declares the problem to be infeasible and halts with exit flag -2.
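A sketch of the merit-function computation follows; lambdaEq and lambdaIneq stand for the multipliers y and z described above, and phiMax is an assumed threshold for "too large".

```matlab
% Hedged sketch of the infeasibility merit function.
rho   = max([1, norm(H,Inf), norm(Abar,Inf), norm(Aeq,Inf), ...
             norm(c,Inf), norm(bbar,Inf), norm(beq,Inf)]);
req   = Aeq*x - beq;
rineq = Abar*x - bbar - s;
rd    = H*x + c + Aeq'*lambdaEq + Abar'*lambdaIneq;
g     = 0.5*(x'*H*x) + c'*x - bbar'*lambdaIneq - beq'*lambdaEq;
phi   = (max([norm(req,Inf), norm(rineq,Inf), norm(rd,Inf)]) + g) / rho;
if phi > phiMax        % assumed threshold for declaring infeasibility
    exitflag = -2;     % report the problem as infeasible
end
```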

trust-region-reflective quadprog Algorithm

Many of the methods used in Optimization Toolbox™ solvers are based on trust regions, a simple yet powerful concept in optimization.

To understand the trust-region approach to optimization, consider the unconstrained minimization problem, minimize f(x), where the function takes vector arguments and returns scalars. Suppose you are at a point x in n-space and you want to improve, i.e., move to a point with a lower function value. The basic idea is to approximate f with a simpler function q, which reasonably reflects the behavior of function f in a neighborhood N around the point x. This neighborhood is the trust region. A trial step s is computed by minimizing (or approximately minimizing) over N. This is the trust-region subproblem,

$$\min_s \{ q(s),\ s \in N \}. \tag{6}$$

The current point is updated to be x + s if f(x + s) < f(x); otherwise, the current point remains unchanged and N, the region of trust, is shrunk and the trial step computation is repeated.

The key questions in defining a specific trust-region approach to minimizing f(x) are how to choose and compute the approximation q (defined at the current point x), how to choose and modify the trust region N, and how accurately to solve the trust-region subproblem. This section focuses on the unconstrained problem. Later sections discuss additional complications due to the presence of constraints on the variables.

In the standard trust-region method ([48]), the quadratic approximation q is defined by the first two terms of the Taylor approximation to f at x; the neighborhood N is usually spherical or ellipsoidal in shape. Mathematically, the trust-region subproblem is typically stated

$$\min\left\{ \tfrac{1}{2}s^T H s + s^T g \ \text{ such that } \ \|Ds\| \le \Delta \right\}, \tag{7}$$

where g is the gradient of f at the current point x, H is the Hessian matrix (the symmetric matrix of second derivatives), D is a diagonal scaling matrix, Δ is a positive scalar, and ‖·‖ is the 2-norm. Good algorithms exist for solving Equation 7 (see [48]); such algorithms typically involve the computation of all eigenvalues of H and a Newton process applied to the secular equation

$$\frac{1}{\Delta} - \frac{1}{\|s\|} = 0.$$

Such algorithms provide an accurate solution to Equation 7. However, they require time proportional to several factorizations of H. Therefore, for large-scale problems a different approach is needed. Several approximation and heuristic strategies, based on Equation 7, have been proposed in the literature ([42] and [50]). The approximation approach followed in Optimization Toolbox solvers is to restrict the trust-region subproblem to a two-dimensional subspace S ([39] and [42]). Once the subspace S has been computed, the work to solve Equation 7 is trivial even if full eigenvalue/eigenvector information is needed (since in the subspace, the problem is only two-dimensional). The dominant work has now shifted to the determination of the subspace.

The two-dimensional subspace S is determined with the aid of a preconditioned conjugate gradient process described below. The solver defines S as the linear space spanned by s1 and s2, where s1 is in the direction of the gradient g, and s2 is either an approximate Newton direction, i.e., a solution to

$$H s_2 = -g, \tag{8}$$

or a direction of negative curvature,

$$s_2^T H s_2 < 0. \tag{9}$$

The philosophy behind this choice of S is to force global convergence (via the steepest descent direction or negative curvature direction) and achieve fast local convergence (via the Newton step, when it exists).

A sketch of unconstrained minimization using trust-region ideas is now easy to give:

  1. Formulate the two-dimensional trust-region subproblem.

  2. Solve Equation 7 to determine the trial step s.

  3. If f(x + s) < f(x), then x = x + s.

  4. Adjust Δ.

These four steps are repeated until convergence. The trust-region dimension Δ is adjusted according to standard rules. In particular, it is decreased if the trial step is not accepted, i.e., f(x + s) ≥ f(x). See [46] and [49] for a discussion of this aspect.
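The following toy loop puts these four steps together for a smooth function f with gradient gfun and Hessian Hfun (f, gfun, and Hfun are all assumed function handles). It is a hedged sketch of the trust-region idea only: the 2-D subproblem is solved crudely by a boundary search, whereas the toolbox uses an exact small-dimensional solve with preconditioning and scaling.

```matlab
% Toy trust-region loop (illustrative sketch, not the toolbox implementation).
x = x0;  Delta = 1;
for iter = 1:100
    g = gfun(x);  H = Hfun(x);
    % Two-dimensional subspace: gradient direction and a Newton-like direction.
    s1 = -g;
    s2 = -(H + 1e-8*eye(numel(x))) \ g;
    [S, ~] = qr([s1 s2], 0);                % orthonormal basis, n-by-2
    Hs = S'*H*S;  gs = S'*g;
    % Minimize the 2-D model over the ball ||w|| <= Delta.
    w = -Hs \ gs;
    if norm(w) > Delta || any(eig(Hs) < 0)  % constrained case: crude boundary search
        th = linspace(0, 2*pi, 360);
        W  = Delta*[cos(th); sin(th)];
        m  = 0.5*sum(W.*(Hs*W), 1) + gs'*W;
        [~, k] = min(m);
        w = W(:, k);
    end
    s = S*w;
    if f(x + s) < f(x)                      % accept the step and expand the region
        x = x + s;  Delta = 2*Delta;
    else                                    % reject the step and shrink the region
        Delta = Delta/2;
    end
    if norm(g) < 1e-8 || Delta < 1e-12, break, end
end
```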

Optimization Toolbox solvers treat a few important special cases of f with specialized functions: nonlinear least-squares, quadratic functions, and linear least-squares. However, the underlying algorithmic ideas are the same as for the general case. These special cases are discussed in later sections.

The subspace trust-region method is used to determine a search direction. However, instead of restricting the step to (possibly) one reflection step, as in the nonlinear minimization case, a piecewise reflective line search is conducted at each iteration. See [45] for details of the line search.

Preconditioned Conjugate Gradient Method

A popular way to solve large, symmetric, positive definite systems of linear equations Hp = –g is the method of Preconditioned Conjugate Gradients (PCG). This iterative approach requires the ability to calculate matrix-vector products of the form H·v, where v is an arbitrary vector. The symmetric positive definite matrix M is a preconditioner for H. That is, M = C², where C⁻¹HC⁻¹ is a well-conditioned matrix or a matrix with clustered eigenvalues.

In a minimization context, you can assume that the Hessian matrix H is symmetric. However, H is guaranteed to be positive definite only in the neighborhood of a strong minimizer. Algorithm PCG exits when it encounters a direction of negative (or zero) curvature, that is, dᵀHd ≤ 0. The PCG output direction p is either a direction of negative curvature or an approximate solution to the Newton system Hp = –g. In either case, p helps to define the two-dimensional subspace used in the trust-region approach discussed in Trust-Region Methods for Nonlinear Minimization.
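A minimal, unpreconditioned version of such a CG loop, exiting on nonpositive curvature, might look like this sketch (not the toolbox's PCG routine):

```matlab
% Hedged sketch: conjugate gradients for H*p = -g with a negative-curvature exit.
function p = cgSketch(H, g, tol, maxit)
    p = zeros(numel(g), 1);
    r = -g;                     % residual of H*p = -g at p = 0
    d = r;                      % first search direction
    for k = 1:maxit
        Hd   = H*d;
        curv = d'*Hd;
        if curv <= 0            % direction of negative (or zero) curvature
            p = d;              % return it for use in the 2-D subspace
            return
        end
        alpha = (r'*r) / curv;
        p     = p + alpha*d;
        rnew  = r - alpha*Hd;
        if norm(rnew) < tol, return, end
        beta  = (rnew'*rnew) / (r'*r);
        d     = rnew + beta*d;
        r     = rnew;
    end
end
```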

Linear Equality Constraints

Linear constraints complicate the situation described for unconstrained minimization. However, the underlying ideas described previously can be carried through in a clean and efficient way. The trust-region methods in Optimization Toolbox solvers generate strictly feasible iterates.

The general linear equality constrained minimization problem can be written

$$\min\{ f(x) \ \text{ such that } \ Ax = b \}, \tag{10}$$

where A is an m-by-n matrix (m ≤ n). Some Optimization Toolbox solvers preprocess A to remove strict linear dependencies using a technique based on the LU factorization of Aᵀ [46]. Here A is assumed to be of rank m.

The method used to solve Equation 10 differs from the unconstrained approach in two significant ways. First, an initial feasible point x0 is computed, using a sparse least-squares step, so that Ax0 = b. Second, Algorithm PCG is replaced with Reduced Preconditioned Conjugate Gradients (RPCG), see [46], in order to compute an approximate reduced Newton step (or a direction of negative curvature in the null space of A). The key linear algebra step involves solving systems of the form

$$\begin{bmatrix} C & \tilde{A}^T\\ \tilde{A} & 0 \end{bmatrix} \begin{bmatrix} s\\ t \end{bmatrix} = \begin{bmatrix} r\\ 0 \end{bmatrix}, \tag{11}$$

where Ã approximates A (small nonzeros of A are set to zero provided rank is not lost) and C is a sparse symmetric positive-definite approximation to H, i.e., C = H. See [46] for more details.

Box Constraints

The box constrained problem is of the form

$$\min\{ f(x) \ \text{ such that } \ l \le x \le u \}, \tag{12}$$

where l is a vector of lower bounds, and u is a vector of upper bounds. Some (or all) of the components of l can be equal to –∞, and some (or all) of the components of u can be equal to ∞. The method generates a sequence of strictly feasible points. Two techniques are used to maintain feasibility while achieving robust convergence behavior. First, a scaled modified Newton step replaces the unconstrained Newton step (to define the two-dimensional subspace S). Second, reflections are used to increase the step size.

The scaled modified Newton step arises from examining the Kuhn-Tucker necessary conditions for Equation 12,

$$\left(D(x)\right)^{-2} g = 0, \tag{13}$$

where

$$D(x) = \operatorname{diag}\left(|v_k|^{-1/2}\right),$$

and the vector v(x) is defined below, for each 1 ≤ i ≤ n:

  • If gi < 0 and ui < ∞, then vi = xi – ui

  • If gi ≥ 0 and li > –∞, then vi = xi – li

  • If gi < 0 and ui = ∞, then vi = –1

  • If gi ≥ 0 and li = –∞, then vi = 1

The nonlinear system Equation 13 is not differentiable everywhere. Nondifferentiability occurs when vi = 0. You can avoid such points by maintaining strict feasibility, i.e., restricting l < x < u.

The scaled modified Newton step sk for the nonlinear system of equations given by Equation 13 is defined as the solution to the linear system

$$\hat{M} D s^N = -\hat{g} \tag{14}$$

at the kth iteration, where

$$\hat{g} = D^{-1} g = \operatorname{diag}\left(|v|^{1/2}\right) g, \tag{15}$$

and

$$\hat{M} = D^{-1} H D^{-1} + \operatorname{diag}(g) J^v. \tag{16}$$

Here Jv plays the role of the Jacobian of |v|. Each diagonal component of the diagonal matrix Jv equals 0, –1, or 1. If all the components of l and u are finite, Jv = diag(sign(g)). At a point where gi = 0, vi might not be differentiable, so Jvii = 0 is defined at such a point. Nondifferentiability of this type is not a cause for concern because, for such a component, it is not significant which value vi takes. Further, |vi| will still be discontinuous at this point, but the function |vi|·gi is continuous.
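The quantities above can be assembled as in the following sketch, which assumes the current point x, gradient g, Hessian H, and bounds lb and ub are given; it is an illustration of Equations 13–16, not the solver's code.

```matlab
% Hedged sketch of the scaled modified Newton step (Equations 13-16).
neg = g < 0;
v   = ones(size(x));
v(neg  &  isfinite(ub)) = x(neg  &  isfinite(ub)) - ub(neg  &  isfinite(ub));
v(~neg &  isfinite(lb)) = x(~neg &  isfinite(lb)) - lb(~neg &  isfinite(lb));
v(neg  & ~isfinite(ub)) = -1;
v(~neg & ~isfinite(lb)) =  1;

Dinv = diag(abs(v).^(1/2));                  % D^{-1} = diag(|v|^{1/2})
D    = diag(abs(v).^(-1/2));                 % D     = diag(|v|^{-1/2})
Jv   = diag(sign(g));                        % Jacobian of |v| (all-finite-bounds case)
ghat = Dinv*g;                               % Equation 15
Mhat = Dinv*H*Dinv + diag(g)*Jv;             % Equation 16
sN   = -(Mhat*D) \ ghat;                     % solve Mhat*D*sN = -ghat (Equation 14)
```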

Second, reflections are used to increase the step size. A (single) reflection step is defined as follows. Given a step p that intersects a bound constraint, consider the first bound constraint crossed by p; assume it is the ith bound constraint (either the ith upper or ith lower bound). Then the reflection step pR = p except in the ith component, where pRi = –pi.

active-set quadprog Algorithm

After completing a presolve step, the active-set algorithm proceeds in two phases.

  • Phase 1 — Obtain a feasible point with respect to all constraints.

  • Phase 2 — Iteratively lower the objective function while maintaining a list of the active constraints and maintaining feasibility in each iteration.

The active-set strategy (also known as a projection method) is similar to the strategy of Gill et al., described in [18] and [17].

Presolve Step

The algorithm first tries to simplify the problem by removing redundancies and simplifying constraints. The tasks performed during the presolve step can include the following:

  • Check if any variables have equal upper and lower bounds. If so, check for feasibility, and then fix and remove the variables.

  • Check if any linear inequality constraint involves only one variable. If so, check for feasibility, and then change the linear constraint to a bound.

  • Check if any linear equality constraint involves only one variable. If so, check for feasibility, and then fix and remove the variable.

  • Check if any linear constraint matrix has zero rows. If so, check for feasibility, and then delete the rows.

  • Determine if the bounds and linear constraints are consistent.

  • Check if any variables appear only as linear terms in the objective function and do not appear in any linear constraint. If so, check for feasibility and boundedness, and then fix the variables at their appropriate bounds.

  • Change any linear inequality constraints to linear equality constraints by adding slack variables.

If the algorithm detects an infeasible or unbounded problem, it halts and issues an appropriate exit message.

The algorithm might arrive at a single feasible point, which represents the solution.

If the algorithm does not detect an infeasible or unbounded problem in the presolve step, and if the presolve has not produced the solution, the algorithm continues to its next steps. After reaching a stopping criterion, the algorithm reconstructs the original problem, undoing any presolve transformations. This final step is the postsolve step.

For details, see Gould and Toint [63].

Phase 1 Algorithm

In Phase 1, the algorithm attempts to find a point x that satisfies all constraints, with no consideration of the objective function. quadprog runs the Phase 1 algorithm only if the supplied initial point x0 is infeasible.

To begin, the algorithm tries to find a point that is feasible with respect to all equality constraints, such as X = Aeq\beq. If there is no solution x to the equations Aeq*x = beq, then the algorithm halts. If there is a solution X, the next step is to satisfy the bounds and linear inequalities. If there are no equality constraints, the algorithm sets X = x0, the initial point.

Starting from X, the algorithm calculates M = max(A*X – b, X – ub, lb – X). If M <= options.ConstraintTolerance, then the point X is feasible and the Phase 1 algorithm halts.
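In code, this feasibility check looks roughly like the following sketch:

```matlab
% Hedged sketch of the Phase 1 feasibility check at the point X.
M = max([A*X - b; X - ub; lb - X]);        % largest constraint violation
if M <= options.ConstraintTolerance
    x = X;                                 % X is already feasible; Phase 1 ends here
else
    % otherwise, set up and solve the auxiliary linear programming problem in gamma
end
```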

If M > options.ConstraintTolerance, the algorithm introduces a nonnegative slack variable γ for the auxiliary linear programming problem

$$\min_{x,\gamma}\ \gamma$$

such that

$$\begin{aligned}
Ax - \gamma &\le b\\
A_{eq}x &= b_{eq}\\
lb - \gamma \le x &\le ub + \gamma\\
\gamma &\ge -\rho.
\end{aligned}$$

Here, ρ is the ConstraintTolerance option multiplied by the absolute value of the largest element in A and Aeq. If the algorithm finds γ = 0 and a point x that satisfies the equations and inequalities, then x is a feasible Phase 1 point. If the auxiliary linear programming problem has no solution x with γ = 0, then the Phase 1 problem is infeasible.

To solve the auxiliary linear programming problem, the algorithm sets γ0 = M + 1, sets x0 = X, and initializes the active set as the fixed variables (if any) and all the equality constraints. The algorithm reformulates the linear programming variables p to be the offset of x from the current point x0, namely x = x0 + p. The algorithm solves the linear programming problem by the same iterations as it takes in Phase 2 to solve the quadratic programming problem, with an appropriately modified Hessian.

Phase 2 Algorithm

In terms of a variable d, the problem is

$$\begin{aligned}
\min_{d \in \mathbb{R}^n} \ q(d) &= \tfrac{1}{2} d^T H d + c^T d,\\
A_i d &= b_i, \quad i = 1, \ldots, m_e\\
A_i d &\le b_i, \quad i = m_e + 1, \ldots, m.
\end{aligned} \tag{17}$$

Here, Ai refers to the ith row of the m-by-n matrix A.

During Phase 2, the algorithm maintains an active set A̅k, which is an estimate of the active constraints (those on the constraint boundaries) at the solution point.

The algorithm updates A̅k at each iteration k, forming the basis for a search direction dk. Equality constraints always remain in the active set A̅k. The search direction dk is calculated to minimize the objective function while remaining on any active constraint boundaries. The algorithm forms the feasible subspace for dk from a basis Zk whose columns are orthogonal to the estimate of the active set A̅k (that is, A̅k Zk = 0). Therefore, a search direction formed from a linear summation of any combination of the columns of Zk is guaranteed to remain on the boundaries of the active constraints.

The algorithm forms the matrix Zk from the last n – l columns of the QR decomposition of the matrix A̅kᵀ, where l is the number of active constraints and l < n. That is, Zk is given by

$$Z_k = Q[:,\ l+1:n], \tag{18}$$

where

$$Q^T \bar{A}_k^T = \begin{bmatrix} R\\ 0 \end{bmatrix}.$$
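In MATLAB terms, Zk can be read off the QR factorization of A̅kᵀ, as in this sketch (Ak denotes the l-by-n matrix of active constraint rows):

```matlab
% Hedged sketch: null-space basis of the active constraints via a QR factorization.
[Q, R] = qr(Ak');            % full QR of the n-by-l matrix Ak'
l  = size(Ak, 1);
n  = size(Ak, 2);
Zk = Q(:, l+1:n);            % last n-l columns of Q span the null space of Ak
```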

After finding Zk, the algorithm looks for a new search direction dk that minimizes q(d), where dk lies in the null space of the active constraints. That is, dk is a linear combination of the columns of Zk: dk = Zk p for some vector p.

Viewing the quadratic as a function of p, by substituting for dk, gives

$$q(p) = \tfrac{1}{2} p^T Z_k^T H Z_k p + c^T Z_k p. \tag{19}$$

Differentiating this equation with respect to p yields

$$\nabla q(p) = Z_k^T H Z_k p + Z_k^T c. \tag{20}$$

∇q(p) is referred to as the projected gradient of the quadratic function because it is the gradient projected in the subspace defined by Zk. The term ZkᵀHZk is called the projected Hessian. Assuming the projected Hessian matrix ZkᵀHZk is positive semidefinite, the minimum of the function q(p) in the subspace defined by Zk occurs when ∇q(p) = 0, which is the solution of the system of linear equations

$$Z_k^T H Z_k p = -Z_k^T c. \tag{21}$$
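In code, the projected system and the resulting search direction are (a sketch that follows Equation 21 directly):

```matlab
% Hedged sketch: solve the projected system and form the search direction.
Hp = Zk' * H * Zk;           % projected Hessian
gp = Zk' * c;                % projected linear term
p  = -Hp \ gp;               % Zk'*H*Zk*p = -Zk'*c   (Equation 21)
dk = Zk * p;                 % search direction in the full space
```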

The algorithm then takes a step of the form

$$x_{k+1} = x_k + \alpha d_k,$$

where

$$d_k = Z_k p.$$

Due to the quadratic nature of the objective function, only two choices of step length α exist at each iteration. A step of unity along dk is the exact step to the minimum of the function restricted to the null space of A̅k. If the algorithm can take such a step without violating the constraints, then this step is the solution to the quadratic program (Equation 17). Otherwise, the step along dk to the nearest constraint is less than unity, and the algorithm includes a new constraint in the active set at the next iteration. The distance to the constraint boundaries in any direction dk is given by

$$\alpha = \min_{i \in \{1, \ldots, m\}} \left\{ \frac{-(A_i x_k - b_i)}{A_i d_k} \right\},$$

which is defined for constraints not in the active set, and where the direction dk is toward the constraint boundary, that is, Ai dk > 0, i = 1, ..., m.
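A sketch of this ratio test over the constraints that are not in the active set (the logical vector active is an assumed bookkeeping variable):

```matlab
% Hedged sketch of the distance-to-boundary computation along dk.
Ad    = A*dk;
slack = -(A*xk - b);                         % nonnegative slack of A*x <= b at xk
cand  = ~active & (Ad > 0);                  % inactive constraints approached by dk
alpha = min([1; slack(cand) ./ Ad(cand)]);   % step length of at most unity
xk1   = xk + alpha*dk;                       % next iterate
```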

When the active set includes n independent constraints, without location of the minimum, the algorithm calculates the Lagrange multipliers λk, which satisfy the nonsingular set of linear equations

$$\bar{A}_k^T \lambda_k = c + H x_k. \tag{22}$$

If all elements of λk are nonnegative, xk is the optimal solution of the quadratic programming problem Equation 1. However, if any component of λk is negative, and the component does not correspond to an equality constraint, then the minimization is not complete. The algorithm deletes the element corresponding to the most negative multiplier and starts a new iteration.

Sometimes, when the solver finishes with all nonnegative Lagrange multipliers, the first-order optimality measure is above the tolerance, or the constraint tolerance is not met. In these cases, the solver attempts to reach a better solution by following the restart procedure described in [1]. In this procedure, the solver discards the current set of active constraints, relaxes the constraints a bit, and constructs a new set of active constraints while attempting to solve the problem in a manner that avoids cycling (repeatedly returning to the same state). If necessary, the solver can perform the restart procedure several times.

Note

Do not try to stop the algorithm early by setting large values of the ConstraintTolerance and OptimalityTolerance options. Generally, the solver iterates without checking these values until it reaches a potential stopping point, and only then checks to see whether the tolerances are satisfied.

Occasionally, the active-set algorithm can have difficulty detecting when a problem is unbounded. This issue can occur if the direction of unboundedness v is a direction where the quadratic term v'Hv = 0. For numerical stability and to enable a Cholesky factorization, the active-set algorithm adds a small, strictly convex term to the quadratic objective. This small term causes the objective function to be bounded away from –Inf. In this case, the active-set algorithm reaches an iteration limit instead of reporting that the solution is unbounded. In other words, the algorithm halts with exit flag 0 instead of –3.

References

[1] Gill, P. E., W. Murray, M. A. Saunders, and M. H. Wright. "A practical anti-cycling procedure for linearly constrained optimization." Math. Programming 45 (1), August 1989, pp. 437–474.

Warm Start

When you run the quadprog or lsqlin 'active-set' algorithm with a warm start object as the start point, the solver attempts to skip many of the Phase 1 and Phase 2 steps. The warm start object contains the active set of constraints, and this set can be correct or close to correct for the new problem. Therefore, the solver can avoid iterations to add constraints to the active set. Also, the initial point might be close to the solution for the new problem. For more information, see optimwarmstart.
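A hedged sketch of the warm start workflow follows; see the optimwarmstart documentation for the authoritative calling sequence and outputs, since details here (such as the X field of the returned object) are assumptions.

```matlab
% Hedged sketch of warm starting the 'active-set' algorithm.
opts = optimoptions('quadprog', 'Algorithm', 'active-set');
ws   = optimwarmstart(x0, opts);                    % warm start object with x0 and options

ws   = quadprog(H, f, A, b, [], [], lb, ub, ws);    % first solve; ws stores the solution state

f2   = f + 0.01;                                    % slightly perturbed problem
ws   = quadprog(H, f2, A, b, [], [], lb, ub, ws);   % re-solve, reusing the previous active set
x    = ws.X;                                        % assumed field holding the solution point
```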
