## Quick Reference

A widely used and successful approach to solving constrained optimization problems, that is, minimize *F*(*x*), *x* = (*x*_{1},*x*_{2},…,*x*_{n})^{T}, where *F*(*x*) is a given objective function of *n* real variables, subject to the *t* nonlinear constraints on the variables, *c*_{i}(*x*) = 0, *i* = 1,2,…,*t*. Inequality constraints are also possible. A solution of this problem is also a stationary point (a point at which all the partial derivatives vanish) of the related function of *x* and λ, *L*(*x*,λ) = *F*(*x*) − Σλ_{i}*c*_{i}(*x*), λ = (λ_{1},λ_{2},…,λ_{t})^{T}. A quadratic approximation to this function is now constructed that, along with linearized constraints, forms a quadratic programming problem, i.e. the minimization of a function quadratic in the variables, subject to linear constraints. The solution of the original optimization problem, say *x*✻, is now obtained from an initial estimate by solving a sequence of updated quadratic programs; the solutions of these provide improved approximations, which under certain conditions converge to *x*✻.

minimize *F*(*x*), *x* = (*x*_{1},*x*_{2},…,*x*_{n})^{T},

*c*_{i}(*x*) = 0, *i* = 1,2,…,*t*

*L*(*x*,λ) = *F*(*x*) − Σλ_{i}*c*_{i}(*x*), λ = (λ_{1},λ_{2},…,λ_{t})^{T}
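The iteration described above can be sketched for a small hypothetical example (not taken from the entry): minimize *F*(*x*) = *x*_{1} + *x*_{2} subject to the single nonlinear constraint *c*(*x*) = *x*_{1}^{2} + *x*_{2}^{2} − 1 = 0. Each step solves the linear (KKT) system of the quadratic program built from the Hessian of *L* and the linearized constraint; all function names and starting values below are illustrative choices, assuming NumPy is available.

```python
import numpy as np

# Hypothetical example: minimize F(x) = x1 + x2
# subject to c(x) = x1^2 + x2^2 - 1 = 0.
# Minimizer: x* = (-1/sqrt(2), -1/sqrt(2)), multiplier lam* = -1/sqrt(2).

def F_grad(x):
    return np.array([1.0, 1.0])        # gradient of F (F is linear)

def c(x):
    return x[0]**2 + x[1]**2 - 1.0     # constraint value

def c_grad(x):
    return 2.0 * x                     # gradient of c

def L_hess(x, lam):
    # Hessian of L(x, lam) = F(x) - lam * c(x); here grad^2 F = 0, grad^2 c = 2I
    return -lam * 2.0 * np.eye(2)

x = np.array([-0.8, -0.6])             # initial estimate of x*
lam = -1.0                             # initial multiplier estimate

for _ in range(20):
    H = L_hess(x, lam)
    A = c_grad(x).reshape(1, 2)
    # KKT system of the quadratic programming subproblem:
    # [ H  -A^T ] [dx  ]   [ -(grad F - lam * grad c) ]
    # [ A   0   ] [dlam] = [ -c(x)                    ]
    K = np.block([[H, -A.T], [A, np.zeros((1, 1))]])
    rhs = -np.concatenate([F_grad(x) - lam * c_grad(x), [c(x)]])
    step = np.linalg.solve(K, rhs)
    x += step[:2]
    lam += step[2]
    if np.linalg.norm(step) < 1e-10:   # approximations have converged
        break

print(x, lam)
```

Each pass through the loop is one updated quadratic program; from a reasonable initial estimate the iterates converge rapidly to *x*✻, as the entry describes.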

*Subjects:*
Computing.
