- primal constraints: \(f_i(x)\leq 0, i=1,\cdots,m, \; h_i(x) = 0, i=1,\cdots,p\)
- dual constraints: \(\lambda\succeq 0\)
- complementary slackness: \(\lambda_i f_i(x) = 0, i=1,\cdots,m \)
- gradient of Lagrangian with respect to \(x\) vanishes:
\[\nabla f_0(x) + \sum_{i=1}^m \lambda_i \nabla f_i(x) + \sum_{i=1}^p \nu_i\nabla h_i(x) = 0\]
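As a quick illustration (a standard textbook example, not specific to these notes): for equality constrained convex quadratic minimization,
\[\text{minimize } \tfrac{1}{2}x^TPx+q^Tx+r \quad \text{subject to } Ax=b, \qquad P\succeq 0,\]
there are no inequality constraints, so the KKT conditions are just primal feasibility and the vanishing gradient of the Lagrangian, which together form one linear system:
\[\begin{bmatrix} P & A^T\\ A & 0\end{bmatrix}\begin{bmatrix} x^*\\ \nu^*\end{bmatrix}=\begin{bmatrix} -q\\ b\end{bmatrix}\]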
For convex problems
Key: When the primal problem is convex, the KKT conditions are also sufficient for points to be primal and dual optimal.
If \(\tilde{x},\tilde{\lambda},\tilde{\nu}\) satisfy the KKT conditions for a convex problem (i.e. \(f_i\) convex, \(h_i\) affine), then they are primal and dual optimal, with zero duality gap:
- from complementary slackness: \(f_0(\tilde{x})=L(\tilde{x},\tilde{\lambda},\tilde{\nu})\)
- from the 4th condition (and convexity): \(\tilde{x}\) minimizes \(L(x,\tilde{\lambda},\tilde{\nu})\) over \(x\), so \(g(\tilde{\lambda},\tilde{\nu}) = L(\tilde{x},\tilde{\lambda},\tilde{\nu})\)
hence \(f_0(\tilde{x}) = g(\tilde{\lambda},\tilde{\nu})\): the duality gap is zero, so \(\tilde{x}\) is primal optimal and \((\tilde{\lambda},\tilde{\nu})\) is dual optimal
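A tiny numerical sanity check of this chain (a made-up one-dimensional example, not from the notes): minimize \(f_0(x)=(x-2)^2\) subject to \(f_1(x)=x-1\leq 0\); the KKT conditions give \(\tilde{x}=1\), \(\tilde{\lambda}=2\).

```python
# Hypothetical 1-D example: minimize (x-2)^2  s.t.  x - 1 <= 0.
# Verify the chain f0(x~) = L(x~, lam~) = g(lam~) at the KKT point.
from scipy.optimize import minimize_scalar

f0 = lambda x: (x - 2) ** 2
f1 = lambda x: x - 1
L = lambda x, lam: f0(x) + lam * f1(x)
g = lambda lam: minimize_scalar(lambda x: L(x, lam)).fun  # g(lam) = inf_x L(x, lam)

# Solving stationarity 2(x-2) + lam = 0 and lam*(x-1) = 0 by hand gives:
x_t, lam_t = 1.0, 2.0          # feasible, lam_t >= 0, lam_t * f1(x_t) = 0

print(f0(x_t))                 # 1.0  (primal objective)
print(L(x_t, lam_t))           # 1.0  (equal by complementary slackness)
print(g(lam_t))                # 1.0  (x~ minimizes L(., lam~), so the inf is attained there)
```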
If Slater's condition is satisfied:
\(x\) is optimal if and only if there exist \(\lambda,\nu\) that satisfy the KKT conditions
- Slater implies strong duality, so the dual optimum is attained
- this generalizes the optimality condition \(\nabla f_0(x) = 0\) for unconstrained problems
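A classic worked example of this characterization (the water-filling problem, a standard textbook example, not taken from these notes): with \(\alpha_i>0\),
\[\text{minimize } -\sum_{i=1}^n\log(\alpha_i+x_i) \quad \text{subject to } x\succeq 0,\; \mathbf{1}^Tx=1.\]
Slater's condition holds (e.g. \(x=\mathbf{1}/n\) is strictly feasible), so \(x\) is optimal iff there exist \(\lambda\succeq 0\), \(\nu\) with \(\lambda_i x_i=0\) and \(-1/(\alpha_i+x_i)-\lambda_i+\nu=0\). Eliminating \(\lambda\) gives \(x_i=\max\{0,\,1/\nu-\alpha_i\}\), with \(\nu\) chosen so that \(\sum_i x_i=1\).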
Key: Many algorithms for convex optimization are conceived as methods for solving the KKT conditions.
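A minimal sketch of that idea (assuming NumPy and the made-up data below): for the equality constrained QP above, "solving the KKT conditions" literally means solving one linear system.

```python
import numpy as np

# Made-up problem data: minimize (1/2) x^T P x + q^T x  s.t.  A x = b
P = np.array([[4.0, 1.0], [1.0, 2.0]])   # positive definite
q = np.array([1.0, 1.0])
A = np.array([[1.0, 1.0]])               # one equality constraint: x1 + x2 = 1
b = np.array([1.0])

n, p = P.shape[0], A.shape[0]
# KKT system: [[P, A^T], [A, 0]] [x; nu] = [-q; b]
KKT = np.block([[P, A.T], [A, np.zeros((p, p))]])
rhs = np.concatenate([-q, b])
x_star, nu_star = np.split(np.linalg.solve(KKT, rhs), [n])

assert np.allclose(A @ x_star, b)                      # primal feasibility
assert np.allclose(P @ x_star + q + A.T @ nu_star, 0)  # gradient of Lagrangian vanishes
print(x_star, nu_star)                                 # [0.25 0.75] [-2.75]
```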