What is the point of the KKT conditions in constrained optimization? In other words, what is the best way to use them? I have seen examples in different contexts, but I am missing a short overview of the procedure, in one or two sentences.
Should we use them to find the optimal solution of a constrained problem? The reason I am confused is that one of the KKT conditions (primal feasibility) already requires the constraints of the original problem to hold. If we knew how to impose the constraints in the first place, why look at the KKT conditions at all?
Or should we use another of the KKT conditions first, i.e. set the gradient of the Lagrangian to zero, extract candidate solutions from that, and then check whether the inequality and equality constraints hold?
I would deeply appreciate it if you could clarify.
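To make my confusion concrete, here is how I would apply that procedure to a toy problem (my own example, so please correct me if I am misusing the conditions): minimize $x^2$ subject to $x \ge 1$, i.e. $g(x) = 1 - x \le 0$. The Lagrangian is $L(x,\mu) = x^2 + \mu(1-x)$, and the KKT conditions are
$$2x - \mu = 0, \qquad \mu(1-x) = 0, \qquad \mu \ge 0, \qquad 1 - x \le 0.$$
Complementary slackness splits this into two cases: $\mu = 0$ forces $x = 0$, which violates $x \ge 1$, while the active case $x = 1$ gives $\mu = 2 \ge 0$, so $x^* = 1$ is the only KKT point. Is treating the conditions as a system of equations and inequalities to be solved, like this, the intended use?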
"The KKT conditions play an important role in optimization. In a few special cases it is possible to solve the KKT conditions (and therefore, the optimization problem) analytically. More generally, many algorithms for convex optimization are conceived as, or can be interpreted as, methods for solving the KKT conditions."
– David Feb 26 '17 at 22:32
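As a minimal sketch of what "solving the KKT conditions" can look like in practice for the toy problem above, here is a symbolic version assuming SymPy is available (the package and the particular calls are my own choice, not something taken from the quote):

```python
# Sketch: solve the KKT system symbolically for
#   minimize x^2  subject to  x >= 1   (g(x) = 1 - x <= 0)
import sympy as sp

x, mu = sp.symbols('x mu', real=True)

f = x**2        # objective
g = 1 - x       # inequality constraint, g(x) <= 0
L = f + mu * g  # Lagrangian

# Stationarity and complementary slackness give a square system of equations.
stationarity = sp.Eq(sp.diff(L, x), 0)
comp_slack = sp.Eq(mu * g, 0)
candidates = sp.solve([stationarity, comp_slack], [x, mu], dict=True)

# Keep only candidates that are primal feasible (x >= 1) and dual feasible (mu >= 0).
kkt_points = [s for s in candidates if s[x] >= 1 and s[mu] >= 0]
print(kkt_points)  # expected: x = 1, mu = 2
```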