I'm trying to get a better grip on conic programming and the relation between primal and dual problems.
Given a convex problem in standard form, e.g. $\min_x f(x)$ subject to $f_i(x)\le0$, one standard approach (discussed e.g. in these pdf notes) is to introduce the Lagrangian function $$L(x,\lambda) \equiv f(x) + \sum_i \lambda_i f_i(x), $$ and then argue that if $L(x,\lambda)\ge\alpha$ for all $x$ (for some fixed $\lambda\ge0$), then the optimal value of the primal problem is at least $\alpha$.
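Spelling that step out the way I understand it: for any primal-feasible $x$ (i.e. $f_i(x)\le0$ for all $i$) and $\lambda\ge0$ we have $\sum_i \lambda_i f_i(x)\le 0$, so $$ f(x) \;\ge\; f(x) + \sum_i \lambda_i f_i(x) \;=\; L(x,\lambda) \;\ge\; \alpha, $$ and taking the infimum over feasible $x$ shows the primal optimum is at least $\alpha$.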
Consider now a standard conic programming problem, of the form (using the notation from these other pdf notes) $$\max_x \{ \langle a,x\rangle: \,\, \phi(x)=b, \,\, x\in K \},$$ where $\phi$ is a linear map and $K$ a closed convex cone. To argue that any dual-feasible point yields a value at least as large as the primal optimum, the argument they use is a bit different: they show that $$\langle a,x\rangle \le \langle y,b\rangle$$ whenever $\phi^*(y)-a\in K^*$, where $\phi^*$ is the adjoint of $\phi$ and $K^*$ the dual cone of $K$.
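For reference, the one-line computation behind that bound, as I read it: if $x$ is primal feasible ($\phi(x)=b$, $x\in K$) and $\phi^*(y)-a\in K^*$, then $$ \langle y,b\rangle - \langle a,x\rangle \;=\; \langle y,\phi(x)\rangle - \langle a,x\rangle \;=\; \langle \phi^*(y)-a,\, x\rangle \;\ge\; 0, $$ where the last inequality is just the definition of the dual cone $K^*$.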
I've read that duality in conic programming should also be derivable via the Lagrangian approach (it is mentioned, but not elaborated on, at the top of page 7 of these other pdf notes). How would we actually do this explicitly? I don't quite see how defining a Lagrangian like $L(x,\lambda)=\langle a,x\rangle+\lambda^T (\phi(x)-b)$ will eventually lead me to the dual problem, mostly because I'm not sure how to encode the $x\in K$ constraint in it.
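For concreteness, here is a minimal numerical sanity check of the weak-duality inequality (just a sketch with made-up data, taking $K=\mathbb{R}^3_+$ and $\phi(x)=Ax$, so the conic program reduces to an ordinary LP solved with scipy); it confirms $\langle a,x\rangle \le \langle y,b\rangle$ for a hand-picked dual-feasible $y$, but of course it doesn't show how to reach the dual via a Lagrangian, which is what I'm after:

```python
import numpy as np
from scipy.optimize import linprog

# Toy instance of  max <a,x>  s.t.  phi(x) = A x = b,  x in K = R^3_+
# (made-up data, only meant to check the weak-duality inequality numerically).
A = np.array([[1.0, 1.0, 1.0],
              [1.0, 2.0, 0.0]])
b = np.array([4.0, 5.0])
a = np.array([1.0, 2.0, 3.0])

# Primal: linprog minimizes, so minimize -<a,x> subject to A x = b, x >= 0.
primal = linprog(c=-a, A_eq=A, b_eq=b, bounds=[(0, None)] * 3)
primal_value = -primal.fun            # optimal value of <a,x>

# Dual feasibility here reads  phi^*(y) - a = A^T y - a  in  K^* = R^3_+,
# i.e. A^T y >= a componentwise; any such y gives the bound <a,x> <= <y,b>.
y = np.array([3.0, 0.5])              # hand-picked dual-feasible point
assert np.all(A.T @ y >= a - 1e-9)

print("primal optimum <a,x*> =", primal_value)   # 9.5 for this data
print("dual bound     <y,b>  =", y @ b)          # 14.5
assert primal_value <= y @ b + 1e-9              # weak duality holds
```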
A related question is "How to derive the dual of a conic programming problem, $\min_{x\in L}\{c^T x: \,\, Ax-b\in K\}$?"