So I know that for the problem:
$$ \begin{align*} \text{minimize} \quad & f_0(x) \\ \text{subject to} \quad & f_i(x) \leq 0, \quad i = 1, 2, \ldots, m \\ \end{align*} $$
We have the following KKT conditions, which are necessary and sufficient when the problem is convex, strong duality holds, and the $f_i$ are differentiable:
$$ \begin{align} \nabla_x \mathcal{L}(x^*, \lambda^*) &= 0 \\ f_i(x^*) &\leq 0, \quad i = 1, 2, \ldots, m \\ \lambda_i^* &\geq 0, \quad i = 1, 2, \ldots, m \\ \lambda_i^* f_i(x^*) &= 0, \quad i = 1, 2, \ldots, m \end{align} $$
Where:
$$ \mathcal{L}(x, \lambda) := f_0(x) + \sum_{i=1}^m \lambda_i f_i(x) $$
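As a sanity check on my reading of these conditions, here is a toy instance I worked out myself: minimize $x^2$ subject to $1 - x \leq 0$. Then
$$ \begin{align*} \mathcal{L}(x, \lambda) &= x^2 + \lambda (1 - x), \\ \nabla_x \mathcal{L}(x^*, \lambda^*) &= 2x^* - \lambda^* = 0, \\ \lambda^* (1 - x^*) &= 0, \end{align*} $$
and taking $\lambda^* > 0$ gives $x^* = 1$ and $\lambda^* = 2 \geq 0$, which is indeed the minimizer.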
I have seen in a few places that this can be generalised to problems where the objective or the constraints are non-differentiable. For example, the Lasso problem, whose $L_1$ constraint is convex but non-differentiable.
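Concretely, I mean the constrained form of the Lasso (in my notation: design matrix $A$, response $b$, radius $t > 0$):
$$ \begin{align*} \text{minimize} \quad & \tfrac{1}{2} \|Ax - b\|_2^2 \\ \text{subject to} \quad & \|x\|_1 - t \leq 0 \end{align*} $$
Here the constraint function $\|x\|_1 - t$ is convex but not differentiable at any $x$ with a zero coordinate.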
But I want an exact statement of the conditions for the non-differentiable case.
Is the gradient in the following KKT condition replaced with a subdifferential?
$$ \begin{align} \nabla_x \mathcal{L}(x^*, \lambda^*) &= 0 \end{align} $$
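That is, is the correct generalisation something like the following (this is just my guess, written with the convex subdifferential $\partial$; I believe the equality between the subdifferential of the sum and the sum of the subdifferentials needs some regularity condition, e.g. the $f_i$ being finite-valued on all of $\mathbb{R}^n$)?
$$ \begin{align} 0 \in \partial_x \mathcal{L}(x^*, \lambda^*) = \partial f_0(x^*) + \sum_{i=1}^m \lambda_i^* \, \partial f_i(x^*) \end{align} $$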
Any references would be useful, with pointers to the exact place in those references. So far I have been trying to look in Nonlinear Optimization by Andrzej Ruszczynski.