Questions tagged [non-smooth-optimization]

For questions related to non-smooth optimization.

Non-smooth optimization means that the cost function or the constraints are not differentiable everywhere.
The $l_1$ and $l_{\infty}$ norms are standard examples: the $l_1$ norm is not differentiable wherever some coordinate vanishes (the coordinate hyperplanes), and the $l_{\infty}$ norm is not differentiable on the 'diagonals', where the maximum is attained by more than one coordinate. A less trivial example is the maximum singular value of a matrix.
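The one-dimensional absolute value already shows the standard remedy: although $|x|$ has no derivative at the origin, its (convex) subdifferential is defined everywhere,
$$\partial |x| = \begin{cases} \{-1\}, & x < 0, \\ [-1, 1], & x = 0, \\ \{+1\}, & x > 0. \end{cases}$$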

Nonsmooth optimization typically deals with highly structured problems, but ones that arise, or are modeled or cast, differently from the problems for which the mainline numerical methods, built around gradient vectors and Hessian matrices, were designed.

43 questions
7
votes
1 answer

When do two functions have the same subdifferentials?

For two functions $f$ and $g$, if $\nabla f(x) = \nabla g(x)$ for all $x$, then $f = g + c$ for some constant $c$. Does the same hold if the gradient is replaced by the (convex) subdifferential, i.e. $\partial f(x) = \partial g(x)$ for all $x$? And, as a stronger…
6
votes
2 answers

Generalizing Lagrange multipliers to use the subdifferential

Background: This is a followup to the question Lagrange multipliers with non-smooth constraints. Lagrange multipliers can be used for constrained optimization problems of the form $$\min_{\vec x} f(\vec x) \text{ such that } g(\vec x) = 0$$ Briefly,…
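For reference, the smooth first-order condition these questions aim to generalize is: at a constrained minimizer $\vec x^{\star}$ (with $\nabla g(\vec x^{\star}) \neq 0$), there exists a multiplier $\lambda$ such that
$$\nabla f(\vec x^{\star}) + \lambda \nabla g(\vec x^{\star}) = 0, \qquad g(\vec x^{\star}) = 0.$$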
6
votes
1 answer

Proof that generalized directional derivative is upper semicontinuous

In "Nonsmooth Optimization" by Mäkela and Neittaanmäki the definition of the generalized directional derivative is given as follows: Definition 3.1.1 (Clarke). Let $f: \mathbf{R}^{n} \rightarrow \mathbf{R}$ be locally Lipschitz at a point $x \in…
5
votes
0 answers

Second order necessary and sufficient conditions for convex nonsmooth optimization

For convex smooth optimization, first- and second-order necessary and sufficient conditions are well known. Do such standard second-order necessary and sufficient conditions exist for convex nonsmooth optimization? For first order, we have the…
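For context, the first-order statement alluded to is standard in the convex case: $x^{\star}$ is a global minimizer of a convex $f$ if and only if
$$0 \in \partial f(x^{\star}).$$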
4
votes
2 answers

Lagrange multipliers with non-smooth constraints

I read in a textbook a passing comment that Lagrange multipliers are not applicable if there are points of non-differentiability in the constraints (even if the constraints are continuous). For example, in the following problem: $\min_{\boldsymbol…
4
votes
0 answers

Uses of nonsmooth analysis in mathematical research

To give some context: I am aware of the uses of Convex Analysis (and its applications in Convex Optimization), I have been studying (for a while) the developments of Nonsmooth Analysis (and its applications in Nonsmooth Optimization) as traced by…
4
votes
3 answers

Global optimization of non-smooth function

I have a number of functions (see for example two of them down below), and I need to find the global optimum of each of them. They are non-smooth, but they are always funnel-shaped, exhibiting a large minimum. If you zoom out (e.g. when the x…
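Absent structure to exploit, a derivative-free global search is a common first attempt for funnel-shaped nonsmooth objectives. A minimal sketch using SciPy's differential evolution, with `funnel` as a hypothetical stand-in for the asker's functions:

```python
import numpy as np
from scipy.optimize import differential_evolution

def funnel(x):
    # Hypothetical stand-in: a broad funnel with nonsmooth ripple on top.
    return np.abs(x[0] - 1.0) + 0.1 * np.abs(np.sin(50.0 * x[0]))

# Derivative-free global search: no gradients are required, so the
# nonsmoothness of the objective is not an obstacle.
result = differential_evolution(funnel, bounds=[(-10.0, 10.0)], seed=0)
print(result.x, result.fun)
```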
3
votes
1 answer

Optimization with parametric constraints: solution maps

For constrained optimization problems $$ \begin{array}{ll} \min\limits_{x \in \mathbb R^n} & f(p, x) \\ \text{s.t.} & x \in C \end{array} $$ where $p \in \mathbb R$ is a parameter, we can deduce, under suitable convexity conditions, existence and…
2
votes
0 answers

KKT conditions for nonsmooth convex problems

What are the KKT conditions for a non-smooth convex function? Is the vanishing gradient of the Lagrangian replaced by the condition that $0$ lies in the subdifferential of the Lagrangian, with all other things remaining the same? I suppose that will give necessary and sufficient…
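For a convex problem $\min f(x)$ subject to $g_i(x) \le 0$, the usual nonsmooth statement (under a constraint qualification such as Slater's condition) indeed replaces the vanishing gradient by a subdifferential inclusion, keeping the rest of the KKT system intact:
$$0 \in \partial f(x^{\star}) + \sum_{i} \lambda_i\, \partial g_i(x^{\star}), \qquad \lambda_i \ge 0, \qquad \lambda_i\, g_i(x^{\star}) = 0.$$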
2
votes
1 answer

Subgradient method for nonconvex nonsmooth function

Gradient descent or stochastic gradient descent are frequently used to find stationary points (and in some cases even local minima) of a nonconvex function. I was wondering if the same can be said about the subgradient method. Can we say that a…
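For concreteness, a minimal sketch of the classical subgradient method with diminishing steps, whose convergence guarantees are stated for convex objectives (exactly the assumption the question asks to relax):

```python
import numpy as np

def subgradient_method(f, subgrad, x0, steps=1000, a0=1.0):
    # Diminishing step sizes a0 / sqrt(k); a subgradient step need not
    # decrease f, so the best iterate seen so far is tracked and returned.
    x = np.asarray(x0, dtype=float)
    best_x, best_f = x.copy(), f(x)
    for k in range(1, steps + 1):
        x = x - (a0 / np.sqrt(k)) * subgrad(x)
        if f(x) < best_f:
            best_x, best_f = x.copy(), f(x)
    return best_x, best_f

# Example on the convex nonsmooth f(x) = |x|, with sign(x) as a subgradient.
x_best, f_best = subgradient_method(np.abs, np.sign, 1.5, steps=500)
```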
2
votes
0 answers

Chain rule for composite function with inner function non-smooth

Let $f:\mathbb{R}^2\to\mathbb{R}$ be given by $f(w_1,w_2)=\frac{1}{2}(1-w_2\sigma(w_1))^2$, where $\sigma(x)=\max\{x,0\}$ is the ReLU function. I want to compute the Clarke subdifferential of $f$ when $w_1=0$. Is the chain rule valid here? I know…
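For orientation, a direct computation via the gradient-limit characterization of the Clarke subdifferential (the convex hull of limits of gradients taken at nearby points of differentiability, available here since $f$ is locally Lipschitz): for $w_1 > 0$ one has $\nabla f = \big(-w_2(1 - w_2 w_1),\, -w_1(1 - w_2 w_1)\big) \to (-w_2, 0)$ as $w_1 \downarrow 0$, while for $w_1 < 0$ the function is constant, so $\nabla f = (0, 0)$. Hence
$$\partial f(0, w_2) = \operatorname{conv}\{(0, 0),\, (-w_2, 0)\}.$$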
2
votes
1 answer

Minimizing a composite non-differentiable convex function over a $2$-norm ball

I am searching for (works on) methods for solving the composite differentiable-plus-non-differentiable convex problem: $$ \min_{x \in B} f(x) + g(x),$$ where $B$ is a $2$-norm ball, i.e. $x \in B \iff \|x\|_2 \leq C$ for some $C>0$; and the functions satisfy:…
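One standard ingredient in this setting is that Euclidean projection onto the $2$-norm ball is cheap, which makes projected (sub)gradient-type schemes natural candidates. A minimal sketch, assuming the split into a smooth part `grad_f` and a nonsmooth part `subgrad_g` (both names hypothetical):

```python
import numpy as np

def project_ball(x, C):
    # Euclidean projection onto {x : ||x||_2 <= C}: rescale if outside.
    norm = np.linalg.norm(x)
    return x if norm <= C else (C / norm) * x

def projected_subgradient_step(x, grad_f, subgrad_g, step, C):
    # One iteration: step along a subgradient of f + g, project back onto B.
    return project_ball(x - step * (grad_f(x) + subgrad_g(x)), C)
```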
2
votes
1 answer

Nonsmooth optimization approximation

Say I want to minimize a real-valued, nonsmooth function $f(x)$ (the gradient is not defined at some minima). Further let $$ g(x, \epsilon) $$ be a smooth approximation of $f(x)$ with $$ \lim_{\epsilon\rightarrow0} \,g(x,\epsilon) = f(x) \,. $$ Let the…
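A concrete instance of such a pair: $f(x) = |x|$ together with
$$g(x, \epsilon) = \sqrt{x^{2} + \epsilon^{2}},$$
which is smooth for every $\epsilon > 0$ and satisfies $0 \le g(x, \epsilon) - f(x) \le \epsilon$, so the convergence is even uniform as $\epsilon \to 0$.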
1
vote
1 answer

Gradient of "smoothed" version of function $f$

Let $f: \mathbb{R}^d \to \mathbb{R}$ be continuous, but possibly not differentiable. Let $v$ be a random vector selected uniformly from within the $\ell_2$ unit ball. Let $u$ be a random vector selected uniformly from the surface of the $\ell_2$…
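This is the classical randomized-smoothing construction; assuming $f$ is merely continuous as stated, the smoothed function $f_{\delta}(x) = \mathbb{E}_{v}\left[f(x + \delta v)\right]$ (with $v$ uniform in the unit ball) is differentiable with
$$\nabla f_{\delta}(x) = \frac{d}{\delta}\, \mathbb{E}_{u}\left[f(x + \delta u)\, u\right],$$
where $u$ is uniform on the unit sphere; the identity follows from the divergence theorem.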
1
vote
0 answers

Nesterov Accelerated Gradient method for convex non-smooth objective functions

I need to solve an optimization problem involving an Extreme Learning Machine $z=W_2\sigma(W_1 x)$, where the weight matrix for the hidden layer $W_1$ is a fixed random matrix, $\sigma()$ is the activation function, and the output weight matrix…
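When a convex nonsmooth objective like this splits as a smooth term plus a prox-friendly nonsmooth term, the standard way to retain Nesterov-style acceleration is FISTA (Beck & Teboulle). A minimal sketch under that assumption, with `grad_f`, `prox_g`, and the gradient Lipschitz constant `L` supplied by the user:

```python
import numpy as np

def fista(grad_f, prox_g, x0, L, steps=200):
    # FISTA: accelerated proximal gradient for min f(x) + g(x),
    # f smooth with L-Lipschitz gradient, g nonsmooth but prox-friendly.
    x = y = np.asarray(x0, dtype=float)
    t = 1.0
    for _ in range(steps):
        x_next = prox_g(y - grad_f(y) / L, 1.0 / L)       # forward-backward step
        t_next = (1.0 + np.sqrt(1.0 + 4.0 * t * t)) / 2.0
        y = x_next + ((t - 1.0) / t_next) * (x_next - x)  # momentum extrapolation
        x, t = x_next, t_next
    return x

def soft_threshold(z, step, lam=0.1):
    # prox of step * lam * ||.||_1, usable as prox_g for an l1 penalty.
    return np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)
```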