I'm using gradient descent $$x_i=x_{i-1}-\gamma\nabla f(x_{i-1})\tag1$$ to minimize a function $f$. I've observed that the iteration gets stuck in a local minimum when the starting point $x_0$ is chosen unluckily.
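To make the failure concrete, here is a toy example of $(1)$ getting trapped (the quartic test function, step size, and starting point are illustrative choices, not my actual problem):

```python
import numpy as np

def gradient_descent(grad_f, x0, gamma=0.1, n_iters=1000):
    """Plain iteration (1): x_i = x_{i-1} - gamma * grad f(x_{i-1})."""
    x = np.asarray(x0, dtype=float)
    for _ in range(n_iters):
        x = x - gamma * grad_f(x)
    return x

# f(x) = x**4 - 3*x**2 + x has its global minimum near x = -1.30 and a
# local one near x = 1.13; started at x0 = 1.5, the iteration settles
# into the local minimum and never leaves it.
grad_f = lambda x: 4 * x**3 - 6 * x + 1
print(gradient_descent(grad_f, x0=1.5))  # about 1.13, not -1.30
```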
Is there a mechanism to detect that we are stuck in a local minimum and, if so, to escape it?
I've come up with a simple idea that actually works, but I suspect it is quite inefficient. I replace $(1)$ by the normalized iteration $$x_i=x_{i-1}-\gamma_i\frac{\nabla f(x_{i-1})}{\left\|\nabla f(x_{i-1})\right\|}\tag2,$$ where $\gamma_i=\gamma_{i-1}/2$ and $\gamma_0=1$. Then, after every iteration $i$, I choose a random $\tilde\gamma\in[0,1)$, compute $$\tilde x_i=x_i-\tilde\gamma\frac{\nabla f(x_i)}{\left\|\nabla f(x_i)\right\|}\tag3$$ and, if $f(\tilde x_i)<f(x_i)$, set $\gamma_i=\tilde\gamma$ and $x_i=\tilde x_i$.
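In code, the procedure looks roughly like this (a minimal sketch; the function name `minimize`, the iteration budget, the seed, and the handling of a vanishing gradient are my own choices, not part of the scheme):

```python
import numpy as np

def minimize(f, grad_f, x0, n_iters=200, seed=0):
    """Normalized descent (2) combined with the random step-size probe (3)."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float)
    gamma = 1.0                                # gamma_0 = 1
    for _ in range(n_iters):
        g = grad_f(x)
        gnorm = np.linalg.norm(g)
        if gnorm == 0.0:                       # exact stationary point: stop
            break
        gamma /= 2                             # gamma_i = gamma_{i-1} / 2
        x = x - gamma * g / gnorm              # iteration (2)

        g = grad_f(x)                          # probe (3) from the new iterate
        gnorm = np.linalg.norm(g)
        if gnorm == 0.0:
            break
        gamma_tilde = rng.random()             # uniform in [0, 1)
        x_tilde = x - gamma_tilde * g / gnorm
        if f(x_tilde) < f(x):                  # accept only if f decreases
            gamma, x = gamma_tilde, x_tilde    # reset the shrinking step size
    return x
```

Writing it out makes one thing explicit: without an accepted probe, $\gamma_i=2^{-i}$ falls below machine precision after roughly 50 iterations, so the probe mostly serves to reset the step size whenever a longer step still decreases $f$.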
I'm sure there is a smarter way.