Let us consider the centered finite difference approximation of the first derivative of a smooth function $f$:
$$f'(x) \approx \frac{f(x+h) - f(x-h)}{2h}.$$
It's well known that if we make a log-log plot of the error versus the step size, the error starts growing again once $h$ falls below a certain value, roughly $h \approx 10^{-5}$. What I want to understand is how that value can be predicted.
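A minimal sketch of that experiment (the choice $f(x) = \sin x$ at $x = 1$ is just an illustrative assumption; any smooth function behaves similarly):

```python
import numpy as np

# Centered difference D(h) = (f(x+h) - f(x-h)) / (2h), compared
# with the exact derivative f'(x) = cos(x).
f, df = np.sin, np.cos
x = 1.0

for h in 10.0 ** -np.arange(1, 13):
    approx = (f(x + h) - f(x - h)) / (2 * h)
    print(f"h = {h:.0e}   error = {abs(approx - df(x)):.3e}")
```

The printed errors shrink like $h^2$ down to roughly $h \approx 10^{-5}$ and then grow again as $h$ decreases further.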
I found this answer by user @LutzLehmann, where he wrote that, since function evaluations will produce noise, the error is a combination of the approximation error and that noise, i.e. something like
$$\frac{M_0\mu}h+M_3h^2$$
where $M_0$ is the magnitude of the function values, $M_3$ the magnitude of the third derivative, and $\mu$ the machine precision.
Then he writes that the error will be minimal if $h \approx \mu^{1/3}$ (i.e. about $10^{-5}$ if $\mu$ is the double-precision machine epsilon, $\mu \approx 2.2 \times 10^{-16}$), under the hypothesis that $M_0$ and $M_3$ are of comparable size.
I can't understand how he finds that value. I mean, let's say $M_0 = M_3 = M$. Then we have that the error is about
$$\frac{M}{h}\left(\mu + h^3\right).$$
If I want to minimize this, I would set $\mu + h^3 = 0$, i.e. $h^3 = -\mu$, which implies a negative step, which makes no sense. Why doesn't he get that minus sign?
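For reference, I suspect the intended step is the standard calculus minimization of the bound as a function of $h > 0$, rather than setting it to zero:
$$E(h) = M\left(\frac{\mu}{h} + h^2\right), \qquad E'(h) = M\left(-\frac{\mu}{h^2} + 2h\right) = 0 \;\Longrightarrow\; h = \left(\frac{\mu}{2}\right)^{1/3} \approx \mu^{1/3}.$$
With $\mu \approx 2.2 \times 10^{-16}$ this gives $h \approx 5 \times 10^{-6}$, which matches the observed scale.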
In the error bound
$$\frac{M_0\mu}{h}+M_3h^2$$
it seems that the noise enters only through the evaluation of the function, not of its derivatives. Why is that?
– andereBen Aug 17 '20 at 15:23

Thank you very much.
The very last point: when I think about rounding errors in the evaluation of the function $f(x)$, I always write $f(x)(1+\delta)$, with $|\delta| \leq \mu$. Is this the same thing you do when you write $f(x) = f(x)\mu$?
– andereBen Aug 17 '20 at 17:30

…because the sums and differences are replaced by absolute values. Then he uses
$$|f(x) - f(x)(1+\delta)| = |f(x)(-\delta)| \leq |f(x)|\,\mu.$$
I should have all the details now; thanks for the fruitful discussion.
– andereBen Aug 17 '20 at 17:55
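As a quick numerical check of this rounding model (a sketch under the assumption that we simulate finite precision by rounding double-precision values of $\sin x$ to single precision, so $\mu$ is the float32 machine epsilon):

```python
import numpy as np

# Rounding model from the discussion: storing a value y in finite
# precision yields fl(y) = y * (1 + delta) with |delta| <= mu.
mu = np.finfo(np.float32).eps  # ~1.19e-7

rng = np.random.default_rng(0)
xs = rng.uniform(0.1, 10.0, size=1000)

exact = np.sin(xs)                  # float64 values, treated as exact
rounded = exact.astype(np.float32)  # rounded to lower precision

# delta = (fl(y) - y) / y; the model predicts |delta| <= mu
delta = (rounded.astype(np.float64) - exact) / exact
print(f"max |delta| = {np.abs(delta).max():.3e}   mu = {mu:.3e}")
```

The measured maximum relative error stays below $\mu$ (in fact below $\mu/2$ with round-to-nearest), exactly as the bound $|\delta| \leq \mu$ predicts.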