
I was reading Scientific Computing: An Introductory Survey, by Michael Heath. In Example 1.11 he makes a finite difference approximation, using the usual formula $f'(x)\approx \frac{f(x+h)-f(x)}{h}$. He states the following:

We want $h$ to be small so that the approximation will be accurate, but if $h$ is too small, then $\textrm{fl}(x + h)$ may not differ from $\textrm{fl}(x)$. Even if $\textrm{fl}(x + h)\neq \textrm{fl}(x)$, we might still have $\textrm{fl}(f(x + h)) = \textrm{fl}(f(x))$ if $f$ is slowly varying. In any case, we can expect some cancellation in computing the difference $f(x + h) - f(x)$. Thus, there is a trade-off between truncation error and rounding error in choosing the size of $h$. If the relative error in the function values is bounded by $\epsilon$, then the rounding error in the approximate derivative value is bounded by $2\epsilon|f(x)|/h$. The Taylor series expansion $$f(x + h) = f(x) + f'(x)h + f''(x)\frac{h^2}{2} + \cdots$$ gives an estimate of $Mh/2$ for the truncation error, where $M$ is a bound for $|f''(x)|$.
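(For reference: the following balancing step is not in the quoted passage, but it follows directly from the two bounds given there. Minimizing the total error estimate $$E(h) \approx \frac{2\epsilon|f(x)|}{h} + \frac{Mh}{2}, \qquad E'(h)=-\frac{2\epsilon|f(x)|}{h^2}+\frac{M}{2}=0 \;\Longrightarrow\; h_{\text{opt}} = 2\sqrt{\frac{\epsilon|f(x)|}{M}},$$ so the usable step size scales like $\sqrt{\epsilon}$, which is why making $h$ ever smaller does not keep improving the approximation.)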

I would like to know how he gets this bound on the rounding error; I can't reproduce the result he gives. Any idea would be great.

RES
  • What is your question exactly? The cited text seems rather clear: you get a systematic error from the Taylor series and a random error of a known magnitude from the evaluation in floating-point operations. You can improve the performance at the lower bound of usable step sizes by dividing by $((x+h)-x)$ instead of by $h$ (see the sketch after these comments). For some visuals, see https://math.stackexchange.com/questions/2019573/4th-order-accurate-difference-formula-less-accurate-than-2nd-order-formula. – Lutz Lehmann Mar 13 '23 at 14:42
  • My question is how he gets the result $2\epsilon|f(x)|/h$. – RES Mar 13 '23 at 14:59
  • Cross-posted to https://scicomp.stackexchange.com/questions/42603/finite-difference-approximation-error – Lutz Lehmann Mar 16 '23 at 08:37
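A minimal sketch of the step-size trick mentioned in the first comment (the function $\sin$, the point $x=1$, and the step sizes are my own illustrative choices, not from the thread):

```python
import numpy as np

f, fp = np.sin, np.cos   # sample function and its exact derivative (illustrative choice)
x = 1.0

for h in [1e-6, 1e-8, 1e-10, 1e-12]:
    xh = x + h                               # the perturbed point actually representable
    naive    = (f(x + h) - f(x)) / h         # divide by the intended step h
    adjusted = (f(xh) - f(x)) / (xh - x)     # divide by the step that was actually taken
    print(f"h={h:.0e}  naive error={abs(naive - fp(x)):.2e}  "
          f"adjusted error={abs(adjusted - fp(x)):.2e}")
```

The adjustment removes the part of the error that comes from $x+h$ not being exactly representable; the cancellation in $f(x+h)-f(x)$ itself of course remains.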

1 Answer


My question is how he gets the result $2\epsilon|f(x)|/h$

Each floating-point operation is designed to have a relative error at, or within a small multiple of, the machine precision $\mu$: $fl(a+b)=(a+b)(1+\delta)$, $fl(\phi(a))=\phi(a)(1+\delta)$, etc., with $|\delta|\le\mu$. At generic points this behavior roughly accumulates through composite functions. This reasoning obviously breaks down at roots of composite functions or where catastrophic cancellation occurs inside the evaluation algorithm.
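A quick check of this per-operation error model (my own sketch; using exact rational arithmetic as the reference is an assumption about how to verify it, not part of the answer):

```python
from fractions import Fraction
import random

mu = 2.0 ** -53   # unit roundoff of IEEE double precision (conventions differ by a factor of 2)

for _ in range(5):
    a, b = random.random(), random.random()
    exact = Fraction(a) + Fraction(b)       # exact rational sum of the two doubles
    computed = Fraction(a + b)              # the rounded floating-point sum, taken exactly
    delta = abs(computed - exact) / exact   # relative error of the single operation
    print(f"relative error {float(delta):.2e}  within mu: {delta <= mu}")
```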

So you get $$fl(f(x))=f(x)(1+\epsilon_1),\qquad fl(f(x+h))=f(x+h)(1+\epsilon_2),$$ with $|\epsilon_i|\le \epsilon = L\mu$, where $L$ is the number of elementary operations in the evaluation algorithm. Inserting this into the difference quotient gives $$\frac{fl(f(x+h))-fl(f(x))}{h}=\frac{f(x+h)-f(x)}{h}+\frac{f(x+h)\epsilon_2-f(x)\epsilon_1}{h},$$ and since $f(x+h)\approx f(x)$ for small $h$, the second (rounding) term is bounded in magnitude by roughly $(|\epsilon_1|+|\epsilon_2|)\,|f(x)|/h\le 2\epsilon|f(x)|/h$, which is exactly Heath's bound.

As said, as an exact bound this is mostly wrong, but it is precise enough to give good error estimates on a logarithmic magnitude scale; see 4th order accurate difference formula less accurate than 2nd order formula? (https://math.stackexchange.com/questions/2019573/4th-order-accurate-difference-formula-less-accurate-than-2nd-order-formula) for a visualization.
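To see how well the combined estimate $2\epsilon|f(x)|/h + Mh/2$ tracks the actual error, here is a small sketch (the function $\sin$, the point $x=1$, and $\epsilon\approx 2^{-53}$ are my own illustrative assumptions, not from the answer):

```python
import numpy as np

f, fp = np.sin, np.cos   # sample function and its exact derivative
x = 1.0
eps = 2.0 ** -53         # stands in for the relative error bound epsilon
M = 1.0                  # bound on |f''| for sin

for k in range(1, 16):
    h = 10.0 ** -k
    fd = (f(x + h) - f(x)) / h                       # forward difference
    measured = abs(fd - fp(x))                       # actual error
    estimate = 2 * eps * abs(f(x)) / h + M * h / 2   # rounding + truncation estimate
    print(f"h=1e-{k:02d}  measured={measured:.1e}  estimate={estimate:.1e}")
```

The two columns agree roughly to within an order of magnitude across the whole range, and both are smallest near $h\approx\sqrt{\epsilon}$.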

Lutz Lehmann