I'm asked to find if the fixed-point iteration
$$x_{k+1} = g(x_k)$$
converges to the fixed points of the function $$g(x) = x^2 + \frac{3}{16}$$ which I found to be $x = \frac{1}{4}$ and $x = \frac{3}{4}$.
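To make the setup concrete, here is a quick numerical sketch of the iteration (the starting point $x_0 = 0$ is just my own choice):

```python
# Fixed-point iteration x_{k+1} = g(x_k) for g(x) = x^2 + 3/16
def g(x):
    return x * x + 3.0 / 16.0

x = 0.0  # arbitrary starting guess below 1/4
for k in range(50):
    x = g(x)

print(x)  # ends up very close to the fixed point 1/4
```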
In this short video by Wen Shen,
she explains how to find these fixed points and how to check whether a fixed-point iteration converges. My doubt is about determining whether the iteration converges for a given fixed point.
About halfway through the video, she derives the following relation for the error
$$e_{k+1} = |g'(\alpha)| e_k$$
where $\alpha$ lies between $x_k$ and the root $r$, by the mean value theorem, since $g$ is continuous and differentiable.
If $|g'(\alpha)| < 1$, then the fixed-point iteration converges.
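At least numerically, that error relation checks out for this $g$: the ratio of successive errors for the root $r = \frac{1}{4}$ settles at $|g'(\frac{1}{4})| = \frac{1}{2}$ (the starting point $0.1$ is my own choice):

```python
def g(x):
    return x * x + 3.0 / 16.0

r = 0.25   # the fixed point 1/4
x = 0.1    # arbitrary starting point near r
err = abs(x - r)
for k in range(20):
    x = g(x)
    new_err = abs(x - r)
    ratio = new_err / err  # e_{k+1} / e_k
    err = new_err

print(ratio)  # approaches |g'(1/4)| = 0.5
```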
I think I agree with this last statement. However, when she checks whether the fixed-point iteration converges for a particular root, she simply computes the derivative $g'$ and evaluates it at that root.
I don't understand why this is equivalent to $$e_{k+1} = |g'(\alpha)| e_k$$
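For what it's worth, her check does predict what I observe numerically: $g'(x) = 2x$, so $|g'(\frac{1}{4})| = \frac{1}{2} < 1$ while $|g'(\frac{3}{4})| = \frac{3}{2} > 1$, and indeed the iteration runs away from $\frac{3}{4}$ even from a starting point extremely close to it:

```python
def g(x):
    return x * x + 3.0 / 16.0

x = 0.75 + 1e-6  # start just above the fixed point 3/4
for k in range(30):
    x = g(x)

print(x)  # the iterates have clearly moved away from 3/4
```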
Can someone shed some light on this? (lol, can I say that?)