I'm working through the book Numerical Linear Algebra by Trefethen and Bau. In Lecture 27 (and exercise 27.5), the following claim is made about the inverse iteration algorithm:
Let $ A $ be a real, symmetric matrix. Solving the system $ (A - \mu I) w = v^{(k-1)} $ at step $ k $ is ill-conditioned when $ \mu $ is close to an eigenvalue of $ A $. However, this causes no problem for the inverse iteration algorithm, provided the system is solved with a backward stable algorithm that outputs $ \tilde{w} $ satisfying $ (M + \delta M) \tilde{w} = v^{(k-1)} $, where $ M = A - \mu I $ and $ \frac{\|\delta M\|}{\|M\|} = O(\epsilon_\text{machine}) $. The reason is that even though $ w $ and $ \tilde{w} $ may be far apart, the normalized vectors $ \frac{w}{\|w\|} $ and $ \frac{\tilde{w}}{\|\tilde{w}\|} $ are close.
The same issue occurs in the Rayleigh quotient iteration where at each step $ \mu $ is updated with a more accurate estimate of an eigenvalue of $ A $.
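For concreteness, here is a small NumPy sketch of Rayleigh quotient iteration on a random symmetric matrix (the matrix, seed, and iteration count are my own illustrative choices, not from the book). Even though each linear solve becomes increasingly ill-conditioned as $ \mu $ converges to an eigenvalue, the iterate still converges to an eigenpair:

```python
import numpy as np

rng = np.random.default_rng(1)

# A random real symmetric test matrix (illustrative choice).
n = 6
B = rng.standard_normal((n, n))
A = (B + B.T) / 2

# Rayleigh quotient iteration: mu is updated each step from the
# current iterate, so (A - mu I) becomes nearly singular, yet the
# normalized iterate converges rapidly to an eigenvector.
v = rng.standard_normal(n)
v /= np.linalg.norm(v)
mu = v @ A @ v
for _ in range(10):
    try:
        w = np.linalg.solve(A - mu * np.eye(n), v)
    except np.linalg.LinAlgError:
        break  # (A - mu I) exactly singular: mu is an eigenvalue
    v = w / np.linalg.norm(w)
    mu = v @ A @ v

residual = np.linalg.norm(A @ v - mu * v)
print("mu =", mu, "residual =", residual)
```

The residual $ \|Av - \mu v\| $ ends up near machine precision, which is exactly the behavior the quoted claim predicts.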
I completely understand why the system is ill-conditioned when $ \mu $ is approximately an eigenvalue of $ A $. I am attempting to prove the remainder of the claim, or at least understand why it should be true. Applying the definitions of backward stability and the conditioning of the problem doesn't lead anywhere beyond the usual accuracy bound: $ \frac{\|w - \tilde{w} \|}{\|w\|} = O(\kappa(A - \mu I) \epsilon_\text{machine}) = O(1) $ for $ \mu $ near an eigenvalue of $ A $. I suspect that I need to use the fact that $ A $ is normal to move forward, but I don't see how.
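For what it's worth, a quick numerical check (my own hypothetical 6×6 example, not from the book) does reproduce the claimed behavior: with $ \mu $ within $ 10^{-13} $ of an eigenvalue, the solve is wildly ill-conditioned, yet the normalized solution is still essentially parallel to the true eigenvector:

```python
import numpy as np

rng = np.random.default_rng(0)

# Random real symmetric matrix and its exact spectrum (via eigh).
n = 6
B = rng.standard_normal((n, n))
A = (B + B.T) / 2
eigvals, eigvecs = np.linalg.eigh(A)

# Shift mu extremely close to an eigenvalue, so kappa(A - mu I) is huge
# and w itself cannot be computed accurately.
mu = eigvals[2] + 1e-13
v = rng.standard_normal(n)
v /= np.linalg.norm(v)

w = np.linalg.solve(A - mu * np.eye(n), v)
w_hat = w / np.linalg.norm(w)

q = eigvecs[:, 2]           # true unit eigenvector for eigvals[2]
alignment = abs(q @ w_hat)  # |cos(angle)| between w_hat and q

print("cond(A - mu I) =", np.linalg.cond(A - mu * np.eye(n)))
print("alignment      =", alignment)
```

The condition number comes out around $ 10^{13} $ while the alignment $ |q^T \hat{w}| $ stays at $ 1 $ to many digits: the forward error in $ \tilde{w} $ is large, but it is concentrated almost entirely in the direction of the target eigenvector, so the normalized vector is fine.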
Any help is appreciated. Thanks!
Links to the relevant Wikipedia articles:

1. Inverse iteration
2. Rayleigh quotient iteration