
Here the $x_n$ are real, the $a_n$ are positive, and the summation has finitely many terms.

The picture is very clear: between consecutive poles the function decreases strictly from $+\infty$ to $-\infty$, so each interval contains exactly one root.

But what numerical algorithm is stable and efficient? Suppose the $x_n$ are sorted in increasing order; then between $x_n$ and $x_{n+1}$ there must be a root. One can use the bisection method, but it is slow. Or one can try Newton's method, but it is not necessarily stable.
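For concreteness, a minimal bisection baseline might look like this (a sketch in Python; the sample data, the constant $1$ on the right-hand side, and the endpoint offset `eps` are illustrative assumptions, since the question does not fix them):

```python
import numpy as np

def bisect_root(f, lo, hi, tol=1e-12, maxit=200):
    # Plain bisection on a sign-change bracket: unconditionally stable,
    # but it only gains one bit of accuracy per iteration.
    flo = f(lo)
    for _ in range(maxit):
        mid = 0.5 * (lo + hi)
        fmid = f(mid)
        if fmid == 0.0 or hi - lo < tol:
            break
        if flo * fmid < 0:
            hi = mid                # sign change lies in [lo, mid]
        else:
            lo, flo = mid, fmid     # sign change lies in [mid, hi]
    return 0.5 * (lo + hi)

# example: sum_n a_n/(x - x_n) = 1 with three poles, root in (x_0, x_1)
a  = np.array([1.0, 2.0, 0.5])
xs = np.array([0.0, 1.0, 3.0])
f  = lambda x: np.sum(a / (x - xs)) - 1.0
eps = 1e-9 * (xs[1] - xs[0])        # stay strictly between the poles
print(bisect_root(f, xs[0] + eps, xs[1] - eps))
```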

S. Kohn
  • 1,128

3 Answers

6

I do not know the context of your problem, but over the years we have worked a lot on the problem of solving the so-called Underwood equation, which appears in shortcut distillation problems. It is usually written $$\sum_{i=1}^n \frac{\alpha_i\, z_i}{\alpha_i- \theta}=1-q$$ where $\alpha_i> 0$ and $z_i >0$, $n$ can be very large (potentially up to thousands), and $q$ is given.

In chemical process design, this equation has to be solved zillions of times (optimization problems with hundreds of variables). Because of that, we needed very fast, stable and robust solutions.

Our most recent work was published in $2014$ in this paper (you can also find it here), where we proposed rapid and robust solution methods using convex transformations. Besides, and this is a key point, for every root we proposed simple and efficient starting guesses which typically mean that very few iterations are required (this is illustrated in the first figure, which shows that the starting guess is almost the solution).

I consider the paper sufficiently clear and detailed (with several examples) to help you. If you have any problem, feel free to contact me (my e-mail address is in my profile).

Edit

If you want something simpler, consider for example (this is easy to generalize) $$f(x)=\sum_{i=1}^6 \frac{a_i}{x- b_i}-1$$ For the root between $b_1$ and $b_{2}$, consider instead $$g_{(1,2)}(x)=(x-b_1)(x-b_2) f(x)$$ which is $$g_{(1,2)}(x)=a_1 (x-b_2)+a_2 (x-b_1)-(x-b_1) (x-b_2)+$$ $$(x-b_1) (x-b_2) \left(\frac{a_3}{x-b_3}+\frac{a_4}{x-b_4}+\frac{a_5}{x-b_5}+\frac{a_6}{x-b_6}\right)$$ and then $$g_{(1,2)}(b_1)=a_1 (b_1-b_2)\qquad \text{and} \qquad g_{(1,2)}(b_2)=-a_2 (b_1-b_2)$$ Now use, for example, the subroutine rtsafe from "Numerical Recipes", which combines bisection and Newton-Raphson steps (this is required since, between the bounds $b_1$ and $b_2$, the function $g_{(1,2)}(x)$ goes through an extremum); the code is here. This works quite well without any convergence problem (but it is much less efficient than what was proposed in our paper). As you can see, the simple idea is just to remove the asymptotes (this is the so-called Leibovici & Neoschil method, which has been widely used for this class of problems during the last $26$ years).
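A minimal sketch of this scheme in Python/NumPy (the helper name `g_and_dg` and the sample data are my choices; `rtsafe` below re-implements the bisection/Newton safeguard idea of the Numerical Recipes routine, not its exact code):

```python
import numpy as np

def g_and_dg(x, a, b, i, j, rhs=1.0):
    # Pole-free expansion of g_(i,j)(x) = (x-b[i])*(x-b[j])*f(x) with
    # f(x) = sum_k a[k]/(x-b[k]) - rhs, finite on the whole bracket [b[i], b[j]].
    p, q = x - b[i], x - b[j]
    mask = np.ones(len(b), dtype=bool)
    mask[[i, j]] = False
    s  = np.sum(a[mask] / (x - b[mask]))        # remaining partial fractions
    ds = -np.sum(a[mask] / (x - b[mask])**2)    # ... and their derivative
    g  = a[i]*q + a[j]*p - rhs*p*q + p*q*s
    dg = a[i] + a[j] - rhs*(p + q) + (p + q)*s + p*q*ds
    return g, dg

def rtsafe(func, x1, x2, args=(), tol=1e-12, maxit=100):
    # Safeguarded Newton: keep a sign-change bracket and fall back to
    # bisection whenever the Newton step leaves it or converges too slowly.
    fl, _ = func(x1, *args)
    fh, _ = func(x2, *args)
    if fl * fh > 0:
        raise ValueError("root must be bracketed")
    xl, xh = (x1, x2) if fl < 0 else (x2, x1)   # orient so func(xl) < 0
    x = 0.5 * (x1 + x2)
    dxold = dx = abs(x2 - x1)
    f, df = func(x, *args)
    for _ in range(maxit):
        if ((x - xh)*df - f) * ((x - xl)*df - f) > 0 or abs(2*f) > abs(dxold*df):
            dxold, dx = dx, 0.5 * (xh - xl)     # bisection step
            x = xl + dx
        else:
            dxold, dx = dx, f / df              # Newton step
            x -= dx
        if abs(dx) < tol:
            return x
        f, df = func(x, *args)
        if f < 0:
            xl = x
        else:
            xh = x
    return x

# root of f between b[0] and b[1] (g has opposite signs at the endpoints)
a = np.array([1.0, 2.0, 0.5, 1.5, 1.0, 0.7])
b = np.array([0.0, 1.0, 2.0, 3.0, 4.0, 5.0])
print(rtsafe(g_and_dg, b[0], b[1], args=(a, b, 0, 1)))
```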

You could even restrict the search to the interval $(b_1,x_*)$ or $(x_*,b_2)$, where $x_*=\frac{a_1b_2+a_2b_1}{a_1+a_2}$ is obtained by linear interpolation between the endpoint values of $g_{(1,2)}$ (you just need to check the sign of $g_{(1,2)}(x_*)$).
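Continuing the sketch above (again with hypothetical names, building on `g_and_dg` and `rtsafe`), the refined bracket could be computed as:

```python
def refine_bracket(a, b, i, j, g):
    # x_star is the secant zero through (b[i], g(b[i])) and (b[j], g(b[j]));
    # keep whichever sub-interval still shows the sign change.
    x_star = (a[i] * b[j] + a[j] * b[i]) / (a[i] + a[j])
    gi, gs = g(b[i]), g(x_star)
    return (b[i], x_star) if gi * gs < 0 else (x_star, b[j])

g = lambda x: g_and_dg(x, a, b, 0, 1)[0]
lo, hi = refine_bracket(a, b, 0, 1, g)
print(rtsafe(g_and_dg, lo, hi, args=(a, b, 0, 1)))
```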

  • (+1) as deserved. – Jack D'Aurizio Jan 19 '19 at 16:52
  • @JackD'Aurizio. If you have time to waste, would you be willing to read the paper and be critical? Thanks. – Claude Leibovici Jan 19 '19 at 17:09
  • I will do it, sure. I was just wondering: has anyone ever proposed a hybrid between your method and methods for simultaneous root finding through companion matrices? – Jack D'Aurizio Jan 19 '19 at 17:17
  • Also: do you recall the first time we discussed this problem here on MSE? I cannot manage to find the relevant thread. – Jack D'Aurizio Jan 19 '19 at 17:21
  • @JackD'Aurizio. You are totally wrong! Big difference: rhs $=0$. It is https://math.stackexchange.com/questions/543020/searching-for-tighter-bounds . For sure, I remember. I spoke at noon about this answer of yours to my coworker and coauthor of the paper. Cheers. – Claude Leibovici Jan 19 '19 at 18:29
  • Of course you're right, but the idea of gaining convexity from a suitable separation of the terms defining $f(x)$ still applies to the $\text{RHS}\neq 0$ case, and can probably be used both for obtaining good initial guesses and for ensuring the stability of Newton's method or the secant-tangent method. – Jack D'Aurizio Jan 19 '19 at 18:36
  • Sorry, but I cannot see why studying $g$ is simpler than $f$. What specific property does $g$ have? – S. Kohn Mar 09 '19 at 08:24
  • @S.Kohn. Since the asymptotes have been removed, there are far fewer problems, faster convergence, and so on. Add to that the initial estimates of the solution. Have you been able to look at the paper? Cheers. – Claude Leibovici Mar 09 '19 at 09:01
  • @ClaudeLeibovici I am now reading your paper. It is not so easy to follow. In equation (5), the variable $a$ suddenly pops up. – S. Kohn Mar 09 '19 at 09:09
  • @S.Kohn. This refers to a previous paper on a "similar" topic $(rhs=0)$. We have published many papers in this specific area because of the very high importance of these equations in industry (mainly oil and gas). My first paper on this topic was in $1993$. – Claude Leibovici Mar 09 '19 at 09:43
4

This looks to me like the secular equation, which is used in the divide-and-conquer algorithm for the symmetric eigenvalue problem, so it is widely studied, and efficient, stable implementations should be available. Here is an overview paper: A numerical comparison of methods for solving secular equations.
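For context (a standard fact about that algorithm, not stated in the answer): after a rank-one update $D+\rho z z^T$ of a diagonal matrix $D=\operatorname{diag}(d_1,\dots,d_n)$, the new eigenvalues $\lambda$ are the roots of the secular equation $$1+\rho\sum_{j=1}^n\frac{z_j^2}{d_j-\lambda}=0,$$ which is exactly the question's form, with positive weights $z_j^2$ and poles at the $d_j$; LAPACK's dlaed4 is one careful implementation of such a solver.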

Here are some slides from Golub on the secular equation.

4

Extended comment: This problem looks close to the central relation of the Durand-Kerner method: for a monic polynomial $p$ of degree $n$ and $n$ root approximations $z_1,\dots,z_n$, consider the partial fraction decomposition $$ \frac{p(x)}{\prod_{j=1}^n(x-z_j)}=1-\sum_{j=1}^n\frac{w_j}{x-z_j}. $$ Then, multiplying by $x-z_m$ and setting $x=z_m$, one finds $$w_m=-\frac{p(z_m)}{\prod_{j\ne m}(z_m-z_j)},$$ and the next root approximations are $z_j'=z_j+w_j$.
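A minimal sketch of this iteration in Python/NumPy (the function name and the starting configuration are my choices, not from the answer):

```python
import numpy as np

def durand_kerner(coeffs, maxit=200, tol=1e-12):
    # Simultaneous root iteration z_j' = z_j + w_j with the Weierstrass
    # correction w_m = -p(z_m) / prod_{j != m} (z_m - z_j).
    # `coeffs` holds the monic polynomial's coefficients, highest degree first.
    n = len(coeffs) - 1
    z = (0.4 + 0.9j) ** np.arange(n)    # classic non-symmetric starting points
    for _ in range(maxit):
        w = np.array([-np.polyval(coeffs, z[m]) /
                      np.prod(z[m] - np.delete(z, m)) for m in range(n)])
        z = z + w
        if np.max(np.abs(w)) < tol:
            break
    return z

# roots of (x-1)(x-2)(x-3) = x^3 - 6x^2 + 11x - 6
print(np.sort_complex(durand_kerner([1, -6, 11, -6])))
```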

V. Pan has published several papers and technical reports on fast computation of this iteration, on accelerating convergence beyond the Durand-Kerner and Aberth-Ehrlich methods, on multipole expansions, etc., all making extensive use of the first equation.

Lutz Lehmann
  • 131,652