2

I want to approximate the zeros of the following system of polynomial equations using the Newton-Raphson method:

\begin{align} f(x,y) &= x + \frac13 y^9 + \frac19 x^{243} + \frac{1}{27} y^{2187} =0\\ g(x,y) &= y + \frac13 x^{27} + \frac19 y^{243} + \frac{1}{27} x^{6561} =0 \end{align}
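As a quick sanity check on these formulas, the system can be evaluated in Python (a sketch translating the expressions above, not the MATLAB code below): at the initial guess $(-1,-1)$ every term of each equation is negative, so both residuals equal $-(1+\frac13+\frac19+\frac1{27}) = -\frac{40}{27} \approx -1.4815$.

```python
import numpy as np

def F(v):
    """Residual vector (f, g) of the polynomial system."""
    x, y = v
    f = x + y**9 / 3 + x**243 / 9 + y**2187 / 27
    g = y + x**27 / 3 + y**243 / 9 + x**6561 / 27
    return np.array([f, g])

print(F(np.array([-1.0, -1.0])))  # both components are -40/27 ≈ -1.481481
```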

I have used the following Matlab code with initial guess $(-1,-1)$ and with error tolerance $0.01$:

close all; clc; clear
% Newton-Raphson solution of two nonlinear algebraic equations
xy = [-1 -1];              % initial guess
iter = 0;
maxiter = 100;
xy_N = 0.1;                % residual norm (start above TOL)
TOL = 0.01;
f = zeros(maxiter, 2);     % residuals of each iteration
err = zeros(maxiter, 2);   % step sizes ('error' would shadow a builtin)
% begin iteration
while xy_N > TOL && iter < maxiter
    iter = iter + 1;
    x = xy(1);
    y = xy(2);
    % evaluate the functions
    f(iter,:) = [x + (1/3)*y^9 + (1/9)*x^243 + (1/27)*y^2187, ...
                 y + (1/3)*x^27 + (1/9)*y^243 + (1/27)*x^6561];
    % analytic Jacobian (note 6561/27 = 243, not 24)
    J = [1 + 27*x^242,         3*y^8 + 81*y^2186;
         9*x^26 + 243*x^6560,  1 + 27*y^242];
    xy_new = xy - (J \ f(iter,:)')';
    err(iter,:) = abs(xy_new - xy);
    xy_N = norm(f(iter,:));    % residual norm used as stopping criterion
    XYN(iter) = xy_N;          % history of residual norms
    xy = xy_new;
end
if xy_N > TOL
    fprintf('Did not converge in %d iterations!\n', maxiter);
end
% drop unused preallocated rows
f(iter+1:end,:) = [];
err(iter+1:end,:) = [];
% Print results
fprintf('Number of iterations: %d\n', iter)
fprintf('Final values of [x, y] = %f %f\n', xy);
fprintf('Final values of EQN: %f %f\n', f(end,:));
fprintf('Final values of ERROR: %f %f\n', err(end,:));

But it gives an error. The following is displayed:

Warning: Matrix is singular to working precision.
Warning: Matrix is singular, close to singular or badly scaled. Results may be inaccurate. RCOND = NaN.
Number of iterations is:  4.000000
Final values of [x, y] =   NaN  NaN
Final values of EQN:  NaN  NaN
Final values of ERROR:  NaN  NaN

Could you please help me fix the code so that it converges? Suggestions for more flexible code to solve my system are also welcome.


Edit: Since the Jacobian is not invertible, I am adding one more term to each function in order to check my program as follows:

\begin{align} f(x,y) &= x + \frac13 y^9 + \frac19 x^{243} + \frac{1}{27} y^{2187} +\frac{1}{81}x^{59049}=0\\ g(x,y) &= y + \frac13 x^{27} + \frac19 y^{243} + \frac{1}{27} x^{6561} +\frac{1}{81}y^{59049}=0 \end{align}

I hope the Jacobian will now be non-singular.

MAS
  • Nonlinear is too broad. Polynomial is accurate and richer. – Rodrigo de Azevedo Apr 30 '22 at 12:27
  • If you care about the solution per se (which maybe you don't, and I would understand if you don't), fsolve solves this equation immediately with your initial guess. – Ian Apr 30 '22 at 12:43
  • @Ian, fsolve? I really don't know this function. I would request you to write an answer that would help me. – MAS Apr 30 '22 at 13:11
  • It's just Matlab's built in solver for algebraic equations, or one of several of them. It's quite simple to learn to use. – Ian Apr 30 '22 at 13:17
  • According to this answer by Cesareo, a clear figure shows the zeroes will be $(0,0),\approx(-1,1),\approx(1,-1),\approx(-1,-1)$. But why doesn't my code converge to these zeros? Can someone modify my code? – MAS Apr 30 '22 at 23:18
  • Adding an even higher degree term just makes the problem worse. The main part of the issue here is that the numerical domain of this function is something like $[-1.1,1.1] \times [-1.4,1.4]$. If you leave there you get infinities from floating point arithmetic and the method crashes. Adding an even higher exponent term will just make this get even closer to $[-1,1] \times [-1,1]$. – Ian May 02 '22 at 13:16
  • You have two problems of different kind mixed in one unclear post. Please ask one clear question. Which is the one system of algebraic equations to be "solved"? – dan_fulea May 16 '22 at 13:00

2 Answers

1

The problem seems to be that the Jacobian is not invertible in the fourth iteration. I think this happens because the NR method wants to converge to the solution $x := (-1.0005, 1.0014)^\top$, but $J(x)$ is not invertible there. The usual convergence results for the NR method all apply to non-degenerate solutions $y$, i.e. those with $\det{J(y)} \neq 0$ (of course you cannot check that in advance).

One suggestion that comes to mind is to store the Jacobian $\texttt{J}$ from the most recent iteration in which it was invertible, and to fall back on that old $\texttt{J}$ whenever it is not invertible in the current iteration (the so-called frozen NR method). Even though I do not know for certain whether this converges here, the chances are certainly higher.
(Fun fact: this would actually converge locally, though not as fast as the normal NR method, if $J(x)$ were invertible.)
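A minimal Python/NumPy sketch of this frozen variant (the condition-number cutoff `1e12` is an arbitrary choice, and the starting point $(0.5,-0.5)$ is an assumption chosen inside the region where the huge powers are negligible):

```python
import numpy as np

def F(v):
    """Residual vector (f, g) of the system from the question."""
    x, y = v
    return np.array([x + y**9/3 + x**243/9 + y**2187/27,
                     y + x**27/3 + y**243/9 + x**6561/27])

def Jac(v):
    """Analytic Jacobian of F."""
    x, y = v
    return np.array([[1 + 27*x**242,         3*y**8 + 81*y**2186],
                     [9*x**26 + 243*x**6560, 1 + 27*y**242]])

def newton_frozen(x0, tol=1e-10, maxiter=100, condmax=1e12):
    """Newton iteration that reuses the last well-conditioned Jacobian
    whenever the current one is singular or non-finite.
    Assumes the Jacobian at x0 itself is usable."""
    x = np.asarray(x0, dtype=float)
    J_good = Jac(x)
    for _ in range(maxiter):
        Fx = F(x)
        if np.linalg.norm(Fx) < tol:
            break
        Jx = Jac(x)
        if np.all(np.isfinite(Jx)) and np.linalg.cond(Jx) < condmax:
            J_good = Jx            # remember the latest usable Jacobian
        x = x - np.linalg.solve(J_good, Fx)
    return x

print(newton_frozen([0.5, -0.5]))  # converges to the trivial zero (0, 0)
```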

Another option is to use the pseudo-inverse of $J$ instead of the $\texttt{\\}$ operator.
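The pseudo-inverse variant is a one-line change to the update; a sketch using `numpy.linalg.pinv` (the starting point $(0.5,-0.5)$ is an assumption chosen inside the region where the huge powers are negligible; near a truly singular Jacobian the least-squares step is only a heuristic):

```python
import numpy as np

def F(v):
    """Residual vector (f, g) of the system from the question."""
    x, y = v
    return np.array([x + y**9/3 + x**243/9 + y**2187/27,
                     y + x**27/3 + y**243/9 + x**6561/27])

def Jac(v):
    """Analytic Jacobian of F."""
    x, y = v
    return np.array([[1 + 27*x**242,         3*y**8 + 81*y**2186],
                     [9*x**26 + 243*x**6560, 1 + 27*y**242]])

def newton_pinv(x0, tol=1e-10, maxiter=100):
    """Newton iteration whose step uses the Moore-Penrose pseudo-inverse,
    which still yields a least-squares step when Jac(x) is singular."""
    x = np.asarray(x0, dtype=float)
    for _ in range(maxiter):
        Fx = F(x)
        if np.linalg.norm(Fx) < tol:
            break
        x = x - np.linalg.pinv(Jac(x)) @ Fx
    return x

print(newton_pinv([0.5, -0.5]))  # converges to the trivial zero (0, 0)
```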

  • I suspect you have the wrong interpretation here, actually. You're right that degeneracy at the solution will slow convergence, but generally the forward error decays together with the determinant of the Jacobian in that scenario, causing increments to remain not too big. Think of the 1D example $f(x)=x^2$ with $x_0=1$. The Newton iteration is $x_{n+1}=x_n-x_n^2/2x_n=x_n/2$; it's not quadratic convergence, but your program only complains when you're pretty much already there anyway. Eddy Piedad's answer suggests the method never actually gets anywhere near the solution. – Ian Apr 30 '22 at 12:48
  • I see, fair enough, you are right of course. What I meant though: We know, that if the solution is non-degenerate, then there is a small neighborhood in which the Jacobian is non-singular. – Hyperbolic PDE friend Apr 30 '22 at 12:50
  • @Meowdog, how did you find $(-1.0005, 1.0014)$? Did this appear in an iteration? I think that is a good point, because I also checked that another similar polynomial system converges to such a point using Sage code. – MAS Apr 30 '22 at 13:20
  • @learner : As the powers used are now odd, you will need initial approximation values with opposite sign, for instance $(x,y)=(1,-1)$ or $(x,y)=(-1,1)$, to get even somewhat close to zero. – Lutz Lehmann Apr 30 '22 at 17:33
  • @LutzLehmann, thanks. $(0,0)$ is a trivial solution. I am looking for the zeroes $\approx (\pm 1, \mp 1)$ as Meowdog indicates. But my code above is not efficient for the higher-exponent functions. – MAS Apr 30 '22 at 17:56
  • @learner : I meant in the function values. If you add quantities of the same sign, you move away from zero. – Lutz Lehmann Apr 30 '22 at 19:53
  • @LutzLehmann, ohh. Yes. Thanks. But the higher exponents make the code inefficient. Have you successfully tried the above code? – MAS Apr 30 '22 at 22:37
  • @LutzLehmann, you can look at the answer by Cesareo: a clear figure shows the zeroes will be $(0,0),\approx(-1,1),\approx(1,-1),\approx(-1,-1)$. But why doesn't my code converge to these zeros? Can you modify my code? – MAS Apr 30 '22 at 23:15
  • @learner I just used matlab's $\texttt{fsolve}$ to find that point – Hyperbolic PDE friend May 01 '22 at 09:34
  • @Meowdog, would you like to write the fsolve code in your answer? – MAS May 01 '22 at 14:46
  • It is just: f=@(x) [x(1)+x(2)^9/3+x(1)^243/9+x(2)^2187/27;x(2)+x(1)^27/3+x(2)^243/9+x(1)^6561/27]; fsolve(f,[-1;-1]) – Ian May 02 '22 at 13:07
  • @Ian: Do you know how fsolve works? I am actually curious... – Hyperbolic PDE friend May 02 '22 at 18:13
  • It has several algorithms to choose from but they are fairly ordinary optimization algorithms: trust region, dogleg, etc. – Ian May 02 '22 at 18:19
  • Ah, i see. Thanks! – Hyperbolic PDE friend May 02 '22 at 18:24
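For reference, the same approach ports directly to Python; a sketch using `scipy.optimize.fsolve`, which wraps the same kind of MINPACK hybrid (dogleg) algorithm as MATLAB's fsolve (the initial guess $(-1,-1)$ is the one from the question):

```python
import numpy as np
from scipy.optimize import fsolve

def F(v):
    # v arrives as float64, so huge powers overflow to inf instead of raising
    x, y = v
    return [x + y**9/3 + x**243/9 + y**2187/27,
            y + x**27/3 + y**243/9 + x**6561/27]

root, info, ier, msg = fsolve(F, [-1.0, -1.0], full_output=True)
print(root, np.linalg.norm(info["fvec"]))  # residual norm at the returned root
```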
1

I have simulated your code step by step. On the third iteration, it blew up.

>>test_math
Number of iterations is:  1.000000 
 Final values of [x, y] =   -0.958268  -0.996274 
Final values of EQN:  -1.481481  -1.481481 
Final values of ERROR:  0.041732  0.003726 
Number of iterations is:  2.000000 
 Final values of [x, y] =   2.610643  -1.777055 
Final values of EQN:  -1.280603  -1.280603 
Final values of ERROR:  3.568911  0.780782 
Warning: Matrix is singular to working precision. 
 In test_math (line 21) 
Number of iterations is:  3.000000 
 Final values of [x, y] =   NaN  NaN 
Final values of EQN:  -Inf  -Inf 
Final values of ERROR:  NaN  NaN 

I investigated the result and found that on the 3rd iteration the Jacobian matrix contains Inf entries, so the linear solve breaks down.

1st iteration

J =
28    84
33    28

2nd iteration

J =
1.0009    2.9349
2.9710   11.9402

3rd iteration

J =
   1.0e+102 *
   1.9243       Inf
      Inf    0.0000

This probably happens when $|x|$ or $|y|$ exceeds $1$ and is raised to a very large exponent: the power overflows the double-precision range and becomes Inf.
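This overflow is easy to reproduce in isolation; a small Python check of the same floating-point arithmetic (the value $2.610643$ is the $x$ from the second iteration above):

```python
import numpy as np

x = np.float64(2.610643)             # x after the 2nd iteration shown above
with np.errstate(over='ignore'):     # silence the overflow RuntimeWarning
    print(x**6560)                   # inf: 2.61^6560 has ~2700 decimal digits
print(np.finfo(np.float64).max)      # largest double, about 1.7977e+308
```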

  • Your individual iterations seem to suggest the method is actually going the wrong way; the nearby (relatively) solution is up and just a hair to the left, but it goes to the right instead. – Ian Apr 30 '22 at 12:42
  • That makes me wonder what $J$ and the forward error look like after an iteration. – Ian Apr 30 '22 at 12:48
  • @Ian I edited and included the Jacobian of the first two iterations. I hope this helps. – Eddy Piedad Apr 30 '22 at 15:07
  • @EddyPiedad, we can see from the answer of Cesareo that the zeroes are $(0,0), \approx(-1,1), \approx(1,-1),\approx(-1,-1)$. Why doesn't my code converge there? – MAS Apr 30 '22 at 23:09
  • I suppose you both have different systems of equations. The other answer here explains why. You may use FSOLVE https://www.mathworks.com/help/optim/ug/fsolve.html as suggested by Meowdog. – Eddy Piedad May 02 '22 at 09:51