2

Consider the equation

$$a^x + b^x = c^x$$

or equivalently if $c\ne 0$

$$(a/c)^x + (b/c)^x = 1$$

with $a, b, c \in \mathbb{R}$.

I am interested in how you find a value of $x$ to solve such equations in general, or determine that there is no real solution.

If $a=4, b=6, c=9$, for example, you can get a closed form solution by solving a quadratic equation.
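
For instance, dividing by $9^x$ and writing $u=\left(\frac 23\right)^x$ turns that case into $$u^2+u-1=0\implies u=\frac{\sqrt 5-1}{2}\implies x=\frac{\log\frac{\sqrt 5-1}{2}}{\log\frac 23}=\frac{\log\phi}{\log\frac 32}\approx 1.1868,$$ where $\phi$ denotes the golden ratio.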

IV_
  • 7,902
  • 1
    https://www.quora.com/What-is-the-general-solution-to-a-x-b-x-c-x – Mathxx Oct 17 '22 at 04:08
  • 6
    If $a/c,b/c>1$ it is relatively easy to show there's a solution. Similarly for $a/c,b/c<1.$ There is almost certainly no algebraic or other closed form for the solution, you can only do some numeric method. – Thomas Andrews Oct 17 '22 at 04:21
  • 1
    If $a/c\leq 1\leq b/c$ or the other order, you can show there is no solution. This assumes $a,b,c$ are positive. – Thomas Andrews Oct 17 '22 at 04:24
  • What's your precise assumption about $a$, $b$ and $c$, and on $x$? – Taladris Oct 19 '22 at 09:56
  • @Taladris $a,b,c$ are all real numbers. I am particularly interested in the cases where there is a real valued solution for $x$. –  Oct 19 '22 at 11:11
  • @graffe: Can they be negative? – Taladris Oct 19 '22 at 11:55
  • @Taladris yes they can –  Oct 19 '22 at 14:16
  • 1
    That makes things trickier, since $a^x$ is then defined only for integers and for rationals of the form $x=p/q$ with $q$ odd (assuming $p/q$ irreducible), so you can't use continuity to justify the existence of a solution. – Taladris Oct 19 '22 at 14:27
  • 1
    I think that I found something funny. Look at my update. – Claude Leibovici Oct 27 '22 at 03:27
  • 1
    Have a look at https://math.stackexchange.com/questions/4562625/approximate-inverse-of-k-frac-log-1-t-log-t I think that we have something quite interesting. – Claude Leibovici Oct 27 '22 at 06:29

5 Answers

5

I prefer to add a separate answer for a more formal solution.

Consider again $$f(x)=a^x+b^x-1$$ but now with $0 < b < a < 1$.

Let $b^x=t$ and $k=\frac{\log(a)}{\log(b)} \in (0, 1)$; the equation becomes $$t^k+t-1=0$$ which admits an exact solution (have a look here) given by an infinite summation

$$t = \sum_{n=0}^\infty (-1)^n\frac{ \Gamma (k n+1)}{n! \,\Gamma ((k-1) n+2)}\quad \implies \quad x=\frac{\log(t)}{\log(b)}$$

Using $a=\frac 1e$, $b=\frac 1 \pi$, $k=\frac{1}{\log (\pi )}$ and $100$ terms for the summation, $$t=0.4765927549881905506 \implies x=0.6473954467736222713$$ while the exact solution is $x=0.6473954467736222709$
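
As a quick numerical check of this summation, here is a minimal Python sketch (variable names are mine, not part of the answer); it assumes $0<b<a<1$, so $k=\log a/\log b\in(0,1)$ and the Gamma factors avoid their poles:

```python
import math

def solve_via_series(a, b, n_terms=100):
    """Root of a^x + b^x = 1 for 0 < b < a < 1, via the series for t
    solving t^k + t = 1, where t = b^x and k = log(a)/log(b)."""
    k = math.log(a) / math.log(b)              # k in (0, 1) under the assumption above
    t = 0.0
    for n in range(n_terms):
        t += (-1) ** n * math.gamma(k * n + 1) / (math.factorial(n) * math.gamma((k - 1) * n + 2))
    return math.log(t) / math.log(b)           # x = log(t) / log(b)

print(solve_via_series(1 / math.e, 1 / math.pi))   # ~0.6473954..., as quoted above
```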

If $k$ is the reciprocal of an integer, $t$ is given in terms of a bunch of more than nasty hypergeometric functions.

For this case, working with $$g(t)=t^k+t-1$$ rather than with its logarithmic transform, which seems to be less linear, one Newton step from $t_0$ (chosen so that $t_0^k=\frac 12$) gives $$t_0=2^{-1/k}\implies t_1=\frac{k+1}{k\,2^{\frac{1}{k}} +2}$$

For $k=\frac{1}{\log (\pi )}$ as above, this gives $t_1=0.476553$ (the solution being $0.476593$) from which $x_1=0.647468$ (the solution being $0.647395$). This looks quite promising.
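
A one-line check of this single step (my own sketch; the closed form for $t_1$ is the one displayed above):

```python
import math

k = 1 / math.log(math.pi)                    # the k used in the example above

t0 = 2 ** (-1 / k)                           # starting point, chosen so that t0**k == 1/2
t1 = t0 - (t0**k + t0 - 1) / (k * t0**(k - 1) + 1)   # one Newton step on g(t) = t^k + t - 1

print(t1)                                    # ~0.476553
print((k + 1) / (k * 2 ** (1 / k) + 2))      # same number via the closed form above
print(math.log(t1) / math.log(1 / math.pi))  # x1 ~ 0.647468
```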

Edit

For the particular case where $a=\frac 69$ and $b=\frac 49$ , $k=\frac 12$ $$t = \sum_{n=0}^\infty (-1)^n\, \frac{\Gamma \left(\frac{n}{2}+1\right)}{n! \,\Gamma \left(2-\frac{n}{2}\right)}=\frac{3-\sqrt{5}}{2} \quad \implies \quad x=\frac{\cosh ^{-1}\left(\frac{3}{2}\right)}{\log \left(\frac{9}{4}\right)}$$

Update

Looking at the problem differently, we look for the inverse of $$k=\frac{\log (1-t)}{\log (t)}$$ (which is just $t^k=1-t$ rewritten). The plot of $t$ as a function of $k$ is not appealing, but the plot of $t$ as a function of $\log(k)$ is very interesting (a sigmoid curve), and it seems that a rather good approximation could be $$t\sim\frac 1 {1+k^{-\log_2 (\phi )}}$$

For $k=\frac{1}{\log (\pi )}$, it gives $t=0.476557$ while the exact solution is $t=0.476593$. The first iterate of Newton's method leads to $t=\color{red}{0.4765927549}06$
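
A small sketch of this shortcut (the update does not say which function the final Newton step is applied to; below I apply it to $g(t)=t^k+t-1$, which lands on essentially the same digits):

```python
import math

phi = (1 + math.sqrt(5)) / 2
k = 1 / math.log(math.pi)

t = 1 / (1 + k ** (-math.log2(phi)))         # the sigmoid-style approximation above
print(t)                                     # ~0.476557

t -= (t**k + t - 1) / (k * t**(k - 1) + 1)   # one Newton polish step on g(t) = t^k + t - 1
print(t)                                     # ~0.4765927..., essentially the exact root
```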

3

Partial answer.

Considering that you look for the zero of the function $$f(x)=a^x+b^x-1$$ I shall assume, without any loss of generality, that $a > b$ and $b > 0$ (in order to stay in the real domain).

For the time being, I shall assume that $b>1$ and exclude the cases where $a$ and $b$ are in such a ratio that the problem reduces to a polynomial equation of degree $\leq 4$, which can be solved with radicals. In fact, there is no solution if $b<1<a$.

Defining a few quantities

$$x_a=-\frac{\log (2)}{\log (a)} \qquad x_b=-\frac{\log (2)}{\log (b)}\qquad x_{\text{min}}=\min (x_a,x_b)\qquad x_{\text{max}}=\max (x_a,x_b)$$

the solution is such that $$x_{\text{min}} < x <x_{\text{max}}$$ Over this range, assuming that there is a solution, $$g(x)=\log(a^x+b^x)$$ should be quite close to linearity (which is good for any root finding method).

What is important is that one of these two values, call it $x_0$, will be such that $$g(x_0) \times g''(x_0)~ >~ 0$$ (since $g''>0$ everywhere here, this simply means $g(x_0)>0$).

In fact $\color{red}{x_0=x_a}$

So, by Darboux's theorem, starting the iterations at $x_0$, Newton's method will converge without any overshoot of the solution.

The first iterate is, as usual, $$x_1=x_0-\frac {g(x_0)} {g'(x_0)}\qquad \qquad g'(x)=\frac{a^x \log (a)+b^x \log (b)}{a^x+b^x}$$ Trying for $a=\pi$ and $b=e$ $$\left( \begin{array}{cc} n & x_n\\ 0 & -0.605512 \\ 1 & -0.647391 \\ 2 & -0.647395 \\ \end{array} \right)$$

If we consider the cited case of $$4^x+6^x=9^x$$ which, after dividing by $9^x$ and replacing $x$ by $-x$, corresponds to $a=\frac 94$ and $b=\frac 96$ (the root below is then the negative of the solution of the original equation)

$$\left( \begin{array}{cc} n & x_n\\ 0 & -1.183011 \\ 1 & -1.186814 \\ \end{array} \right)$$
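
For completeness, a compact sketch of these iterations (my own Python, assuming $a>b>1$ and starting at $x_0=x_a$ as described; for the second example the first step from $x_a$ already lands near $-1.1830$, which seems consistent with the table above):

```python
import math

def solve_newton(a, b, iters=5):
    """Newton's method on g(x) = log(a^x + b^x), started at x0 = -log(2)/log(a) = x_a."""
    x = -math.log(2) / math.log(a)
    for _ in range(iters):
        s = a**x + b**x
        g = math.log(s)
        dg = (a**x * math.log(a) + b**x * math.log(b)) / s
        x -= g / dg
    return x

print(solve_newton(math.pi, math.e))   # ~ -0.647395
print(solve_newton(9/4, 9/6))          # ~ -1.186814, i.e. 4^x + 6^x = 9^x at x ~ 1.186814
```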

We could also use Halley method with

$$x_{1} = x_0 - \frac {2 g(x_0) g'(x_0)} {2 {[g'(x_0)]}^2 - g(x_0) g''(x_0)} \qquad \qquad g''(x)=\frac{a^x b^x (\log (a)-\log (b))^2}{\left(a^x+b^x\right)^2}$$ which is better than the one given by Newton's method but which is an overestimate of the solution.
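
One Halley step can be sketched the same way (again my own code, using the $g'$ and $g''$ above):

```python
import math

def halley_step(a, b, x0):
    """One Halley step for g(x) = log(a^x + b^x)."""
    u, v = a**x0, b**x0
    s = u + v
    g = math.log(s)
    dg = (u * math.log(a) + v * math.log(b)) / s
    d2g = u * v * (math.log(a) - math.log(b)) ** 2 / s**2
    return x0 - 2 * g * dg / (2 * dg**2 - g * d2g)

x0 = -math.log(2) / math.log(math.pi)
print(halley_step(math.pi, math.e, x0))   # ~ -0.647395, closer than the Newton iterate -0.647391
```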

Householder method

$$x_1= x_0 - \frac{6g(x_0)\,g'(x_0)^2-3g(x_0)^2\,g''(x_0)}{6g'(x_0)^3-6g(x_0)\,g'(x_0)\,g''(x_0)+g(x_0)^2\,g'''(x_0)}$$

$$g'''(x)=-\frac{a^x b^x \left(a^x-b^x\right) (\log (a)-\log (b))^3}{\left(a^x+b^x\right)^3}$$ would not improve at all.

So, use Newton's method as described above. A couple of iterations would be sufficient.

1

Claude already gave two answers and shared an amazing article. I wanted to share my work.

Given $a^x+b^x=c^x$, let $y=\left(\frac{b}{c}\right)^x$ and $k=\frac{\ln c-\ln a}{\ln c-\ln b}$; as already obtained above, $$y^k+y=1.\;\;(*)$$ Without loss of generality $0<a<b<c$, hence $k>1$ and the equation $(*)$ has a solution in $(0,1)$. We can solve it numerically. Choose $y_0\in (0,1)$.

(NRM) Newton-Raphson Method's Iteration: $y_{n+1}=\large{\frac{(k-1)y_n^k+1}{ky_n^{k-1}+1}}$.

(FPM) Fixed Point Method's Iteration: $y_{n+1}=\sqrt[k]{1-y_n}$.

Example: $2^x+3^x=5^x$. I chose $y_0=0.9$. WA gave $y_3=0.6$ in NRM, but $y_{18}\approx 0.59$ in FPM. I rounded decimals. FPM converges much more slowly; NRM is faster. So $y=(3/5)^x=0.6$ gives the solution $x=1$.
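
A small sketch reproducing this comparison (Python; $y_0=0.9$ as above, output rounding mine):

```python
import math

a, b, c = 2, 3, 5
k = (math.log(c) - math.log(a)) / (math.log(c) - math.log(b))    # k > 1 here

nrm = lambda y: ((k - 1) * y**k + 1) / (k * y**(k - 1) + 1)      # Newton-Raphson step
fpm = lambda y: (1 - y) ** (1 / k)                               # fixed-point step

y_n = y_f = 0.9
for i in range(1, 19):
    y_n, y_f = nrm(y_n), fpm(y_f)
    if i in (3, 18):
        print(i, round(y_n, 4), round(y_f, 4))   # NRM is ~0.6 by i = 3; FPM is still settling

print(math.log(0.6) / math.log(b / c))           # recover x from y = (b/c)^x: here x = 1
```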

Bob Dobbs
  • 15,712
1

P.S.: I changed my original method because there was a flaw in the technique, as $Y\vert y$ pointed out. This new method is a simplification of the problem to a solvable form.

For this equation to make any sense, you need to restrict $a$, $b$, and $c$ to non-negative values, as $(-2)^x$, for example, is not a defined function in $\mathbb{R}$.

Then we rewrite this equation as $$u^x+v^x=1$$ with $u=a/c$ and $v=b/c$. What follows is a technique that changes this equation into a "polynomial" equation. We use the following property: $$k^{\ln(x)}$$ $$=\left(e^{\ln(k)}\right)^{\ln(x)}$$ $$=\left(e^{\ln(x)}\right)^{\ln(k)}$$ $$=x^{\ln(k)}$$ so to solve the equation $$u^x+v^x=1$$ letting $x=\ln(y)$, we get $$u^{\ln(y)}+v^{\ln(y)}=1$$ $$\Rightarrow y^{\ln(u)}+y^{\ln(v)}=1$$ Now, if we know $u$ and $v$, we can solve for $y$ using the plethora of techniques available and then recover $x=\ln(y)$.
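
As a rough illustration of that last step (my own sketch, not part of the answer), one can solve $y^{\ln(u)}+y^{\ln(v)}=1$ by plain bisection and then recover $x=\ln(y)$, assuming $0<a,b<c$:

```python
import math

def solve_via_y(a, b, c, tol=1e-12):
    """Solve a^x + b^x = c^x via u^x + v^x = 1, y = e^x, y^ln(u) + y^ln(v) = 1.
    Assumes 0 < a, b < c, so the root satisfies y > 1."""
    u, v = a / c, b / c
    h = lambda y: y ** math.log(u) + y ** math.log(v) - 1
    lo, hi = 1.0, 2.0
    while h(hi) > 0:                     # grow the bracket until the sign changes
        hi *= 2
    while hi - lo > tol:                 # bisection; h is decreasing in y here
        mid = (lo + hi) / 2
        lo, hi = (mid, hi) if h(mid) > 0 else (lo, mid)
    return math.log((lo + hi) / 2)       # x = ln(y)

print(solve_via_y(4, 6, 9))   # ~1.186814
print(solve_via_y(2, 3, 5))   # ~1.0
```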

Sam
  • 1,275
  • You make it look so simple! –  Oct 24 '22 at 19:36
  • @graffe thank you – Sam Oct 24 '22 at 19:41
  • Could you say something about which techniques would be most appropriate? –  Oct 26 '22 at 05:34
  • 1
    @graffe you need to use numerical methods to approximate a solution. Luckily, since the variable is no longer an exponent, calculations would be much easier. Depending on $u$ and $v$, you may use iterative numerical methods. Here is a Newton method that is a personal favorite. – Sam Oct 26 '22 at 07:39
1

a) Impossibility of algebraic solving

Because, in the general case, the equation is a polynomial equation in the algebraically independent monomials $a^x,b^x,c^x$ and has no univariate factor, it cannot be solved for $x$ by rearranging it using only finitely many applications of the elementary functions/operations that can be read off from the equation.

Other tricks, special functions, and numerical or series solutions could help.

b) For the interested reader

$$a^x+b^x=c^x$$ $$\left(\frac{a}{c}\right)^x+\left(\frac{b}{c}\right)^x=1$$ $\frac{a}{c}\to u,\frac{b}{c}\to v$: $$u^x+v^x=1$$ $$e^{\ln(u)x}+e^{\ln(v)x}=1$$ $x\to\frac{\ln(t)}{\ln(u)},\frac{\ln(v)}{\ln(u)}\to\alpha$: $$t+e^{\ln(t)\alpha}=1$$ $$t+t^\alpha-1=0$$

This is a form similar to a trinomial equation. A closed-form solution can therefore be obtained using the confluent Fox-Wright function $\ _1\Psi_1$.

Belkić, D.: All the trinomial roots, their powers and logarithms from the Lambert series, Bell polynomials and Fox–Wright function: illustration for genome multiplicity in survival of irradiated cells. J. Math. Chem. 57 (2019) 59-106
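
As a quick numerical sanity check of the reduction in (b) (a sketch; the variable names are mine):

```python
import math

# u = a/c, v = b/c, alpha = ln(v)/ln(u); solve t + t^alpha = 1, then x = ln(t)/ln(u)
a, b, c = 4, 6, 9
u, v = a / c, b / c
alpha = math.log(v) / math.log(u)

t, step = 0.5, 0.25                      # bisection on (0, 1); t + t^alpha - 1 is increasing in t
for _ in range(60):
    t = t + step if t + t**alpha < 1 else t - step
    step /= 2

print(math.log(t) / math.log(u))         # ~1.186814, the known root of 4^x + 6^x = 9^x
```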

IV_
  • 7,902