3

The question is motivated by the definition of self-similarity dimension for self-similar sets:

Let $M \subset \mathbb R^N$ be self-similar. That is, there are subsets $T_1, \ldots, T_n \subsetneqq M$ and similarity maps $\alpha_1, \ldots, \alpha_n$ with ratios $0 < c_i < 1$ such that $\bigcup_{i=1}^n T_i = M$ and $\alpha_i(M)=T_i$ for all $i=1,\ldots,n$. We call the $d \in \mathbb R$ satisfying $c_1^d+\ldots+c_n^d=1$ the similarity dimension of $M$.

How do we solve $c_1^d+\ldots+c_n^d=1$ for $d$?

Leo
    Analytically or numerically? Is there a reason to have hope in the former case? – anon May 27 '14 at 03:14
  • @seaturtles: As far as I can tell, this is not how the dimension is calculated in practice, but I had no idea how to approach the equation. I was looking for an analytical solution. – Leo May 27 '14 at 03:15
  • @MarkMcClure: I myself am always irritated when I see old answers (even accepted ones) with no upvote. But I can assure you that the only reason I haven't upvoted the answers is that I didn't have time to carefully go through them and I find it a bit weird to upvote before reading. – Leo May 28 '14 at 02:53
  • Thanks - that sounds believable. Glad to hear it, actually, because you've been asking some interesting questions! – Mark McClure May 28 '14 at 03:00

3 Answers

6

This is just an expansion on the previous two answers.

Using Newton's method

As Claude points out, the solution to $c_1^d+\ldots+c_n^d=1$ may be estimated numerically by using Newton's method to find a root of the function $$ f(d)=\left(\sum _{i=1}^n c_i^d \right)-1. $$ In fact, it may be proved that this works using any positive starting value $d_0$.

To see this, first note that for any $c\in(0,1)$, the function $g(d)=c^d$ is continuous and strictly decreasing with $g(0)=1$ and $$\lim_{d\rightarrow\infty}g(d)=0.$$ It follows that $f$ is continuous and strictly decreasing with $f(0)=n-1>0$ (assuming $n\geq 2$) and $$\lim_{d\rightarrow\infty}f(d)=-1.$$ By the intermediate value theorem, $f$ has a positive root. That root is unique since $f$ is strictly decreasing.

Finally, $g(d)$ (and, therefore, $f(d)$) has a positive second derivative. It is therefore convex (often called concave up in calculus). Under these conditions, it can be proved that Newton's method will converge to the unique root. This is fairly easy to see if you understand the "follow the tangent" approach to Newton's method; I've tried to illustrate this in the picture below. It's also proved on page four of this paper.

[Figure: Newton iterates following tangent lines down the convex, decreasing graph of $f$ toward its unique positive root]
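To make this concrete, here is a minimal Python sketch of the iteration; the function name `similarity_dimension` and its defaults are my own, not from the answer.

```python
import math

def similarity_dimension(ratios, d0=1.0, tol=1e-12, max_iter=100):
    """Newton's method for f(d) = sum(c_i**d) - 1.

    By the convexity argument above, any positive starting value d0
    converges to the unique positive root.
    """
    d = d0
    for _ in range(max_iter):
        f = sum(c**d for c in ratios) - 1
        fp = sum(c**d * math.log(c) for c in ratios)  # f'(d) < 0
        step = f / fp
        d -= step
        if abs(step) < tol:
            return d
    raise RuntimeError("Newton's method did not converge")

# Two pieces scaled by 1/2 and 1/4 (the example solved exactly below):
print(similarity_dimension([1/2, 1/4]))   # about 0.694242
```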

Analytic solution

As the other answer points out, the equation is easy to solve in the case that $c_1=\dots=c_n=c$. I disagree with the statement that "This is by far the most common case in practice," however. Perhaps the case $c_1=\dots=c_n=c$ occurs often in illustrative examples because it is the easiest case to solve and because of its clear connection with box-counting dimension. The whole point of the formula $c_1^d+\ldots+c_n^d=1$, however, is that there are plenty of examples that do not fit this scheme. I suppose that one could argue similarly that self-similar sets are not typical of fractals in general, but they appear quite common because we understand them.

At any rate, we can generally find some sort of analytic solution of $c_1^d+\ldots+c_n^d=1$ precisely when the $c_i$s are exponentially commensurable, i.e. they can all be expressed as integer powers of a common base. As a simple example, consider $$\left(\frac{1}{2}\right)^d + \left(\frac{1}{4}\right)^d = 1.$$ This equation might represent the fractal dimension of a Cantor type set obtained by replacing an interval with two pieces, one scaled by the factor $1/2$ and the other by the factor $1/4$, and then iterating that procedure. Now, since $4=2^2$, the left hand side can be rewritten as $$\left(\frac{1}{2}\right)^d + \left(\frac{1}{4}\right)^d = \left(\frac{1}{2}\right)^d + \left(\frac{1}{2}\right)^{2d} = \left(\frac{1}{2}\right)^d + \left(\left(\frac{1}{2}\right)^{d}\right)^2.$$ Thus, the equation can be rewritten as $$\left(\frac{1}{2}\right)^d + \left(\left(\frac{1}{2}\right)^{d}\right)^2=1.$$ Substituting $q=(1/2)^d$, we get the quadratic $q^2+q-1=0$, which has the unique positive solution $q=\left(\sqrt{5}-1\right)/2.$ The solution to the original equation is, therefore, $$d=\frac{\log\left(\frac{1}{2}\left(\sqrt{5}-1\right)\right)}{\log(1/2)}.$$
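As a quick sanity check (a hypothetical snippet of mine, not from the answer itself), one can confirm this value numerically:

```python
import math

# d = log((sqrt(5) - 1)/2) / log(1/2), from the quadratic substitution above
d = math.log((math.sqrt(5) - 1) / 2) / math.log(1 / 2)
print(d)                      # about 0.694242
print((1/2)**d + (1/4)**d)    # prints 1.0, up to floating-point rounding
```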

A somewhat trickier case is provided by the example $$\left(\frac{1}{8}\right)^d + \left(\frac{1}{4}\right)^d = 1.$$ With the same substitution $q=(1/2)^d$, this leads to the cubic $q^3+q^2-1=0$. The solution is therefore $$d=\frac{\log(\lambda)}{\log(1/2)}\approx 0.405685,$$ where $$\lambda = \frac{1}{3} \left(-1+\sqrt[3]{\frac{25}{2}-\frac{3 \sqrt{69}}{2}}+\sqrt[3]{\frac{1}{2} \left(25+3 \sqrt{69}\right)}\right) \approx 0.75487$$ is the unique real root of $q^3+q^2-1=0$. Note that we have an analytic expression in terms of the root of a polynomial. In this case, $\lambda$ can be expressed in terms of radicals, while in other cases it might not be.
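Here is a minimal sketch (assuming NumPy is available) that recovers $\lambda$ and $d$ numerically:

```python
import numpy as np

# Coefficients of q^3 + q^2 + 0*q - 1, listed from highest degree down
roots = np.roots([1, 1, 0, -1])
lam = next(r.real for r in roots if abs(r.imag) < 1e-9)  # the unique real root
d = np.log(lam) / np.log(1 / 2)
print(lam, d)   # about 0.754878 and 0.405685
```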

Mark McClure
  • Very nice analysis ! Cheers. – Claude Leibovici May 28 '14 at 07:43
  • In the spirit of what you wrote about Newton, $f(0)=n-1>0$, $f'(0)<0$ and $f''(0)>0$ are sufficient to prove that Newton will always converge without any overshoot of the solution if we start at $d=0$, even if it can be a poor estimate of the solution. – Claude Leibovici May 29 '14 at 07:47
5

For the general case where the $c_i$ are not all identical, as answered by words that end in GRY, there is no analytical solution and I suppose that only numerical methods can be used. Newton's iterative scheme is probably the simplest way to solve this equation, setting $$f(d)=\sum _{i=1}^n c_i^d-1,$$ $$f'(d)=\sum _{i=1}^n c_i^d \log (c_i),$$ and, as usual, $$d_{n+1}=d_n-\frac {f(d_n)}{f'(d_n)}.$$ Using the example from the answer by words that end in GRY and being very lazy, let us start iterating at $d_0=0$. The successive iterates are $0.558111$, $0.765304$, $0.787654$, $0.787885$, which is the solution to six significant figures.
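A minimal Python transcription of this scheme (the function name is mine) reproduces the iterates quoted above, and also handles the ten-term example discussed next:

```python
import math

def newton_iterates(ratios, d=0.0, steps=6):
    """Print successive Newton iterates for f(d) = sum(c_i**d) - 1."""
    for _ in range(steps):
        f = sum(c**d for c in ratios) - 1
        fp = sum(c**d * math.log(c) for c in ratios)
        d -= f / fp
        print(d)
    return d

newton_iterates([1/2, 1/3])                   # 0.558111, 0.765304, 0.787654, ...
newton_iterates([1/k for k in range(2, 12)])  # the ten-term example below
```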

Let us repeat the calculation with ten terms (the reciprocals of the integers from $2$ to $11$), again starting the iterations at $d_0=0$. The successive iterates are $0.514218$, $0.992152$, $1.34585$, $1.49664$, $1.51723$, $1.51756$, which is the solution to six significant figures.

We could have been less lazy and saved one iteration since, starting at $d_0=0$, the first iterate is just given by $$d_1=-\frac{n-1}{\sum _{i=1}^n \log (c_i)},$$ so we could begin directly with this estimate.

Added later to this answer

As clearly shown by Mark McClure, Newton iterations will converge to the solution; if we start at an estimate smaller than the solution, they do so monotonically, without overshoot. This can be justified (if still needed) by the fact that $f(0)=n-1>0$, $f'(0)<0$ and $f''(0)>0$.

As shown in my second example, the estimate I used is quite far from the solution. But this can be improved quite significantly if, instead of Newton's scheme (quadratic convergence), we use Halley's (cubic convergence) or Householder's (quartic convergence) scheme. For this example, the first estimates generated at $d_0=0$ are respectively $0.5142$, $0.7665$ and $1.3937$.
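For the curious, here is a minimal Python sketch of one Halley step for this $f$, written in the standard rational form of the iteration; higher-order schemes come in several variants, so the answer's exact variant is an assumption and first iterates need not match the values quoted above.

```python
import math

def halley_step(ratios, d):
    """One step of Halley's method (standard rational form) for
    f(d) = sum(c_i**d) - 1, using f, f', and f'' in closed form."""
    f   = sum(c**d for c in ratios) - 1
    fp  = sum(c**d * math.log(c) for c in ratios)
    fpp = sum(c**d * math.log(c) ** 2 for c in ratios)
    return d - 2 * f * fp / (2 * fp**2 - f * fpp)

ratios = [1 / k for k in range(2, 12)]   # reciprocals of 2 through 11
print(halley_step(ratios, 0.0))          # first Halley step from d0 = 0
```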

For the example used by words that end in GRY in his answer, the corresponding estimates generated at $d_0=0$ would be $0.5581$, $0.7048$ and $0.7875$.

0

If $c_1=\dots=c_n=c$, then $d=\dfrac{\log n}{\log (1/c)}$. This is by far the most common case in practice.
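For example, the middle-thirds Cantor set arises from $n=2$ maps with ratio $c=1/3$, giving $d=\dfrac{\log 2}{\log 3}\approx 0.6309$.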

In general, it's a transcendental equation with no analytic solution. For example, take $$\frac{1}{2^x} + \frac{1}{3^x} = 1 $$ The solution is $x = 0.7878849110258697836285559172984347382691...$ which is found only numerically by Wolfram Alpha, and is not recognized by the Inverse Symbolic Calculator. So, numerics is all we can do with it.
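For completeness, a minimal sketch that recovers this value with a bracketing root finder (assuming SciPy is available):

```python
from scipy.optimize import brentq

# f(0) = 1 > 0 and f(x) -> -1 as x grows, so [0, 10] safely brackets the root.
f = lambda x: 2**-x + 3**-x - 1
print(brentq(f, 0, 10))   # about 0.7878849110258697
```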