
The question I was originally trying to solve is:

$$ S_1 = \lim_{n \to \infty} \left( \sqrt[n+1]{(n+1)!} - \sqrt[n]{n!} \right). $$

I approached this using Stirling's approximation for factorials, which I took to be valid because the approximation is asymptotically equivalent to $n!$ as $n \to \infty$. Specifically, we know:

$$ \lim_{n \to \infty} \frac{n!}{n^n e^{-n} \sqrt{2n \pi}} = 1. $$

Thus, we can safely replace $n!$ with $n^n e^{-n} \sqrt{2n \pi}$ for large $n$. Applying this substitution yields the correct result for $S_1$.

Curiously, I tried to apply the same logic to another limit:

$$ S_2 = \lim_{n \to \infty} \left( n - \frac{n}{e} \left( 1 + \frac{1}{n} \right)^n \right). $$

Here, I thought I could replace $\left(1 + \frac{1}{n}\right)^n$ with $e$, since:

$$ \lim_{n \to \infty} \frac{\left(1 + \frac{1}{n}\right)^n}{e} = 1. $$

This substitution gave me $S_2 = 0$. However, the actual result for $S_2$ is $\frac{1}{2}$. So, what went wrong? Why does replacing work correctly for $S_1$, but not for $S_2$? What subtlety am I missing here?
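
For what it's worth, a quick numerical check (a minimal Python sketch using mpmath; the helper names are mine) is consistent with both values:

```python
# Numerical check of S1 and S2; a minimal sketch using mpmath for extra precision.
from mpmath import mp, mpf, e, factorial, power

mp.dps = 50  # work with 50 significant digits to avoid rounding artifacts

def s1_term(n):
    # (n+1)-st root of (n+1)! minus n-th root of n!
    return power(factorial(n + 1), mpf(1) / (n + 1)) - power(factorial(n), mpf(1) / n)

def s2_term(n):
    # n - (n/e) * (1 + 1/n)^n
    n = mpf(n)
    return n - (n / e) * power(1 + 1 / n, n)

for n in (10, 100, 1000):
    print(n, s1_term(n), s2_term(n))
# s1_term(n) drifts toward 0.3678... = 1/e, while s2_term(n) approaches 1/2
```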

Hamza Ayub
  • Have you tried writing down an equation with an error term, such as $(1+1/n)^n - e = \epsilon(n)$? Then $$n-\frac ne(1+1/n)^n=n-\frac ne(e+\epsilon(n)) = -\frac ne\epsilon(n).$$ Now what? Analyze your first one carefully, too. – Ted Shifrin Jan 25 '25 at 19:12
  • Here's a simpler example: of course $1 + \frac{1}{n}$ itself is asymptotic to $1$. But we can't replace $1 + \frac{1}{n}$ with $1$ in the limit $\lim_{n \to \infty} \left( 1 + \frac{1}{n} \right)^n$. – Qiaochu Yuan Jan 25 '25 at 19:24
  • See related https://math.stackexchange.com/a/1783818/72031 – Paramanand Singh Jan 27 '25 at 04:53

4 Answers


What is true more precisely is that we have an additive error bound

$$\left( 1 + \frac{1}{n} \right)^n = e + O \left( \frac{1}{n} \right)$$

and the form of the additive error is important. In this problem you need to multiply by $\frac{n}{e}$ and doing so gives

$$\frac{n}{e} \left( 1 + \frac{1}{n} \right)^n = n + O(1)$$

meaning the approximation now incurs a bounded additive error, so it cannot be used to determine the value of the limit $S_2$ (although it does correctly suggest that the limit exists). The limit of the ratios alone is not enough information to determine whether this will happen in a given situation. To compute $S_2$ correctly you need to know the next term in the asymptotic expansion (in fact, determining $S_2$ is equivalent to determining the next term), which turns out to be

$$\left( 1 + \frac{1}{n} \right)^n = e - \frac{e}{2n} + O \left( \frac{1}{n^2} \right).$$
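
Both this expansion and the resulting value $S_2 = \frac{1}{2}$ can be checked symbolically; here is a small sympy sketch, substituting $x = 1/n$:

```python
# Expand (1 + 1/n)^n and S2 in powers of x = 1/n with sympy (a sketch).
import sympy as sp

x = sp.symbols('x', positive=True)  # x plays the role of 1/n
print(sp.series((1 + x)**(1 / x), x, 0, 2))
# E - E*x/2 + O(x**2), i.e. (1 + 1/n)^n = e - e/(2n) + O(1/n^2)
print(sp.series(1 / x - (1 + x)**(1 / x) / (x * sp.E), x, 0, 2))
# 1/2 - 11*x/24 + O(x**2), so S2 = 1/2
```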

In the first situation $S_1$ can be computed by using Stirling's approximation to get an additive error bound of the form

$$\sqrt[n]{n!} = \frac{n}{e} + \frac{\log (2 \pi n)}{2e} + o(1)$$

which will give $S_1 = \frac{1}{e}$. For computing this limit you can basically ignore additive error of the size of any function $f(n)$ such that $\lim_{n \to \infty} \left( f(n+1) - f(n) \right) = 0$, which basically means any function growing sublinearly. So more or less the dominant part $\left( \frac{n}{e} \right)^n$ of Stirling's approximation is enough, but you got a little lucky.
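
To illustrate numerically (a Python sketch with mpmath; the helper $a$ is mine), the additive error against the two-term expansion is $o(1)$, and consecutive differences approach $\frac{1}{e}$:

```python
# Additive error of the two-term expansion of the n-th root of n!, and S1 (mpmath sketch).
from mpmath import mp, mpf, e, pi, log, factorial, power

mp.dps = 30

def a(n):
    return power(factorial(n), mpf(1) / n)  # n-th root of n!

for n in (100, 1000, 10000):
    err = a(n) - (n / e + log(2 * pi * n) / (2 * e))  # the o(1) error term
    print(n, a(n + 1) - a(n), err)
# the differences tend to 1/e ≈ 0.367879..., and err tends to 0
```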

So in general it just depends on what is happening to the function. Potentially the entire asymptotic expansion may be necessary, because the limit you're trying to evaluate may, e.g., subtract the first $k$ terms in the asymptotic expansion and then require the $(k+1)$-th term. For example, if a problem asked you to evaluate the limit

$$\lim_{n \to \infty} n^2 \left( \left( 1 + \frac{1}{n} \right)^n - e + \frac{e}{2n} \right)$$

that's equivalent to asking for the quadratic term in the asymptotic expansion, which turns out to be $\frac{11e}{24n^2}$.
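
Numerically (an mpmath sketch):

```python
# n^2 * ((1 + 1/n)^n - e + e/(2n)) should approach 11e/24 (mpmath sketch).
from mpmath import mp, mpf, e, power

mp.dps = 40

for k in (3, 5, 7):
    n = mpf(10) ** k
    print(k, n**2 * (power(1 + 1 / n, n) - e + e / (2 * n)))
print(11 * e / 24)  # ≈ 1.24588
```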


A comment on how to find the asymptotic expansion of $f(n) = \left( 1 + \frac{1}{n} \right)^n$. It would in principle be possible to do this using the binomial theorem but I think it gets pretty messy and annoying. We can do it more easily by taking the logarithm to get

$$\log f(n) = n \log \left( 1 + \frac{1}{n} \right) = \sum_{k=0}^{\infty} \frac{(-1)^{k}}{(k+1)\, n^{k}} = 1 - \frac{1}{2n} + \frac{1}{3n^2} - \dots $$

which can be exponentiated term-by-term, e.g. in WolframAlpha. For example the cubic term is given by expanding

$$f(n) = \exp \left( 1 - \frac{1}{2n} + \frac{1}{3n^2} - \frac{1}{4n^3} \pm \dots \right) = e - \frac{e}{2n} + \frac{11e}{24n^2} - \frac{7e}{16n^3} \pm \dots.$$
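
For instance, sympy reproduces these coefficients (a sketch, again with $x = 1/n$):

```python
# Exponentiate the log-series term by term with sympy (x = 1/n).
import sympy as sp

x = sp.symbols('x', positive=True)
log_f = sp.series(sp.log(1 + x) / x, x, 0, 4).removeO()  # 1 - x/2 + x**2/3 - x**3/4
print(sp.series(sp.exp(log_f), x, 0, 4))
# E - E*x/2 + 11*E*x**2/24 - 7*E*x**3/16 + O(x**4)
```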

Qiaochu Yuan

I feel like the following should be pointed out:

Recall the notation for equivalence: $u_n \sim v_n$ when $u_n = v_n(1 + \varepsilon_n)$ for some sequence $\varepsilon_n$ converging to $0$. Then you can always multiply equivalents:

Theorem. If $u_n \sim u'_n$ and $v_n \sim v'_n$, then $$u_n v_n \sim u'_n v'_n.$$

So in a product, $u_n$ and $v_n$ can safely be replaced by $u'_n$ and $v'_n$ for large $n$. However, there is no such general result for the addition of equivalents, i.e., in general, one does not have $$u_n \sim u'_n \text{ and } v_n \sim v'_n \implies u_n + v_n \sim u'_n + v'_n.$$ So, for additions, we cannot safely replace $u_n$ and $v_n$ by $u'_n$ and $v'_n$ for large $n$. For a counterexample: $$n + \sqrt{n} \sim n \,\text{ and }\, -n + \sqrt{n} \sim -n \quad \text{but}\quad n + \sqrt{n} + \big(-n + \sqrt{n}\big) = 2\sqrt{n},$$ which is not equivalent to $n + (-n) = 0$.
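
A quick numerical illustration of the failure (a plain Python sketch; variable names mine):

```python
# u_n ~ u'_n and v_n ~ v'_n, yet u_n + v_n is far from u'_n + v'_n.
import math

for n in (10**2, 10**4, 10**6):
    u, u_prime = n + math.sqrt(n), n      # u_n ~ u'_n
    v, v_prime = -n + math.sqrt(n), -n    # v_n ~ v'_n
    print(n, u / u_prime, v / v_prime, u + v)
# both ratios tend to 1, but u + v = 2*sqrt(n) grows, while u'_n + v'_n = 0
```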

This means that it is not a valid argument to solve $S_1$ by subtracting the two Stirling approximations, one for each term. Valid approaches can be found on this site, e.g. Limit of the sequence $a_n=\sqrt[n+1]{(n+1)!}-\sqrt[n]{n!}$

Martin

There is a specific hammer for the first limit. Stirling's approximation in this case is more of a distraction (though it provides the right value of the limit more as a coincidence than as a rigorous strategy).

Proposition (O. Carja): Let $(a_n:n\in\mathbb{N})$ be a sequence of positive numbers such that

  1. $\lim_n\frac{a_n}{n}=A>0$,
  2. $\lim_n\Big(\frac{a_{n+1}}{a_n}\Big)^n=B\in\overline{\mathbb{R}}$.

Then $L=\lim_n(a_{n+1}-a_n)$ exists and equals $A\log B$.

These types of results are very well known to Romanian students and have a long tradition going back to the early 1900s (look up Lalescu's problem). A proof of this result on MSE can be found here.

In the setting of the OP, $a_n=\sqrt[n]{n!}$ (the original Lalescu problem).

  1. Since $\frac{a_n}{n}=\frac{\sqrt[n]{n!}}{n}=\sqrt[n]{\frac{n!}{n^n}}$, the root-ratio test ($\lim_n\sqrt[n]{c_n}=\lim_n\frac{c_{n+1}}{c_n}$ whenever the latter limit exists) gives $\lim_n\frac{a_n}{n}=\lim_n\frac{(n+1)!}{(n+1)^{n+1}}\,\frac{n^n}{n!}=\lim_n\frac{1}{\big(1+\tfrac1n\big)^n}=\frac1e$
  2. $\lim_n\Big(\frac{a_{n+1}}{a_n}\Big)^n=\lim_n\frac{((n+1)!)^{\frac{n}{n+1}}}{n!}=\lim_n\frac{(n+1)!}{n!\,\sqrt[n+1]{(n+1)!}}=\lim_n\frac{n+1}{a_{n+1}}=e$, where the last equality follows from part 1.

Hence $\lim_n(a_{n+1}-a_n)=\frac1e\log e=\frac1e$.
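
Both hypotheses and the conclusion can be checked numerically (an mpmath sketch; $a$ defined as above):

```python
# Check hypotheses 1 and 2 and the conclusion for a_n = n-th root of n!.
from mpmath import mp, mpf, factorial, power

mp.dps = 30

def a(n):
    return power(factorial(n), mpf(1) / n)  # a_n = n-th root of n!

for n in (100, 1000, 10000):
    print(n, a(n) / n, power(a(n + 1) / a(n), n), a(n + 1) - a(n))
# columns tend to A = 1/e ≈ 0.3679, B = e ≈ 2.7183, L = A log B = 1/e
```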


To justify substituting one sequence $a_n$ by another, say $b_n$, with $a_n\sim b_n$ in the context of this problem, I think one should also check that $\lim_n\Big(\frac{b_{n+1}}{b_n}\Big)^n=\lim_n\Big(\frac{a_{n+1}}{a_n}\Big)^n=B\in\overline{\mathbb{R}}$, in which case, by Carja's proposition, $$\lim_n(b_{n+1}-b_n)=\lim_n(a_{n+1}-a_n).$$ This happens to be the case for Lalescu's problem using Stirling's approximation.


As for the second limit in the OP, the issue is that $\Big(1+\frac1n\Big)^n=e-\frac{e}{2}\frac1n +o(1/n)$ and so $$n\left(1-\frac{\Big(1+\tfrac1n\Big)^n}{e}\right)\xrightarrow{n\rightarrow\infty}\frac12$$ Indeed, consider the function $f:[0,\infty)\rightarrow\mathbb{R}$ given by $f(0):=e$ and $f(h)=(1+h)^{1/h}=\exp\left(\frac{\log(1+h)}{h}\right)$ for $h>0$. Then $f$ is continuous on $[0,\infty)$ and differentiable on $(0,\infty)$. By the mean value theorem, $$\lim_{h\rightarrow0+}\frac{f(0)-f(h)}{eh}=-\frac1e\lim_{h\rightarrow0+}f'(h)$$ provided the latter limit exists. Applying L'Hospital's rule twice to the ratio in $$f'(h)=f(h)\left(\frac{\tfrac{h}{1+h}-\log(1+h)}{h^2}\right)=f(h)\left(\frac{h-(1+h)\log(1+h)}{(1+h)h^2} \right)$$ gives \begin{align} f'(h)&\sim f(h)\,\frac{-\log(1+h)}{2h+3h^2}\\ &\sim -f(h)\,\frac1{1+h}\,\frac1{2+6h}\xrightarrow{h\rightarrow0}-\frac{e}{2} \end{align} and hence $\lim_{h\rightarrow0+}\frac{f(0)-f(h)}{eh}=\frac12$, as claimed.
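
Numerically, the same quantity can be checked with a small mpmath sketch ($f$ as defined above):

```python
# Check that (f(0) - f(h)) / (e*h) -> 1/2 as h -> 0+.
from mpmath import mp, mpf, e, power

mp.dps = 30

def f(h):
    return power(1 + h, 1 / h)  # f(h) = (1+h)^(1/h); f(0+) = e

for h in (mpf('1e-2'), mpf('1e-4'), mpf('1e-6')):
    print(h, (e - f(h)) / (e * h))
# values approach 1/2, matching S2
```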


Although the asymptotic relation $a_n\sim b_n$ iff $\lim_n\frac{a_n}{b_n}=1$ behaves well under multiplication, that is, $a_n\sim b_n$ and $c_n\sim d_n$ imply $a_nc_n\sim b_nd_n$, it does not do so under addition, except under some additional assumptions. For example, if $a_n\sim b_n$, $c_n\sim d_n$, $a_n\xrightarrow{n\rightarrow\infty}\infty$ and $\frac{c_n}{a_n}\xrightarrow{n\rightarrow\infty}0$, then $a_n+c_n\sim b_n + d_n$, since $$\frac{a_n+c_n}{b_n+d_n}=\frac{1+\tfrac{c_n}{a_n}}{\tfrac{b_n}{a_n}+\tfrac{d_n}{c_n}\tfrac{c_n}{a_n}}\xrightarrow{n\rightarrow\infty}1$$
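
For illustration (a plain Python sketch with hypothetical sequences):

```python
# Sequences meeting the extra hypotheses: a_n -> infinity and c_n/a_n -> 0.
import math

for n in (10**2, 10**4, 10**6):
    a_n, b_n = n + 1.0, float(n)                   # a_n ~ b_n, a_n -> infinity
    c_n, d_n = math.sqrt(n) + 1.0, math.sqrt(n)    # c_n ~ d_n, c_n/a_n -> 0
    print(n, (a_n + c_n) / (b_n + d_n))            # ratio -> 1: the sums are equivalent
```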

Mittens

Too long for a comment.

Being deliberately provocative, I should state that, in real life, limits by themselves are of rather limited interest.

When I was a student, my professors used to state that the limit of a function is the limit of its asymptotics and we were taught both at the same time (have a look here).

If you keep this in mind, you will probably understand the danger of replacing one function by another. If you look at my answers on this site, you will notice that, in expansions, I use more terms than required for the limit.

Your problem $$S_1 = \lim_{n \to \infty} \sqrt[n+1]{(n+1)!} - \sqrt[n]{n!}$$ is interesting and @Qiaochu Yuan already gave good explanations.

Consider $$f(p)= \sqrt[p]{p!}=\sqrt[p]{\Gamma(p+1)}\quad \implies \quad \log(f(p))=\frac 1 p\,\log (\Gamma (p+1))$$ Using Stirling's approximation, $$\log(f(p))=(\log (p)-1)+\frac{\log (p)+\log (2 \pi)}{2 p} +O\left(\frac{1}{p^2}\right)$$ Exponentiating, $$f(p)=\frac{p}{e}+\frac{\log (p)+\log (\pi)+\log (2)}{2 e}+O\left(\frac{\log^2 p}{p}\right)$$

Then $$f(n+1)-f(n)= \frac{1}{e}+\frac{1}{2 e n}+O\left(\frac{\log^2 n}{n^2}\right)$$ which shows the limit and how it is approached. But it also provides a shortcut to evaluate $S_1$ before arriving at infinity (!!).

For example, if $n=100$, the above very truncated series gives $0.369719$ while the exact value is $0.369573$ (relative error of $0.04$%).
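
These values can be reproduced directly (an mpmath sketch; $f$ as above):

```python
# Reproduce the n = 100 comparison.
from mpmath import mp, mpf, e, factorial, power

mp.dps = 30

def f(p):
    return power(factorial(p), mpf(1) / p)  # p-th root of p!

n = 100
approx = 1 / e + 1 / (2 * e * n)
exact = f(n + 1) - f(n)
print(approx, exact, (approx - exact) / exact)
# ≈ 0.369719, 0.369573, relative error ≈ 4e-4 (0.04 %)
```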