Legendre's conjecture, proposed by Adrien-Marie Legendre, states that there is a prime number between $n^2$ and $(n + 1)^2$ for every positive integer $n$. Has it been proved?
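For anyone who wants to experiment, here is a minimal spot-check sketch (the helper name `has_prime_between_squares` is mine, and it assumes SymPy is available); it only verifies the statement for small $n$, where no counterexample is known:

```python
# Minimal sketch: check that some prime p satisfies n^2 < p < (n+1)^2 for small n.
from sympy import nextprime  # assumes SymPy is installed

def has_prime_between_squares(n: int) -> bool:
    """Return True if some prime p satisfies n^2 < p < (n+1)^2."""
    p = nextprime(n * n)          # smallest prime strictly greater than n^2
    return p < (n + 1) ** 2

# Spot-check the first few thousand cases; this is evidence, not a proof.
assert all(has_prime_between_squares(n) for n in range(1, 5000))
print("Verified for n = 1 .. 4999")
```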
-
http://en.wikipedia.org/wiki/Legendre's_conjecture (claims that it remains unsolved). Also, see the answers here: http://math.stackexchange.com/questions/245320/edmund-landaus-problems – Amzoti Jun 11 '13 at 15:59
-
The next sentence in the Wikipedia article you quote verbatim from gives the current answer. – Hagen von Eitzen Jun 11 '13 at 15:59
-
@Amzoti: Thank you! I had seen the Wikipedia article before I asked this question. I noticed this: link, so I want to know whether it is real. – Peng Jun 11 '13 at 16:19
-
@Amzoti: I found this paper from the Asian Journal of Mathematics and Physics: An Elementary Proof of Legendre's Conjecture. – Peng Jun 11 '13 at 16:48
2 Answers
The conjecture has not been proved. A weaker form of the conjecture would say that for any $x\ge0$ (not just integers), there is a prime between $x^2$ and $(x+2)^2$. Equivalently, the gap $g_n=p_{n+1}-p_n$ between consecutive primes is at most $4\sqrt{p_n}+4$. An even weaker conjecture is that there is some finite $k$ such that $g_n<k\sqrt{p_n}$. But even this conjecture is not implied by the Riemann Hypothesis.
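For the record, here is the short computation behind the "equivalently" step, as a sketch using only the definitions above: taking $x=\sqrt{p_n}$ in the weaker form gives a prime strictly between $p_n$ and
$$(\sqrt{p_n}+2)^2=p_n+4\sqrt{p_n}+4,$$
so $p_{n+1}<p_n+4\sqrt{p_n}+4$, i.e. $g_n$ is at most $4\sqrt{p_n}+4$. Conversely, if the gap bound holds, then for $x^2\ge 2$ the prime following the largest prime $p_n\le x^2$ is at most $(\sqrt{p_n}+2)^2\le(x+2)^2$, so there is a prime in $(x^2,(x+2)^2]$.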
To be blunt: if Legendre's conjecture had actually been proved, the authors would not have had to pay $100 to have the result published in a third-rate journal. You may wish to look at http://www.scottaaronson.com/blog/?p=304
I realize this is an old post, however...
Legendre's conjecture is still open to date. However, I recently completed my thesis, and as a corollary, I show that there is always a prime between $n^2$ and $(n+1)^{2.000001}$... So close!
Of course one may also show $n^2 < p < (n+1)^{2+\varepsilon}$ for any $\varepsilon>0$.
I saw a few comments concerning the Guerdes paper, which I emailed him about years ago. Ultimately the paper is flawed. It's been a few weeks since I last looked at it, however, so I will try my best to recall the error(s).
Referring to the 2013 paper: after (17) they write
$\pi(2n)-\pi(n)+\text{Sum}_1-\text{Sum}_2,$
which is where the error lies. You cannot drop the subtracted sum (treat it as $0$) and still assume the inequality holds. Consider $a+b+c-d<f$: it does not follow that $a+b<f$ (take $a=4$, $b=2$, $c=1$, $d=3$, and $f=5$, so that $a+b+c-d=4<5$ but $a+b=6>5$).
Hope it helps.
-
Can't you just take epsilon to be small enough that the difference between the expression with the exponent of 2 and the one with the exponent of 2 plus epsilon is less than 1? If the claim holds for every positive epsilon, this reasoning should show it holds for epsilon equal to zero. – Belgi May 03 '15 at 06:22
-
Alright, I looked into it a little more and will post the findings in multiple posts if necessary due to the character limit for comments. Using P. Dusart's result that $|\vartheta(x)-x|<\frac{0.01x}{\log^2 x}$, where $\log^2 x=(\log x)^2$, we could try the following, but I'll use the general assumption that we `proved': $|\vartheta(x)-x|<\frac{cx}{\log^k x}$ for some $c\in\mathbb{R}$, $k\in\mathbb{N}$. – Kyle Balliet May 03 '15 at 21:55
-
Then for $n\in\mathbb{N}$ (in fact $n\in\mathbb{R}$), $\vartheta((n+1)^{2+\varepsilon})-\vartheta(n^2)>\left((n+1)^{2+\varepsilon}-c(n+1)^{2+\varepsilon}/(\log^k (n+1)^{2+\varepsilon})\right)-\left(n^2+ cn^2/(\log^k n^2)\right)\overset{?}{>}0$.
For the right-hand side to be positive, we need to show (dividing through by $(n+1)^{2+\varepsilon}$; the algebra is spelled out below) that
$1>\frac{c}{(2+\varepsilon)^k \log^k (n+1)}+{\left(\frac{n}{n+1}\right)}^2 \frac{c}{(n+1)^{\varepsilon} 2^k \log^k n}+{\left(\frac{n}{n+1}\right)}^2\frac{1}{(n+1)^{\varepsilon}}$.
– Kyle Balliet May 03 '15 at 22:03 -
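For completeness, here is the algebra behind that step, as a sketch: divide the right-hand side of the earlier bound by $(n+1)^{2+\varepsilon}$ and use $\log\bigl((n+1)^{2+\varepsilon}\bigr)=(2+\varepsilon)\log(n+1)$ and $\log(n^2)=2\log n$, which gives
$$1-\frac{c}{(2+\varepsilon)^{k}\log^{k}(n+1)}-\left(\frac{n}{n+1}\right)^{2}\frac{c}{(n+1)^{\varepsilon}\,2^{k}\log^{k}n}-\left(\frac{n}{n+1}\right)^{2}\frac{1}{(n+1)^{\varepsilon}},$$
and this quantity is positive exactly when the displayed inequality holds.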
Clearly the first two summands tend to 0 as $n\rightarrow\infty$; however, ${\left(\frac{n}{n+1}\right)}^2$ tends to 1 as $n$ tends to infinity. So if $\varepsilon=0$, the inequality cannot be established this way; we need $\varepsilon>0$.
Choose any $\varepsilon>0$; then the first two summands in the inequality go to 0. Moreover, ${\left(\frac{n}{n+1}\right)}^2\frac{1}{(n+1)^{\varepsilon}}$ also goes to 0 as $n$ goes to infinity.
Thus, if we can determine an $n_\varepsilon$ beyond which the inequality holds, then for all $n\geq n_\varepsilon$: $\vartheta((n+1)^{2+\varepsilon})-\vartheta(n^2)>0$.
– Kyle Balliet May 03 '15 at 22:04 -
Of course, from $|\vartheta(x)-x|<\frac{0.01x}{\log^2 x}$ the result $n^2<p<(n+1)^{2.000001}$ easily follows. It wouldn't be that much work to decrease the exponent to $2+10^{-10}$ and so on; it would just require a lot more base cases to be checked. – Kyle Balliet May 03 '15 at 22:09
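To make the preceding comments concrete, here is a rough numerical sketch (all names are mine; it simply plugs in a bound of the quoted shape $|\vartheta(x)-x|<cx/\log^k x$ with $c=0.01$, $k=2$, and it ignores the range of validity of that bound and the base cases mentioned above):

```python
# Rough illustration: search for an n at which
#   1 > c/((2+eps)^k * log(n+1)^k)
#       + (n/(n+1))^2 * c/((n+1)^eps * 2^k * log(n)^k)
#       + (n/(n+1))^2 / (n+1)^eps
# holds, assuming the quoted bound |theta(x) - x| < c*x / log(x)^k with c = 0.01, k = 2.
from math import log

def inequality_holds(n, eps, c=0.01, k=2):
    """Evaluate the three summands and compare their total with 1."""
    r = (n / (n + 1)) ** 2
    total = (c / ((2 + eps) ** k * log(n + 1) ** k)
             + r * c / ((n + 1) ** eps * 2 ** k * log(n) ** k)
             + r / (n + 1) ** eps)
    return total < 1

def first_n(eps, start=2, limit=10**6):
    """Smallest n in [start, limit) at which the inequality holds (a spot check, not a proof)."""
    for n in range(start, limit):
        if inequality_holds(n, eps):
            return n
    return None

print(first_n(1e-6))  # the exponent 2 + 1e-6 corresponds to (n+1)^{2.000001}
```

Finding an $n$ at which the inequality holds is the easy part; as the last comment says, the real work is in the validity range of the $\vartheta$ bound and the finitely many base cases below it.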