Following on from my previous question, I've been playing with the Riemann hypothesis as a matter of recreational mathematics. In the process, I've arrived at a rather interesting recurrence, and I'm curious about its name, its reductions, and its tractability for computing the gaps between prime numbers.
Tersely put, we can define each prime (and hence each prime gap) by a recurrence over the preceding primes. For example, starting from the base $p_0 = 2$, the next prime is:
$\qquad \displaystyle p_1 = \min \{ x > p_0 \mid -\cos(2\pi(x+1)/p_0) + 1 = 0 \}$
Or, as we see by plotting this out: $p_1 = 3$.
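Rather than plotting, this can be checked numerically (a throwaway sketch; `term` is just a hypothetical name for the single cosine component above):

```python
# Evaluate the single component -cos(2*pi*(x+1)/p_0) + 1 for p_0 = 2;
# it vanishes exactly when x + 1 is a multiple of 2, i.e. at odd x.
from math import cos, pi

def term(x, p=2):
    return -cos(2 * pi * (x + 1) / p) + 1.0

# The smallest zero above p_0 = 2 is x = 3, matching p_1 = 3
print([x for x in range(3, 8) if term(x) == 0.0])  # [3, 5, 7]
```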
We can repeat the process for $n$ primes, carrying each candidate function forward into the next. Suppose we want the next prime, $p_2$. Our candidate function becomes:
$\qquad \displaystyle \begin{align} p_2 = \min\{ x > p_1 \mid f_{p_1}(x) + (&(-\cos(2\pi(x+1)/p_1) + 1) \\ \cdot &(-\cos(2\pi(x+2)/p_1) + 1)) = 0\} \end{align}$
Where:
$\qquad \displaystyle f_{p_1}(x) = -\cos(2\pi(x+1)/p_0) + 1$, as above.
It's easy to see that each component function vanishes only at integer values, and it's equally easy to show how this cleverly captures our AND- and OR-shaped relationships by exploiting the properties of addition and multiplication: each term is non-negative, so a product of terms vanishes iff at least one factor vanishes (an OR), while a sum of such products vanishes iff every product vanishes (an AND).
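This encoding can be sanity-checked numerically (an illustrative sketch; `not_divisible` is a hypothetical helper wrapping the inner product):

```python
# The inner product from the recurrence: with non-negative factors, the
# product is zero exactly when some x + r is a multiple of p with
# 1 <= r <= p - 1, i.e. exactly when p does NOT divide x.
from math import cos, pi, prod

def not_divisible(x, p):
    return prod(-cos(2 * pi * (x + r) / p) + 1.0 for r in range(1, p))

x = 25  # divisible by 5, but not by 2 or 3
print(not_divisible(x, 2), not_divisible(x, 3))  # both 0.0
print(not_divisible(x, 5) > 0.0)                 # True: 5 divides 25
```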
The recurrence becomes:
$\qquad f_{p_0}(x) = 0\\ \qquad p_0 = 2\\ \qquad \displaystyle f_{p_n}(x) = f_{p_{n-1}}(x) + \prod_{k=2}^{p_{n-1}} \left(-\cos(2\pi(x+k-1)/p_{n-1}) + 1\right)\\ \qquad \displaystyle p_n = \min\left\{ x > p_{n-1} \mid f_{p_n}(x) = 0\right\}$
... where the entire problem hinges on whether we can evaluate the $\min$ operator over this function in polynomial time. This is, in effect, a generalization of the Sieve of Eratosthenes.
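For intuition on the sieve connection: $f_{p_n}(x) = 0$ exactly when $x$ is divisible by none of the primes found so far, so the same $\min$ can be restated in plain integer arithmetic (an illustrative sketch with a hypothetical `next_prime` helper; it says nothing about the analytic tractability question):

```python
# Integer-arithmetic restatement: the next prime is the smallest integer
# above the last one that is coprime to every prime found so far.
def next_prime(primes):
    x = primes[-1] + 1
    while any(x % p == 0 for p in primes):
        x += 1
    return x

primes = [2]
for _ in range(7):
    primes.append(next_prime(primes))
print(primes)  # [2, 3, 5, 7, 11, 13, 17, 19]
```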
Working Python code to demonstrate the recurrence:
from math import cos, pi

def cosProduct(x, p):
    """Handles the cosine product in a handy single function"""
    ret = 1.0
    for k in range(2, p + 1):
        ret *= -cos(2 * pi * (x + k - 1) / p) + 1.0
    return ret

def nthPrime(n):
    """Generates the nth prime, where n is a zero-based integer"""
    # Precondition: n must be an integer greater than -1
    if not isinstance(n, int) or n < 0:
        raise ValueError("n must be an integer greater than -1")
    # Base case: the 0th prime is 2, and the 0th function is vacuous
    if n == 0:
        return 2, lambda x: 0
    # Get the preceding evaluation
    p_nMinusOne, fn_nMinusOne = nthPrime(n - 1)
    # Define the function for the nth prime
    fn_n = lambda x: fn_nMinusOne(x) + cosProduct(x, p_nMinusOne)
    # Evaluate it (I need a solver here if it's tractable!)
    # Bertrand's postulate guarantees a prime below 2*p_nMinusOne, so the
    # generous p**e bound is more than enough; the exact == 0 comparison
    # works because cos rounds to exactly 1.0 this close to a multiple of 2*pi
    for k in range(p_nMinusOne + 1, int(p_nMinusOne ** 2.718281828)):
        if fn_n(k) == 0:
            p_n = k
            break
    # Return the nth prime and its function
    return p_n, fn_n
A quick example:
>>> [nthPrime(i)[0] for i in range(20)]
[2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37, 41, 43, 47, 53, 59, 61, 67, 71]
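For anyone reproducing this, the output can be cross-checked against an ordinary sieve (a quick independent verification, not part of the recurrence):

```python
def sieve_primes(limit):
    # Standard Sieve of Eratosthenes up to `limit`, inclusive
    is_prime = [True] * (limit + 1)
    is_prime[0] = is_prime[1] = False
    for i in range(2, int(limit ** 0.5) + 1):
        if is_prime[i]:
            for j in range(i * i, limit + 1, i):
                is_prime[j] = False
    return [i for i, b in enumerate(is_prime) if b]

print(sieve_primes(71))
# [2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37, 41, 43, 47, 53, 59, 61, 67, 71]
```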
The trouble is, I'm now in way over my head, both mathematically and as a computer scientist. Specifically, I am not competent with Fourier analysis, with defining uniform covers, or with the complex plane in general, and I'm worried that this approach either is flat-out wrong or hides a lurking horror of a 3-SAT problem that elevates it to NP-completeness.
Thus, I have three questions here:
- Given my terse recurrence above, is it possible to deterministically compute or estimate the location of the zeroes in polynomial time and space?
- Either way, is it hiding any other subproblems that would make a polynomial-time or polynomial-space solution intractable?
- And if by some miracle (1) and (2) hold up, what dynamic programming improvements would you make in satisfying this recurrence, from a high level? Clearly, iteration over the same integers through multiple functions is inelegant and quite wasteful.