5

Let $(a_n)_{n \ge 0}$ be a linear recurrence sequence taking only integer values. Then $a_n$ satisfies a recurrence with integer coefficients.

Notes:

This follows up an older question on this site. The title of that question mentioned "the defining relation"; the counterexample there was a constant sequence, which also satisfies many other linear recurrences (multiply the polynomial $(T-1)$ by any other polynomial). However, if a sequence with integral values satisfies an integral linear recurrence, then the defining relation is also integral (this follows from Gauss's lemma and a bit of linear algebra).

As for the progress I've made: I am able to show the following.

If we have $\alpha_1, \ldots, \alpha_{\ell}$, $x_1, \ldots, x_{\ell}$ distinct complex numbers such that

$$\sum \alpha_i x_i^n$$

are integral for all $n\ge 0$, then the $x_i$'s are all algebraic integers. From here we can show that the above sums (depending on $n$, and forming a recurrent sequence) satisfy an integral linear recurrence.

I haven't yet considered the case of sequences of the form:

$$\sum_{i=1}^{\ell} P_i(n) x_i^n$$

where the $x_i$ are distinct and the $P_i(n)$ are nonzero polynomials in $n$.

Any feedback would be appreciated!

$\bf{Added:}$

Some definitions for clarity:

A linear recurrence sequence is a sequence satisfying a linear recurrence; that is, there exist $d \ge 1$ and $c_k$, $1\le k \le d$ (a priori $c_k \in \mathbb{C}$), such that for all $n \ge d$ we have

$$a_n = \sum_{k=1}^d c_k a_{n-k} \ \ \ (*)$$

that is, $a_n$ depends linearly on the previous $d$ terms, for all $n\ge d$.

We say that the recurrence has integer coefficients if all of $c_1$, $\ldots$, $c_d$ are integers.
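As a side note (not needed for the argument), one can search for such a recurrence of a prescribed degree numerically, by solving the system $(*)$ with exact rational arithmetic. A minimal sketch; the sample sequence $a_n = 2^n + 3^n$ and the degree $d=2$ are illustrative choices of mine:

```python
from sympy import Matrix

# Illustrative integer sequence and trial degree.
a = [2**n + 3**n for n in range(12)]   # a_n = 2^n + 3^n
d = 2

# Square linear system from the first d usable instances of (*):  a_n = sum_k c_k a_{n-k}.
A = Matrix([[a[n - k] for k in range(1, d + 1)] for n in range(d, 2 * d)])
b = Matrix([a[n] for n in range(d, 2 * d)])
c = A.solve(b)                         # exact solve over Q
print(list(c))                         # [5, -6]

# Check the candidate recurrence on the remaining known terms.
print(all(a[n] == sum(c[k - 1] * a[n - k] for k in range(1, d + 1))
          for n in range(2 * d, len(a))))   # True:  a_n = 5 a_{n-1} - 6 a_{n-2}
```

Of course this only produces a candidate from finitely many terms; the question is what can be said in general.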

$\bf{Added:}$ A related question (a particular case).

$\bf{Added:}$ Ewan Delanoy showed that $a_n$ satisfies a recurrence with rational coefficients. A big step forward.

I have a proof along these lines: say $K\subset L$ are fields and $c_1, \ldots, c_d \in L$. Consider $V$, the $K$-span of $1$ and the $c_j$ in $L$ (a finite-dimensional $K$-vector subspace). There exists a $K$-linear projection $\pi$ from $V$ to $K$ with $\pi(1) = 1$ (basic linear algebra).

Now consider an equality

$$a = \sum c_j a_j$$

with $a_j$, $a \in K$. Apply the map $\pi$ to it and get

$$a= \sum \pi(c_j) a_j$$

Moral: any linear dependence over $L$ of elements in $K$ produces a linear dependence over $K$ of said elements.
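A toy instance of this projection, with numbers of my own choosing: $a_n = 2^n$ satisfies the recurrence coming from $(x-2)(x-\sqrt2)$, whose coefficients lie in $\mathbb{Q}(\sqrt2)$; projecting the coefficients with $\pi(1)=1$, $\pi(\sqrt2)=0$ produces the rational recurrence $a_{n+2} = 2a_{n+1}$, which the sequence satisfies as well.

```python
from sympy import sqrt, expand, Integer

# a_n = 2^n satisfies the recurrence with characteristic polynomial
# (x - 2)(x - sqrt(2)) = x^2 - (2 + sqrt(2)) x + 2 sqrt(2),
# i.e.  a_{n+2} = (2 + sqrt(2)) a_{n+1} - 2 sqrt(2) a_n,  with coefficients in Q(sqrt(2)).
a = [Integer(2)**n for n in range(10)]
c1, c2 = 2 + sqrt(2), -2 * sqrt(2)
print(all(expand(c1 * a[n + 1] + c2 * a[n] - a[n + 2]) == 0 for n in range(8)))  # True

# Q-linear projection pi on the span of {1, sqrt(2)} with pi(1) = 1, pi(sqrt(2)) = 0.
pi = lambda z: expand(z).subs(sqrt(2), 0)

# Projecting the coefficients gives a recurrence over Q satisfied by the same sequence.
d1, d2 = pi(c1), pi(c2)   # 2 and 0, i.e.  a_{n+2} = 2 a_{n+1}
print(d1, d2, all(a[n + 2] == d1 * a[n + 1] + d2 * a[n] for n in range(8)))  # 2 0 True
```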

orangeskid
  • 56,630
  • How do you show that "If we have $\alpha_1$, $\ldots$, $\alpha_{\ell}$ , $x_1$, $\ldots$, $x_{\ell}$ distinct complex numbers, such that $\sum \alpha_i x_i^n$ are integral for all $n\ge 0$, then the $x_i$'s are all algebraic integers." ? As far as I can see, the linked question only shows this when the $\alpha_i$'s are equal to $1$. – Ewan Delanoy Jan 28 '23 at 06:54
  • @Ewan Delanoy: Yes, the linked one only solves the case of Newton sums. For the case with coefficients not nec $1$ I use the result from this question – orangeskid Jan 28 '23 at 07:01

2 Answers

4

As already explained in the OP, we know that $(a_n)$ satisfies a linear recurrence with rational coefficients, so that

$$ D a_n = \sum_{k=1}^d c_k a_{n-k} \tag{1} $$

for some fixed $d\geq 1$, some integers $D, c_1,\ldots,c_d$ (with $D\gt 0$), and every $n\geq d$.

Clearly, we may assume that $d$ is minimal such that a relation of the form (1) above holds. In turn, we may assume that, for this $d$, the value of $D$ is minimal.

Definition 1. I call $D$ the denominator of the recurrence (1).

Our goal is then to show that the minimal $D$ is $1$. Suppose by contradiction that $D\gt 1$; then $D$ has a prime divisor $p$. Write $D=p^{r}D'$ where $r\geq 1$ and $p$ does not divide $D'$. Note that $p$ cannot divide all the $c_k$'s ($1\leq k \leq d$), for otherwise we could divide by $p$ in (1) and contradict the minimality of $D$.

So, there is a smallest index $t\in [|1..d|]$ such that $p$ does not divide $c_t$.

Definition 2. I call $t$ the principal index associated to the recurrence (1) and the prime $p$.

We are now going to use $p$-valuations. As usual, we define the $p$-valuation $\nu_p(x)$ of an integer $x$ as the exponent of $p$ in the prime factorization of $x$ (or $\nu_p(x)=\infty$ for $x=0$), and for a rational $\rho=\frac{x}{y}$ we define $\nu_p(\rho)=\nu_p(x)-\nu_p(y)$.
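For the numerical experiments further down, this valuation is straightforward to compute directly from the definition; a minimal helper (the name `nu` is just a convenient choice):

```python
from fractions import Fraction

def nu(p, rho):
    """p-valuation of a rational rho, with nu(p, 0) = infinity."""
    if rho == 0:
        return float('inf')
    rho = Fraction(rho)
    v, num, den = 0, rho.numerator, rho.denominator
    while num % p == 0:
        num //= p
        v += 1
    while den % p == 0:
        den //= p
        v -= 1
    return v

print(nu(2, 2022), nu(2, Fraction(5055, 2)), nu(3, Fraction(7, 9)))   # 1 -1 -2
```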

Definition 3. Let $(a_n)$ be a (not necessarily integer-valued) sequence satisfying (1) with $p\nmid c_1$ (in other words the principal index is equal to $1$). An index $j\geq 0$ is nice if we have $j\geq d-1$ and $\nu_p(a_j)\lt \nu_p(a_{j-k})$ for $1\leq k\leq d-1$.

The usefulness of this definition comes from the following :

Lemma. If an index $j$ is nice, then so is $j+1$ and furthermore we have $\nu_p(a_{j+1})=\nu_p(a_{j})-r$.

Corollary of lemma. If an index $j$ is nice, so are all the indices $\geq j$, and we have $\nu_p(a_n)=\nu_p(a_j)-r(n-j)$ for $n\geq j$. In particular, for large enough $n$, $a_n$ has negative $p$-valuation and is therefore not an integer.

Proof of lemma. Let $m=\nu_p(a_j)$. If $j$ is nice, all the $c_ka_{j+1-k}$ for $2\leq k\leq d$ have $p$-valuation at least $m+1$; the same is true of their sum $\sum_{k=2}^d c_ka_{j+1-k}$; also the term $c_1a_{j}$ has $p$-valuation exactly $m$, so the sum $p^rD'a_{j+1}=\sum_{k=1}^d c_ka_{j+1-k}$ has $p$-valuation exactly $m$. Since $p\not\mid D'$, we deduce $\nu_p(a_{j+1})=m-r$, which finishes the proof of the lemma.

So, thanks to the corollary of the lemma, it will suffice to find a nice index $j$.

When $t\gt 1$, there is a linear recurrence with denominator $D^t$ and with principal index equal to $1$, shared by each of the subsequences $(a_{tn+1}),\ldots, (a_{tn+t-1})$; this allows us to use more or less the same argument in the $t\gt 1$ and $t=1$ cases. Here I will explain the details for $t=1$, inserting a note on how the proof can be adapted to the $t\gt 1$ case.

For $j\geq 0$, consider the vector $v_j=(a_{j},a_{j+1},\ldots,a_{j+(d-2)})$, which lives in ${\mathbb Z}^{d-1}$. The $d$ vectors $v_0,\ldots ,v_{d-1}$ live in a $\mathbb Q$-space of dimension $d-1$, so they are linearly dependent over $\mathbb Q$; clearing denominators, there are integers $\lambda_0,\lambda_1,\ldots,\lambda_{d-1}$, not all zero, with

$$ \sum_{k=0}^{d-1} \lambda_k v_k = 0 \tag{2} $$

Now, consider the sequence $(b_n)$ defined by $b_n=\sum_{k=0}^{d-1} \lambda_k a_{n+k}$. Then $(b_n)$ is an integer-valued sequence satisfying the same linear recurrence as $(a_n)$:

$$ D b_n = \sum_{k=1}^d c_k b_{n-k} \tag{3} $$

And we also have $b_0=b_1=\ldots=b_{d-2}=0$; on the other hand, $(b_n)$ cannot be the null sequence (a vanishing $(b_n)$ would give a rational linear recurrence of degree $\leq d-1$ for $(a_n)$, violating the minimality of $d$ in (1); in the $t\gt 1$ case, the two important facts are that (a) the same recurrence is shared by all the subsequences, and (b) this recurrence has the same degree $d$ as the original one). So $b_{d-1}$ must be nonzero, since otherwise the first $d$ terms of $(b_n)$ would all vanish and (3) would force $(b_n)$ to be null; we can write $b_{d-1}=p^s \beta$ with $s\geq 0$ and $p\nmid\beta$.

Then, the index $d-1$ is clearly nice for $(b_n)$. This brings the desired contradiction and finishes the proof.
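For concreteness, the relation (2) and the sequence $(b_n)$ can be produced mechanically from a rational nullspace computation. Here is a small sketch of that linear-algebra step only, on an illustrative integer sequence of minimal degree $3$ (the sequence is just a sample, not one arising from the contradiction argument):

```python
from sympy import Matrix, ilcm

# Illustrative integer sequence with minimal recurrence degree d = 3:  a_n = 1 + 2^n + 3^n.
a = [1 + 2**n + 3**n for n in range(12)]
d = 3

# Columns of V are the vectors v_j = (a_j, ..., a_{j+d-2}) in Z^{d-1}, for j = 0, ..., d-1.
V = Matrix([[a[j + i] for i in range(d - 1)] for j in range(d)]).T
lam = V.nullspace()[0]                       # rational relation  sum_k lam_k v_k = 0
lam = lam * ilcm(*[x.q for x in lam])        # clear denominators to get integers

# b_n = sum_k lam_k a_{n+k}:  integer-valued, same recurrence as (a_n), and b_0 = ... = b_{d-2} = 0.
b = [sum(lam[k] * a[n + k] for k in range(d)) for n in range(len(a) - d)]
print(list(lam), b[:6])                      # (b_n) starts with d-1 = 2 zeros
```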

An example: In answer to a comment, here is an illustration with the values $d=3,D=p=2,(c_1,c_2,c_3)=(1,2,2)$, so that $(b_n)$ satisfies $b_{n+3}=\frac{1}{2}b_{n+2}+b_{n+1}+b_n$.

If we start with $b_0=b_1=0, b_{2}=2022$ (say), then $b_2=(2^1) \times 1011$, $b_3=(2^0) \times 1011$, $b_4=(2^{-1}) \times 5055$: the pattern is indeed the one predicted by the corollary of the lemma. To get the desired contradiction, we need look no further than $b_4$.
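These values (and the steadily decreasing $2$-valuations) can be reproduced with exact rational arithmetic; a small sketch with the same data:

```python
from fractions import Fraction

def nu2(x):
    """2-valuation of a nonzero rational."""
    v, num, den = 0, Fraction(x).numerator, Fraction(x).denominator
    while num % 2 == 0:
        num //= 2
        v += 1
    while den % 2 == 0:
        den //= 2
        v -= 1
    return v

# b_{n+3} = (1/2) b_{n+2} + b_{n+1} + b_n,  with  b_0 = b_1 = 0,  b_2 = 2022.
b = [Fraction(0), Fraction(0), Fraction(2022)]
for _ in range(6):
    b.append(Fraction(1, 2) * b[-1] + b[-2] + b[-3])

for n in range(2, 8):
    print(n, b[n], nu2(b[n]))   # valuations 1, 0, -1, -2, ...: b_4 is already not an integer
```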

Ewan Delanoy
  • 63,436
  • I don't understand very well the argument, maybe you can explain it in a simplified setting: $(b_n)$ with a minimal equation of the form $b_{n+3} = 1/2 b_{n+2} + b_{n+1} + b_n$, and $b_0 = b_1 = 0$, $b_2 \ne 0$ ( so $p=2$) I wonder how the contradiction is obtained. – orangeskid Jan 31 '23 at 22:14
  • @orangeskid I have added a few details (definition 3 and a lemma), plus an illustration with your numerical values. Hopefully it's clearer now – Ewan Delanoy Feb 01 '23 at 13:17
  • I get it now, that is very nice! In this case the smallest value of $v_p$ on a segment $[0,n]$ is achieved only at $n$ and is strictly decreasing. But what do we do if the other coefficients after the first are not $2$-integers? I will think more about it. – orangeskid Feb 01 '23 at 13:46
  • I am looking at the proof of the lemma. If $v_p(c_1) \le v_p(c_k)$ for all $k\ge 2$ then all is OK. I am not sure that it works if the coefficients $c_k$, $k\ge 2$ have $p$ valuations $< v_p(c_1)$, that is, the line with Proof of the lemma I do not see, maybe I'm just not getting it. – orangeskid Feb 01 '23 at 17:44
  • I thought about your idea of using shifts of your sequence. Basically we have an iso between polynomials of degree $d-1$ (hence linear combinations of shifts $(a'_n)$) and the initial $d$ terms, which can be chosen arbitrarily. Maybe that would show that we get some subsequence $(v_p(a'_{n_s}))_s$ decreasing to $-\infty$. – orangeskid Feb 01 '23 at 19:01
  • Here is how I see your idea: if we have a recurrence of degree $d$ that is rational but not integral (focus only on one prime $p$, say), then for a good (generic?) initial value the $p$-denominators will explode. I am very happy to learn this. Whatever kinks there are, I am sure they are fixable. One can even test with a computer in particular cases. Great, thank you! – orangeskid Feb 01 '23 at 19:47
  • 1
    @orangeskid Regarding the proof of the lemma, you said (rightly) : "if $v_p(c_1)\leq v_p(c_k)$ for all $k\geq 2$ then all is OK". But this is indeed the case if all the $c_k$'s are integers and $p$ does not divide $c_1$. If, on the other hand, $p$ does divide the integer $c_1$, as I explain in this answer you have to consider subsequences to reduce the general case to this case. – Ewan Delanoy Feb 01 '23 at 19:50
  • Thank you, I've learned a lot from your answers! – orangeskid Feb 02 '23 at 23:00
1

I will add a sketch of a solution since the post is becoming rather long.

The solution consists of two parts. One is linear algebra, the other is more commutative algebra than number theory. The first part is dealt with in the posting. Here is the second part.

The key word: discrete valuations.

Let $(A, v)$ be a discrete valuation ring with field of fractions $K$. Consider a linear recurrence sequence $(a_n)$ of elements of $A$ satisfying a recurrence of degree $d$:

$$a_{n+d} = \sum_{k=1}^d c_k a_{n+d-k}$$

for all $n\ge 0$ (the $c_k \in K$ are fixed). Then $(a_n)$ satisfies a recurrence of degree $\le d$ with coefficients in $A$.

Indeed, passing to a finite extension, we may assume that the roots $\gamma_k$ of the polynomial $x^d - \sum c_k x^{d-k}$ are in $K$. Let us show that only the integral roots matter. For that, we will now prove the following

Lemma: Let $P_1, \ldots, P_s$ be polynomials in $A[x]$ of degrees $d_1-1, \ldots, d_s-1$, and let $\gamma_1, \ldots, \gamma_s$ be in $K\setminus A$. Assume that for all $n\ge 0$ we have

$$a'_n := \sum_{k=1}^s P_k(n) \gamma_k^n \in A$$

Then $a'_n = 0$ for all $n \ge 0$.

Indeed, $a'_n$ satisfies the linear recurrence given by the polynomial

$$\prod_{k=1}^s (x-\gamma_k)^{d_k}$$

Write the above polynomial as

$$x^m - (c'_1 x^{m-1} + \cdots +c'_m)$$

Notice that, by construction, $c'_m$ has valuation strictly smaller than that of all the other coefficients: indeed $c'_m = \pm\prod_k \gamma_k^{d_k}$, each $\gamma_k$ has strictly negative valuation, and every other coefficient is (up to sign) a sum of products of proper sub-multisets of the roots.

Now, look again at the sequence $(a'_n)$. Assume it is not the zero sequence. Then there exist terms of valuation $< \infty$, and since all terms lie in $A$, their valuations are nonnegative integers; so there is a term of smallest possible valuation, say $a'_r$. Solving the recurrence for $a'_r$, that is, writing $c'_m a'_r = a'_{r+m} - \sum_{j=1}^{m-1} c'_j a'_{r+m-j}$, and using that $v(c'_m)$ is strictly smaller than $0$ and than every other $v(c'_j)$, we conclude that $v(a'_r)$ is strictly larger than the smallest valuation among the next $m$ terms of the sequence, which is itself $\geq v(a'_r)$: a contradiction.

A germ of the idea above: assume that an integer sequence satisfies the recurrence

$$a_{n+2} = \frac{3}{2} a_{n+1} - \frac{5}{4} a_n$$

Then $a_n = 0$ for all $n$.
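One can watch the mechanism with exact arithmetic: with the (arbitrarily chosen) starting values below, powers of $2$ quickly pile up in the denominators. A quick sketch:

```python
from fractions import Fraction

# a_{n+2} = (3/2) a_{n+1} - (5/4) a_n,  starting from arbitrary nonzero integers.
a = [Fraction(1), Fraction(1)]
for _ in range(8):
    a.append(Fraction(3, 2) * a[-1] - Fraction(5, 4) * a[-2])

print([str(x) for x in a])
# ['1', '1', '1/4', '-7/8', '-13/8', '-43/32', '1/64', ...]:
# powers of 2 keep accumulating in the denominators for this start.
```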

$\bf{Added:}$ I like the solution of @Ewan Delanoy; I am trying to explain what I finally understood. In a few words: if the recurrence has rational, non-integer coefficients, then repeated application of it involves arbitrarily large denominators. Now, if the recurrence is also minimal, we can essentially choose the initial conditions at our convenience, and this eventually produces non-integral elements of the sequence, hence a contradiction. Details below, as I understood them.

  1. Consider a recurrence

$$a_{n+d} = \sum_{k=1}^d c_k a_{n+d-k}$$ with rational coefficients $c_k$. Write it in matrix form

$$A_{n+1} = C \cdot A_{n}$$

where $A_{n} = (a_{n+d-1}, a_{n+d-2}, \ldots, a_{n})$ and $C$ is a companion matrix (not hard to see). Then we have

$$A_{n+s} = C^s \cdot A_{n}$$

Assume that not all of the coefficients $c_k$ are integers. We want to show that some denominators of the entries of $C^s$ are unbounded as $s\to \infty$. Now, this may not be the case for every matrix $C$ with rational, not all integral, entries -- think of a conjugate of an integral matrix. However, the crucial thing here is that $C$ has a characteristic equation with some non-integral coefficients. Moreover, the characteristic equation of $C^s$ has roots $x_i^s$, where the $x_i$ are the roots of $x^d - (\sum c_k x^{d-k})$. It is enough to focus on one prime number. Here is the

Lemma: Consider $P(x)=x^d - (\sum c_k x^{d-k})$ in $\mathbb{Q}[x]$, with roots $x_i$, $1\le i \le d$, and with some coefficients having $p$ in the denominator. Let $P_s(x)$ be the monic polynomial with roots $x_i^s$. Then, as $s\to \infty$, some coefficients of $P_s$ have arbitrarily large powers of $p$ in the denominator.

Proof: Work in a finite extension of $\mathbb{Q}$ containing all of the roots $x_i$. Let $|\cdot|$ be an extension of the usual $p$-adic absolute value. Now we know that some root $x_i$ has $|x_i|>1$ (otherwise all of the coefficients of $P(x)$ would be $p$-integers). So $|x_i^s|\to \infty$ as $s\to \infty$. Now, since the roots are bounded in terms of the coefficients, it follows that the coefficients of $P_s(x)$ are unbounded (in $|\cdot|$) as $s\to \infty$.

Note: Some of the coefficients of $P_s(x)$ can stay bounded. A more detailed analysis shows that if $|c_1|_p > 1$ then $(|c_{1,s}|)_s$ is also unbounded (the Newton sums).
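To experiment with this (a sketch of my own, not part of the proof): $P_s$ is the characteristic polynomial of $C^s$, where $C$ is the companion matrix of $P$, so its coefficients can be computed exactly. For $P(x) = x^2 - \tfrac12 x - 1$ and $p=2$, the coefficient of $x$ picks up a denominator $2^s$ while the constant term stays $\pm 1$, matching the note above.

```python
from sympy import Matrix, Rational, symbols

x = symbols('x')

# P(x) = x^2 - (1/2) x - 1,  i.e.  c_1 = 1/2, c_2 = 1;  its companion matrix:
C = Matrix([[Rational(1, 2), 1],
            [1,              0]])

# P_s is the characteristic polynomial of C^s (its roots are the x_i^s).
for s in (1, 2, 4, 8, 16):
    print(s, (C**s).charpoly(x).as_expr())
# the x-coefficient has denominator 2^s, while the constant term stays +-1
```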

  2. Say we have a recurrence $a_{n+d}= \sum_{k=1}^d c_k a_{n+d-k}$ that is minimal (the sequence $(a_n)$ does not satisfy a recurrence of smaller degree). We then have a map from polynomials of degree $< d$ to sequences satisfying the same recurrence as above, $$\sum_{k=0}^{d-1} u_k x^k \mapsto (\sum u_k T^k)(a_n)= (a'_n)$$ where $T$ is the shift of sequences. Since the degree-$d$ recurrence for $(a_n)$ is minimal, the map is injective. Look at it as a map from $(u_0, \ldots, u_{d-1})$ to the initial $d$ terms of $(a'_n)$. It is again injective (easy to see). Now, over $\mathbb{Q}$ that means bijective (being linear), while over $\mathbb{Z}$ it means the image has finite index in $\mathbb{Z}^d$. (A small sketch of this map appears after the list.)

  3. Putting 1. and 2. together: by 1., the sequence of matrices $(C^s)$ has entries with unbounded $p$-denominators. By 2., we can choose an initial value at our convenience from some $N \cdot \mathbb{Z}^d$. It is easy to conclude now.
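As promised, a small sketch of the map in step 2: its matrix with respect to the standard bases is the Hankel-type matrix $(a_{i+k})_{0\le i,k\le d-1}$, and minimality makes it invertible over $\mathbb{Q}$. The Fibonacci numbers below are just an illustrative choice.

```python
from sympy import Matrix

# Fibonacci numbers: the minimal recurrence is a_{n+2} = a_{n+1} + a_n, so d = 2.
a = [0, 1]
for _ in range(10):
    a.append(a[-1] + a[-2])
d = 2

# Matrix of (u_0, ..., u_{d-1}) |-> initial d terms of a'_n = sum_k u_k a_{n+k}.
H = Matrix([[a[i + k] for k in range(d)] for i in range(d)])
print(H, H.det())   # nonzero determinant: bijective over Q; over Z the image has index |det|
```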

orangeskid
  • 56,630
  • @Ewan Delanoy: I am only considering the piece of the sequence formed by the roots that are not integers. That forms another recurrence sequence, with the last coefficient having a $v$-value smaller than all of the others. So it will not be the original recurrence, but another one, satisfying an important condition w.r.t. $v$, and we show that if all of the terms are integers, they must be $0$. Therefore, the piece of $(a_n)$ formed by the non-integral roots is $0$. (Integer means nonnegative valuation. Recall: integral closure = intersection of the valuation rings containing it.) – orangeskid Feb 01 '23 at 13:51