We have a recurrence of the form $a_0t_n + a_1t_{n-1} + ... + a_kt_{n-k} = 0$, with $k$ initial conditions, i.e. $t_i = b_i$ for $a \leq i < a+k$, where the $b_i$ are constants.
There are infinitely many solutions to the equation $a_0t_n + a_1t_{n-1} + ... + a_kt_{n-k} = 0$, but my textbook suggests that, by intelligent guesswork, we should look for solutions of the form $t_n = x^n$. What is the intuition here?
Supposing we are allowed to do this, we then get the equation $a_0x^n + a_1x^{n-1} + ... + a_kx^{n-k} = 0$. Dividing by $x^{n-k}$ (which omits the trivial solution $x=0$), this equation holds if and only if $x$ satisfies the characteristic equation $p(x) = a_0x^k + a_1x^{k-1} + ... + a_k = 0$. By the fundamental theorem of algebra, a polynomial of degree $k$ has $k$ roots (counted with multiplicity). Therefore, we can factor the characteristic polynomial as $p(x) = a_0\prod_{i=1}^k (x-r_i)$, where the $r_i$ are the only solutions of $p(x) = 0$. For a given $r_i$, since $x = r_i$ is a root of $p(x)$, the sequence $t_n = r_i^n$ is a solution of $a_0t_n + a_1t_{n-1} + ... + a_kt_{n-k} = 0$. So far, so good.
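To convince myself of this step, here is a quick numerical check (my own example, not from the textbook) using the Fibonacci-style recurrence $t_n - t_{n-1} - t_{n-2} = 0$, whose characteristic equation is $x^2 - x - 1 = 0$:

```python
import math

# Roots of the characteristic equation x^2 - x - 1 = 0
r1 = (1 + math.sqrt(5)) / 2
r2 = (1 - math.sqrt(5)) / 2

# For each root r, the sequence t_n = r^n should satisfy
# t_n - t_{n-1} - t_{n-2} = 0 for all n.
for r in (r1, r2):
    for n in range(2, 10):
        residual = r**n - r**(n - 1) - r**(n - 2)
        assert abs(residual) < 1e-9, residual
print("both root-power sequences satisfy the recurrence")
```

The residual is zero (up to floating-point error) for every $n$, which matches the claim that each $t_n = r_i^n$ is a solution.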
However, besides the linear homogeneous equation, we also have $k$ initial conditions. Therefore, to find a $t_n$ satisfying these initial conditions too, we set $t_n = \sum_{i=1}^k c_ir_i^n$ for constants $c_1, c_2, ..., c_k$, since any linear combination of solutions is still a solution of our recurrence. With the $k$ initial conditions, we can then solve for $c_1, c_2, ..., c_k$ and find $t_n$. The next part is what confuses me. The textbook claims that our recurrence has "only" solutions of this form provided all the $r_i$ are distinct. I don't understand what this means. If we had repeated roots, we would have $t_n = \sum_{i=1}^s c_ir_i^n$, where $s < k$ is the number of distinct roots. We could still solve for $c_1, c_2, ..., c_s$ using the $k$ initial conditions, which would give us $t_n$. Why wouldn't this be correct?
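For concreteness, here is how I understand the "solve for the $c_i$" step in the distinct-root case, again on the Fibonacci recurrence with the hypothetical initial conditions $t_0 = 0$, $t_1 = 1$ (my own example). The initial conditions give a $2\times 2$ linear system in $c_1, c_2$, solved below by Cramer's rule:

```python
import math

# Distinct roots of x^2 - x - 1 = 0
r1 = (1 + math.sqrt(5)) / 2
r2 = (1 - math.sqrt(5)) / 2

# Initial conditions t_0 = 0, t_1 = 1 give the linear system
#   c1 + c2       = t_0
#   c1*r1 + c2*r2 = t_1
# Solve it by Cramer's rule (determinant of [[1,1],[r1,r2]] is r2 - r1).
t0, t1 = 0, 1
det = r2 - r1
c1 = (t0 * r2 - t1) / det
c2 = (t1 - t0 * r1) / det

# The closed form c1*r1^n + c2*r2^n should reproduce the sequence
# generated directly by the recurrence t_n = t_{n-1} + t_{n-2}.
seq = [t0, t1]
for n in range(2, 12):
    seq.append(seq[-1] + seq[-2])
for n in range(12):
    assert abs(c1 * r1**n + c2 * r2**n - seq[n]) < 1e-6

print(seq[:8])  # → [0, 1, 1, 2, 3, 5, 8, 13]
```

Here the closed form agrees with the recurrence at every index, because with $k$ distinct roots the system for the $c_i$ has exactly $k$ unknowns and $k$ equations.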