I have an algorithm for converting a two-variable function $F(x,y)$ into a sum of products of single-variable functions $F(x,y) = \sum_i g_i(x)h_i(y)$. I am attempting to determine whether (or when) the $g_i$ produced in this way are all linearly independent.
The algorithm is as follows:
- If $F(x,y)$ is identically zero, then we're done—the decomposition needs no terms.
- Otherwise, pick a point $(a,b)$ where $F(a,b)\neq 0$. Define $g(x)\equiv F(x,b)$ and $h(y)\equiv F(a,y)/F(a,b)$; these are the factors of the first term $g(x)h(y)$ of the decomposition.
- To find the remaining factors, recursively apply this algorithm to the new function $G(x,y)\equiv F(x,y)-g(x)h(y)$.
- The algorithm proceeds in this way, generating successive terms $g_i, h_i$, and stops as soon as $F(x,y) - \sum_i g_i(x)h_i(y) = 0$, i.e. when $F(x,y)=\sum_i g_i(x)h_i(y)$. (A small numerical sketch of the procedure is given just below.)
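Here is a minimal numerical sketch of the procedure, under the assumption that $F$ is only known through samples on a finite grid, so that $F$ becomes a matrix and the points $(a_i,b_i)$ become index pairs. The function name `decompose` and the pivot-selection rule (largest remaining entry) are my own choices, not part of the algorithm as stated:

```python
import numpy as np

def decompose(F, tol=1e-12):
    """Discretized version of the recursion.  F[i, j] holds samples F(x_i, y_j).
    Returns sampled factors gs, hs and the pivot index pairs (a_i, b_i)."""
    R = np.array(F, dtype=float)        # current remainder G(x, y), initially F itself
    gs, hs, pivots = [], [], []
    while np.max(np.abs(R)) > tol:      # stop once the remainder is (numerically) zero
        a, b = np.unravel_index(np.argmax(np.abs(R)), R.shape)  # a point with G(a, b) != 0
        g = R[:, b].copy()              # g_i(x) = G(x, b_i)
        h = R[a, :] / R[a, b]           # h_i(y) = G(a_i, y) / G(a_i, b_i)
        R = R - np.outer(g, h)          # next remainder G - g_i * h_i
        gs.append(g); hs.append(h); pivots.append((a, b))
    return gs, hs, pivots

# quick check on a function of "rank" 3 sampled on a grid
rng = np.random.default_rng(0)
F = rng.standard_normal((6, 3)) @ rng.standard_normal((3, 7))
gs, hs, pivots = decompose(F)
print(len(gs))                                                      # 3 terms
print(np.allclose(F, sum(np.outer(g, h) for g, h in zip(gs, hs))))  # True
```

In exact arithmetic the loop terminates precisely when the remainder is identically zero, matching the stopping rule above; the `tol` threshold is only there for floating point.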
What I've tried
I think it might be true that each function $g_k$ is a linear combination of $F(x,b_1),\ldots,F(x,b_k)$, with a nonzero coefficient on $F(x,b_k)$; in that case the $g_k$ are linearly independent if and only if the $F(x,b_i)$ are.
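For what it's worth, unwinding the recursion one step does seem to produce a combination of exactly this form, with coefficient $1$ on the newest slice:
$$g_2(x) \;=\; G(x,b_2) \;=\; F(x,b_2) - g_1(x)\,h_1(b_2) \;=\; F(x,b_2) - h_1(b_2)\,F(x,b_1),$$
and more generally $g_k(x) = F(x,b_k) - \sum_{i<k} h_i(b_k)\,g_i(x)$, since $g_k$ is the slice at $y=b_k$ of the remainder $F - \sum_{i<k} g_i h_i$. If that is right, the change of basis from $F(x,b_1),\ldots,F(x,b_k)$ to $g_1,\ldots,g_k$ is triangular with $1$'s on the diagonal, hence invertible, which would justify the "if and only if" above.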
I also know the theorem that functions $g_1,\ldots,g_n$ are linearly independent if and only if there exist points $x_1,\ldots,x_n$ such that $\det\big([g_i(x_j)]_{i,j}\big) \neq 0$. But so far I haven't managed to make this determinant calculation yield anything useful.
I wonder whether the points $a_1,\ldots,a_n$ generated by the algorithm (the $x$-coordinates of the points $(a_i,b_i)$ at which the successive remainders are nonzero) could serve as the points $x_1,\ldots,x_n$ required by that theorem, but I don't see why they should have the properties needed to prove linear independence.
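Continuing the sampled sketch from above, here is a quick numerical check of exactly this idea: form the matrix $[g_i(a_j)]$ from the pivot rows and take its determinant (the names are carried over from that earlier snippet):

```python
# evaluate each g_i at the pivot x-indices a_j from the earlier sketch
a_idx = [a for a, b in pivots]
M = np.array([[g[a] for a in a_idx] for g in gs])  # M[i, j] = g_i(a_j)
print(np.linalg.det(M))  # comes out nonzero, consistent with the zero pattern guessed below
```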
ETA: I think the $g_i(x)$ might vanish at different subsets of the points $a_j$. Specifically, I suspect $g_i(a_j)$ is zero whenever $i>j$ and nonzero whenever $i=j$, which suggests a proof by induction: if some linear combination of the $g_i$ is zero, then evaluating it at $a_1$ forces the coefficient on $g_1$ to be zero, evaluating at $a_2$ then forces the coefficient on $g_2$ to be zero, and so on.
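A computation that seems to support this: with $g_1(x)=F(x,b_1)$ and $h_1(y)=F(a_1,y)/F(a_1,b_1)$, the first remainder vanishes identically on the cross through $(a_1,b_1)$,
$$G(a_1,y)=F(a_1,y)-g_1(a_1)\,h_1(y)=F(a_1,y)-F(a_1,b_1)\,\frac{F(a_1,y)}{F(a_1,b_1)}=0,
\qquad
G(x,b_1)=F(x,b_1)-g_1(x)\,h_1(b_1)=0,$$
using $g_1(a_1)=F(a_1,b_1)$ and $h_1(b_1)=1$. Each later subtraction preserves the vanishing on the line $x=a_1$, because the $g$ being subtracted is itself a slice of a remainder that already vanishes there. If that carries through, then $g_i(a_j)=G_{i-1}(a_j,b_i)=0$ for all $j<i$, while $g_i(a_i)\neq 0$ by the choice of $(a_i,b_i)$, so the matrix $[g_i(a_j)]$ would be triangular with nonzero diagonal, which is exactly what the induction sketched above needs.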