
In this post, there are good perspectives on the diagonalization of the continuous convolution operation. However, something about it troubles me deeply, and I'd like to formalize my concerns here with bi-infinite discrete vectors.

If I'm not mistaken, every discrete linear map (even an infinite-dimensional one) can be written as a matrix in some basis. Now let $A_{f}(g)(x) = f \ast g(x) = \sum_{y \in \mathbb{Z}} f(y)g(x-y), x \in \mathbb{Z}$ be the linear map of convolving bi-infinite vectors $f,g \in l_{2}(\mathbb{Z}, \mathbb{C})$, as described in the post for the analogous case of continuous functions. Now, for $e_{s}(x) = e^{i2\pi s x}$, $$ \begin{align*} A_{f}(e_{s})(x) &= \sum_{y \in \mathbb{Z}} f(y)e_{s}(x-y) \\ &= \sum_{y \in \mathbb{Z}} f(y)e^{i2\pi s (x-y)} \\ &= e^{i2\pi s x} \sum_{y \in \mathbb{Z}} f(y)e^{-i2\pi s y} \\ &= e^{i2\pi s x} F(s) \\ &= F(s)e_{s}(x) \end{align*} $$ i.e. $e_{s}$ is an eigenvector of $A_{f}$ with eigenvalue the Fourier transform $F(s)$ of $f$. Also, it is clear to me that $\sum_{y \in \mathbb{Z}} f(y)g(x-y)$ can be written as a Toeplitz matrix times $f$: $$ \begin{equation*} \begin{pmatrix} \ddots & \vdots & \vdots & \vdots & \vdots & \vdots \\ \cdots & g(0) & g(-1) & g(-2) & g(-3) & \cdots \\ \cdots & g(1) & \boxed{g(0)} & g(-1) & g(-2) & \cdots \\ \cdots & g(2) & g(1) & g(0) & g(-1) & \cdots \\ \cdots & g(3) & g(2) & g(1) & g(0) & \cdots \\ \vdots & \vdots & \vdots & \vdots & \vdots & \ddots \end{pmatrix} \begin{pmatrix} \cdots \\ f(-1) \\ f(0) \\ f(1) \\ f(2) \\ \cdots \end{pmatrix} \end{equation*} $$ where I've boxed the $(0,0)$ entry of the matrix for clarity.
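The eigenvalue claim above can be checked numerically in a finite, periodic (circulant) toy model of the bi-infinite operator, where indices are taken mod $N$ so the sum converges trivially. This is a sketch under that assumption; the size `N`, the random `f`, and the frequency `s` are illustrative choices, not from the post:

```python
import numpy as np

# Circulant analogue of the bi-infinite convolution operator:
# (A_f g)(x) = sum_y f(y) g((x - y) mod N).
N = 8
rng = np.random.default_rng(0)
f = rng.standard_normal(N) + 1j * rng.standard_normal(N)

x = np.arange(N)
A_f = f[(x[:, None] - x[None, :]) % N]   # A_f[x, y] = f((x - y) mod N)

s = 3                                    # any integer frequency 0..N-1
e_s = np.exp(2j * np.pi * s * x / N)     # discrete exponential e_s(x)

# The derivation predicts A_f e_s = F(s) e_s, with F(s) the Fourier
# transform of f; np.fft.fft uses the same exp(-2*pi*i*s*y/N) convention.
F = np.fft.fft(f)
assert np.allclose(A_f @ e_s, F[s] * e_s)
```

Here the matrix is built from $f$ (not $g$), mirroring the derivation in which $A_f$ acts on $e_s$.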

Now, the thing that troubles me here is that $g = e_s$ is an eigenvector of $A_f$ but in the matrix representation $f$ becomes the vector and $g$ becomes the matrix, and in addition the eigenvalues come from the vector $f$, (the Fourier transform $F(s)$). So, the roles of linear map and vector in a generic eigenvalue problem $Ag = \lambda g$ are sort of switched, which to me is really confusing. I know that linear maps satisfy the requirements of the definition of a vector and "normal" vectors can be interpreted as linear maps (linear functionals), but there is something going on here that prevents me from getting the full picture.

How can we interpret $e_s$ (a vector) as an eigenvector in this problem, when it gets converted to a matrix, since its "role" in the convolution is inherently two-dimensional because of the two-parameter input $x$ and $y$?

In other words, in the convolution eigenvalue problem, how can we interpret $g$ as a vector and $f$ as a linear map when $g$ is "two-dimensional" and $f$ is "one-dimensional"?
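The role-swap can be made concrete in the same circulant toy setting (again an assumption for illustration: finite size, mod-$N$ indices, random data). Either function can be promoted to the matrix, and by commutativity of convolution both choices produce the same vector:

```python
import numpy as np

# Circulant toy model: build the "Toeplitz" convolution matrix from either
# function and apply it to the other; the results coincide.
N = 8
rng = np.random.default_rng(1)
f = rng.standard_normal(N)
g = rng.standard_normal(N)

x = np.arange(N)
def conv_matrix(h):
    """Circulant matrix with (row x, col y) entry h((x - y) mod N)."""
    return h[(x[:, None] - x[None, :]) % N]

# f as operator acting on g, or g as operator acting on f:
assert np.allclose(conv_matrix(f) @ g, conv_matrix(g) @ f)
```

So "which one is the matrix" is a choice of bookkeeping, not a property of the functions themselves.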

qwerty
  • Letting $z = x - y$ so that $y = x - z$, we see that $$\sum_{y\in \Bbb Z} f(y)g(x-y) = \sum_{z\in \Bbb Z}f(x-z)g(z).$$ $f$ and $g$ are exactly the same sort of object, and the convolution is symmetric: $f \ast g = g \ast f$. In writing $A_f(g)$, you chose to treat $f$ as a linear operator acting on $g$. In writing out your Toeplitz matrix, you chose to treat $g$ as a linear operator acting on $f$. Either view is correct, and by the symmetry of convolution, the result of either action is the same. – Paul Sinclair Dec 13 '24 at 14:46
  • My confusion was that whichever expression we choose, variables $x$ and $y$ or variables $x$ and $z$, i.e., whichever function ($f$ or $g$) is treated as having the two inputs, then in either case the function with the two-parameter input "becomes" the eigenvector, since it relies on the property of the $e_s$ function to "transform" summation "into" multiplication. And similarly, the function having one input "becomes" the eigenvalue. (splitting this comment in two because of the length requirement) – qwerty Dec 16 '24 at 08:28
  • So, in writing $A_f(g)$ when I chose to treat $f$ as a linear operator acting on $g$, which resulted in $g = e_s$ becoming the eigenvector, then that choice necessarily implies that $g$ be treated as the "two-dimensional" object (i.e. the matrix) and $f$ as the one-dimensional object (i.e. the vector), since otherwise we can't exploit the property of $e_s$ "transforming" the two-parameter input "into" multiplication. And the same would hold if we were to choose the linear operator $A_g$ acting on $f$: in the Toeplitz representation, $f$ would become the matrix and $g$ would become the vector. – qwerty Dec 16 '24 at 08:28
  • The matrix is the 2D object. Vectors, even eigenvectors, are the 1D objects. In the eigenvector equation $Mv = \lambda v$, $M$ is the matrix (2D) acting on the eigenvector $v$ (1D), converting it into another vector, which for eigenvectors is just a multiple of itself. The "summation" into "multiplication" aspect is that the action of $M$ on $v$ is a summation of products of elements of $M$ with elements of $v$. The eigenvectors are a set of rare vectors where this action leaves the direction fixed, only multiplying the entire vector by a single scalar. – Paul Sinclair Dec 16 '24 at 13:50
  • In this case they exploit the commutativity of convolution and the summation-to-multiplication property of exponentiation to demonstrate that the eigenvectors for convolution are exponential functions. But this isn't some general concept that eigenvectors are somehow two-dimensional. It is a very specific trick using the special properties of this case. – Paul Sinclair Dec 16 '24 at 13:50
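The point in these comments, that the exponentials form a complete set of eigenvectors that diagonalize the convolution operator, can also be sketched in the finite circulant setting (again an illustrative assumption, with arbitrary `N` and random `f`): stacking every $e_s$ as the columns of the DFT-style matrix $E$ gives $A_f E = E \,\mathrm{diag}(F)$, with the Fourier transform of $f$ on the diagonal.

```python
import numpy as np

N = 8
rng = np.random.default_rng(2)
f = rng.standard_normal(N) + 1j * rng.standard_normal(N)

x = np.arange(N)
A_f = f[(x[:, None] - x[None, :]) % N]      # circulant convolution operator

# Column s of E is the exponential e_s(x) = exp(2*pi*i*s*x/N).
E = np.exp(2j * np.pi * np.outer(x, x) / N)

# Every column e_s is an eigenvector of A_f with eigenvalue F(s) = fft(f)[s]:
# the convolution operator is diagonal in the exponential basis.
F = np.fft.fft(f)
assert np.allclose(A_f @ E, E * F[None, :])
```

Note the matrix $A_f$ here is the 2D object and each column of $E$ is a 1D eigenvector, exactly as in the generic $Mv = \lambda v$ picture.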
