In this post, there are good perspectives on the diagonalization of the continuous convolution operation. However, something about it troubles me deeply, and I'd like to formalize my concern here using bi-infinite discrete vectors.
If I'm not mistaken, every discrete linear map (even an infinite-dimensional one) can be written as a matrix in some basis. Now let
$$ A_{f}(g)(x) = f \ast g(x) = \sum_{y \in \mathbb{Z}} f(y)g(x-y), \quad x \in \mathbb{Z}, $$
be the linear map given by convolution of bi-infinite vectors $f,g \in l_{2}(\mathbb{Z}, \mathbb{C})$, the discrete analogue of the continuous case described in the post. Now, for $e_{s}(x) = e^{i2\pi s x}$,
$$ \begin{align*} A_{f}(e_{s})(x) &= \sum_{y \in \mathbb{Z}} f(y)e_{s}(x-y) \\ &= \sum_{y \in \mathbb{Z}} f(y)e^{i2\pi s (x-y)} \\ &= e^{i2\pi s x} \sum_{y \in \mathbb{Z}} f(y)e^{-i2\pi s y} \\ &= e^{i2\pi s x} F(s) \\ &= F(s)e_{s}(x), \end{align*} $$
i.e. $e_{s}$ is an eigenvector of $A_{f}$ whose eigenvalue is the Fourier transform $F(s)$ of $f$. It is also clear to me that $\sum_{y \in \mathbb{Z}} f(y)g(x-y)$ can be written as a Toeplitz matrix (built from $g$) times the vector $f$:
$$ \begin{equation*} \begin{pmatrix} \ddots & \vdots & \vdots & \vdots & \vdots & \vdots \\ \cdots & g(0) & g(-1) & g(-2) & g(-3) & \cdots \\ \cdots & g(1) & \boxed{g(0)} & g(-1) & g(-2) & \cdots \\ \cdots & g(2) & g(1) & g(0) & g(-1) & \cdots \\ \cdots & g(3) & g(2) & g(1) & g(0) & \cdots \\ \vdots & \vdots & \vdots & \vdots & \vdots & \ddots \end{pmatrix} \begin{pmatrix} \cdots \\ f(-1) \\ f(0) \\ f(1) \\ f(2) \\ \cdots \end{pmatrix} \end{equation*} $$
where I've boxed the $(0,0)$ entry of the matrix for clarity.
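As a sanity check, here is a minimal numerical sketch of both facts on a finite circular stand-in for the bi-infinite case: working on $\mathbb{Z}_N$ instead of $\mathbb{Z}$ makes every sum finite and turns the bi-infinite Toeplitz matrix into an $N \times N$ circulant (the values of `N`, `s`, and the random `f`, `g` below are arbitrary choices of mine, not anything from the post).

```python
import numpy as np
from scipy.linalg import circulant

# Finite circular stand-in for the bi-infinite setting: work on Z_N
# instead of Z, so every sum is finite and the Toeplitz matrix
# becomes an N x N circulant. N, s, f, g are arbitrary test data.
N = 8
s = 3
rng = np.random.default_rng(0)
f = rng.standard_normal(N) + 1j * rng.standard_normal(N)
g = rng.standard_normal(N) + 1j * rng.standard_normal(N)

# circulant(g)[x, y] == g[(x - y) % N]: the matrix with entries
# g(x - y) from the Toeplitz picture above, wrapped cyclically.
A_g = circulant(g)

# The matrix-vector product reproduces the convolution sum
# sum_y f(y) g(x - y).
conv = np.array([sum(f[y] * g[(x - y) % N] for y in range(N))
                 for x in range(N)])
print(np.allclose(A_g @ f, conv))         # True

# e_s is an eigenvector of A_f with eigenvalue
# F(s) = sum_y f(y) e^{-i 2 pi s y / N}, i.e. the DFT of f at s.
A_f = circulant(f)
e_s = np.exp(2j * np.pi * s * np.arange(N) / N)
F_s = np.fft.fft(f)[s]
print(np.allclose(A_f @ e_s, F_s * e_s))  # True
```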
Now, the thing that troubles me here is that $g = e_s$ is an eigenvector of $A_f$, yet in the matrix representation $f$ becomes the vector and $g$ becomes the matrix; moreover, the eigenvalue comes from the vector $f$ (it is the Fourier transform $F(s)$ of $f$). So the roles of linear map and vector in a generic eigenvalue problem $Ag = \lambda g$ are sort of switched, which I find really confusing. I know that linear maps themselves form a vector space, and that "normal" vectors can be interpreted as linear maps (linear functionals), but there is something going on here that prevents me from getting the full picture.
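To make the role switch concrete, here is a tiny check under the same finite circular assumptions as above: since convolution commutes, encoding $f$ as the matrix and $g$ as the vector gives exactly the same result as the other way around.

```python
import numpy as np
from scipy.linalg import circulant

# Same finite circular stand-in as above (arbitrary test data).
N = 8
rng = np.random.default_rng(0)
f = rng.standard_normal(N) + 1j * rng.standard_normal(N)
g = rng.standard_normal(N) + 1j * rng.standard_normal(N)

# Convolution commutes, so either factor can play the "matrix" role
# while the other plays the "vector" role -- the switch in question.
print(np.allclose(circulant(f) @ g, circulant(g) @ f))  # True
```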
How can we interpret $e_s$ (a vector) as an eigenvector in this problem, when it gets converted into a matrix because its "role" in the convolution is inherently two-dimensional, indexed by the two parameters $x$ and $y$?
In other words, in the convolution eigenvalue problem, how can we interpret $g$ as a vector and $f$ as a linear map when $g$ appears as "two-dimensional" and $f$ as "one-dimensional"?