
When calculating the numerical range of the matrix $$ C := \begin{pmatrix} 0 & 1 \\ 0 & 0 \end{pmatrix} $$ and the left-shift operator on the Hilbert space $\ell_2$ $$ T: \ell_2(\mathbb{N}) \to \ell_2(\mathbb{N}), \ (x_1, x_2, \ldots) \mapsto (x_2, \ldots) $$ I noticed that the latter can be considered an infinite-dimensional generalisation of the former, as $$ \begin{pmatrix} 0 & 1 \\ 0 & 0 \end{pmatrix} \begin{pmatrix} x_1 \\ x_2 \end{pmatrix} = \begin{pmatrix} x_2 \\ 0 \end{pmatrix} \quad \text{and} \quad \begin{pmatrix} 0 & 1 & 0 \\ 0 & 0 & 1 \\ 0 & 0 & 0 \end{pmatrix} \begin{pmatrix} x_1 \\ x_2 \\ x_3 \end{pmatrix} = \begin{pmatrix} x_2 \\ x_3 \\ 0 \end{pmatrix} $$ and so on. If we now continue this pattern to infinity (I know this is not perfectly rigorous, but considering the norm of the difference of $T x$ and $T_n x$, where $T_n$ are the matrices, we see that it goes to zero, since the tail of an $\ell_2$ sequence vanishes in norm), we end up with $T$!
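The convergence claim can be checked numerically. Below is a quick NumPy sketch; the element $x_k = 1/k$ and the truncation length are illustrative choices, not part of the argument:

```python
import numpy as np

# Numerical check: for x in l^2, the n x n left-shift matrices T_n
# applied to the first n coordinates of x approximate T x, and
# ||T x - T_n x|| -> 0 as n grows.

def left_shift(x):
    """Left shift on a truncated sequence: (x_1, x_2, ...) -> (x_2, ...)."""
    return np.append(x[1:], 0.0)

def shift_matrix(n):
    """n x n matrix with ones on the superdiagonal (finite left shift)."""
    return np.diag(np.ones(n - 1), k=1)

# Represent x = (1, 1/2, 1/3, ...) in l^2 by a long truncation.
N = 10_000
x = 1.0 / np.arange(1, N + 1)
Tx = left_shift(x)

errors = []
for n in [10, 100, 1000]:
    Tn_x = shift_matrix(n) @ x[:n]            # T_n acting on the first n coords
    # Compare against T x on all N coordinates (pad T_n x with zeros).
    diff = Tx - np.append(Tn_x, np.zeros(N - n))
    errors.append(np.linalg.norm(diff))

print(errors)  # decreasing toward 0
```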

A similar example, of course, is the right-shift operator on $\ell_2$, with "finite-dimensional analogon" $$\begin{pmatrix} 0 & 0 \\ 1 & 0 \end{pmatrix}.$$

For $$T: \ell_2 \to \ell_2, \ (x_1, x_2, x_3, \ldots) \mapsto \left(x_1, \frac{x_2}{2}, \frac{x_3}{3}, \ldots \right)$$ it is more difficult to find such an analogon. One could argue as above that $$ T_1 := \begin{pmatrix} 1 & 0 \\ 0 & \frac{1}{2} \end{pmatrix} $$ is the finite-dimensional analogon, as $$ T_1 \begin{pmatrix} x_1 \\ x_2 \end{pmatrix} = \begin{pmatrix} x_1 \\ \frac{x_2}{2} \end{pmatrix}, $$ but this somehow fails to capture the "essence" of the operator the way the shift-operator examples above do: $\begin{pmatrix} 0 & 1 & 0 \\ 0 & 0 & 1 \\ 0 & 0 & 0\end{pmatrix}$ exhibits a certain self-similarity, in that the lower-dimensional matrix $C$ appears in it twice (though the two copies overlap).
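Numerically, the diagonal truncations $T_n = \operatorname{diag}(1, \tfrac{1}{2}, \ldots, \tfrac{1}{n})$ converge in exactly the same pointwise sense, even without the shifts' self-similar structure. A sketch (again with the illustrative element $x_k = 1/k$):

```python
import numpy as np

# Check: the diagonal truncations T_n = diag(1, 1/2, ..., 1/n) also
# satisfy ||T x - T_n x|| -> 0 for x in l^2, even though they lack the
# shift matrices' overlapping self-similar structure.

N = 10_000
x = 1.0 / np.arange(1, N + 1)               # stand-in for an l^2 sequence
weights = 1.0 / np.arange(1, N + 1)
Tx = weights * x                            # (x_1, x_2/2, x_3/3, ...)

errors = []
for n in [10, 100, 1000]:
    Tn_x = weights[:n] * x[:n]              # diag(1, 1/2, ..., 1/n) @ x[:n]
    diff = Tx - np.append(Tn_x, np.zeros(N - n))
    errors.append(np.linalg.norm(diff))

print(errors)  # decreasing toward 0
```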

Questions

  1. Are there other examples of such a correspondence (one could also consider $(x_1, x_2, x_3) \mapsto \left(x_1, \frac{x_2}{2!}, \frac{x_3}{3!}\right)$ and similar variations), and what might be a reason that $T$ has a "finite-dimensional analogon"?
  2. Consider $D := \{ y \in \ell_2: \exists N \in \mathbb{N}: y_n = 0 \ \forall n > N\}$, pick $x \in \ell_2 \setminus D$ and define $$ \hat{T}: \text{span}(x) + D \to \text{span}(x), \ cx + d \mapsto cx, $$ where $c \in \mathbb{C}$ and $d \in D$. $\hat{T}$ is still linear, but unbounded (and not closable). Can there still be a "finite-dimensional analogon"?
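A finite numerical experiment can at best hint at the unboundedness of $\hat{T}$, but the mechanism is easy to see: for the illustrative choice $x = (1, \tfrac{1}{2}, \tfrac{1}{3}, \ldots)$, the tail $y_N = x - x^{(N)}$ (with $x^{(N)}$ the truncation of $x$, so $y_N = 1\cdot x + d$ for some $d \in D$) satisfies $\hat{T} y_N = x$ while $\lVert y_N\rVert \to 0$:

```python
import numpy as np

# Illustration (not a proof) of why T-hat is unbounded: with
# x = (1, 1/2, 1/3, ...) not in D and y_N = x - (x_1, ..., x_N, 0, ...),
# we have y_N = 1*x + d with d in D, so T-hat(y_N) = x, yet ||y_N|| -> 0.
# Hence no constant C can satisfy ||T-hat(y_N)|| <= C * ||y_N|| for all N.

M = 100_000                                  # long truncation standing in for l^2
x = 1.0 / np.arange(1, M + 1)
norm_x = np.linalg.norm(x)                   # ||T-hat(y_N)|| = ||x|| is fixed

ratios = []
for N in [10, 100, 1000]:
    y = x.copy()
    y[:N] = 0.0                              # y_N = x - x^{(N)}, the tail of x
    ratios.append(norm_x / np.linalg.norm(y))  # ||T-hat(y_N)|| / ||y_N||

print(ratios)  # grows without bound as N increases
```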
ViktorStein

1 Answer


Let $T:\ell_2(\mathbb{N})\to\ell_2(\mathbb{N})$ be a (bounded) linear operator. Then, we can represent $T$ as:

$$T(x_1,x_2,\ldots)=(T_1(x_1,x_2,\ldots),T_2(x_1,x_2,\ldots),\ldots)=(T_j((x_i)_{i=1}^\infty))_{j=1}^\infty,$$ where $T_j:\ell_2(\mathbb{N})\to\mathbb{C}$.

Actually, this corresponds to thinking of $T$ as an "infinite" matrix. Note that linearity of $T$ implies linearity of each $T_j$, $j=1,2,\ldots$. Indeed, let $x,y\in\ell_2(\mathbb{N})$ and $\lambda\in\mathbb{C}$, and let $\{e_1,e_2,\ldots\}$ be the usual orthonormal basis of $\ell_2$. Then:

$$T_j(x+\lambda y)=\langle T(x+\lambda y),e_j\rangle=\langle Tx+\lambda Ty,e_j\rangle=\langle Tx,e_j\rangle+\lambda\langle Ty,e_j\rangle=T_jx+\lambda T_jy,$$ so $T_j$ is linear. Now, let $x\in\ell_2(\mathbb{N})$ and consider the sequence:

$$y_n=Tx-T^{(n)}x,$$ where $T^{(n)}x:=(T_1x,T_2x,\ldots,T_nx,0,\ldots)$. Then, we have:

$$\left\lVert y_n\right\rVert_2=\left\lVert Tx-T^{(n)}x\right\rVert_2=\left\lVert(0,\ldots,0,T_{n+1}x,T_{n+2}x,\ldots)\right\rVert_2=\left(\sum_{k=n+1}^\infty|T_{k}x|^2\right)^{1/2}.$$ Now, we have that: $$\left(\sum_{k=1}^\infty|T_kx|^2\right)^{1/2}=\lVert Tx\rVert_2<+\infty,$$ so:

$$\lim_{n\to+\infty}\left(\sum_{k=n+1}^\infty|T_{k}x|^2\right)^{1/2}=0\Rightarrow\lVert y_n\rVert_2\to0\Leftrightarrow y_n\to0.$$

So, the $T^{(n)}$ converge pointwise to $T$; in fact, $T$ need not even be bounded for this to hold.
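This row-truncation construction is easy to test numerically. The operator below is a hypothetical example chosen for illustration, $(Tx)_j = x_{j+1} + x_j/2^j$, i.e. the left shift plus a decaying diagonal:

```python
import numpy as np

# Sketch of the construction above: write T as an "infinite matrix" with
# rows T_j and let T^(n) keep only the first n output coordinates,
# T^(n) x = (T_1 x, ..., T_n x, 0, ...).  The operator here is a
# hypothetical example, (Tx)_j = x_{j+1} + x_j / 2^j.

M = 10_000                                   # truncation length standing in for infinity
x = 1.0 / np.arange(1, M + 1)                # illustrative l^2 element
Tx = np.append(x[1:], 0.0) + (0.5 ** np.arange(1, M + 1)) * x

errors = []
for n in [5, 50, 500]:
    Tn_x = Tx.copy()
    Tn_x[n:] = 0.0                           # drop output rows j > n
    errors.append(np.linalg.norm(Tx - Tn_x))  # ||y_n|| = ||Tx - T^(n) x||

print(errors)  # decreasing toward 0
```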

So, for every linear operator $T$ we have found a sequence of linear operators converging to it pointwise (it is easy to prove that the $T^{(n)}$ are indeed linear for every $n=1,2,\ldots$). But this does not yet capture what you propose as the "finite" analogon of $T$. Thus, we introduce finite rank operators.

A linear operator $T:E\to F$ is said to be of finite rank if $\dim\mathrm{im}\,T=n<+\infty$; in other words, $T$ has a finite-dimensional image. We denote its rank by $\mathrm{rank}(T)=n$.

Now, let us focus on bounded finite rank operators on $\ell_2$ (or on Hilbert spaces in general). One can prove that any finite rank operator on a Hilbert space can be represented as $$Tx=\sum_{k=1}^n\langle y_k,x\rangle a_k,$$ where $n=\mathrm{rank}(T)$, $y_k\in\ell_2$ and $\{a_k\}$ is any algebraic basis of $\mathrm{im}\,T$; the proof requires the Riesz Representation Theorem and is a little lengthy. Since we can choose any algebraic basis of $\mathrm{im}\,T$ we like, we can choose one of the form $\{e_{i_k}: k=1,2,\ldots,n\}$, where $\{e_i\}$ is the usual orthonormal basis of $\ell_2$. At this point, note that we strongly need $\mathrm{im}\,T$ to be finite-dimensional, since $\{e_1,e_2,\ldots\}$ is not an algebraic basis of $\ell_2$ in general. Finite-dimensionality, however, implies that $\mathrm{im}\,T$ is closed in the topology induced by the $\ell_2$-norm, so an orthonormal basis of $\mathrm{im}\,T$ is also an algebraic one.

Now we have that:

$$Te_i=\sum_{k=1}^n\langle y_k,e_i\rangle e_{i_k}=\sum_{k=1}^ny_k(i)e_{i_k}.$$

So, for an arbitrary $x=(x_1,x_2,\ldots)\in\ell_2$ we have (recall that $T$ is bounded and of finite rank):

$$\begin{align} Tx&=T\sum_{j=1}^\infty x_je_j=\sum_{j=1}^\infty x_jTe_j=\sum_{j=1}^\infty x_j\sum_{k=1}^ny_k(j)e_{i_k}=\sum_{j=1}^\infty\sum_{k=1}^nx_jy_k(j)e_{i_k}=\sum_{k=1}^n\sum_{j=1}^\infty x_jy_k(j)e_{i_k}=\\ &=\sum_{k=1}^ne_{i_k}\sum_{j=1}^\infty\underbrace{x_jy_k(j)}_{t_{kj}}. \end{align}$$

Using the above representation, one may "visualize" a finite rank operator acting on $x$ as the product of an $n\times\infty$ matrix with entries $y_k(j)$, $k=1,\ldots,n$, $j=1,2,\ldots$, and an $\infty\times1$ vector (actually an "infinite" column) $x=(x_1,x_2,\ldots)^t$. (Note that I "cheated", in the sense that the number of rows is not necessarily $n$ but rather $\max\{i_k:k=1,2,\ldots,n\}$; in any case, there are finitely many.)
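A small numerical example of this picture, with a hypothetical rank-2 operator $Tx = \langle y_1,x\rangle e_1 + \langle y_2,x\rangle e_3$ (the vectors $y_1, y_2$ and the indices $i_1=1$, $i_2=3$ are illustrative choices):

```python
import numpy as np

# The n x infinity picture: a rank-2 operator
# T x = <y_1, x> e_1 + <y_2, x> e_3 acts, on truncations, as a 2 x M
# matrix with rows y_k applied to x, the two results being placed in
# coordinates i_1 = 1 and i_2 = 3.

M = 1_000
y1 = 0.5 ** np.arange(1, M + 1)              # y_1 = (1/2, 1/4, 1/8, ...)
y2 = 1.0 / np.arange(1, M + 1) ** 2          # y_2 = (1, 1/4, 1/9, ...)
Y = np.vstack([y1, y2])                      # the 2 x M "matrix" of rows y_k

x = np.zeros(M)
x[:4] = [1.0, 2.0, 3.0, 4.0]                 # x = (1, 2, 3, 4, 0, 0, ...)

coeffs = Y @ x                               # (<y_1, x>, <y_2, x>)
Tx = np.zeros(M)
Tx[0], Tx[2] = coeffs                        # place results at e_1 and e_3

# Direct evaluation of the inner products agrees:
assert np.isclose(Tx[0], np.dot(y1, x))
assert np.isclose(Tx[2], np.dot(y2, x))
print(Tx[:4])
```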

Now, for your notion of "finite" analogon, consider a bounded linear operator $T$ and the "infinite" matrix representation we provided above:

$$Tx=(T_1x,T_2x,\ldots).$$

If we also demand that the $T_j$ are of finite rank, then we can think of the "finite" analogon of $T$ as the sequence $T^{(n)}$ defined above, except that we "trim" each operator's domain from $\ell_2$ to $\mathbb{C}^n$, so as to represent each operator by a finite square matrix instead of one with finitely many rows but infinitely many columns.

A more general result, which invokes convergence in the space of bounded operators rather than mere pointwise convergence, is the following (proof omitted, of course):

A bounded operator $T:H\to H$, where $H$ is a Hilbert space, is compact iff $\lVert T-F_n\rVert\to0$ for some sequence of bounded finite rank operators $F_n:H\to H$.

That is, if in the above you demand not pointwise convergence but convergence in operator norm, then $T$ has to be compact (which could be expected, since compactness is often seen as an "analogon" of finiteness).
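This dichotomy shows up clearly in a numerical sketch: for the compact diagonal operator $\operatorname{diag}(1,\tfrac12,\tfrac13,\ldots)$, truncating to the first $n$ rows gives $\lVert T-F_n\rVert = \tfrac{1}{n+1}\to 0$ in operator norm, whereas for the non-compact left shift the analogous truncation error stays at $1$ (the operators and truncation scheme here are illustrative):

```python
import numpy as np

# Compactness criterion in action: finite rank row-truncations F_n
# approximate the compact operator diag(1, 1/2, 1/3, ...) in operator
# norm, but not the (non-compact) left shift.

M = 200                                      # work with M x M truncations

diag_T = np.diag(1.0 / np.arange(1, M + 1))
shift_T = np.diag(np.ones(M - 1), k=1)

def op_norm(A):
    return np.linalg.norm(A, ord=2)          # spectral (operator) norm

diag_errors, shift_errors = [], []
for n in [5, 20, 80]:
    F = diag_T.copy(); F[n:, :] = 0.0        # keep first n rows: rank <= n
    diag_errors.append(op_norm(diag_T - F))
    G = shift_T.copy(); G[n:, :] = 0.0
    shift_errors.append(op_norm(shift_T - G))

print(diag_errors)    # 1/6, 1/21, 1/81 -> 0
print(shift_errors)   # stays at 1
```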