
Let $l^2$ be the Hilbert space of all complex sequences $\phi =(\phi_j)_{j=0}^{\infty}$ such that $\sum_{j=0}^{\infty} |\phi_j |^2 < \infty$. Set

$D= \{ \phi \in l^2 : \sum_{j=0}^{\infty} j |\phi_j |^2 < \infty \}$,

and consider the operator $X$ on $D$ which associates to each $\phi \in D$ the vector $X \phi$, whose j-th component (j=0,1,2,...) is

$(X \phi)_j = \sqrt{j+1} \phi_{j+1} + \sqrt{j} \phi_{j-1}$,

(we set $\phi_{-1}=0$). We have for every $\phi \in l^2 , \psi \in D$ \begin{equation} \sum_{j=0}^{\infty} \bar{\phi_j} [\sqrt{j+1} \psi_{j+1} + \sqrt{j} \psi_{j-1}] = \sum_{j=0}^{\infty} \psi_j \overline{ [\sqrt{j+1} \phi_{j+1} + \sqrt{j} \phi_{j-1}]}, \end{equation} where again we set $\phi_{-1}=\psi_{-1}=0$. So in particular, $X$ is a symmetric operator. But it is not self-adjoint. To see this, consider the vector $\phi$ whose $j$-th component is $\phi_j = (-1)^{\lfloor j/2 \rfloor} j^{-\beta}$ for $j \geq 1$ (with $\phi_0 = 0$, say), where $1/2 < \beta <1$. It is easy to see that $\phi \in l^2 \setminus D$, and that the vector whose $j$-th component is $\sqrt{j+1} \phi_{j+1} + \sqrt{j} \phi_{j-1}$ is in $l^2$ (the two terms have opposite signs and nearly cancel, leaving a quantity of order $j^{-1/2-\beta}$), so $\phi$ belongs to the domain of the adjoint. I conjectured that the domain of the adjoint $X^{*}$ is exactly the set of all vectors $\phi \in l^2$ such that $ \sum_{j=0}^{\infty} |\sqrt{j+1} \phi_{j+1} + \sqrt{j} \phi_{j-1} |^2 < \infty$. However, I could not prove it, and now I am starting to think that it is false. See also my related post "Null functional on $l^2$".
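The claims about the example vector can be probed numerically. A minimal NumPy sketch on a finite truncation (only suggestive, not a proof; the choice $\phi_0 = 0$ and the truncation size $N$ are assumptions, since $j^{-\beta}$ is undefined at $j=0$):

```python
import numpy as np

N = 20000
beta = 0.75  # any 1/2 < beta < 1

# phi_j = (-1)^floor(j/2) j^(-beta) for j >= 1; phi_0 = 0 by convention
j = np.arange(N, dtype=float)
phi = np.zeros(N)
phi[1:] = (-1.0) ** (np.arange(1, N) // 2) * j[1:] ** (-beta)

# (X phi)_j = sqrt(j+1) phi_{j+1} + sqrt(j) phi_{j-1}, truncated at N
Xphi = np.zeros(N)
Xphi[:-1] += np.sqrt(j[:-1] + 1.0) * phi[1:]
Xphi[1:] += np.sqrt(j[1:]) * phi[:-1]

l2_partial = np.cumsum(phi ** 2)          # stabilizes: phi is in l^2
weight_partial = np.cumsum(j * phi ** 2)  # keeps growing like sqrt(N): phi not in D
Xphi_partial = np.cumsum(Xphi ** 2)       # stabilizes: the sign cancellation
                                          # leaves terms of order j^(-1/2-beta)

# small dense truncation of X, symmetric by construction
n_small = 8
sq_small = np.sqrt(np.arange(1.0, n_small))
M = np.diag(sq_small, 1) + np.diag(sq_small, -1)
```

Comparing the partial sums at $N/2$ and $N$ shows $\sum |\phi_j|^2$ and $\sum |(X\phi)_j|^2$ flattening out while $\sum j|\phi_j|^2$ continues to grow.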

2 Answers


Your proposed domain for the adjoint $X^\star$ appears to me to be correct.

As defined, $$ (Xf,g) = \sum_{j=0}^{\infty}(\sqrt{j+1}f_{j+1}+\sqrt{j}f_{j-1})\overline{g_j}. $$ By definition of the adjoint, $g\in\mathcal{D}(X^{\star})$ iff there exists $h \in \ell^2$ such that the following holds for all $f \in \mathcal{D}(X)$: $$ \sum_{j=0}^{\infty}(\sqrt{j+1}f_{j+1}+\sqrt{j}f_{j-1})\overline{g_j}=(Xf,g) = (f,h) = \sum_{j=0}^{\infty}f_j\overline{h_j}. $$ In particular, it must hold for $f=(0,1,0,0,\cdots)$, which leads to $$ \overline{g_0}+\sqrt{2}\overline{g_2}=\overline{h_1} \implies h_1=g_0+\sqrt{2}g_2, $$ and it must hold for $f=(0,0,1,0,\cdots)$, which gives $$ \sqrt{2}\overline{g_1}+\sqrt{3}\overline{g_3}=\overline{h_2} \implies h_2= \sqrt{2}g_1+\sqrt{3}g_3. $$ So it is necessary that $h_j=\sqrt{j}g_{j-1}+\sqrt{j+1}g_{j+1}$, and it is necessary that $\sum_{j}|h_j|^2 < \infty$. So $g$ is in your proposed domain $\mathscr{D}$. Conversely, suppose $g\in\mathscr{D}$, and suppose $f \in \mathcal{D}(X)$. Then (with $g_{-1}=0$) $$ \sum_{j=0}^{\infty}(\sqrt{j+1}f_{j+1}+\sqrt{j}f_{j-1})\overline{g_j} -\sum_{j=0}^{\infty}f_j(\sqrt{j+1}\overline{g_{j+1}}+\sqrt{j}\overline{g_{j-1}}) = 0, $$ because you can rearrange the terms in the first sum without affecting convergence in order to match the terms in the second sum. (This is because $f\in\mathcal{D}(X)$ implies the absolute convergence of $\sum_{j=0}^{\infty}\sqrt{j+1}f_{j+1}\overline{g_{j}}$ and of $\sum_{j=0}^{\infty}\sqrt{j}f_{j-1}\overline{g_{j}}$.)
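The identity $(Xf,g)=(f,h)$ with $h_j=\sqrt{j}g_{j-1}+\sqrt{j+1}g_{j+1}$ can be sanity-checked numerically for a finitely supported $f$ (hence certainly in $\mathcal{D}(X)$). A sketch with arbitrary test vectors (the truncation size and the random seed are incidental choices):

```python
import numpy as np

rng = np.random.default_rng(0)
N = 50
jj = np.arange(N, dtype=float)

# finitely supported complex f (so f is in D(X)) and a decaying g
f = np.zeros(N, dtype=complex)
f[:10] = rng.normal(size=10) + 1j * rng.normal(size=10)
g = (rng.normal(size=N) + 1j * rng.normal(size=N)) / (1.0 + jj) ** 2

# (Xf)_j = sqrt(j+1) f_{j+1} + sqrt(j) f_{j-1}
Xf = np.zeros(N, dtype=complex)
Xf[:-1] += np.sqrt(jj[:-1] + 1.0) * f[1:]
Xf[1:] += np.sqrt(jj[1:]) * f[:-1]

# candidate adjoint image: h_j = sqrt(j) g_{j-1} + sqrt(j+1) g_{j+1}
h = np.zeros(N, dtype=complex)
h[1:] += np.sqrt(jj[1:]) * g[:-1]
h[:-1] += np.sqrt(jj[:-1] + 1.0) * g[1:]

# inner product linear in the first slot: (a, b) = sum a_j conj(b_j)
lhs = np.vdot(g, Xf)  # (Xf, g)
rhs = np.vdot(h, f)   # (f, h)
```

Since $f$ is supported on finitely many coordinates, both sums are finite and the two sides agree up to floating-point error.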

Disintegrating By Parts
  • For sure, your argument is correct, and I thank you very much for having clearly answered my question. Still I have a big doubt about this operator. Is the closure of $X$ equal to the adjoint $X^{*}$? Clearly, the domain of $\bar{X}$ is contained in the domain of $X^{*}$, but I can't show the reverse inclusion. Any help is welcome. – Maurizio Barbato Jan 27 '16 at 15:39
  • @MauryBarbato : Have you tried to solve $X^\star f= \pm i f$? If either has a non-trivial solution with $f \in \ell^2$, then $X$ cannot be essentially selfadjoint, meaning that the closure of $X$ is not $X^{\star}$. That actually resolves the problem one way or the other. I don't have much time at the moment, or I would try. – Disintegrating By Parts Jan 27 '16 at 18:30
  • I checked that the equation $X^{*}f= \pm i f$ has only the trivial solution, but I didn't understand why this should imply that the closure of $X$ is strictly contained in the adjoint $X^{*}$ of $X$. Could you give me some hint or some reference where I could learn this fact, please? Thank you very much in advance. – Maurizio Barbato Jan 27 '16 at 20:18
  • I have found the result you were referring to in Schmudgen, Unbounded Self-Adjoint Operators on Hilbert Space, Proposition 3.8. Thank you very much for your invaluable help. I wouldn't be able to answer these questions by myself! – Maurizio Barbato Jan 27 '16 at 20:36
  • @MauryBarbato : You're very welcome. I'm glad I could help. Knowing that $\mathcal{N}(X^{\star}\pm iI)=\{0\}$ means that $X^{\star}$ must be the closure of $X$ because $\mathcal{D}(X^{\star})=\mathcal{D}(X^c)\oplus\mathcal{N}(X^{\star}-iI)\oplus \mathcal{N}(X^{\star}+iI)$, where $X^c$ is the closure of a densely-defined symmetric operator. These null spaces are the deficiency spaces. – Disintegrating By Parts Jan 27 '16 at 21:13
  • @MauryBarbato : You get solutions of the equations for $X^{\star}f=\pm if$, but I could not tell off hand if the solutions were in $\ell^2$. Knowing there are no non-zero solutions then gives $X^c = X^{\star}$, which is the definition of $X$ being essentially selfadjoint because the closure $X^c$ must be selfadjoint when the deficiency spaces are trivial. – Disintegrating By Parts Jan 27 '16 at 21:26
  • Thank you very much for your explanation! Only I have some doubt about your statement that e.g. $X^{\star} f = if$ has some non-trivial solution. Actually, I wrote down this equation componentwise, starting from the 0-th component and it has only the zero vector solution (the equations for the first two components give $f_{0}=f_{1}=0$, the third then gives $f_{2}=0$, and so on). What did you mean when you said it has some other solution maybe not in $\ell^2$? Thank you very much again! – Maurizio Barbato Jan 28 '16 at 10:43
  • @MauryBarbato : The first equation is $\sqrt{0+1}f_{0+1}=if_{0}$, which leaves $f_{0}$ arbitrary, and $f_{1}=if_{0}$. The next equation is $\sqrt{2}f_{2}+\sqrt{1}f_{0}=if_{1}$, which gives $f_2 = \frac{1}{\sqrt{2}}(if_{1}-f_{0}) = -\sqrt{2}f_{0}$, etc. So all terms are expressed in terms of the arbitrary $f_{0}$. The only issue is whether or not the resulting $\{f_n\}$ is in $\ell^2$. – Disintegrating By Parts Jan 28 '16 at 11:13
  • Oops, you're right. I made a trivial mistake in writing down the equations. Now I see your point. It's not a trivial task to check whether the recursively defined sequence $f$ is in $\ell^{2}$! – Maurizio Barbato Jan 28 '16 at 11:45
  • As I pointed out in my answer to Maury's other question, these tri-diagonal symmetric operators are equivalent to Jacobi matrices. They are essentially self-adjoint if the off-diagonal elements grow no faster than $n$, roughly speaking (this can be used to prove Nelson's analytic vector theorem). There are Jacobi matrices which are not essentially self-adjoint, so any proof that works in general is suspect. – Keith McClary Feb 08 '16 at 17:30
  • Let $S$ be the linear subspace of all vectors in $\ell^2$ with only finitely many non-zero coordinates. By the result Keith quoted about Jacobi operators (see also Schmudgen, Unbounded Self-Adjoint Operators on Hilbert Space, Example 7.6), the restriction $P$ of $X$ to $S$ is essentially self-adjoint. So we have $P^{*}= \bar{P}=\bar{X}$. But since $(\bar{T})^{*}=T^{*}$ for every closable operator defined on a dense linear subspace, we get $X^{*}=(\bar{X})^{*}=(\bar{P})^{*}=P^{*}=\bar{X}$. So $X$ is essentially self-adjoint. – Maurizio Barbato Feb 09 '16 at 22:21
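The recursion from the comments is easy to iterate numerically. A sketch assuming the convention $(X^{*}f)_j=\sqrt{j+1}f_{j+1}+\sqrt{j}f_{j-1}=if_j$ and the normalization $f_0=1$; the partial $\ell^2$ sums of the formal solution blow up rapidly, which is consistent with (though of course does not prove) the conclusion that the deficiency spaces are trivial:

```python
import numpy as np

n_max = 2000
f = np.zeros(n_max, dtype=complex)
f[0] = 1.0          # arbitrary normalization
f[1] = 1j * f[0]    # j = 0: sqrt(1) f_1 = i f_0
for j in range(1, n_max - 1):
    # sqrt(j+1) f_{j+1} + sqrt(j) f_{j-1} = i f_j
    f[j + 1] = (1j * f[j] - np.sqrt(j) * f[j - 1]) / np.sqrt(j + 1)

norms = np.cumsum(np.abs(f) ** 2)  # partial l^2 norms of the formal solution
```

The terms grow roughly like $n^{-1/4}e^{\sqrt{2n}}$, so `norms` increases without any sign of convergence.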

It occurred to me that your problem has to do with creation and annihilation operators, via \begin{eqnarray*} X &=&U+V \\ a^{\ast } &=&V,\;a=U, \end{eqnarray*} see below.

Let $\mathcal{H}=l^{2}$ with elements $u=(u_{0},u_{1},u_{2},\cdots )$ and let $K$ be defined by \begin{eqnarray*} \mathcal{D}(K) &=&\mathcal{D},\;\mathcal{D}=\{u\in l^{2}\,|\,\sum_{j=0}^{\infty }j|u_{j}|^{2}<\infty \} \\ (Ku)_{j} &=&\sqrt{j}u_{j},\;u\in \mathcal{D}. \end{eqnarray*} $K$ is symmetric and non-negative on $\mathcal{D}$, and its null space consists of the elements $(u_{0},0,0,\cdots )$. In fact it is self-adjoint on $\mathcal{D}$: suppose $(Ku,v)=(u,f)$ for all $u\in\mathcal{D}$; then \begin{eqnarray*} \sum_{j=0}^{\infty }\sqrt{j}u_{j}\bar{v}_{j} &=&\sum_{j=1}^{\infty }\sqrt{j}u_{j}\bar{v}_{j}=\sum_{j=0}^{\infty }u_{j}\bar{f}_{j}\Rightarrow f_{j}=\sqrt{j}v_{j},\;j\neq 0, \\ j &=&0\Rightarrow 0=u_{0}\bar{f}_{0}\Rightarrow f_{0}=0, \end{eqnarray*} and since $f\in l^{2}$ this forces $\sum_{j}j|v_{j}|^{2}<\infty$, i.e. $v\in \mathcal{D}$. We introduce the scale of spaces \begin{equation*} \mathcal{H}_{k}=[K+i]^{-k}\mathcal{H}. \end{equation*} As a set $\mathcal{H}_{k}$ is dense in $\mathcal{H}$ and is itself a Hilbert space under the norm (or an equivalent one) \begin{equation*} \parallel f\parallel _{k}=\parallel [K+i]^{k}f\parallel _{\mathcal{H}},\;f\in \mathcal{H}_{k}. \end{equation*} Thus $\mathcal{H}=\mathcal{H}_{0}$ and $\mathcal{D}=\mathcal{H}_{1}$. For $u\in \mathcal{D}$ the operator $X$ is given by \begin{eqnarray*} (Xu)_{j} &=&\sqrt{j+1}u_{j+1}+\sqrt{j}u_{j-1}=(Uu)_{j}+(Vu)_{j},\;j>0,\;(Xu)_{0}=u_{1} \\ (Uu)_{j} &=&\sqrt{j+1}u_{j+1},\;(Vu)_{j}=\sqrt{j}u_{j-1},\;j>0,\;(Uu)_{0}=u_{1},\;(Vu)_{0}=0. \end{eqnarray*} $X$, $U$ and $V$ are bounded operators from $\mathcal{H}_{1}$ to $\mathcal{H}$. Next we show that $V=U^{\ast }$. We note that for $u,v\in \mathcal{D}$ \begin{equation*} (Uu,v)=\sum_{j=0}^{\infty }\sqrt{j+1}u_{j+1}\bar{v}_{j}=\sum_{j=1}^{\infty }u_{j}\sqrt{j}\bar{v}_{j-1}=(u,Vv), \end{equation*} so $V\subset U^{\ast }$.
Suppose now that $(Uu,f)=(u,g)$ for all $u\in\mathcal{D}$: \begin{eqnarray*} \sum_{j=0}^{\infty }\sqrt{j+1}u_{j+1}\bar{f}_{j} &=&\sum_{j=0}^{\infty }u_{j}\bar{g}_{j} \\ \sum_{j=1}^{\infty }u_{j}\sqrt{j}\bar{f}_{j-1} &=&\sum_{j=0}^{\infty }u_{j}\bar{g}_{j}. \end{eqnarray*} Choosing the coordinate vectors for $u$ we find \begin{eqnarray*} g_{0} &=&0 \\ g_{j} &=&\sqrt{j}f_{j-1},\;j>0, \end{eqnarray*} so $U^{\ast }=V$. For $u\in \mathcal{H}_{2}=[K+i]^{-2}\mathcal{H}$, writing $u=[K+i]^{-2}f$, \begin{eqnarray*} (U^{\ast }Uu)_{j} &=&(U^{\ast }U[K+i]^{-2}f)_{j}=\sqrt{j}(U[K+i]^{-2}f)_{j-1}=j([K+i]^{-2}f)_{j}=(K^{2}[K+i]^{-2}f)_{j},\;j\neq 0 \\ (U^{\ast }Uu)_{0} &=&0=(K^{2}[K+i]^{-2}f)_{0} \\ (UU^{\ast }u)_{j} &=&(UU^{\ast }[K+i]^{-2}f)_{j}=\sqrt{j+1}(U^{\ast }[K+i]^{-2}f)_{j+1}=(j+1)([K+i]^{-2}f)_{j}=((K^{2}+1)[K+i]^{-2}f)_{j},\;j>0 \\ (UU^{\ast }u)_{0} &=&(UU^{\ast }[K+i]^{-2}f)_{0}=(U^{\ast }[K+i]^{-2}f)_{1}=([K+i]^{-2}f)_{0} \\ \{UU^{\ast }-U^{\ast }U\}u &=&\{UU^{\ast }-U^{\ast }U\}[K+i]^{-2}f=[K+i]^{-2}f, \end{eqnarray*} so $UU^{\ast }-U^{\ast }U=[U,U^{\ast }]=1$ on $\mathcal{H}_{2}$, which extends to $\mathcal{H}$. Recall that creation and annihilation operators satisfy \begin{equation*} \lbrack a,a^{\ast }]=1, \end{equation*} so we can identify \begin{equation*} a=U,\;a^{\ast }=U^{\ast }=V. \end{equation*} Then $K^{2}=U^{\ast }U$ is the number operator.
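The relations $[U,U^{\ast}]=1$ and $U^{\ast}U=K^{2}$ can be checked on an $n\times n$ matrix truncation, where the commutator necessarily fails in the last diagonal entry (a truncation artifact, since the top row of the truncated $U$ is missing):

```python
import numpy as np

n = 50
sq = np.sqrt(np.arange(1.0, n))  # sqrt(1), ..., sqrt(n-1)

U = np.diag(sq, 1)   # (U u)_j = sqrt(j+1) u_{j+1}   (annihilation a)
V = U.T              # (V u)_j = sqrt(j)   u_{j-1}   (creation a*)

comm = U @ V - V @ U  # identity, except entry (n-1, n-1) = -(n-1)
number = V @ U        # U* U = K^2 = diag(0, 1, 2, ...)
```

Inside the truncation the commutator is exactly the identity, and $V U$ is exactly the number operator $\mathrm{diag}(0,1,2,\dots)$.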

Urgje