
Is it possible to get an analogue of the matrix representation of a linear operator on a finite-dimensional space for general linear operators on Banach spaces? The matrix representation looks like a "sum of functionals", so perhaps something like integral operators with certain kernels?

I think I did an exam question that looked like this, but I can't remember the formulation or find the exam.

Furthermore, assuming I remember correctly, what is the widest class of operators for which this is possible?

Found a possible duplicate: Can all continuous linear operators on a function space be represented using integrals?

user123124
  • Sum of functionals is trivially possible if the range of your operator is finite-dimensional. If it has a countably infinite (Hamel) basis, the same holds. Otherwise, how do you define the sum of an infinite set of functionals? You need some kind of convergence/continuity at least. – Jyrki Lahtonen Sep 12 '16 at 06:10

1 Answer


The first linear operators Hilbert dealt with were matrix operators on $\ell^2$, and he started by studying such an operator $L$ in terms of its matrix $[L]$ with entries $[L]_{jk}=\langle Le_k,e_j\rangle$,
$$ \left[\begin{array}{cccc} \langle Le_1,e_1\rangle & \langle Le_2,e_1\rangle & \langle Le_3,e_1\rangle & \cdots \\ \langle Le_1,e_2\rangle & \langle Le_2,e_2\rangle & \langle Le_3,e_2\rangle & \cdots \\ \langle Le_1,e_3\rangle & \langle Le_2,e_3\rangle & \langle Le_3,e_3\rangle & \cdots \\ \vdots & \vdots & \vdots & \ddots \end{array}\right]. $$
Working in $\ell^2$ with the standard basis $\{ e_j\}$, you can see that
$$ [L]\left[\begin{array}{c}a_1 \\ a_2 \\ a_3 \\ 0 \\ 0 \\ 0 \\ \vdots \end{array}\right] $$
gives you a column vector whose entries are the coordinates of
$$ a_1 Le_1 + a_2 Le_2 + a_3 Le_3. $$
The problem with this approach is getting everything to converge and make sense. So the earliest class of such operators, known as the Hilbert-Schmidt class, required
$$ \sum_{j,k} |\langle Le_j,e_k\rangle|^2 < \infty. $$
(Schmidt was a student of Hilbert.) This may be one of the more general classes where matrices are well suited to studying the corresponding operators: you have a trace, and all kinds of nice properties.
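
For a quick illustration of the condition (with a matrix chosen just for the example), take entries $\langle Le_k,e_j\rangle = \frac{1}{jk}$. Then
$$ \sum_{j,k}\left|\frac{1}{jk}\right|^2 = \left(\sum_{j=1}^{\infty}\frac{1}{j^2}\right)^{2} = \frac{\pi^4}{36} < \infty, $$
so this matrix defines a Hilbert-Schmidt operator; it is the rank-one operator $x\mapsto\langle x,v\rangle v$ with $v=(1,\tfrac12,\tfrac13,\dots)$. By contrast, the identity operator has matrix $\delta_{jk}$ and $\sum_{j,k}|\delta_{jk}|^2=\infty$, so a bounded operator need not be Hilbert-Schmidt.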

For general operators, and especially unbounded ones such as differential operators, the problems were immediate. John von Neumann noted that you could consider $L=\frac{d^2}{dx^2}$, whose domain $\mathcal{D}(L)\subset L^2[0,1]$ consists of the twice absolutely continuous functions $f$ with $f''\in L^2[0,1]$ and $f(0)=f(1)=0$, and that, judging from the matrix alone, you could not tell that operator apart from the operator $L'=\frac{d^2}{dx^2}$ where no endpoint conditions are imposed. So the operators would not be in one-to-one correspondence with the matrices. This ultimately led von Neumann to adopt the more abstract view of a linear operator, which Fredholm had given some time earlier. The correspondence and convergence issues disappeared when dealing directly with linear operators instead. Von Neumann started the study of closed operators, those whose graph is a closed subspace of the product space, which turned out to be very fruitful and would not have worked out well for matrices.
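
As a sketch of why the matrix cannot see the endpoint conditions (using the sine basis as an illustrative choice): the orthonormal basis $e_n(x)=\sqrt{2}\sin(n\pi x)$ of $L^2[0,1]$ satisfies $e_n(0)=e_n(1)=0$, so every $e_n$ lies in both $\mathcal{D}(L)$ and $\mathcal{D}(L')$, and
$$ \langle Le_n,e_m\rangle = \langle L'e_n,e_m\rangle = -(n\pi)^2\,\delta_{nm}, $$
so the two operators have exactly the same matrix even though their domains, and hence the operators themselves, are different.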

Disintegrating By Parts