
In quantum mechanics, the Schmidt decomposition of a vector $\vert \psi\rangle \in \mathbb{C}^n\otimes \mathbb{C}^m$ is

$$ \vert \psi\rangle = \sum_{i=1}^{\min\{n,m\}} \lambda_i \vert e_i\rangle \otimes \vert f_i\rangle$$

where $\{\vert e_i\rangle\}$ is an orthonormal set of vectors in $\mathbb{C}^n$, $\{\vert f_i\rangle\}$ is an orthonormal set in $\mathbb{C}^m$, and each $\lambda_i\geq 0$.
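Numerically, the Schmidt decomposition is just the singular value decomposition (SVD) of the coefficient matrix $\Psi_{ij} = (\langle i \vert \otimes \langle j\vert)\vert\psi\rangle$. A minimal sketch in Python (the dimensions and the random state are arbitrary choices for illustration):

```python
import numpy as np

# The Schmidt decomposition of |psi> in C^n (x) C^m is the SVD of the
# n x m matrix of coefficients Psi[i, j] in the product basis |i>(x)|j>.
n, m = 3, 4
rng = np.random.default_rng(0)
psi = rng.normal(size=(n, m)) + 1j * rng.normal(size=(n, m))
psi /= np.linalg.norm(psi)              # normalize to a unit vector

U, lam, Vh = np.linalg.svd(psi)         # lam: the Schmidt coefficients

# Reconstruct |psi> = sum_i lam_i |e_i> (x) |f_i>,
# with |e_i> = U[:, i] and the components of |f_i> given by Vh[i, :].
recon = sum(l * np.outer(U[:, i], Vh[i]) for i, l in enumerate(lam))
assert np.allclose(recon, psi)
```

The singular values `lam` are automatically non-negative, and for a unit vector their squares sum to $1$, exactly as in the Schmidt decomposition above.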

Mercer's theorem states that for a continuous, symmetric, non-negative definite (positive semidefinite) kernel $K:[a,b]\times [a,b]\rightarrow \mathbb{R}$, there is an orthonormal sequence of functions $\{e_i\}_i$ in $L^2[a,b]$ and non-negative eigenvalues $\lambda_i$ such that

$$ K(s,t) = \sum_{i=1}^\infty \lambda_i e_i(s)e_i(t)$$
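The Mercer eigenvalues can be approximated by discretizing the kernel on a grid and diagonalizing the resulting matrix (a Nyström-type sketch). As an assumed example, take the Brownian-motion covariance kernel $K(s,t)=\min(s,t)$ on $[0,1]$, whose Mercer eigenvalues are known in closed form: $\lambda_k = \big((k-\tfrac12)\pi\big)^{-2}$.

```python
import numpy as np

# Nystrom-style sketch: discretize K(s,t) = min(s,t) on a midpoint grid,
# then the scaled matrix eigenvalues approximate the Mercer eigenvalues.
N = 1000
t = (np.arange(N) + 0.5) / N            # midpoint grid on [0, 1]
K = np.minimum.outer(t, t)              # K(s, t) = min(s, t)
w = 1.0 / N                             # midpoint quadrature weight
evals = np.linalg.eigvalsh(K)[::-1] * w # descending, approx lambda_k

# Closed-form Mercer eigenvalues for this kernel:
exact = 1.0 / (((np.arange(1, 6) - 0.5) * np.pi) ** 2)
print(evals[:5])    # first few are close to [0.405, 0.0450, 0.0162, ...]
```

The approximation error shrinks as the grid is refined; with $N=1000$ the leading eigenvalues already agree with the closed form to well under a percent.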

These look strikingly similar, and the proofs seem to be essentially identical if you use the right measure on $\mathbb{C}^n$. In quantum mechanics, entanglement is important, and the coefficients in the Schmidt decomposition quantify it: an unentangled state has only one non-zero $\lambda_i$, and we can even define the "entanglement entropy" as the Shannon entropy of $\{\lambda_i^2\}$ (which sums to $1$ if we start with a unit vector).
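Concretely, the entanglement entropy can be computed directly from the singular values; a toy sketch (the states below are arbitrary illustrative choices):

```python
import numpy as np

def entanglement_entropy(psi_matrix):
    """Shannon entropy of the squared Schmidt coefficients of a unit vector
    in C^n (x) C^m, represented as its n x m coefficient matrix."""
    lam = np.linalg.svd(psi_matrix, compute_uv=False)
    p = lam ** 2                  # sums to 1 for a unit vector
    p = p[p > 1e-12]              # drop numerical zeros
    return float(np.sum(-p * np.log(p)))

# A product (unentangled) state a (x) b has one non-zero Schmidt
# coefficient, so its entropy is ~0:
a = np.array([1.0, 0.0])
b = np.array([0.6, 0.8])
print(entanglement_entropy(np.outer(a, b)))   # ~0

# A maximally entangled two-qubit (Bell) state has entropy log 2:
bell = np.array([[1.0, 0.0], [0.0, 1.0]]) / np.sqrt(2)
print(entanglement_entropy(bell))             # log 2, about 0.693
```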

Is there a similar notion of "entanglement" for continuous functions? If only one $\lambda_i$ is non-zero, the kernel factors as a product $K(s,t)=\lambda_1 e_1(s)e_1(t)$, the analogue of an unentangled state, but I'm wondering whether this concept has a name and whether there are applications or theory around it.

Sam Jaques
    For example in probability theory, functions $K(s,t)$ that split into $A(s)B(t)$ are well-known as joint probability density functions of independent random variables. – Kurt G. Nov 03 '22 at 09:21
  • Right, so by extension: does it make sense to measure in this way how "entangled" two random variables are? Does it reduce to something like covariance? – Sam Jaques Nov 03 '22 at 11:07
  • The fundamental difference between quantum-theoretical entanglement and dependent or correlated random variables in classical probability theory is that when entanglement is present we cannot, even in principle, describe one of the two subsystems by its own state (wave function) in its own Hilbert space. Ordinary random variables can always be considered as isolated subsystems on their own, regardless of how correlated they are with other variables. Too much basic stuff to repeat here. Please google the relevant buzzwords such as Bell inequality, Einstein Podolsky Rosen (EPR), and so on. The recent Nobel Prize to Anton Zeilinger should be enough of a motivation. – Kurt G. Nov 04 '22 at 06:15
  • I think you've missed the point of my question. If we take the definition of entanglement entropy and apply it to the eigenvalues of a Mercer kernel, does that give us an interesting perspective on continuous functions? For example, if the continuous function is a probability density function, does the entanglement entropy give us covariance, mutual information, etc.? – Sam Jaques Nov 04 '22 at 08:35
  • We can and do define entanglement entropy in the way you described it. A few things to consider when you want to define this for $K(s,t)$: 1. (opinion-based) avoid calling this entanglement entropy, for the reasons I mentioned. 2. What happens when $\lambda_i$ and/or $e_i$ are negative? – Kurt G. Nov 04 '22 at 16:28
  • Mercer's theorem guarantees non-negative $\lambda_i$ for a non-negative definite $K$, right? I don't think it matters whether the $e_i$ can take negative values (I assume that's what you mean?). – Sam Jaques Nov 07 '22 at 10:22

0 Answers