In Riemannian geometry I often see some tensor being contracted with the metric $g$. Now I'm not entirely sure what is being meant by contracting using the metric and I cannot find a reasonable explanation of this. If I have a $(1,2)$-tensor $$A=A^j_{ik}dx^i \otimes \partial_j\otimes dx^k,$$ can I contract this using the metric in some way?
4 Answers
Suppose $V_1,\dots, V_k,W$ ($k\geq 2$) are (finite-dimensional) vector spaces over a field $\Bbb{F}$, fix two distinct indices $i,j\in\{1,\dots, k\}$, and suppose we have a bilinear map $\beta:V_i\times V_j\to W$. Then, there is a unique linear map \begin{align} C_{\beta,i,j}:V_1\otimes\cdots\otimes V_k\to W\otimes V_1\otimes \cdots\widehat{V_i}\otimes\cdots\otimes\widehat{V_j}\otimes\cdots\otimes V_k \end{align} (where the hats mean omission) such that the action on pure tensors is \begin{align} v_1\otimes\cdots\otimes v_k\mapsto\beta(v_i,v_j)\otimes v_1\otimes \cdots\widehat{v_i}\otimes\cdots\otimes\widehat{v_j}\otimes\cdots\otimes v_k. \end{align}
This map $C_{\beta,i,j}$ is called the $(i,j)$ trace/contraction on $V_1\otimes\cdots\otimes V_k$ relative to $\beta$. This is a very general definition, so let us now extract several special cases.
Example 1: $W=\Bbb{F}$
In this case, we have $C_{\beta,i,j}:V_1\otimes\cdots\otimes V_k\to V_1\otimes \cdots\widehat{V_i}\otimes\cdots\otimes\widehat{V_j}\otimes\cdots\otimes V_k $, and its action on pure tensors is \begin{align} v_1\otimes\cdots\otimes v_k\mapsto\beta(v_i,v_j)\cdot v_1\otimes \cdots\widehat{v_i}\otimes\cdots\otimes\widehat{v_j}\otimes\cdots\otimes v_k. \end{align} i.e., in this special case $\beta(v_i,v_j)$ is just a number, so we simply scale the tensor $v_1\otimes\cdots\otimes\widehat{v_i}\otimes\cdots\otimes\widehat{v_j}\otimes\cdots\otimes v_k$ by it.
Example 2: $k=2$
So, here we only have two vector spaces $V_1,V_2$ and a bilinear map $\beta:V_1\times V_2\to W$. In this case, the trace on $V_1\otimes V_2$ relative to $\beta$ is simply the unique linear map $V_1\otimes V_2\to W$ such that the action on pure tensors is \begin{align} v_1\otimes v_2\mapsto \beta(v_1,v_2). \end{align} In other words, it is the unique linear map given to us by the universal property of tensor products, whereby the bilinear map descends to a linear map out of the tensor product.
Example 2.1: $k=2$, $V_1,V_2$ are dual spaces, $W=\Bbb{F}$, $\beta$ is evaluation
This is perhaps the most important example of all. Let us call $V_2=V$ and suppose $V_1=V^*$. Then, there’s the super important bilinear map, given by evaluation $\beta=\text{ev}:V^*\times V\to\Bbb{F}$, $(\phi,v)\mapsto \phi(v)$. This gives us a unique linear map $V^*\otimes V\to\Bbb{F}$, and after using the canonical isomorphism $V^*\otimes V\cong \text{End}(V)$, this literally becomes the trace functional $\text{tr}:\text{End}(V)\to\Bbb{F}$ that we all learn in linear algebra (take the matrix of the operator and sum the diagonals).
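As a quick numerical sanity check (a sketch in Python/NumPy, not part of the original answer), the contraction of $V^*\otimes V$ by the evaluation pairing agrees with the matrix trace under $V^*\otimes V\cong\text{End}(V)$:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 3

# An element of V* ⊗ V, stored as T[i, j] = coefficient of eps^i ⊗ e_j.
T = rng.standard_normal((n, n))

# Contraction by the evaluation pairing ev(eps^i, e_j) = delta^i_j:
# the repeated index i in einsum sums the diagonal entries T[i, i].
contraction = np.einsum('ii->', T)

# Under V* ⊗ V ≅ End(V), this is exactly the matrix trace.
assert np.isclose(contraction, np.trace(T))
```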
Example 3: $k=3$
This is relevant to what you’re asking. Fix a pseudo-inner product space $(V,g)$. Now, define $(V_1,V_2,V_3)=(V^*,V,V^*)$. So, we now have a triple tensor product space $V^*\otimes V\otimes V^*$. There are several contractions we can take:
- Suppose we want to take the $(1,2)$-contraction. Then, we can use the above evaluation map $\beta_{1,2}=\text{ev}:V^*\times V\to\Bbb{R}$. The net result is the contraction map $C_{\text{ev},1,2}:V^*\otimes V\otimes V^*\to V^*$ given by $\phi\otimes v\otimes \psi\mapsto \text{ev}(\phi,v)\cdot\psi=\phi(v)\cdot \psi$. So, this contraction takes a $(1,2)$ tensor (an element of $V^*\otimes V\otimes V^*$) and produces a $(0,1)$ tensor (an element of $V^*$).
- We can similarly take the $(2,3)$-contraction $C_{\text{ev},2,3}:V^*\otimes V\otimes V^*\to V^*$, whose action on pure tensors is $\phi\otimes v\otimes \psi\mapsto \psi(v)\cdot \phi$. Note that although the $(1,2)$ and $(2,3)$ contractions have the same domain and target space, these are different maps (they differ by a flip of the two $V^*$’s on the domain).
These are the most obvious contractions, and so far we haven’t used the metric $g$ yet. Now, we can ask whether we can define a $(1,3)$-contraction. For this, we’re going to need a bilinear map $V^*\times V^*\to W$ for some vector space $W$. There is no natural such map lying around in general. However, because we have the metric $g:V\times V\to\Bbb{R}$, we can first of all set $W=\Bbb{R}$, and then use the musical isomorphism $g^{\flat}:V\to V^*$, $x\mapsto g(x,\cdot)$, to define a bilinear map $\beta:=\widetilde{g}:V^*\times V^*\to\Bbb{R}$. This is the ‘inverse metric’, with components $g^{ab}$ (see here for more details).
- With this in mind, we can define the $(1,3)$-contraction $C_{\widetilde{g},1,3}:V^*\otimes V\otimes V^*\to V$ whose action on pure tensors is $\phi\otimes v\otimes \psi\mapsto \widetilde{g}(\phi,\psi)\cdot v$.
So, in particular, if you fix a basis $\{e_1,\dots, e_n\}$ for $V$, and let $\{\epsilon^1,\dots,\epsilon^n\}$ be the dual basis, and you have a tensor $A\in V^*\otimes V\otimes V^*$ whose components are given as \begin{align} A=A_{i\,\,k}^{\,\,j}\cdot \epsilon^i\otimes e_j\otimes \epsilon^k, \end{align} then applying this $(1,3)$-contraction (which highly depends on the metric $g$, so it’s also called the metric contraction/trace over the first and third slots) gives \begin{align} C_{\widetilde{g},1,3}(A)&= C_{\widetilde{g},1,3}(A_{i\,\,k}^{\,\,j}\cdot \epsilon^i\otimes e_j\otimes \epsilon^k)\\ &= A_{i\,\,k}^{\,\,j}\cdot C_{\widetilde{g},1,3}(\epsilon^i\otimes e_j\otimes \epsilon^k)\tag{$C_{\widetilde{g},1,3}$ is linear}\\ &= A_{i\,\,k}^{\,\,j}\cdot \widetilde{g}(\epsilon^i,\epsilon^k)\cdot e_j\\ &\equiv A_{i\,\,k}^{\,\,j}\cdot g^{ik}\cdot e_j. \end{align}
So far we’ve been discussing what happens at the level of vector spaces; if you do this at every tangent space of your manifold, you get the corresponding equation at the level of tensor fields: the tensor field $A= A_{i\,\,k}^{\,\,j} \,dx^i\otimes \partial_j\otimes dx^k$ is sent to the vector field $A_{i\,\,k}^{\,\,j} \cdot g^{ik}\,\partial_j$.
A somewhat more common notation for this is $\text{tr}_{g,1,3}$, instead of $C_{\widetilde{g},1,3}$, to mean the trace relative to $g$ over the first and third slots (even though technically speaking, it is the trace relative to the induced map $\widetilde{g}$… but saying all this just makes it more cumbersome, so we don’t).
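At the level of components, the metric contraction above is a single index sum; here is a minimal numerical sketch in Python/NumPy (the arrays are made-up sample data, not from the answer):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 3

# Components A[i, j, k] of a (1,2)-tensor at a point (lower i, upper j, lower k).
A = rng.standard_normal((n, n, n))

# A metric g_{ab}: symmetric positive definite, built as M M^T + n*I.
M = rng.standard_normal((n, n))
g = M @ M.T + n * np.eye(n)
g_inv = np.linalg.inv(g)  # the inverse metric g^{ab}

# (1,3) metric contraction: v^j = A[i, j, k] * g^{ik}, summed over i and k.
v = np.einsum('ijk,ik->j', A, g_inv)

# Cross-check against an explicit triple loop.
expected = np.zeros(n)
for i in range(n):
    for j in range(n):
        for k in range(n):
            expected[j] += A[i, j, k] * g_inv[i, k]
assert np.allclose(v, expected)
```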
- Thanks for the thorough explanation. Is the last part with $g$ related to "lowering" and "raising" indices? @peek-a-boo – 亞歷山大 Apr 02 '24 at 20:33
- @Jonathan yes, in traditional language, one says that you can only take the trace of one upper index together with one lower index. However, when you have two lower indices (coming from the $2$-part of the $(1,2)$ tensor), you need to raise one of the indices with the metric, and then you can take the trace by setting the two indices equal and summing. – peek-a-boo Apr 02 '24 at 21:36
- Sorry to revive this, but using this, wouldn't the Ricci tensor be the contraction of the $(1,3)$ curvature tensor $R$ by $C_{\beta, 1,4}$, where $\beta=\text{ev}:V^*\times V\to\Bbb{F}$ is the evaluation pairing? @peek-a-boo – 亞歷山大 Apr 18 '24 at 13:03
- @Jonathan the choice of $\beta$ is correct, but conventions differ so I can’t say if Ricci is/is not the $(1,4)$ contraction of Riemann. For my conventions, Ricci is the $(1,3)$-contraction. – peek-a-boo Apr 18 '24 at 16:10
- I write $R$ locally as $R = R^{l}{}_{ijk}\, dx^i \otimes dx^j \otimes dx^k \otimes \partial_l$, so I'm contracting the first $dx^i$ with the fourth slot $\partial_l$; would this correspond to $C_{\beta, 1,4}$? @peek-a-boo – 亞歷山大 Apr 18 '24 at 17:38
- Yes that’s right – peek-a-boo Apr 18 '24 at 22:44
Think of a $(1,2)$-tensor as a vector-valued bilinear form. Then $A(Y,\cdot,Z) = \sum A^j_{ik}Y^iZ^k\partial_j$. Define a $(0,3)$-tensor by $$B(X,Y,Z)=g\big(X,A(Y,\cdot,Z)\big).$$ In tensor notation, we have $$B_{i\ell k} = \sum g_{j\ell}A^j_{ik}.$$
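In components, $B_{i\ell k}=\sum g_{j\ell}A^j_{ik}$ just lowers the upper index against the metric; a hedged NumPy sketch (sample data, not from the answer):

```python
import numpy as np

rng = np.random.default_rng(2)
n = 3

A = rng.standard_normal((n, n, n))   # A[i, j, k] = A^j_{ik}
M = rng.standard_normal((n, n))
g = M @ M.T + n * np.eye(n)          # a symmetric positive definite metric g_{jl}

# Lower the upper index: B_{ilk} = g_{jl} A^j_{ik}, summed over j.
B = np.einsum('jl,ijk->ilk', g, A)

# Check one entry against the defining sum.
i, l, k = 0, 1, 2
assert np.isclose(B[i, l, k], sum(g[j, l] * A[i, j, k] for j in range(n)))
```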
First, consider only $2$-tensors. A covariant $2$-tensor on a vector space $T$ (e.g., tangent space) is a bilinear function $$B: T\times T \rightarrow \mathbb{R}.$$ It extends uniquely to a linear map $$ B: T\otimes T \rightarrow \mathbb{R}. $$ Therefore, $B$ is a function of contravariant $2$-tensors. Given any contravariant $2$-tensor $A \in T\otimes T$, the contraction of $B$ with $A$ is $B(A)$.
A Riemannian metric is a positive definite covariant $2$-tensor on $T$, $$ g: T\otimes T \rightarrow \mathbb{R}. $$ Since it is non-degenerate, it has an inverse, which is a contravariant $2$-tensor, $$ g^{-1} \in T\otimes T. $$ The contraction of a covariant $2$-tensor $B$ by the metric $g$ means $B(g^{-1})$.
If a basis of $T$ is $(\partial_1,\dots, \partial_n)$ and the dual basis is $(dx^1, \dots, dx^n)$, then we can write $B = b_{ij}\,dx^i\otimes dx^j.$ If $g = g_{ij}\,dx^i\otimes dx^j$, then its inverse is $g^{-1} = g^{ij}\,\partial_i\otimes\partial_j.$ Then the contraction of $B$ by $g$ is $$ b_{ij}g^{ij}. $$
If you have a more complicated tensor $M$ with at least one pair of lower indices, then the tensor can be viewed as a bilinear map $$ M: T\otimes T \rightarrow W,$$ where $W$ is a vector space (that might consist of tensors). Then the contraction of $M$ by $g$ is again $M(g^{-1})$. For specific examples, you can work out what this looks like with respect to a basis of $T$.
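The formula $b_{ij}g^{ij}$ is easy to check numerically; a sketch in Python/NumPy (made-up data):

```python
import numpy as np

rng = np.random.default_rng(3)
n = 4

B = rng.standard_normal((n, n))   # covariant 2-tensor b_{ij}
M = rng.standard_normal((n, n))
g = M @ M.T + n * np.eye(n)       # a symmetric positive definite metric g_{ij}
g_inv = np.linalg.inv(g)          # the inverse metric g^{ij}

# Contraction of B by the metric: b_{ij} g^{ij}, summed over both indices.
trace_g_B = np.einsum('ij,ij->', B, g_inv)

# Equivalently tr(g^{-1} B) as matrices, since g^{-1} is symmetric.
assert np.isclose(trace_g_B, np.trace(g_inv @ B))
```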
Contraction means replacing the first tensor sign by evaluation, $$dx^i\otimes \partial_j \to dx^i(\partial_j) = \delta^i_j,$$ yielding $A^i_{ik}\, dx^k$. The evaluation can also be written via the metric as the scalar product of the corresponding vectors: raising the index of $dx^i$ gives the vector $g^{il}\partial_l$, so $$dx^i(\partial_k) = g\big(g^{il}\partial_l, \partial_k\big) = g^{il}g_{lk} = \delta^i_k.$$
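Numerically, this evaluation contraction is just a diagonal sum over the first two slots; a quick NumPy sketch (sample data, not from the answer):

```python
import numpy as np

rng = np.random.default_rng(4)
n = 3

A = rng.standard_normal((n, n, n))   # A[i, j, k] = A^j_{ik}, as in the question

# Replace dx^i ⊗ ∂_j by dx^i(∂_j) = δ^i_j: the result has components A^i_{ik},
# i.e., sum over the repeated index i in the first two slots.
c = np.einsum('iik->k', A)

# Explicit sum for comparison.
expected = np.array([sum(A[i, i, k] for i in range(n)) for k in range(n)])
assert np.allclose(c, expected)
```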