
(Preamble) In the book Representation Theory: A First Course (Fulton, Harris), there is the following claim on page 165 (written as an observation) without a proof:

The eigenvalues $\alpha$ occurring in an irreducible representation of $\mathfrak{sl}_3(\mathbb{C})$ differ from one another by integral linear combinations of the vectors $L_i - L_j \in \mathfrak{h}^*$

Prior to this claim, there is a quick derivation, where $X \in \mathfrak{g}_\alpha$, $v \in V_\beta$, $H \in \mathfrak{h}$, and $\mathfrak{g}$ is any Lie algebra; here $\mathfrak{h}$ is the subspace of $\mathfrak{sl}_3(\mathbb{C})$ consisting of diagonal matrices, and $\alpha, \beta$ are eigenvalues (in this case linear functionals on $\mathfrak{h}$) s.t. $\mathfrak{g}_\alpha = \{X \in \mathfrak{g}\mid \forall H \in \mathfrak{h}:[H, X] = \alpha(H)X\}$ and $V_\beta = \{v \in V\mid \forall H \in \mathfrak{h}:Hv = \beta(H)v\}$, where $V$ is, to my knowledge, just a vector space over some algebraically closed field.

The given derivation is: $H(X(v)) = X(H(v)) + [H, X](v) = X(\beta(H)v) + (\alpha(H)X)(v) = (\alpha(H) + \beta(H))X(v)$ by the bracket identity. The author(s) then state that

We see from this that $X(v)$ is again an eigenvector for the action of $\mathfrak{h}$ with eigenvalue $\alpha + \beta$
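The derivation above can be checked with explicit matrices. Below is a minimal numerical sketch; the standard representation of $\mathfrak{sl}_3(\mathbb{C})$ on $\mathbb{C}^3$, the particular diagonal $H$, and the choice $X = E_{12}$ are my own illustrative assumptions, not anything specified in the book.

```python
import numpy as np

# H: a concrete diagonal traceless matrix in the Cartan subalgebra h.
H = np.diag([2.0, -1.0, -1.0])          # L_1(H)=2, L_2(H)=-1, L_3(H)=-1

# X = E_{12} spans the root space g_{L_1 - L_2}: [H, X] = (L_1 - L_2)(H) X.
X = np.zeros((3, 3)); X[0, 1] = 1.0
alpha_H = 2.0 - (-1.0)                  # (L_1 - L_2)(H) = L_1(H) - L_2(H)
assert np.allclose(H @ X - X @ H, alpha_H * X)

# v = e_2 is a weight vector with weight beta = L_2, i.e. Hv = L_2(H) v.
v = np.array([0.0, 1.0, 0.0])
beta_H = -1.0
assert np.allclose(H @ v, beta_H * v)

# The derivation: H(Xv) = X(Hv) + [H, X]v = (alpha(H) + beta(H)) Xv,
# so Xv is again an eigenvector, with eigenvalue shifted by the root.
Xv = X @ v
assert np.allclose(H @ Xv, (alpha_H + beta_H) * Xv)
print("X shifts the weight from L_2 to L_2 + (L_1 - L_2) = L_1")
```

Here $Xv = e_1$, a weight vector of weight $L_1 = L_2 + (L_1 - L_2)$, exactly as the bracket identity predicts.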

(Main question) It is perfectly clear to me that, given any single eigenvalue $\beta$, we may jump to some other eigenvalues by combining different roots $\alpha = L_i - L_j$ for some $i, j$. What is unclear to me is the following: how do we know that we can jump from any eigenvalue $\beta_1$ to any other eigenvalue $\beta_2$ with the $L_i - L_j$s?

Wasradin
  • Well, since the representation is irreducible, for any vector the subrepresentation generated by that vector is the representation itself. So you can reach any vector in your representation by applications of elements of $\mathfrak{g}$. – David Melo Apr 02 '22 at 19:43
  • @DavidMelo Could you further elaborate this? I get that if $\beta_0,\dots,\beta_n$ are the eigenvalues in the general setting $\mathfrak{sl}_n(\mathbb{C})$ and $V$ is the irreducible representation, then the action by $H$ sends the subrepresentation $V_{\sum_{i=0}^n c_i\beta_i}$, $\forall i: c_i \in \mathbb{Z}$, to $V$, i.e. $H.V_{\sum_{i=0}^n c_i\beta_i} = V$ (where I just took the general version of $V_{\beta_i - \beta_j}$, the linear combination of all $\beta$s). But how does it follow from this that we may reach any eigenvalue $\beta_i$ from $\beta_j$? – Wasradin Apr 02 '22 at 19:53

2 Answers

3

I expressed my problems with the other answer in comments there. But of course the basic idea is right: If the weight spaces are not "connected" in the sense that you can get from any weight to any other through a combination of roots, then the "connected components" of the weights would give non-trivial subrepresentations. I'd formalize this as follows:

Let $\mathfrak g$ be any complex semisimple Lie algebra, $\mathfrak h$ a Cartan subalgebra, $V$ a finite-dimensional representation of $\mathfrak g$. Then $V = \bigoplus_{\lambda \in P(V)} V_\lambda$ (all $V_\lambda \neq 0$) as $\mathfrak h$-modules, where $P(V)$ is a finite set of weights $\lambda: \mathfrak h \rightarrow \mathbb C$.

Let $Q$ be the $\mathbb Z$-span of all roots (or equivalently, of a set of simple roots) $\alpha$ of $\mathfrak g$ with respect to $\mathfrak h$.

We agree that if $X \in \mathfrak g_\alpha$, then $X$ induces maps $V_\lambda \rightarrow V_{\lambda + \alpha}$ for all $\lambda$.

In particular, for any given $w\in P(V)$, the subspace

$$\bigoplus_{\lambda \in P(V) \cap (w +Q)} V_\lambda \subseteq V$$

is stable under the action of each $X_\alpha$ as well as $\mathfrak h$, and hence all of $\mathfrak g$; i.e. it is a nonzero subrepresentation. So if $V$ is irreducible ...
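The coset argument can also be checked numerically for small examples. Below is a minimal Python sketch; the encoding is my own (each $L_i$ written as the $i$-th standard basis vector of $\mathbb{Z}^3$, ignoring the relation $L_1+L_2+L_3=0$, which is harmless when only differences are compared). For $A_2$ in these coordinates, the root lattice $Q$ is exactly the set of integer vectors whose coordinates sum to $0$.

```python
from itertools import combinations

def in_root_lattice(d):
    # Q = Z-span of the L_i - L_j = integer vectors with coordinate sum 0.
    return all(isinstance(x, int) for x in d) and sum(d) == 0

# Weights of the standard representation of sl_3: L_1, L_2, L_3.
standard = [(1, 0, 0), (0, 1, 0), (0, 0, 1)]

# Weights of the adjoint representation: the six roots L_i - L_j,
# plus the zero weight (which occurs with multiplicity two).
adjoint = [(1, -1, 0), (1, 0, -1), (0, 1, -1),
           (-1, 1, 0), (-1, 0, 1), (0, -1, 1), (0, 0, 0)]

# Every pairwise weight difference lies in Q, i.e. P(V) sits in one coset w + Q.
for weights in (standard, adjoint):
    for lam, mu in combinations(weights, 2):
        diff = tuple(a - b for a, b in zip(lam, mu))
        assert in_root_lattice(diff)
print("all weight differences lie in the root lattice Q")
```

So for these two representations, $P(V)$ is contained in a single coset $w + Q$, which is what the irreducibility argument guarantees in general.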


The underlying problem with the "element-wise" approach discussed in the other answer, I think, is that to do it that way, one needs more subtle considerations about whether those maps $V_\lambda \rightarrow V_{\lambda+\alpha}$ are injective / surjective / ..., which is doable but already in the case of the adjoint representation needs more effort ($\mathfrak{sl}_2$-triples etc.) than one wants to invest at this point.

1

Let $\alpha_i \in \mathfrak{h}^*$, $i \in [\ell]$, denote your roots (i.e. eigenvalues of $[\mathfrak{h},\cdot]$). Let $V$ be an irreducible representation and $v,w \in V$. Since $V$ is irreducible, the representation done through $\mathfrak{g}\cdot v$ is equal to $V$, and therefore includes $w$. Therefore there exists some element in $\mathfrak{g}$ sending $v$ to $w$; call this element $X$. This element $X$ is in the sum of some eigenspaces, i.e. $[H,X] = \sum_j \alpha_{i_j}(H)X$. Therefore if $Hv = \beta_1(H)v$ and $Hw=\beta_2(H)w$ we have: $$ Xv = w \Rightarrow H(Xv) = Hw \Rightarrow \left(\sum_j \alpha_{i_j}(H)+\beta_1(H)\right)(Xv) = \beta_2(H)w \\\Rightarrow (\beta_2-\beta_1)(H) = \sum_{j} \alpha_{i_j}(H)$$

$V$ is a $\mathfrak{g}$-module (i.e., a vector space over a field with an action of $\mathfrak{g}$)

David Melo
  • It is not true that for any $v,w$ in an irreducible $\mathfrak g$-representation there exists $X \in \mathfrak g$ such that $X \cdot v =w$. Look e.g. at the adjoint rep and an element $0 \neq v \in \mathfrak h$, there is not even an $X \in \mathfrak g$ such that $X \cdot v =v$. Remark that "the representation done through $\mathfrak g \cdot v$" should mean the vector space spanned by all $v, X\cdot v$, not all of whose elements are just translates of $v$ acted upon by $\mathfrak g$. – Torsten Schoeneberg Apr 03 '22 at 00:58
  • @TorstenSchoeneberg How should this answer then be modified? – Wasradin Apr 03 '22 at 06:37
  • @SickSeries There is some series $X_1, \dots, X_n$ such that $X_1 \cdot (\cdots (X_n \cdot v)\cdots)=w$. Not every pair of weights is exactly $L_i - L_j$ apart. The key is that every distance between weights is an integral linear combination of $L_i - L_j$. – Callum Apr 03 '22 at 07:30
  • @Callum We know that each of the elements of the series $X_1,\dots,X_n$ can be written as a linear combination of elements of the eigenspaces of $[H,.]$. If we then manipulate the equality as David did, then won't we have $(\beta_2 - \beta_1)w$ on the LHS and some linear combination of the intermediary vectors $v, v_1,\dots,v_k$ (with $v_{k+1} = w$) with the various eigenvalues $\alpha_{i_j}$ as coefficients on the RHS? How could we conclude from this that the difference $\beta_2 - \beta_1$ is some integral linear combination of the eigenvalues? – Wasradin Apr 03 '22 at 13:52
  • @SickSeries all we would need to do to rescue this proof is repeatedly apply it (We can even assume the $X_i$ are root vectors rather than linear combinations for simplicity). If the difference $\beta_2 - \beta_1$ and $\beta_3 - \beta_2$ are both integral linear combinations of roots then so is $\beta_3 - \beta_1$ and so on. – Callum Apr 03 '22 at 15:21
  • @Callum: But is the existence of such a chain easily provable without using what we want to prove? I see it a posteriori from the structure of highest weight modules or something similar, but is it immediate just from the representation being irreducible? Maybe I'm just overlooking a simple argument? – Torsten Schoeneberg Apr 03 '22 at 18:05
  • Yep, I realized my mistake, too much time working with the universal enveloping algebra instead. The result that there is such a chain follows directly from simplicity and the structure of a representation generated by a single element, though given the results being shown I'd say that structure is not at all clear at that point of the study. – David Melo Apr 03 '22 at 21:39
  • @TorstenSchoeneberg Is it not simply true that such a chain must exist or the repeated action of $\mathfrak{g}$ on $v$ will generate a submodule of $V$ not containing $w$ violating irreducibility? Perhaps we can't so quickly assume that the chain must be finite. – Callum Apr 04 '22 at 00:31
  • @Callum: My problem is with the phrase "will generate a submodule ...". It is clear that for a given $v \in V$, the set of all vectors that can be reached from that $v$ via finitely many applications of elements from $\mathfrak g$ is obviously stable under the $\mathfrak g$-action, and closed under scalar multiples, but I do not see how it is immediately closed under sums, hence would equal the submodule it generates. But then I can only claim that each $w \in V$ can be written as a sum of elements which can be reached via such chains, and the calculation does not go through. – Torsten Schoeneberg Apr 04 '22 at 21:18