2

Some definitions:

A Lie monomial in the elements of a set $X$ is a finite product of elements of $X$ bracketed by Lie brackets in any manner, e.g. $[[[x_3,[x_1,x_2]],x_3],[x_2,[x_1,x_1]]]$.

A simpler Lie monomial is a Lie monomial bracketed successively from left to right, that is, a Lie monomial of the form $[\cdots[[[x_1,x_2],x_3],x_4],\cdots,x_k]$, e.g. $[[[x_1,x_2],x_3],x_4]$.

The length of a Lie monomial is the number of elements bracketed in it, e.g. $[[[x_1,x_2],x_3],x_4]$ is of length 4.

I am trying to prove the following statement:

Any Lie monomial lies in the span of the simpler Lie monomials.

I found this statement in Reutenauer's book on free Lie algebras, although he doesn't give a proof. Here is what I have done so far:

We proceed by induction on the length $|w|$ of a Lie monomial $w$. If $|w|=2,$ we have $w=[x,x']$ and there is nothing to prove. Assume the result holds for all Lie monomials of length at most $k$ (for some $k\geq 2$), and consider a monomial $w$ of length $|w|=k+1.$ Since now $|w|\geq 3,$ we may write $w=[w_1,w_2],$ where $1\leq |w_1|\leq k$ and $|w_2| = k+1 - |w_1|.$ Since both $w_1$ and $w_2$ have length at most $k$, they are covered by the induction hypothesis, so (using bilinearity of the bracket) we may assume that both $w_1,w_2$ are simpler Lie monomials. Notice that if $|w_2| =1,$ we are done. So we may assume that $|w_2|>1$ and write $w_2 = [w_2',x]$, where $w_2'$ is also a simpler Lie monomial of length $|w_2'| = |w_2|-1 = k-|w_1|$. Now we use the Jacobi identity:

$$w = [w_1,[w_2',x]] = [[w_1,w_2'],x]+[w_2',[w_1,x]].$$

Here comes the problem. The induction hypothesis guarantees that $[w_1,w_2']$ lies in the span of simpler Lie monomials, since its length is $|w_1|+|w_2'| = k$; bracketing with $x$ on the right, $[[w_1,w_2'],x]$ therefore also lies in that span. But what about $[w_2',[w_1,x]]$? Of course $[w_1,x]$ is a simpler Lie monomial. But I cannot ensure that the bracket $[w_2',[w_1,x]]$ remains a simpler Lie monomial. Also, its length is $k+1$, which is not covered by the induction hypothesis. What now?

Any help on finding a path to prove this? Thank you.

user2345678
  • What is exactly the difference between a monomial and a "simpler" monomial? – Joca Ramiro Feb 04 '19 at 18:32
  • You can always write everything in a basis of the Lie algebra. Am I missing some point here? – Ennar Feb 04 '19 at 18:36
  • @jobe For example, $[[e_1,e_2],[e_3,e_4]]$ is a lie monomial, while $[[[e_1,e_2],e_3],e_4]$ is a simpler lie monomial – user2345678 Feb 04 '19 at 18:39
  • @Ennar yes, the elements of $X$ is the basis for the generated Lie algebra. But the algebra generated is the set of all linear combinations of Lie monomials, right? What I am trying to prove is a refinement of this statement, that is, instead of linear combination of maybe more "random" elements, in fact it is a linear combination of simpler elements. – user2345678 Feb 04 '19 at 18:41
  • Oh, ok, I thought $X$ was the whole algebra, not a set of generators. – Ennar Feb 04 '19 at 18:43
  • I assume that "simpler" means "left-associative" in this case. This should be in any text that studies free Lie algebras, such as the first volume (Chapter 2) of Bourbaki's Lie groups and Lie algebras or Reutenauer's Free Lie algebras. But I have to go. – darij grinberg Feb 04 '19 at 19:06

2 Answers

2

1. Definitions

$\newcommand{\kk}{\mathbf{k}}$ First, let me restate your notation and definitions.

Fix a Lie algebra $L$ over any commutative ring $\mathbf{k}$. The word "span" shall always mean "$\mathbf{k}$-linear span" from now on. If $U$ is a subset of $L$, then $\kk U$ shall denote the span of $U$.

If $U$ and $V$ are two subsets of $L$, then $\left[U, V\right]_0$ shall mean the set of all Lie brackets $\left[u, v\right]$ with $u \in U$ and $v \in V$. This is a subset of $L$.

If $U$ and $V$ are two $\mathbf{k}$-submodules of $L$, then $\left[U, V\right]$ shall mean the span of the set $\left[U, V\right]_0$. This is a $\mathbf{k}$-submodule of $L$ and contains $\left[U, V\right]_0$ as a subset (but is, in general, greater).

Let $X$ be a subset of $L$. Let me restate the definition of Lie monomials as follows:

Definition 1. We define a sequence $\left(B_1, B_2, B_3, \ldots\right)$ of subsets of $L$ recursively as follows: We set $B_1 = X$; then, for each $n > 1$, we set $B_n = \bigcup\limits_{a + b = n} \left[B_a, B_b\right]_0$, where the $a$ and $b$ in the big union are meant to range over positive integers.

Thus, $B_1 = X$ and $B_2 = \left[B_1, B_1\right]_0 = \left[X, X\right]_0$ and $B_3 = \left[B_1, B_2\right]_0 \cup \left[B_2, B_1\right]_0 = \left[X, \left[X, X\right]_0\right]_0 \cup \left[\left[X, X\right]_0, X\right]_0$ and $B_4 = \left[B_1, B_3\right]_0 \cup \left[B_2, B_2\right]_0 \cup \left[B_3, B_1\right]_0$ and so on.

Definition 2. Set $B = B_1 \cup B_2 \cup B_3 \cup \cdots$. The elements of $B$ are called the Lie monomials in $X$. For each $n \geq 1$, the elements of $B_n$ are called the Lie monomials in $X$ of length $n$.
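For instance, to connect this with the example in the question (assuming $x_1, x_2, x_3 \in X$): the left factor $\left[\left[x_3,\left[x_1,x_2\right]\right],x_3\right]$ lies in $B_4$ and the right factor $\left[x_2,\left[x_1,x_1\right]\right]$ lies in $B_3$, so
$$\left[\left[\left[x_3,\left[x_1,x_2\right]\right],x_3\right],\left[x_2,\left[x_1,x_1\right]\right]\right] \in \left[B_4, B_3\right]_0 \subseteq B_7$$
is a Lie monomial in $X$ of length $7$.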

Next, I shall define what you call "simpler monomials", but I will call them "left-bracketed Lie monomials" instead:

Definition 3. We define a sequence $\left(C_1, C_2, C_3, \ldots\right)$ of subsets of $L$ recursively as follows: We set $C_1 = X$; then, for each $n > 1$, we set $C_n = \left[C_{n-1}, X\right]_0$.

Thus, $C_1 = X$ and $C_2 = \left[C_1, X\right]_0 = \left[X, X\right]_0$ and $C_3 = \left[C_2, X\right]_0 = \left[\left[X, X\right]_0, X\right]_0$ and $C_4 = \left[C_3, X\right]_0 = \left[\left[\left[X, X\right]_0, X\right]_0, X\right]_0$ and so on.

Definition 4. Set $C = C_1 \cup C_2 \cup C_3 \cup \cdots$. The elements of $C$ are called the left-bracketed Lie monomials in $X$.
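For instance, the "simpler" monomial $\left[\left[\left[x_1,x_2\right],x_3\right],x_4\right]$ from the question lies in $C_4$ (again assuming $x_1,\ldots,x_4 \in X$): indeed, $\left[x_1,x_2\right] \in C_2$, hence $\left[\left[x_1,x_2\right],x_3\right] \in \left[C_2,X\right]_0 = C_3$, hence $\left[\left[\left[x_1,x_2\right],x_3\right],x_4\right] \in \left[C_3,X\right]_0 = C_4$.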

2. The claim and the proof outline

Now, your claim is the following:

Theorem 1. We have $B \subseteq \kk C$.
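Before giving the proof, here is the smallest nontrivial instance, just to fix ideas (a quick check using only antisymmetry and the Jacobi identity, assuming $x_1, x_2, x_3 \in X$): the Lie monomial $\left[x_1,\left[x_2,x_3\right]\right] \in B_3$ satisfies
$$\left[x_1,\left[x_2,x_3\right]\right] = \left[\left[x_1,x_2\right],x_3\right] - \left[\left[x_1,x_3\right],x_2\right] \in \kk C_3 \subseteq \kk C .$$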

The proof is easiest made using the following definition:

Definition 5. We define a sequence $\left(L_1, L_2, L_3, \ldots\right)$ of $\mathbf{k}$-submodules of $L$ recursively as follows: We set $L_1 = \kk X$; then, for each $n > 1$, we set $L_n = \left[L_{n-1}, \kk X\right]$.

Thus, $L_1 = \kk X$ and $L_2 = \left[L_1, \kk X\right] = \left[\kk X, \kk X\right]$ and $L_3 = \left[L_2, \kk X\right] = \left[\left[\kk X, \kk X\right], \kk X\right]$ and so on.

Now, let me split Theorem 1 into the following bite-sized pieces:

Proposition 2. We have $L_n = \kk C_n$ for each $n \geq 1$.

Proposition 3. We have $\left[L_a, L_b\right] \subseteq L_{a+b}$ for any $a \geq 1$ and $b \geq 1$.

Proposition 4. We have $B_n \subseteq L_n$ for each $n \geq 1$.

See below for the detailed proofs of these three propositions as well as the derivation of Theorem 1 from them. But first, here are hints that should suffice if you have any experience with the Lie algebra axioms:

Proposition 2 is proven by straightforward induction on $n$.

To prove Proposition 3, we proceed by induction on $a$. In the induction step, we assume that Proposition 3 is true for $a-1$, and intend to prove it for $a$. It suffices to show that $\left[\left[x, y\right], z\right] \in L_{a+b}$ for all $x \in L_{a-1}$, $y \in X$ and $z \in L_b$ (because $L_a = \left[L_{a-1}, \kk X\right]$ is spanned by elements of the form $\left[x, y\right]$ with $x \in L_{a-1}$ and $y \in X$). But this follows by applying the Jacobi identity \begin{align} \left[\left[x, y\right], z\right] = \left[\left[x, z\right], y\right] - \left[x, \left[z, y\right]\right] \end{align} and realizing that both addends $\left[\left[x, z\right], y\right]$ and $- \left[x, \left[z, y\right]\right]$ on the right hand side belong to $L_{a+b}$ (indeed, we have $\left[x, z\right] \in \left[L_{a-1}, L_b\right] \subseteq L_{a+b-1}$ (by the induction hypothesis) and thus $\left[\left[x, z\right], y\right] \in \left[L_{a+b-1}, \kk X\right] = L_{a+b}$ (by the definition of $L_{a+b}$), and we also have $\left[z, y\right] \in \left[L_b, \kk X\right] = L_{b+1}$ (by the definition of $L_{b+1}$) and therefore $\left[x, \left[z, y\right]\right] \in \left[L_{a-1}, L_{b+1}\right] \subseteq L_{a+b}$ (by the induction hypothesis)). So Proposition 3 follows by induction.
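For a concrete illustration of this step (not needed for the proof), take $a = b = 2$, and let $x = x_1 \in L_1$, $y = x_2 \in X$ and $z = \left[x_3,x_4\right] \in L_2$ with $x_1,\ldots,x_4 \in X$. The identity above then gives
$$\left[\left[x_1,x_2\right],\left[x_3,x_4\right]\right] = \left[\left[x_1,\left[x_3,x_4\right]\right],x_2\right] - \left[x_1,\left[\left[x_3,x_4\right],x_2\right]\right],$$
where the first addend lies in $\left[L_3,\kk X\right] = L_4$ (since $\left[x_1,\left[x_3,x_4\right]\right] \in \left[L_1,L_2\right] \subseteq L_3$) and the second addend lies in $\left[L_1,L_3\right] \subseteq L_4$ (since $\left[\left[x_3,x_4\right],x_2\right] \in \left[L_2,\kk X\right] = L_3$).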

Proposition 4 is proven by strong induction on $n$, using Proposition 3.

Combining Proposition 2 with Proposition 4, we obtain $B_n \subseteq L_n = \kk C_n$ for each $n \geq 1$. Thus, $B \subseteq \kk C$, so that Theorem 1 is proven.

3. Formal proofs

Let me prove these Propositions 2, 3 and 4 in detail, just to make sure everything is exactly as I claimed (you have most likely proven them yourself by now). This has turned out to be even duller than expected.

We will tacitly use the observation that if $U$ and $V$ are two subsets of $L$, and if $u\in U$ and $v\in V$, then $\left[ u,v\right] \in\left[ U,V\right] $. (Indeed, if $U$ and $V$ are two subsets of $L$, then the definition of $\left[ U,V\right] $ shows that $\left[ U,V\right] =\left( \text{the span of }\left[ U,V\right] _0 \right) =\kk \left( \left[ U,V\right] _0 \right) $. Thus, if $U$ and $V$ are two subsets of $L$, and if $u\in U$ and $v\in V$, then we have $\left[ u,v\right] \in\left[ U,V\right] _0 \subseteq\kk \left( \left[ U,V\right] _0 \right) =\left[ U,V\right] $.)

We shall use two simple lemmas:

Lemma 5. Let $S$ be a subset of $L$. Let $M$ be a $\kk$-submodule of $L$. If $S\subseteq M$, then $\kk S\subseteq M$.

Proof of Lemma 5. Recall that $\kk S$ is the span of $S$, and thus is the smallest $\kk$-submodule of $L$ that contains $S$ as a subset. Hence, any $\kk$-submodule of $L$ that contains $S$ as a subset must contain $\kk S$ as a subset. Applying this to the $\kk$-submodule $M$, we conclude that $M$ contains $\kk S$ as a subset if $M$ contains $S$ as a subset. In other words, if $S\subseteq M$, then $\kk S\subseteq M$. This proves Lemma 5. $\blacksquare$

Lemma 6. Let $U$ and $V$ be two subsets of $L$. Then, $\left[ \kk U,\kk V\right] =\kk \left( \left[ U,V\right] _0 \right) $.

Proof of Lemma 6. Recall that $\left[ \kk U,\kk V\right] $ is the span of the subset $\left[ \kk U,\kk V\right] _0 $ (by the definition of $\left[ \kk U,\kk V\right] $). In other words, $\left[ \kk U,\kk V\right] =\kk \left( \left[ \kk U,\kk V\right] _0 \right) $. Hence, $\left[ \kk U,\kk V\right] $ is a $\kk$-submodule of $L$ and satisfies $\left[ \kk U,\kk V\right] _0 \subseteq\left[ \kk U,\kk V\right] $.

Let $r\in\left[ \kk U,\kk V\right] _0 $ be arbitrary. We shall show that $r\in\kk \left( \left[ U,V\right] _0 \right) $.

Indeed, we have $r\in\left[ \kk U,\kk V\right] _0 $; in other words, $r=\left[ u,v\right] $ for some $u\in\kk U$ and some $v\in\kk V$ (by the definition of $\left[ \kk U,\kk V\right] _0 $). Consider these $u$ and $v$.

We have $u\in\kk U$. In other words, $u$ is a $\kk$-linear combination of the elements of $U$ (by the definition of the span $\kk U$). In other words, $u=\sum_{i\in I}\lambda_{i}u_{i}$ for some finite set $I$ and some families $\left( \lambda_{i}\right) _{i\in I} \in\kk ^{I}$ and $\left( u_{i}\right) _{i\in I}\in U^{I}$. Consider this $I$ and these families.

We have $v\in\kk V$. In other words, $v$ is a $\kk$-linear combination of the elements of $V$ (by the definition of the span $\kk V$). In other words, $v=\sum_{j\in J}\mu_{j}v_{j}$ for some finite set $J$ and some families $\left( \mu_{j}\right) _{j\in J}\in\kk ^{J}$ and $\left( v_{j}\right) _{j\in J}\in V^{J}$. Consider this $J$ and these families.

Now, \begin{align*} r & =\left[ u,v\right] =\left[ \sum_{i\in I}\lambda_{i}u_{i},\sum_{j\in J}\mu_{j}v_{j}\right] \qquad\left( \text{since }u=\sum_{i\in I}\lambda _{i}u_{i}\text{ and }v=\sum_{j\in J}\mu_{j}v_{j}\right) \\ & =\sum_{i\in I}\lambda_{i}\sum_{j\in J}\mu_{j}\left[ u_{i},v_{j}\right] \qquad\left( \text{since the Lie bracket is }\kk \text{-bilinear} \right) \\ & =\sum_{\left( i,j\right) \in I\times J}\lambda_{i}\mu_{j} \underbrace{\left[ u_{i},v_{j}\right] }_{\substack{\in\left[ U,V\right] _0 \\\text{(since }u_{i}\in U\text{ and }v_{j}\in V\text{)}}}\in\sum_{\left( i,j\right) \in I\times J}\lambda_{i}\mu_{j}\left[ U,V\right] _0 . \end{align*} Thus, $r$ is a $\kk$-linear combination of elements of $\left[ U,V\right] _0 $. In other words, $r$ belongs to the span of $\left[ U,V\right] _0 $. In other words, $r\in\kk \left( \left[ U,V\right] _0 \right) $ (because $\kk \left( \left[ U,V\right] _0 \right) $ is the span of $\left[ U,V\right] _0 $).

Now, forget that we fixed $r$. We thus have proven that $r\in\kk \left( \left[ U,V\right] _0 \right) $ for each $r\in\left[ \kk U,\kk V\right] _0 $. In other words, $\left[ \kk U,\kk V\right] _0 \subseteq\kk \left( \left[ U,V\right] _0 \right) $. Thus, Lemma 5 (applied to $M=\kk \left( \left[ U,V\right] _0 \right) $ and $S=\left[ \kk U,\kk V\right] _0 $) yields that $\kk \left( \left[ \kk U,\kk V\right] _0 \right) \subseteq\kk \left( \left[ U,V\right] _0 \right) $. Now, recall that $\left[ \kk U,\kk V\right] =\kk \left( \left[ \kk U,\kk V\right] _0 \right) \subseteq\kk \left( \left[ U,V\right] _0 \right) $.

On the other hand, let $q\in\left[ U,V\right] _0 $. We shall show that $q\in\left[ \kk U,\kk V\right] $.

In fact, we have $q\in\left[ U,V\right] _0 $; in other words, $q=\left[ \widetilde{u},\widetilde{v}\right] $ for some $\widetilde{u}\in U$ and some $\widetilde{v}\in V$ (by the definition of $\left[ U,V\right] _0 $). Consider these $\widetilde{u}$ and $\widetilde{v}$. From $\widetilde{u}\in U\subseteq\kk U$ and $\widetilde{v}\in V\subseteq\kk V$, we obtain $\left[ \widetilde{u},\widetilde{v}\right] \in\left[ \kk U,\kk V\right] $. Thus, $q=\left[ \widetilde{u},\widetilde{v}\right] \in\left[ \kk U,\kk V\right] $.

Forget that we fixed $q$. We thus have proven that $q\in\left[ \kk U,\kk V\right] $ for each $q\in\left[ U,V\right] _0 $. In other words, $\left[ U,V\right] _0 \subseteq\left[ \kk U,\kk V\right] $. Thus, Lemma 5 (applied to $M=\left[ \kk U,\kk V\right] $ and $S=\left[ U,V\right] _0 $) yields that $\kk \left( \left[ U,V\right] _0 \right) \subseteq\left[ \kk U,\kk V\right] $. Combining this with $\left[ \kk U,\kk V\right] \subseteq\kk \left( \left[ U,V\right] _0 \right) $, we obtain $\left[ \kk U,\kk V\right] =\kk \left( \left[ U,V\right] _0 \right) $. This proves Lemma 6. $\blacksquare$

Proof of Proposition 2. We shall prove Proposition 2 by induction on $n$:

Induction base: The definition of $C_{1}$ yields $C_{1}=X$. The definition of $L_{1}$ yields $L_{1}=\kk \underbrace{X}_{=C_{1}}=\kk C_{1}$. In other words, Proposition 2 holds for $n=1$. This completes the induction base.

Induction step: Fix a positive integer $m>1$. Assume that Proposition 2 holds for $n=m-1$. We must prove that Proposition 2 holds for $n=m$.

We have assumed that Proposition 2 holds for $n=m-1$. In other words, $L_{m-1}=\kk C_{m-1}$. The recursive definition of the $C_{n}$ yields $C_{m}=\left[ C_{m-1},X\right] _0 $. Hence, $\kk C_{m}=\kk \left( \left[ C_{m-1},X\right] _0 \right) $. But Lemma 6 (applied to $U=C_{m-1}$ and $V=X$) yields $\left[ \kk C_{m-1},\kk X\right] =\kk \left( \left[ C_{m-1},X\right] _0 \right) $. Comparing these two equalities, we obtain $\kk C_{m}=\left[ \kk C_{m-1} ,\kk X\right] $.

But the recursive definition of the $L_{n}$ yields $L_{m}=\left[ \underbrace{L_{m-1}}_{=\kk C_{m-1}},\kk X\right] =\left[ \kk C_{m-1},\kk X\right] $. Comparing these two equalities, we obtain $L_{m}=\kk C_{m}$. In other words, Proposition 2 holds for $n=m$. This completes the induction step. Thus, Proposition 2 is proven. $\blacksquare$

Proof of Proposition 3. We shall prove Proposition 3 by induction on $a$:

Induction base: Let $b\geq1$ be an integer. We shall prove that $\left[ L_{1},L_{b}\right] \subseteq L_{1+b}$.

Indeed, the recursive definition of the $L_{n}$ yields $L_{b+1}=\left[ L_{b},\kk X\right] $ and $L_{1}=\kk X$.

Let $r\in\left[ L_{1},L_{b}\right] _0 $ be arbitrary. We shall show that $r\in L_{b+1}$.

Indeed, we have $r\in\left[ L_{1},L_{b}\right] _0 $; in other words, $r=\left[ u,v\right] $ for some $u\in L_{1}$ and some $v\in L_{b}$ (by the definition of $\left[ L_{1},L_{b}\right] _0 $). Consider these $u$ and $v$.

We have $\left[ \underbrace{v}_{\in L_{b}},\underbrace{u}_{\in L_{1}}\right] \in\left[ L_{b},\underbrace{L_{1}}_{=\kk X}\right] =\left[ L_{b},\kk X\right] =L_{b+1}$ (since $L_{b+1}=\left[ L_{b} ,\kk X\right] $). But $r=\left[ u,v\right] =-\left[ v,u\right] $ (since the Lie bracket is antisymmetric). Hence, $r=-\underbrace{\left[ v,u\right] }_{\in L_{b+1}}\in-L_{b+1}\subseteq L_{b+1}$ (since $L_{b+1}$ is a $\kk$-submodule of $L$).

Now, forget that we fixed $r$. We thus have proven that $r\in L_{b+1}$ for each $r\in\left[ L_{1},L_{b}\right] _0 $. In other words, $\left[ L_{1},L_{b}\right] _0 \subseteq L_{b+1}$. Hence, Lemma 5 (applied to $M=L_{b+1}$ and $S=\left[ L_{1},L_{b}\right] _0 $) yields that $\kk \left( \left[ L_{1},L_{b}\right] _0 \right) \subseteq L_{b+1} $. But $\left[ L_{1},L_{b}\right] $ is the span of $\left[ L_{1} ,L_{b}\right] _0 $ (by the definition of $\left[ L_{1},L_{b}\right] $); in other words, $\left[ L_{1},L_{b}\right] =\kk \left( \left[ L_{1},L_{b}\right] _0 \right) $. Hence, $\left[ L_{1},L_{b}\right] =\kk \left( \left[ L_{1},L_{b}\right] _0 \right) \subseteq L_{b+1}=L_{1+b}$.

Now, forget that we fixed $b$. We thus have proven that $\left[ L_{1} ,L_{b}\right] \subseteq L_{1+b}$ for all $b\geq1$. In other words, Proposition 3 holds for $a=1$. This completes the induction base.

Induction step: Fix an integer $c>1$. Assume that Proposition 3 holds for $a=c-1$. We must prove that Proposition 3 holds for $a=c$.

Let $b\geq1$ be an integer. We shall show that $\left[ L_{c},L_{b}\right] \subseteq L_{c+b}$.

Proposition 2 (applied to $n=b$) yields $L_{b}=\kk C_{b}$. Proposition 2 (applied to $n=c$) yields $L_{c}=\kk C_{c}$. Now, \begin{align*} \left[ \underbrace{L_{c}}_{=\kk C_{c}},\underbrace{L_{b}} _{=\kk C_{b}}\right] =\left[ \kk C_{c},\kk C_{b}\right] =\kk \left( \left[ C_{c},C_{b}\right] _0 \right) \end{align*} (by Lemma 6, applied to $U=C_{c}$ and $V=C_{b}$).

Now, let $r\in\left[ C_{c},C_{b}\right] _0 $ be arbitrary. We shall show that $r\in C_{c+b}$.

We have $r\in\left[ C_{c},C_{b}\right] _0 $. In other words, $r=\left[ u,z\right] $ for some $u\in C_{c}$ and some $z\in C_{b}$ (by the definition of $\left[ C_{c},C_{b}\right] _0 $). Consider these $u$ and $z$.

We have $u\in C_{c}=\left[ C_{c-1},X\right] _0 $ (by the recursive definition of the $C_{n}$, since $c>1$). In other words, $u=\left[ x,y\right] $ for some $x\in C_{c-1}$ and some $y\in X$ (by the definition of $\left[ C_{c-1},X\right] _0 $). Consider these $x$ and $y$.

The recursive definition of the $L_{n}$ yields $L_{c+b}=\left[ L_{c+b-1} ,\kk X\right] $ and $L_{b+1}=\left[ L_{b},\kk X\right] $.

We can apply Proposition 3 to $c-1$ instead of $a$ (since we have assumed that Proposition 3 holds for $a=c-1$). We thus obtain $\left[ L_{c-1} ,L_{b}\right] \subseteq L_{\left( c-1\right) +b}=L_{c+b-1}$.

We can apply Proposition 3 to $c-1$ and $b+1$ instead of $a$ and $b$ (since we have assumed that Proposition 3 holds for $a=c-1$). We thus obtain $\left[ L_{c-1},L_{b+1}\right] \subseteq L_{\left( c-1\right) +\left( b+1\right) }=L_{c+b}$.

Proposition 2 (applied to $n=c-1$) yields $L_{c-1}=\kk C_{c-1}$. From $x\in C_{c-1}\subseteq\kk C_{c-1}=L_{c-1}$ and $z\in C_{b} \subseteq\kk C_{b}=L_{b}$, we obtain $\left[ \underbrace{x}_{\in L_{c-1}},\underbrace{z}_{\in L_{b}}\right] \in\left[ L_{c-1},L_{b}\right] \subseteq L_{c+b-1}$. Combining this with $y\in X\subseteq\kk X$, we obtain $\left[ \underbrace{\left[ x,z\right] }_{\in L_{c+b-1} },\underbrace{y}_{\in\kk X}\right] \in\left[ L_{c+b-1},\kk X\right] =L_{c+b}$.

From $z\in L_{b}$ and $y\in X\subseteq\kk X$, we obtain $\left[ \underbrace{z}_{\in L_{b}},\underbrace{y}_{\in\kk X}\right] \in\left[ L_{b},\kk X\right] =L_{b+1}$. Hence, $\left[ \underbrace{x}_{\in L_{c-1}},\underbrace{\left[ z,y\right] }_{\in L_{b+1}}\right] \in\left[ L_{c-1},L_{b+1}\right] \subseteq L_{c+b}$.

But recall that \begin{align*} r & =\left[ \underbrace{u}_{=\left[ x,y\right] },z\right] =\left[ \left[ x,y\right] ,z\right] =\underbrace{\left[ \left[ x,z\right] ,y\right] }_{\in L_{c+b}}-\underbrace{\left[ x,\left[ z,y\right] \right] }_{\in L_{c+b}}\\ & \qquad\qquad\left( \begin{array} [c]{c} \text{since the Jacobi identity yields}\\ \left[ \left[ x,z\right] ,y\right] =\left[ \left[ x,y\right] ,z\right] +\left[ x,\left[ z,y\right] \right] \end{array} \right) \\ & \in L_{c+b}-L_{c+b}\subseteq L_{c+b}\qquad\left( \text{since } L_{c+b}\text{ is a }\kk \text{-submodule of }L\right) . \end{align*}

Now, forget that we fixed $r$. We thus have shown that $r\in L_{c+b}$ for each $r\in\left[ C_{c},C_{b}\right] _0 $. In other words, $\left[ C_{c} ,C_{b}\right] _0 \subseteq L_{c+b}$.

Hence, Lemma 5 (applied to $M=L_{c+b}$ and $S=\left[ C_{c},C_{b}\right] _0 $) yields that $\kk \left( \left[ C_{c},C_{b}\right] _0 \right) \subseteq L_{c+b}$. Now, recall that $\left[ L_{c},L_{b}\right] =\kk \left( \left[ C_{c},C_{b}\right] _0 \right) \subseteq L_{c+b}$.

Now, forget that we fixed $b$. We thus have proven that $\left[ L_{c} ,L_{b}\right] \subseteq L_{c+b}$ for all $b\geq1$. In other words, Proposition 3 holds for $a=c$. This completes the induction step. Hence, Proposition 3 is proven by induction. $\blacksquare$

Proof of Proposition 4. We shall prove Proposition 4 by strong induction on $n$. Thus, we fix an integer $m\geq1$. We assume that Proposition 4 holds for all $n<m$. We want to prove that Proposition 4 holds for $n=m$ as well.

Let $r\in B_{m}$. We shall prove that $r\in L_{m}$.

If $m=1$, then this holds for easy reasons (in fact, if $m=1$, then $B_{m}=B_{1}=X\subseteq\kk X=L_{1}=L_{m}$ (since $1=m$), and therefore $r\in B_{m}\subseteq L_{m}$). Thus, for the rest of this proof, we WLOG assume that we don't have $m=1$. Hence, $m>1$.

Thus, the recursive definition of $B_{m}$ yields $B_{m}=\bigcup\limits_{a+b=m} \left[ B_{a},B_{b}\right] _0 $. Hence, $r\in B_{m}=\bigcup\limits_{a+b=m} \left[ B_{a},B_{b}\right] _0 $. In other words, $r\in\left[ B_{a} ,B_{b}\right] _0 $ for some pair $\left( a,b\right) $ of positive integers satisfying $a+b=m$. Consider this pair $\left( a,b\right) $. We have $a>0$ (since $a$ is a positive integer) and thus $a+b>b$, so that $b<a+b=m$. Similarly, $a<m$.

We have assumed that Proposition 4 holds for all $n<m$. Hence, we can apply Proposition 4 to $n=a$ (since $a<m$). We thus obtain $B_{a}\subseteq L_{a}$. The same argument (applied to $b$ instead of $a$) yields $B_{b}\subseteq L_{b}$.

But $r\in\left[ B_{a},B_{b}\right] _0 $. In other words, $r=\left[ u,v\right] $ for some $u\in B_{a}$ and some $v\in B_{b}$ (by the definition of $\left[ B_{a},B_{b}\right] _0 $). Consider these $u$ and $v$. We have $u\in B_{a}\subseteq L_{a}$ and $v\in B_{b}\subseteq L_{b}$. Now, $r=\left[ \underbrace{u}_{\in L_{a}},\underbrace{v}_{\in L_{b}}\right] \in\left[ L_{a},L_{b}\right] \subseteq L_{a+b}$ (by Proposition 3). In view of $a+b=m$, this rewrites as $r\in L_{m}$.

Now, forget that we fixed $r$. We thus have proven that $r\in L_{m}$ for each $r\in B_{m}$. In other words, $B_{m}\subseteq L_{m}$. In other words, Proposition 4 holds for $n=m$. This completes the induction proof. Thus, Proposition 4 is proven by strong induction. $\blacksquare$

Proof of Theorem 1. Let $r\in B$. Thus, $r\in B=B_{1}\cup B_{2}\cup B_{3}\cup\cdots$. In other words, $r\in B_{n}$ for some integer $n\geq1$. Fix this $n$.

Now, $r\in B_{n}\subseteq L_{n}$ (by Proposition 4), so that $r\in L_{n}=\kk C_{n}$ (by Proposition 2). But $C_{n}\subseteq C_{1}\cup C_{2}\cup C_{3}\cup\cdots=C$ (since $C=C_{1}\cup C_{2}\cup C_{3}\cup\cdots$), so that $r\in\kk \underbrace{C_{n}}_{\subseteq C}\subseteq\kk C$.

Now, forget that we fixed $r$. We thus have proven that $r\in\kk C$ for each $r\in B$. In other words, $B\subseteq\kk C$. This proves Theorem 1. $\blacksquare$

  • Wow, thanks for the effort. I haven't had time to check all the proofs carefully yet, but the path seems good. As soon as I do, I'll give you the points. Hmm, I would like to ask you: where did you learn this material about Lie algebras? – user2345678 Feb 07 '19 at 16:43
  • @math.h: Writing this thing was very automatic (great chore when one's brain is tired) and involved a lot of copy-paste, as you can see by the repetitive formulations. I learned some of this stuff from Hartmut Laue, Freie algebraische Strukturen (the chapter on Lie algebras is mostly independent of the free groups chapter, which I have never read). But Proposition 3 (the crux of the argument) is a known fact; I had to prove very similar things in http://www.cip.ifi.lmu.de/~grinberg/algebra/r2t.pdf . – darij grinberg Feb 07 '19 at 17:02
  • I know this post is very old, but assuming $B$ is the nilpotent Lie algebra $\mathfrak{\nu,n}$ on $n$ generators of nilpotence $\nu$, is there a way to explicitly describe the Jacobi identity using the left-associative generators? If you prefer posting an answer, this is the relevant post – bliipbluup Jul 31 '20 at 12:28
  • @UnexpectedExpectation: I'm afraid I don't know anything about it (if what you're looking for are explicit formulas for expanding the Lie bracket of two left-associative iterated Lie brackets in terms of left-associative iterated Lie brackets). – darij grinberg Jul 31 '20 at 15:10
2

Write $[x_1, \dots, x_n] = [\cdots[[x_1, x_2], x_3], \dots, x_n]$. By the Jacobi identity, $$[x, [y, z]] = [x, y, z] - [x, z, y].$$ Induction on the length of $w$ then shows the statement holds for any bracket $[x, w]$ with $x$ a simpler monomial, as $[x_1, \dots, x_n] = [[x_1, \dots, x_k], x_{k+1}, \dots, x_n]$. But $$[[x, y], z] = [x, [y, z]] - [y, [x, z]],$$ so another induction on the length of $x$ proves the result for an arbitrary bracket $[x, w]$.
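For instance (a small worked case in the shorthand above): taking $x = [x_1, x_2]$, which is already a simpler monomial, the first identity gives
$$\left[\left[x_1,x_2\right],\left[x_3,x_4\right]\right] = \left[x_1,x_2,x_3,x_4\right] - \left[x_1,x_2,x_4,x_3\right],$$
which writes this length-$4$ Lie monomial as a combination of simpler monomials.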

anomaly