
$\require{AMScd}$ I am studying Koszul duality of (quadratic) algebras following the paper Koszul resolutions, and I wanted to perform some explicit computations to make things more concrete. To keep the computations accessible, I chose the simplest non-free $k$-algebra ($k$ a fixed ground field) with a quadratic presentation I could think of: $\Lambda := k[\varepsilon]/(\varepsilon^2)$, the algebra of dual numbers, where $\varepsilon \in \Lambda$ is taken to be an element of degree $1$. It is easy to see that $$ \Lambda \cong T(V)/(x \otimes x) $$ where $V$ is the graded vector space of dimension $1$ concentrated in degree $1$, spanned by $x$.

The augmentation map of $\Lambda$ is clearly given by $$ \pi : \Lambda \to k, \qquad \lambda \cdot 1 + \mu \cdot \varepsilon \mapsto \lambda $$ and, hence, $I(\Lambda) := \ker{\pi} = k\varepsilon$.

My problems arise when I try to explicitly describe the bar complex for this algebra. Following the construction, we have that $$ B_s(\Lambda,\Lambda) := \Lambda \otimes (k\varepsilon)^{\otimes s} \otimes \Lambda $$ where $\otimes$ will always refer to $\otimes_k$. As a vector space, $B_s(\Lambda,\Lambda)$ has basis $$ \big \{ 1 [\varepsilon | \cdots | \varepsilon]1,\ \varepsilon [\varepsilon | \cdots | \varepsilon]1,\ 1 [\varepsilon | \cdots | \varepsilon] \varepsilon,\ \varepsilon [\varepsilon | \cdots | \varepsilon] \varepsilon \big \} $$ Finally, I want to determine the differential $B_s(\Lambda,\Lambda) \to B_{s-1}(\Lambda,\Lambda)$.
Following the expression from the paper, $$ \partial(a[a_1|\cdots|a_s]a') := (-1)^{e_0} aa_1[a_2|\cdots|a_s]a' + \sum_{i=1}^{s-1} a[a_1| \cdots | a_i a_{i+1}|\cdots|a_s]a' - (-1)^{e_{s-1}} a[a_1|\cdots|a_{s-1}] a_s a' $$ where $e_0 = |a|$ and $$ e_i = \big | a[a_1|\cdots|a_i] \big | = |a| + \sum_{t=1}^i |a_t| + i $$ Note that for $\Lambda$ all the middle terms vanish, since $\varepsilon^2 = 0$. It is clear that $$ \partial(\varepsilon[\varepsilon|\cdots|\varepsilon]\varepsilon) = 0 $$ Moreover, $$ \partial(1[\varepsilon|\cdots|\varepsilon]\varepsilon) = \varepsilon[\varepsilon|\cdots|\varepsilon] \varepsilon $$

About the next two I am unsure: $$ \partial(1[\varepsilon|\cdots|\varepsilon]1) = \varepsilon [\varepsilon |\cdots |\varepsilon] 1 - (-1)^{e_{s-1}} 1 [\varepsilon|\cdots|\varepsilon]\varepsilon $$ Now observe that $$ e_{s-1} = |1| + \sum_{i=1}^{s-1} |\varepsilon| + (s-1) = 2(s-1) $$ Hence, $$ \partial(1[\varepsilon|\cdots|\varepsilon]1) = \varepsilon [\varepsilon |\cdots |\varepsilon] 1 - 1 [\varepsilon|\cdots|\varepsilon]\varepsilon $$ By a similar reasoning, I concluded that $$ \partial(\varepsilon[\varepsilon | \cdots | \varepsilon] 1) = \varepsilon [\varepsilon | \cdots | \varepsilon] \varepsilon $$

I am convinced that my computations are wrong, since $\partial$ (I believe) should be bi-$\Lambda$-linear, and in my computations it clearly is not: $$ \varepsilon \cdot \partial(1[\varepsilon | \cdots |\varepsilon] 1) = -\varepsilon[\varepsilon|\cdots|\varepsilon]\varepsilon \not = \varepsilon[\varepsilon|\cdots|\varepsilon]\varepsilon = \partial(\varepsilon [\varepsilon|\cdots|\varepsilon] 1) $$
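To make the sign bookkeeping concrete, here is a small Python sketch I wrote (my own ad hoc encoding, not anything from the paper): a basis element $a[\varepsilon|\cdots|\varepsilon]a'$ of $B_s(\Lambda,\Lambda)$ is stored as a triple recording the $\varepsilon$-exponents of $a$ and $a'$ and the number $s$ of bar entries. It implements the differential with my signs above and confirms both observations: $\partial \circ \partial = 0$, yet $\partial$ fails to commute with the naive left action of $\varepsilon$.

```python
from itertools import product

# A basis element a[e|...|e]a' of B_s(L, L) is stored as (a, s, b), where
# a, b in {0, 1} are the epsilon-exponents of the outer tensor factors and
# all s bar entries equal epsilon.  Linear combinations are dicts
# {basis element: integer coefficient}.

def add(v, key, c):
    if c:
        v[key] = v.get(key, 0) + c
        if v[key] == 0:
            del v[key]

def d(v):
    """Differential with the signs above: the middle terms vanish (eps^2 = 0),
    and e_0 = |a|, e_{s-1} = |a| + 2(s-1) = |a| mod 2."""
    out = {}
    for (a, s, b), c in v.items():
        if s == 0:
            continue
        if a == 0:  # the term (a.eps)[...]a' survives only when a = 1
            add(out, (1, s - 1, b), c)               # sign (-1)^{e_0} = +1 here
        if b == 0:  # the term a[...](eps.a') survives only when a' = 1
            add(out, (a, s - 1, 1), -c * (-1) ** a)  # sign -(-1)^{e_{s-1}}
    return out

def naive_left_eps(v):
    """Untwisted left multiplication by eps (kills terms with a = eps)."""
    return {(1, s, b): c for (a, s, b), c in v.items() if a == 0}

# d is a differential on the nose...
for a, b in product((0, 1), repeat=2):
    for s in range(1, 7):
        assert d(d({(a, s, b): 1})) == {}

# ...but it is not linear over the naive left action of eps, e.g. for s = 2:
x = {(0, 2, 0): 1}                  # 1[e|e]1
print(naive_left_eps(d(x)))         # {(1, 1, 1): -1}, i.e. -e[e]e
print(d(naive_left_eps(x)))         # {(1, 1, 1): 1},  i.e.  e[e]e
```

So $\partial^2 = 0$ holds with these signs; only the (naive) bilinearity fails.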

Any comment, correction or remark on the computations I have done so far is welcome. Furthermore, suggestions of other similar and perhaps more enlightening computations would also be useful.

Update: 17/10/2024

I have decided to set this issue aside for the time being and instead compute $\Lambda^!$ directly from the definition: $$ \Lambda^! := \text{Ext}_{\Lambda}(k,k) $$ In the non-graded case, a free resolution of the $\Lambda$-module $k$ is given as follows: $$ \cdots \to \Lambda \xrightarrow{(\cdot \varepsilon)} \Lambda \xrightarrow{(\cdot \varepsilon)} \Lambda \xrightarrow{\pi} k $$ In the graded case, we take the following resolution: $$ \cdots \to \Lambda[-2] \xrightarrow{(\cdot \varepsilon)} \Lambda[-1] \xrightarrow{(\cdot \varepsilon)} \Lambda \xrightarrow{\pi} k $$ It might be obvious, but I do not see why we should apply the suspensions in the graded case (do we want our differentials to have internal degree $0$, or something like that?).

Applying $\text{Hom}_{\Lambda}(-,k)$, we obtain the following cochain complex: $$ 0 \to \text{Hom}_{\Lambda}(\Lambda,k) \xrightarrow{(\cdot \varepsilon)^*} \text{Hom}_{\Lambda}(\Lambda[-1],k) \xrightarrow{(\cdot \varepsilon)^*} \text{Hom}_{\Lambda}(\Lambda[-2],k) \to \cdots $$ Observe that $\varphi \in \text{Hom}_{\Lambda}(\Lambda[-n],k)$ is fully determined by $\varphi(1)$. If $\varphi(1) = \mu \in k$, it follows that $$ \varphi(k_0 \cdot 1 + k_1 \cdot \varepsilon) = k_0 \varphi(1) + k_1 \varepsilon \cdot \varphi(1) = \mu \cdot k_0 $$ since $\varepsilon$ acts as zero on $k$. Therefore $\varphi = \mu \cdot \pi$ for some scalar $\mu$ and, hence, $\text{Hom}_{\Lambda}(\Lambda[-n],k)$ is generated by a morphism $p_n$ of degree $n$. Moreover, every dual map $(\cdot\varepsilon)^*$ vanishes, since $\big((\cdot\varepsilon)^*\varphi\big)(1) = \varphi(\varepsilon) = 0$. The cochain complex is therefore precisely $$ 0 \to k[p_0] \xrightarrow{0} k[p_1] \xrightarrow{0} k[p_2] \to \cdots $$
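The vanishing of all the dual maps $(\cdot\varepsilon)^*$ can be checked in a couple of lines. A minimal Python sketch, under my identification of $\text{Hom}_\Lambda(\Lambda[-n],k)$ with $k$ via $\varphi \mapsto \varphi(1)$:

```python
# A Lambda-linear map phi: Lambda -> k (with eps acting as 0 on k) is
# determined by mu = phi(1); on x0*1 + x1*eps it evaluates as mu*x0.
def ev(mu, x0, x1):
    return mu * x0                    # phi(eps) = eps.phi(1) = 0 in k

# The dual map (.eps)* sends phi to phi o (.eps); multiplication by eps
# sends 1 to eps, so the image of phi is determined by phi(eps) = 0:
def dual_of_mult_by_eps(mu):
    return ev(mu, 0, 1)               # ((.eps)* phi)(1) = phi(eps)

assert all(dual_of_mult_by_eps(mu) == 0 for mu in range(-5, 6))
# Hence every differential of the dualized complex is zero, and
# Ext_Lambda(k, k) is one-dimensional in each degree.
```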

Hence, one concludes that $\text{Ext}_{\Lambda}^{i,i}(k,k) = k[p_i]$ and that $\text{Ext}$ vanishes off the diagonal. In particular, $\Lambda$ is Koszul.

Now, we are going to study the Yoneda algebra structure. In order to do this, observe that an $n$-extension representing the generator of $\text{Ext}_{\Lambda}^{n,n}(k,k)$ is given as follows:

\begin{CD} \cdots @>>> \Lambda[-(n+1)] @>{(\cdot \varepsilon)}>> \Lambda[-n] @>{(\cdot \varepsilon)}>> \cdots @>>> \Lambda[-2] @>{(\cdot \varepsilon)}>> \Lambda[-1] @>{(\cdot \varepsilon)}>> \Lambda @>{\pi}>> k @>>> 0 \\ @. @VVV @V{p_n}VV @. @V{id}VV @V{id}VV @V{id}VV @V{id}VV \\ \cdots @>>> 0 @>>> k @>{(\cdot \varepsilon)}>> \cdots @>>> \Lambda[-2] @>{(\cdot \varepsilon)}>> \Lambda[-1] @>{(\cdot \varepsilon)}>> \Lambda @>{\pi}>> k @>>> 0 \end{CD}

From this it follows that, given generators $[p_n],[p_m]$ (as a vector space) of $\text{Ext}_{\Lambda}(k,k)$, the Yoneda product is given by $$ [p_n] \cup [p_m] = [p_{m+n}] $$ In particular, we can deduce that, actually, $\Lambda^! \cong k[x]$, the polynomial algebra in one variable of degree $|x| = 1$.
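As a sanity check, the same answer drops out directly from the quadratic presentation $\Lambda = T(V)/(R)$ via the standard formula for the quadratic dual, $A^! = T(V^*)/(R^\perp)$ (not recalled above, so take this notation as an assumption):

```latex
% V = kx with |x| = 1, and R = k(x \otimes x) \subseteq V \otimes V.
% Since \dim_k (V \otimes V) = 1, we get R = V \otimes V, hence R^{\perp} = 0 and
\Lambda^{!} \;=\; T(V^{*})/(R^{\perp}) \;=\; T(V^{*}) \;\cong\; k[x],
% the free algebra on a single generator, which in one variable is the
% polynomial algebra -- matching [p_n] \cup [p_m] = [p_{m+n}] above.
```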

Remaining Questions:

  • Why did I mess up the signs of the bar construction?
  • How (and why) do we take the free resolutions of $k$ in the graded setting?
  • Further comments or corrections on my reasoning? Any other toy examples worth having in mind when delving deeper into the theory?

1 Answer


First, I'll make a few remarks on the reduced bar construction from your reference (Priddy) and show why the signs you get are in fact correct; the issue lies in the sign convention for the bimodule structure on the complex.

Given a positively graded algebra $A$ with augmentation $A \cong k \oplus A_+$, the reduced bar complex is defined by $\mathcal{B}(A,A)_s := A \otimes (A_+)^{\otimes s} \otimes A$ with differential $$\partial(a[a_1|\dots |a_s]a') := (-1)^{\epsilon_0} a a_1[a_2|\dots |a_s]a' + \sum_{i=1}^{s-1}{(-1)^{\epsilon_i} a[a_1|\dots| a_i a_{i+1}|\dots |a_s]a'} + (-1)^{\epsilon_s} a[a_1|\dots |a_{s-1}] a_s a' $$ where $$\epsilon_i = i + \sum_{k=1}^{i}{|a_k|} + |a|.$$ Note that the last term differs from the one in your reference if $a_s$ is even, but in your example it is the same, as $A_+ = k \varepsilon$ is odd.

Now, on the issue: it is stated at the beginning that they use the Koszul sign convention on tensor products: whenever two elements $a$ and $b$ are permuted past each other, we pick up a sign $(-1)^{|a|\cdot|b|}$. This leads to the following bimodule structure with respect to the internal grading $B(A,A)_s = \bigoplus_t B(A,A)_{s,t}$. Let $a[a_1|\dots|a_s]a'$ be in $B(A,A)_{s,t}$, i.e., $|a| + |a_1| + \dots + |a'| = t$. Then, $$ (a[a_1|\dots|a_s]a') \cdot (\tilde{a}\otimes \tilde{b}) := (-1)^{|\tilde{a}|t}(\tilde{a}a)[a_1|\dots|a_s](a'\tilde{b}). $$ You can now check that with this sign convention the differentials become $A\otimes A^{op}$-module maps.

Specifically, in your example the computation of $\partial$ on the basis elements $a[\varepsilon|\dots|\varepsilon] a'$ stays as you wrote it, but the left action of $\varepsilon$ on $1[\varepsilon|\dots|\varepsilon]1$ now gives $(-1)^s \varepsilon[\varepsilon|\dots|\varepsilon]1$, which is what restores the (graded) bilinearity.
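Here is a short Python sketch double-checking this (my own encoding; the sign $(-1)^{|\tilde a|}$ in the final assertion is the usual Koszul sign for moving the odd map $\partial$ past the left factor $\tilde a$, and conventions for it vary by source). It verifies both $\partial \circ \partial = 0$ for the differential above and the graded linearity of $\partial$ over the twisted action:

```python
from itertools import product

# A basis element a[e|...|e]a' of B(A,A)_{s,t} is stored as (a, s, b), where
# a, b in {0, 1} are the epsilon-exponents of the outer factors; the internal
# degree is t = a + s + b.  Linear combinations are dicts {element: coeff}.

def add(v, key, c):
    if c:
        v[key] = v.get(key, 0) + c
        if v[key] == 0:
            del v[key]

def d(v):
    """Differential with the signs above: eps_0 = |a|, eps_s = 2s + |a|,
    so both reduce to |a| mod 2; the middle terms vanish since eps^2 = 0."""
    out = {}
    for (a, s, b), c in v.items():
        if s == 0:
            continue
        if a == 0:  # the term (a.eps)[...]a' survives only when a = 1
            add(out, (1, s - 1, b), c)              # (-1)^{eps_0} = +1 here
        if b == 0:  # the term a[...](eps.a') survives only when a' = 1
            add(out, (a, s - 1, 1), c * (-1) ** a)  # (-1)^{eps_s} = (-1)^a
    return out

def act(v, ea, eb):
    """Right action of eps^ea (x) eps^eb in A (x) A^op, twisted by (-1)^{|~a| t}."""
    out = {}
    for (a, s, b), c in v.items():
        if a + ea <= 1 and b + eb <= 1:  # eps^2 = 0 kills the rest
            add(out, (a + ea, s, b + eb), c * (-1) ** (ea * (a + s + b)))
    return out

def scale(v, c):
    return {k: c * val for k, val in v.items()}

# d o d = 0, and d is linear over the twisted action up to the Koszul sign
# (-1)^{|~a|} for moving the odd map d past the left factor ~a:
for a, b, ea, eb in product((0, 1), repeat=4):
    for s in range(1, 6):
        x = {(a, s, b): 1}
        assert d(d(x)) == {}
        assert d(act(x, ea, eb)) == scale(act(d(x), ea, eb), (-1) ** ea)
```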

The rest of your example and the computation of $\Lambda^!$ look good. As for the shifts: you want a resolution by graded modules, and multiplication by $\varepsilon$ raises the internal degree by $1$, so the shifts are there precisely to make each differential a morphism of internal degree $0$.

MPos
  • It might be obvious but why do we ask for the maps of the resolution to be of (length) degree $0$ ? As you can tell, I have not worked with derived functors in the graded setting before. – Javier Herrero Oct 22 '24 at 09:07