$\require{AMScd}$ I am studying Koszul duality of (quadratic) algebras following the paper *Koszul resolutions*, and I wanted to perform some explicit computations to make things more concrete. To do this, I wanted a toy example simple enough that the computations stay accessible. I somewhat arbitrarily chose the simplest non-free $k$-algebra ($k$ a fixed ground field) with a quadratic presentation I could think of: $\Lambda := k[\varepsilon]/(\varepsilon^2)$, the algebra of dual numbers, where we take $\varepsilon \in \Lambda$ to be an element of degree $1$. It is easy to see that $$ \Lambda \cong T(V)/(x \otimes x) $$ where $V$ is the graded vector space of dimension $1$ concentrated in degree $1$, spanned by $x$. The augmentation map of $\Lambda$ is clearly given by $$ \pi : \Lambda \to k, \qquad \lambda \cdot 1 + \mu \cdot \varepsilon \mapsto \lambda, $$ and hence $I(\Lambda) := \ker \pi = k\varepsilon$, the one-dimensional span of $\varepsilon$. My problems arise when I try to describe the bar complex of this algebra explicitly. Following the construction, we have $$ B_s(\Lambda,\Lambda) := \Lambda \otimes (k\varepsilon)^{\otimes s} \otimes \Lambda, $$ where $\otimes$ will always refer to $\otimes_k$. As a vector space, $B_s(\Lambda,\Lambda)$ has the following basis: $$ \big \{ 1 [\varepsilon | \cdots | \varepsilon]1,\ \varepsilon [\varepsilon | \cdots | \varepsilon]1,\ 1 [\varepsilon | \cdots | \varepsilon] \varepsilon,\ \varepsilon [\varepsilon | \cdots | \varepsilon] \varepsilon \big \}. $$ Finally, I want to determine the differential $B_s(\Lambda,\Lambda) \to B_{s-1}(\Lambda,\Lambda)$.
Following the expression from the paper, $$ \partial(a[a_1|\cdots|a_s]a') := (-1)^{e_0}\, aa_1[a_2|\cdots|a_s]a' + \sum_{i=1}^{s-1} a[a_1| \cdots | a_i a_{i+1}|\cdots|a_s]a' - (-1)^{e_{s-1}}\, a[a_1|\cdots|a_{s-1}]\, a_s a', $$ where $e_0 = |a|$ and $$ e_i = \big | a[a_1|\cdots|a_i] \big | = |a| + \sum_{t=1}^i |a_t| + i. $$ Since $\varepsilon^2 = 0$, it is clear that $$ \partial(\varepsilon[\varepsilon|\cdots|\varepsilon]\varepsilon) = 0. $$ Moreover, $$ \partial(1[\varepsilon|\cdots|\varepsilon]\varepsilon) = \varepsilon[\varepsilon|\cdots|\varepsilon] \varepsilon. $$ I am unsure about the next two. First, $$ \partial(1[\varepsilon|\cdots|\varepsilon]1) = \varepsilon [\varepsilon |\cdots |\varepsilon] 1 - (-1)^{e_{s-1}}\, 1 [\varepsilon|\cdots|\varepsilon]\varepsilon. $$ Now observe that $$ e_{s-1} = |1| + \sum_{i=1}^{s-1} |\varepsilon| + (s-1) = 2(s-1), $$ which is even, so $$ \partial(1[\varepsilon|\cdots|\varepsilon]1) = \varepsilon [\varepsilon |\cdots |\varepsilon] 1 - 1 [\varepsilon|\cdots|\varepsilon]\varepsilon. $$ By a similar reasoning (here $e_{s-1} = 2s-1$ is odd), I concluded that $$ \partial(\varepsilon[\varepsilon | \cdots | \varepsilon] 1) = \varepsilon [\varepsilon | \cdots | \varepsilon] \varepsilon. $$ I am convinced that my computations are wrong, since $\partial$ (I believe) should be bi-$\Lambda$-linear, and in my computations it is clearly not: $$ \varepsilon \cdot \partial(1[\varepsilon | \cdots |\varepsilon] 1) = -\varepsilon[\varepsilon|\cdots|\varepsilon]\varepsilon \neq \varepsilon[\varepsilon|\cdots|\varepsilon]\varepsilon = \partial(\varepsilon \cdot 1 [\varepsilon|\cdots|\varepsilon] 1). $$
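To keep the sign bookkeeping honest, here is a small sanity-check script. The encoding of basis chains as triples `(a, s, ap)` is my own shorthand: since $I(\Lambda) = k\varepsilon$ is one-dimensional, every middle slot is forced to be $\varepsilon$, so only the exponents of $\varepsilon$ on the two outer factors and the length $s$ matter. It implements the formula above literally, term by term:

```python
# Sanity check for the bar differential of Λ = k[ε]/(ε²), implementing
# the formula from the post literally.  Encoding (my own shorthand):
# a basis chain of B_s(Λ,Λ) is a triple (a, s, ap) with a, ap ∈ {0,1}
# the exponents of ε on the outer factors; the s middle slots are all ε.

def bar_d(chain):
    """Return ∂(chain) as a dict {basis chain: coefficient}, for s >= 1."""
    a, s, ap = chain
    out = {}
    # First term (-1)^{e_0} a·a_1[...]: nonzero only when the left factor
    # is 1 (a == 0), since ε·ε = 0; the sign is then (-1)^{|1|} = +1.
    if a == 0:
        out[(1, s - 1, ap)] = out.get((1, s - 1, ap), 0) + 1
    # Every middle term contains a factor ε·ε = 0, so the sum vanishes.
    # Last term -(-1)^{e_{s-1}} [...]a_s·a': nonzero only when ap == 0;
    # e_{s-1} = |ε^a| + (s-1)·|ε| + (s-1) ≡ a (mod 2).
    if ap == 0:
        out[(a, s - 1, 1)] = out.get((a, s - 1, 1), 0) - (-1) ** a
    return {c: v for c, v in out.items() if v != 0}

def bar_d2(chain):
    """∂∘∂ on a basis chain (needs s >= 2)."""
    out = {}
    for c, v in bar_d(chain).items():
        for c2, v2 in bar_d(c).items():
            out[c2] = out.get(c2, 0) + v * v2
    return {c: v for c, v in out.items() if v != 0}
```

For instance, `bar_d((0, s, 0))` returns `{(1, s-1, 0): 1, (0, s-1, 1): -1}`, matching the computation above, and `bar_d2` returns `{}` on all four generators for every $s \geq 2$; so, for what it is worth, these signs do satisfy $\partial \circ \partial = 0$.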
Any comment, correction or remark on the computations I have done so far is welcome. Furthermore, suggestions for other similar, perhaps more enlightening, computations would also be useful.
Update: 17/10/2024
I have decided to set this issue aside for the time being and instead compute $\Lambda^!$ directly from the definition: $$ \Lambda^! := \text{Ext}_{\Lambda}(k,k). $$ In the non-graded case, a free resolution of the $\Lambda$-module $k$ is given as follows: $$ \cdots \to \Lambda \xrightarrow{(\cdot \varepsilon)} \Lambda \xrightarrow{(\cdot \varepsilon)} \Lambda \xrightarrow{\pi} k. $$ In the graded case, we take the following resolution: $$ \cdots \to \Lambda[-2] \xrightarrow{(\cdot \varepsilon)} \Lambda[-1] \xrightarrow{(\cdot \varepsilon)} \Lambda \xrightarrow{\pi} k. $$ It might be obvious, but I do not see why we should apply the shifts in the graded case (do we want our differentials to have internal degree $0$, or something like that?). Applying $\text{Hom}_{\Lambda}(-,k)$, we obtain the following cochain complex: $$ 0 \to \text{Hom}_{\Lambda}(\Lambda,k) \xrightarrow{(\cdot \varepsilon)^*} \text{Hom}_{\Lambda}(\Lambda[-1],k) \xrightarrow{(\cdot \varepsilon)^*} \text{Hom}_{\Lambda}(\Lambda[-2],k) \to \cdots $$ Observe that $\varphi \in \text{Hom}_{\Lambda}(\Lambda[-n],k)$ is fully determined by $\varphi(1)$. If $\varphi(1) = \mu \in k$, it follows that $$ \varphi(k_0 \cdot 1 + k_1 \cdot \varepsilon) = k_0 \varphi(1) + k_1 \varepsilon \varphi(1) = \mu \cdot k_0, $$ since $\varepsilon$ acts as $0$ on $k$. Therefore $\varphi = \mu \cdot \pi$ for some scalar $\mu$, and hence $\text{Hom}_{\Lambda}(\Lambda[-n],k)$ is generated by the morphism $p_n := \pi$, of degree $n$. Moreover, every dual differential vanishes: $(\cdot \varepsilon)^*(p_n) = p_n \circ (\cdot \varepsilon) = 0$, because $\pi(\lambda \varepsilon) = 0$. Hence the cochain complex is precisely given by $$ 0 \to k[p_0] \xrightarrow{0} k[p_1] \xrightarrow{0} k[p_2] \to \cdots $$
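The vanishing of the dual differentials can also be checked by a two-line matrix computation. The encoding is my own: identify $\Lambda \cong k^2$ with ordered basis $\{1, \varepsilon\}$, so that multiplication by $\varepsilon$ becomes a $2 \times 2$ matrix and $\pi$ a row vector, and pulling back along $(\cdot\,\varepsilon)$ is row-vector-times-matrix:

```python
# Λ = k[ε]/(ε²) as k² with ordered basis {1, ε} (my own encoding).
E = [[0, 0],   # multiplication by ε: 1 ↦ ε, ε ↦ 0 (columns in basis {1, ε})
     [1, 0]]
pi = [1, 0]    # the augmentation: π(1) = 1, π(ε) = 0

def compose_row(row, mat):
    """Row vector times matrix: the pullback φ ↦ φ∘(·ε) on Hom_Λ(Λ, k)."""
    return [sum(row[i] * mat[i][j] for i in range(2)) for j in range(2)]

compose_row(pi, E)  # [0, 0]
```

Since every $\Lambda$-linear map $\Lambda \to k$ is a scalar multiple of $\pi$, this single computation shows that each map $(\cdot\,\varepsilon)^*$ in the dualized complex is zero.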
Hence, one concludes that $\text{Ext}_{\Lambda}^{i,i}(k,k) = k[p_i]$, while $\text{Ext}_{\Lambda}^{i,j}(k,k)$ vanishes off the diagonal $i = j$. In particular, $\Lambda$ is Koszul.
Now we are going to study the Yoneda algebra structure. To do this, observe that an $n$-extension representing the generator of $\text{Ext}_{\Lambda}^{n,n}(k,k)$ is given as follows:
\begin{CD} \cdots @>>> \Lambda[-(n+1)] @>{(\cdot \varepsilon)}>> \Lambda[-n] @>{(\cdot \varepsilon)}>> \cdots @>>> \Lambda[-2] @>{(\cdot \varepsilon)}>> \Lambda[-1] @>{(\cdot \varepsilon)}>> \Lambda @>{\pi}>> k @>>> 0 \\ @. @VVV @V{p_n}VV @. @V{id}VV @V{id}VV @V{id}VV @V{id}VV @. \\ \cdots @>>> 0 @>>> k @>{(\cdot \varepsilon)}>> \cdots @>>> \Lambda[-2] @>{(\cdot \varepsilon)}>> \Lambda[-1] @>{(\cdot \varepsilon)}>> \Lambda @>{\pi}>> k @>>> 0 \end{CD}
From this it follows that, given generators $[p_n],[p_m]$ (as a vector space) of $\text{Ext}_{\Lambda}(k,k)$, the Yoneda product is given by $$ [p_n] \cup [p_m] = [p_{m+n}]. $$ In particular, we deduce that, in fact, $\Lambda^! \cong k[x]$, the polynomial algebra in one variable of degree $|x| = 1$.
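Equivalently, the product can be computed at the cochain level by lifting cocycles along the resolution; the following is a sketch of that standard argument in the notation above (the chain-map components $f_i$ are my own labels):

```latex
% Yoneda product as composition of lifted chain maps (sketch, using the
% graded resolution above).  Since every differential is (\cdot \varepsilon),
% the identity maps
%   f_i := \mathrm{id} : \Lambda[-(m+i)] \longrightarrow \Lambda[-i]
% assemble into a chain map lifting the cocycle p_m = f_0 followed by
% the augmentation.  The Yoneda product is then the composite
\[
  [p_n] \cup [p_m]
  = \big[\, p_n \circ f_n \,\big]
  = \big[\, p_n \circ \mathrm{id}_{\Lambda[-(m+n)]} \,\big]
  = [p_{m+n}],
\]
% recovering \Lambda^! = \mathrm{Ext}_\Lambda(k,k) \cong k[x] with |x| = 1.
```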
Remaining Questions:
- Why did I mess up the signs in the bar construction?
- How (and why) do we take free resolutions of $k$ in the graded setting?
- Any further comments on, or corrections to, my reasoning? Is there any other toy example worth having in mind when delving deeper into the theory?