
Let $V$ be a vector space over a field $K$ and denote $K^\times:=K\setminus\{0\}$.

I am interested in functions $f: \text{End}(V)\to K$ satisfying $f(\varphi \circ \psi) = f(\varphi)\cdot f(\psi)$ for all $\varphi, \psi\in \text{End}(V)$, and whose restriction to $\text{Aut}(V)$ is a group homomorphism $\text{Aut}(V)\to K^\times$.

Obviously, $f(\text{id})=1$.

For finite dimensional $V$, $\det$ is a famous example.
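As a quick numerical illustration of this multiplicativity (my own sketch, not part of the original question; the use of NumPy and the random $3\times 3$ matrices are assumptions of the example):

```python
# Minimal sketch: check det(A B) = det(A) det(B) for two random 3x3 matrices.
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 3))
B = rng.standard_normal((3, 3))

assert np.isclose(np.linalg.det(A @ B), np.linalg.det(A) * np.linalg.det(B))
```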

I tried to prove or disprove that $f(\varphi)\neq 0 \implies \varphi\in\text{Aut}(V)$, but I can find neither a proof nor a counterexample.

  • If $f(\varphi)=0$ and $\varphi$ is an automorphism, then $1=f(\text{id})=f(\varphi\circ \varphi^{-1})=f(\varphi)f(\varphi^{-1})=0$, a contradiction, no? Or have I misunderstood something? – GreginGre Mar 07 '25 at 18:55
  • @GreginGre Right, but I had already thought of this case, which is why I ask for $f\neq 0$ on automorphisms. – Gyro Gearloose Mar 07 '25 at 18:58
  • $K \setminus \{0\}$ is usually denoted $K^{\times}$. Your notation is confusing because it suggests positivity wrt some order. – Qiaochu Yuan Mar 07 '25 at 19:25
  • Are you requiring that $f$ is only multiplicative on automorphisms or on arbitrary linear maps? – Qiaochu Yuan Mar 07 '25 at 19:30
  • @QiaochuYuan Yes, the property should hold on all of $\text{End}(V)$, just like the determinant does. The background is that I am looking for a kind of extension of the concept of a determinant to infinite dimensions, and there is no obvious solution. – Gyro Gearloose Mar 07 '25 at 19:45
  • Then your current phrasing is ambiguous; as written it could be read as saying that only the restriction of $f$ to $\text{Aut}(V)$ is required to be multiplicative. – Qiaochu Yuan Mar 07 '25 at 19:49

1 Answer


Claim: Let $V$ be an infinite-dimensional vector space and $f : \text{End}(V) \to K$ a monoid homomorphism (so we require $f(1) = 1$ and $f(ST) = f(S) f(T)$, and nothing else). Then $f(T) = 1$ identically.

In other words, there is no nontrivial infinite-dimensional determinant. To begin the proof, observe that $f(0)$ satisfies

$$f(0 \circ T) = f(0) f(T) = f(0)$$

for every $T \in \text{End}(V)$. So either $f(0) = 0$, or $f(0) \neq 0$, in which case we can cancel $f(0)$ (it is a nonzero element of the field $K$) and conclude that $f(T) = 1$ identically.

Lemma 1: Every linear map $T : V \to V$ which is injective has a left inverse, and every linear map which is surjective has a right inverse.

Proof. If $T : V \to V$ is injective, then we can (using the axiom of choice to choose bases) find a complement $W$ to $\text{im}(T) \subset V$, so that $V = \text{im}(T) \oplus W$ (internal direct sum). Then we can take $S : V \to V$ to be the linear map equal to $T^{-1}$ on $\text{im}(T)$ and $0$ on $W$, which satisfies $ST = 1$ by construction. The argument for the surjective case is similar. $\Box$
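For a concrete feel for Lemma 1 in infinite dimensions, here is a small sketch (my own addition, not from the answer): take $V = K^{\mathbb{N}}$, model a sequence as a Python function from indices to entries, and check that the left shift is a left inverse of the injective (but not surjective) right shift.

```python
# Sketch: a sequence (a_0, a_1, a_2, ...) is modelled as a function n -> a_n.

def right_shift(a):
    # T(a_0, a_1, ...) = (0, a_0, a_1, ...): injective, not surjective.
    return lambda n: 0 if n == 0 else a(n - 1)

def left_shift(b):
    # S(b_0, b_1, ...) = (b_1, b_2, ...): equal to T^{-1} on im(T) and 0 on
    # the complement spanned by (1, 0, 0, ...), so S T = id but T S != id.
    return lambda n: b(n + 1)

a = lambda n: n + 1  # the sequence (1, 2, 3, ...)
assert all(left_shift(right_shift(a))(n) == a(n) for n in range(10))  # S T = id
assert right_shift(left_shift(a))(0) != a(0)                          # T S != id
```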

Corollary: $f(T) \neq 0$ if $T$ has either a left or a right inverse.

Proof. If $LR = 1$ (so $L$ is a left inverse and $R$ is a right inverse) then $f(L) f(R) = 1$, and in particular $f(L), f(R) \neq 0$. $\Box$

So $f(T) \neq 0$ if $T$ can be written as the composite of maps with either left or right inverses. But:

Lemma 2: $0 \in \text{End}(V)$ can be written as a composite $L_1 R_2$ where $L_1$ has a right inverse $R_1$ and $R_2$ has a left inverse $L_2$.

Proof. It will be easier to explain by starting with an example: take $V$ to be the vector space of infinite sequences $(a_0, a_1, a_2, \dots)$. Then the composite $L_1 R_2$ of the maps

$$L_1(a_0, a_1, a_2, \dots) = (a_1, a_3, a_5, a_7, \dots)$$ $$R_2(a_0, a_1, a_2, \dots) = (a_0, 0, a_1, 0, \dots)$$

is zero. But $L_1$ and $R_2$ have right resp. left inverses

$$R_1(a_0, a_1, a_2, \dots) = (0, a_0, 0, a_1, \dots)$$ $$L_2(a_0, a_1, a_2, \dots) = (a_0, a_2, a_4, a_6, \dots).$$
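To make the bookkeeping concrete, here is a small executable sketch (my own addition, not part of the original answer), again modelling a sequence as a function from indices to entries, which checks $L_1 R_2 = 0$, $L_1 R_1 = \text{id}$ and $L_2 R_2 = \text{id}$ on the first few coordinates:

```python
# Sketch: a sequence (a_0, a_1, a_2, ...) is modelled as a function n -> a_n.

def L1(a):  # (a_0, a_1, a_2, ...) -> (a_1, a_3, a_5, ...)
    return lambda n: a(2 * n + 1)

def R1(a):  # (a_0, a_1, a_2, ...) -> (0, a_0, 0, a_1, ...); L1 R1 = id
    return lambda n: a((n - 1) // 2) if n % 2 == 1 else 0

def R2(a):  # (a_0, a_1, a_2, ...) -> (a_0, 0, a_1, 0, ...)
    return lambda n: a(n // 2) if n % 2 == 0 else 0

def L2(a):  # (a_0, a_1, a_2, ...) -> (a_0, a_2, a_4, ...); L2 R2 = id
    return lambda n: a(2 * n)

a = lambda n: n + 1  # the sequence (1, 2, 3, ...)
assert all(L1(R2(a))(n) == 0 for n in range(10))     # L1 R2 = 0
assert all(L1(R1(a))(n) == a(n) for n in range(10))  # L1 R1 = id
assert all(L2(R2(a))(n) == a(n) for n in range(10))  # L2 R2 = id
```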

To generalize this example, we choose a direct sum decomposition $V \cong V_1 \oplus V_2$ such that $\dim V_1 = \dim V_2 = \dim V$, which is possible (using the axiom of choice) because $V$ is infinite-dimensional. Then we take

  • $L_1 : V \to V$ to be the projection onto $V_1$ followed by any isomorphism $\varphi_1 : V_1 \cong V$,
  • $R_1 : V \to V$ to be $\varphi_1^{-1}$ followed by the inclusion $V_1 \hookrightarrow V$, and
  • similarly for $L_2, R_2$. $\Box$

Corollary: By Lemma 2 and the corollary above, $f(0) = f(L_1) f(R_2) \neq 0$, so $f(T) = 1$ identically.

The nonexistence of the infinite-dimensional determinant should be intuitive, since such a determinant already can't assign meaningful values to scalar multiples of the identity in any obvious way. But that doesn't appear to be the cleanest route to a contradiction, so the argument above instead uses the fact that $\text{End}(V)$ fails very badly to be Dedekind finite.

This argument also uses no properties of $K$ other than that it's commutative (the condition $f(T) \neq 0$ can be replaced by the condition that $f(T)$ is invertible), so in fact it shows that the abelianization of $\text{End}(V)$ (as a monoid) is trivial.


Edit: See this MO discussion for some more context on the maps $L_1, R_1, L_2, R_2$ above.

Edit #2: I've now asked a follow-up question about what happens if we only ask for the determinant to be defined on $\text{Aut}(V)$.

Qiaochu Yuan