
I'm trying to understand the idea behind the set $F(S)$ we consider for constructing a free vector space over a given set $S$ and a field $K$. We define $F(S)$ as the set of maps $f: S \to K$ such that $f^{-1}\left( K \setminus \{0\} \right)$ is finite.

Why do we need the finiteness condition?

2 Answers


Recall the defining universal property of "the free vector space on $S$": it must be a vector space $W$ together with a set-theoretic map $i\colon S\to W$ such that for every vector space $V$ and every set map $g\colon S\to V$ there exists a unique linear transformation $T\colon W\to V$ such that $T(i(s))=g(s)$ for all $s\in S$. Note that knowing what happens to $i(S)$ inside $W$ tells you what happens in $\mathrm{span}(\{i(s)\mid s\in S\})$, but not what happens outside of that span; so the uniqueness will force $\mathrm{span}(i(S))=W$. The vector space $F(S)$ tries to capture that with the finiteness condition, since the span consists of linear combinations which are inherently finite.

You should think of a function $f\in F(S)$ as a formal linear combination of elements of $S$, where $f(s)$ is the coefficient of $s$ in that linear combination. Because linear combinations are finite, there can only be finitely many nonzero coefficients.
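For instance, if $S=\{a,b,c,\dots\}$, the formal combination $3a-2c$ corresponds to the function $f\in F(S)$ with $f(a)=3$, $f(c)=-2$, and $f(s)=0$ for every other $s\in S$; its support $f^{-1}(K\setminus\{0\})=\{a,c\}$ is indeed finite.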

You can define a vector space without that condition: the set $K^S$ of all functions $f\colon S\to K$, with pointwise addition and the obvious scalar multiplication, is a vector space. But if $S$ is infinite, then this vector space is not free on $S$. The reason is, again, that linear combinations are finite sums of vectors.

The vector space $F(S)$ you describe includes the functions $\{\delta_s\}_{s\in S}$ defined by $\delta_s(t) = \delta_{st}$ (Kronecker's delta) for all $s,t\in S$; that is, $$\delta_s(t) = \left\{\begin{array}{ll} 1 &\text{if }t=s,\\ 0 &\text{if }t\neq s. \end{array}\right.$$ These functions are linearly independent. I claim that they are exactly the functions you need for the "free" vector space on $S$. We identify $s\in S$ with the function $\delta_s$.

Indeed, let $V$ be any vector space, and let $g\colon S\to V$ be any function. We need there to be a unique linear transformation $T\colon F(S)\to V$ such that $T(\delta_s) = g(s)$ for all $s\in S$. The definition of $T$ is clear: you should define $T(\delta_s)=g(s)$, and then "extend linearly". But that only tells you how to deal with the functions $\mathsf{f}\colon S\to K$ that can be written, for finitely many pairwise distinct elements $s_1,\ldots,s_n\in S$ and scalars $\alpha_i$, as $$\mathsf{f}=\sum_{i=1}^n \alpha_i\delta_{s_i}.$$ Those are precisely the functions in $F(S)$. The resulting $T$ is linear, and it is the unique linear map satisfying the given conditions.
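Spelling out "extend linearly": for such an $\mathsf{f}$ you are forced to define $$T(\mathsf{f}) = T\left(\sum_{i=1}^n \alpha_i\delta_{s_i}\right) = \sum_{i=1}^n \alpha_i\, T(\delta_{s_i}) = \sum_{i=1}^n \alpha_i\, g(s_i),$$ and a direct check shows that this formula is well defined and linear on all of $F(S)$.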

If you drop the finiteness condition, then there are other functions in $K^S$ when $S$ is infinite; for example, the function that sends every element of $S$ to $1$. If you accept the Axiom of Choice, then you can extend $\{\delta_s\}_{s\in S}$ to a basis $\gamma$ of $K^S$, and you can define infinitely many distinct linear transformations $K^S\to V$ that all agree with $g$ on every $\delta_s$, by letting them take arbitrary values on the elements of $\gamma\setminus\{\delta_s\}_{s\in S}$ (as soon as $V\neq\{0\}$).

The dimension of $F(S)$ is always $|S|$. When $S$ is finite, this equals the dimension of $K^S$; but when $S$ is infinite, $K^S$ has strictly larger dimension. This is the key behind the fact that a vector space is isomorphic to its dual if and only if it is finite dimensional. Because the dimension is strictly larger, a linear transformation $K^S\to V$ extending a given map $g\colon S\to V$ will not be unique, so you do not get a "free vector space on $S$".
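To make this concrete, take $S=\mathbb{N}$ and $K=\mathbb{Q}$. Then $F(S)$, the space of finitely supported sequences, has countable dimension $|S|=\aleph_0$, so it is a countable set. But $\mathbb{Q}^{\mathbb{N}}$ has cardinality $2^{\aleph_0}$, and a $\mathbb{Q}$-vector space of countable dimension is countable; so $\dim \mathbb{Q}^{\mathbb{N}}$ must be uncountable, strictly larger than $|S|$.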

Arturo Magidin

Arturo's answer is very clear and complete, but maybe the following captures the "idea" of $F(S)$ a little.

What we want $F(S)$ to be is a vector space $V$ for which $S$ is a basis. If we had one, then every $v\in V$ would be a (finite) linear combination of elements of $S$: $$ v = \sum_{s \in A} \lambda_s s, $$ where $A$ is some finite subset of $S$ and the $\lambda_s$ are scalars. But we don't know how to do addition or scalar multiplication for elements of $S$.

The machinery described by Arturo is a way of making the idea of "a linear combination of elements of $S$" into a formal object that we can manipulate. The finiteness condition on $f^{-1}(K\setminus\{0\})$ reflects the fact that all elements of $F(S)$ are finite linear combinations of elements of $S$.
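If it helps to see this "formal object that we can manipulate" concretely, here is a minimal sketch (my own illustration, not from either answer) that models elements of $F(S)$ as Python dictionaries with finitely many nonzero entries, together with addition, scalar multiplication, and the map induced by a set map $g$:

```python
# A minimal sketch: elements of F(S) as dictionaries {s: coefficient},
# storing only the finitely many nonzero coefficients.

def delta(s):
    """The basis vector delta_s: coefficient 1 at s, 0 elsewhere."""
    return {s: 1}

def add(f1, f2):
    """Pointwise addition, dropping coefficients that become zero."""
    out = dict(f1)
    for s, c in f2.items():
        out[s] = out.get(s, 0) + c
        if out[s] == 0:
            del out[s]
    return out

def scale(a, f):
    """Scalar multiplication a*f."""
    return {s: a * c for s, c in f.items()} if a != 0 else {}

def extend_linearly(g, f):
    """The induced map T: send sum of f(s)*delta_s to sum of f(s)*g(s).
    Here g(s) is assumed to land somewhere that + and * make sense
    (plain numbers in this toy example)."""
    return sum(c * g(s) for s, c in f.items())

# Example: the formal combination 3*'a' - 2*'c'
v = add(scale(3, delta('a')), scale(-2, delta('c')))
print(v)                                     # {'a': 3, 'c': -2}
print(extend_linearly(lambda s: len(s), v))  # 3*1 - 2*1 = 1
```

The finiteness condition is exactly what makes such a representation possible: each element stores only finitely many coefficients, and the induced map is a finite sum.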

Jamie Radcliffe