
I've been thinking really hard about the dual space recently, hoping to understand it in terms of codimension-1 subspaces and to see whether there is a reasonable way to define scalar multiplication and addition on this space. Needless to say, I gave up. The best I could come up with is the following (on finite-dimensional spaces):

Every non-zero linear functional $f$ has a kernel of dimension $n-1$, and maps a unique vector to the scalar $1$. The former is by rank-nullity, and hence $f^{-1}(1)=v_f+w$ for a unique $v_f \in f^{-1}(\operatorname{Im}(f))$, which is of dimension $1$, and any $w \in \ker(f)$. Let $\phi: f \mapsto v_f$. Then it is obvious that $\phi$ is a bijection. If we want it to be a homomorphism, we need e.g. $\phi^{-1}(v_f+a\cdot v_g)=h$ such that $h(v_f+a \cdot v_g)=1$ and $h=f+a\cdot g$. But the latter evaluates to $a+1$. So in a sense the problem is just scaling. Does this make any sense? Is there any way this can be used for intuition or salvaged to say something reasonable? How does a basis/an inner product fix this problem?

Derek
  • Am I missing something? If $f(v_0)=1$, then $f(v_0+v)=1$ for all $v \in \ker f$. – azif00 Apr 12 '25 at 04:30
  • Yeah, I don’t know what you are using the word “unique” to mean. The fibers of points are cosets of the kernel. – Malady Apr 12 '25 at 05:04
  • Also, $f^{-1}(\mbox{Im}f)=\mbox{dom}f$, which obviously does not have dimension $1$ in general. – Malady Apr 12 '25 at 05:05
  • @Malady Well, the kernel is an $(n-1)$-dimensional subspace, and I'm saying take the unique $v$ that has no component in the kernel. – Derek Apr 12 '25 at 05:11
  • You haven't said what $n$ or $V$ is, but some vector spaces have infinite dimensions. In finite dimensions, there is always a linear bijection. – Thomas Andrews Apr 12 '25 at 06:08
  • 1
    Basically, you should state your terms and definitions up front. – Thomas Andrews Apr 12 '25 at 06:10

1 Answer


Does this make any sense?

Not quite, as stated. Without an inner product, you don't have a way to choose a "unique" $v_f$ like you want. In particular, you don't have a concept of orthogonal projection, which you would need to make sense of "the unique $v$ that has no component in the kernel" (from the comments).
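
To make the non-uniqueness concrete, here is a small example of my own (in $\mathbb{R}^2$, with one particular functional):

$$f:\mathbb{R}^2\to\mathbb{R},\qquad f(x,y)=x,\qquad \ker f=\{(0,t):t\in\mathbb{R}\},\qquad f^{-1}(1)=\{(1,t):t\in\mathbb{R}\}.$$

Every vector $(1,t)$ is sent to $1$, and the linear structure alone does not prefer any of them. The standard inner product singles out $(1,0)$, the unique element of $f^{-1}(1)$ orthogonal to $\ker f$; a different inner product, say the one with matrix $\begin{pmatrix}1&1\\1&2\end{pmatrix}$, singles out $(1,-\tfrac12)$ instead.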

How does a basis/an inner product fix this problem?

If you have an inner product, you can pair $v\in V$ with $\langle v, \cdot\rangle \in V^*$. In fact, you can make do with a bit less than a true inner product. See Isomorphisms Between a Finite-Dimensional Vector Space and its Dual.
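
As a sketch of how that pairing works (a standard construction; the notation $\Phi$ is mine), take a finite-dimensional real inner product space $V$ and define

$$\Phi:V\to V^*,\qquad \Phi(v)=\langle v,\cdot\rangle.$$

Then $\Phi$ is linear and injective ($\Phi(v)=0$ forces $\langle v,v\rangle=0$, hence $v=0$), so it is an isomorphism by dimension count. Note that $\Phi$ avoids the normalization in the question: if $f=\Phi(v)$, then $f(v)=\|v\|^2$ rather than $1$. Insisting on $f(v_f)=1$ instead forces $v_{af}=v_f/a$, so that assignment cannot be linear, which is exactly the scaling problem you ran into.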

A basis is an even stronger gadget, from which you can define an inner product. As soon as you have a basis, you can employ all your favorite tricks for matrices of numbers. In particular, you can take the transpose of a column vector to get a row vector.
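
For instance (a dual-basis sketch of the "transpose" remark above): given a basis $e_1,\dots,e_n$ of $V$, let $e^1,\dots,e^n\in V^*$ be the dual basis, characterized by $e^i(e_j)=\delta_{ij}$, and set

$$\Psi\Big(\sum_i c_i e_i\Big)=\sum_i c_i e^i.$$

In these coordinates $\Psi$ is literally "transpose the column vector $(c_1,\dots,c_n)^T$ into the row vector $(c_1,\dots,c_n)$", and the bilinear form $\langle v,w\rangle:=\Psi(v)(w)$ is (in the real case) the inner product for which $e_1,\dots,e_n$ is orthonormal.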

Is there any way this can be used for intuition or salvaged to say something reasonable?

Once you give up on defining an isomorphism of your own, you can turn the question around and start asking why the attempt had to fail. See In categorical terms, why is there no canonical isomorphism from a finite dimensional vector space to its dual?
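
One quick way to phrase the obstruction discussed there (a sketch, not the full categorical statement): a family of isomorphisms $\Phi_V:V\to V^*$, defined for every $V$ with no extra choices, would have to satisfy $\Phi_V=T^*\circ\Phi_W\circ T$ for every linear map $T:V\to W$. Taking $V=W$ and $T=\lambda\,\mathrm{id}_V$ gives $T^*=\lambda\,\mathrm{id}_{V^*}$, hence

$$\Phi_V=T^*\circ\Phi_V\circ T=\lambda^2\,\Phi_V,$$

which fails as soon as $\lambda^2\neq1$. So no uniform recipe for $\Phi_V$ can commute with all linear maps; any particular isomorphism $V\cong V^*$ encodes a choice of extra structure (a basis, an inner product, or more generally a nondegenerate bilinear form).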

Chris Culter