11

I am trying to see if someone can help me understand the isomorphism between $V$ and $V''$ a bit more intuitively.

I understand that the dual space of $V$ is the set of linear maps from $V$ to $\mathbb{F}$. i.e. $V' = \mathcal{L}(V, \mathbb{F})$.

Therefore, the double dual of $V$ is the set of linear maps from $V'$ to $\mathbb{F}$, or $V'' = \mathcal{L}(V', \mathbb{F})$. That is to say, $V''$ is the set of linear functionals on linear functionals on $V$.

The part that gets me tripped up is the natural isomorphism $\varphi: V \rightarrow V''$, where $\varphi(v)(f)=f(v)$ for $f \in V'$. I know how the proof that this is an isomorphism goes, but I am having trouble understanding it intuitively.

I think of an isomorphism as a bijective map that tells me how to "relabel" elements in the domain as elements in the codomain. For example, the subspace $\{(0,y) \mid y \in \mathbb{R} \} \subset \mathbb{R}^2$ is isomorphic to the subspace $\{(x,0) \mid x \in \mathbb{R} \} \subset \mathbb{R}^2$. One particular isomorphism is the map $T$ between these subspaces defined by $(0,y) \mapsto (y,0)$. The rule is clear: take the input and flip the coordinates. In particular, it tells me explicitly how to go from one vector space to the other.

However, when I try to figure out what the rule is for $\varphi: V \rightarrow V''$ in words, I'm a little stuck.

$\varphi$ takes any $v \in V$ and finds a unique map $g \in \mathcal{L}(V', \mathbb{F})$. How does it "find" this unique map $g$? The definition $\varphi(v)(f)=f(v)$ seems only to describe what you do with $g$, namely evaluate it at an input $f$; it doesn't tell me what this $g$ is in a way that's as satisfying as the example with $\mathbb{R}^2$ above.

Another way to pose my question is, how would you define $\varphi:V \rightarrow V''$ using the "maps to" symbol? $v \mapsto .....?$ I'm not sure what should be in the place of the .....

Snowball
  • $g$ is the map $f\mapsto f(v)$, evaluation at $v$. So, $\varphi$ is the map $v\mapsto (f\mapsto f(v))$, the map that sends $v$ to the functional 'evaluation at $v$'. – conditionalMethod Dec 05 '19 at 09:02
  • Just to clarify on the saying "evaluation at $v$", which I've seen in numerous places. If $g$ is just a map in $V''$ (not in the context of this isomorphism), is it automatically endowed with a $v \in V$? In other words, when I think of any $g \in V''$, do I think of it as having an $f \in V'$ and a $v \in V$ as inputs? Previously I've only been thinking of $g$ as having $f$ as an input, and somehow there's a way to associate each of these $f$ with $\mathbb{F}$, which may be why I'm slightly confused. – Snowball Dec 05 '19 at 09:34
  • One thing that may be adding to your confusion: $\varphi$ is always linear and injective, but it is only surjective (and therefore an isomorphism) when $V$ is finite dimensional (or when your definition of "dual space" is more than just "linear maps into the scalar field"). Therefore, it doesn't directly correspond to your $\Bbb R^2$ example, as there isn't a natural way to find the $v \in V$ that maps to a given $g \in V''$. – Paul Sinclair Dec 05 '19 at 17:59
  • @PaulSinclair So if I limited it to $V$ being finite dimensional, and "dual space" is just linear maps into the scalar field, then is this still true? "Therefore, it doesn't directly correspond to your $\mathbb{R}^2$ example, as there isn't a natural way to find the $v\in V$ that maps to a given $g \in V''$?" – Snowball Dec 05 '19 at 18:53
  • @Snowball Have you any experience with functional programming, in particular, are you familiar (or at least somewhat acquainted) with currying? If so: $\varphi$ is the curried version of the evaluation map $\eta \colon V \times V' \to \mathbb{F}$. – Daniel Fischer Dec 05 '19 at 19:05
  • @DanielFischer No I don't. But I saw it mentioned in celtschk's post. – Snowball Dec 05 '19 at 19:56
  • @Snowball - It is an isomorphism for finite dimensional vector spaces. But my point is that if there were some natural way of defining $\varphi^{-1}(g)$, much like the definition of $\varphi(v)$, then you would have an obvious back-and-forth, making it easier to understand the relationship. But such a natural formula for the inverse would work for infinite dimensions as well as finite, and so no such formula can exist. This is true even when you restrict to finite dimensions. – Paul Sinclair Dec 05 '19 at 21:51
  • @DanielFischer Thank you for mentioning functional programming. I studied math and comp-sci in Uni and I couldn't understand dual spaces or pullbacks at all until I started writing stuff out using Lambda Calculus. My professor basically thought I lost my mind. I would love to see a pedagogical approach that covered advanced linear algebra from this perspective. – Searke Dec 10 '19 at 18:53

5 Answers

11

Maybe it helps if we first widen our view, in order to then narrow it again and see the double-dual as a special case.

So let's start with functions (any functions, for now) $f:X\to Y$. As a concrete example, take $X=Y=\mathbb R$. That is, we are dealing with real-valued functions of a real argument: $f:\mathbb{R} \to \mathbb{R}$. Examples are

  • the identity $\mathrm{id} = x\mapsto x$,
  • the constant functions $\mathrm{const}_c = x\mapsto c$, and
  • the trigonometric functions $\sin$ and $\cos$.

The normal way to look at functions is to think of them as encoding the operation. For example, it is a property of the function $\sin$ that it maps the number $\pi$ to the number $0$: $$\sin(\pi) = 0$$

But another view is that the result of applying the function $\sin$ to the number $\pi$ gives the number $0$, and it is the act of applying that carries all the logic. So you have one function $\mathrm{apply}$ that takes two arguments, a real function and a real number, and assigns to them another number: $$\mathrm{apply}(\sin,\pi)=0$$

Now looking at this form, we see that $\sin$ and $\pi$ are on equal footing. Both are merely arguments of the $\mathrm{apply}$ function. You recover the original sine function by “pre-inserting” $\sin$ as the first argument of $\mathrm{apply}$ (this is known as currying): $$x\mapsto \mathrm{apply}(\sin,x)$$

But given that both arguments are on equal footing, you may just as well pre-apply the second argument instead: $$f\mapsto \mathrm{apply}(f,\pi)$$

We might consider this the application of $\pi$ to the function $f$. Thus $\mathrm{apply}(\sin,\pi)$ could equivalently be written as $$\pi(\sin) = 0$$

In this manner, we can define a set of “number functions” $$\{f\mapsto \mathrm{apply}(f,c) \mid c \in\mathbb{R}\} \tag{1}$$

So now, from each real number $c \in \mathbb{R}$, we get a function that maps real functions to real numbers. Note that just like the function $\sin$ is not determined just by the value $\sin(\pi)$, but by the values it takes for all real numbers, similarly, the function $\pi$ is not determined just by the value it takes at $\sin$, but by the values it takes for all real functions. That is, we not only have $\pi(\sin)=0$, but also $\pi(\cos)=-1$, $\pi(\mathrm{id})=\pi$ and $\pi(\mathrm{const}_c)=c$.
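To make the symmetry between the two curried forms concrete, here is a minimal Python sketch (the function names such as `number_pi` are just illustrative):

```python
import math

def apply(f, x):
    """Two-argument application: apply(f, x) = f(x)."""
    return f(x)

# Pre-inserting the first argument recovers the ordinary function:
curried_sin = lambda x: apply(math.sin, x)   # x -> sin(x)

# Pre-inserting the second argument gives a "number function":
number_pi = lambda f: apply(f, math.pi)      # f -> f(pi)

print(curried_sin(math.pi))       # ~0.0
print(number_pi(math.sin))        # ~0.0    "pi(sin) = 0"
print(number_pi(math.cos))        # -1.0    "pi(cos) = -1"
print(number_pi(lambda x: x))     # 3.14...  "pi(id) = pi"
print(number_pi(lambda x: 7.0))   # 7.0     "pi(const_7) = 7"
```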

Note also that the set of real functions $$\mathcal F := \{f : \mathbb R \to \mathbb R\} \tag{2}$$ forms an $\mathbb R$-vector space under pointwise addition $f_1 + f_2 = x \mapsto f_1(x) + f_2(x)$ and scalar multiplication $cf = x \mapsto cf(x)$. It is easily determined that the “number functions” in (1) are linear functions on $\mathcal F$; that is, they live in the dual space $\mathcal F^*$.

However, the set in (1) is only a proper subset of the dual space of $\mathcal F$ in (2) because it doesn't include the constant function $f\mapsto 0$ (as there is no real number that is mapped to $0$ by all real functions). Indeed, that example shows that (1) is not even a vector subspace of $\mathcal F^*$ because it does not include the zero element.

We have, however, an injection into that dual, as we can identify each number by looking only at the function values. The easiest choice is to apply each number to the identity function (that returns the number itself), but even if we did not have that available (as will be the case below), we could e.g. look at the functions that are $1$ for exactly one number, and $0$ for all others; with those functions, we can uniquely identify the number by just noting which of those functions give a value of $1$.

Now let's look instead at a vector space $V$ over a field $K$, and at linear functions $V\to K$, that is, members of the dual $V^*$. Again, we can play the same game as above: for each vector, we get a function mapping members of $V^*$ to $K$, that is, an element of the dual of $V^*$, which is the double dual of $V$.

However, now that we have only linear functions, we get more than above: The function that maps vectors to members of the double dual can easily be shown to be linear itself. And again, we can construct a set of functions in $V^*$ that uniquely identifies the vector: Choose a basis $\{b_i\}$ in $V$, and then take the set of linear functions $f_i$ that map $v = \sum_i\alpha_i b_i$ to $\alpha_i$. Since a vector is uniquely identified by its basis coefficients, this proves that the map $V\to V^{**}$ is injective: You can uniquely identify the vector by the values $v(f_i)=\alpha_i$.

celtschk
6

How would you define $\varphi:V \rightarrow V''$ using the "maps to" symbol?

We can write $$\begin{aligned}\varphi:V&\longrightarrow V''\\ v&\longmapsto\left( {\begin{aligned} g_v:V'&\to\mathbb F\\ f&\mapsto f(v) \end{aligned}}\right) \end{aligned}$$ Therefore, $$\varphi(v)=g_v$$ and thus $$(\varphi(v))(f)=g_v(f)=f(v)$$

In short: $\varphi$ is the map $v\mapsto g_v$ where, for each fixed $v\in V$, $g_v$ is the map $f\mapsto f(v)$.
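Translated into Python-style lambdas (purely as an illustration of the nesting of the two "maps to" arrows above):

```python
# phi : V -> V'', written exactly as v |-> (f |-> f(v))
phi = lambda v: (lambda f: f(v))

# Tiny example with V = R^2, where a functional is just a Python function:
v = (3, 4)
first_coordinate = lambda u: u[0]   # a functional f in V'
g_v = phi(v)                        # g_v = phi(v), an element of V''
print(g_v(first_coordinate))        # 3, i.e. f(v)
```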


Edit (in response to the comments)

Example: Let $V$ be the vector space of polynomials. In this case, $\varphi$ is the map that takes a polynomial $p$ to the linear map $g_p$ defined by $$g_p(f)=f(p),\quad \forall \ f\in V'.$$ For example:

  • if $f:V\to\mathbb F$ is the linear functional that evaluates a polynomial $p$ at the value $1$ (that is, $f(p)=p(1)$), then $$g_p(f)=p(1).$$ In particular,
    • $g_{x^2-1}(f)=0$
    • $g_{x^2+1}(f)=2$
    • $g_{x-1}(f)=0$
  • if $h:V\to\mathbb F$ is the linear functional that evaluates a polynomial $p$ at the value $2$ (that is, $h(p)=p(2)$), then $$g_p(h)=p(2).$$ In particular,
    • $g_{x^2-1}(h)=3$
    • $g_{x^2+1}(h)=5$
    • $g_{x-1}(h)=1$
  • if $i:V\to\mathbb F$ is the linear functional that integrates a polynomial $p$ over $[0,1]$ (that is, $i(p)=\int_0^1 p(t)\;dt$), then $$g_p(i)=\int_0^1 p(t)\;dt.$$ In particular,
    • $g_{x^2-1}(i)=-\frac{2}{3}$
    • $g_{x^2+1}(i)=\frac{4}{3}$
    • $g_{x-1}(i)=-\frac{1}{2}$

Remark: The image of $p\in V$ under $\varphi$ is the functional $g_p$ (not the value of $g_p$ at some particular functional). Therefore, the fact that $g_{x^2-1}(f)=0$ and $g_{x-1}(f)=0$ (for the particular $f$ in the example above) does not violate the injectivity of $\varphi$, because the images of $x^2-1$ and $x-1$ under $\varphi$ are not $0$. To violate injectivity, there would have to exist distinct $p,q\in V$ such that $$\varphi(p)=\varphi (q),$$ that is, $$g_p(f)=g_q(f),\quad \forall\ f\in V'$$ (for all $f$, not only for a particular $f$).
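Here is a minimal Python sketch that re-derives the table of values above, representing a polynomial by its list of coefficients (the helper names are just illustrative):

```python
from fractions import Fraction

# A polynomial p = c_0 + c_1 x + c_2 x^2 + ... is stored as [c_0, c_1, c_2, ...].
def eval_at(a):
    """The functional p |-> p(a)."""
    return lambda p: sum(c * Fraction(a) ** k for k, c in enumerate(p))

def integrate_01(p):
    """The functional p |-> integral of p over [0, 1]."""
    return sum(Fraction(c, k + 1) for k, c in enumerate(p))

# phi sends a polynomial p to the functional g_p, "evaluation of functionals at p".
phi = lambda p: (lambda functional: functional(p))

f, h, i = eval_at(1), eval_at(2), integrate_01

for name, p in [("x^2-1", [-1, 0, 1]), ("x^2+1", [1, 0, 1]), ("x-1", [-1, 1])]:
    g_p = phi(p)
    print(name, g_p(f), g_p(h), g_p(i))
# x^2-1  0  3  -2/3
# x^2+1  2  5  4/3
# x-1    0  1  -1/2
```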

Pedro
  • Thanks for the quick answer. So let's say $V$ was the vector space of polynomials. $f: V \rightarrow \mathbb{F}$ is the linear functional that evaluates a polynomial at the value 1. What you are saying is that $\varphi$ is the map that takes a polynomial, say $x^2-1$ to a linear map, $g$, which evaluates $x^2-1$ at 1, which gives us 0? (I am tempted to say "which evaluates 1 at $x^2-1$, since $f$ represents evaluation at 1, and $v$ is the polynomial, which is the reverse of what I'm used to.) – Snowball Dec 05 '19 at 09:45
  • But then wouldn't another $v \in V$, say $x-1$, evaluated at $1$, give us $0$, and thus imply $\varphi$ isn't injective? – Snowball Dec 05 '19 at 09:50
  • @Snowball See my edit in the post – Pedro Dec 05 '19 at 11:35
  • Thanks for the edit. Very helpful. – Snowball Dec 05 '19 at 18:27
  • One more follow up - if I were to look at the maps $g$ in the vector space $V''$ outside of the context of our isomorphism, then any map $g$ in $V''$ at this point does not have a $v$ associated with it; it merely asks for an $f \in V'$ as input. Continuing with our polynomial example above, $g$ is asking "what do I evaluate at?" Once you give $g$ an $f$, $g$ is the map from this $f$ to $\mathbb{F}$. What I find odd is that it is this $f$ that demands a polynomial as an input, not the $g$. So it seems that $g$ could live happily without ever being given a $v$... – Snowball Dec 05 '19 at 18:47
  • But in your examples above, $g$ is identified by the $v$. How can $g$ then be uniquely identified by a $v$? In other words, $g$ only needs the "evaluation at" functional. And this "evaluation at" functional needs a polynomial. But we directly say, $g_{x^2-1}(f)=0$, for example, in the context of the isomorphism from $v$ to $g$. – Snowball Dec 05 '19 at 18:51
1

A shorthand way to write partially evaluated functions is to leave a $-$ sign (pronounced “blank”) in the place of an argument. As an example, if $v \in \mathbb{R}^n$ and $\cdot$ is the dot product, we have a functional $(v \cdot -) \in (\mathbb{R}^n)^*$ given by taking the dot product with $v$, meaning $(v \cdot -) = (u \mapsto (v \cdot u))$. For instance, the hyperplane orthogonal to $v$ is the set of points where the function $(v \cdot -)$ evaluates to zero.
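In Python, for instance, the blank can be filled in with `functools.partial` (a small illustrative sketch, not a fixed recipe):

```python
import functools
import numpy as np

def dot(v, u):
    """The full two-argument pairing (v, u) |-> v . u."""
    return float(np.dot(v, u))

v = np.array([1.0, 2.0, 3.0])
v_dot_blank = functools.partial(dot, v)   # the functional (v . -)

u = np.array([3.0, 0.0, -1.0])            # chosen orthogonal to v
print(v_dot_blank(u))                     # 0.0: u lies in the hyperplane orthogonal to v
```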

Now, if $V$ is any vector space and $V^*$ is its dual, then for $v \in V$ and $f \in V^*$ introduce the alternative notation $\langle v, f \rangle = f(v)$. (I like this notation because it reminds me that $(v, f) \mapsto f(v)$ is bilinear, and puts $V$ and $V^*$ on more equal footing). There are two canonical partial evaluations we can do:

  1. The map $V^* \to V^*$ defined by $f \mapsto \langle -, f\rangle$ is the identity map.
  2. The map $V \to V^{**}$ defined by $v \mapsto \langle v, - \rangle$ is the canonical injection into the double dual.
Joppy
0

This natural isomorphism only arises for finite-dimensional vector spaces. Do note that there exist isomorphisms between $V$ and $V^*$ as well, but these need coordinates (or rather, an inner product) to be properly defined, so they're never a "natural" isomorphism. (Fun fact: it's apparently this very question, of a bijection which needed extra properties to work well (i.e., not "natural"), that led Eilenberg and Mac Lane to develop category theory.)

My way of seeing this question intuitively is the following.

1) $V \simeq L(K, V)$

Why? Your vectors in $V$ are column vectors, and are thus $n \times 1$ matrices, so they correspond to maps from $K$ (dimension $1$) to $V$ (dimension $n$). (This is another way of understanding vectors, as functions from scalars into vectors.)

Fun fact: $K \simeq L(K, K)$, even as a $K$-algebra isomorphism, where multiplication of scalars is composition of functions.

2) $V^* := L(V, K)$

What are elements of $V^*$, covectors, as matrices? Covectors are simply row vectors, so $1 \times n$ matrices, which take an $n$-vector and return a scalar.

3) Going from $V$ to $V^*$, or $L(K, V)$ to $L(V, K)$

How do you go from one to the other? The (conjugate) transpose. But since the (finite-dimensional, conjugate) transpose is an involution, you get back what you started with, i.e., elements of $V^{**}$ are column vectors just like elements of $V$.
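A small numpy sketch of points 1) to 3), under the usual identification of vectors with $n \times 1$ matrices (purely illustrative):

```python
import numpy as np

v = np.array([[1.0], [2.0], [3.0]])  # a vector of V as an n x 1 matrix, i.e. a map K -> V
c = np.array([[2.0]])                # a scalar as a 1 x 1 matrix
print(v @ c)                         # the map K -> V applied to the scalar 2: the column 2v

covector = v.T                       # its transpose, a 1 x n matrix: a covector in V*
print(covector @ v)                  # [[14.]], the covector applied to v gives a scalar

print(np.array_equal(v.T.T, v))      # True: transposing twice lands back in column-vector form
```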

This makes sense if you consider bra-ket style handling of vector spaces and their duals. For the double dual, you want a map that returns a scalar from a covector, in a linear way. What allows you to return a scalar from a covector $\langle \phi|$? Simply a vector $|u\rangle$. So it makes sense that you'd have precisely the same possibilities for evaluation maps $\epsilon_u$ as you do for vectors $u$, i.e. an isomorphism $V \simeq V^{**}$ given by $\epsilon_u(\langle \phi|) = \langle \phi | u \rangle$.

4) Infinite dimensions

In infinite dimensions, the canonical map $\varphi : V \to V^{**}$ is still injective (see the comments below for a short proof), but it is no longer surjective, so it is not an isomorphism.

  • Are you saying that $\varphi:V\to V''$ (as defined by the OP) cannot be an isomorphism if $V$ is infinite dimensional? – Pedro Dec 05 '19 at 13:56
  • That's what I seem to get from what I read on the subject (the injectivity of the dual map is often stated for infinite dimensional vector spaces, but not justified, nor accompanied by examples; sometimes it's presented as the idea that "your dual-of-dual-of-dual spaces just keep getting larger" in a cardinal sense). So since I'm not an expert on infinite-dimensional vector spaces and their duals, I can't really help you any more than that. Sorry ! :D – Tristan Duquesne Dec 05 '19 at 15:42
  • @Pedro See here. For infinite-dimensional spaces, the dimension of the [algebraic] dual is strictly larger, hence a fortiori the dimension of the double dual. – Daniel Fischer Dec 05 '19 at 18:57
  • Proving the injectivity of $\varphi$ is easy. Take $0 \neq v \in V$. Extend to a basis, define $\lambda \in V'$ by $\lambda(v) = 1$ and $\lambda(w) = 0$ for all $w \neq v$ in the basis. Then $\varphi(v)(\lambda) = 1 \neq 0$, so $\varphi(v) \neq 0$. – Daniel Fischer Dec 05 '19 at 19:00
  • In fact, I've given proving the naturality of this isomorphism as an exercise. (Though not in those terms, just asking them to prove if $T : V \to W$ is linear then $\Phi_W \circ T = T^{**} \circ \Phi_V$ without mentioning where the problem came from. Makes a nice exercise in unfolding the definitions.) – Daniel Schepler Dec 06 '19 at 02:31
  • @DanielFischer and Tristan, thanks for the clarifications. – Pedro Dec 07 '19 at 23:07
0

The intuitive difficulty you are having seems to be that you wish to write $\varphi(v) = g,$ or $v \mapsto g$, where $g$ is an expression that denotes a function in the same way in which $(y, 0)$ denotes an ordered pair, or in which (say) $\{x \in \mathbb{R} : x > 1\}$ denotes a set, so that it doesn't appear as if $g$ somehow magically already exists.

The only way I can think of to do so without either inventing a new notation (the edit history of this answer contains several unnecessary and embarrassingly verbose attempts in that direction) or relying too heavily on an arbitrary choice of a particular set-theoretic construction of a function (as a set of ordered pairs, or as a tuple with an element that is a set of ordered pairs), is to use the notation for a family. You could write: \begin{gather*} \varphi \colon V \to V'', \ v \mapsto (f(v))_{f \in V'}, \\ \text{or }\ \varphi(v) = (f(v))_{f \in V'} \in V'' \quad (v \in V), \end{gather*} or (to press the point - admittedly tastelessly): $$ \varphi = ((f(v))_{f \in V'})_{v \in V} \in \mathscr{L}(V; V''), $$ or any of several other variants (which I must refrain from labouring, as I did in earlier versions of this answer!).