4

Let $A$ and $B$ be two $\mathbb{R}$-vector spaces, and let $F$ be the $\mathbb{R}$-vector space of "smooth enough" functions between them, where the sum is defined pointwise, and by "smooth enough" I mean that all the limits I need exist (I don't want to deal with weird cases).

With these assumptions, I think you can define the set $D$ of all operators from $F$ to $F$ of the form $$ d(f)(x) = \lim_{\epsilon \rightarrow 0} \frac{f(x+\epsilon v) - f(x)}{\epsilon}. $$ (Clearly there is one such operator for each $v \in A$.) (Is this ill-defined somehow?)
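To make the definition concrete, here is a minimal numerical sketch (not part of the question itself), assuming $A = B = \mathbb{R}^2$, a constant direction $v$, and a finite-difference stand-in for the limit; the name `directional_derivative` and the sample `f` are made up for illustration:

```python
import numpy as np

def directional_derivative(f, x, v, eps=1e-6):
    """Finite-difference stand-in for d(f)(x) = lim_{eps -> 0} (f(x + eps*v) - f(x)) / eps."""
    return (f(x + eps * v) - f(x)) / eps

# Sample f : R^2 -> R^2, a point x, and a constant direction v.
f = lambda x: np.array([x[0] * x[1], np.sin(x[0])])
x = np.array([1.0, 2.0])
v = np.array([1.0, 0.0])

print(directional_derivative(f, x, v))  # approximately [2.0, cos(1.0)]
```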

Now take $B$ to be an algebra over $\mathbb{R}$, with some product on its vectors. Then $F$ is an algebra over $\mathbb{R}$ too, with the product defined pointwise, and clearly the definition of $D$ still applies.

But you can now define a new set of operators $L$: the set of all linear operators from $F$ to $F$ that satisfy the Leibniz rule, $l(fg) = l(f)g + f\,l(g)$.
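As a sanity check (again only a sketch, not part of the question), the operators in $D$ do land in $L$; here $A = \mathbb{R}$, $v$ is the constant direction $1$, and $B$ is the algebra of $2\times 2$ real matrices, a non-commutative choice made up for illustration, so the order in $l(f)g + f\,l(g)$ matters:

```python
import sympy as sp

t = sp.symbols('t')

# Two sample elements of F: smooth maps R -> B, with B the (non-commutative)
# algebra of 2x2 real matrices; the product on F is taken pointwise.
f = sp.Matrix([[t, 1], [0, t**2]])
g = sp.Matrix([[sp.cos(t), 0], [sp.sin(t), 1]])

# Directional derivative along the constant direction v = 1 (A = R here).
d = lambda h: sp.diff(h, t)

# Leibniz rule, order kept: d(f*g) - (d(f)*g + f*d(g)) should be the zero matrix.
print(sp.simplify(d(f * g) - (d(f) * g + f * d(g))))
```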

My question is:

Are $L$ and $D$ the same set?

If yes, why?

(For context: it looks like they are the same in the case of $A=\mathbb{R}^d$, $B=\mathbb{R}$.)

If no (as is more likely in general, since the definition of $D$ doesn't use the product at all!): can you provide a counterexample? And, most importantly, which additional conditions on $B$ do you have to impose for $D$ and $L$ to be the same?

For example: is it only true for $B=\mathbb{R}$? What about $B=\mathbb{C}$? What if $B$ has finite dimension?

And finally (which is of course my real question, although it is only a soft question): why does $L$ turn out to be a more interesting way than $D$ to extend the concept of derivative to more abstract structures? What's interesting about derivatives over algebras? Any insight on this would be appreciated.

Micoloth
  • https://math.stackexchange.com/questions/4878410/what-does-an-exotic-derivation-at-a-point-x-0-in-mathbbrn-look-like/4878635#4878635 – Moishe Kohan Apr 26 '24 at 22:47
  • Thanks Moishe.. Very interesting. Can you explain what you mean by “vector space V=m/m^2” in your answer? I’ve seen that notation before but I don’t know how to read it – Micoloth Apr 27 '24 at 09:08
  • Do you know how to define the quotient of a ring by an ideal? – Moishe Kohan Apr 27 '24 at 11:25
  • Ahh.. Yes, i don't have much experience w it but I know the idea. And what operation is m^2 ? – Micoloth Apr 27 '24 at 13:33
  • It is the square of an ideal, https://math.stackexchange.com/questions/290229/explaining-the-product-of-two-ideals – Moishe Kohan Apr 27 '24 at 13:43

1 Answer

1

Unfortunately, your question suffers from certain deficiencies:

(1) You did not define what "smooth enough" means, especially if $A$ is infinite-dimensional.

(2) You did not define what $v$ is. Is it supposed to be a vector field? If so, of what degree of smoothness? Or maybe you mean a constant vector?

(3) In order to define limits you have to prescribe a topology on $B$ and you did not explain the setup for this.

Let's simplify matters by assuming that $A$ and $B$ are finite-dimensional; then there is a canonical topology on both and we know what $C^r(A,B)$ means for $0\le r\le \infty$.

Then directional derivatives $D_v$ with respect to vector fields $v$ make sense. A map $D: C^r(A,B)\to C^{r-1}(A,B)$ (note that we are not mapping $C^r$ to $C^r$!) is called a derivation if it is $\mathbb R$-linear and satisfies the Leibniz rule: $$ D(fg)= gD(f) + f D(g). $$
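(An example added for illustration of the drop in smoothness, with $A=B=\mathbb{R}$, $r=1$, and $v=\partial_x$: the function $f(x)=x\,|x|$ is $C^1$ but not $C^2$, while $D_v f(x)=2|x|$ is only $C^0$; this is why the target is $C^{r-1}$ rather than $C^r$.)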

I will address your question

Are $L$ and $D$ the same set?

More precisely:

Do all derivations come from directional derivatives along vector fields?

Here are some answers.

  1. Assume that we are working in the $C^\infty$ category, i.e. we consider derivations $C^\infty\to C^\infty$ and use $C^\infty$ vector fields $v$. Then it is elementary (and you can find a proof in many differential geometry textbooks) that all such derivations come from directional derivatives along vector fields.

It suffices to understand the case $A\cong \mathbb R^n, B\cong \mathbb R$. Let $D$ be a derivation. Consider the functions $f_i(x)=x_i$ (the $i$-th coordinate of $x$). By applying $D$ we obtain $$ D(x_i)=v^i(x)\in C^\infty(A). $$ Define a vector field $v(x)=(v^1(x),...,v^n(x))$. Then for a general $f\in C^\infty(A)$ we get
$$ D(f)= D_v(f). $$ To prove this, first note that $D(1)=0$ (where $1$ is the constant function identically equal to $1$). Thus, $D$ vanishes on all constant functions. Let's check that $D(f)= D_v(f)$ at $0$ for functions $f$ such that $f(0)=0$ (which can be achieved by subtracting $f(0)$ from $f$ and using linearity of $D$). Then $$ f(x)=\sum_{i=1}^n x_i g_i(x), $$ for some functions $g_i\in C^\infty(A)$ (this is where the proof breaks down in the case $r<\infty$). Thus, $$ g_i(0)=\frac{\partial}{\partial x_i} f (0).$$ Applying the Leibniz rule we get $$ Df(0)= \sum_{i=1}^n D(x_i) g_i(0)= \sum_{i=1}^n D(x_i) \frac{\partial}{\partial x_i} f (0) = \sum_{i=1}^n v^i(0) \frac{\partial}{\partial x_i} f (0)= D_v(f)(0), $$ as required. The verification at points other than $0$ is similar and I omit it. (A small symbolic check of this recipe is sketched after the second item below.)

  2. Assume that we are working in the $C^1$ category, i.e. we consider derivations $C^1\to C^0$ and use $C^0$ vector fields $v$. It is then proven in Proposition 2 in

Osborn, Howard, Intrinsic characterizations of tangent spaces, Proc. Am. Math. Soc. 16, 591-594 (1965). ZBL0138.43003.

that all derivations in this setting again come from directional derivatives.
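As a small illustration of the recipe in item 1 (read the vector field off the coordinate functions, then compare $D$ with $D_v$), here is a SymPy sketch, not from the original argument; the sample derivation is simply written down as a first-order operator:

```python
import sympy as sp

x1, x2 = sp.symbols('x1 x2')
coords = (x1, x2)

# A sample derivation on C^infty(R^2), chosen only to illustrate the recipe.
D = lambda h: x2 * sp.diff(h, x1) + (x1**2 + 1) * sp.diff(h, x2)

# Step from the proof: read the vector field off the coordinate functions.
v = [D(xi) for xi in coords]           # v = (D(x1), D(x2)) = (x2, x1**2 + 1)

# Check D(f) = D_v(f) = sum_i v^i * df/dx_i on a sample f.
f = sp.sin(x1 * x2) + x1**3
D_v_f = sum(vi * sp.diff(f, xi) for vi, xi in zip(v, coords))
print(sp.simplify(D(f) - D_v_f))       # 0
```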

Lastly, regarding

And finally (which is of course my real question, although it is only a soft question): why does $L$ turn out to be a more interesting way than $D$ to extend the concept of derivative to more abstract structures?

The main reason is that it allows one to define vector fields on differentiable manifolds; see for instance the discussion here.

Moishe Kohan