
In short, my question – which has come up in a mechanism design setting I'm working on – is the following. Let $f,g\colon \mathbb{R} \rightarrow \mathbb{R}$ be continuous functions, with $f$ non-decreasing and non-negative. For each $y\in\mathbb{R}$ consider the function $$ r_y\colon \mathbb{R} \rightarrow \mathbb{R}\colon x \mapsto f(x)\cdot y-g(x). $$ What do $f,g$ have to look like for $r_y$ to attain its global maximum at $x=y$ for all $y\in \mathbb{R}$?
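(As a quick illustration of the setup, here is a minimal numeric sketch with the hypothetical smooth pair $f(x)=x$, $g(x)=x^2/2$, for which $r_y(x)=xy-x^2/2$ is indeed maximized at $x=y$; the grid and the choice of $f,g$ are just for illustration, not part of the question.)

```python
import numpy as np

# Illustrative (hypothetical) choice: f(x) = x, g(x) = x**2 / 2,
# so r_y(x) = x*y - x**2/2, which is maximized at x = y.
f = lambda x: x
g = lambda x: x**2 / 2

xs = np.linspace(-5, 5, 10001)
for y in (-2.0, 0.0, 1.5):
    r_y = f(xs) * y - g(xs)
    x_star = xs[np.argmax(r_y)]
    print(f"y = {y:5.2f}, grid argmax of r_y = {x_star:5.2f}")
    # Expected: x_star is approximately y in each case.
```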

Of course, if $f,g$ are sufficiently "well-behaved", we can go through the usual motions. The first-order condition $r_y'(y)=0$ must hold for all $y$, i.e. $g'(y)=f'(y)\,y$, and via the fundamental theorem of calculus $$ g(x)=C+\int_0^x f'(\zeta)\,\zeta\, d\zeta. $$
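Here is a hedged numeric sketch of this recipe for one assumed well-behaved example, $f(x)=e^x$ (smooth, positive, increasing): build $g$ via the formula above with $C=0$ and check on a grid that $r_y$ peaks at $x=y$. The particular $f$ and the numerical integration are my own illustration.

```python
import numpy as np
from scipy.integrate import quad

# Assumed example: f(x) = e^x (positive, increasing, smooth).
# Build g(x) = integral_0^x f'(z) * z dz (constant C = 0) and check that
# r_y(x) = f(x)*y - g(x) is maximized at x = y on a grid.
f = np.exp
fprime = np.exp

def g(x):
    val, _ = quad(lambda z: fprime(z) * z, 0.0, x)
    return val

xs = np.linspace(-3, 3, 601)
gs = np.array([g(x) for x in xs])
for y in (-1.0, 0.5, 2.0):
    r_y = f(xs) * y - gs
    print(f"y = {y:4.1f}, grid argmax of r_y = {xs[np.argmax(r_y)]:4.1f}")
    # Expected: argmax approximately equals y.
```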

But my calculus/real analysis knowledge is insufficient to tell whether the premises of the problem ensure that $f,g$ are sufficiently well-behaved, or what happens if they are not.

Here are some more thoughts of mine. It's been a while since I took advanced calculus, so some of this might be confused.

  • Instead of talking about derivatives, we can talk about (ratios of) differences. In particular, for all $d>0$ and $y\in \mathbb{R}$, we must have $r_y(y+d)-r_y(y)\leq 0$ and $r_y(y)-r_y(y-d)\geq 0$. Rearranging a little (and shifting $y$ by $d$ in the second inequality), this is equivalent to requiring, for all $d>0$ and $y\in \mathbb{R}$, $$ y\,(f(y+d)-f(y))\leq g(y+d)-g(y)\leq (y+d)\,(f(y+d)-f(y)). $$ This is nice because it's similar to the relationship between the derivatives. (Dividing by $d$ and letting $d\rightarrow 0$ recovers the relationship between the derivatives where they exist.) But it doesn't immediately answer questions about what kinds of $f$ are allowed or how to construct the corresponding $g$. (See below for problematic examples.)

  • Lebesgue's theorem on the differentiability of monotone functions seems to imply that $f$ and $g$ are differentiable almost everywhere. (The previous point implies that if $f$ is monotone, then $g$ is monotone on $(-\infty,0)$ and on $(0,\infty)$.) So we can talk about the derivatives of $f$ and $g$ almost everywhere. But we might not be able to obtain (an "allowed") $g$ by integrating $g'(x)=f'(x)\,x$. For example, if $f$ is the Cantor function on $[0,1]$, then $$ \int_0^x f'(\zeta)\,\zeta\, d\zeta = 0, $$ right (see, e.g., sect. 2 here)? But if $f$ is the Cantor function and $g=0$, then $r_{\frac{1}{2}}(\frac{1}{2})<r_{\frac{1}{2}}(1)$, i.e. $r_{\frac{1}{2}}$ does not have a maximum at $1/2$ (see the numerical sketch after this list). So can $f$ be the Cantor function? That is, if $f$ is the Cantor function, is there a corresponding $g$ (s.t. $r_y$ has a maximum at $y$ for all $y$), and if so, what is it? More generally, can $f$ be a function that we cannot recover by integrating its derivative, and if so, is there a way of finding the corresponding $g$?

  • The above point asks whether $g$ might fail to be an integral of $g'$. Another question is whether $g'$ (or $f'$) must be integrable at all. Of course, there are continuous functions whose derivatives are not integrable, but the examples I am aware of aren't even monotone (not to mention our other requirements for $f,g$).

  • One standard move to replace derivatives is to use sub-/super-derivatives, but that seems to require the functions in question to be convex/concave, and none of $r_y$, $f$, $g$ has to be convex/concave everywhere.
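The numerical sketch promised above: a hedged check of the Cantor-function observation in the second bullet, using my own finite-depth approximation of the Cantor function (not from the question). With $f$ the Cantor function and $g=0$, the candidate $r_{1/2}$ is not maximized at $1/2$.

```python
def cantor(x, depth=30):
    """Finite-depth approximation of the Cantor function on [0, 1] (clamped outside)."""
    if x <= 0:
        return 0.0
    if x >= 1:
        return 1.0
    value, scale = 0.0, 0.5
    for _ in range(depth):
        if x < 1/3:
            x *= 3
        elif x > 2/3:
            value += scale
            x = 3*x - 2
        else:               # x lies in the middle third, where the function is flat
            return value + scale
        scale /= 2
    return value

# With f = Cantor function and g = 0 (the candidate obtained by integrating f'(x)*x),
# r_{1/2}(x) = f(x)/2, so r_{1/2}(1/2) = 1/4 < 1/2 = r_{1/2}(1):
# r_{1/2} is NOT maximized at x = 1/2.
y = 0.5
r = lambda x: cantor(x) * y - 0.0
print(r(0.5), r(1.0))   # prints 0.25 0.5
```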


1 Answer


I think I found a solution via "standard" real analysis (no subderivatives, alternative integrals, ...).

Fix $\mu\in\mathbb{R}$. By telescoping, for any $n\in\mathbb{N}_{>0}$, $$ g(\mu) = g(0) + \sum_{i=1}^n \left( g\left(\frac{i\mu}{n}\right)-g\left(\frac{(i-1)\mu}{n}\right)\right). $$

Using the relationship between $f$ and $g$ from the question, applied to each term of the sum, we get $$ \sum_{i=1}^n \frac{(i-1)\mu}{n}\left( f\left(\frac{i\mu}{n}\right)-f\left(\frac{(i-1)\mu}{n}\right)\right) \leq g(\mu) - g(0) \leq \sum_{i=1}^n \frac{i\mu}{n} \left( f\left(\frac{i\mu}{n}\right)-f\left(\frac{(i-1)\mu}{n}\right)\right). $$

The lower and upper bounds can be interpreted as left and right Riemann sums for $f^{-1}$ (assuming $f$ is strictly increasing, so that $f^{-1}$ is defined) over the partition $\left(f\left(\frac{i\mu}{n}\right)\right)_{i=0,\dots,n}$ of $[f(0),f(\mu)]$. Standard integration theory then tells us that the lower and upper bounds converge to the same value as $n\rightarrow \infty$, namely $$ \int_{f(0)}^{f(\mu)} f^{-1}(x)\,dx. $$ So $g(\mu)$ must equal this value plus the constant $g(0)$.
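A hedged numerical illustration of the squeeze, with the assumed strictly increasing example $f(x)=e^x$ (so $f^{-1}=\log$) and $\mu=2$; the example and the grid sizes are mine, not part of the argument.

```python
import numpy as np

# Check numerically that the lower and upper telescoping bounds squeeze
# the same limit, here integral_{f(0)}^{f(mu)} log(x) dx = e^2 + 1 for f = exp, mu = 2.
f = np.exp
mu = 2.0

def bounds(n):
    i = np.arange(1, n + 1)
    df = f(i * mu / n) - f((i - 1) * mu / n)
    lower = np.sum((i - 1) * mu / n * df)
    upper = np.sum(i * mu / n * df)
    return lower, upper

target = (np.exp(mu) * mu - np.exp(mu)) - (1 * 0 - 1)   # = e^2 + 1
for n in (10, 100, 1000, 10000):
    lo, up = bounds(n)
    print(f"n = {n:6d}: lower = {lo:.4f}, upper = {up:.4f}, target = {target:.4f}")
# Both bounds approach e^2 + 1 (about 8.3891) as n grows.
```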
