
Exercise 3.32 (page 119) of Boyd and Vandenberghe's Convex Optimization asks for a proof that if $f:\mathbb{R}\rightarrow\mathbb{R}$ and $g:\mathbb{R}\rightarrow\mathbb{R}$ are both convex, both nondecreasing (or both nonincreasing), and positive, then $h:\mathbb{R}\rightarrow\mathbb{R}:x\mapsto h(x)=f(x)g(x)$ is also convex.

Are there any generalisations or analogues of this claim to the case where both $f$ and $g$ are convex functions mapping $\mathbb{R}^n$ to $\mathbb{R}$, for $n>1$?
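(For concreteness, here is a minimal numerical spot-check of the one-dimensional statement; the particular $f$ and $g$ below are my own choice, not from the exercise.)

```python
# Spot-check of the 1-D claim: if f and g are convex, both nondecreasing,
# and positive, then h = f * g is convex.
import numpy as np

rng = np.random.default_rng(0)

f = lambda x: np.exp(x)               # convex, nondecreasing, positive
g = lambda x: np.maximum(x, 0.0) + 1  # convex, nondecreasing, positive
h = lambda x: f(x) * g(x)

for _ in range(10_000):
    x, y = rng.normal(size=2)
    t = rng.uniform()
    # Convexity: h(t x + (1-t) y) <= t h(x) + (1-t) h(y), up to float error.
    assert h(t * x + (1 - t) * y) <= t * h(x) + (1 - t) * h(y) + 1e-9
print("convexity inequality held at all sampled points")
```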

  • Why is it necessary that $f,g$ be monotone? – a06e Sep 26 '18 at 21:20
  • @becko The statement is not true if we drop monotonicity. For instance, let $f=(|x|+10^{-6})_{x\in\mathbb R}$ and $g=(e^{-x})_{x\in\mathbb R}$; then $fg=((|x|+10^{-6})\cdot e^{-x})_{x\in\mathbb R}$ is not convex. (For instance, $(fg)(0)=10^{-6}$, $(fg)(1)=1.000001/e$, and $(fg)(2)=2.000001/e^2<1.000001/e$ since $2/e<1$, so $(fg)(0),(fg)(2)<(fg)(1)$, hence $fg$ is not quasiconvex and therefore not convex; a numerical check of this counterexample appears after these comments.) – xFioraMstr18 Oct 16 '20 at 00:17
  • See also Exercise 10(g), page 204, in Stromberg's An Introduction to Classical Real Analysis (AMS, 2015), or Exercise 10(g), page 204, in Stromberg's Introduction to Classical Real Analysis (Wadsworth, 1981). This book has lots of exercises, by the way. – xFioraMstr18 Oct 16 '20 at 00:22
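As a quick sanity check of the counterexample in the comment above, here is a small Python sketch (the helper name fg is mine) confirming that $fg$ is larger at the midpoint of $[0,2]$ than at either endpoint, so it is not even quasiconvex:

```python
# f(x) = |x| + 1e-6 is convex, positive, but not monotone;
# g(x) = exp(-x) is convex, positive, nonincreasing.
import math

def fg(x):
    return (abs(x) + 1e-6) * math.exp(-x)

v0, v1, v2 = fg(0.0), fg(1.0), fg(2.0)
print(v0, v1, v2)        # ~1e-6, ~0.368, ~0.271
# Convexity would force fg(1) <= (fg(0) + fg(2)) / 2; the opposite holds.
assert v1 > max(v0, v2)  # fg is not quasiconvex, hence not convex
```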

1 Answer


Yes, a generalization is possible. Here is an elementary approach to the convexity of the product of two nonnegative convex functions defined on a convex domain of $\mathbb{R}^n$.

Choose $x$ and $y$ in the domain and $t$ in $[0,1]$. Your aim is to prove that $\Delta\ge0$ with $$\Delta=t(fg)(x)+(1-t)(fg)(y)-(fg)(tx+(1-t)y). $$ But $f$ and $g$ are nonnegative and convex, hence $$ (fg)(tx+(1-t)y)\le(tf(x)+(1-t)f(y))(tg(x)+(1-t)g(y)). $$ Expanding the right-hand side and using $t-t^2=(1-t)-(1-t)^2=t(1-t)$, one sees that $\Delta\ge t(1-t)D(x,y)$ with $$ D(x,y)=f(x)g(x)+f(y)g(y)-f(x)g(y)-f(y)g(x), $$ that is, $$ D(x,y)=(f(x)-f(y))(g(x)-g(y)). $$ This proves a generalization of the result you quoted to any product of convex nonnegative functions $f$ and $g$ such that $D(x,y)\ge0$ for every $x$ and $y$ in the domain of $f$ and $g$.
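The key inequality $\Delta\ge t(1-t)D(x,y)$ can be checked numerically. The sketch below uses an example pair of my own choosing, $f=\|x\|^2$ and $g=e^{\|x\|^2}$ on $\mathbb{R}^2$; both are nonnegative, convex, and increasing in $\|x\|$, so $D(x,y)\ge0$ holds automatically:

```python
# Check Delta >= t(1-t) * D(x, y) >= 0 for f = ||x||^2, g = exp(||x||^2).
import numpy as np

rng = np.random.default_rng(0)

f = lambda x: np.dot(x, x)            # ||x||^2: nonnegative, convex
g = lambda x: np.exp(np.dot(x, x))    # exp(||x||^2): nonnegative, convex
fg = lambda x: f(x) * g(x)

for _ in range(10_000):
    x = rng.uniform(-1.0, 1.0, size=2)
    y = rng.uniform(-1.0, 1.0, size=2)
    t = rng.uniform()
    delta = t * fg(x) + (1 - t) * fg(y) - fg(t * x + (1 - t) * y)
    d = (f(x) - f(y)) * (g(x) - g(y))
    assert d >= 0                            # both factors share a sign
    assert delta >= t * (1 - t) * d - 1e-9   # the bound from the proof
print("Delta >= t(1-t) D(x, y) held at all sampled points")
```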

In particular, if $f$ and $g$ are differentiable at a point $x$ in the domain, taking $y=x-\varepsilon z$ in the condition $D(x,y)\ge0$ and letting $\varepsilon\to0$ asks that the gradients $\nabla f(x)$ and $\nabla g(x)$ be such that $z^*M(x)z\ge0$ for every $n\times 1$ vector $z$, where $M(x)$ is the rank-one $n\times n$ matrix $$ M(x)=\nabla f(x)\cdot(\nabla g(x))^*. $$ Since $z^*M(x)z=(z^*\nabla f(x))(z^*\nabla g(x))$, this holds exactly when one of the two gradients is a nonnegative multiple of the other.
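For the same illustrative pair as above (again my choice, not part of the original answer), $\nabla f(x)=2x$ and $\nabla g(x)=2xe^{\|x\|^2}$ are nonnegative multiples of one another, so $M(x)$ passes the quadratic-form test; a quick check in $\mathbb{R}^3$:

```python
# M(x) = grad_f(x) grad_g(x)^T is rank one; z^T M(x) z =
# (z . grad_f(x)) * (z . grad_g(x)) = 4 * exp(||x||^2) * (z . x)^2 >= 0.
import numpy as np

rng = np.random.default_rng(1)

def grad_f(x):
    return 2 * x                          # gradient of ||x||^2

def grad_g(x):
    return 2 * x * np.exp(np.dot(x, x))   # gradient of exp(||x||^2)

for _ in range(1_000):
    x = rng.uniform(-1.0, 1.0, size=3)
    z = rng.uniform(-1.0, 1.0, size=3)
    M = np.outer(grad_f(x), grad_g(x))
    assert z @ M @ z >= -1e-9             # PSD up to floating-point error
print("z^T M(x) z >= 0 held at all sampled points")
```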

– Did