
I have found several questions about functions with infinitely many stationary points, like

and there are certainly more to find. Many of the questions I found revolve around whether it is possible to have infinitely many stationary points and how to determine whether such a point is a minimum, maximum, or saddle point. We would also like to look at two examples. The first, from the title, is to find the stationary points of the function given by

$$ f(x, y) = x \cdot y^2 $$

We know that we can find the stationary points by setting every partial derivative equal to zero. But this gives

$$\begin{align} f_x(x, y) &= y^2 \\ f_y(x, y) &= 2 \cdot x \cdot y \end{align}$$

and since the first equation forces $y=0$, what is the value of $x$? Beyond that, we find that the Hessian is given by

$$ H(x, y) = \begin{bmatrix}0 & 2y \\ 2y & 2 x\end{bmatrix} $$

Calculating the determinant gives us

$$ \det H(x,0) = 0 \cdot 2x - (2\cdot0)^2 = 0 $$

Would these points be minima, maxima, or saddle points? How can we know?
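These computations can be double-checked symbolically. A minimal sketch, assuming the sympy library is available (the variable names are my own choice):

```python
# Sanity check of the partial derivatives and the Hessian determinant
# of f(x, y) = x * y**2, assuming sympy is installed.
import sympy as sp

x, y = sp.symbols('x y', real=True)
f = x * y**2

print(sp.diff(f, x))          # y**2
print(sp.diff(f, y))          # 2*x*y

H = sp.hessian(f, (x, y))     # [[0, 2*y], [2*y, 2*x]]
print(H.det())                # -4*y**2
print(H.det().subs(y, 0))     # 0 everywhere on the line y = 0
```

So the determinant is $-4y^2$, which indeed vanishes at every point of the line $y = 0$.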


For the second example, we have this more complex function, defined using sums, for $n > 4$: $$ g(x_1, \dotsc, x_n) = x_1 \sum_{i=4}^n x_i^2 + \bigl(x_2+x_3\bigr) \sum_{i=4}^n x_i $$

How can you find the infinitely many stationary points, and how would you classify them? I searched a bit, but could not find many questions involving sums like this.

anderium

1 Answer


Infinitely many stationary points?

As I'm only taking my first calculus class and am not a maths student, I cannot answer the question of how to determine the type of such points in general. However, a function can indeed have infinitely many stationary points.

By letting $f_x(x, y) = 0$ we determine that $y = 0$ must hold. Substituting this into $f_y(x, y) = 0$, we find that the equation is already satisfied. In case we don't believe that the value of $x$ doesn't matter, let's take an example and pick $x = \pi$, a value with no special reason to be chosen. We find that

$$ f_y(\pi, 0) = 2\cdot\pi\cdot0 $$

This is indeed equal to zero, so any point $(x, 0)$, with $x\in\mathbb{R}$, is a stationary point.

Calculating the determinant of the Hessian shows that it is zero, so the second-derivative test is inconclusive. How, then, do we determine the type of stationary point? This can be done with a bit of intuition, or, where possible, by plotting the function.

If we take a slice of the function in which we vary the value of $y$, we see that the slice has the form $c \cdot y^2$ with $c = x$ fixed. When $x < 0$, this parabola opens downward, so the point is a maximum. When $x > 0$, the parabola opens upward, so those points are minima. In the special case $x = 0$, the slice is the constant zero line; since the function takes positive values on one side of this line and negative values on the other, the point is a saddle point. We can confirm by plotting, see this WolframAlpha calculation*, that what we intuitively thought gives the correct answer. See also the image below:

Plot of the function $x \cdot y^2$+

* I used the 'stationary point calculator' for the link, but WolframAlpha does not display all the stationary points we found here. It does plot the function for us.
+ I need 10 reputation to post the image, so I can't embed it here; the link above will have to do.
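The slice argument can also be spot-checked numerically. A minimal sketch in plain Python; the sample points and the offset `eps` are arbitrary choices of mine:

```python
# Numerical spot check of the classification of the points (x0, 0)
# for f(x, y) = x * y**2.  The offset eps is an arbitrary small number.
def f(x, y):
    return x * y * y

eps = 1e-3

# x0 = -1: nearby values stay below f(-1, 0) = 0, consistent with a maximum.
print(f(-1.0, eps) < 0 and f(-1.0, -eps) < 0)    # True

# x0 = +1: nearby values stay above f(1, 0) = 0, consistent with a minimum.
print(f(1.0, eps) > 0 and f(1.0, -eps) > 0)      # True

# x0 = 0: f takes both signs arbitrarily close to (0, 0), a saddle point.
print(f(-eps, eps) < 0 < f(eps, eps))            # True
```

Of course a few sample points prove nothing by themselves, but here they agree with the sign analysis of $c \cdot y^2$ above.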


A more complex example

This second example is actually a bonus question from my homework this week, so I will first use a smaller question involving sums as a warm-up. Let's first find the stationary point of $$ h(x_1, \dotsc, x_n) = \sum_{i=1}^n \bigl(x_i - i\bigr)^2 $$

Since this is just a sum over the individual variables, without interaction between them, the partial derivative with respect to any $x_j$ with $1 \leq j \leq n$ is given by the derivative of that single term: $$ \frac{\partial}{\partial x_j} \sum_{i=1}^n \bigl(x_i - i\bigr)^2 = \frac{\partial}{\partial x_j} \Bigl[\bigl(x_j - j\bigr)^2\Bigr] = 2\bigl(x_j - j\bigr) $$

As before, a stationary point is a point where all partial derivatives are zero. This means that $h_{x_j} = 0$ for all $1 \leq j \leq n$. Solving this gives $x_j = j$, which corresponds to the point $(1, 2, 3, \dotsc, n)$.
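We can also let a computer algebra system solve the same system for a small $n$. A sketch assuming sympy is available, with $n = 4$ chosen only for the demonstration:

```python
# Solving the stationary-point system of h(x_1, ..., x_n) symbolically
# for n = 4, assuming sympy is installed.
import sympy as sp

n = 4
xs = sp.symbols(f'x1:{n + 1}', real=True)   # x1, x2, x3, x4
h = sum((xs[i] - (i + 1))**2 for i in range(n))

grad = [sp.diff(h, v) for v in xs]          # 2*(x_i - i) for each i
sol = sp.solve(grad, xs, dict=True)
print(sol)                                  # the single point x_i = i
```

The solver returns exactly one solution, the point $(1, 2, 3, 4)$, matching the formula above.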


The calculation of the stationary points of the function $g$ can be done very similarly. Instead of calculating the partial derivative for each term at the same time, we will look at the cases $i=1$, $i=2 \vee i=3$, and $i\geq4$ separately. Let us first calculate the partial derivative for $i=1$.

As we can see when we look at the function, the first part is $x_1$ times an expression that does not contain $x_1$, and the second part contains no $x_1$ at all. This means that our partial derivative is just the coefficient of $x_1$, the sum of squares: $$ g_{x_1}(x_1, \dotsc, x_n) = \sum_{i=4}^n x_i^2 $$

The partial derivatives for $x_2$ and $x_3$ can be calculated in a similar way. They are: $$ g_{x_2}(x_1, \dotsc, x_n) = g_{x_3}(x_1, \dotsc, x_n) = \sum_{i=4}^n x_i $$

For $i\geq4$ we find the partial derivative in a similar fashion to what we did before. Since $g$ is built from sums, we only need to differentiate the terms that actually contain the specific variable $x_i$: $$ g_{x_i}(x_1, \dotsc, x_n) = \frac{\partial}{\partial x_i} \Bigl[x_1 \cdot x_i^2 + \bigl(x_2+x_3\bigr) x_i\Bigr] = 2 x_1 x_i + x_2 + x_3 \qquad \textrm{where }i\geq4 $$
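The three kinds of partial derivatives can be verified symbolically for a small case. A sketch assuming sympy is available, with $n = 5$ chosen by me for brevity:

```python
# Partial derivatives of g for n = 5, assuming sympy is installed.
import sympy as sp

n = 5
xs = sp.symbols(f'x1:{n + 1}', real=True)   # x1, ..., x5
g = xs[0] * sum(v**2 for v in xs[3:]) + (xs[1] + xs[2]) * sum(xs[3:])

print(sp.diff(g, xs[0]))             # x4**2 + x5**2
print(sp.diff(g, xs[1]))             # x4 + x5
print(sp.expand(sp.diff(g, xs[3])))  # 2*x1*x4 + x2 + x3
```

Each derivative matches the corresponding formula derived by hand.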

If we solve $g_{x_1}\! = 0$, we find that all $x_i$ for $i\geq4$ must be zero, since a sum of squares of reals is zero only when every term is zero. Substituting this into $g_{x_2}$ and $g_{x_3}$ shows that these now equal zero too. The last equations we have to solve are $g_{x_i}\! = 0$ for $i\geq4$. $$\begin{align} g_{x_i}(x_1, \dotsc, x_n) &= 0 \\ 2 x_1 x_i + x_2 + x_3 &= 0 \\ 0 + x_2 + x_3 &= 0 \qquad \textrm{since } x_i = 0 \textrm{ for } i\geq4\\ x_2 &= -x_3 \end{align}$$

This tells us that there is no constraint on $x_1$, and the only constraint on $x_2$ and $x_3$ is that they must be each other's opposite. The stationary points of $g(x_1, \dotsc, x_n)$ are therefore the points $\bigl(a, b, -b, 0, \dotsc, 0\bigr)$ where $a, b \in \mathbb{R}$.
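We can confirm that every point of this form makes the gradient vanish, at least for a small case. A sketch assuming sympy is available, with $n = 5$ and symbolic $a$, $b$:

```python
# Checking that (a, b, -b, 0, 0) is a stationary point of g for n = 5,
# assuming sympy is installed.  a and b stand for arbitrary reals.
import sympy as sp

n = 5
xs = sp.symbols(f'x1:{n + 1}', real=True)
g = xs[0] * sum(v**2 for v in xs[3:]) + (xs[1] + xs[2]) * sum(xs[3:])

a, b = sp.symbols('a b', real=True)
point = {xs[0]: a, xs[1]: b, xs[2]: -b, xs[3]: 0, xs[4]: 0}

grad = [sp.diff(g, v) for v in xs]
print([expr.subs(point) for expr in grad])   # [0, 0, 0, 0, 0]
```

All five partial derivatives vanish for arbitrary $a$ and $b$, as claimed.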

I will not dive into which of these points are minima and which are maxima, though I expect the analysis to be similar to what we found in the first example.
