
Let $(X, \langle \cdot, \cdot \rangle)$ be a vector space over $\mathbb{R}$ with an inner product. Then every approximately compact set in this space is proximal. Now, I want to find an example where the converse of the above statement does not necessarily hold. My attempt: consider the set $K = \{y \in l_2 \mid \|y\| = 1\}$ and the sequence $(e_n)$ defined by $e_n(i)=$ $$ \delta_{n,i} = \begin{cases} 0, & \text{if } n \neq i, \\ 1, & \text{if } n = i \end{cases} \quad \forall n, i \in \mathbb{N}. $$ Observe the following:

  • $e_n \in K$ and $d(0, K) = 1 = \|e_n\|$ for all $n \in \mathbb{N}$, hence the sequence $(e_n)$ in $K$ is minimizing for the point $0$.
  • $\|e_n - e_m\| = \sqrt{2}$ for all $n, m \in \mathbb{N}, n \neq m$, so no subsequence of $(e_n)$ is convergent, and thus the set $K$ is not approximately compact.
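The two bullet points above can be sanity-checked numerically in a finite-dimensional truncation of $\ell_2$; this is just an illustrative sketch, and the helper names (`e`, `norm`, `dist`) and the truncation dimension `N` are ad hoc choices, not part of the problem:

```python
import math

# Truncate l2 to R^N for a finite-dimensional sanity check; N is arbitrary.
N = 10

def e(n, dim=N):
    """Standard basis vector e_n (1-indexed) in R^dim."""
    return [1.0 if i == n - 1 else 0.0 for i in range(dim)]

def norm(v):
    return math.sqrt(sum(t * t for t in v))

def dist(u, v):
    return norm([a - b for a, b in zip(u, v)])

# Each e_n lies on the unit sphere K, so (e_n) is a minimizing sequence for 0...
assert all(abs(norm(e(n)) - 1.0) < 1e-12 for n in range(1, N + 1))

# ...yet distinct basis vectors stay sqrt(2) apart, so no subsequence converges.
assert all(
    abs(dist(e(n), e(m)) - math.sqrt(2)) < 1e-12
    for n in range(1, N + 1) for m in range(1, N + 1) if n != m
)
```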

Now I want to prove:

  1. The set $K$ is not convex.
  2. $P_K(x) = \frac{x}{\|x\|}$ for all $x \neq 0$ and $P_K(0) = K$.
  3. The set $K$ is proximal but not Chebyshev.

For point 1 it seems obvious that $K$ is not convex, but how would I prove it formally? Point 2 I just don't know how to approach. And for point 3, why do points 1 and 2 imply point 3? Why do we even need that $K$ is not Chebyshev here? Thanks for all your help in advance.

Note: Let $K$ be a nonempty subset of the inner product space $X$ and let $x \in X$. An element $y_0 \in K$ is called a best approximation, or nearest point, to $x$ from $K$ if $$ \|x - y_0\| = d(x, K), $$ where $d(x, K) := \inf_{y \in K} \|x - y\|$. The number $d(x, K)$ is called the distance from $x$ to $K$, or the error in approximating $x$ by $K$.

The (possibly empty) set of all best approximations from $x$ to $K$ is denoted by $P_K(x)$. Thus $$ P_K(x) := \{ y \in K \mid \|x - y\| = d(x, K) \}. $$

This defines a mapping $P_K$ from $X$ into the subsets of $K$ called the metric projection onto $K$.

If each $x \in X$ has at least (respectively exactly) one best approximation in $K$, then $K$ is called a proximal (respectively Chebyshev) set. Thus $K$ is proximal (respectively Chebyshev) if and only if $P_K(x) \neq \emptyset$ (respectively $P_K(x)$ is a singleton) for each $x \in X$.
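These definitions can be made concrete for the set $K$ in question. The sketch below (a finite-dimensional illustration, with ad hoc names `proj_sphere`, `dim`) checks numerically that $x/\|x\|$ realizes the distance $d(x, K) = |\,\|x\| - 1\,|$ to the unit sphere, and that randomly sampled unit vectors never do better:

```python
import math, random

def norm(v):
    return math.sqrt(sum(t * t for t in v))

def proj_sphere(x):
    """Claimed metric projection onto the unit sphere: x != 0 maps to x/||x||."""
    r = norm(x)
    return [t / r for t in x]

random.seed(0)
dim = 5
x = [random.uniform(-2, 2) for _ in range(dim)]
p = proj_sphere(x)
d = norm([a - b for a, b in zip(x, p)])

# For the unit sphere K, d(x, K) = | ||x|| - 1 |, attained at x/||x||.
assert abs(d - abs(norm(x) - 1.0)) < 1e-12

# No randomly sampled unit vector gets closer to x.
for _ in range(1000):
    y = [random.gauss(0, 1) for _ in range(dim)]
    y = [t / norm(y) for t in y]  # y lies in K
    assert norm([a - b for a, b in zip(x, y)]) >= d - 1e-12
```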

  • $1.)$ $0\notin K$, but $\pm e_1\in K$. $2.)$ The case $x=0$ is trivial. For $x\neq 0$ consider first $x=e_1$ (easy case) and then use a rotation to reduce to that case. $3.)$ Don't know how those things are defined and it would be good for you to include those definitions. In fact, I think it would help you to also spell out how $P_K$ is defined. – Severin Schraven Aug 28 '24 at 12:37
  • I made an edit. Can you expand your explanation for point 2 a bit? Why is $x=0$ trivial? Also, I do not understand that explanation with rotation, unfortunately. –  Aug 28 '24 at 13:06
  • Well, all points in $K$ have distance $1$ to the origin. I suggest you try to deal first with the case $\mathbb{R}^2$ and $K$ being the unit circle. This is anyways the picture you should have in mind. – Severin Schraven Aug 28 '24 at 13:08
  • Also, $3.)$ is a trivial consequence of $2.)$, you even compute the set $P_K(x)$ for all $x$. – Severin Schraven Aug 28 '24 at 13:11
  • Oh right (for 3.) Can you show me all the work only for 2. please? :) –  Aug 28 '24 at 13:19
  • I suggest you work out $2.)$ for $\mathbb{R}^2$ and $K$ the unit circle. Then we can discuss how to generalize to $\ell^2$. – Severin Schraven Aug 28 '24 at 13:42
  • Maybe more rigorously for my previous comment on rotation. Fix $x_0\neq 0$ and pick a unitary map $U: \ell^2 \rightarrow \ell^2$ such that $U(x_0)=\Vert x_0\Vert e_1$. Then you can easily check (using that $U(K)=K$ as it preserves norms) that $$ P_K(x_0)= U^{-1}P_{U(K)}(U(x_0)) =U^{-1} P_K(\Vert x_0\Vert e_1).$$ Thus, if you can show that $P_K(\Vert x_0\Vert e_1)=\{e_1\}$, then you get $$ P_K(x_0) = U^{-1}(\{e_1\}) = \{ U^{-1}(e_1)\} = \{ x_0 /\Vert x_0\Vert \}. $$ – Severin Schraven Aug 28 '24 at 14:48

1 Answer


The first point follows from $\pm e_1 \in K$ and $\frac{1}{2}e_1+\frac{1}{2}(-e_1)=0\notin K$. The third point follows immediately from the second: $P_K(x)$ is nonempty for every $x$, so $K$ is proximal, but $P_K(0)=K$ is not a singleton, so $K$ is not Chebyshev. We are left to prove the second point. It's clear that $P_K(0)=K$, as all elements of $K$ have distance $1$ to the origin. Thus, we need to show that $P_K(x)=\{x/\Vert x \Vert\}$ for $x\neq 0$. We first consider the special case $x=c e_1$ for some $c>0$ and then deduce the general case from it.

Special case: $P_K(ce_1)=\{e_1\}$ for $c>0$: We need to find $x=(x_j)_j \in K$ which minimizes $\Vert x-ce_1 \Vert^2$. Using that $x\in K$ we can write $$ \Vert x-ce_1\Vert^2 = \vert x_1-c \vert^2 + \sum_{j\geq 2} \vert x_j\vert^2 = \vert x_1-c \vert^2 + 1-\vert x_1\vert^2. $$ Write $x_1=a+ib$. Then we need to minimize $$ f(a,b) = (a-c)^2+b^2+1-a^2-b^2=1-a^2+(a-c)^2 $$ under the constraint $a^2+b^2\leq 1$ (remember that $x\in K$ and hence $a^2+b^2=\vert x_1\vert^2 \leq 1$). The function $f$ does not depend on $b$ at all, so we really only need to minimize $$ g(a) = 1-a^2+(a-c)^2 = 1-2ca+c^2 $$ under the constraint $a^2\leq 1$. Since $g$ is affine in $a$ with slope $-2c<0$, it is strictly decreasing, so the minimum over $a\in[-1,1]$ is attained at $a=1$. Hence the overall minimizer has $x_1=1$ (as $\vert x_1\vert \leq 1$ then forces $b=0$), and $\sum_{j\geq 2}\vert x_j\vert^2 = 1-\vert x_1\vert^2 = 0$, that is, $x=e_1$.
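The one-variable minimization above can be checked with a quick numerical sketch (the grid search and the choice $c=0.7$ are for illustration only; any $c>0$ behaves the same way):

```python
# g(a) = 1 - a^2 + (a - c)^2 = 1 - 2ca + c^2 is affine in a with slope -2c < 0,
# so on the constraint interval [-1, 1] its minimum sits at the endpoint a = 1.
def g(a, c):
    return 1 - a * a + (a - c) ** 2

c = 0.7  # any c > 0 works
grid = [-1 + 2 * k / 10000 for k in range(10001)]
a_min = min(grid, key=lambda a: g(a, c))
assert a_min == 1.0  # minimizer at the right endpoint, matching the calculus
```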

General case: We note that $P_K(x)$ is the set of points $y\in K$ which have minimal distance to $x$. Fix $x_0\neq 0$ and pick a unitary map $U: \ell^2 \rightarrow \ell^2$ such that $U(x_0)=\Vert x_0\Vert e_1$. How do we get this unitary map? Well, you can start with $f_1=x_0/\Vert x_0\Vert$ and extend it to a Hilbert basis $(f_n)_{n\in \mathbb{N}}$ (see "How to extend an orthonormal set to a basis on a Hilbert space?"). Now define $U(f_n)=e_n$ and extend this linearly. It's easy to check that this is unitary on the span of $(f_n)_{n\in \mathbb{N}}$ (as it maps an orthonormal basis to another orthonormal basis) and thus extends to a unitary map on all of $\ell^2$.

Note that $U(K)=K$ (as $U$ is unitary we have $\Vert U(x)\Vert = \Vert x\Vert$).

Now, if there exists a unique point $z\in K$ which minimizes the distance to $U(x_0)$, then $U^{-1}(z)$ is the unique minimizer for $x_0$ (here we use again that $U$ is unitary and hence $\Vert z-U(x_0) \Vert= \Vert U^{-1}(z)-x_0\Vert$). In formulas, we get $$ P_K(x_0) = U^{-1}P_{U(K)}(U(x_0))=U^{-1}P_K(\Vert x_0\Vert e_1) = \{U^{-1}(e_1) \} = \{ x_0/\Vert x_0\Vert \}, $$ where we have used the previously established special case $P_K(ce_1)=\{ e_1\}$ for $c>0$.
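The reduction by a unitary map can be illustrated in $\mathbb{R}^2$, where a plane rotation plays the role of $U$ (the concrete point $x_0=(3,4)$ and the helper `rotation_to_e1` are hypothetical choices for the sketch):

```python
import math

# In R^2 a rotation plays the role of the unitary U: it sends x0 to ||x0|| e1
# and maps the unit circle K onto itself.
def rotation_to_e1(x0):
    r = math.hypot(x0[0], x0[1])
    c, s = x0[0] / r, x0[1] / r
    # U = [[c, s], [-s, c]] satisfies U x0 = (r, 0).
    def U(v):
        return (c * v[0] + s * v[1], -s * v[0] + c * v[1])
    def U_inv(v):
        return (c * v[0] - s * v[1], s * v[0] + c * v[1])
    return U, U_inv

x0 = (3.0, 4.0)
U, U_inv = rotation_to_e1(x0)
r = math.hypot(*x0)

# U x0 = ||x0|| e1, as required for the reduction to the special case.
ux = U(x0)
assert abs(ux[0] - r) < 1e-12 and abs(ux[1]) < 1e-12

# Special case gives P_K(r e1) = {e1}; pulling back: U^{-1}(e1) = x0/||x0||.
p = U_inv((1.0, 0.0))
assert all(abs(p[i] - x0[i] / r) < 1e-12 for i in range(2))
```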

  • I have the following questions from the proof you wrote: 1. Why is the minimum at $a = 1$? (I have calculated the derivative $g'(a) = -2 a + 2 (a - c) = -2 c$, what now?), 2. Why exactly does $x_1 = 1$ imply that $x = e_1$? (We only showed that the first coordinate of $x$ equals $1$.), 3. What do you mean by "extend this linearly"? (And how can I extend $U$ from the span to a unitary map on all of $l^2$?), 4. Why exactly is $U(K) = K$? and 5. How can I see the first equality in $P_K(x_0) = ...$? –  Aug 28 '24 at 22:41
  • Well, $-2c<0$ so the function is decreasing and $a\leq 1$; thus, the minimum is at $a=1$. We have $1\geq a^2+b^2 =1^2+b^2$, implying that $b=0$. Hence, $x_1=a+ib=1$. Furthermore, $1=\vert x_1\vert^2\leq \sum_{j=1}^\infty \vert x_j \vert^2=1$; thus, $x_j=0$ for $j\geq 2$ and therefore $x=e_1$. Extending linearly is a basic notion from linear algebra, meaning that $$ U\Big(\sum_{j=1}^\infty a_j f_j\Big)=\sum_{j=1}^\infty a_j e_j $$ for all $(a_j)_j \in \ell^2$. $U(K)=K$ is trivial as $\Vert U(x)\Vert = \Vert x \Vert$ and $\Vert x \Vert=1$ for $x\in K$. – Severin Schraven Aug 28 '24 at 22:56
  • The first identity is explained in the paragraph just before the chain of identities. – Severin Schraven Aug 28 '24 at 22:56
  • I understand everything you wrote now, except this: How can I really check that $U$ is a unitary map (by definition)? –  Aug 28 '24 at 23:01
  • It maps an orthonormal basis to an orthonormal basis, thus, it is unitary. If you want to check that it is unitary using the definition, you can also just note that $$ \Vert U(\sum_{j=1}^\infty a_j f_j) \Vert^2 = \Vert \sum_{j=1}^\infty a_j e_j \Vert^2 = \sum_{j=1}^\infty \vert a_j \vert^2 = \Vert \sum_{j=1}^\infty a_j f_j \Vert^2. $$ – Severin Schraven Aug 28 '24 at 23:04
  • Right, thanks a lot for all the explanations :)! –  Aug 28 '24 at 23:06
  • If that answers your question, then you should consider accepting the answer. – Severin Schraven Aug 28 '24 at 23:08
  • I am not sure I've managed to convey this point very clearly, but all of this is really just inspired by thinking about the problem in $\mathbb{R}^2$. Projecting to the unit circle means going down along the ray. The proof also follows this kind of logic reparametrizing it in the coordinate of the ray and all the remaining once if $x$ is the first basis vector and then noting that I can just use a rotation to always reduce to said case. The rest is just technical fluff to make this geometric intuition precise. – Severin Schraven Aug 28 '24 at 23:16
  • Just two more questions as I read all you have written a lot more carefully: 1. Why is $\sum_{j=1}^{\infty} \vert a_j\vert^2 = \Vert\sum_{j=1}^{\infty} a_j f_j\Vert^2$? (I understand that $\sum_{j=1}^{\infty} \vert a_j\vert^2 = \Vert\sum_{j=1}^{\infty} a_j e_j\Vert^2$, but why is it true for $f_j$?) 2. Can you please provide a direct proof (direct calculation, if possible) of $\Vert z - U(x_0)\Vert = \Vert U^{-1}(z) - x_0\Vert$? –  Aug 29 '24 at 08:57
  • It's true for any orthonormal basis by Parseval's identity. If you want to have a formal proof, you can compute $$\Vert \sum_j a_j f_j \Vert^2 =\langle \sum_j a_j f_j, \sum_k a_k f_k \rangle =\sum_{j,k} a_j\overline{a_k} \langle f_j, f_k\rangle =\sum_{j,k} a_j\overline{a_k} \delta_{j,k}=\sum_j \vert a_j\vert^2.$$ We have $$\Vert z-U(x_0)\Vert =\Vert U(U^{-1}(z)-x_0)\Vert=\Vert U^{-1}(z)-x_0\Vert$$ as $U$ is unitary. – Severin Schraven Aug 29 '24 at 09:13
  • Everything clear now, thanks again! –  Aug 29 '24 at 09:16