7

The (real) projective plane is often motivated by the issue that two distinct lines in $\mathbb R^2$ have exactly one intersection, except in the case of parallel lines, which have none. The solution is to mimic what we see when we stand on the plane and look at parallel train tracks on the ground: add "points at infinity"/"at the horizon" (using the language of perspective in art), one for each possible direction of a line. By construction, any two distinct lines now have exactly one intersection (the "correct number").

In higher dimensions, the construction is still to add a point at infinity for every possible direction of a line, so that all parallel lines in that direction intersect once at that newly added point at infinity.
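For a concrete coordinate check (just a toy SymPy sketch, in homogeneous coordinates $[X:Y:Z]$ with the affine chart $Z=1$; the particular lines are only an example): the parallel lines $y=x$ and $y=x+1$ homogenize to $Y-X=0$ and $Y-X-Z=0$, and any common solution is forced to have $Z=0$, i.e. to be the point at infinity $[1:1:0]$ in the direction of slope $1$.

```python
from sympy import symbols, solve

X, Y, Z = symbols('X Y Z')
# homogenizations of the parallel affine lines y = x and y = x + 1
lines = [Y - X, Y - X - Z]
print(solve(lines, [X, Y, Z], dict=True))
# expected: [{Y: X, Z: 0}] -- every solution has Z = 0,
# i.e. the single point at infinity [1 : 1 : 0]
```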

I also understand that another major part of projective spaces is projective duality. Still, this is about linear things (affine linear subspaces).

What I'm bothered by is the apparent miracle that spaces constructed to make linear things work well (e.g. the initial motivation of making sure parallel lines intersect, or the "richer" observation of duality) somehow work well for many other things. For example, in the simplest non-trivial case, conic sections: with these extra points at infinity, all the conic sections now look like simple loops!

Is there any intuition that tells us why making things nice for lines, also makes everything else nice too? (I'm aware that another point brought up is that projective spaces are compact, and hence projective spaces will have the nice properties that compactness bestows. But there are many ways of making affine space compact, so again it's not clear why this specific compactification is so magical.)


Some further remarks/"analogies" to better contextualize my question, and to paint a picture of what type of answer I might be looking for:

I'm reminded of a similar phenomenon for the algebraic closedness of $\mathbb C$: by adding in the points needed to solve quadratic equations over $\mathbb R$, we somehow get enough to solve all polynomial equations! With the technology of Galois and group theory, we see that this miracle boils down to

(1) Every polynomial of odd degree in $\mathbb{R}[X]$ has a root in $\mathbb{R}$.

(2) Every polynomial of degree 2 in $\mathbb{C}[X]$ has a root in $\mathbb{C}$.

(or as a comment pointed out, and explained in this answer, further reduced to just the fact that $\mathbb R$ is an "ordered field in which every positive element has a square root and every odd degree polynomial has a root"). Although still an amazing fact, these answers investigate the miracle piece by piece and lay out clearly the foundational facts from which standard machinery (which one can develop completely independently of the miracle) can produce the result.

I'm also reminded of examples of limits in multivariable calculus where the limit along every straight line exists (and agrees), but along polynomial or other curves the limit differs or doesn't exist. So somehow lines are "enough" for projective spaces, but not "enough" even for basic limits in multiple variables. Of course the "analogy" is bad, but I write it to point out that it's reasonable to expect that a situation which works well for lines may not work well at all for higher degree polynomials.
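A standard instance of this (my own toy example, checked with SymPy): $f(x,y)=\dfrac{x^2y}{x^4+y^2}$ tends to $0$ along every line through the origin, but is identically $1/2$ along the parabola $y=x^2$.

```python
from sympy import symbols, limit, simplify

x, y, t, m = symbols('x y t m')
f = x**2 * y / (x**4 + y**2)

# along the line y = m*x (and trivially along x = 0) the limit at the origin is 0
print(simplify(limit(f.subs({x: t, y: m*t}), t, 0)))   # 0
# along the parabola y = x^2 the function is constantly 1/2, so no limit exists
print(limit(f.subs({x: t, y: t**2}), t, 0))            # 1/2
```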

(cross posted to MO)


EDIT 2/7/24: I've been reading the book Ideals, Varieties, and Algorithms, and in particular the proof of Bezout's theorem using resultants. The fundamental difference between ordinary polynomials and homogeneous polynomials is that the resultant for the latter, with $f$ of degree $n$ and $g$ of degree $m$, has degree exactly $mn$, whereas the resultant for ordinary polynomials may have "perfect cancellation" and end up with degree lower than $mn$. For example, the resultant of $f=y^{2}+\left(2\sqrt{3}x-2\sqrt{3}\right)y+3x^{2}+2x$ and $g=\left(1+\epsilon \right)y^{2}+\left(2\sqrt{3}x-\left(4-2\sqrt{3}\right)\right)y+\left(3x^{2}-\left(4\sqrt{3}+2\right)x\right)$ has degree $2$ for $\epsilon=0$, but otherwise degree $4$ (the expected number). So homogeneous polynomials are somehow better behaved algebraically, perhaps captured by the idea of "grading" (e.g. for a homogeneous polynomial in $k[x,y,z]$ of degree $d$, the coefficient (in $k[x,y]$) of each $z^k$ term is guaranteed to be a homogeneous polynomial in $k[x,y]$ of degree $d-k$).
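For instance, a quick SymPy script reproducing that degree computation (just a sketch; the degrees in the comments are the ones claimed above):

```python
from sympy import symbols, sqrt, Rational, resultant, degree, expand

x, y = symbols('x y')

def res_degree_in_x(eps):
    f = y**2 + (2*sqrt(3)*x - 2*sqrt(3))*y + 3*x**2 + 2*x
    g = (1 + eps)*y**2 + (2*sqrt(3)*x - (4 - 2*sqrt(3)))*y + 3*x**2 - (4*sqrt(3) + 2)*x
    # eliminate y: the resultant is a polynomial in x alone
    return degree(expand(resultant(f, g, y)), x)

print(res_degree_in_x(0))                 # expected 2: "perfect cancellation"
print(res_degree_in_x(Rational(1, 100)))  # expected 4 = deg(f) * deg(g)
```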

Also, geometrically, it seems like "generically" (in the sense of $\epsilon$-perturbations of the coefficients like the one above) we get the correct counts over $\mathbb C$, and as $\epsilon \searrow 0$ the only issue is that some intersection points run away to infinity. So yes, it is an issue with the noncompactness of affine space. But the reason lines suffice is somehow that when intersection points run away in this setting, they run away "polynomially", and that is well-behaved enough that an observer standing one unit above the 2D ground and following the escaping point with their line of sight sees that line of sight move only a little, and in fact converge to a horizontal line of sight. For example, the point cannot run off to infinity along any wild oscillatory/spiral path (in those situations the line of sight would not converge). This heuristic seems to be supported by Bezout's theorem itself, since Bezout's theorem tells us that algebraic curves cannot have infinite oscillations or spirals.
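A toy version of the "line of sight converges" heuristic (again just an illustrative sketch, not tied to the perturbation above): intersect the hyperbola $xy=1$ with the line $y=c$. The affine intersection point $(1/c,\,c)$ escapes as $c\to 0$, but after rescaling its homogeneous coordinates it converges to the point at infinity $[1:0:0]$.

```python
from sympy import symbols, limit, Matrix

c = symbols('c', positive=True)
# affine intersection of xy = 1 with y = c, in homogeneous coordinates [x : y : 1]
P = Matrix([1/c, c, 1])
P_rescaled = c * P                     # same projective point, rescaled by c
print(P_rescaled.T)                    # [1, c**2, c]
print(P_rescaled.applyfunc(lambda e: limit(e, c, 0)).T)   # [1, 0, 0]
```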

However, I still feel like these two reasons (algebraic and geometric) that I have provided are somewhat artificial. They do not convey some beautiful philosophy about the "true nature"/"soul" of projective space.

A friend also pointed out that perhaps there is a "good reason" that things intersect at all in projective space: Reference request for the dimension of intersection of affine varieties tells us that if two varieties in affine space intersect at all, then they do so "many times" (there is a lower bound on the dimension of the intersection). Projective space turns curves in affine space into "cones" through a common point (that is fundamentally what projective space does by construction: project everything through one point), so the intersection theorem applies and tells us we should get at least a line of intersection, corresponding to a point of intersection in projective space.

This I think is a pretty substantial part, but of course it doesn't say anything about capturing all the intersections, just at least one.
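To make my friend's cone argument concrete (a toy computation; the pair of parabolas is just an example I chose): $y=x^2$ and $y=x^2+1$ have no affine intersection at all, but their homogenizations $yz=x^2$ and $yz=x^2+z^2$ are cones through the origin of $\mathbb{A}^3$, so the dimension bound forces them to share at least a line. A Gröbner basis shows that line is $\{x=z=0\}$, i.e. the single projective point $[0:1:0]$ at infinity (where, in fact, all four Bezout-predicted intersections coincide).

```python
from sympy import symbols, groebner

x, y, z = symbols('x y z')
cone1 = y*z - x**2            # homogenization of y = x^2
cone2 = y*z - x**2 - z**2     # homogenization of y = x^2 + 1

print(groebner([cone1, cone2], x, y, z, order='lex'))
# expected generators equivalent to {x**2 - y*z, z**2}: the common zero set
# is the line x = z = 0 through the origin, i.e. the point [0 : 1 : 0] in P^2
```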

D.R.
  • 10,556
  • Compactification – Jan-Magnus Økland Jan 28 '24 at 05:49
  • @Jan-MagnusØkland I already addressed this idea in the post, and gave reasons why I don't think it's a satisfactory answer. – D.R. Jan 28 '24 at 08:35
  • It is partly the answer: if you don’t have it, points escape out. – Jan-Magnus Økland Jan 28 '24 at 08:54
  • The "problem" with projective duality for degree >2 is that the dual generally is singular, and can in some sense be said to live in the wrong dimension. Also it's always a hypersurface and can be a (maybe many times over) a cone over a non-cone. The virtues of course are manifold... (sorry for the anti-pun). – Jan-Magnus Økland Jan 28 '24 at 09:25
  • @Jan-MagnusØkland it's also not the whole answer because there are lots of different compactifications of $\Bbb A^n$ and not all of them give the "right" intersection theory. For instance, $\Bbb P^2$ and $\Bbb P^1\times\Bbb P^1$ are both compactifications of $\Bbb A^2$, but in one we get that any two distinct lines intersect in one point, while in the other you can find two lines which do not intersect at all. – KReiser Jan 28 '24 at 22:45
  • @KReiser yes, thank you for explaining better my point that "there are many ways of making affine space compact, so again it's not clear why this specific compactification is so magical" – D.R. Jan 28 '24 at 22:51
  • @D.R. - there is an "infinite" number of different compactifications of an affine "variety" $X \subseteq \mathbb{A}^n_k$. Projective $n$-space $\mathbb{P}^n_k$ has an open affine cover $D(x_i) \cong \mathbb{A}^n_k$ for $i=0,..,n$ and for each $i$ you get a closed embedding $\phi_i: X \rightarrow D(x_i) \subseteq \mathbb{P}^n_k$ and a "compactification" $X \subseteq Y_i:=\overline{\phi_i(X)}$. – hm2020 Jan 31 '24 at 11:51

3 Answers

3

Edit

See the parts about Poncelet's work in Fragments of a History of the Concept of Ideal by Karine Chemla.

The miracle, namely the continuity/preservation of the number of solutions to polynomial systems in complex projective space, took a hundred years to make precise. I quote:

At the outset of the 19th century, it was to extend "preservation of number" that algebraic geometers made two important choices: to work over the complex numbers rather than the real numbers, and to work in projective space rather than affine space. (With these choices the two points of intersection of a line and an ellipse have somewhere to go as the ellipse moves away from the real points of the line, and the same for the point of intersection of two lines as the lines become parallel.) Over the course of the century, geometers refined the art of counting solutions to geometric problems - introducing the central notion of a parameter space; proposing the notions of an equivalence relation on cycles and a product on the equivalence classes and using these in many subtle calculations. These constructions were fundamental to the developing study of algebraic curves and surfaces.

In a different field, it was the search for a mathematically precise way of describing intersections that underlay Poincaré's study of what became algebraic topology. We owe Poincaré duality and a great deal more in algebraic topology directly to this search. His inability to work with continuous spaces (now called manifolds) led him to develop the idea of a simplicial complex, too.

Despite the lack of precise foundations, nineteenth century enumerative geometry rose to impressive heights: for example Schubert, whose Kalkül der abzählenden Geometrie (Schubert [1979]) represents the summit of intersection theory at the time of its writing, calculated the number of twisted cubics tangent to 12 quadrics - and got the right answer (5,819,539,783,680). Imagine landing a jumbo jet blindfolded!

At the outset of the 20th century, Hilbert made finding rigorous foundations for Schubert calculus one of his celebrated Problems, and the quest to put intersection theory on a sound footing drove much of algebraic geometry for the following century; the search for a definition of multiplicity fueled the subject of commutative algebra in work of van der Waerden, Zariski, Samuel, Weil and Serre. This progress culminated, towards the end of the century, in the work of Fulton and MacPherson and then in Fulton's landmark book Intersection Theory (Fulton [1984]), which both greatly extended the range of intersection theory and put the subject on a precise and rigorous foundation.

From the introduction to "3264 and all that - Intersection Theory in Algebraic Geometry" by Eisenbud and Harris

  • Sure, the development took many twists and turns, and monumental efforts. But in hindsight, do we see any clearer exactly how the miracle works? For example, I gave the example of the algebraic closedness of $\mathbb C$, where people now have a very clear understanding of how that miracle works (real closed fields, leading to further nice model-theoretic results) – D.R. Jan 28 '24 at 22:04
  • @D.R. I added the intersection theory tag, but you could also ask on the expert site – Jan-Magnus Økland Jan 28 '24 at 22:25
  • @D.R. Paraphrasing: The original miracle was observed (according to Harris you should look to Poncelet) and cleverly used in the 1800s, and it took a hundred years to be able to properly compute with it. And it's still mostly for smooth varieties. – Jan-Magnus Økland Jan 28 '24 at 23:25
  • @D.R. I would actually refer to the principle in asking there though – Jan-Magnus Økland Jan 29 '24 at 17:25
  • @D.R. See the paper in the edit – Jan-Magnus Økland Jan 30 '24 at 09:36
  • @hm2020 No, the miracle was apparent from the beginning. Getting it on firm ground took a hundred years. The video in a previous comment of mine goes through the history. – Jan-Magnus Økland Jan 31 '24 at 14:28
  • @Jan-MagnusØkland - why do you think this "took 100 years"? – hm2020 Jan 31 '24 at 14:30
  • @hm2020 it looked obvious, and wasn't. A correct proof of the moving lemma is a case in point, see the above mentioned video. – Jan-Magnus Økland Jan 31 '24 at 14:31
2

From a geometric point of view, projecting a geometric space that lies in a flat ambient space to another is often a beautiful and powerful idea. The projections used are usually either in the direction of lines through a common point or in the direction of a family of parallel lines. But implicit in these projections is a map from a geometric space to lines in the flat ambient space.

The best known examples are stereographic projections of the sphere and hyperbolic space, which involve projections from a common point.

Affine space has the simplest realization as a geometric object in a flat space, namely a hyperplane in a vector space that does not pass through the origin. It is therefore a natural idea to "project" the affine space onto the space of lines through the origin. Once you do that, there is an obvious way to take the closure of the set of lines by adding lines that represent "points at infinity". And because the lines at infinity also lie in a flat space (namely the linear subspace parallel to the affine hyperplane), it is clear that the "boundary at infinity" is itself a projective space. That two distinct lines in the projective plane always intersect in exactly one point is an easy consequence of the observation that two distinct $2$-dimensional linear subspaces of a $3$-dimensional vector space always intersect in a line through the origin.
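To illustrate that last observation with a small computational sketch (the specific lines are only an example): a line $ax+by=c$ in the chart $Z=1$ lifts to the plane $aX+bY-cZ=0$ through the origin, and the intersection line of two such planes is spanned by the cross product of their normal vectors.

```python
from sympy import Matrix

# the parallel affine lines x + y = 1 and x + y = 2 lift to the planes
# X + Y - Z = 0 and X + Y - 2Z = 0 through the origin of R^3
n1 = Matrix([1, 1, -1])
n2 = Matrix([1, 1, -2])
print(n1.cross(n2).T)
# [-1, 1, 0]: the two planes meet in the line spanned by (-1, 1, 0),
# i.e. the single projective point [-1 : 1 : 0] at infinity
```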

Another observation is that unlike the sphere and hyperbolic space, the geometry of affine space requires no inner product. It comes from the linear geometry of the ambient vector space. And this geometry extends nicely to the geometry of projective space.

So the whole picture is simple but beautiful.

If you introduce coordinates and study polynomials, then the fact that polynomials in the affine space translate simply to homogeneous polynomials in the vector space makes it clear that a lot of algebraic geometry can be transferred easily back and forth between the two settings.
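For instance, here is a minimal sketch of that affine-to-homogeneous dictionary using SymPy's Poly.homogenize (the particular conic is just an example):

```python
from sympy import symbols, Poly

x, y, z = symbols('x y z')
# an affine conic in the chart z = 1 ...
affine = Poly(x**2 + 4*y**2 - 1, x, y)
# ... becomes a homogeneous quadratic form on the ambient 3-space
print(affine.homogenize(z))
# expected: Poly(x**2 + 4*y**2 - z**2, x, y, z, domain='ZZ')
```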

Deane
  • 10,298
  • Thank you for this geometric picture --- it did make the ideas feel more natural. Do you have any comments on why "a priori" one might be able to predict that e.g. Bezout's theorem on intersections will hold? You gave the picture of lines in affine space "extending" to 2D planes in a 3D-VS (which must intersect at a line through the origin), but for more complicated curves, I feel as though there is no reason at all to expect this "projection to common point" idea to give us exactly the desired intersection count. – D.R. Jan 31 '24 at 18:32
  • @D.R., I can't say anything about Bezout's theorem itself. My guess is that there is an affine version that has to allow for some of the points of intersection to "escape" to infinity. Transferring it to projective space means the polynomials become homogeneous polynomials and the intersections at infinity cannot escape. So its statement in projective space is presumably cleaner and simpler. – Deane Jan 31 '24 at 19:46
1

Question: "Is there any intuition that tells us why making things nice for lines, also makes everything else nice too? (I'm aware that another point brought up is that projective spaces are compact, and hence projective spaces will have the nice properties that compactness bestows. But there are many ways of making affine space compact, so again it's not clear why this specific compactification is so magical.)"

Answer: Note that there is not one unique compactification; there are infinitely many different compactifications. If $Y:=\mathbb{P}^n_k$ is projective $n$-space and if $X \subseteq \mathbb{A}^n_k$ is an affine algebraic variety, there are infinitely many open immersions

$$ i: \mathbb{A}^n_k \rightarrow \mathbb{P}^n_k$$

giving rise to a compactification $X \subsetneq \overline{i(X)}:=Z_i \subseteq \mathbb{P}^n_k$.

One reason to introduce quasi projective varieties is the following: one wants a systematic way to study singularities and to classify them. You may take any affine variety $X \subseteq \mathbb{A}^n_k$ and any closed subvariety $Z \subseteq X$ and "blow up" this subvariety to get

$$\pi: BL_Z(X) \rightarrow X$$

and the blow up $BL_Z(X)$ is always a quasi projective variety of finite type over $k$. When $X:=C$ is a curve and $Z:=\{p\}$ is a singular point, the blow up $BL_{-}(-)$ may be used to "resolve the singularity" $p$. In fancy language we say

"you may blow up to remove the singularity"

You find some information on "blow up" and "resolution of singularities" at wikipedia at

https://en.wikipedia.org/wiki/Blowing_up

https://en.wikipedia.org/wiki/Resolution_of_singularities#Blowing_up

Comment: "In higher dimensions, the construction is still to add a point at infinity for every possible direction of a line, so that all parallel lines of that direction intersect once at that newly added point at infinity."

Comment: When you blow up the affine plane $S:=\mathbb{A}^2_k$ at the origin $p:=(0,0)$ you get a quasi projective variety

$$\pi: BL_p(S) \rightarrow S$$

where points of $\pi^{-1}(p)$ are in 1-1 correspondence with lines through the origin in $S$, hence $\pi^{-1}(p) \cong \mathbb{P}^1$.

Hence you replace $p$ by a copy of $\mathbb{P}^1$. If $C \subseteq S$ is a curve with $p \in C$ and you take the strict transform $\tilde{C}$ (the closure of $\pi^{-1}(C \setminus \{p\})$ in $BL_p(S)$), you get an induced map $\pi_C: \tilde{C} \rightarrow C$, and $\tilde{C}=BL_p(C)$ is the "blow up of $C$ at $p$". If $C$ has a singularity at $p$ that is "sufficiently simple", the blow up will be smooth. Hence you "blow up to remove all singularities" to get a new curve $\tilde{C}$ that is "birational" to $C$.
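As an illustrative sketch of this (using the standard nodal cubic as the example curve, which is not a curve discussed above): in the chart of $BL_p(\mathbb{A}^2)$ where $y = tx$, the nodal cubic $y^2 = x^2(x+1)$ pulls back to the exceptional line (with multiplicity two) plus a smooth strict transform.

```python
from sympy import symbols, factor

x, t = symbols('x t')
# nodal cubic y^2 - x^2*(x + 1), singular at the origin p = (0, 0);
# substitute y = t*x, the coordinate change on one chart of the blow-up
pullback = (t*x)**2 - x**2*(x + 1)
print(factor(pullback))
# expected: x**2*(t**2 - x - 1)
# x**2         -> the exceptional curve E = {x = 0}, counted twice
# t**2 - x - 1 -> the strict transform, a smooth curve meeting E at t = 1 and t = -1
#                 (one point for each branch of the node: the singularity is resolved)
```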

There is also the "Nagata compactification theorem" which proves you may in many cases compactify your "abstract variety" $X$ to a projective variety $\overline{X} \subseteq \mathbb{P}^n_k$ where $\overline{X}-X$ is a "divisor with normal crossings":

https://en.wikipedia.org/wiki/Nagata%27s_compactification_theorem

hm2020
  • 10,015
  • oh damn. sorry to see you got suspended. I appreciate your effort in answering my question, and you have made valuable contributions to the site. Thank you for your work. – D.R. Feb 08 '24 at 04:05