
I am following Gathmann's Notes on Algebraic Geometry. In particular, in his proof of Corollary 8.13 (as linked) Gathmann says:

But these minors are polynomials in the entries of this matrix, and thus in the coordinates of $\omega$ .

I was able to follow the entire proof except for this last line. Here are a few things I don't understand:

a) How are the entries of the matrix co-ordinates of $\omega$? The matrix must be ${n \choose d+1} \times n$, but there are only ${n \choose d}$ co-ordinates.

b) How do the minors give you homogeneous polynomials whose common solutions are exactly the pure tensors $\omega$? Thus far, nothing leads me to believe that the minors of $\varphi_{\omega}$ and of $\varphi_{\nu}$ for $\nu \neq \omega$ are in fact the same homogeneous polynomials, with the co-ordinates of both $\omega$ and $\nu$ as common roots.

I see that this question is similar, but as I understand it, it seeks a different method from the get-go, whereas I seek to understand how this particular proof works.

EDIT:

I will attempt to make (b) clearer. Sorry about the ambiguity. I will define $\varphi_{\omega} : K^n \rightarrow \bigwedge^{d+1} K^n$ as $\varphi_{\omega}(v) = v \wedge \omega$ for every non-zero $\omega \in \bigwedge^d K^n$.

Since the matrix associated with $\varphi_{\omega}$ has entries that are expressions in the co-ordinates of $\omega$, I can see how the $(n-d+1) \times (n-d+1)$ minors are polynomials in the co-ordinates of $\omega$. I can also see, since $\varphi_{\lambda \omega} = \lambda \varphi_{\omega}$ for every non-zero constant $\lambda$, that the co-ordinates of $\lambda \omega$ are also roots of these polynomials. But I don't see why the polynomials defined by the $(n-d+1) \times (n-d+1)$ minors are homogeneous.
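For concreteness, here is a small symbolic check of the homogeneity claim (my own illustration, not part of Gathmann's proof), for the hypothetical case $n=4$, $d=2$, using sympy: the matrix of $\varphi_{\omega}$ has entries linear in the co-ordinates $a_I$ of $\omega$, so every $(n-d+1) \times (n-d+1)$ minor comes out homogeneous of degree $n-d+1 = 3$.

```python
import itertools
import sympy as sp

# Small illustrative case (my own check, not from the notes): n = 4, d = 2.
n, d = 4, 2
d_tuples = list(itertools.combinations(range(n), d))       # basis of wedge^d K^n
d1_tuples = list(itertools.combinations(range(n), d + 1))  # basis of wedge^(d+1) K^n
a = {I: sp.Symbol("a_%d%d" % I) for I in d_tuples}         # symbolic co-ordinates of omega

# Matrix of phi_omega(v) = v wedge omega; column j represents e_j wedge omega.
M = sp.zeros(len(d1_tuples), n)
for j in range(n):
    for I in d_tuples:
        if j in I:
            continue                      # e_j wedge e_I = 0 if j already occurs in I
        J = tuple(sorted(I + (j,)))
        sign = (-1) ** J.index(j)         # sign from moving e_j into sorted position
        M[d1_tuples.index(J), j] += sign * a[I]

# All (n-d+1) x (n-d+1) minors, as polynomials in the co-ordinates a_I.
k = n - d + 1
minors = [sp.expand(M.extract(list(r), list(c)).det())
          for r in itertools.combinations(range(len(d1_tuples)), k)
          for c in itertools.combinations(range(n), k)]
nonzero = [m for m in minors if m != 0]

# Every nonzero minor is homogeneous of degree n-d+1 in the a_I.
assert all(sp.Poly(m, *a.values()).is_homogeneous for m in nonzero)
assert all(sp.Poly(m, *a.values()).total_degree() == k for m in nonzero)
```

The point is visible already in the construction: each matrix entry is $0$ or $\pm a_I$, i.e. homogeneous linear, and a $k \times k$ determinant of linear entries is a sum of products of $k$ of them.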

Secondly, to say that the image of $Gr(d,n)$ under the Plücker embedding is closed in $\mathbb{P}^{{n \choose d}-1}$ we need a set of homogeneous polynomials in $K[x_0, \ldots, x_{{n \choose d} - 1}]$ whose common vanishing set equals the image of $Gr(d,n)$ under the Plücker embedding.

Let us fix an $\omega$ that is a non-zero pure tensor. This proof tells us that the co-ordinates of $\omega$ are a common root of the polynomials given by the $(n-d+1) \times (n-d+1)$ minors of $\varphi_{\omega}$. Now take a different pure tensor $\nu$ (which is not a scalar multiple of $\omega$). We know that the co-ordinates of $\nu$ are also a common root of the polynomials given by the minors of $\varphi_{\nu}$. But we don't know that the polynomials given by the minors of $\varphi_{\nu}$ and $\varphi_{\omega}$ are the same (in fact, I have a hunch that they need not be, but I might be wrong). So, thus far, we do not know that there is a set of polynomials whose set of common roots is exactly the image of the Grassmannian. Once we know this, our statement is proved.
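A small numerical sketch of the resolution of this worry (my own illustration, for the hypothetical case $n=4$, $d=2$): the matrix-building rule, and hence the set of minor polynomials, is the same for every element of $\bigwedge^d K^n$; plugging in different elements only evaluates those fixed polynomials. A pure tensor gives a matrix of rank $n-d$, so all $(n-d+1)$-minors vanish there, while a non-pure tensor gives larger rank, so some minor is nonzero.

```python
import itertools
import numpy as np

# Illustration (not from the notes): the SAME matrix-building rule, hence the
# SAME minor polynomials, evaluated at different elements of wedge^2 K^4.
n, d = 4, 2
d1_tuples = list(itertools.combinations(range(n), d + 1))

def phi_matrix(coords):
    """Matrix of v -> v wedge omega, where coords maps increasing d-tuples
    to the corresponding co-ordinates of omega (missing keys mean 0)."""
    M = np.zeros((len(d1_tuples), n))
    for j in range(n):
        for I, c in coords.items():
            if j in I:
                continue
            J = tuple(sorted(I + (j,)))
            M[d1_tuples.index(J), j] += (-1) ** J.index(j) * c
    return M

pure = {(0, 1): 1.0}                  # omega = e_0 ^ e_1, a pure tensor
mixed = {(0, 1): 1.0, (2, 3): 1.0}    # e_0^e_1 + e_2^e_3, not pure

# Pure tensor: rank n-d, so ALL (n-d+1)-minors of its matrix vanish.
assert np.linalg.matrix_rank(phi_matrix(pure)) == n - d
# Non-pure tensor: rank > n-d, so SOME (n-d+1)-minor is nonzero there.
assert np.linalg.matrix_rank(phi_matrix(mixed)) > n - d
```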

I hope that makes it clearer.

1 Answer


a) Pick an element of $\bigwedge^dK^n$ representing $\omega\in\mathbb P^{\binom{n}{d}-1}$, and by abuse of notation denote it by $\omega$ also. Let $\{e_1,\ldots,e_n\}$ be a basis for $K^n$ and write $$ \omega=\sum_{i_1<\ldots<i_d}a_{i_1,\ldots,i_d}e_{i_1}\wedge\ldots\wedge e_{i_d}. $$ These $a$ are the coordinates of $\omega$. Each entry of the matrix is the coefficient of $e_{i_1}\wedge\ldots\wedge e_{i_{d+1}}$ in $e_j\wedge\omega$ for some $i_1<\ldots<i_{d+1}$ and $j$. If $j=i_s$ for some $s$ this is $$ (-1)^{s-1}a_{i_1,\ldots,\widehat{i_s},\ldots,i_{d+1}}. $$ Otherwise the entry is zero. So the number of matrix entries differs from the number of coordinates because there are zeros and repetitions, but the entries are all linear in the coordinates. Therefore the $(n-d+1)\times(n-d+1)$ minors are homogeneous polynomials of degree $n-d+1$ in the coefficients $a_{i_1,\ldots,i_d}$.
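The entry description above can be checked mechanically in a small case (my own verification, for $n=4$, $d=2$, using sympy): every entry of the matrix is $0$ or $\pm a_I$ for a single co-ordinate $a_I$, and the entry count $\binom{n}{d+1}\cdot n$ exceeds the co-ordinate count $\binom{n}{d}$ precisely because of the zeros and repetitions.

```python
import itertools
import sympy as sp

# Verification of the entry description for n = 4, d = 2 (my own check).
n, d = 4, 2
d_tuples = list(itertools.combinations(range(n), d))
d1_tuples = list(itertools.combinations(range(n), d + 1))
a = {I: sp.Symbol("a_%d%d" % I) for I in d_tuples}

# Build the matrix of phi_omega exactly as in the answer's entry formula.
M = sp.zeros(len(d1_tuples), n)
for j in range(n):
    for I in d_tuples:
        if j in I:
            continue
        J = tuple(sorted(I + (j,)))
        M[d1_tuples.index(J), j] += (-1) ** J.index(j) * a[I]

entries = list(M)
# Every entry is 0 or +/- a single coordinate a_I, hence linear in the a's.
assert all(e == 0 or e in a.values() or -e in a.values() for e in entries)
# More entries than coordinates: zeros and repetitions account for the gap.
assert len(entries) == len(d1_tuples) * n        # binom(4,3) * 4 = 16 entries
assert len(a) == len(d_tuples)                   # binom(4,2) = 6 coordinates
```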

b) The polynomials obtained depend only on $(n,d)$. They don't depend on a specific element $\omega$. By way of analogy, consider a proof that singular matrices form a closed subset of $M_{2,2}(k)$. For a specific matrix $$ A=\begin{bmatrix}a&b\\c&d\end{bmatrix}, $$ we know that $A$ is singular iff $\det(A)=ad-bc$ vanishes. Thus singular matrices are roots of the polynomial $ad-bc$. This polynomial doesn't depend on any specific element of $M_{2,2}(k)$. There is some abuse of notation here (which may be causing confusion): In the equation for $A$ above, $A$ is an arbitrary element of $M_{2,2}(k)$ and $a,b,c,d$ are elements of $k$, whereas in "the polynomial $ad-bc$", $a,b,c,d$ are coordinates.
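The analogy can be made concrete with a short sympy snippet (my own illustration): there is one fixed polynomial in the coordinate functions of $M_{2,2}(k)$, and different concrete matrices are just different evaluation points of it.

```python
import sympy as sp

# One fixed polynomial in the coordinate functions a, b, c, d of M_{2,2}(k).
a, b, c, d = sp.symbols("a b c d")
det = sp.Matrix([[a, b], [c, d]]).det()
assert det == a*d - b*c

# The SAME polynomial is evaluated at every concrete matrix:
assert det.subs({a: 1, b: 2, c: 2, d: 4}) == 0   # [[1,2],[2,4]] is singular
assert det.subs({a: 1, b: 0, c: 0, d: 1}) == 1   # the identity is invertible
```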

stewbasic
  • Thank you very much. I have updated the question. Please let me know if anything is still unclear. – Infinite_Fun Jul 27 '17 at 04:28
  • @Infinite_Fun they are homogeneous because the matrix entries are linear. The polynomials are not defined by a fixed element $\omega$. I tried to explain by analogy, HTH. – stewbasic Jul 27 '17 at 05:03