
I'm trying to prove the following:

Let $P(x_1,\cdots,x_r)$ be a homogeneous polynomial. Then $P(\alpha_1,\cdots,\alpha_r) = 0$ for all $(\alpha_1,\cdots,\alpha_r) \in \mathbb{R}^r_{+}$ such that $\sum\limits^r_{j=1} \alpha_j = 1$ if and only if $P(.)$ is the zero polynomial.

I had posted it here earlier, but the wording there was incorrect, hence I am posting it as a new question.

I think I have also proven it, using induction, with $A.\Gamma.$'s help on this question. I used induction on $r$ (it is trivially true for $r=1$). Then, for a fixed $r$, I fixed any $m \in [r]$ and used induction on the exponent of $x_m$: first showing that the coefficient of any term with $x^0_m$ is $0$, then, after removing those terms, that the coefficient of any term with $x^1_m$ is $0$, and so on, as outlined for the case $r=2$ by $A.\Gamma.$ here.
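For the case $r=2$, here is a small symbolic sketch of that substitution idea (using sympy; the generic quadratic and the coefficient names $a, b, c$ are just for illustration):

```python
import sympy as sp

x1, x2 = sp.symbols('x1 x2')
a, b, c = sp.symbols('a b c')

# a generic homogeneous polynomial of degree 2 in (x1, x2)
P = a*x1**2 + b*x1*x2 + c*x2**2

# restrict to the simplex: x2 = 1 - x1
restricted = sp.expand(P.subs(x2, 1 - x1))

# if P vanishes for every x1 in (0, 1), each coefficient of this
# univariate polynomial in x1 must be zero
coeffs = sp.Poly(restricted, x1).all_coeffs()
print(sp.solve(coeffs, [a, b, c], dict=True))  # [{a: 0, b: 0, c: 0}]
```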

My first question is: is the above statement correct? If not, kindly suggest a suitable modification to a correct version. Perhaps I have made mistakes in the proof and hence do not realize that it is incorrect.

Secondly, is there any existing result (perhaps a fundamental theorem for multivariate polynomials, for example) from which this follows? I tried looking for one and found Bezout's Theorem, but I am not sure if and how the statement can be derived from it. Thank you so much for your help.

Canine360

2 Answers


Since $P$ is a homogeneous polynomial (of degree $k$, say), $P(\alpha_1,..,\alpha_n)=a^{k}P(\beta_1,..,\beta_n)=0$ for any $\alpha_1,..,\alpha_n$, where $\beta_i = \alpha_i /a$ and $a=\sum \alpha_i$ (assuming that $a \neq 0$). This proves that $P(\alpha_1,..,\alpha_n)=0$ whenever $\sum \alpha_i \neq 0$. By continuity we get $P \equiv 0$. I am unable to suggest how the question should be modified, but, as stated, it seems quite trivial.
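A quick numerical sanity check of the scaling identity $P(\alpha)=a^{k}P(\alpha/a)$ used above (a minimal sketch; the degree-$3$ polynomial is an arbitrary example):

```python
import numpy as np

def P(x, y, z):
    # an arbitrary homogeneous polynomial of degree k = 3
    return x**3 - 2*x*y*z + 4*y**2*z

rng = np.random.default_rng(0)
alpha = rng.normal(size=3)
a = alpha.sum()       # assume a != 0
beta = alpha / a      # beta_i = alpha_i / a, so sum(beta) = 1

# homogeneity of degree 3: P(alpha) = P(a * beta) = a**3 * P(beta)
print(np.isclose(P(*alpha), a**3 * P(*beta)))  # True
```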

  • I didn't understand how we get $P \equiv 0$ by continuity. So are you saying we're showing that it is zero for any vector the sum of whose components is not zero, and then by continuity it is zero for any vector, and hence it is the zero polynomial? – Canine360 Jun 05 '18 at 08:55
  • So the induction argument suggested by A.$\Gamma$. here is not necessary? https://math.stackexchange.com/questions/2805121/how-to-know-if-a-given-infinite-set-of-vectors-lie-in-the-same-hyperplane/2805152?noredirect=1#comment5784202_2805152 – Canine360 Jun 05 '18 at 08:56
  • @Canine360 If $a_1+a_2+..+a_n=0$ then $(a_1+\frac 1 n)+a_2+..+a_n\neq 0$ and $(a_1+\frac 1 n,a_2,...,a_n) \to (a_1,a_2,...,a_n)$ as $n \to \infty$. – Kavi Rama Murthy Jun 05 '18 at 09:03
  • But what if some of the $\beta_i$'s are negative and some are positive? The condition only holds for non-negative $\alpha_i$'s as mentioned. – Canine360 Jun 07 '18 at 12:21
  • If a polynomial is zero on $\mathbb R_{+}^{r}$ then it is zero everywhere. – Kavi Rama Murthy Jun 07 '18 at 12:29
  • On second thoughts, doesn't having an infinite number of zeros itself imply it's the zero polynomial? E.g., as discussed here: https://math.stackexchange.com/questions/245503/bivariate-polynomials-over-finite-fields – Canine360 Jun 07 '18 at 17:49
  • In the reference the polynomials are over $\mathbb Z$. – Kavi Rama Murthy Jun 07 '18 at 23:39
  • Yeah, that question was meant as an example. But wouldn't a non-zero multivariate polynomial over the reals also have a finite number of zeros? And hence, if it has an infinite number of zeros, wouldn't that mean it is the zero polynomial? – Canine360 Jun 08 '18 at 01:14
  • $f(x,y)=xy$ has infinitely many zeros. – Kavi Rama Murthy Jun 08 '18 at 03:32
  • OK, I think my understanding was not clear. If there exists an open ball in $\mathbb{R}^n$ such that $P(.)$ is zero everywhere on the ball, then we can say it is the zero polynomial, right? – Canine360 Jun 08 '18 at 05:20
  • You are right. If $P$ is zero on an open ball then it is zero everywhere. – Kavi Rama Murthy Jun 08 '18 at 05:22
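For reference, a sketch of why vanishing on an open ball forces the zero polynomial: if $P \equiv 0$ on some ball $B(c,\varepsilon) \subset \mathbb{R}^n$, then every partial derivative $\partial^{\gamma} P(c)$ vanishes, and since the Taylor expansion
$$P(x) = \sum_{\gamma} \frac{\partial^{\gamma} P(c)}{\gamma!}\,(x-c)^{\gamma}$$
is exact for polynomials, all coefficients of $P$ vanish, i.e. $P \equiv 0$ on all of $\mathbb{R}^n$.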

Consider $$x+y=0,$$ which has infinitely many solutions: $y=-x$, $x\in \mathbb{R}$. So a non-zero polynomial in more than one variable can have infinitely many zeros.
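A trivial check of this counterexample (a minimal sketch):

```python
# f(x, y) = x + y vanishes at every point of the line y = -x,
# yet it is clearly not the zero polynomial.
f = lambda x, y: x + y

print(all(f(t, -t) == 0 for t in range(-1000, 1001)))  # True: infinitely many zeros
print(f(1, 1))                                          # 2, so f is not identically 0
```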