It is commonly known that the quadratic equation $ax^2+bx+c=0$ has two solutions given by: $$x = \frac{-b\pm \sqrt{b^2-4ac}}{2a}$$ But how do I prove that another root couldn't exist?
I think derivation of quadratic formula is not enough....
Suppose there are three distinct roots $x,y,z$. One has $$\begin{cases}ax^2+bx+c=0\\ay^2+by+c=0\\az^2+bz+c=0\end{cases}\Rightarrow\begin{cases}a(x^2-y^2)+b(x-y)=0\\a(x^2-z^2)+b(x-z)=0\end{cases}\Rightarrow\begin{cases}a(x+y)+b=0\\a(x+z)+b=0\end{cases}$$ where the last step divides by $x-y\neq0$ and $x-z\neq0$. Subtracting the two resulting equations, and using $a\neq0$, it follows that $$a(z-y)=0\Rightarrow z=y,$$ which is a contradiction.
I think derivation of quadratic formula is not enough....
Yes, it is. The derivation has the form: if $ax^2+bx+c=0$, then $x = \frac{-b \pm \sqrt{b^2-4ac}}{2a}$. That implication alone says that every root must be one of those two values, so the derivation is a proof if you pay attention to its logical form.
The trickiest step is simply that if $y^2 = k$ for $k \geq 0$ then $y = \pm \sqrt k$, if you do not take this as evident.
$$0 = ax^2 + bx + c$$
We solve this equation by completing the square. It offers up to two distinct solutions. The name we give to the general solutions is the quadratic formula. That's all there is.
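For concreteness, the completing-the-square computation (assuming $a\neq0$) is a chain of equivalences: $$ax^2+bx+c=0 \iff x^2+\frac ba x=-\frac ca \iff \left(x+\frac b{2a}\right)^2=\frac{b^2-4ac}{4a^2} \iff x+\frac b{2a}=\pm\frac{\sqrt{b^2-4ac}}{2a}$$ (for the last step over $\Bbb R$ one needs $b^2-4ac\ge0$; over $\Bbb C$ it always holds). Since every step is an equivalence, no value of $x$ outside the two given by the formula can satisfy the original equation.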
If we consider the case of real solutions, and you think there may be a sneaky third solution, remember that the graph of $f(x) = ax^2 +bx +c$ is a parabola opening upward or downward (depending on the sign of $a$). How many times could a parabola cross a horizontal line?
Hint $ $ If $f(x)\,$ is a polynomial of $\color{#0a0}{{\rm degree}\,2}\,$ with coef's in a $\rm\color{#c00}{field}$ (or $\rm\color{#c00}{domain}$) $F$ (e.g. $\,\Bbb Q,\Bbb R,\Bbb C,\Bbb Z_p)$ and $\,f\,$ has $\,2\,$ distinct roots $\,a\neq b,\,$ then applying the BiFactor Theorem below we deduce that $\,f(x) = c(x\!-\!a)(x\!-\!b)\,$ for $\,\color{#0a0}{0\neq c}\in F.\,$ Thus if $\,d\neq a,b\,$ then $\,f(d) = c(d\!-\!a)(d\!-\!b)\ne 0\,$ since each factor is $\ne 0\,$ (by $\,x,y\ne 0\,\Rightarrow\,xy\ne 0\,$ in a $\rm\color{#c00}{domain}$). Thus a $\rm\color{#0a0}{quadratic}$ has at most $\,\color{#0a0}2\,$ roots.
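For a concrete instance of the hint (my own example), take $f(x)=2x^2-10x+12$ over $\Bbb Q$, with the distinct roots $2$ and $3$: $$f(x)=2(x-2)(x-3),\qquad f(d)=2(d-2)(d-3)\neq0\ \text{ for every }d\neq2,3,$$ since none of the three factors vanishes and $\Bbb Q$ has no zero divisors.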
BiFactor Theorem $ $ Suppose that $\,a,b\,$ are elements of a field $\,F\,$ and $\:f\in F[x],\,$ i.e. $\,f\,$ is a polynomial with coefficients in $\,F.\,$ If $\ \color{#C00}{a\ne b}\ $ are elements of $\,F\,$ then
$$ f(a) = 0 = f(b) \iff f\, =\, (x\!-\!a)(x\!-\!b)\ h\ \ {\rm for\ \ some}\ \ h\in F[x]$$
Proof $\,\ (\Leftarrow)\,$ clear. $\ (\Rightarrow)\ $ Applying Factor Theorem twice, while canceling $\: \color{#C00}{a\!-\!b\ne 0},$
$$\begin{eqnarray}\:f(b)= 0 &\ \Rightarrow\ & f(x)\, =\, (x\!-\!b)\,g(x)\ \ {\rm for\ \ some}\ \ g\in F[x]\\[.3em] f(a) = (\color{#C00}{a\!-\!b})\,g(a) = 0 &\Rightarrow& g(a)\, =\, 0\,\ \Rightarrow\ g(x) \,=\, (x\!-\!a)\,h(x)\ \ {\rm for\ \ some}\ \ h\in F[x]\\[.3em] &\Rightarrow& f(x)\, =\, (x\!-\!b)\,g(x) \,=\, (x\!-\!b)(x\!-\!a)\,h(x)\end{eqnarray}$$
Generally by inductively iterating the Factor Theorem (as we did above) we infer
Theorem $ $ A nonzero polynomial $\,f\,$ over a field (or domain) has no more roots than its degree $\,n.\,$
Proof $ $ If $\,f\,$ has $\,\ge n\,$ distinct roots $\,r_i$ then inductively applying Factor Theorem as above shows $\,f = c(x\!-\!r_1)\cdots (x\!-\!r_n),\,$ so $\ r\ne r_i\Rightarrow\, f(r)= c(r\!-\!r_1)\cdots (r\!-\!r_n) \ne 0\,$ by all factors $\ne 0\,$ (in a domain). Thus $\,f\,$ has at most $\,n\,$ roots.
The above root-bound property characterizes integral domains (commutative rings $\ne \{0\}$ which satisfy $\,ab=0\,\Rightarrow\, a=0\,$ or $\,b=0),\,$ viz. a ring $\: D\:$ is a domain $\iff$ every nonzero polynomial $\ f(x)\in D[x]\ $ has at most $\, \deg f\,$ roots in $D.\,$ For a simple proof see this answer, where I illustrate it constructively in $\: \mathbb Z/m\: $ by showing that, given any $\:f(x)\:$ with more roots than its degree, we can quickly compute a nontrivial factor of $\:m\:$ via a quick $\gcd.\,$
The quadratic case of this result is at the heart of some integer factorization algorithms, which e.g. attempt to factor $\:m\:$ by searching for a square-root of $1$ that is nontrivial $(\not\equiv \pm1)$ in $\: \mathbb Z/m.$
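To illustrate that remark with a sketch of my own (not part of the original answer): if $x^2\equiv1\pmod m$ but $x\not\equiv\pm1\pmod m$, then $m\mid(x-1)(x+1)$ while $m$ divides neither factor, so $\gcd(x-1,m)$ is a nontrivial factor of $m$.

```python
from math import gcd

def factor_from_sqrt_of_one(x, m):
    """Given x with x^2 ≡ 1 (mod m) and x ≢ ±1 (mod m),
    return a nontrivial factor of m with a single gcd.
    (Illustrative helper; the name is mine, not a library function.)"""
    assert (x * x) % m == 1 and x % m not in (1, m - 1)
    d = gcd(x - 1, m)   # m | (x-1)(x+1), but m divides neither factor,
    assert 1 < d < m    # so this gcd is a proper divisor of m
    return d

# Example: 21^2 = 441 ≡ 1 (mod 110) and 21 ≢ ±1 (mod 110)
print(factor_from_sqrt_of_one(21, 110))   # prints 10, and 110 = 10 * 11
```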
Alternatively, instead of the Factor Theorem, we can apply elimination methods as here, viz. if $\,f(x) = ax^2\!+\!bx\!+\!c,\ a\ne 0\:$ and $\,f(x) = f(y) = f(z)\,$ then the following determinant $= 0$, having proportional first and last columns.
$$ 0\, =\, \left | \begin{array}{ccc} f(x) &\!\! x &\!\! 1 \\ f(y) &\!\! y &\!\! 1 \\ f(z) &\!\! z &\!\! 1 \end{array} \right | \, =\, \left | \begin{array}{ccc} ax^2&\!\!\! x &\!\!\! 1 \\ ay^2 &\!\! y &\!\!\! 1 \\ az^2 &\!\! z &\!\!\! 1 \end{array} \right | \, =\, a\:\!V(x,y,z)\, =\, a\,(x\!-\!y)(y\! -\! z)(x\!-\!z)\qquad $$
hence in a field (or domain) they cannot be distinct: either $\,x=y\,$ or $\,y=z\,$ or $\,z=x$.
Here $\,V\,$ denotes the ubiquitous Vandermonde determinant (cf. this answer or Wikipedia). Ataulfo's (hot-list popular) answer here boils down to this idea (their elimination amounts to computing the Vandermonde determinant by a common recursive method).
Beware that there are very simple examples of failure in non-domains, e.g. if $\,ab=0, a,b\neq 0\,$ then $\,ax\,$ has at least $2$ roots $\,b,0,\,$ and $\,(x-a)(x-b)\,$ has at least $\,4\,$ roots $\,a,b,0,a+b\, $ if $\,a\neq b.\,$ A simple concrete case is in $\,\Bbb Z_8 = $ integers $\!\bmod 8\!:\,$ $\,\text{odd}^2= 1\,$ so $\,x^2-1\,$ has $\,4\,$ roots $\,\pm1,\pm 3.\,$ The quadratic $\:\!x^2\:\!$ has infinitely many roots $\,x=2t^n\,$ over $\,\Bbb Z_4[t]$.
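And a quick brute-force confirmation of the $\Bbb Z_8$ example (an added check, not from the answer):

```python
# Every odd residue mod 8 squares to 1, so x^2 - 1 has four roots in Z/8.
print([x for x in range(8) if (x * x - 1) % 8 == 0])   # [1, 3, 5, 7], i.e. ±1, ±3 (mod 8)
```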
A more general answer to this question lies in the following theorem:
Theorem If $P(x)$ is a polynomial of degree $n$ and $a$ is a value for which $P(a) = 0$, then $P(x) = (x - a)Q(x)$, where $Q(x)$ is a polynomial of degree $n - 1$.
This theorem is a simple consequence of polynomial long division. By long division, $P(x) = (x - a)Q(x) + R(x)$, for some polynomials $Q(x), R(x)$ with the degree of $R(x)$ less than the degree of $(x-a)$. But since $x - a$ is of degree $1$, that means $R(x)$ must be a constant (of degree $0$, or the zero polynomial): $R(x) = R$.
But $P(a) = 0$, so $0 = (a - a)Q(a) + R$, and so $R = 0$ and we get just $P(x) = (x-a)Q(x)$. Since the degree of the product of two polynomials is the sum of their degrees, the degree of $P(x)$ is one greater than that of $Q(x)$, so the degree of $Q(x)$ must be $n-1$.
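A tiny concrete check (added for illustration): with $P(x)=x^2-5x+6$ and the root $a=2$, $$x^2-5x+6=(x-2)(x-3),$$ so $Q(x)=x-3$ indeed has degree $n-1=1$.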
Now, if $P_n(x)$ is of degree $n > 0$ with coefficients in a field (e.g. $\Bbb R,\Bbb C)$ and $a_n$ is a root, then $$P_n(x) = (x - a_n)P_{n-1}(x)$$ for some $n-1$ degree polynomial $P_{n-1}(x)$. If $P_n(x)$ has another root $a \ne a_n$, then $a$ must also be a root of $P_{n-1}(x)$:
$$0 = P_n(a) = (a - a_n)P_{n-1}(a)$$
Since $a - a_n \ne 0$ in a field, it is invertible so cancellable, so cancelling it yields $P_{n-1}(a) = 0$ (beware that this step may fail in more general coefficient rings since there may exist nontrivial zero-divisors $\,cd=0,\ c,d\neq 0,\,$ as explained in this answer here).
Conversely, if $a_{n-1}$ is a root of $P_{n-1}$, then $$P_n(a_{n-1}) = (a_{n-1} - a_n)P_{n-1}(a_{n-1}) = 0$$ So $a_{n-1}$ must also be a root of $P_n$ (which may be the same or different from $a_n$). We can also apply the theorem to $P_{n-1}$ and $a_{n-1}$: $$P_{n-1}(x) = (x - a_{n-1})P_{n-2}(x)$$ for some degree $n-2$ polynomial $P_{n-2}(x)$. By combining, we see that $$P_n(x) = (x - a_n)(x - a_{n-1})P_{n-2}(x)$$ As long as we can keep finding roots for the reduced polynomials, we can keep this up. If we can find $k$ such roots, $$P_n(x) = (x - a_n)(x - a_{n-1})(x - a_{n-2})...(x - a_{n+1-k})P_{n-k}(x)$$ Then $P_{n-k}(x)$ has to be a polynomial of degree $n-k$.
If we can find $n$ such roots, then $$P_n(x) = (x-a_n)(x-a_{n-1})...(x-a_1)P_0$$ where $P_0$ is a constant (a $0$-degree polynomial). $P_0 \ne 0$, since if it were we would have $P_n(x) = 0$ everywhere. But then the degree of $P_n$ would be $0$ (or less; some people define the degree of the zero polynomial to be $-\infty$), contrary to our original condition on $P_n(x)$. So in this case, $P_n(x)$ cannot have any other roots distinct from $a_1, a_2, ..., a_n$, since any other value would leave all factors in the expression non-zero.
So $P_n(x)$ can have at most $n$ roots.
The Fundamental Theorem of Algebra says that any non-constant polynomial over the complex numbers has a root. This theorem requires a substantial development of the properties of complex numbers to prove. But by it, we see that the process above does not terminate until you get to the constant. Thus a polynomial of degree $n$ will always have exactly $n$ roots $a_1, a_2, ..., a_n$. But remember that the $a_i$ values do not have to be distinct. The number of times a particular value occurs in this list is called the multiplicity of the root. So you only get $n$ if you count the roots by their multiplicity.
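For example (an added illustration of counting with multiplicity): $$x^3-3x+2=(x-1)^2(x+2)$$ has only the two distinct roots $1$ and $-2$, but three roots counted with multiplicity: $1,1,-2$.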
Suppose the quadratic had three distinct real roots and $a\neq0$.

The hypotheses of Rolle's theorem are then satisfied on each of the two intervals between consecutive roots, so the derivative would have two distinct roots, and, applying Rolle's theorem again, the second derivative would have a root as well. But the second derivative is the nonzero constant $2a$, a contradiction.
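Spelling out the derivatives (an added detail): $$f(x)=ax^2+bx+c,\qquad f'(x)=2ax+b,\qquad f''(x)=2a\neq0,$$ and a nonzero constant has no roots at all.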
Let $$r=\frac{-b+\sqrt{b^2-4ac}}{2a},\quad s=\frac{-b-\sqrt{b^2-4ac}}{2a}.$$ Simple calculation shows that $$r+s=-\frac ba\quad\text{ and }\quad rs=\frac{b^2-(b^2-4ac)}{4a^2}=\frac ca.$$ Thus $$a(x-r)(x-s)=a[x^2-(r+s)x+rs]=ax^2+bx+c.$$ If $t$ is any root of the quadratic equation $ax^2+bx+c=0,$ then $$a(t-r)(t-s)=at^2+bt+c=0.$$ Since $a\ne0$ this means that $$(t-r)(t-s)=0$$ whence $$t-r=0\quad\text{ or }\quad t-s=0,$$ i.e., $$t=r\quad\text{ or }\quad t=s.$$
I think I can simplify the 'polynomial long division' answer. The special case of polynomial long division says that, for any polynomial $P$ and any real number $a$, $$P(x) = Q(x)(x-a) + R$$ for some polynomial $Q$ and constant $R$ (use long division to divide $P$ by $x-a$ and observe that $R$ is required to be a constant since it must have lower degree than $x-a$, which is a first-degree polynomial).
The above is true for all $x$, so substituting $x=a$ we get $$P(a) = Q(a)(a-a) + R$$ Obviously, $a-a=0$, so $R=P(a)$.
If $a$ is a 0 of $P$ ($P(a)=0$), then $R=0$, so $x-a$ divides $P(x)$.
Now, if we have any root $a$ of a quadratic polynomial $P$, we know $$P(x) = Q(x)(x-a)$$ $Q$ must be a first-degree polynomial, since anything of higher degree, multiplied by a first-degree polynomial, would produce a higher-than-second-degree polynomial. A first-degree polynomial can always be written as $c(x-b)$ with $c\neq0$, so $$P(x) = c(x-b)(x-a)$$ Now, assuming we're working over an integral domain (which $\mathbb R$ and $\mathbb C$ both are), $$P(x) = 0 \Rightarrow x-b=0 \text{ or } x-a=0$$ since $c\neq0$. So $a$ and $b$ are the only zeros of $P$ (although it is possible that $a=b$).
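As a concrete instance (my own example): $P(x)=2x^2-2x-4$ has the root $a=2$, and $$2x^2-2x-4=2(x-2)(x+1),$$ so here $c=2,\ b=-1$, and over $\mathbb R$ the only zeros of $P$ are $2$ and $-1$.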
Suppose not. Then there are at least $3$ distinct roots $x_1,x_2,x_3$, and repeatedly applying the factor theorem gives $$P(x)=(x-x_1)(x-x_2)(x-x_3)Q(x)$$ for some polynomial $Q$. The right-hand side has degree at least $3$, while a quadratic has degree $2$, so this cannot happen.
If you wrote the theorem out in full, it would look like this:
THEOREM: Let $a,b,$ and $c$ be real numbers with $a \ne 0$. Then $ax^2+bx+c=0$ if and only if $x = \dfrac{-b \pm \sqrt{b^2 - 4ac}}{2a}$.
That means
if $ax^2+bx+c=0$, then $x = \dfrac{-b \pm \sqrt{b^2 - 4ac}}{2a}$
and
if $x = \dfrac{-b \pm \sqrt{b^2 - 4ac}}{2a}$, then $ax^2+bx+c=0$.
Suppose $a \neq 0$. Then $ax^2+bx+c=0$ has the same set of solutions as $x^2+\frac{b}{a}x+\frac{c}{a}=0$ which means that we can reduce all equations to
$$x^2+px+q=0$$
without loss of generality.
Suppose $q=0$. Then the equation is $x^2+px=x(x+p)=0$, so either $x=0$ or $x+p=0$, and the latter has exactly one solution, $x=-p$; so in this case there are at most two solutions.
Next, suppose $q \neq 0$. The equation can then be rewritten as
$$x(x+p)=-q$$
Substitute $y=x+\frac{p}{2}$, so that $x=y-\frac{p}{2}$ and $x+p=y+\frac{p}{2}$, which makes
$$\left(y-\frac{p}{2}\right)\left(y+\frac{p}{2}\right)=y^2-\frac{p^2}{4}=-q$$
or
$$y^2=-q+\frac{p^2}{4}=k,$$
writing $k$ for this constant (so as not to clash with the original coefficient $b$).
So it comes down to the question of how many different solutions the equation $y^2=k$ can have.
Suppose $y_1$ and $y_2$ are both solutions:
$$y_1^2=k$$
$$y_2^2=k$$
$$y_1^2-y_2^2=(y_1-y_2)(y_1+y_2)=0$$
So either $y_1=y_2$, which gives the same solution, or $y_1=-y_2$. In other words, every solution of $y^2=k$ is one of at most two values, $\pm y_1$. Since $y=x+\frac{p}{2}$ is an invertible change of variable, the original equation has the same number of solutions: no more than $2$.
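A worked numeric instance of the reduction (added for concreteness): for $x^2+6x+5=0$ we have $p=6,\ q=5$; with $y=x+3$ we get $$y^2=-q+\frac{p^2}{4}=-5+9=4,$$ so $y=\pm2$, hence $x=y-3\in\{-1,-5\}$, and by the argument above there can be no further solutions.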
Finally, if $a=0$ the equation is not quadratic at all: it reduces to the linear equation $bx+c=0$, which (for $b\neq0$) has exactly one solution, $x=-\frac{c}{b}$.