I'm trying to better understand how things get proved. I ran into a proof of one of the corollaries of the Mean Value Theorem:
Suppose that $f$ is continuous on $[a,b]$ and differentiable on $(a,b)$. If $f'(x)>0$ at each point $x\in (a,b)$, then $f$ is increasing on $[a,b]$. If $f'(x)<0$ at each point $x\in (a,b)$, then $f$ is decreasing on $[a,b]$.
The proof in the book uses the Mean Value Theorem to show that $f(x_2)-f(x_1)>0$ (I've reproduced that step at the end of this question). But why did we need this theorem? Couldn't we repurpose our usual difference quotient limit? Since $f$ is differentiable:
$$ \lim_{x_2 \to x_1^+}\frac{f(x_2)-f(x_1)}{x_2-x_1}>0 \implies f(x_2)-f(x_1)>0 \quad\text{for } x_2>x_1 \text{ sufficiently close to } x_1, $$
since the difference quotient must stay positive near $x_1$ and $x_2-x_1>0$ there. And since this is true for all $x_1\in(a,b)$, it's true on the whole interval.
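To spell out the implication above: the one-sided limit equals $f'(x_1)>0$, so taking $\varepsilon = f'(x_1)/2$ in the definition of the limit gives a $\delta>0$ such that
$$ 0 < x_2 - x_1 < \delta \implies \frac{f(x_2)-f(x_1)}{x_2-x_1} > \frac{f'(x_1)}{2} > 0 \implies f(x_2) > f(x_1). $$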
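For reference, here is the step in the book's proof, as I understand it: for any $x_1, x_2 \in [a,b]$ with $x_1 < x_2$, $f$ is continuous on $[x_1,x_2]$ and differentiable on $(x_1,x_2)$, so the Mean Value Theorem gives some $c \in (x_1,x_2)$ with
$$ f(x_2) - f(x_1) = f'(c)\,(x_2 - x_1) > 0, $$
because $f'(c)>0$ and $x_2-x_1>0$.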