Say I was trying to find the derivative of $x^2$ using differentiation from first principles. The usual argument would go something like this:
If $f(x)=x^2$, then \begin{align} f'(x) &= \lim_{h \to 0}\frac{(x+h)^2-x^2}{h} \\ &= \lim_{h \to 0}\frac{2hx+h^2}{h} \\ &= \lim_{h \to 0} (2x+h) \, . \end{align} As $h$ approaches $0$, $2x+h$ approaches $2x$, so $f'(x)=2x$.
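To make my worry concrete, the full chain of equalities above is $$ \lim_{h \to 0}\frac{(x+h)^2-x^2}{h} = \lim_{h \to 0}\frac{2hx+h^2}{h} = \lim_{h \to 0}(2x+h) = 2x \, , $$ and each equals sign between two limits seems to presuppose that the limits on both sides exist.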
Throughout this argument, I assumed that $$ \lim_{h \to 0}\frac{f(x+h)-f(x)}{h} $$ was actually a meaningful object, that is, that the limit actually existed. I don't really understand what justifies this assumption. Sometimes, assuming that an object is well-defined can lead you to incorrect conclusions. For example, if we assume that $\log(0)$ makes sense, then since $0 \cdot 0 = 0$, the identity $\log(ab)=\log(a)+\log(b)$ gives $$ \log(0)=\log(0 \cdot 0)=\log(0)+\log(0) \implies \log(0)=0 \, . $$ So the assumption that $\log(0)$ represented anything meaningful led us to the incorrect conclusion that it equals $0$.

Often, to prove that a limit exists, we manipulate it until we can write it in a familiar form. This can be seen in the proofs of the chain rule and product rule. But it often seems that this manipulation can only be justified if we know the limit exists in the first place! So what is really going on here?
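To illustrate, here is the kind of manipulation I have in mind in the product rule case, the standard splitting of the difference quotient: $$ \frac{f(x+h)g(x+h)-f(x)g(x)}{h} = f(x+h)\,\frac{g(x+h)-g(x)}{h} + g(x)\,\frac{f(x+h)-f(x)}{h} \, , $$ after which the limit is taken term by term using the algebra of limits.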
For another example, the chain rule is often stated as:
Suppose that $g$ is differentiable at $x$, and $f$ is differentiable at $g(x)$. Then $(f \circ g)$ is differentiable at $x$, and $$ (f \circ g)'(x) = f'(g(x))g'(x) \, . $$
If the proof that $(f \circ g)$ is differentiable at $x$ simply amounts to computing the derivative using the limit definition, then I am again left unsatisfied: doesn't that computation assume that $(f \circ g)'(x)$ makes sense in the first place?
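To be concrete, the kind of computation I have in mind is the informal textbook argument (setting aside the separate issue that $g(x+h)-g(x)$ could be zero): $$ (f \circ g)'(x) = \lim_{h \to 0}\frac{f(g(x+h))-f(g(x))}{h} = \lim_{h \to 0}\frac{f(g(x+h))-f(g(x))}{g(x+h)-g(x)} \cdot \frac{g(x+h)-g(x)}{h} = f'(g(x))\,g'(x) \, , $$ where the very first equality already writes $(f \circ g)'(x)$ as a limit, which seems to presuppose that this limit exists.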