I am reading Doerfler's *Calculating Curves*, and I've been puzzling over one particular derivation. I have understood most of the calculus involved, but I am still missing a step. In short:
We start with a smooth function $F$ and the relation $F(u,v,w)=0$. At a particular point $p \in \mathbb{R}^3$ with $F(p)=0$, the partial derivatives of $F$ are all nonzero; in particular $\partial F/\partial w \neq 0$, so the implicit function theorem lets us solve for $\widehat{w}(u,v)$ in a neighborhood around $p$.
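(A side note of my own, not from the book: the implicit function theorem also gives the partial derivatives of the solved function, $$\frac{\partial \widehat{w}}{\partial u} = -\frac{\partial F/\partial u}{\partial F/\partial w}, \qquad \frac{\partial \widehat{w}}{\partial v} = -\frac{\partial F/\partial v}{\partial F/\partial w},$$ which are well defined precisely because $\partial F/\partial w \neq 0$ near $p$.)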
We also have three planar curves $\gamma_1,\gamma_2,\gamma_3$, with the very special property that $\gamma_1(u),\gamma_2(v),\gamma_3(w)$ are collinear when, and only when, $F(u,v,w) = 0$. The curves are furthermore well behaved: they are smooth and never intersect one another.
Picking any two values $u$ and $v$ defines a line through $\gamma_1(u)$ and $\gamma_2(v)$. Let $A(u,v)$ and $B(u,v)$ denote the slope and y-intercept of this line, respectively. (Let's temporarily ignore cases where the line is vertical.)
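Writing these out explicitly (my own expansion; here and below $\gamma_i = \langle f_i, g_i \rangle$ denotes the 2D coordinates of each curve): in the non-vertical case, the two-point form of the line gives $$A(u,v) = \frac{g_2(v) - g_1(u)}{f_2(v) - f_1(u)}, \qquad B(u,v) = g_1(u) - A(u,v)\, f_1(u).$$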
Throughout the neighborhood where we can solve for the implicitly defined function $\widehat{w}(u,v)$, the special property of the curves guarantees that $\gamma_1(u), \gamma_2(v), \gamma_3(\widehat{w}(u,v))$ are collinear. In equation form, with $\gamma_i = \langle f_i,g_i\rangle$ as above, this is equivalent to saying that: $$g_1(u) = A(u,v) \cdot f_1(u) + B(u,v)$$ $$g_2(v) = A(u,v) \cdot f_2(v) + B(u,v)$$ $$g_3(\widehat{w}(u,v)) = A(u,v) \cdot f_3(\widehat{w}(u,v)) + B(u,v)$$ at every point $(u,v)$ in the neighborhood. (Note that the first two equations hold automatically, by the definition of $A$ and $B$; the third carries the actual content of the collinearity condition.)
The author claims that the Jacobian of $A(u,v)$ and $B(u,v)$, $$J(A,B) = \frac{\partial A}{\partial u}\frac{\partial B}{\partial v} - \frac{\partial A}{\partial v}\frac{\partial B}{\partial u},$$ must be nonzero at every point in this neighborhood. I understand intuitively why this should be true: when $J(A,B) \neq 0$, we can locally recover the values of $u$ and $v$ from the line $y = A(u,v)x + B(u,v)$. But I don't understand the author's brief proof that the Jacobian must be nonzero.
First, the author differentiates the first equation with respect to $v$ and the second with respect to $u$. Since $g_1(u)$ does not depend on $v$, and $g_2(v)$ does not depend on $u$, the left-hand sides vanish, leaving: $$0 = \frac{\partial A}{\partial v} f_1(u) + \frac{\partial B}{\partial v}$$ $$0 = \frac{\partial A}{\partial u} f_2(v) + \frac{\partial B}{\partial u}$$
Next, we take a linear combination of these two equations: $\frac{\partial A}{\partial u}$ times the first, minus $\frac{\partial A}{\partial v}$ times the second. We find that: $$0 = \frac{\partial A}{\partial u}\frac{\partial A}{\partial v}(f_1(u) - f_2(v)) + J(A,B)$$
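Spelling out that combination (my own expansion of the step): $$\frac{\partial A}{\partial u}\left(\frac{\partial A}{\partial v} f_1(u) + \frac{\partial B}{\partial v}\right) - \frac{\partial A}{\partial v}\left(\frac{\partial A}{\partial u} f_2(v) + \frac{\partial B}{\partial u}\right) = \frac{\partial A}{\partial u}\frac{\partial A}{\partial v}\big(f_1(u) - f_2(v)\big) + J(A,B) = 0,$$ with $J(A,B)$ as defined above.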
Clearly, at any point $(u_0,v_0)$ where the Jacobian vanishes, we must have $$0 = \frac{\partial A}{\partial u}\frac{\partial A}{\partial v}(f_1(u_0) - f_2(v_0)),$$ with the partial derivatives evaluated at $(u_0,v_0)$.
The author claims that this equation means that the collinearity condition "is reduced to an identity". This is a mysterious comment to me. The collinearity condition is the condition that whenever $F(u,v,w)=0$, we also have that the three points $\gamma_1(u),\gamma_2(v),\gamma_3(w)$ lie on a line. Expressed using determinants, this condition says that $$F(u,v,w) = 0 \iff \det\begin{bmatrix}f_1(u) & g_1(u) & 1 \\ f_2(v) & g_2(v) & 1 \\ f_3(w) & g_3(w) & 1\end{bmatrix} = 0$$ And as far as I can tell, "reduced to an identity" means that the determinant on the right is forced to be identically equal to zero everywhere, not just where $F(u,v,w)=0$.
But I don't see why this must be so. It seems to me that any one of the three factors in $\frac{\partial A}{\partial u}\frac{\partial A}{\partial v}(f_1(u_0) - f_2(v_0))$ might be equal to zero at some point without collapsing the entire determinant.
Am I wrong? The original derivation is very short and light on rigorous detail, just a single sentence: "Let us suppose $J(A,B) = 0$; then [the system of partial derivatives above] yields [the equation $0 = \partial_u A \, \partial_v A \, (f_1(u) - f_2(v))$], and [the determinant equation above] is reduced to an identity."
I have expanded out the detail here as much as possible, filling in the necessary rigorous steps. I also note that if you expand the determinant along the bottom row and divide through by $f_1(u) - f_2(v)$, you get an expression which says that $g_3(w) = A(u,v)\cdot f_3(w) + B(u,v)$. I believe I must still be missing some other key detail.
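To spell that expansion out (my own computation, suppressing arguments and assuming $f_1 \neq f_2$): $$\det\begin{bmatrix}f_1 & g_1 & 1 \\ f_2 & g_2 & 1 \\ f_3 & g_3 & 1\end{bmatrix} = f_3(g_1 - g_2) - g_3(f_1 - f_2) + (f_1 g_2 - f_2 g_1),$$ and setting this equal to zero and dividing through by $f_1 - f_2$ gives $$g_3 = \underbrace{\frac{g_1 - g_2}{f_1 - f_2}}_{=\,A(u,v)} f_3 + \underbrace{\frac{f_1 g_2 - f_2 g_1}{f_1 - f_2}}_{=\,B(u,v)},$$ which matches the third collinearity equation above.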
For what it's worth, the author of the derivation is very loose about describing the context of each expression: whether, given an expression $P(u,v,w) = 0$, we are considering a specific value of $(u,v,w)$ where the equation holds, or have implicitly solved $P(u,v,w)=0$ for $\widehat{w}(u,v)$ and are considering only points in the neighborhood where this is possible, or are imposing the requirement that $P(u,v,w)=0$ hold identically in all three variables. So it may be that what's missing is some additional context of that kind.
Edit #2: If we are ignoring places where the line through $\gamma_1(u)$ and $\gamma_2(v)$ is vertical, we are specifically excluding the case where $f_1(u) - f_2(v)$ is zero. In all other cases, a point where $\partial_u A \cdot \partial_v A \cdot (f_1 - f_2) = 0$ must be a point where $\partial_u A$ or $\partial_v A$ vanishes, and *this* is the condition that must somehow cause the breakdown.
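Continuing that thought (my own observation, not from the book): if, say, $\partial_u A = 0$ at the point, then the second equation of the derivative system above forces $$0 = \frac{\partial A}{\partial u} f_2(v) + \frac{\partial B}{\partial u} \implies \frac{\partial B}{\partial u} = 0,$$ so the line through $\gamma_1(u)$ and $\gamma_2(v)$ would be stationary to first order as $u$ varies (and symmetrically, $\partial_v A = 0$ forces $\partial_v B = 0$). But I still don't see how to get from there to the determinant vanishing identically.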