Here is an explicit proof, for the enjoyment of a commenting troll. The following goes back to Gauss, from what I've been told.
Explicit LU-decomposition
Theorems
Notation. Fix a commutative ring $\mathbb{K}$. Let $A = \left(a_{i,j}\right)_{1\leq i\leq n,\ 1\leq j\leq m} \in \mathbb{K}^{n \times m}$ be an $n\times m$-matrix over $\mathbb{K}$. Let $u_1, u_2, \ldots, u_p$ be any elements of $\left\{1,2,\ldots,n\right\}$. Let $v_1, v_2, \ldots, v_q$ be any elements of $\left\{1,2,\ldots,m\right\}$. Then, $\operatorname{sub}^{v_1, v_2, \ldots, v_q}_{u_1, u_2, \ldots, u_p} A$ shall denote the $p\times q$-matrix $\left(a_{u_i, v_j}\right)_{1\leq i\leq p,\ 1\leq j\leq q} \in \mathbb{K}^{p\times q}$.
(Thus, when $u_1 < u_2 < \cdots < u_p$ and $v_1 < v_2 < \cdots < v_q$, this matrix $\operatorname{sub}^{v_1, v_2, \ldots, v_q}_{u_1, u_2, \ldots, u_p} A$ is the matrix obtained from $A$ by crossing out all rows except for the rows numbered $u_1, u_2, \ldots, u_p$ and crossing out all columns except for the columns numbered $v_1, v_2, \ldots, v_q$. It is called a submatrix of $A$.)
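If you want to experiment with this notation on a computer, here is a minimal sketch using sympy (the helper name `sub` and the symbol names are mine; the index lists are $1$-based, as in the text):

```python
import sympy as sp

def sub(A, rows, cols):
    # sub_{u_1,...,u_p}^{v_1,...,v_q} A: keep the listed rows and columns
    # (1-based, as in the text) and return the resulting p x q matrix.
    return sp.Matrix([[A[u - 1, v - 1] for v in cols] for u in rows])

# Example: rows 1, 3 and columns 2, 3 of a generic 3 x 3 matrix.
A = sp.Matrix(3, 3, lambda i, j: sp.Symbol(f"a{i+1}{j+1}"))
print(sub(A, [1, 3], [2, 3]))  # Matrix([[a12, a13], [a32, a33]])
```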
Now let $A$ be an $n\times n$-matrix over $\mathbb{K}$.
Define an $n \times n$-matrix $R_A =\left( b_{i,j}\right) _{1\leq i\leq n,\ 1\leq j\leq n}$ by
\begin{equation}
b_{i,j}=\det\left( \operatorname{sub}_{1,2,\ldots,i}^{1,2,\ldots ,i-1,j}A\right) .
\end{equation}
Define an $n \times n$-matrix $L_A =\left( \left( -1\right) ^{i+j}c_{i,j}\right) _{1\leq i\leq n,\ 1\leq j\leq n}$ by
\begin{equation}
c_{i,j}
=
\begin{cases}
\det\left( \operatorname{sub}_{1,2,\ldots,\widehat{j},\ldots,i}^{1,2,\ldots,i-1}A\right) , & \text{ if } j \leq i; \\
0, & \text{ if } j > i .
\end{cases}
\end{equation}
Here, the hat over the $j$ is a "magician's hat", which makes whatever comes under it disappear (so "$1,2,\ldots,\widehat{j},\ldots,i$" means "$1,2,\ldots,j-1,j+1,j+2,\ldots,i$").
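In code, these definitions translate directly into the defining minors. Here is a sketch, assuming the `sub` helper from above (the names `R_of` and `L_of` are mine); conveniently, sympy returns $1$ for the determinant of a $0\times 0$-matrix, which is exactly the convention that the definition of $c_{1,1}$ requires:

```python
import sympy as sp

def R_of(A):
    # (R_A)_{i,j} = det( sub_{1,...,i}^{1,...,i-1,j} A ), with 1-based i, j.
    n = A.rows
    return sp.Matrix(n, n, lambda i, j: sub(
        A, list(range(1, i + 2)), list(range(1, i + 1)) + [j + 1]).det())

def L_of(A):
    # (L_A)_{i,j} = (-1)^{i+j} c_{i,j}, where c_{i,j} is the minor of A on
    # rows 1, ..., j-hat, ..., i and columns 1, ..., i-1 (zero for j > i).
    n = A.rows
    def entry(i, j):  # sympy passes 0-based i, j to this callback
        if j > i:
            return sp.S.Zero
        rows = [u for u in range(1, i + 2) if u != j + 1]
        return (-1) ** (i + j) * sub(A, rows, list(range(1, i + 1))).det()
    return sp.Matrix(n, n, entry)
```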
With these definitions in place:
Theorem 1. We have $R_A = L_A A$.
Proposition 2. The matrix $R_A$ is upper-triangular, while the matrix $L_A$ is lower-triangular.
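Before the proofs, a quick sanity check for $n = 2$: writing $A = \begin{pmatrix} a & b \\ c & d \end{pmatrix}$, and using the convention that the determinant of a $0\times 0$-matrix is $1$ (which is what the definition of $c_{1,1}$ requires), the definitions give
\begin{equation}
R_A = \begin{pmatrix} a & b \\ 0 & ad - bc \end{pmatrix}
\qquad \text{and} \qquad
L_A = \begin{pmatrix} 1 & 0 \\ -c & a \end{pmatrix} ,
\end{equation}
and indeed
\begin{equation}
L_A A = \begin{pmatrix} 1 & 0 \\ -c & a \end{pmatrix} \begin{pmatrix} a & b \\ c & d \end{pmatrix} = \begin{pmatrix} a & b \\ 0 & ad - bc \end{pmatrix} = R_A ,
\end{equation}
in accordance with Theorem 1 and Proposition 2.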
We will prove these two facts below. Once they are proven, you can conclude that $A$ has the LU-decomposition $A = L_A^{-1} R_A$ whenever $L_A$ is invertible (indeed, Proposition 2 shows that $L_A$ is lower-triangular and $R_A$ is upper-triangular; the inverse of an invertible lower-triangular matrix is again lower-triangular; and Theorem 1 yields $A = L_A^{-1} R_A$). When is $L_A$ invertible? The matrix $L_A$ is lower-triangular, so its invertibility is equivalent to the invertibility (in the base ring $\mathbb{K}$) of its diagonal entries. But its diagonal entries are
$c_{i,i} = \det\left( \operatorname{sub}_{1,2,\ldots,\widehat{i},\ldots,i}^{1,2,\ldots,i-1}A\right)
= \det\left( \operatorname{sub}_{1,2,\ldots,i-1}^{1,2,\ldots,i-1}A\right)$
for $i \in \left\{1,2,\ldots,n\right\}$,
which are exactly the "proper northwestern principal minors" of $A$ (that is, all northwestern principal minors except for $\det A$ itself). Thus, you can conclude the following:
Corollary 3. If the "proper northwestern principal minors" of $A$ (that is, all the determinants $\det\left( \operatorname{sub}_{1,2,\ldots,i-1}^{1,2,\ldots,i-1}A\right)$ for $i \in \left\{1,2,\ldots,n\right\}$) are invertible (in the base ring $\mathbb{K}$), then $A$ has the LU-decomposition $A = L_A^{-1} R_A$.
Note that Corollary 3 gives only a sufficient condition for the existence of an LU-decomposition, not a necessary one (as the example in which $n \geq 2$ and $A$ is the zero matrix shows: the zero matrix has an LU-decomposition, but for $n \geq 2$ it has a proper northwestern principal minor equal to $0$). The condition is necessary, however, when $A$ is invertible: in that case, both the L and the U factor must be invertible, so their diagonal entries are invertible; and the northwestern principal minors of $A$ are simply products of these diagonal entries.
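Here is a quick symbolic check of Theorem 1 and Corollary 3, reusing the `sub`, `R_of` and `L_of` sketches from above. A generic symbolic matrix is the cleanest test case: over the field of rational functions $\mathbb{Q}\left(a_{1,1},\ldots,a_{3,3}\right)$, all of its proper northwestern principal minors are invertible, so Corollary 3 applies:

```python
import sympy as sp

# A generic symbolic 3 x 3 matrix.
A = sp.Matrix(3, 3, lambda i, j: sp.Symbol(f"a{i+1}{j+1}"))

LA, RA = L_of(A), R_of(A)
assert (LA * A - RA).expand() == sp.zeros(3, 3)          # Theorem 1
assert sp.simplify(LA.inv() * RA - A) == sp.zeros(3, 3)  # A = L_A^{-1} R_A
```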
Proofs
Proof of Theorem 1. Write the $n\times n$-matrix $A$ in the form $A=\left(
a_{i,j}\right) _{1\leq i\leq n,\ 1\leq j\leq n}$.
From $L_{A}=\left( \left( -1\right) ^{i+j}c_{i,j}\right) _{1\leq i\leq
n,\ 1\leq j\leq n}$ and $A=\left( a_{i,j}\right) _{1\leq i\leq n,\ 1\leq
j\leq n}$, we obtain
\begin{equation}
L_{A}A=\left( \sum_{k=1}^{n}\left( -1\right) ^{i+k}c_{i,k}a_{k,j}\right)
_{1\leq i\leq n,\ 1\leq j\leq n}.
\label{darij1.pf.t1.LAA=}
\tag{1}
\end{equation}
We must prove that $R_{A}=L_{A}A$. In view of \eqref{darij1.pf.t1.LAA=} and
$R_{A}=\left( b_{i,j}\right) _{1\leq i\leq n,\ 1\leq j\leq n}$, this boils
down to proving that
\begin{equation}
b_{i,j}=\sum_{k=1}^{n}\left( -1\right) ^{i+k}c_{i,k}a_{k,j}
\label{darij1.pf.t1.goal}
\tag{2}
\end{equation}
for each $i\in\left\{ 1,2,\ldots,n\right\} $ and $j\in\left\{
1,2,\ldots,n\right\} $. So let us prove \eqref{darij1.pf.t1.goal}.
Fix $i\in\left\{ 1,2,\ldots,n\right\} $ and $j\in\left\{ 1,2,\ldots
,n\right\} $. Consider the $i\times i$-matrix
\begin{equation}
\operatorname{sub}_{1,2,\ldots,i}^{1,2,\ldots,i-1,j}A=\left(
\begin{array}
[c]{ccccc}
a_{1,1} & a_{1,2} & \cdots & a_{1,i-1} & a_{1,j}\\
a_{2,1} & a_{2,2} & \cdots & a_{2,i-1} & a_{2,j}\\
\vdots & \vdots & \ddots & \vdots & \vdots\\
a_{i-1,1} & a_{i-1,2} & \cdots & a_{i-1,i-1} & a_{i-1,j}\\
a_{i,1} & a_{i,2} & \cdots & a_{i,i-1} & a_{i,j}
\end{array}
\right) .
\end{equation}
Expanding the determinant of this matrix along its $i$-th column (whose
entries are the first $i$ entries $a_{1,j},a_{2,j},\ldots,a_{i,j}$ of the
$j$-th column of $A$) yields
\begin{align*}
& \det\left( \operatorname{sub}_{1,2,\ldots,i}^{1,2,\ldots,i-1,j}A\right)
\\
& =\sum_{k=1}^{i}\left( -1\right) ^{i+k}a_{k,j}\det\left(
\operatorname{sub}_{1,2,\ldots,\widehat{k},\ldots,i}^{1,2,\ldots
,i-1}A\right)
\end{align*}
(indeed, if $k\in\left\{ 1,2,\ldots,i\right\} $, then removing the $k$-th row and the $i$-th column from the matrix $\operatorname{sub}_{1,2,\ldots,i}^{1,2,\ldots,i-1,j}A$ yields the matrix $\operatorname{sub}
_{1,2,\ldots,\widehat{k},\ldots,i}^{1,2,\ldots,i-1}A$). Comparing this with
\begin{align*}
& \sum_{k=1}^{n}\left( -1\right) ^{i+k}c_{i,k}a_{k,j}\\
& =\sum_{k=1}^{i}\left( -1\right) ^{i+k}\underbrace{c_{i,k}}
_{\substack{=\det\left( \operatorname{sub}_{1,2,\ldots,\widehat{k},\ldots
,i}^{1,2,\ldots,i-1}A\right) \\\text{(by the definition of }c_{i,k}
\text{,}\\\text{since }k\leq i\text{)}}}a_{k,j}+\sum_{k=i+1}^{n}\left(
-1\right) ^{i+k}\underbrace{c_{i,k}}_{\substack{=0\\\text{(by the definition
of }c_{i,k}\text{,}\\\text{since }k>i\text{)}}}a_{k,j}\\
& =\sum_{k=1}^{i}\left( -1\right) ^{i+k}\det\left( \operatorname{sub}
_{1,2,\ldots,\widehat{k},\ldots,i}^{1,2,\ldots,i-1}A\right) a_{k,j}
+\underbrace{\sum_{k=i+1}^{n}\left( -1\right) ^{i+k}\cdot 0\cdot a_{k,j}}_{=0}\\
& =\sum_{k=1}^{i}\left( -1\right) ^{i+k}\det\left( \operatorname{sub}
_{1,2,\ldots,\widehat{k},\ldots,i}^{1,2,\ldots,i-1}A\right) a_{k,j}\\
& =\sum_{k=1}^{i}\left( -1\right) ^{i+k}a_{k,j}\det\left(
\operatorname{sub}_{1,2,\ldots,\widehat{k},\ldots,i}^{1,2,\ldots
,i-1}A\right) ,
\end{align*}
we obtain
\begin{equation}
\sum_{k=1}^{n}\left( -1\right) ^{i+k}c_{i,k}a_{k,j}=\det\left(
\operatorname{sub}_{1,2,\ldots,i}^{1,2,\ldots,i-1,j}A\right) =b_{i,j}
\end{equation}
(by the definition of $b_{i,j}$). This proves \eqref{darij1.pf.t1.goal}.
Thus, $R_{A}=L_{A}A$ holds. This proves Theorem 1. $\blacksquare$
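The column expansion driving this proof can also be spot-checked mechanically. Here is a sketch reusing the `sub` helper from above; the choice $(i, j) = (3, 4)$ with $n = 4$ is arbitrary:

```python
import sympy as sp

n, i, j = 4, 3, 4
A = sp.Matrix(n, n, lambda r, c: sp.Symbol(f"a{r+1}{c+1}"))

# Left-hand side: det( sub_{1,...,i}^{1,...,i-1,j} A ).
lhs = sub(A, list(range(1, i + 1)), list(range(1, i)) + [j]).det()

# Right-hand side: the expansion along the i-th column, as in the proof.
rhs = sum((-1) ** (i + k) * A[k - 1, j - 1]
          * sub(A, [u for u in range(1, i + 1) if u != k],
                list(range(1, i))).det()
          for k in range(1, i + 1))

assert sp.expand(lhs - rhs) == 0
```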
Proof of Proposition 2. For any $i\in\left\{ 1,2,\ldots,n\right\} $ and
$j\in\left\{ 1,2,\ldots,n\right\} $ satisfying $j>i$, we have $c_{i,j}=0$
(by the definition of $c_{i,j}$) and thus $\left( -1\right) ^{i+j}
\underbrace{c_{i,j}}_{=0}=0$. Hence, the matrix $\left( \left( -1\right)
^{i+j}c_{i,j}\right) _{1\leq i\leq n,\ 1\leq j\leq n}$ is lower-triangular.
In other words, the matrix $L_{A}$ is lower-triangular (since $L_{A}=\left(
\left( -1\right) ^{i+j}c_{i,j}\right) _{1\leq i\leq n,\ 1\leq j\leq n}$).
It remains to prove that the matrix $R_{A}$ is upper-triangular. To this end, let
$i\in\left\{ 1,2,\ldots,n\right\} $ and $j\in\left\{ 1,2,\ldots,n\right\}
$ be such that $i>j$. Then, $j\in\left\{ 1,2,\ldots,i-1\right\} $.
Hence, the matrix $\operatorname{sub}_{1,2,\ldots
,i}^{1,2,\ldots,i-1,j}A$ has two equal columns (namely, its $j$-th column
equals its $i$-th column, since both are the truncated $j$-th column of $A$).
Thus, its determinant is $0$. In other words,
$\det\left( \operatorname{sub}_{1,2,\ldots,i}^{1,2,\ldots,i-1,j}A\right)
=0$. The definition of $b_{i,j}$ yields $b_{i,j}=\det\left(
\operatorname{sub}_{1,2,\ldots,i}^{1,2,\ldots,i-1,j}A\right) =0$.
Now, forget that we fixed $i$ and $j$. We thus have shown that $b_{i,j}=0$ for
all $i\in\left\{ 1,2,\ldots,n\right\} $ and $j\in\left\{ 1,2,\ldots
,n\right\} $ satisfying $i>j$. In other words, the matrix $\left(
b_{i,j}\right) _{1\leq i\leq n,\ 1\leq j\leq n}$ is upper-triangular. In
other words, the matrix $R_{A}$ is upper-triangular. This completes the proof
of Proposition 2. $\blacksquare$
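Proposition 2 can likewise be confirmed mechanically on the generic symbolic matrix from the earlier sketch (`is_lower` and `is_upper` are built-in sympy Matrix properties):

```python
assert L_of(A).is_lower  # L_A is lower-triangular
assert R_of(A).is_upper  # R_A is upper-triangular
```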