I will give two of my personal favorite proofs here for matrices over real numbers (as is done in a first course in linear algebra). Importantly, these proofs avoid the rather tedious description of echelon matrix structures and can be presented quickly as independent proofs.
The first is from an absolutely delightful article by George Mackiw in Mathematics Magazine (https://doi.org/10.1080%2F0025570X.1995.11996337).
Let $A$ be an $m\times n$ matrix whose row rank is $r$; that is, the row space of $A$ has dimension $r$. Let $x_1,\ldots,x_r$ be a basis of the row space of $A$ (each $x_i$ is a vector in $\mathbb{R}^n$, so the product $Ax_i$ makes sense). We claim that the vectors $Ax_1,\ldots,Ax_r$ are linearly independent. To prove this, consider a homogeneous linear relation,
$$
c_1Ax_1 + \cdots + c_rAx_r = 0 \Longrightarrow A(c_1x_1 + \cdots + c_rx_r) = 0,
$$
and we prove that $c_1 = \cdots = c_r = 0$.
Let $v = c_1x_1 + \cdots + c_rx_r$. Then $Av = 0$, which means that the dot product of $v$ with each row of $A$ is zero. Therefore $v$ is orthogonal to each row of $A$, and hence to every vector in the row space of $A$ (since any vector in the row space is a linear combination of the rows of $A$). But $v$ itself lies in the row space of $A$, being a linear combination of a basis of that space. So $v$ is orthogonal to itself, which forces $v = 0$. Therefore,
$$
c_1x_1 + \cdots + c_rx_r = 0 \Longrightarrow c_1 = \cdots = c_r = 0
$$
because $x_1,\ldots,x_r$ were taken to be a basis of the row space (hence linearly independent). We have now shown that any vanishing linear combination of $Ax_1,\ldots, Ax_r$ has all coefficients zero, so the $Ax_i$'s are linearly independent.
Now, each $Ax_i$ lies in the column space of $A$, so $\{Ax_1,\ldots, Ax_r\}$ is a set of $r$ linearly independent vectors in the column space of $A$. Hence the dimension of the column space of $A$ (i.e., the column rank of $A$) is at least $r$. This proves that the row rank of $A$ is no larger than the column rank of $A$:
$$
\mbox{row rank}(A) \leq \mbox{column rank}(A)\;.
$$
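As a numerical sanity check (not part of the proof), the claim can be tested with NumPy. The matrix below is hypothetical, and the right-singular vectors are used purely as a convenient basis of the row space:

```python
import numpy as np

rng = np.random.default_rng(0)
# A hypothetical 5x4 matrix of row rank r = 2 (product of rank-2 factors).
A = rng.standard_normal((5, 2)) @ rng.standard_normal((2, 4))
r = np.linalg.matrix_rank(A)

# A basis of the row space of A: the first r right-singular vectors.
_, _, Vt = np.linalg.svd(A)
X = Vt[:r].T                      # columns are x_1, ..., x_r

AX = A @ X                        # the vectors A x_1, ..., A x_r
# Linear independence: the matrix [A x_1 ... A x_r] has full column rank r,
# so the column rank of A is at least r, matching the inequality above.
assert np.linalg.matrix_rank(AX) == r == 2
```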
Since this result holds for any matrix, we can apply it to the transpose of $A$ to conclude that the row rank of $A^{t}$ is no larger than the column rank of $A^{t}$. But the row rank of $A^{t}$ is the column rank of $A$, and the column rank of $A^{t}$ is the row rank of $A$. This yields the reverse inequality, and the proof is complete.
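The symmetry step can likewise be checked numerically on a hypothetical rectangular matrix, computing both ranks with `np.linalg.matrix_rank`:

```python
import numpy as np

rng = np.random.default_rng(1)
# A hypothetical 6x7 matrix of rank 3.
A = rng.standard_normal((6, 3)) @ rng.standard_normal((3, 7))

col_rank_A  = np.linalg.matrix_rank(A)    # column rank of A
col_rank_At = np.linalg.matrix_rank(A.T)  # column rank of A^t = row rank of A
assert col_rank_A == col_rank_At == 3
```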
A second proof is posted here:
Sudipto Banerjee (https://math.stackexchange.com/users/19430/sudipto-banerjee), Looking for an intuitive explanation why the row rank is equal to the column rank for a matrix, URL (version: 2023-03-25): https://math.stackexchange.com/q/4367250
Briefly, define the column rank of $A$ by $\operatorname{col rank}(A) = \dim \{Ax: x \in \mathbb{R}^n\}$, the dimension of the range of $A$. First we show that $A^{t}Ax = 0$ if and only if $Ax = 0$. If $Ax = 0$, then multiplying both sides on the left by $A^{t}$ gives $A^{t}Ax = 0$. For the other direction, argue as follows:
$$A^{t}Ax=0 \implies x^{t}A^{t}Ax=0 \implies (Ax)^{t}(Ax) = 0 \implies Ax = 0.$$
Therefore, the null spaces of $A$ and $A^{t}A$ are the same.
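This equality of null spaces can be illustrated numerically; the rank-deficient matrix below is a hypothetical example, and a null vector of $A^{t}A$ is extracted from its singular value decomposition:

```python
import numpy as np

rng = np.random.default_rng(2)
# A hypothetical 5x4 matrix of rank 2, so nullity(A) = 4 - 2 = 2.
A = rng.standard_normal((5, 2)) @ rng.standard_normal((2, 4))
G = A.T @ A                       # the 4x4 matrix A^t A

# The two null spaces have the same dimension n - rank.
n = A.shape[1]
assert n - np.linalg.matrix_rank(A) == n - np.linalg.matrix_rank(G) == 2

# A null vector of A^t A is also a null vector of A, as the argument shows.
_, _, Vt = np.linalg.svd(G)
v = Vt[-1]                        # singular vector for the smallest singular value
assert np.allclose(G @ v, 0)
assert np.allclose(A @ v, 0)
```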
Applying the rank–nullity theorem (https://en.wikipedia.org/wiki/Rank%E2%80%93nullity_theorem) to $A$ and $A^{t}A$, we see that, for each of these matrices, the dimension of the column space plus the dimension of the null space equals $n$ (the number of columns of both $A$ and $A^{t}A$). Since the null spaces of $A$ and $A^{t}A$ are the same, it follows that $\operatorname{col rank}(A) = \operatorname{col rank}(A^{t}A)$.
Therefore, $\operatorname{col rank}(A) = \operatorname{col rank}(A^{t}A) \leq \operatorname{col rank}(A^{t})$, where the last inequality follows from the fact that each column of $A^{t}A$ is a linear combination of the columns of $A^{t}$ and, hence, lies in the column space of $A^{t}$. This proves that $\operatorname{col rank}(A) \leq \operatorname{col rank}(A^{t})$ for any matrix $A$. Applying this inequality to the matrix $A^{t}$ gives the reverse inequality, and we conclude $\operatorname{col rank}(A) = \operatorname{col rank}(A^{t})$.
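The chain of (in)equalities in this second proof can also be verified numerically on a hypothetical example:

```python
import numpy as np

rng = np.random.default_rng(3)
# A hypothetical 6x4 matrix (full column rank almost surely).
A = rng.standard_normal((6, 4))

# col rank(A) = col rank(A^t A) = col rank(A^t), as the proof concludes.
assert (np.linalg.matrix_rank(A)
        == np.linalg.matrix_rank(A.T @ A)
        == np.linalg.matrix_rank(A.T))

# Each column of A^t A is A^t times a column of A, i.e. a linear
# combination of the columns of A^t:
assert np.allclose((A.T @ A)[:, 0], A.T @ A[:, 0])
```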