However, because that proof requires the use of a matrix representation, a choice of basis is needed (gross, lol, /j). I am attempting to find an alternative that doesn't need a basis.
The Leibniz formula is perfectly fine as an answer to the question as stated, because you defined the determinant as a map from $GL_n$, which consists of matrices, not linear transformations. So the point here is that naturality holds because the Leibniz formula shows that the determinant is a polynomial with integer coefficients, which is a useful and crucial fact.
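To make this concrete, here is a minimal sketch (using `sympy` only for the symbolic check; the helper name `leibniz_det` is mine) showing that the Leibniz formula makes sense over any ring whose elements support addition and multiplication, precisely because its coefficients are just $\pm 1$:

```python
# Compute det via the Leibniz formula: sum over permutations sigma of
# sign(sigma) * prod_i a[i][sigma(i)]. The only coefficients appearing
# are +1 and -1, so this makes sense over any commutative ring.
from itertools import permutations

from sympy import Matrix, expand, symbols

def leibniz_det(a):
    n = len(a)
    total = 0
    for perm in permutations(range(n)):
        # parity of the permutation, computed via its inversion count
        sign = (-1) ** sum(1 for i in range(n) for j in range(i + 1, n)
                           if perm[i] > perm[j])
        term = sign
        for i in range(n):
            term *= a[i][perm[i]]
        total += term
    return total

# Sanity check against sympy on a symbolic 3x3 matrix: the result is a
# polynomial in the entries with integer coefficients.
x = [[symbols(f"x{i}{j}") for j in range(3)] for i in range(3)]
assert expand(leibniz_det(x) - Matrix(x).det()) == 0
```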
To make this discussion basis-free requires first making $GL_n$ basis-free, which can be done as follows. Instead of considering $GL_n(R)$ directly we will let $L$ be a free $\mathbb{Z}$-module of rank $n$, which we do not identify with $\mathbb{Z}^n$, and let $GL_L$ be the functor
$$\text{CRing} \ni R \mapsto GL_L(R) = \text{Aut}_R(L \otimes R) \in \text{Grp}.$$
Now we need a basis-free definition of the determinant which makes sense over an arbitrary commutative ring; neither of the definitions you've provided does this. Without such a definition you don't even know what $\det$ does over a commutative ring that isn't a field (unless you use the Leibniz formula!). For an $R$-linear endomorphism $T : V \to V$ of a free $R$-module of rank $n$ (which we do not identify with $R^n$), the determinant $\det(T)$ is the action of $T$ on the top exterior power
$$\det(T) : \Lambda^n(V) \to \Lambda^n(V).$$
All of the real work in this approach to defining the determinant goes into showing that if $V$ is a free $R$-module of rank $n$ then $\Lambda^n(V) \cong R$ (this isomorphism is not canonical) which allows us to identify $\text{End}_R(\Lambda^n(V)) \cong R$ (this isomorphism is canonical), which means $\det(T) \in R$ is a scalar. This defines, in a basis-free manner, a map
$$GL_L(R) \ni T \mapsto \det(T) \in R^{\times}.$$
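For concreteness, here is the $n = 2$ computation written out (choosing a basis $e_1, e_2$ of $V$ only to verify the definition, not to make it): if $Te_1 = ae_1 + ce_2$ and $Te_2 = be_1 + de_2$, then

$$\Lambda^2(T)(e_1 \wedge e_2) = T(e_1) \wedge T(e_2) = (a e_1 + c e_2) \wedge (b e_1 + d e_2) = (ad - bc) \, e_1 \wedge e_2,$$

using $e_i \wedge e_i = 0$ and $e_2 \wedge e_1 = -e_1 \wedge e_2$, which recovers the familiar $2 \times 2$ determinant.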
This definition, in addition to working over an arbitrary commutative ring, also has the benefit that it makes the multiplicativity of the determinant completely obvious (by functoriality of the exterior powers), which neither of your definitions does.
Now the question is to understand why this map is natural. Naturality means that if $f : R \to S$ is a homomorphism of commutative rings and $T_S \cong T \otimes_R S$ denotes the extension of scalars of $T$ to an $S$-linear endomorphism of $V_S = V \otimes_R S$, then $f(\det T) = \det(T_S)$. This requires showing that the exterior power is natural in $R$ (not in $V$!), meaning it commutes with extension of scalars: that is, we need to know that the natural map
$$\Lambda^n(V_S) \cong \Lambda^n(V)_S$$
is an isomorphism (which will also show that $\text{End}_S(\Lambda^n(V_S)) \cong \text{End}_R(\Lambda^n(V))_S \cong R_S \cong S$; here we crucially need to know that $\Lambda^n(V_S)$ is free of finite rank). This follows from an inspection of their universal properties: by definition, $\Lambda^n(V)_S$ is the universal $S$-module which is the recipient of an $R$-linear map from $\Lambda^n(V)$, hence (using the universal property of the exterior power) equivalently an alternating $R$-multilinear map from $V^n$. Meanwhile $\Lambda^n(V_S)$ is the universal $S$-module which is the recipient of an alternating $S$-multilinear map from $V_S^n$. But extension of scalars identifies these.
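To see what naturality buys you in a concrete case (my own illustrative example, taking $f$ to be the reduction map $\mathbb{Z} \to \mathbb{Z}/5$): reducing $\det T$ mod $5$ agrees with reducing the entries of $T$ first and then taking the determinant:

```python
# Naturality check for the ring homomorphism f : Z -> Z/5Z (a concrete
# example chosen for illustration): f(det T) = det(T_S), i.e. reducing
# det(T) mod 5 agrees with taking det after reducing the entries mod 5.
from sympy import Matrix

T = Matrix([[2, 7, 2], [1, 4, 9], [8, 0, 6]])
p = 5

det_then_reduce = T.det() % p
reduce_then_det = T.applyfunc(lambda a: a % p).det() % p
assert det_then_reduce == reduce_then_det
```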
This approach works but as you can see it's surprisingly involved, bordering on tedious. To my mind it threatens to obscure the real point: the determinant is a polynomial with integer coefficients. This is actually equivalent to the naturality of the determinant, by the following argument.
I will revert to talking about the determinant $\det : M_n(R) \to R$ on matrices, not necessarily invertible (none of the arguments above required invertibility and $\det$ continues to be natural on not-necessarily-invertible matrices; this is a stronger fact than what you wanted, and we get the result for $GL_n$ by restricting to invertible matrices). Both of the functors $R \mapsto R$ and $R \mapsto M_n(R)$ here are representable, by the polynomial rings $\mathbb{Z}[x]$ and
$$\mathcal{O}_{M_n} \cong \mathbb{Z}[x_{ij}]$$
which is the polynomial ring on $n^2$ variables $x_{ij}, 1 \le i, j \le n$ representing the entries of the ($n \times n$) universal matrix $X$. This matrix specializes to every other matrix over every other commutative ring. Now by the Yoneda lemma, the determinant is natural iff it arises by pulling back along a single morphism
$$\det : \mathbb{Z}[x] \to \mathcal{O}_{M_n}$$
which is the universal determinant; morphisms from $\mathbb{Z}[x]$ just pick out some element, so the data of this morphism is equivalently the data of a single element $\det \in \mathcal{O}_{M_n}$, which I will also call the universal determinant.
So what is this mysterious universal determinant?
It is just the determinant $\det X$ of the universal matrix, regarded as a polynomial in $\mathbb{Z}[x_{ij}]$!
And what is this polynomial? It is exactly the Leibniz formula! In other words, if you believe that the determinant is not only defined over every commutative ring $R$ but is also natural in $R$, this is equivalent to believing that it must be given by specializing the determinant of the universal matrix, which is in turn equivalent to believing that it must be a polynomial with integer coefficients in the entries of a matrix.
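The specialization story can be demonstrated directly (a `sympy` sketch; the variable names are mine): compute $\det X$ once as a polynomial in $\mathbb{Z}[x_{ij}]$, then obtain the determinant of any particular matrix by substituting its entries for the $x_{ij}$:

```python
# The universal determinant: det of the matrix of indeterminates x_ij is
# a polynomial with integer coefficients, and the determinant of every
# matrix over every commutative ring arises by specializing the x_ij.
from sympy import Matrix, symbols

n = 3
X = Matrix(n, n, lambda i, j: symbols(f"x{i}{j}"))
universal_det = X.det()  # the Leibniz polynomial in Z[x_ij]

# Specialize to a particular integer matrix by substitution:
A = Matrix([[2, 0, 1], [1, 3, 5], [0, 4, 2]])
subs = {symbols(f"x{i}{j}"): A[i, j] for i in range(n) for j in range(n)}
assert universal_det.subs(subs) == A.det()
```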
Here is an approach to the determinant in which we define it in terms of the universal matrix very directly. The idea is that we pass from the universal matrix $X = \left[ x_{ij} \right]$ to the generic matrix, which is the matrix with the same entries $x_{ij}$ but now defined over the fraction field $\mathbb{Q}(x_{ij})$ of $\mathcal{O}_{M_n}$. The generic matrix is a matrix over a field, so ordinary linear algebra now applies to it: in particular it is invertible (you can simply invert it directly, by row reduction) and its inverse $Y = \left[ y_{ij} \right]$ is, on the one hand, a specific matrix with specific entries in $\mathbb{Q}(x_{ij})$, and on the other hand the generic inverse: because matrices have unique inverses (this is true over any commutative ring) the entries $y_{ij}$ are rational functions which describe the entries of the inverse of a matrix generically.
This is not abstract, either: for any particular value of $n$ you can explicitly compute the $y_{ij}$ as rational functions by row reduction (you are performing the generic row reduction). What you will find is that each $y_{ij}$, when reduced to lowest terms as a rational function, has the same denominator! This is Cramer's rule, and now you can define the universal determinant $\det(X)$ to be the polynomial given by this common denominator (which fixes it up to sign, and we can fix the sign by requiring that $\det(I) = 1$).
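For $n = 3$ you can carry out this generic inversion by computer algebra (a `sympy` sketch, assuming its `inv`, `cancel`, and `fraction` helpers; the check it performs is exactly the common-denominator phenomenon just described):

```python
# Generic inversion: invert the symbolic n x n matrix over Q(x_ij) and
# check that every entry of the inverse, in lowest terms, has the same
# denominator, which is (up to sign) the determinant -- Cramer's rule.
from sympy import Matrix, cancel, expand, fraction, symbols

n = 3
X = Matrix(n, n, lambda i, j: symbols(f"x{i}{j}"))
Y = X.inv()  # entries are rational functions in the x_ij

d = expand(X.det())
for i in range(n):
    for j in range(n):
        num, den = fraction(cancel(Y[i, j]))  # lowest terms
        # the denominator agrees with det(X) up to sign
        assert expand(den - d) == 0 or expand(den + d) == 0
```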
This immediately defines $\det(X)$ as a polynomial, and in fact is very close to (there is a small gap here that needs to be filled) defining $\det(X)$ as the universal obstruction to invertibility, in that inverting $\det(X)$ takes you from the universal matrix to the universal invertible matrix, which represents $GL_n$. So here we define $\det$ via the key feature that it determines invertibility (that's what its name means!) and we completely avoid talking about exterior powers.
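In coordinates, the statement that inverting $\det(X)$ produces the representing object of $GL_n$ is the standard localization isomorphism

$$\mathcal{O}_{GL_n} \cong \mathcal{O}_{M_n}\left[ \det(X)^{-1} \right] \cong \mathbb{Z}\left[ x_{ij} \right]\left[ \det(X)^{-1} \right],$$

so a homomorphism $\mathcal{O}_{GL_n} \to R$ is a matrix over $R$ together with an inverse of its determinant, i.e. a matrix whose determinant is a unit, i.e. an invertible matrix.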
To my mind this is the most satisfying explanation of why we should expect the determinant to make sense and be natural (equivalently, given by a polynomial with integer coefficients) over an arbitrary commutative ring: it's because the algebraic operations required to invert an invertible matrix make sense and are natural over an arbitrary commutative ring, or equivalently, because we can perform a single set of algebraic operations once and for all to invert the generic matrix. This approach does have the downside that we need to prove Cramer's rule. The cleanest proof I know of this involves... the exterior powers! The other option I can think of is that we very carefully analyze the generic row reduction, in a sort of 19th-century style.