Adding an answer that might be repetitive, but perhaps will be useful for its straightforwardness.
A Boolean function on $n$ Boolean variables is a map from the possible assignments of those variables to a binary outcome, i.e. a map $\{0,1\}^n \to \{0,1\}$. This means that a Boolean function is, informally, a rule that returns a $1$ (or $\tiny{\text{TRUE}}$, etc.) or a $0$ given the outcomes of those $n$ binary variables.
To count the distinct functions, note that a Boolean function is pinned down completely by which outcomes of the $n$ variables it sends to $1$; two functions differ exactly when those sets of outcomes differ.
Let’s make this concrete and simple, then generalize. The NAND function for two variables $X_1, X_2$ returns $1$ iff $\neg(X_1 =1 \land X_2 = 1)$; so, of our four possible outcomes for trials of $X_1, X_2$, we have...
$$
\begin{array}{|c|c|c|}
\hline
X_1 & X_2 & \text{NAND}(X_1, X_2) \\
\hline
0 & 0 & 1 \\
\hline
1 & 0 & 1 \\
\hline
0 & 1 & 1 \\
\hline
1 & 1 & 0 \\
\hline
\end{array}
$$
So, how did we construct the NAND function? We wrote down every possible outcome of a Boolean experiment with two trials, and then specified which of those outcomes return a $1$.
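As a quick aside, that construction is easy to mimic mechanically; here is a throwaway Python sketch (my own illustration, not part of the original answer) that lists the four outcomes and attaches NAND's output to each:

```python
from itertools import product

# All 2^2 outcomes of two binary trials, each paired with NAND's output.
for x1, x2 in product([0, 1], repeat=2):
    print(x1, x2, 0 if (x1 == 1 and x2 == 1) else 1)
```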
In general, with $n$ trials, each of which admits $m$ outcomes, there are $m^n$ possible outcomes for the experiment (forgive the statistics language; that is how my brain works). Here, that becomes $2^n$. While you can consider this result as an application of the "permutation with repetition" formula, as another comment points out, it can also be understood as a sum of combinations, where we sum the number of ways to have $0, 1, 2, \ldots, n$ successes (for more on this connection, see my answer here).
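Spelling that connection out, grouping the $2^n$ outcomes by how many of the $n$ trials were successes gives the familiar binomial identity:
$$
\sum_{k=0}^{n} \binom{n}{k} = 2^n.
$$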
Finally, we constructed the Boolean function by choosing which of those $2^n$ patterns of outcomes the function maps to $1$. In other words, we have a combination of combinations: we are picking a subset from a collection whose size, $2^n$, is itself the result of a counting argument. A set of $2^n$ patterns has $2^{2^n}$ subsets; since we said a Boolean function is determined by the set of outcomes on which it returns $1$, there are $2^{2^n}$ Boolean functions available for $n$ variables.
To see this in even more detail, consult Nisan and Schocken's The Elements of Computing Systems, which tallies all $16$ possible Boolean functions for just two variables. For example, an XOR function is defined by the combination $(X_1=1 \land X_2 = 0) \lor (X_1=0 \land X_2 = 1)$: it returns $1$ on exactly those two outcomes. It is tedious to list them all out, but you might try randomly picking a combination and looking up the name of that logic gate.
Obviously, this number gets very large, very fast as we add more variables. To pick the next simplest example, if $n=3$, there are $2^3 = 8$ possible outcomes of an experiment with three binary variables, and then $2^8 = 256$ possible combinations of outcomes that we could define to trigger a $1$, so there are $256$ Boolean functions on three variables.
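If you want to verify the count mechanically, here is a small Python sketch (again my own illustration, not part of the argument above); it enumerates every possible truth table on $n$ inputs and reproduces the $16$ and $256$ totals:

```python
from itertools import product

def boolean_functions(n):
    """Enumerate every Boolean function on n variables as a truth table."""
    outcomes = list(product([0, 1], repeat=n))          # the 2^n possible outcomes
    # A function is a choice of output (0 or 1) for each outcome,
    # i.e. a tuple of length 2^n over {0, 1}.
    tables = list(product([0, 1], repeat=len(outcomes)))
    return outcomes, tables

outcomes2, tables2 = boolean_functions(2)
print(len(outcomes2), len(tables2))   # 4 16  ->  2^2 and 2^(2^2)

outcomes3, tables3 = boolean_functions(3)
print(len(outcomes3), len(tables3))   # 8 256 ->  2^3 and 2^(2^3)

# Sanity check: the NAND and XOR tables from above appear among the 16 for n = 2.
nand = tuple(0 if (x1 == 1 and x2 == 1) else 1 for (x1, x2) in outcomes2)
xor  = tuple(1 if x1 != x2 else 0 for (x1, x2) in outcomes2)
print(nand in tables2, xor in tables2)  # True True
```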