I have recently come to understand random variables as deterministic measurable functions $X: \Omega \to \mathbb{R}$. Rereading some old statistics textbooks, I realized that in this framework I no longer understand what it means to sample something.
For example, in a recent text I read something along the lines of "you can sample a geometric random variable with parameter $p$ by flipping a $p$-weighted coin and counting the number of flips until the first tails comes up". I am not sure how to interpret this procedure as a random variable in the measure-theoretic framework.
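For concreteness, here is a minimal sketch of the procedure as I understand it (my own illustration, not from the text; I'm assuming the coin lands tails with probability $p$, so the flip count follows the trials-until-first-success convention):

```python
import random

def sample_geometric(p: float) -> int:
    """Flip a coin that lands tails with probability p; return the
    number of flips up to and including the first tails.
    Assumes 0 < p <= 1 (otherwise the loop never terminates)."""
    flips = 1
    while random.random() >= p:  # heads: flip again
        flips += 1
    return flips
```

My confusion is how to view the value this procedure returns as $X(\omega)$ for some fixed measurable $X$ and some point $\omega$ of a sample space.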
Somewhat similarly, when texts say "sample i.i.d. $X_1,\ldots, X_n \sim \mathcal{N}(\mu, \sigma^2)$", what exactly does this mean? By what process do we actually accomplish this? Does it just mean that we explicitly choose functions $X_k: \Omega\to \mathbb{R}$ whose distribution functions satisfy $F_{X_k}(\alpha) = \Phi\!\left(\frac{\alpha-\mu}{\sigma}\right)$ and whose joint law factors as a product, $\mathcal{P}_{(X_1,\ldots,X_n)} = \mathcal{P}_{X_1}\times \cdots \times \mathcal{P}_{X_n}$?
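To make the question concrete: the one recipe I do know is inverse-transform sampling, which seems to implicitly take $\Omega = [0,1]^n$ with Lebesgue measure and set $X_k(\omega) = F^{-1}(\omega_k)$. Here is a sketch of what I mean (again my own illustration, using the quantile function from Python's standard library):

```python
import random
from statistics import NormalDist

def sample_iid_normals(n: int, mu: float, sigma: float) -> list[float]:
    """Inverse-transform sampling: draw omega = (u_1, ..., u_n)
    uniformly from [0,1]^n, then apply the quantile function of
    N(mu, sigma^2) coordinatewise.  Each X_k(omega) = F^{-1}(u_k)
    is a measurable function of omega, and the X_k come out i.i.d."""
    F_inv = NormalDist(mu, sigma).inv_cdf
    omega = [random.random() for _ in range(n)]  # one point of [0,1]^n
    return [F_inv(u) for u in omega]
```

Is this the right way to think about it, i.e. is "sampling" just evaluating such deterministic functions at a single $\omega$ that some external source of randomness hands us?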
Any intuitive clarification would be really helpful! A huge plus if there's a nice way to formalize these methods for sampling, sampling i.i.d., etc.