Think about modelling a statistical experiment. The standard approach is to model observations (measurements) as realizations of random variables living on the same measurable space $(\Omega,\mathcal{F})$. (Thus, it becomes meaningful to talk about independence, convergence, etc.) Once 'Nature' chooses $\omega\in\Omega$, one immediately observes $X_i(\omega)$ for each $i$. Typically, instead of working with $(\Omega,\mathcal{F})$, one directly uses the corresponding sample space, that is, the space (say $\mathcal{X}$) in which the $X_i$'s take values. A statistical model is then defined as a triple $(\mathcal{X},\mathcal{A},\mathcal{P})$, where $\mathcal{A}$ is the collection of possible events and $\mathcal{P}$ is a family of probability distributions on $(\mathcal{X},\mathcal{A})$. A statistician tries to infer the true $P\in\mathcal{P}$ by observing realizations of the $X_i$'s.
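Here is a minimal sketch in Python of that setup, assuming (purely for illustration) that $\mathcal{P}$ is a Gaussian location family $\{N(\theta,1):\theta\in\mathbb{R}\}$; the value `theta_true` plays the role of the true $P$ chosen by 'Nature':

```python
import numpy as np

rng = np.random.default_rng(0)

# Statistical model: sample space is R, and P is the Gaussian location
# family {N(theta, 1) : theta in R}  (an illustrative assumption).
theta_true = 2.0   # the true P, unknown to the statistician
n = 100

# Observe realizations x_1, ..., x_n of i.i.d. X_1, ..., X_n ~ N(theta_true, 1)
x = rng.normal(loc=theta_true, scale=1.0, size=n)

# Infer the true P from the sample, e.g. via the sample mean
theta_hat = x.mean()
print(f"true theta = {theta_true}, estimate = {theta_hat:.3f}")
```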
With this in mind, it is meaningless to think about different manifestations of a single random variable. Consider your example with a flying object. Its trajectory can be modelled as a stochastic process $X_t(\omega)$. For each $\omega\in \Omega$, the map $t\mapsto X_t(\omega)$ defines the entire trajectory of that object. Once you make 20 measurements at different time points $t_1,\dots,t_{20}$, you get a sample $\{x_i\}_{i=1}^{20}$, where each $x_i$ is a realization of the random variable $X_{t_i}$.
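A short sketch of this, using a Brownian-motion path as a stand-in for the object's trajectory (an arbitrary illustrative choice, not a model of any particular flying object): one draw of $\omega$ fixes the whole path, and the 20 measurements are just that path evaluated at 20 time points.

```python
import numpy as np

rng = np.random.default_rng(1)

# One omega corresponds to one entire trajectory t -> X_t(omega).
# Here the path is a Brownian motion on [0, 1], simulated on a fine grid.
t = np.linspace(0.0, 1.0, 1001)                 # fine time grid
dX = rng.normal(scale=np.sqrt(np.diff(t)))      # independent Gaussian increments
X = np.concatenate([[0.0], np.cumsum(dX)])      # the realized path X_t(omega)

# Measuring at 20 time points t_1 < ... < t_20 gives the sample {x_i},
# where x_i is a realization of the random variable X_{t_i}.
idx = np.sort(rng.choice(len(t), size=20, replace=False))
times, sample = t[idx], X[idx]
print(list(zip(times.round(3), sample.round(3))))
```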