Everywhere I read about the central limit theorem (CLT), a sample of size $n$ is treated as a sequence of random variables $(X_1, X_2, \ldots, X_n)$, i.e. each observed value is regarded as the realization of its own random variable. These random variables may even have different distributions (Lindeberg CLT).
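For concreteness, the statement I have in mind is the classical i.i.d. version (I am writing $\mu$ and $\sigma^2$ for the common mean and variance just to fix notation):

$$\frac{\bar X_n - \mu}{\sigma/\sqrt{n}} \;\xrightarrow{d}\; N(0,1), \qquad \text{where } \bar X_n = \frac{1}{n}\sum_{i=1}^{n} X_i.$$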
A random variable is a mapping from some sample space $\Omega$ to a value: one outcome in $\Omega$ is mapped to one value, another outcome in $\Omega$ is mapped to another value.
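For example (a toy illustration of my own), a single coin flip could be modelled as

$$X : \Omega \to \mathbb{R}, \qquad \Omega = \{H, T\}, \qquad X(H) = 1,\; X(T) = 0.$$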
Then why is, all of a sudden, every value in a sample viewed as a random variable "in itself", so that the sample becomes a sequence of random variables? Isn't it more intuitive to instead think of all the values in the sample as values of one single random variable? That one random variable would have the same distribution as the population, and the CLT could be applied to the distribution of that one random variable. I see this might be tricky if each value in the sample comes from a different distribution, so I am mainly curious about the case where the sample is viewed as i.i.d. random variables.
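To make the question concrete, here is a minimal simulation sketch (NumPy; the exponential distribution, the seed, and all variable names are my own choices, not taken from any particular source). In code, "drawing $n$ values from one distribution" and "observing $n$ i.i.d. random variables $X_1, \ldots, X_n$" produce exactly the same numbers, which is part of what I am asking about:

```python
import numpy as np

rng = np.random.default_rng(0)

n = 30                  # sample size
reps = 10_000           # number of simulated samples
mu, sigma = 1.0, 1.0    # exponential(1) has mean 1 and standard deviation 1

# Each row is one sample: n draws that the CLT treats as
# n i.i.d. random variables X_1, ..., X_n (all exponential with rate 1).
samples = rng.exponential(scale=1.0, size=(reps, n))

# Standardized sample means; by the CLT these should be roughly N(0, 1).
z = (samples.mean(axis=1) - mu) / (sigma / np.sqrt(n))

print(z.mean(), z.std())  # expect values close to 0 and 1
```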