Suppose one has a collection of i.i.d. random functions $\{f(\cdot,t):t\in\mathbb R\}$, where we write $f$ for the common distribution. Assume that $f$ is integrable a.s., and that $\mathbb E\|f\|_1<\infty.$ In a probability application, what I need is to interchange integrals and expectations like \begin{align}\mathbb E\int_{\mathbb R} f(t,t)g(t)\ \mathrm dt=\int_{\mathbb R} \mathbb Ef(t)g(t)\ \mathrm dt. \tag{1}\end{align}
I'm looking for conditions on both the collection $\{f(\cdot,t):t\in\mathbb R\}$ and the function $g$ such that this is true, though it would be convenient if the stronger assumptions are on $\{f(\cdot,t):t\in\mathbb R\}$. In any case, $g$ is a measurable function.
If we ignore $g$ for the time being, this comes down to a Fubini argument, given that the iterated integral of $|f(\cdot,\cdot)|$ is finite. However, to make sense of those integrals, we also need a $f$ to be $\mathcal B(\mathbb R)\otimes \mathcal F$- measurable, where $\mathcal F$ is the sigma-algebra on which the i.i.d. collection $\{f(\cdot,t):t\in\mathbb R\}$ is defined. For my probability application, this condition does not give me any insight in the problem. Can we come up with a more intuitive assumption justifying the interchange of expectation and integration? I don't mind if this turns out to be a stronger assumption on $f$.
When we look at a single random function, I assume that such a function $f(\cdot)$ is a seperable stochastic process (see also the Neveu, Mathematical Foundations of the Calculus of Probability, $\S$ III.4), which allows us to do measurability conclusions on such a random function. As for the i.i.d. assumption: to me it seems natural to assume that the autocovariance is of the form (something like) $$\mathbb E\|f(\cdot,t)f(\cdot,t+\tau)\|_1-\mathbb E\|f(\cdot,t)\|_1\mathbb E\|f(\cdot,\tau)\|_1=\left(\mathbb E\|f(\cdot,t)f(\cdot,t+\tau)\|_1-\mathbb E\|f(\cdot,t)\|_1^2\right)\delta(\tau),$$ where the usage of the dirac delta is motivated by an analogy with white noise; see also the comment by Snoop.
I am wondering if these assumptions suffice to justify the interchange of expectation and integration in (1). If yes, how can this be proved? If not, what additional assumption would help me?
Any help or reference is much appreciated. Please let me know if you need more details.