
Suppose one has a collection of i.i.d. random functions $\{f(\cdot,t):t\in\mathbb R\}$, where we write $f$ for a generic copy with the common distribution. Assume that $f$ is integrable a.s. and that $\mathbb E\|f\|_1<\infty$. In a probability application, what I need is to interchange integrals and expectations, as in \begin{align}\mathbb E\int_{\mathbb R} f(t,t)g(t)\ \mathrm dt=\int_{\mathbb R} \mathbb E[f(t)]\,g(t)\ \mathrm dt, \tag{1}\end{align} where $f(t,t)$ is the random function with index $t$ evaluated at the point $t$, so that $\mathbb E[f(t,t)]=\mathbb E[f(t)]$ because $f(\cdot,t)\sim f$.
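
To fix notation (my own expansion, but consistent with how the norm is used further below), $\|f\|_1$ denotes the pathwise $L^1$ norm of the generic copy, so the moment assumption reads $$\mathbb E\|f\|_1=\mathbb E\int_{\mathbb R}|f(s)|\,\mathrm ds<\infty.$$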

I'm looking for conditions on both the collection $\{f(\cdot,t):t\in\mathbb R\}$ and the function $g$ under which this is true, though it would be convenient if the stronger assumptions were on $\{f(\cdot,t):t\in\mathbb R\}$. In any case, $g$ is a measurable function.

If we ignore $g$ for the time being, this comes down to a Fubini argument, given that the iterated integral of $|f(\cdot,\cdot)|$ is finite. However, to make sense of those integrals, we also need $f$ to be $\mathcal B(\mathbb R)\otimes\mathcal F$-measurable, where $\mathcal F$ is the $\sigma$-algebra on which the i.i.d. collection $\{f(\cdot,t):t\in\mathbb R\}$ is defined. For my probability application, this condition does not give me any insight into the problem. Can we come up with a more intuitive assumption justifying the interchange of expectation and integration? I don't mind if this turns out to be a stronger assumption on $f$.
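
For concreteness, here is a minimal sketch of the Fubini step under two assumptions of mine that go beyond the setup above: the diagonal process $(\omega,t)\mapsto f(t,t)$ is $\mathcal F\otimes\mathcal B(\mathbb R)$-measurable, and $g$ is bounded. Tonelli applied to $P\otimes\lambda$ gives $$\mathbb E\int_{\mathbb R}\bigl|f(t,t)g(t)\bigr|\,\mathrm dt=\int_{\mathbb R}\mathbb E\bigl|f(t,t)\bigr|\,|g(t)|\,\mathrm dt\le\|g\|_\infty\int_{\mathbb R}\mathbb E|f(t)|\,\mathrm dt=\|g\|_\infty\,\mathbb E\|f\|_1<\infty,$$ where the middle equality uses $f(\cdot,t)\sim f$ and the last step is Tonelli again, this time for the generic copy. Finiteness of this iterated integral is exactly the hypothesis under which Fubini yields (1).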

When we look at a single random function, I assume that such a function $f(\cdot)$ is a separable stochastic process (see Neveu, Mathematical Foundations of the Calculus of Probability, $\S$ III.4), which allows us to draw measurability conclusions about such a random function. As for the i.i.d. assumption: to me it seems natural to assume that the autocovariance is of the form (something like) $$\mathbb E\|f(\cdot,t)f(\cdot,t+\tau)\|_1-\mathbb E\|f(\cdot,t)\|_1\,\mathbb E\|f(\cdot,t+\tau)\|_1=\left(\mathbb E\|f(\cdot,t)f(\cdot,t+\tau)\|_1-\mathbb E\|f(\cdot,t)\|_1^2\right)\delta(\tau),$$ where the use of the Dirac delta is motivated by an analogy with white noise; see also the comment by Snoop.
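
As a sanity check on this analogy (my own remark, assuming additionally that second moments exist): for an i.i.d. family the values of the diagonal process at two distinct indices are independent, so the covariance is exactly $$\operatorname{Cov}\bigl(f(s,s),f(t,t)\bigr)=\operatorname{Var}\bigl(f(t,t)\bigr)\,\mathbf 1_{\{s=t\}},$$ a Kronecker rather than a Dirac delta; the $\delta(\tau)$ above is the usual white-noise heuristic for this degenerate correlation structure.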

I am wondering if these assumptions suffice to justify the interchange of expectation and integration in (1). If yes, how can this be proved? If not, what additional assumption would help me?

Any help or reference is much appreciated. Please let me know if you need more details.

Václav Mordvinov
  • Every decent non-elementary probability theory book either provides the necessary measure-theoretic background or refers to a book that contains it. In your case the product measure you want to apply Fubini to is $P\otimes\lambda$, where $\lambda$ is the Lebesgue measure. – Kurt G. Mar 23 '23 at 08:39
  • 1
    I overlooked your set up that the $f$ you want are i.i.d. Thanks @Snoop for pointing that out. Apart from that what I wrote holds. – Kurt G. Mar 23 '23 at 08:41
  • @KurtG. the problem of constructing uncountably many i.i.d. random variables is more subtle than it appears, so maybe my comment could be misleading. I deleted my comment and I'll let OP go down the rabbit hole. – Snoop Mar 23 '23 at 08:47
  • Thank you for your comments. Indeed, I was not hoping to construct such a collection of r.v.s on $([0,1],\mathcal B([0,1]),\mathrm{Leb})$, but I was intending to use a wider probability space, which is possible by Kolmogorov's extension theorem. – Václav Mordvinov Mar 23 '23 at 09:10
  • @VáclavMordvinov a quick Google search will reveal to you that the concept of 'continuous-time white noise' is quite thorny, especially when you want to manipulate it, even if it exists; see e.g. here. – Snoop Mar 23 '23 at 09:24

0 Answers