I need to numerically integrate a large number of ODEs of the following form: $$ \dot{X} = k_{1}\left[\rule{0pt}{4mm}U\left(t\right) - X\right] + k_{2}V\left(t\right)\left[\rule{0pt}{4mm}W\left(t\right) - X\right] $$
Here:
- All variables and constants are positive real numbers.
- $X(t)$ is the unknown function, which is determined by integrating the ODE over the known finite interval $t \in [t_0, t_1]$.
- $k_1$ and $k_2$ are known positive constants.
- $U(t), V(t), W(t)$ are known forcing functions, measured empirically on the interval $t \in [t_0, t_1]$ with a small uniform timestep $\Delta t$.
I don't need a very precise solution: an error of ~5% is perfectly tolerable, as long as it does not blow up towards the end of the integration interval.
I have integrated the above equation with SciPy's `solve_ivp`, trying different solvers such as RK45 and Radau. Since these methods need to evaluate the RHS of the ODE at arbitrary times, I use linear interpolation for the functions $U(t), V(t), W(t)$.
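For reference, a minimal self-contained version of my current setup (synthetic forcing data standing in for the measurements; the values of $k_1$, $k_2$ and the tolerances are placeholders):

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.interpolate import interp1d

# Synthetic stand-ins for the measured forcing functions,
# sampled on a uniform grid (real data would be loaded from file).
t0, t1, dt = 0.0, 10.0, 0.01
t_grid = np.arange(t0, t1 + dt, dt)
U = interp1d(t_grid, np.sin(t_grid), kind="linear", fill_value="extrapolate")
V = interp1d(t_grid, 1.0 + 0.1 * np.cos(t_grid), kind="linear", fill_value="extrapolate")
W = interp1d(t_grid, np.cos(2.0 * t_grid), kind="linear", fill_value="extrapolate")

k1, k2 = 1.0, 0.5  # placeholder constants

def rhs(t, x):
    # dX/dt = k1 * (U(t) - X) + k2 * V(t) * (W(t) - X)
    return k1 * (U(t) - x) + k2 * V(t) * (W(t) - x)

sol = solve_ivp(rhs, (t0, t1), [0.0], method="RK45", rtol=1e-3, atol=1e-6)
print(sol.success, sol.t.size)  # step count balloons when the forcings are noisy
```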
I have tested this both on toy examples with smoothly varying forcing functions and known analytical solutions, and on real data. It works as expected in both cases; however, the real data requires orders of magnitude more compute time.
I suspect that the main culprit is the noise in the measured forcing functions. On the one hand, the solver does not model the noise explicitly, which I suspect could lead to systematic errors in the solution. On the other hand, the noise likely causes the step-size adaptation of the integrator (e.g. RK45) to pick very small steps, which results in much slower computation.
I see two possible approaches to remedying this issue:
- Low-pass filter the data and model only the slowly-varying effects. It is given that the interesting effects happen at slower timescales and the noise at faster ones, but I do not know the exact cutoff timescale, and there is likely at least some overlap.
- Include the noise in the model and use an integrator designed to handle noisy ODEs.
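To make the first option concrete, the pre-filtering I have in mind looks like this: a zero-phase Butterworth low-pass applied with `scipy.signal.filtfilt`. The 1 Hz cutoff below is a pure guess on synthetic data; choosing the cutoff for real data is exactly the open problem.

```python
import numpy as np
from scipy.signal import butter, filtfilt

dt = 0.01                 # uniform sampling step of the measurements
fs = 1.0 / dt             # sampling frequency
cutoff_hz = 1.0           # guessed cutoff; would need tuning on real data

# 4th-order Butterworth low-pass; filtfilt applies it forward and
# backward, so the smoothed signal has no phase lag.
b, a = butter(N=4, Wn=cutoff_hz, btype="low", fs=fs)

rng = np.random.default_rng(0)
t = np.arange(0.0, 10.0, dt)
u_measured = np.sin(t) + 0.3 * rng.standard_normal(t.size)  # noisy stand-in for U(t)
u_smooth = filtfilt(b, a, u_measured)

# Away from the edges, the filtered signal should be much closer to the
# underlying slow component than the raw measurement is.
print(np.max(np.abs(u_smooth[100:-100] - np.sin(t[100:-100]))))
```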
Questions:
- Is de-noising prior to integration at least somewhat feasible for this problem, or do I have to model the noise explicitly in my integration routine?
- If so, is there a rule of thumb for estimating the expected performance of a numerical integration scheme? E.g., how does the "smoothness" of the empirical data affect the step size chosen by the above routines? Would a different interpolation routine (quadratic, spline, etc.) help?
- What methods exist that are designed for numerically integrating noisy ODEs?
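For the interpolation part of the second question, this is the kind of swap I would try: replacing the linear `interp1d` with a smoothing spline, which interpolates and de-noises at once (synthetic data; the smoothing factor `s` follows the rough $s \approx m\sigma^2$ rule of thumb and would have to be estimated for real data):

```python
import numpy as np
from scipy.interpolate import interp1d, UnivariateSpline

rng = np.random.default_rng(1)
dt = 0.01
t = np.arange(0.0, 10.0, dt)
u = np.sin(t) + 0.3 * rng.standard_normal(t.size)  # noisy stand-in for U(t)

u_linear = interp1d(t, u, kind="linear")                    # what I use now
u_cubic = interp1d(t, u, kind="cubic")                      # smoother, still follows the noise
u_spline = UnivariateSpline(t, u, k=3, s=t.size * 0.3**2)   # smoothing spline, s ~ m * sigma^2

# Compare each interpolant against the known slow component.
t_fine = np.linspace(0.5, 9.5, 2001)
for name, f in [("linear", u_linear), ("cubic", u_cubic), ("smoothing", u_spline)]:
    print(name, np.max(np.abs(f(t_fine) - np.sin(t_fine))))
```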