
I saw this example of Runge-Kutta online. I use the Unity game engine, and I am working on soft-body physics (squishy things like a sponge).

It was working, but after running for a minute or so the vertices start to explode (they shake really badly and the mesh no longer behaves the way it is supposed to). I watched a YouTuber named "Gonkee" who made a video called "I built my own physics engine from scratch" or something like that. Anyway, he said the same thing happened to him, so I am 98% sure that this is not a coding problem.

He said he used Runge-Kutta 4 because of "some rounding error". I found some guy's code on GitHub. (FYI: on the first page you can see a code snippet; on line 14, the first parameter should be y, along with the other 3, to make 4 total.)

I see that they use the examples of wolves, rabbit, starttime, endtime, and steps.

Q: How do the examples he uses correlate to adding force using Euler's method? Mine is like this: add force per second, apply force per second.

Sorry if I am missing something and my question doesn't make sense; I just started learning about Runge-Kutta and have almost no idea how it works.

gbe
  • 207

1 Answer


This sounds like the effect of a step size $h$ that is too large, outside the stability region of the method.

Essentially, the discretized body has oscillation modes or frequency components of a high frequency $\lambda$. In a "smooth" state, these contribute to the whole state with a minuscule amplitude. During the time evolution, if $\lambda h$ falls outside of a rather small region of radius about 2 to 4 around the origin (the stability region of the method), the amplitude gets catastrophically magnified, destroying the smoothness of the object and leading to other "unphysical" deviations (2D example for too large step sizes in this sense).
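
To see this effect in isolation, here is a minimal Python sketch (not part of the answer; the value of $\lambda$ and the step sizes are chosen by me) that applies the classical RK4 step to the scalar test equation $y' = \lambda y$. With $|\lambda h|$ just inside the stability interval the mode decays; just outside it, the same mode explodes:

```python
def rk4_step(f, t, y, h):
    """One classical RK4 step."""
    k1 = f(t, y)
    k2 = f(t + h/2, y + h/2 * k1)
    k3 = f(t + h/2, y + h/2 * k2)
    k4 = f(t + h, y + h * k3)
    return y + h/6 * (k1 + 2*k2 + 2*k3 + k4)

lam = -50.0                      # a fast ("stiff") decay mode, lambda < 0
f = lambda t, y: lam * y

for h in (0.05, 0.06):           # lambda*h = -2.5 (stable) vs. -3.0 (unstable)
    y = 1.0
    for _ in range(200):
        y = rk4_step(f, 0.0, y, h)
    print(f"h = {h}: |lambda*h| = {abs(lam*h):.2f}, y after 200 steps = {y:.3e}")
```

The real-axis stability boundary of RK4 sits near $|\lambda h| \approx 2.8$, so the first step size damps the mode while the second amplifies it on every step.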


You could "blindly" reduce the step size, e.g., replace one step of size $h$ with 10 steps of size $h/10$. In general this will increase the time until unrealistic distortions become visible. If the number of sub-steps becomes large enough, the simulation remains stable forever, distortions changing to "normal" numerical errors.

The other variant is to obtain an error estimate, via step doubling or by changing to an embedded method like Runge-Kutta-Fehlberg that already contains one, and to adapt the step size so that the error per unit step stays below some tolerance. While the error estimate also becomes increasingly wrong outside the region of stability, it also becomes large, leading to a reduction in step size.
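
A hedged sketch of the step-doubling variant in Python (the tolerance, safety factor, and max-norm are my choices, not something stated above), reusing the `rk4_step` from the earlier snippet:

```python
import numpy as np

def adaptive_step(f, t, y, h, tol=1e-6, order=4):
    """One adaptive step via step doubling: compare one step of size h with
    two steps of size h/2, and resize h to keep the local error near tol."""
    while True:
        y_big  = rk4_step(f, t, y, h)
        y_half = rk4_step(f, t + h/2, rk4_step(f, t, y, h/2), h/2)
        # Richardson-style error estimate for an order-4 method
        err = np.max(np.abs(y_half - y_big)) / (2**order - 1) + 1e-300
        h_new = 0.9 * h * (tol / err) ** (1 / (order + 1))
        if err <= tol:
            return y_half, t + h, h_new   # accept, propose next step size
        h = h_new                          # reject, retry with a smaller step
```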

Note that for high-dimensional systems the construction of the error norm becomes critical. You want to avoid having one state component overly dominate the others. The scales/weights/coefficients of the norm should also ensure that errors in each state component lead to error norms of a similar size.
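
One common concrete choice, offered here as an assumption rather than something stated in the answer, is the weighted RMS norm used in many ODE codes, where each component is scaled by its own absolute and relative tolerance:

```python
import numpy as np

def error_norm(err, y, atol=1e-6, rtol=1e-3):
    """Weighted RMS norm: each component is scaled so that an error the size
    of its own tolerance contributes roughly 1 to the norm."""
    scale = atol + rtol * np.abs(y)
    return np.sqrt(np.mean((err / scale) ** 2))
```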

Lutz Lehmann
  • 131,652