Increase physics and logic calculations beyond 10,000 FPS?

Dear Artists,

I am working on a physics simulation, kind of like FEM (Finite Element Method).
I need time steps of less than 1 microsecond, which equals 1,000,000 FPS.
The BGE only allows 10,000 FPS, which is by far not enough to calculate real material properties.

Is there a way to increase both the logic and physics maximum tic rates beyond 10,000?

There's no point stepping all BGE systems at 10,000 Hz. What you actually want to step is most likely just the simulation itself. If you are using the BGE, it should be for rendering the simulation rather than calculating its state; realistically, Python is not designed for FEM. You could see some speedup with Cython and numpy, but it seems more sensible to just do it all in C. I don't think the BGE is really the engine for this either. Better to find an engine that lets you manipulate data in C/C++ (like Panda3D), or at least pass in your numpy data directly without copying it into Python objects first.
TL;DR: better to explain what exactly you want to do.

Can't you simply set 1 microsecond = 1 and scale the output?

@pgi: thanks for that idea! Well, I could apply a factor of 1/1000 to all forces and apply a custom g-force with more accuracy than the world setting enables. I'll take a look at it.
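For what it's worth, here's a rough sketch of how the constants would rescale if one engine "second" is reinterpreted as T real seconds (the k and v values below are made-up examples, not from my actual setup):

```python
# Rough sketch: reinterpret one engine "second" as T real seconds.
# With mass and length units unchanged, anything carrying 1/s^2
# (accelerations, forces, spring constants) scales by T^2,
# and velocities scale by T.
T = 1e-3                      # pretend 1 engine second = 1 millisecond

g_real = 9.81                 # m/s^2
g_scaled = g_real * T**2      # 9.81e-6 m per (engine second)^2

k_real = 1.0e6                # N/m, example stiffness of a rod
k_scaled = k_real * T**2      # 1.0 in the rescaled units

v_real = 20.0                 # m/s, example car speed
v_scaled = v_real * T         # 0.02 m per engine second

print(g_scaled, k_scaled, v_scaled)
```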

@agoose77: I am using Blender because it was the fastest way of prototyping a simulation. Easier alternatives would be CAD systems, but that would involve a lot of research, and most such systems are not open source.

The whole purpose of this sim is to auto-evaluate perfect car suspension geometry and its response to the environment in real time.
In the long run a custom-tailored engine will be used, but for the first prototypes Blender was the easiest way to go.

I don't know if I'd bother too much with g accuracy; you're certainly not going to do an accurate physics simulation with IEEE floating points. You need arbitrary-precision values to do the computation in the background, and then you can freely use any game engine to present the result visually - if the presentation is meant to be merely aesthetic.

This sounds awesome, for real. Be sure to check back and post an update with how it goes once you get it to your liking. I’d love to see how it works.


Here is a pic for easier understanding of what's being simulated. The colored vectors are stiff steel rods, which tend to have huge peak forces due to the relatively low physics tic rate. The simulation is running at 7,680 TPS, which is way too low.

The reason for this kind of simulation, which would be useless in the normal CAD and MATLAB workflow, is that the suspension will improve itself over many iterations by changing geometry and materials (constructing itself); this would not be possible in the very constrained CAD environments we have today. By letting the car drive around a predefined course or lap, I can evaluate the forces and strain on the suspension parts, which will be fed back into regular construction.

The iterative part of changing the car's geometry to improve lap time will be done by some sort of evolution/natural-selection process.

@pgi: you're right about that accuracy :slight_smile: as an engineer you sometimes worry about the most unimportant things.
The real issue with accuracy will come when these rods have a strain of a few micrometers, which would resemble reality but wouldn't be accurately calculated with floats. But as this only affects oscillating forces, which cannot be damped in a stiff suspension anyway, it is less of a deal. But we will see…

I read about a way to get double float precision in Bullet on the mailing list:

Blender uses the single-precision version of Bullet (it doesn’t #define BT_USE_DOUBLE_PRECISION)

Perhaps someone can patch it in? (A custom physics simulation build?)

have you read anything about pybullet?

It's not the size of the number that matters here but the format. IEEE floating-point numbers are basically numbers compressed using a lossy algorithm. What you need here is not something that can hold bigger differentials, but a format that will always hold the exact representation of your numbers.
In other words, you want something that, given 1 + 1, will tell you 2 and not 2.000000000001.
In the Python realm we're talking about libraries like this:
http://mpmath.org/
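A quick illustration of the difference, plain doubles vs. mpmath at a higher working precision (rough sketch):

```python
from mpmath import mp, mpf

# Plain 64-bit float: 0.1 and 0.2 have no exact binary representation,
# so the error shows up in the last printed digits.
print(0.1 + 0.2)                # 0.30000000000000004

# mpmath: still binary floating point, but with as many digits as you ask for,
# so the representation error is pushed far below anything you'd ever print.
mp.dps = 50                     # 50 decimal digits of working precision
print(mpf('0.1') + mpf('0.2'))  # 0.3
```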

I think that double precision means 2x more accuracy, not 2x the float length.

Single precision:
  • The first bit is the sign bit, S,
  • the next eight bits are the exponent bits, E, and
  • the final 23 bits are the fraction, F.

Double precision:
  • The first bit is the sign bit, S,
  • the next eleven bits are the exponent bits, E, and
  • the final 52 bits are the fraction, F.

So single precision takes up exactly 32 bits, and double precision takes up exactly 64 bits.
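For reference, numpy will tell you what those bit counts mean in practice (small sketch):

```python
import numpy as np

# Machine epsilon: the spacing between 1.0 and the next representable value.
print(np.finfo(np.float32).eps)   # ~1.19e-07  (23-bit fraction)
print(np.finfo(np.float64).eps)   # ~2.22e-16  (52-bit fraction)

# Approximate decimal digits of precision for each type.
print(np.finfo(np.float32).precision, np.finfo(np.float64).precision)  # 6 15
```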

That looks like a chicken or the egg problem, is it more accurate because it’s wider or is it wider to be more accurate?

Let’s all take a moment and remember what precision and accuracy mean. If you hit the bullseye every time, that’s both precise and accurate; if you hit all over the place but the average lies on the bullseye, that’s imprecise but accurate. If you always hit where you aim, but don’t aim at the bullseye, that’s precise, but inaccurate.

If your dartboard explodes, that’s rigidbody physics.

Some horrible observations:

  1. Bullet does not simulate real materials.

  2. Single-precision floats will ruin your day. Reducing the timestep goes a long way, but discretization error is only part of the problem. If the timestep becomes too small, quantization error goes nuclear, destroying your data. In layman's terms, the error from loss of precision grows every time a calculation is repeated (see the sketch after this list). 60 loops per second is fine, but one million? Bullet is not suitable for finite element analysis on a microsecond timescale. If you intend to use that much precision, you're going to wind up with a suspension evolved to exploit errors in energy conservation.

  3. You could compile the BGE with double-precision floats and an unlocked timestep, but this might be more painful than biting the bullet (pun intended) and implementing your own simulation. It also won't reduce the error inherent to a rigid-body simulation. Vehicle bodies are flexible, and this flexibility contributes to the suspension behaviour. Given that massive inaccuracy, I don't know what 10,000 Hz isn't doing for you that 1,000,000 Hz will. If you're that intent on using Blender, even for a prototype, you might need to define your goals more clearly.

  4. Bullet’s primitives can’t be used to simulate a realistic pneumatic tire, another critical component of most suspensions. I hate to repeat what’s already been said, but what’s the goal of this project? Independent suspension for chariots? Admittedly that’s cool, and I’d totally back it on kickstarter.
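To make point 2 concrete, here's a rough sketch (the numbers are made up, not from the actual sim) of what repeating a tiny step a million times does in single vs. double precision:

```python
import numpy as np

dt = 1e-6                       # one microsecond per step
n = 1_000_000                   # one simulated second

total32 = np.float32(0.0)
total64 = np.float64(0.0)
for _ in range(n):
    total32 += np.float32(dt)   # a rounding error on every single add
    total64 += np.float64(dt)

# The exact answer is 1.0. The float32 total drifts visibly,
# while the float64 total is still correct to many digits.
print(total32, total64)
```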

@BluePrintRandom
Oddly enough, double precision doesn't mean two times as much precision; it's more like 4,294,967,296 (2³²) times as much, taken over the whole range of representable numbers.

Since you have problems with instability in the simulation - what time-stepping algorithm are you using?
Simple time steps (a.k.a. the Euler method) are horribly unstable for cases like this; you'll at least want something like 4th-order Runge-Kutta - best case would be an implicit solver.

Generally, doing a specialized simulation like this in Blender won’t give you great results - you’ll probably have to go C++ for that.
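To illustrate the difference, here's a rough, untested sketch of a single stiff spring stepped with forward Euler vs. classic RK4 at the same timestep (the k, m and dt values are just examples):

```python
import math

k, m = 1.0e6, 1.0             # stiff spring, 1 kg mass
omega = math.sqrt(k / m)      # natural frequency: 1000 rad/s
dt = 1.0e-3                   # omega*dt = 1: fine for RK4, fatal for forward Euler

def deriv(x, v):
    return v, -(k / m) * x    # dx/dt = v, dv/dt = -(k/m) x

def euler_step(x, v):
    dx, dv = deriv(x, v)
    return x + dt * dx, v + dt * dv

def rk4_step(x, v):
    k1x, k1v = deriv(x, v)
    k2x, k2v = deriv(x + 0.5 * dt * k1x, v + 0.5 * dt * k1v)
    k3x, k3v = deriv(x + 0.5 * dt * k2x, v + 0.5 * dt * k2v)
    k4x, k4v = deriv(x + dt * k3x, v + dt * k3v)
    return (x + dt / 6.0 * (k1x + 2 * k2x + 2 * k3x + k4x),
            v + dt / 6.0 * (k1v + 2 * k2v + 2 * k3v + k4v))

for name, step in (("euler", euler_step), ("rk4", rk4_step)):
    x, v = 0.001, 0.0         # 1 mm initial displacement
    for _ in range(100):
        x, v = step(x, v)
    amplitude = math.sqrt(x * x + (v / omega) ** 2)
    # Euler has blown up by many orders of magnitude;
    # RK4 is still around the original millimetre scale.
    print(name, amplitude)
```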


Results are now way less noisy compared to the old 7,680 Hz captures!

@BPR: pybullet will not be a choice because there are not enough contributors behind it; also, Python itself is good for prototyping but not as a final solution.

@float precision discussion: How do floats work anyway? Are they calculated around some most significant digits? If that is so, then there should be less of a problem; a brute increase in Hz reduces the deltas between time steps significantly.
(Note the picture showing the rod loads - it looks a lot less noisy than the old 7,680 Hz graphs.)

I was also thinking about a dynamic time-step function which shortens the time step only when necessary. This should improve calculation speed by orders of magnitude.
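Something like classic step-doubling could do that - a rough sketch, assuming a `step(state, dt)` function that advances the whole sim by dt (both names are placeholders, not anything Blender provides):

```python
def adaptive_step(state, step, dt, tol=1e-9, dt_min=1e-9, dt_max=1e-3):
    """One adaptive step using step-doubling.

    `step(state, dt)` is assumed to return the new state after dt;
    states are assumed to support subtraction and abs() for the error norm.
    """
    while True:
        full = step(state, dt)                      # one step of dt
        half = step(step(state, dt / 2), dt / 2)    # two steps of dt/2
        err = abs(half - full)                      # crude local error estimate
        if err <= tol or dt <= dt_min:
            # accept the more accurate result, then try a bigger step next time
            return half, min(dt * 2, dt_max)
        dt = max(dt / 2, dt_min)                    # too much error: retry smaller
```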

@Raiderium:
“1. Bullet does not simulate real materials.”
Yes, rod prototypes are being constructed right now, and the test results will be embedded into the simulations.

“2. loss of precision grows every time a calculation is repeated.”
If I simulate a soapbox cart, it will probably end up at the same spot every time I simulate.
If I take a run in my newly garage-built soapbox cart, I'll end up at a new spot every time I take a ride :slight_smile:

“4. simulate a realistic pneumatic tire”
Good point, and that's where Blender drops out. This will be evaluated during test drives of the real prototype, definitely not inside Blender.

@lukasstockner97: I am basically using the one Blender gives me. But I'll take a look and find out what the difference is.

A learning algorithm on suspension. Nice! Reminds me of BoxCar2D.

When you're doing simulations like this, sometimes it's best to forget about the rendering and go far more mathematical. With some effort you should be able to separate your calculation of loads and impacts from the actual draw cycle. This will allow the BGE to run at 60 FPS, but the simulation to run at 100,000 ticks per second.
Unfortunately, to do so you'll have to roll your own physics engine somewhat, but it looks like you're doing that anyway with your suspension geometry.

How would this be done:

  • Pick a number of simulation steps per frame
  • Per step, calculate the physics collisions (namely wheels with ground followed by suspension response)
  • To get wheel collisions, use a raycast vertically down. Get the horizontal force from the normal of the surface the ray hits. As long as your wheel is smaller than the bumps of the surface you are using in your simulation, this will be an accurate enough approximation.

Pretty much it comes down to using the BGE as a display/visualization system.
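A skeleton of that idea (framework-agnostic sketch - the ray/ground query and the actual force model are placeholders, not BGE calls):

```python
RENDER_FPS = 60
SIM_RATE = 100_000                    # physics ticks per second
SUBSTEPS = SIM_RATE // RENDER_FPS     # physics steps per rendered frame
DT = 1.0 / SIM_RATE

def ground_height_and_normal(x, y):
    # placeholder for the "raycast straight down" query against the track mesh
    return 0.0, (0.0, 0.0, 1.0)

def physics_step(state, dt):
    # placeholder: query ground_height_and_normal() under each wheel,
    # compute suspension response, integrate forces over dt
    return state

def on_frame(state):
    # called once per rendered frame; the simulation runs many steps in between
    for _ in range(SUBSTEPS):
        state = physics_step(state, DT)
    return state                      # hand this back to the BGE objects for display
```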

BUT. I highly doubt the timestep is your problem. Most car suspensions have response rates of about 1-10 Hz, so even 200 Hz of simulation should be pretty accurate. So it sounds like you have other issues that just happen to converge on a solution when you simulate fast enough.


Floats work by storing a number (e.g. 2.65925) and a power (e.g. ×10^30). This allows them to store small numbers with high precision and large numbers with low precision. I've found them plenty accurate for everything I've done. If you're not concerned about the nano-newtons on your car, they're probably accurate enough.
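You can peek at that structure directly from Python (small sketch; ulp() needs Python 3.9+):

```python
import math

# frexp splits a float into significand * 2**exponent (base 2, not base 10)
print(math.frexp(2.65925e30))   # (~0.524, 102) -> 0.524... * 2**102

# ulp() gives the gap to the next representable float at that magnitude:
# tiny near small numbers, huge near big ones.
print(math.ulp(1.0))            # ~2.2e-16
print(math.ulp(1.0e30))         # ~1.4e14
```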

“simulate a realistic pneumatic tire” - If you're calculating the physics using the raycast method I described (slightly) above, you can easily insert a tire model into the simulation. Tires can be represented accurately enough by a first-order system.
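For example, a first-order lag on the tire force - a rough sketch, with made-up parameter names and an assumed time constant:

```python
def tire_force_step(f_current, f_steady, dt, tau=0.02):
    """First-order lag: the tire force relaxes toward its steady-state value.

    tau is the relaxation time constant (an assumed ~20 ms here, roughly
    relaxation length divided by rolling speed). Valid for dt << tau.
    """
    return f_current + (f_steady - f_current) * dt / tau
```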


I'm going to mirror what others have said: just simulate it in MATLAB/Octave/some other system designed for this sort of thing. You can still do your genetic-algorithm tweaks, as it's all just math. All you'll lose are the pretty visuals.

The instability in the simulation is not a consequence of floating-point precision. It will be the result of a divergent oscillation. If the step size dt is too large, then for a stiff spring with spring constant k where k >> 0, a small displacement dx will produce a force k·dx. This force will lead to an increase in speed of (k/m)·dt·dx. For large dt, dx or k, this leads to a large positional change when integrated.
In this case, stiff springs have a very large k, meaning that for non-minuscule timesteps they quickly explode. You can solve this naively by using larger numbers of iterations per second, or by using a higher-order solver like Runge-Kutta (RK4).
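For reference, with the semi-implicit (symplectic) Euler stepping that game-style engines typically use, an undamped spring stays stable only while omega·dt < 2, so the critical step is easy to estimate (rough sketch, example numbers only):

```python
import math

k = 1.0e6      # N/m, example stiffness of a steel rod element
m = 1.0        # kg, example nodal mass
omega = math.sqrt(k / m)          # natural frequency, rad/s
dt_critical = 2.0 / omega         # stability limit for symplectic Euler
print(omega, dt_critical)         # 1000 rad/s -> dt must stay below 2 ms
```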

This floating-point thing is getting out of hand.
As far as I know - and I'd be quite surprised to find otherwise - the reason why you don't do this stuff with IEEE floating points is that the IEEE format can exactly represent only some of the decimal numbers: precisely, only those that can be expressed by a fraction whose denominator is a power of 2. It's a binary problem; you would need a non-terminating string of bits to exactly represent the other fractions.
If you use them, you can be neither precise nor accurate; you're just getting the wrong number.

Floating-point operations are no big problem once you consider their properties.

The precision is pretty high within a specific range. As long as you calculate within this range, you should not get many problems. For example, it is a problem to calculate the motion of the objects in the solar system with floats when you want to know it down to meters (or even km).

More problematic in simulation is the increasing variation resulting from imprecision in recursive calculation. I mean, you do a calculation with a specific, very small variation. The result is the input to another calculation, resulting in a higher variation. This quickly grows into pretty large variations.

Imagine you turn yourself by 360°. As you are not perfect, this full turn is not really 360° but 360° with a variation of, let's say, ±1%. Now do 100 full turns, where each one starts where the previous turn ended. The per-turn variation is still 1%, but you turned 360°×100 = 36,000° in total. The worst-case accumulated variation is 36,000° × 1% = 360°. So after 100 full turns you can end up at any angle. The more turns, the higher the variation.

The same happens in simulations with a large number of iterations. Small variations at the beginning result in large differences at the end of the simulation (like: does the flap of a butterfly's wings in Brazil set off a tornado in Texas?).

I'm having a déjà vu, don't know why.
Anyway, it's not about precision, it's about what you can represent with the format of a floating point. You cannot represent certain rational numbers - not because they are big or small, but because they end up being an infinite sequence of bits, in the same way that 1/3 can be represented as a fraction but not as a terminating decimal. That's why we don't use IEEE floating points for financial stuff or calculus in general.