It is déjà vu because it means the same thing. The limited range of expressible numbers results in a specific value range with a specific precision.
I meant a real déjà vu, like “hey, I already wrote this reply to this discussion”. It was super weird.
I can see how the representation could be carved into a range, but it would be a very peculiar range. For example, we’re not talking about “I can have all the numbers from 0 to 5 with five decimals of precision”; we’re talking about “I can have the numbers from 0 to 5 with five decimals of precision, unless the number is 3, because there is no 3.”
If you have 8 bits, you can represent 256 values (2^8). You could either spread those 256 values with fixed precision, giving 256 evenly spaced values, or you could use 4 of the bits for a mantissa (-8 to 7) and 4 for an exponent (-8 to 7). This gives a range of roughly -1024 to 896, but at the top of that range the smallest increment is … 128! So you cannot represent any value between 768 and 896! BUT things change as you get close to zero, because you use a negative exponent: 1*2^-8 is about 0.0039. So you can represent values very accurately near zero.
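To make the spacing concrete, here is a quick Python sketch of such a toy 8-bit format. The layout (a 4-bit signed mantissa times two to a 4-bit signed exponent) is my own assumption for illustration; real formats like IEEE 754 are encoded differently.

```python
# Toy 8-bit float: 4-bit signed mantissa (-8..7) times 2^(4-bit signed exponent).
# This layout is an illustration only; IEEE 754 formats are encoded differently.
values = sorted({m * 2.0 ** e for m in range(-8, 8) for e in range(-8, 8)})

# Spacing right above zero is tiny; spacing at the top of the range is huge.
tiny_gap = values[values.index(0.0) + 1] - 0.0   # 1 * 2^-8
huge_gap = values[-1] - values[-2]               # gap between the two largest values

print(tiny_gap, huge_gap)  # 0.00390625 128.0
```

Same 256 bit patterns, but the representable values cluster densely near zero and thin out dramatically at the extremes.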
Why do I point this out?
What is the difference between 1 and 2? Well it’s clearly 1. What is the difference between 101 and 102? It’s also clearly 1. But between 1 and 2 it is 100% and between 101 and 102 it is under 1%.
And this is the way most data falls. You work either with small values that need good resolution, or with big values that don’t need resolution nearly as fine.
As I mentioned before, I’ve never had precision be the cause of issues. With single precision, if you need your number to be within 0.0005 of the actual value, you will only start failing at … 8192!
As in, below 8192 adjacent representable values are about 0.0005 apart, so 8191.9995 and 8192 are represented separately; above 8192 the spacing doubles to about 0.001.
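You can check the single-precision spacing around 8192 with nothing but Python’s standard struct module, by round-tripping a value through a 32-bit float (a quick sketch, nothing engine-specific):

```python
import struct

def to_f32(x):
    """Round a Python float (a double) to the nearest single-precision value."""
    return struct.unpack('f', struct.pack('f', x))[0]

# Below 8192 adjacent float32 values are ~0.00049 apart, so this stays distinct:
print(to_f32(8191.9995))   # not equal to 8192.0
# At 8192 the spacing doubles to ~0.00098, so this collapses onto 8192.0:
print(to_f32(8192.0002))   # 8192.0
```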
This is sufficient to represent an object’s distance from the centre of the earth, at a location near the surface, to within ~0.5 m.
(By the time you’re at double precision you can represent steps of about a nanometre at the radius of the earth.)
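For the curious, the spacing at Earth’s radius (~6.371e6 m, a figure I’m assuming here) can be computed directly: math.ulp gives the double-precision gap, and the single-precision gap follows from float32’s 24-bit significand.

```python
import math

R = 6.371e6  # mean Earth radius in metres (assumed value)

# Double precision: gap between adjacent representable values at R.
ulp64 = math.ulp(R)                     # about 9.3e-10 m, roughly a nanometre

# Single precision has a 24-bit significand, so its gap at the same
# magnitude is 2^(exponent - 24), with frexp supplying the binary exponent.
ulp32 = 2.0 ** (math.frexp(R)[1] - 24)  # 0.5 m

print(ulp32, ulp64)
```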
Anyway this is a completely separate topic to the original post.
I’m still not convinced that we’re talking about the same stuff here but I will check my position against the IEEE standard. Some day, really not now.
If you don’t intend to correctly model the tires in this prototype, why would you try to model the minute deformations in metal rods? The tires have much more influence on the correctness of the simulation.
Perhaps the purpose of the prototype has eluded me.
@agoose77: I’ll take a look at Runge-Kutta methods together with a genuine mathematician very soon! And I’ll write an independent engine (probably C++, but suggestions are welcome) with per-object simulation and a simulation pipeline. Managing changing time intervals will be easier this way, and implementations on limited hardware become possible.
@sdfgeoff: I’ll probably use doubles, thanks for the illustration.
@Raiderium: I like your attitude, but this is not going to be a BGE game of sorts. Tires will be implemented, no doubt, but they are my least concern. Also, these highly part-dependent things (even rod length matters) will be evaluated once real prototypes have been tested and once a set of tires has been chosen to map them!
So I have gone over approximation methods like Euler and Runge-Kutta, and the result is that they are useful when complex differential equations have to be computed; they can save power, or they can be completely obsolete, as in my case. The issue is, there is no differential equation in simple physics simulations.
It all comes down to the simple formula we all learned in school: s(t+h) = s(t) + v(t)*h + 0.5*a*h^2
Now there is also orientation, not just position, but for embedded simulation this is all you need.
To get accurate results, the only free variable is the time step h, which can be set to small values if needed.
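As a sanity check, that schoolbook update can be stepped directly. A minimal sketch (variable names are my own); for constant acceleration it reproduces the exact s = 0.5*a*t^2:

```python
def step(s, v, a, h):
    # s(t+h) = s(t) + v(t)*h + 0.5*a*h^2, then advance the velocity
    s = s + v * h + 0.5 * a * h * h
    v = v + a * h
    return s, v

# Constant acceleration a = 2 for 10 s in steps of h = 0.25:
s, v, a, h = 0.0, 0.0, 2.0, 0.25
for _ in range(40):
    s, v = step(s, v, a, h)
print(s)  # 100.0, matching the exact 0.5 * a * t^2
```

With non-constant acceleration (springs, damping) this stops being exact, which is where the choice of integrator starts to matter.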
What are you calculating at each timestep?
The only thing that is being calculated is position. Forces, loads, momentum, rates and damping factor are all derived from it.
Resolution is inversely proportional to the time step.
So you’re still calculating the integral of a function where the acceleration is proportional to the extension. These integrals are solved with more stability by higher-order integration methods, simply because they treat the acceleration as non-constant over the integration range. Hence, RK4 should yield better results than plain Euler for your application.
For example, a fixed spring with m*a = -k*x - b*v can be solved numerically or analytically. Of course, in a game engine, where the other forces are variable, numerical integration is the only option.
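To illustrate the stability point, here is a toy comparison (my own sketch, nothing to do with Bullet’s internals) integrating that spring with b = 0 over one full period, where the exact solution returns to x = 1:

```python
import math

def accel(x, v, k=1.0, b=0.0, m=1.0):
    # damped spring: m*a = -k*x - b*v (sign convention assumed)
    return (-k * x - b * v) / m

def euler_step(x, v, h):
    return x + h * v, v + h * accel(x, v)

def rk4_step(x, v, h):
    # classic RK4 on the first-order system x' = v, v' = accel(x, v)
    k1x, k1v = v, accel(x, v)
    k2x, k2v = v + 0.5 * h * k1v, accel(x + 0.5 * h * k1x, v + 0.5 * h * k1v)
    k3x, k3v = v + 0.5 * h * k2v, accel(x + 0.5 * h * k2x, v + 0.5 * h * k2v)
    k4x, k4v = v + h * k3v, accel(x + h * k3x, v + h * k3v)
    return (x + h / 6.0 * (k1x + 2 * k2x + 2 * k3x + k4x),
            v + h / 6.0 * (k1v + 2 * k2v + 2 * k3v + k4v))

# Undamped spring, x(0) = 1, v(0) = 0, integrated over one period.
n = 64
h = 2.0 * math.pi / n
xe, ve = 1.0, 0.0
xr, vr = 1.0, 0.0
for _ in range(n):
    xe, ve = euler_step(xe, ve, h)
    xr, vr = rk4_step(xr, vr, h)
print(abs(xe - 1.0))  # explicit Euler drifts visibly (energy grows)
print(abs(xr - 1.0))  # RK4 stays very close to exact
```

RK4 pays four acceleration evaluations per step instead of one, but the error after a full period is orders of magnitude smaller at the same step size.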
Follow-up: to answer your suggestion more specifically, Runge-Kutta might be a nice solver, and by far better than the current Euler; the only difference is that Bullet would have to do the calculations anyway. The number of calculations stays the same; they might just get a little less complex, and the code executes faster as fewer (or no) extra objects and variables are used.
But these solvers are just approximations, and compared to Euler, Runge-Kutta is quite a power consumer.
Brute-force calculating physics per object might be less elegant but might even consume less computational power.
The thing that will boost simulation speed significantly will be the time-step-evaluating part.
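One sketch of what that time-step-evaluating part could look like, using step doubling: compare one step of size h against two steps of h/2 and adapt. All names and the tolerance are placeholders of mine, not a description of any existing engine.

```python
def step(x, v, h, accel):
    # semi-implicit (symplectic) Euler step
    v = v + h * accel(x, v)
    return x + h * v, v

def adaptive_step(x, v, h, accel, tol=1e-4):
    # Step doubling: compare one step of size h against two steps of h/2.
    # A real controller would reject and redo bad steps; kept simple here.
    x1, v1 = step(x, v, h, accel)
    xh, vh = step(x, v, h / 2.0, accel)
    x2, v2 = step(xh, vh, h / 2.0, accel)
    err = abs(x2 - x1)
    if err > tol:
        h *= 0.5      # too coarse: use a finer step next time
    elif err < tol / 10.0:
        h *= 2.0      # very accurate: a coarser step will do
    return x2, v2, h

# Spring with a = -x: a step of 0.5 is too coarse, so h gets halved.
x, v, h = adaptive_step(1.0, 0.0, 0.5, lambda x, v: -x)
print(h)  # 0.25
```

This way the simulation spends small steps only where the dynamics demand them.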
Displaying and reading data off the simulation can happen out of band, independently of the simulation time step.
That’s all I can tell for now, but I’ll keep you updated.
This sounds great, really. Be sure to check back and send an update on how it goes when you get it to your liking. I’d love to see how it works.