Game Engine Floating-Point Calculations on Different Machines

I’m working on an online game project and am considering sending just user inputs instead of player positions, orientations, etc. (Quite a few online games use this system, and it is pretty much required in RTS games, where there are many units.)

The main thing I’m worried about is if the BGE’s floating point calculations will give the same results on different users’ systems. I need to use the physics engine for a few things such as collisions and ray casts, so I need to know if I can rely on it to give consistent results. Any differences in calculations between users will result in them desyncing.

Some computer architectures use extended- or double-precision floating-point types when performing floating-point operations. I already know that the 32-bit and 64-bit versions of Blender will get different results, but I don’t know whether two clients both using a 32-bit version are guaranteed to get the same results. If anyone (a BGE dev, maybe?) knows more about this, I’d greatly appreciate their input.
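A quick illustration (plain Python, not BGE-specific) of why even identical hardware can disagree: IEEE 754 addition is not associative, so if two builds of the engine evaluate the same expression in a different order, the results differ.

```python
# Float addition is not associative, so any difference in evaluation
# order between builds/compilers changes the result.
a, b, c = 0.1, 0.2, 0.3

left = (a + b) + c    # 0.6000000000000001
right = a + (b + c)   # 0.6

print(left == right)  # False
```

This is why compiler flags, SSE vs x87 code paths, and even reordered loops can break determinism between two otherwise identical clients.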

This is an inherent issue with floating-point maths, not something specific to Python; all languages struggle with it in some context. It’s best not to rely on assumed synchronisation and instead to handle it yourself. The butterfly effect is something you’ll notice with your current approach.

Yeah, I would assume you’d sync up position and rotation data at key points (e.g. every few seconds, or something like that).

Floating-point numbers lose accuracy when you demand more precision from them. So a number like “0.01” will likely be the same across all systems, but “0.00000000000001” may differ between systems. The question is: how much precision do you really need? What is an acceptable difference? The player probably won’t see the difference between “0.00000000001” and “0.0000000000099”.
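To put a number on “how much precision”: the sketch below (plain Python, not a BGE API) round-trips values through IEEE 754 single precision, which is the storage format Bullet typically uses for positions. Single precision keeps roughly 7 significant decimal digits, so the question is whether your gameplay can tolerate errors beyond that.

```python
import struct

def to_f32(x):
    # Round-trip a Python double through IEEE 754 single precision,
    # roughly what a 32-bit float position gives you.
    return struct.unpack('f', struct.pack('f', x))[0]

# float32 keeps about 7 significant decimal digits:
print(to_f32(0.01))       # 0.009999999776482582 (already inexact)
print(to_f32(1234.5678))  # 1234.5677490234375
```

Note that “0.01” is already not stored exactly; it just rounds to the *same* inexact value everywhere, which is all determinism actually requires.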

Keep in mind that the scale of your game will have a significant impact on the level of precision you need. In Blender measurements, “1.0” is equal to one Blender Unit (BU). Making your character models 0.02 BU tall (this is really small) will require an extra 2 digits of precision over making them 2.0 BU tall. (For reference, Bullet physics considers 1 BU = 1 meter, so approx. 2.0 BU is the right size for an average male character.)

Also, be aware that the Game Engine loses precision as you stray from the Origin (that is, [0,0,0]). (This is a lot like what happens if you stray from the Origin in real life. :wink: ) So massive environments can be problematic (by reducing precision) if you don’t have a way of keeping the player centered on the Origin.
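The precision loss away from the origin can be quantified: the gap between adjacent representable values grows with magnitude. A small sketch (assuming single-precision storage, as Bullet typically uses; the helper name is just illustrative):

```python
import struct

def ulp32(x):
    # Gap between x and the next representable float32 above it.
    bits = struct.unpack('I', struct.pack('f', x))[0]
    nxt = struct.unpack('f', struct.pack('I', bits + 1))[0]
    return nxt - x

print(ulp32(1.0))      # 1.1920928955078125e-07 (sub-micron at 1 BU = 1 m)
print(ulp32(10000.0))  # 0.0009765625 (about a millimetre, 10 km out)
```

So 10 km from the origin, positions can only move in roughly millimetre steps, which is why huge worlds usually recenter the player.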

Where do you need that much precision on user input?

You don’t, but the problem (as agoose pointed out) is the butterfly effect. Any differences between two simulations will lead to them diverging greatly over time unless the differences are quickly corrected.

I may just end up designating one player as the server. Other players can just send their input to the server, and the server can tell players where things are. That way all the floating-point results come from one machine. Players can still of course predict objects’ movements based on their own input and then make any corrections afterwards.
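The shape of that idea, as a hypothetical sketch (1D positions, no real networking, all names illustrative rather than BGE API):

```python
class Server:
    """Designated host: the only machine whose float results matter."""
    def __init__(self):
        self.positions = {}  # player id -> x position (1D for brevity)

    def apply_input(self, player, move):
        x = self.positions.get(player, 0.0)
        self.positions[player] = x + move
        return self.positions[player]  # authoritative value sent back

class Client:
    def __init__(self):
        self.predicted = 0.0

    def predict(self, move):
        self.predicted += move        # local prediction for responsiveness

    def correct(self, server_x):
        self.predicted = server_x     # adopt the server's answer

server, client = Server(), Client()
client.predict(1.0)                           # shown immediately on screen
authoritative = server.apply_input("p1", 1.0)
client.correct(authoritative)
print(client.predicted)  # 1.0, client and server agree
```

Because only the server’s arithmetic is authoritative, cross-machine float differences stop being a correctness problem and become a smoothing problem.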

It’s best that you work out a synchronisation framework that supports this. Otherwise, how does the server check whether actions are “allowed”? For RTS systems there are a few good examples. How many units are you thinking of? My system may be able to cover it, and it provides the framework for most things you’d need (player-controlled units, RPC calls, variable replication). It’s still in development and requires a custom build of Blender, though.
There’s an interesting article here that could be translated over.

There aren’t any “disallowed” inputs. For example, if a player tries to jump when they’re stunned and can’t move, the server will simply ignore the input and send the player their old position.
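That filtering step can be sketched like so (hypothetical helper, not a BGE API; state is a plain dict for brevity):

```python
def handle_input(state, command):
    # Inputs that are not currently legal are ignored; the server just
    # re-sends the old authoritative state.
    if command == "jump" and state.get("stunned"):
        return state
    if command == "jump":
        state = dict(state, z=state["z"] + 1.0)
    return state

s = {"z": 0.0, "stunned": True}
print(handle_input(s, "jump"))  # unchanged: {'z': 0.0, 'stunned': True}
```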

For synchronization, I was planning on having the current game frame sent with every message (yes, I know this means I’ll be using a lockstep system, but that shouldn’t be a problem for me). The only major issue I anticipate is handling clients with slower machines. I’d like clients to be able to skip rendering frames when they’re running slow, to help alleviate the problem, but I don’t know if there’s a way to do this in Blender. If a client falls too far behind, the server can just assume no input for the frames it hasn’t received yet and send them the state of the current frame.
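A minimal sketch of that lockstep idea (illustrative names, no networking): each message carries a frame number, the simulation only advances once every player’s input for that frame is known, and the server may substitute “no input” for stragglers.

```python
class Lockstep:
    def __init__(self, players):
        self.players = set(players)
        self.inputs = {}   # frame number -> {player: command}
        self.frame = 0

    def receive(self, frame, player, command):
        self.inputs.setdefault(frame, {})[player] = command

    def try_step(self, assume_missing=False):
        frame_inputs = self.inputs.get(self.frame, {})
        missing = self.players - frame_inputs.keys()
        if missing and not assume_missing:
            return None  # still waiting on the slowest player
        for player in missing:
            frame_inputs[player] = "none"  # server assumes no input
        self.frame += 1
        return frame_inputs

sim = Lockstep(["a", "b"])
sim.receive(0, "a", "jump")
print(sim.try_step())                     # None: still waiting on "b"
print(sim.try_step(assume_missing=True))  # {'a': 'jump', 'b': 'none'}
```

The `assume_missing` branch is the “fall too far behind” policy; without it, everyone stalls at the pace of the slowest client.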

There should always be a notion of allowed inputs. For example, say you send a request to shoot a target: this needs to be verified server-side. If you purely source everything from inputs on the server, that is fine, but you increase the work it has to do. Validation is mainly used for things like UI buttons (can you buy this item?), movement requests, things like that.

Lockstep makes things easier, but also harder. I’d recommend reading some of the articles on design principles (good and bad). Personally, I dislike lockstep for most applications because it tends to flail in medium-latency cases. You don’t want to be waiting on the slowest player all of the time. However, I’d be really interested in a design for a system that takes account of context and events. Something like a timeline that can rewind itself if necessary. That would be very interesting.

Personally, I find the two biggest problems I face are:

  • Determinism - You can’t (in the released Blender binaries) access the delta time of a specific physics frame, nor call a physics simulation for a specific frame. Hence, if the server disagrees with the client, you have to snap the correction, regardless of how far away it is from the real value (you can do some smoothing, but you need to stay below the threshold of total desynchronisation). A note from experience: trying to find workarounds for fundamental limitations is a very, very painful process, and I’d advise against it. Most of the time I spend working on multiplayer systems is purely in the BGE physics. It’s painful. (That said, it’s currently workable after I patched a few things.) To solve this I intend to finalise a design for a user-defined game loop, so we can take full control over the Blender Game Engine (within reason; we’re not going to write the entire engine in Python).
  • Context - It’s really hard to manage a networked game as a built-upon system. The BGE wants to run the game one way, and the network another. You can add a layer of abstraction to unify the two, but it’s never perfect. Therefore, it’s pretty hard to make a networked game out of the BGE by default. If you use logic bricks, properties and so on, the engine doesn’t really realise that these can only operate sometimes, given certain criteria. It would be interesting to consider making multiplayer more logic-brick-friendly for HIVE (I hope to merge my project at some point with an interface), but until then it’s just a nightmare.
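On the snapping point: a common compromise (not BGE-specific, thresholds purely illustrative) is to blend toward the server’s value when the error is small and snap only when it exceeds a threshold.

```python
SNAP_THRESHOLD = 2.0  # world units; tune per game
BLEND = 0.2           # fraction of the error removed per frame

def correct(client_pos, server_pos):
    error = server_pos - client_pos
    if abs(error) > SNAP_THRESHOLD:
        return server_pos              # too far off: snap
    return client_pos + BLEND * error  # close enough: smooth

print(correct(0.0, 5.0))  # 5.0  (snapped)
print(correct(0.0, 1.0))  # 0.2  (smoothed)
```

The catch described above still applies: while blending, the client is deliberately wrong for a few frames, so the blend rate has to outpace the rate at which new error accumulates.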

If you decide that everything will run at 30 fps, that makes things easier. But that’s a bit of an expectation, and if the user cannot meet that requirement, or wishes to run the game at a higher framerate, you have unhappy players.

I’ve offered to let you try out my system, but I have a suspicion that you’re rather like me in this regard: you’d much prefer to write one yourself, for the experience. That’s a good attitude to have, I find. (Although, in hindsight, if this were common practice we’d never have unified protocols or systems, as everyone would write their own slight variation!)

Feel free to hit me up with any questions. I have made an assertion here that I am worth talking to, but regardless, I enjoy networking discussions!

The server doesn’t need to render anything, and the render work on the client remains the same in either case.

if the server disagrees with the client, you have to snap the correction, regardless of how far away it is from the real value

Networks are both slow and unreliable, meaning that some form of correction would still be required, even if the BGE provided all the information you think you need.

In either case, the server should run the simulation authoritatively; other approaches are simply not viable.