How is game animation done?

…not sure where to post this - suppose there’s a horse in my game, and if I whistle to it, no matter where I am, it’ll canter over to me. How is the horse’s animation done?? Since the horse can be anywhere in the world, there isn’t a set no. of steps, i.e. no chance for a pre-made (by an animator) animation to be used for the walking, right? So - is it done by software? How? If I wanted such a horse in MY game, how would I do it (using one of the FOSS engines, e.g. Armory or Godot)?


Well, animating the horse’s legs would be done with a so-called walk cycle: you animate one full set of movement, from an initial state back to the same pose. Then you just loop this animation while your horse is travelling from point A to point B.
For the pathfinding you might either calculate a line between the horse and you and move it along that line, or you get a vector pointing from the horse’s location towards you and move it in that direction.
Pretty basic stuff.
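A minimal sketch of that second approach in Python (Godot’s GDScript would look very similar); `step_toward`, the speed, and the timestep are made-up names and tuning values, not any engine’s API:

```python
import math

def step_toward(pos, target, speed, dt):
    """Move `pos` toward `target` at `speed` units/second over `dt` seconds."""
    dx, dy = target[0] - pos[0], target[1] - pos[1]
    dist = math.hypot(dx, dy)
    if dist <= speed * dt:          # close enough: snap to the target
        return target
    # normalize the direction vector and advance along it
    return (pos[0] + dx / dist * speed * dt,
            pos[1] + dy / dist * speed * dt)

# loop the walk cycle while repeatedly stepping the horse toward the player
horse, player = (0.0, 0.0), (10.0, 0.0)
while horse != player:
    horse = step_toward(horse, player, speed=5.0, dt=0.1)
```

In an engine you would call something like this once per frame, with `dt` being the frame’s delta time, while the walk-cycle animation loops independently.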

Since the horse can be anywhere in the world,

Yes, it can be anywhere in the world, but more than likely, if it’s farther than say 100-200 m away, it will be culled. Its location is remembered; it just isn’t drawn.
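Distance culling of that kind is just a per-actor distance check against the camera. A sketch, assuming a hypothetical dict-based actor list (real engines do this for you, often with smarter spatial structures):

```python
def update_visibility(actors, camera_pos, cull_distance=150.0):
    """Hide actors beyond cull_distance; their positions are still kept."""
    for a in actors:
        dx = a["pos"][0] - camera_pos[0]
        dz = a["pos"][1] - camera_pos[1]
        # compare squared distances to avoid a square root per actor
        a["visible"] = (dx * dx + dz * dz) <= cull_distance ** 2

actors = [{"pos": (50.0, 0.0)}, {"pos": (500.0, 0.0)}]
update_visibility(actors, camera_pos=(0.0, 0.0))
```

Note the `"pos"` entries survive either way, which is the “location is remembered, just not drawn” part.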

When making the horse animation in Blender, it’s advised to keep the horse at the center of the Blender world. The horse never leaves the center while running or walking, although you might have it rise up and down while running, walking, or even jumping. While the animation plays, the model is moved around by the game engine. The engine will also move the model for the jumping, though that can be programmed into the animation instead; it depends on the programmer’s design. The up-and-down movements while running or walking should be done by the animation, since it would be very difficult to program the game engine for this.
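That split can be sketched in a few lines of Python: the engine translates the root along the ground while the in-place, origin-centered walk cycle contributes only the vertical bob. All names and numbers here are invented for illustration:

```python
import math

def horse_root_position(t, start, velocity, cycle_len=1.0, bob_height=0.05):
    """Engine moves the root along the ground; the looping walk cycle
    (authored at the Blender origin) supplies only the up-and-down bob."""
    # engine-driven translation across the world
    x = start[0] + velocity[0] * t
    z = start[1] + velocity[1] * t
    # animation-driven bob, looping with the walk cycle
    phase = (t % cycle_len) / cycle_len
    y = bob_height * abs(math.sin(phase * math.pi * 2))
    return (x, y, z)
```

At the start of each cycle the bob is zero and the horse sits exactly on the ground, which is why the cycle must begin and end in the same pose.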

To answer your question, first we need to distinguish between:

  • Game-world actions by the player within the game-world … “what you perceive that you are doing.”


  • Game animations … “what you actually observe with your eyes and other senses. (Also: the movements that you make with your fingers, hands, and body against physical devices.)”

Game programmers construct elaborate computer simulations of their game-world (which are, in fact, “the entire manifestation” of that world …). They write logic which somehow maps “every physical thing that a player can physically do … touch the screen, tilt the machine, shake the thing” … into a corresponding event within that [purely imaginary] world. Most of these “game-world inputs,” of course, result in a “game-world output.”

And then …

… they determine what the game hardware should next display, for this-or-that ‘game-world output.’ That’s where “animation” comes in. The game engine first determines what the next [high-level] response ought to be, then the [lower-level] animation system (be it “2D” and/or “3D”) carries it out. During the course of any game, many such animations are taking place simultaneously.

In 3D games, “the game-world engine is very much ‘3D aware,’” in that it’s fully aware of the [game world] relationship of the various actors in [game world] 3D space, and the visual-effects subsystem [somehow …] knows how to translate this to the “purely-mechanical pixel space” that the happy user sees on his/her screen.

So, conceptually, there is a game layer, which creates and manages an entirely-virtual game world, and a display layer which manages the presentation on the hardware (and interprets, for the game layer, the user-interface actions that you generate). The two layers are loosely connected.
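A toy sketch of those two loosely-coupled layers, in Python; the class names, the `"whistle"` event, and the rendered string are all made up for illustration, not from any real engine:

```python
class GameLayer:
    """Owns the entirely-virtual game world and its simulation."""
    def __init__(self):
        self.horse_pos = (0.0, 0.0)
        self.horse_target = None

    def update(self, dt, input_events):
        # map raw device inputs into game-world actions,
        # then advance the simulation
        if "whistle" in input_events:
            self.horse_target = "player"
        # ... move actors, run AI, etc.

class DisplayLayer:
    """Only reads game state and turns it into something to show."""
    def render(self, game):
        # translate game-world state into output; no game logic here
        return f"drawing horse at {game.horse_pos}"

game = GameLayer()
display = DisplayLayer()
game.update(dt=0.016, input_events={"whistle"})
frame = display.render(game)
```

The point of the split is that the display layer never mutates the world; it can be swapped out (2D, 3D, even text) without touching the simulation.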

Regarding the programming side of things: at first it would be easier to think of the horse as a cube (or an Empty, in Blender’s terminology), because it simplifies development. Once that works, model loading and animation can be attached to the main program as extras.

Then for the horse’s animations: in simple terms it should have three, idle / runningslow / runningfast. Your code will decide which animation to play based on special rules you define (e.g. how you want the animations to play and when).
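One common rule is simply picking the clip from the horse’s current speed. A sketch in Python, where the threshold values are made-up tuning numbers you would adjust for your game:

```python
def pick_animation(speed, slow_threshold=0.1, fast_threshold=4.0):
    """Choose which clip to play from the horse's current speed
    (thresholds are illustrative tuning values)."""
    if speed < slow_threshold:
        return "idle"
    if speed < fast_threshold:
        return "runningslow"
    return "runningfast"
```

Engines usually wrap this idea in an animation state machine (e.g. Godot’s AnimationTree), which also blends between the clips instead of switching abruptly.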

I’d like to respond to Jim’s “hidden” off-topic comment (which I really don’t think needed to be voted off the island …) and say that … “an ordinary video compositing tool, which should be common to the broadcast industry, should be able to do this sort of thing directly,” i.e. “in hardware.” Yes, today it would use game-like logic. But then, if people know that you are “on location,” is “a virtual studio set” really going to be convincing, or, necessary? I’m really not convinced that you need to go to the trouble.

If the hardware to do it is not available, then what does one do? (i.e. in software) (I’ve been kind of curious about this myself for a long time, actually… :slight_smile: )