Controlling Render Order of Elements in a 2D Scene

I’m working on a proof-of-concept for a 2D isometric game (like the old Black Isle CRPGs on the Infinity engine) using the BGE, and I’m now to the point where I’ve hit a wall concerning how to set up and layer various game assets so that they give the illusion of the characters travelling around and behind different objects and structures in a 2D scene.

So far, I’ve created a scene in 3D and rendered out both a normal image of it and a z-depth map image of it, but that’s where my knowledge ends right now. I’m not even sure whether I’m on the right track with that approach.

What I’m stuck on now is:

  1. How & when to mask off certain elements of a scene in order to control render order.
  2. How to control render order of a 2D scene via the BGE logic/Python.

I could be going about this the wrong way for all I know; so the more information, techniques and suggestions I can get, the better.


Well, I’m not really experienced with 2D games in the BGE, but I would say you can control it via the Z position of objects: objects with a higher Z position will have a higher render priority. An alternative would be to adjust the Z depth of the materials; I’m not sure whether that works as expected, but it might be worth a try!
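As a sketch of that Z-priority idea (a hypothetical helper, not BGE API — the function name and scale factor are made up; it assumes a top-down view where sprites lower on screen should draw in front):

```python
def depth_z(y, scale=0.001):
    """Map a sprite's Y coordinate to a Z value so that objects lower
    on screen (smaller Y) end up with a higher Z, i.e. a higher
    render priority. 'scale' keeps the Z offsets small."""
    return -y * scale
```

In a BGE Python controller you would then set something like `obj.worldPosition.z = depth_z(obj.worldPosition.y)` each frame.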

As a guess, are you trying to do an isometric game traditionally (i.e. via setting up sprites and things looking down on the Z-axis)? I would recommend basically building the scene in 3D, and just using an orthographic camera. With an orthographic camera, it would basically look 2D.

Yeah, that’s what I’m going for; doing it the traditional way, like the old games. Partly to save on memory usage, partly because of the level of detail I can fit into a 2D image, compared to a 3D scene.

I started out that way, but my intent is to learn how to create a traditional 2D isometric game, rather than fake it by taking the “easy” way out using 3D and an orthographic view point.

Indeed, the orthographic view is the way to go; it looks entirely 2D. With the orthographic setup, you can also use the world mist for depth by moving the 2D sprite objects closer to and further from the camera. Another advantage is realistic hit boxes and 3D physics in a “faked” 2D space.
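One way to sketch the “move sprites closer to/further from the camera” part (a hypothetical helper, not BGE API; in the BGE you would feed it the camera’s worldPosition and its view axis):

```python
def layer_position(cam_pos, view_dir, distance):
    """Return the world position 'distance' units in front of the
    camera along its view direction, for placing a sprite plane on a
    given depth layer (further planes pick up more world mist)."""
    return tuple(c + d * distance for c, d in zip(cam_pos, view_dir))
```

With an orthographic camera the sprite stays the same size on screen no matter which layer it sits on, so only the mist and the draw order change.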


EDIT: this was written before the prior post.

Basically, this leaves you with stacking planes on top of each other or using draw functions to draw and move everything. You might want to check out an engine designed specifically for 2D games, like GameMaker, Stencyl, or Construct. The Blender Game Engine was built around the idea of making games in 3D (in part because it’s part of a 3D program), and while some users like SolarLune have managed to give their BGE projects a very 2D look, they have still essentially modeled their levels in 3D to an extent.

Also, the latest version of Unity now has a dedicated 2D engine with a 2D physics engine, though I’m not sure how they make it work unless they’ve created a separate 2D workspace that replaces the normal 3D one. It’s a lot newer than the systems used by the engines I mentioned above, so it may or may not have everything found in engines designed for 2D from the ground up.

That sounds like a plausible solution. What Python and/or logic bricks would I need to pull that off?

Plane-stacking/masking is how I figured I’d do it, assuming I’d be able to control the render priority of each one via z-depth data from the original 3D image.

The developers at Obsidian Entertainment are building Project Eternity with Unity, and that’s a fully 2D isometric game. Thing is, their software developers have written a lot of custom code to extend and customize the engine’s 2D capabilities. I have no idea what they have and haven’t done to the engine to get it to do what they need and want it to.

If you’re looking “overhead” at the scene, with X and Y representing left-right and up-down, Z should represent depth. If the camera’s orthographic, then you can’t visibly see the depth change, other than how the objects are layered when on top of each other. If you want to change the depth, or a particular coordinate, you can easily do so just by using Python to alter an object’s worldPosition vector:

from bge import logic

cont = logic.getCurrentController()
obj = cont.owner

# Set the object's Z coordinate to 5, leaving the other two axes unchanged.
obj.worldPosition.z = 5

Just put this code in a text file, and use that text file in a Python controller attached to an Always sensor. Beyond this, you should probably think about your design in particular (how and why you want to control the depth of objects).

As a side note, you might have a tough time with this if you ever have to deal with height (i.e. larger objects that are more than a single object high). I would probably recommend just going with 3D drawn like 2D, or looking up how to do this in a general sense with a 2D engine, rather than BGE-specific.

You want to simulate a 3D scene within a 2D environment that runs in a 3D environment.

To me this sounds a bit strange.

All these render-order “tricks” were invented because of the missing third coordinate in a 2D world. That is not the case in a 3D world, so you do not need to care about render order; it is there already. You simply place the objects where they should be.

It will be much easier to create a 3D world that looks like a 2D scene, but it will not save memory, nor will it be much easier than a 3D game.

I’m assuming that would function the same way if I did the following:

  • Created 3D elements and rendered out 2D images of them from the proper perspective.
  • Created a plane for each element and applied the appropriate 2D image to it as a texture/material.
  • Used that segment of code to adjust the world position of each plane so they stacked in the correct order and place.
  • Created an orthographic camera and pointed it directly at the planes, so the space between planes is hidden.
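To sketch how z-depth data could drive that stacking (the function name and spacing are made up for illustration, not BGE API; it assumes depth samples where a smaller value means nearer to the camera):

```python
def stack_planes(depths, spacing=0.1):
    """Given plane-name -> depth sample (smaller = nearer to camera),
    return plane-name -> Z so nearer planes sit at a higher Z and
    therefore render in front under an orthographic camera."""
    order = sorted(depths, key=depths.get, reverse=True)  # furthest first
    return {name: i * spacing for i, name in enumerate(order)}
```

Each plane’s `worldPosition.z` would then be set from the returned dict, giving the stacking order the z-depth map implies.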