What rendering algorithm does Unreal Engine use?

…like, with Quake, it used BSP trees because the game world was composed of walls, one in front of another. With UE, where the world can simply be…OPEN, is there any rendering algorithm possible, other than simply drawing back-to-front, i.e. the painter’s algorithm? How would you do that when the scenery can be anything? (Well, raytracing I suppose, which just finds the nearest surface per pixel…)

And also, what algorithms do the other engines like Godot or Armory use? Anything different from UE?

“Deferred rendering” is what I think you are looking for. It’s the most common nowadays due to its efficiency at rendering many lights. Engines built directly on OpenGL more commonly use “forward rendering”.

Not gonna explain it here since there are other posts by people who know the technicalities much better.

Per-fragment shading is the most common method of computing lighting on surfaces. Pretty much just research how 3D is rendered in real time and you will find plenty of information.

Ummm … That’s a pretty huge topic. :slight_smile: Lots has already been written about the techniques used by popular 3D real-time engines, both older and newer.

From the “Peanuts” comic strip: “Explain World War II. Use both sides of paper if necessary.”

I don’t have 100% perfect knowledge of this, I might be wrong here, but the general thinking is this.

The graphics card has a depth buffer, and with it, it knows exactly when to plot a pixel. For example, in the image, the plane in front of the camera hides everything: say it sets every pixel of the screen to depth 0. When the renderer later reaches the first sphere, it finds that its fragments sit at a depth of about 10, and since the stored 0 is less than 10, it discards them completely.

The most compute-intensive part of rendering is calculating the pixel in the pixel shader (texture coordinates + lighting + specular + shadows + SSAO + parallax mapping + etc.), but querying a pixel’s depth is not a problem at all.
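The depth-test idea above can be sketched in a few lines (a toy software model, not how a real GPU is implemented): a fragment is written only if it is nearer than what is already stored, so occluded fragments skip the expensive shading entirely.

```python
# Toy depth-buffer sketch: smaller depth = closer to the camera.
import math

def try_write_pixel(depth_buffer, frame_buffer, x, y, depth, color):
    """Write the pixel only if it is closer than what is already stored."""
    if depth < depth_buffer[y][x]:
        depth_buffer[y][x] = depth
        frame_buffer[y][x] = color
        return True
    return False  # occluded: the expensive pixel shading is skipped

W, H = 4, 4
depth_buffer = [[math.inf] * W for _ in range(H)]
frame_buffer = [[None] * W for _ in range(H)]

# A plane at depth 0 fills the screen first...
for y in range(H):
    for x in range(W):
        try_write_pixel(depth_buffer, frame_buffer, x, y, 0.0, "plane")

# ...so a sphere fragment at depth 10 fails the depth test and is rejected.
drawn = try_write_pixel(depth_buffer, frame_buffer, 1, 1, 10.0, "sphere")
print(drawn)  # False
```

(Real hardware does this per fragment before or after shading, with configurable comparison functions, but the comparison itself is this cheap.)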

Some practical knowledge on the depth buffer is here:


Another important thing to consider is whether it actually makes sense to try to draw all of these spheres. Your scene might have about 5,000 spheres, and you want to avoid doing a 5,000-iteration foreach loop. So the next idea is a camera visibility test: take the bounding volume of each sphere and check whether it is within the camera frustum, which most often comes down to a single cheap intersection test.
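A bounding-sphere vs. frustum test can be sketched like this (names and the plane layout are made up for illustration, not any engine’s API). The frustum is six inward-facing planes; a sphere is culled if it lies fully behind any one of them.

```python
# Sphere-vs-frustum culling sketch. Each plane is ((nx, ny, nz), d) with an
# inward-pointing normal, plane equation n·p + d = 0.

def sphere_in_frustum(center, radius, planes):
    for (nx, ny, nz), d in planes:
        dist = nx * center[0] + ny * center[1] + nz * center[2] + d
        if dist < -radius:        # fully behind this plane -> culled
            return False
    return True                   # inside or intersecting every plane

# A toy "frustum": the axis-aligned box -10..10 on each axis,
# expressed as six inward-facing planes.
box_planes = [
    (( 1, 0, 0), 10), ((-1, 0, 0), 10),
    (( 0, 1, 0), 10), (( 0, -1, 0), 10),
    (( 0, 0, 1), 10), (( 0, 0, -1), 10),
]

print(sphere_in_frustum((0, 0, 0), 1, box_planes))   # True  - inside
print(sphere_in_frustum((50, 0, 0), 1, box_planes))  # False - far outside
```

A real frustum’s planes aren’t axis-aligned, but the per-sphere cost is the same: six dot products at worst.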

But since scenes in 3D graphics can get insane, you might end up having about 50,000 spheres in your scene. You really need to avoid doing 50,000 visibility tests. So you need to organize the objects in a far better way than a flat linear list. For this reason there are structures like octrees and BVH trees, which serve more or less the same purpose.


And this means the hierarchy can do the same job in only about 1,000 intersection tests, which is far more efficient. To understand better why this is important you can look at this.

Unreal has both a Deferred and a Forward renderer. Forward is a bit faster, but also has a lot of limitations that Deferred doesn’t.
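The cost difference between the two can be sketched with back-of-the-envelope arithmetic (toy numbers, not UE’s actual pipeline): forward shading runs the lighting loop for every rasterized fragment, including ones that are later overdrawn, while deferred first resolves visibility into a G-buffer and then lights each screen pixel exactly once.

```python
# Toy cost model for forward vs. deferred shading with many lights.
PIXELS = 1000      # screen pixels
OVERDRAW = 4       # each pixel covered by 4 overlapping surfaces
LIGHTS = 50

# Forward: every fragment of every surface runs the full lighting loop.
forward_cost = PIXELS * OVERDRAW * LIGHTS

# Deferred: geometry pass writes cheap G-buffer entries per fragment,
# lighting pass then shades only the surviving pixel per screen position.
geometry_cost = PIXELS * OVERDRAW
lighting_cost = PIXELS * LIGHTS
deferred_cost = geometry_cost + lighting_cost

print(forward_cost, deferred_cost)   # 200000 54000
```

(Early-Z and other tricks narrow the gap for forward renderers, but the scaling with light count is the reason deferred became the default for light-heavy scenes.)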

But your question really seems to be about how it does culling, more so than how it renders.

Both use a Hierarchical Z-Buffer, and there’s an option to use the Z-buffer for occlusion culling (‘don’t render things that are completely behind other things’), which can under some circumstances be faster than its regular occlusion culling method, which uses the bounding sphere of an object to determine whether or not to actually render it. It essentially does a quick(ish) fake render on the CPU that just renders the bounding spheres of everything inside the camera frustum, checks which of those are actually visible to the camera, and then sends that list of objects to the GPU to actually be rendered (this is a really abstract description of what’s going on, so not wholly accurate, don’t @ me).

This can actually be tricky to manage for outdoor stuff: if, for instance, you have a short, wide object (like a model of a strip mall) that is fully occluded, the radius of its bounding sphere can still be large enough that the sphere tests as visible, and you’re suddenly forced to waste time rendering the thing.
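The hierarchical-Z idea itself can be sketched in one dimension (a toy model, not Unreal’s implementation): collapse the depth buffer into tiles that each remember the *farthest* depth they contain, and an object whose nearest point is farther than every covering tile’s farthest depth cannot possibly show through.

```python
# 1-D hierarchical-Z occlusion sketch: smaller depth = closer to the camera.

def build_max_mip(depth, tile):
    """Collapse a depth buffer into tiles, keeping the max (farthest) depth."""
    return [max(depth[i:i + tile]) for i in range(0, len(depth), tile)]

def occluded(mip, tile, x0, x1, nearest_depth):
    """Object covering pixels [x0, x1) is hidden if every covering tile's
    farthest stored depth is still nearer than the object's nearest point."""
    for t in range(x0 // tile, (x1 - 1) // tile + 1):
        if mip[t] >= nearest_depth:   # tile might show something this deep
            return False
    return True

depth = [1.0] * 16             # a wall at depth 1 fills the whole buffer
mip = build_max_mip(depth, 4)  # -> [1.0, 1.0, 1.0, 1.0]

print(occluded(mip, 4, 2, 10, 5.0))   # True  - object behind the wall
print(occluded(mip, 4, 2, 10, 0.5))   # False - object in front of the wall
```

The point of the hierarchy is that one tile comparison stands in for many per-pixel comparisons, the same way the bounding-sphere test stands in for per-triangle tests.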

Godot only does frustum culling for 3D rendering (at least for now, or unless you roll your own culling system), which means it renders everything in front of the camera, even objects the camera can’t see because they’re hidden behind other geometry.