Post-Processing System Limitations (what could be better)

Hello!
Today I decided to write a little post about the major limitations of the BGE post-processing system. It should give developers some hints about the aspects that need extra attention. Why is this important? Because nowadays most modern game engines do something like 95% of their shading in post-processing, with the basic render pass mostly used only for G-buffers and translucency. Why wouldn’t we do at least something similar in BGE? Even if we don’t calculate lighting in post-process filters, there are still many good things we could do. Some of them are:

  • Bloom
  • SSAO
  • SSRR (local reflections)
  • Atmospheric effects and scattering
  • Eye adaptation and high light ranges

So let’s go through each of the components and what they require.
SSAO - we can already get this working nicely. A 16-bit buffer could improve quality, and multiple render targets with generated mipmaps would let us blur it better with minimal performance loss. But those are minor improvements that aren’t essential, so overall I’d say the BGE post-process system is good enough for screen-space ambient occlusion.
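
To make this concrete, here is a minimal depth-only SSAO filter of the kind that already works today. The bgl_* names are BGE’s real built-in filter uniforms; the kernel size, radius and bias below are just illustrative guesses, not tuned values:

```glsl
// Minimal depth-only SSAO sketch for a BGE 2D filter (GLSL 1.20).
// The bgl_* uniforms are BGE's built-in filter inputs; the radius
// and bias below are illustrative guesses, not tuned values.
uniform sampler2D bgl_RenderedTexture;
uniform sampler2D bgl_DepthTexture;
uniform float bgl_RenderedTextureWidth;
uniform float bgl_RenderedTextureHeight;

void main()
{
    vec2 uv = gl_TexCoord[0].st;
    vec2 texel = vec2(1.0 / bgl_RenderedTextureWidth,
                      1.0 / bgl_RenderedTextureHeight);
    float center = texture2D(bgl_DepthTexture, uv).r;

    // 5x5 neighbourhood: neighbours noticeably closer to the camera
    // than this pixel are treated as occluders.
    float occlusion = 0.0;
    for (int x = -2; x <= 2; x++) {
        for (int y = -2; y <= 2; y++) {
            float d = texture2D(bgl_DepthTexture,
                                uv + vec2(x, y) * texel * 4.0).r;
            if (center - d > 0.0005)
                occlusion += 1.0;
        }
    }
    occlusion /= 25.0;

    vec4 color = texture2D(bgl_RenderedTexture, uv);
    gl_FragColor = vec4(color.rgb * (1.0 - 0.8 * occlusion), color.a);
}
```
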
SSRR - although it’s possible to do SSRR in BGE post-process filters, there is a big disadvantage: the 8-bit normal and depth buffers are not accurate enough, so the reflections become jittered and distorted. The quality is low. Again, 16-bit is on the list of requirements.
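
For reference, a heavily simplified sketch of the screen-space ray march at the core of SSRR, just to show where low precision hurts. A real filter derives the reflection direction from a normal buffer, which the stock filter system doesn’t expose, so reflDir below is a pure placeholder:

```glsl
// Heavily simplified screen-space reflection march (GLSL 1.20), just to
// show where 8-bit precision hurts. A real SSRR filter derives reflDir
// from a normal buffer, which stock BGE filters don't expose, so the
// direction below is a pure placeholder.
uniform sampler2D bgl_RenderedTexture;
uniform sampler2D bgl_DepthTexture;

void main()
{
    vec2 uv = gl_TexCoord[0].st;
    float depth = texture2D(bgl_DepthTexture, uv).r;

    vec3 reflDir = vec3(0.0, 0.02, 0.001);  // placeholder reflection ray
    vec3 pos = vec3(uv, depth);
    vec4 hit = texture2D(bgl_RenderedTexture, uv);  // fallback: no hit

    for (int i = 0; i < 32; i++) {
        pos += reflDir;
        float sceneDepth = texture2D(bgl_DepthTexture, pos.xy).r;
        // With an 8-bit buffer, sceneDepth is quantised to 256 levels,
        // so this comparison flips erratically and the hit jitters.
        if (pos.z > sceneDepth) {
            hit = texture2D(bgl_RenderedTexture, pos.xy);
            break;
        }
    }
    gl_FragColor = hit;
}
```
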
Eye adaptation - the idea is this: the sun is something like 1000 times brighter than a light bulb. Yet when you’re indoors the bulb doesn’t seem dark - it clearly gives light - and when you’re outdoors the sun doesn’t blind you and wash everything out to white. Eye adaptation does this. It’s already possible to fake the effect (which, admittedly, requires some tweaking of numbers), but there is a huge issue: it produces banding. And banding is very, very bad. What’s the solution? A 16-bit bgl_RenderedTexture would let us tonemap light sources in the 0.0 to 256.0 brightness range with no banding or other issues. It would also be a good idea to separate bloom into its own pass for a better, more efficient 2-pass blur - which requires multiple render target support.
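
A minimal sketch of such a tonemap filter, assuming a hypothetical `exposure` uniform bound to a float game property the way BGE 2D filters allow. Fed an 8-bit bgl_RenderedTexture, stretching its 256 steps per channel like this is exactly where the banding comes from; with a 16-bit buffer the same filter would be smooth:

```glsl
// Minimal exposure/tonemap sketch (GLSL 1.20). "exposure" is a
// hypothetical custom uniform, bound to a float game property the way
// BGE 2D filters allow. With an 8-bit bgl_RenderedTexture the input
// only has 256 steps per channel, which is where the banding comes from.
uniform sampler2D bgl_RenderedTexture;
uniform float exposure;

void main()
{
    vec3 hdr = texture2D(bgl_RenderedTexture, gl_TexCoord[0].st).rgb;
    // Exponential tonemap: maps [0, inf) into [0, 1) smoothly.
    vec3 mapped = vec3(1.0) - exp(-hdr * exposure);
    gl_FragColor = vec4(mapped, 1.0);
}
```
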
Atmospheric effects - this actually covers a lot: microscopic dust particle simulation (which is what godrays are), physical fog, etc. And here we hit the same problem - we can’t use multiple render targets, so we end up with either a low-quality filter or one that eats all the resources. HDR would also be great here, because it tells us which parts of the screen contain strong light and can therefore drive godrays and similar effects.
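
As a rough sketch, here is the classic radial-blur approximation of godrays. lightPos (the light’s position in screen UV space) is a hypothetical uniform that would have to be fed in from Python, and the near-white threshold is an LDR workaround for exactly the problem described above - without HDR we can only guess which pixels are strong light:

```glsl
// Rough radial-blur godrays sketch (GLSL 1.20). "lightPos" (the light's
// position in screen UV space) is a hypothetical uniform fed from Python.
// The near-white threshold is an LDR workaround: without HDR we can only
// guess which pixels are actually strong light.
uniform sampler2D bgl_RenderedTexture;
uniform vec2 lightPos;

void main()
{
    vec2 uv = gl_TexCoord[0].st;
    vec2 delta = (uv - lightPos) / 16.0;  // 16 steps toward the light
    vec3 rays = vec3(0.0);
    float decay = 1.0;

    for (int i = 0; i < 16; i++) {
        uv -= delta;
        vec3 s = texture2D(bgl_RenderedTexture, uv).rgb;
        s *= step(0.95, dot(s, vec3(0.333)));  // keep near-white pixels only
        rays += s * decay;
        decay *= 0.95;
    }
    vec3 base = texture2D(bgl_RenderedTexture, gl_TexCoord[0].st).rgb;
    gl_FragColor = vec4(base + rays * 0.05, 1.0);
}
```
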
Eye adaptation (again) - on the implementation side, all it needs is a 16-bit color buffer (bgl_RenderedTexture); the user can then apply a tonemap filter on top. Several ways of measuring luminance are also essential. Sampling a few pixels on screen won’t always work (it usually does, but it behaves badly in detailed scenes). The best way is to use the LOD levels (mipmaps) of bgl_RenderedTexture: the last mipmap (1x1) contains the average luminosity of the whole scene, which is a very accurate measure.
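
Here is what that mipmap trick would look like, assuming two things BGE doesn’t currently give us: mipmaps generated for bgl_RenderedTexture and the GL_ARB_shader_texture_lod extension in fragment shaders:

```glsl
// Sketch of the mipmap-based average luminance (GLSL 1.20). It assumes
// two things BGE does not currently provide: mipmaps generated for
// bgl_RenderedTexture and the GL_ARB_shader_texture_lod extension.
#extension GL_ARB_shader_texture_lod : enable
uniform sampler2D bgl_RenderedTexture;

void main()
{
    // A high enough LOD (here 10, i.e. 1x1 for a 1024-wide buffer)
    // returns the average colour of the whole frame in a single fetch.
    vec3 avg = texture2DLod(bgl_RenderedTexture, vec2(0.5), 10.0).rgb;
    float avgLum = dot(avg, vec3(0.2126, 0.7152, 0.0722));

    vec3 color = texture2D(bgl_RenderedTexture, gl_TexCoord[0].st).rgb;
    // Scale exposure toward a mid-grey target of 0.5 (illustrative).
    gl_FragColor = vec4(color * (0.5 / max(avgLum, 0.001)), 1.0);
}
```
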

As you can see, most of these effects are missing the same few features.
The biggest, most important missing feature is 16-bit color depth for HDR rendering. Currently we can’t have sunlight that is correct relative to light bulbs: if we set the bulbs low we won’t see them, but if we set the sun high it makes everything white and washed out. A correct light range needs more precision, and 16 bits per channel give 256 times more levels than 8 bits (65,536 instead of 256). With a 16-bit image we could nicely render a sun at 256.0 intensity next to a 1.0 intensity bulb, and then have that same 1.0 bulb appear bright when walking into a room the sun doesn’t reach.
The second problem is render targets. The good 2-pass blur method (which takes 2x the single-axis sample count instead of the single-axis count squared, so a 5-sample kernel costs 10 samples instead of 25) needs its second pass to read from a separate sampler. And, um… for that we need a separate render target to store the intermediate result in. A render target would also be useful for erasing the darker parts and keeping only pixels above a certain brightness threshold (e.g. discard everything below 2.0, or better still, check the current exposure to decide what to discard).
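
For clarity, here is one axis of that separable blur: 5 horizontal taps with binomial weights. Running the same kernel vertically as a second pass gives the 10-samples-instead-of-25 saving, but that second pass must read the first pass’s output - which is precisely the separate render target the BGE filter chain lacks (every filter only ever sees the original bgl_RenderedTexture):

```glsl
// One axis of the separable blur (GLSL 1.20): 5 horizontal taps with
// binomial weights 1 4 6 4 1 (normalised by 16). A second, vertical
// pass over this pass's output completes the blur - and storing that
// intermediate image is exactly what needs a separate render target.
uniform sampler2D bgl_RenderedTexture;
uniform float bgl_RenderedTextureWidth;

void main()
{
    vec2 uv = gl_TexCoord[0].st;
    float texel = 1.0 / bgl_RenderedTextureWidth;

    vec3 sum = texture2D(bgl_RenderedTexture, uv).rgb * 0.375;
    sum += texture2D(bgl_RenderedTexture, uv + vec2(texel, 0.0)).rgb * 0.25;
    sum += texture2D(bgl_RenderedTexture, uv - vec2(texel, 0.0)).rgb * 0.25;
    sum += texture2D(bgl_RenderedTexture, uv + vec2(2.0 * texel, 0.0)).rgb * 0.0625;
    sum += texture2D(bgl_RenderedTexture, uv - vec2(2.0 * texel, 0.0)).rgb * 0.0625;

    gl_FragColor = vec4(sum, 1.0);
}
```
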
Of course, there are a few other problems (like no mipmaps available for post-process images) that could be fixed, but those are minor.

It’s just my opinion, of course, but it is a fact that all of this is essential for high-quality shading. Without it, your game will either look bad (unless you’re making it in a different graphics style), lag a lot on medium/low-end computers, or force you to say goodbye to high-res textures and mid-poly models.
However, with all of this implemented, martinsh and I could work on some cool filters that would let you polish up your games! :slight_smile:

I agree. So far BGE graphics look like a PS2, but with HDR added they’ll start to look like a PS3.

If the BGE devs did everything I mentioned in my post, we could get UE4-level graphics or better ourselves, simply by writing some 2D filters :D

Well yeah, and the only thing left would be deferred shading - then no more slow rasterizer. :smiley: You see, the BGE devs have gone off to make an addon for Panda3D, so this is more of a request and discussion for the UPBGE devs. :slight_smile: But in my opinion: make your BGE game with UPBGE (since BGE is frozen - dead, really, and users should switch to UPBGE), then use the money you make to get a paid dev to add to or enhance the engine, hehe. :slight_smile:

Would your suggestions help keep the framerate at 60 in open-world games with diggable terrain? Because I like the idea of visual programming for open-world games with diggable terrain.