render only certain objects at lower resolutions

Is it possible to render most of the scene normally and have certain objects render at a lower resolution? I’m guessing this would be done with some sort of shader.

use a LoD for it?

Look up Bueraki (a BGE game); its source files will teach you how to use a LOD system for terrain that can also include changes in texture. This method uses the native camera frustum clipping and the built-in LOD system.

If that’s a bit much (it involves a bit of Python, though knowing some of your work that shouldn’t be a problem for you), try looking up ThaTimster’s video on terrain LOD. It uses the same method as Bueraki, except with no Python, I’m pretty sure. I don’t have time right now, but if you need me to show you or explain it I’ll be more than happy to send a .blend. I’m actually working right now on making my game use texture LOD and somehow implementing BPR’s proof-of-concept physics LOD method, but I’m waiting for the latest build to see what’s up. Let me know.


Do you guys even know me? I’m not that dumb. I have been using the BGE for several years and I know very well how to make LODs.

From your question one would think you’re trying to improve performance, and since lowering the resolution wouldn’t really help much with that, all the suggestions point to LOD. Lowering the resolution of an object would mean lowering the resolution of that part of the screen, something only the render engine could do (unless you implement it yourself); it wouldn’t be very effective and would have very few use cases, so there is no way the BGE has it. An alternative would be to simulate that effect with a shader, which you can do, but that will actually reduce the performance of the game.
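For what it’s worth, that kind of shader fake usually just snaps each texture lookup to a coarser grid so neighbouring pixels sample the same texel. The coordinate math, written out in Python for illustration (in practice it would live in a GLSL 2D filter, and the block count here is an arbitrary example):

```python
import math

def pixelate_uv(u, v, blocks):
    """Snap a texture coordinate to a coarse grid of `blocks` cells per axis.

    Every pixel inside the same cell ends up sampling the same texel,
    which is what makes the object look lower-resolution on screen.
    """
    return (math.floor(u * blocks) / blocks,
            math.floor(v * blocks) / blocks)
```

Note that the GPU still shades every screen pixel of the object, which is why this look costs performance instead of saving it.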

So in short, use LOD. If the built-in LOD is not enough for you, implement your own in Python.
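A hand-rolled distance LOD is mostly threshold bookkeeping; a minimal sketch, where the distances and mesh names are made-up examples and the commented BGE calls show where it would plug in:

```python
# Pick a LOD mesh name from the distance to the camera.
# Thresholds and mesh names are hypothetical examples.
LOD_LEVELS = [
    (15.0, "Tree_high"),         # closer than 15 units: full detail
    (40.0, "Tree_mid"),          # 15-40 units: reduced detail
    (float("inf"), "Tree_low"),  # beyond 40 units: lowest detail
]

def pick_lod(distance):
    """Return the mesh name for the given camera distance."""
    for max_dist, mesh_name in LOD_LEVELS:
        if distance < max_dist:
            return mesh_name
    return LOD_LEVELS[-1][1]

# In the BGE you would run this from an Always sensor, e.g.:
#   own = bge.logic.getCurrentController().owner
#   cam = bge.logic.getCurrentScene().active_camera
#   own.replaceMesh(pick_lod(own.getDistanceTo(cam)))
```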

The objects would be isolated and drawn to an offscreen FBO. A shader would composite it into the finished scene later, upscaling the colour and depth information in some artistically preferable manner, performing depth testing, colour mixing, and generally reimplementing a small section of the fragment pipeline.
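As a toy illustration of that compositing step, here is the nearest-neighbour upscale plus depth test done on the CPU in Python; in practice this would be a GLSL fragment shader, and the list-of-lists buffers here are only a stand-in:

```python
def composite(full_color, full_depth, low_color, low_depth, scale):
    """Blend a low-resolution layer into the full-resolution scene.

    full_color/full_depth: 2D lists at full resolution.
    low_color/low_depth:   2D lists at 1/scale resolution.
    A low-res pixel wins wherever its upscaled depth is nearer
    (smaller depth value = closer to the camera).
    """
    h, w = len(full_color), len(full_color[0])
    out = [row[:] for row in full_color]
    for y in range(h):
        for x in range(w):
            ly, lx = y // scale, x // scale  # nearest-neighbour lookup
            if low_depth[ly][lx] < full_depth[y][x]:
                out[y][x] = low_color[ly][lx]
    return out
```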

Unless the object is particularly expensive to render per-pixel (for instance, raytraced volumetric data) this technique tends to increase the pixel-filling work the GPU has to do. It also interferes with typical methods for handling transparency and shadows.

It would be most troublesome to implement in Blender.

I’ve made this function before… but there’s still some weakness:

  • the custom render is always 1 frame behind… I’ve been looking for a solution for weeks but no result :frowning:
  • it only works in standalone mode

The left one is custom rendered, while the right one is the default render.

customTex2Dfilterstandalone.blend (566 KB)

Thanks, something like this is exactly what I needed.