Blender efficiency in large scenes?

Hi all,

I'm looking at the new Sintel model and it's flat-out gorgeous, but it's got me thinking about how the Durian team plans to handle the inevitably HUGE scenes they are going to be creating. I know they have a plan, I'm just wondering what it is. :slight_smile:

First and foremost, whether they use displacement maps or multi-res (the latter is the plan, right?), those models (especially the dragon) will be pushing millions upon millions of polygons. They are going to NEED some kind of LOD system, or, more appropriately, a good adaptive subdivision system.

Adaptive subdivision could reduce a model's polygon count per frame based on its distance from the camera, and, even better, could do things like render the camera-facing side of the model in full detail while completely culling all backfacing polys and polys not visible to the camera. This could be the single best optimization they could make for the project. I hope they are able to implement an adaptive poly system for their displacements and subdivs rather than having to rely on a far less efficient method like model-replacement LOD or something similar.
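Just to illustrate the distance idea (this is a toy sketch, not how any actual subsurf code works; `base_level` and `reference_dist` are made-up parameters):

```python
import math

def subdiv_level_for_distance(base_level, reference_dist, camera_dist, min_level=0):
    """Drop one subdivision level each time the object doubles its
    distance from the camera, since its screen size roughly halves."""
    if camera_dist <= reference_dist:
        return base_level
    drop = int(math.log2(camera_dist / reference_dist))
    return max(min_level, base_level - drop)

# e.g. a dragon that needs 6 levels at 10 units gets
# 5 at 20 units, 4 at 40 units, and so on
for dist in (10, 20, 40, 80, 160):
    print(dist, subdiv_level_for_distance(6, 10.0, dist))
```

Each level of Catmull-Clark roughly quadruples the face count, so dropping even one or two levels on distant characters is a big win.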

Another area of concern is texture maps. If they are rendering at 4K they are going to need multiple HUGE maps per character, all of which have to be held in RAM at render time. Renderers like Mental Ray and RenderMan have their own tiled image formats that cache the maps on the hard drive, only pulling small chunks into RAM at a time. This can be a huge memory saver. But I think you need a bucket renderer for that to start with, don't you…? I'm not too technically savvy about the back end of renderers, but I don't think Blender's renderer is a bucket renderer at the moment, or is it?
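For what it's worth, the basic trick behind those tiled formats is simple enough to sketch. This is only a toy illustration of on-demand tile loading with an LRU cache; the `load_tile` callback and the tile size are placeholders, not any real renderer's API:

```python
from collections import OrderedDict

TILE = 64  # tile edge in pixels; real formats use similarly small tiles

class TileCache:
    """Keep only the most recently used tiles in RAM; everything else
    stays on disk until a shading sample actually needs it."""
    def __init__(self, load_tile, max_tiles=1024):
        self.load_tile = load_tile    # placeholder: reads one tile from disk
        self.max_tiles = max_tiles
        self.tiles = OrderedDict()    # (tx, ty) -> block of pixels

    def texel(self, x, y):
        key = (x // TILE, y // TILE)
        if key in self.tiles:
            self.tiles.move_to_end(key)          # mark as recently used
        else:
            if len(self.tiles) >= self.max_tiles:
                self.tiles.popitem(last=False)   # evict least recently used
            self.tiles[key] = self.load_tile(*key)
        return self.tiles[key][y % TILE][x % TILE]
```

A bucket renderer helps here because all the samples inside one bucket tend to hit the same few tiles, so the cache stays small.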

It sounds like they are pretty much covered concerning fur/hair rendering. BBB made sure fur could render in large amounts, but I'm curious to hear from people who have used it whether fur rendering is sufficiently optimized or not.

That's all I can think of for now. This is not any kind of slam against Blender's renderer; I'm just incredibly curious about what the Durian team's plans are to keep render times and RAM usage down.

Thanks!

Adaptive subdivision and LOD would possibly be the best options for reducing render time and memory usage, but making use of instances will probably play an even bigger part.
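The point of instancing is that many objects can share a single mesh datablock, so the heavy geometry is stored once no matter how many copies are in the scene. A minimal sketch with the current bpy Python API (the object name and grid placement are just for illustration):

```python
import bpy

# take whatever mesh the active object uses...
src = bpy.context.active_object

# ...and scatter 100 linked duplicates that all share src.data:
# the mesh is stored once, only the transforms differ
for i in range(100):
    inst = bpy.data.objects.new("tree.%03d" % i, src.data)
    inst.location = (i % 10 * 3.0, i // 10 * 3.0, 0.0)
    bpy.context.collection.objects.link(inst)
```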

Of course you're going to have huge texture maps, but not everything will be a texture map; you will also have procedural shaders, which, although they take some time to compute, are in my experience a lot more efficient than one large image map being dumped into RAM.
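A procedural is just a function evaluated at shade time, so there is no image to hold in RAM at all. A toy example of the idea (plain value noise, nothing to do with Blender's actual noise code):

```python
import math

def hash01(ix, iy):
    # cheap deterministic pseudo-random value in [0, 1) per lattice point
    h = math.sin(ix * 127.1 + iy * 311.7) * 43758.5453
    return h - math.floor(h)

def value_noise(u, v, scale=8.0):
    """Bilinear value noise: the whole 'texture' is this function."""
    x, y = u * scale, v * scale
    ix, iy = math.floor(x), math.floor(y)
    fx, fy = x - ix, y - iy
    fx = fx * fx * (3 - 2 * fx)   # smoothstep the blend weights
    fy = fy * fy * (3 - 2 * fy)
    a, b = hash01(ix, iy), hash01(ix + 1, iy)
    c, d = hash01(ix, iy + 1), hash01(ix + 1, iy + 1)
    return (a * (1 - fx) + b * fx) * (1 - fy) + (c * (1 - fx) + d * fx) * fy
```

The trade-off is exactly what you said: you pay per-sample compute instead of per-texel memory.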

Backface culling works for games because you don't need to calculate shadows, reflections, or refractions, all of which would reveal that the back faces are missing, so I don't think culling would be a good idea here.

I think what it comes down to is how well the Durian team manage their assets and scenes. If they optimize and model/texture/shade carefully, they will be just fine. After all, they will be using render layers as well, which means far fewer system resources than you may think.

With render layers it is now possible to render out each pass separately, and even different objects separately.

Even in the Blender viewport you can easily avoid problems like too many high-polygon models by using low-polygon proxy models, layers, and grouping.
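You can even script the "draw heavy stuff as a box" part. A rough sketch with the current bpy API; the 100k-poly threshold is an arbitrary number I picked:

```python
import bpy

# draw any mesh over 100k polygons as a bounding box in the
# viewport; it still renders at full detail
for obj in bpy.context.scene.objects:
    if obj.type == 'MESH' and len(obj.data.polygons) > 100000:
        obj.display_type = 'BOUNDS'
```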

Proxy objects :slight_smile:

The Durian wishlist talks about geometry bucketing, which is what a lot of renderers use to reduce memory usage.
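In case anyone is curious what that means in practice: the image is split into small tiles, and for each tile the renderer only needs the geometry whose screen-space bounds overlap it. A rough sketch of the mapping step (the screen-space bounds themselves would come from projecting each object, which I'm glossing over here):

```python
def buckets_for_object(bounds, image_w, image_h, bucket=32):
    """Given an object's screen-space bounding box
    (min_x, min_y, max_x, max_y in pixels), list the buckets it
    touches. While one bucket renders, only the objects mapped to
    it need their full geometry in memory."""
    min_x, min_y, max_x, max_y = bounds
    x0 = max(0, int(min_x)) // bucket
    y0 = max(0, int(min_y)) // bucket
    x1 = min(image_w - 1, int(max_x)) // bucket
    y1 = min(image_h - 1, int(max_y)) // bucket
    return [(bx, by) for by in range(y0, y1 + 1)
                     for bx in range(x0, x1 + 1)]
```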