Octree Resolution and Render Time

Hi, I’ve got a problem: I’m trying to render a fairly simple scene at 1280x1024, but the “filling octree” part takes an insanely long time. I’ve set the octree resolution to 512 (the fastest), but the render still doesn’t start within 30 minutes (I haven’t waited longer than that).

What settings could I lower to decrease the render time without compromising the result of the render?

Thanks

I think an octree resolution of 64 is the fastest?
In fact I render at 1:28 at 64 vs 3:00 at 512, so try lowering that to 64. Set your X and Y parts to 10 each if you’re on a dual or quad core, tell it you have 2x or 3x more threads than you really have, and use OSA 5 unless your metal starts to look terrible or you see artifacts.

All of this is just IMO.

Also try sizing it at 800x800, or about half the size you have, unless it’s going to print in a magazine. And the more lights you have using ray-traced shadows, the longer the render takes. Try setting just your main 1-3 lights to ray-traced shadows and let the others use the generic shadow buffer, but set the tolerance to 0.1 so the shadows look better.
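If you’d rather flip the shadow settings with a script instead of clicking through every lamp, here’s a rough sketch assuming the 2.5x-2.7x Python API and the Internal renderer; the lamp names in KEY_LIGHTS are just placeholders for your own main lights.

```python
# Rough sketch for Blender 2.5x-2.7x with the Internal renderer: keep ray
# shadows on the named key lights and switch every other spot lamp to buffer
# shadows. The names below are hypothetical placeholders.
import bpy

KEY_LIGHTS = {"Key", "Fill"}  # your own 1-3 main lamps go here

for lamp in bpy.data.lamps:
    if lamp.type != 'SPOT':
        continue  # only spot lamps support buffer shadows in the Internal renderer
    if lamp.name in KEY_LIGHTS:
        lamp.shadow_method = 'RAY_SHADOW'
    else:
        lamp.shadow_method = 'BUFFER_SHADOW'

scene = bpy.context.scene
scene.render.use_antialiasing = True
scene.render.antialiasing_samples = '5'  # OSA 5, as suggested above
```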

No, that doesn’t work. It’s so weird: it worked a second ago, even at 1280x1024, and that took half an hour. Now I can’t even render at 800x600. :frowning:

The best octree resolution depends on the scene, so you have to test different resolutions to find out which one is best for your specific scene. The octree resolution mostly affects the render time, not so much the octree build time.

Building the octree is another story. The only explanation for it taking that long to fill is that there are many more polygons in your scene than there used to be. Check whether you may have increased the subdivision-surface level to some insanely high number.

From my experience and from what I have read, the octree resolution depends on the scene. If you have a lot of vertices in the scene, a higher octree resolution will probably lower the rendering time.

I agree with ypoissant that you should test the render time with every octree resolution. I personally do it every time my renders take a long time. Sometimes a higher octree resolution solves it, but sometimes the effect is the opposite.

Thanks for your help, I found what the problem was: a really small hidden object with Subsurf set to 6. I changed it to 4 and now it works. :smiley:

Six is an insanely high Subsurf level, and four is still unusually high. I get good results at level two, rarely if ever needing to go up to level three. You might do some experiments to see whether the higher levels are warranted visually.
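If you ever need to hunt down that kind of culprit again, a quick sketch like this (2.5x+ Python API; the threshold of 3 is arbitrary) will list every object whose Subsurf render level looks suspicious:

```python
# List objects whose Subsurf modifier renders at a suspiciously high level.
import bpy

THRESHOLD = 3  # arbitrary cut-off; adjust to taste

for obj in bpy.data.objects:
    for mod in obj.modifiers:
        if mod.type == 'SUBSURF' and mod.render_levels > THRESHOLD:
            print("%s: Subsurf render level %d" % (obj.name, mod.render_levels))
```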

What’s best for really large scenes (e.g. 10-20,000 Blender units)? Is scaling the scene down the best way to speed up renders?

The octree resolution feels like a blunt instrument in these cases.

The size in Blender units, and thus scaling the scene, has nothing to do with it. It has more to do with how the objects are distributed in the scene than anything else. To understand what is going on, you need to understand what an octree is used for, so here is a crash course:

When raytracing a scene, the renderer needs a way to find which of all the polygons in the scene a ray intersects, and among those, which polygon is closest to the ray’s origin. The dumbest way to find that out is to test every single polygon in the scene against the ray. For scenes with a small number of polygons that works OK, but as the number of polygons increases, testing every polygon becomes less and less efficient. So renderers subdivide the whole scene space into smaller boxes, first decide which of the boxes the ray intersects, and then test only the polygons that are in those boxes. This subdivision is known as an acceleration structure.
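If it helps to see it schematically, here is a little sketch of the two approaches. It is not Blender’s renderer code; intersect() and grid.cells_along() are made-up helpers standing in for a real ray/polygon test and a real grid traversal.

```python
# Schematic sketch, not Blender code. intersect() and grid.cells_along()
# are hypothetical helpers.

def closest_hit_brute_force(ray, polygons):
    """Test the ray against every polygon in the scene: O(polygons) per ray."""
    best = None
    for poly in polygons:
        hit = intersect(ray, poly)            # hypothetical ray/polygon test
        if hit and (best is None or hit.distance < best.distance):
            best = hit
    return best

def closest_hit_with_grid(ray, grid):
    """Only test polygons stored in the cells the ray actually passes through."""
    best = None
    for cell in grid.cells_along(ray):        # only the cells the ray crosses, in ray order
        for poly in cell.polygons:            # usually a small subset of the scene
            hit = intersect(ray, poly)
            if hit and (best is None or hit.distance < best.distance):
                best = hit
        # A real traversal also stops early once a hit is found inside the
        # current cell, since no later cell can contain anything closer.
    return best
```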

There are several acceleration structure schemes, and the octree is one of them. The octree subdivides the whole scene into a grid of equal-sized boxes. An octree of size 64, for example, subdivides the scene into a 3D grid of 64 x 64 x 64 boxes, or cells.
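As a toy illustration (again, not Blender’s actual code), mapping a point to a cell of such a grid is just a matter of normalising its position inside the scene’s bounding box:

```python
# Self-contained illustration: which cell of a 64 x 64 x 64 grid spanning the
# scene's bounding box contains a given point?

def cell_index(point, box_min, box_max, resolution=64):
    """Return the (ix, iy, iz) cell that contains `point`."""
    index = []
    for p, lo, hi in zip(point, box_min, box_max):
        t = (p - lo) / (hi - lo)                      # normalise to 0..1
        i = min(int(t * resolution), resolution - 1)  # clamp the far edge into the last cell
        index.append(i)
    return tuple(index)

# Example: a scene spanning -10..10 BU on every axis.
print(cell_index((0.0, 5.0, -9.9), (-10, -10, -10), (10, 10, 10)))  # -> (32, 48, 0)
```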

The octree is relatively easy to implement, but it is a rather dumb acceleration structure: it does not adapt to the scene’s complexity. Testing a polygon takes time, but testing an octree cell takes time too. The art of setting the octree resolution is to find the right tradeoff between spending too much time testing cells and too much time testing the polygons inside them.
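A back-of-the-envelope way to think about that tradeoff, with made-up relative costs (this is my own illustration, not Blender’s actual cost model):

```python
# Per ray, you pay for every cell you step through plus every polygon you test
# inside those cells. The weights below are invented relative costs.

def cost_per_ray(cells_stepped, polys_tested, cell_cost=1.0, poly_cost=4.0):
    return cells_stepped * cell_cost + polys_tested * poly_cost

# Coarse grid: few cells stepped, but each holds many polygons.
print(cost_per_ray(cells_stepped=8, polys_tested=400))    # 1608.0
# Finer grid: more cells stepped, far fewer polygons per cell.
print(cost_per_ray(cells_stepped=120, polys_tested=30))   # 240.0
# Too fine: stepping through cells starts to dominate.
print(cost_per_ray(cells_stepped=5000, polys_tested=10))  # 5040.0
```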

The main problem with the octree is known as the “teapot in the stadium” problem, and it is a situation Blender artists should try to avoid, or at least minimize. It goes like this:

Suppose you have a relatively complex scene with, say, 100,000 polygons, but all the objects occupy the same neighbourhood. The polygons are then relatively well distributed among the octree cells; when the renderer hits a cell, there may be a few hundred polygons to test. All goes well and the render is relatively fast. Then you decide to add a floor that extends to 10 times the size of the scene. You have just created a problem: most of the cells are now empty, and a small number of cells hold a large share of the polygons. The render time increases dramatically. Another way to create the problem is to place an object, no matter how simple (it could be a single plane), say 10,000 BU away diagonally in the X, Y and Z dimensions. Now most of the objects sit in a single cell, and the render time is huge.
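If you want to see that effect in numbers, here is a small self-contained toy (not Blender code) that counts how vertices spread over a 64^3 grid before and after adding one far-away vertex:

```python
# Toy demonstration of the "teapot in the stadium" effect: once one tiny object
# sits far away, the grid has to stretch to cover it and nearly all of the
# detailed geometry collapses into a single cell.
import random

def cell_stats(points, resolution=64):
    """Return (number of non-empty cells, most points found in any one cell)."""
    lo = [min(p[a] for p in points) for a in range(3)]
    hi = [max(p[a] for p in points) for a in range(3)]
    cells = {}
    for p in points:
        idx = tuple(min(int((p[a] - lo[a]) / (hi[a] - lo[a]) * resolution),
                        resolution - 1) for a in range(3))
        cells[idx] = cells.get(idx, 0) + 1
    return len(cells), max(cells.values())

random.seed(0)
# 100 000 vertices clustered inside a 10 BU neighbourhood.
scene = [tuple(random.uniform(0.0, 10.0) for _ in range(3)) for _ in range(100000)]
print(cell_stats(scene))   # tens of thousands of cells used, only a few points in each

# Add a single vertex 10 000 BU away: now only 2 cells are used and
# all 100 000 clustered vertices land in the same one.
print(cell_stats(scene + [(10000.0, 10000.0, 10000.0)]))
```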

That’s all there is to it. I hope this clarifies why the octree resolution is dependent on the distribution of the polygons in the scene.

Thanks for explanation, ypoissant.

So the point is to minimize the number of empty cells in the imaginary lattice that bounds the whole scene?

For example: if I have only one sphere in my scene and raise the octree resolution from 128 to 512, the render time will be longer, because there will be many more empty cells in the imaginary lattice bounding the scene, so the renderer will be traversing a lot of empty cells in vain.

On the other hand: if I have only one cube in my scene and subdivide it 10 times, and I then raise the octree resolution, there will be more cells in the bounding lattice, all filled with mesh faces, so the computation will go faster?

Ok, this is a situation I get into often…

A large, low-polygon “backdrop” area with large textures.
The focal area of the scene (less than 10% of the area) with a large number of polygons…

You’ve explained why the octree sucks in this case…

Would render layers help?

e.g. render the lake and mountains as one render layer, and the foreground and its detail in another?

or is the octree shared for the scene?
If I split the scene out into foreground and background scenes and render them separately would that be best?

@REiKo Rhoemer: You got the idea right, but your single subdivided cube example is tricky. You still need to experiment with different octree resolutions to find the optimal one, and once you have found it, if you subdivide the cube further you may need to experiment again. Testing octree cells costs much less than testing polygons, but when there are millions of cells, it starts to add up. So the best compromise is not always obvious.
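To make that experimenting a little less tedious, you can at least time each test render from a script. I deliberately leave the octree setting itself to the render panel, since the exact Python name for it has moved around between Blender versions:

```python
# Minimal timing helper: set the octree resolution in the render panel, run
# this from Blender's text editor, note the time, then repeat for 64/128/256/512.
import time
import bpy

start = time.time()
bpy.ops.render.render()   # render the current frame
print("render took %.1f s" % (time.time() - start))
```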

@Michael W: The octree is built exclusively from whichever objects are being rendered. Objects that are hidden or turned off are not included in the octree. Rendering separately is the only sure way to get good performance. I don’t know about rendering in layers; I never tried that. It depends on the implementation, and it might work.

@Michael W: rendering in layers and using composite nodes to combine the layers is a huge timesaver. Not only does it save render time, but once you have the nodes set up, you can modify things without rerendering.

E.g.: suppose you want the background a bit brighter, or less blue. Add the appropriate node and it’s done instantly (well, almost instantly, but in much less time than it takes to render, since the node simply applies a filter to the render output rather than redoing all the render calculations).
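For anyone who hasn’t set this up before, a minimal node tree for that kind of tweak looks roughly like this in the 2.6x+ Python API (you can of course build the same thing by hand in the node editor):

```python
# Render-layer input -> Brightness/Contrast -> Composite output, so the layer
# can be brightened without rerendering anything.
import bpy

scene = bpy.context.scene
scene.use_nodes = True
tree = scene.node_tree
tree.nodes.clear()  # start from an empty compositor tree

layer = tree.nodes.new('CompositorNodeRLayers')        # the background render layer
adjust = tree.nodes.new('CompositorNodeBrightContrast')
output = tree.nodes.new('CompositorNodeComposite')

adjust.inputs['Bright'].default_value = 5.0            # tweak freely; no rerender needed

tree.links.new(layer.outputs['Image'], adjust.inputs['Image'])
tree.links.new(adjust.outputs['Image'], output.inputs['Image'])
```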

Thanks for the info, guys. Finally a useful thread :).