octree resolution?

Argh…

I’m rendering a bit of video… been working on it for over a week… doing low-quality renders because it was taking between 30 minutes and an hour and a half per frame.

Last night I upped the octree resolution to 512 and it’s now taking like 3 to 4 minutes per frame!!!

So what is the rule of thumb for octree resolution? Can someone explain it so that I really understand how to use it?

When the function was first implemented I tested it and it didn’t seem to do much… I know it has something to do with “scene size”, but since size is relative in 3D… how do you know whether you’ve got a large scene???

thx

Read z3r0_d’s post:
https://blenderartists.org/forum/viewtopic.php?t=29189&highlight=octree+resolution&sid=0d64c0a79417e81c73d30c01357760d2

A small scene can take longer to render with a high octree resolution than with a low one, because all those cubes have to be calculated. But a large scene will benefit from it. I don’t really know a lot about raytracing, but I think the cubes z3r0_d is talking about are a kind of bounding box for the objects in your scene. When a ray leaving the camera checks whether it hits something, it would normally have to test everything in the scene. It’s faster to first check which cubes the ray passes through, which are a lot fewer than the number of faces in a typical scene, and then only test the faces inside those cubes. So I think the right octree resolution depends on the number of faces visible to the camera.

Someone please correct me if I’m talking crap…
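If I’ve understood it right, the idea can be shown with a toy sketch like this (definitely not Blender’s actual code; I’m flattening everything to a 1D grid of cells, where a “face” is just a point on a line and a “ray” is an interval):

```python
# Toy sketch of why a spatial grid cuts down ray tests (not Blender's real code).
import random

random.seed(1)
faces = [random.uniform(0.0, 100.0) for _ in range(10000)]  # 10k "faces" in a 100-unit scene

resolution = 256                      # like the octree resolution setting
cell_size = 100.0 / resolution
grid = {}                             # cell index -> list of faces in that cell
for f in faces:
    grid.setdefault(int(f / cell_size), []).append(f)

def brute_force_hits(lo, hi):
    # Without a grid, a "ray" covering [lo, hi] must test every face in the scene.
    tests = len(faces)
    hits = [f for f in faces if lo <= f <= hi]
    return hits, tests

def grid_hits(lo, hi):
    # With a grid, only faces in the cells the ray actually crosses get tested.
    tests = 0
    hits = []
    for cell in range(int(lo / cell_size), int(hi / cell_size) + 1):
        for f in grid.get(cell, []):
            tests += 1
            if lo <= f <= hi:
                hits.append(f)
    return hits, tests

h1, t1 = brute_force_hits(40.0, 42.0)
h2, t2 = grid_hits(40.0, 42.0)
assert sorted(h1) == sorted(h2)   # same answer either way...
print(t1, t2)                     # ...but far fewer tests through the grid
```

The numbers I picked (scene size 100, a 2-unit ray) are made up; the point is just that the grid version does a few hundred tests instead of ten thousand.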

Edit: Another explanation of bounding boxes at the bottom of this page:
http://www.cgl.uwaterloo.ca/~mmwasile/cs688/project.html

It is yet another example of a “divide and conquer” algorithm, much like a binary search in a sorted array. In fact, it basically is a binary search algorithm.

In order to determine if a collision has occurred, the computer has to (potentially) compare against each and every object. That would take forever. So the computer looks in very large areas; selects one; looks in a similar subdivision of that area; selects one; and so on, thereby “zooming in” quickly on the actual set of objects that do need to be considered for collision.

The size of the objects, vs. the size of the cubes, needs to be well-matched.

The tradeoff is the familiar one: speed vs. space. But ‘chips are cheap.’
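That “zooming in” is easy to sketch. This is my own illustration, one axis only (a real octree halves all three axes at each step, giving eight children per cube):

```python
# Sketch of the binary-search-style descent: each step halves the region
# that could contain the point. Depth 8 reaches one of 2**8 = 256 cells.
def locate(point, lo=0.0, hi=100.0, depth=8):
    steps = 0
    while depth > 0:
        mid = (lo + hi) / 2.0
        if point < mid:
            hi = mid        # point is in the lower half; discard the upper
        else:
            lo = mid        # point is in the upper half; discard the lower
        depth -= 1
        steps += 1
    return (lo, hi), steps

cell, steps = locate(37.3)
# 8 halvings find the cell, instead of scanning 256 cells one by one
```

So reaching a leaf cube at resolution 256 costs 8 halvings per axis, which is why the lookup itself stays cheap even when the resolution goes up.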

Right I get it now.

This is the sentence that really explains it for me: “the size in the octree specifies the total amount of subdivision along one side, so 256 would mean that it is subdivided to be 256 cubes long along any axis”.

And since I was doing AO with very short distances, I could easily divide into more “cubes” without the renderer having to compare multiple cubes against one another.

But when you have a bigger scene with wide open spaces, that could mean that cast shadows cross a great many cubes if you use a spot light or the like with raytraced shadows.

I still think it must be possible to write a script that sets the octree size automatically, though it may not be perfect. At least I think I get it now… basically, shorter shadow rays with a higher octree resolution will make it quicker.
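Back-of-the-envelope on why that is (my own numbers and my own guess that the octree covers the scene’s bounding box, so treat it as a sketch):

```python
# Rough arithmetic: how many cells does a ray have to walk through?
scene_size = 100.0           # longest side of the scene's bounding box (made-up number)
resolution = 512
cell_size = scene_size / resolution

short_ao_ray = 0.5           # a short AO distance, like my setup
long_shadow_ray = 60.0       # a spot-light shadow crossing a wide open space

print(short_ao_ray / cell_size)    # only ~2.6 cells crossed: cheap per ray
print(long_shadow_ray / cell_size) # ~307 cells crossed: each one stepped through
```

So a high resolution is nearly free for short AO rays but makes long shadow rays walk through hundreds of cells, which fits what people are saying about small versus large scenes.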

I found it tremendously useful to pick up a basic book on computer graphics algorithms, which I bought for about $6 on a clearance rack. It was pretty unreadable stuff, but I simply skimmed it, looking for the big picture of what was being accomplished and how the computer went about doing it. The details were unimportant. Spending a nice afternoon with the Blender source code is also an interesting and practical exercise: not to understand it, but simply to skim.