octree really puzzles me

octree size… I don’t get it. Seriously, I don’t get it.

What is it for? Memory management optimization only or something more intricate?

Of course, I read the 2.33 release log, but I still don’t fully get it; I can only guess.

It talks about large scenes, but is that large in kilo-polygons or large in Blender units? Or perhaps both? It seems to me it’s only large in Blender units, but I’d like to be sure… Of course I understand that a large polycount will increase rendering time; that’s not the point here.

According to the figures, it seems to me that, when unsure, running with octree size = 256 should almost always be the best compromise between memory use and rendering time, and that the extreme values should be avoided except for very small scenes or very large ones…

Am I right, or is that only true for this particular demo file? I don’t want to bother choosing an octree size value before each render (as it has no significance for an averagely talented end-user like me), but I’d like to get the quickest renders possible.

Couldn’t an algorithm be put together to ‘evaluate’ the scene and automatically ‘hint’ at a good octree size value? I know that POV-Ray has such evaluating code in it for algorithm optimization (for radiosity, IIRC), but I don’t know anything about coding nor the ‘physics’ behind raytracing.
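(Purely to illustrate the kind of ‘hint’ I mean, something like this toy sketch; the thresholds are invented and this is neither Blender’s nor POV-Ray’s actual code:)

```c
#include <stdio.h>

/* Toy heuristic, NOT Blender's actual code: guess an octree
 * resolution from the raytraced face count alone.  The thresholds
 * below are made up for illustration; real cost also depends on
 * how the faces are distributed in space, not just how many
 * there are. */
static int suggest_octree_resolution(long face_count)
{
    if (face_count < 10000)   return 64;
    if (face_count < 100000)  return 128;
    if (face_count < 500000)  return 256;
    return 512;
}

int main(void)
{
    long faces = 250000; /* hypothetical example scene */
    printf("suggested octree resolution: %d\n",
           suggest_octree_resolution(faces));
    return 0;
}
```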

Hope that someone can come up with a good rule of thumb (if there is one, of course) about this matter. Time counts, not only for animations but also for stills, so these answers matter to me.

Thank you for your feedback/experiences with octree size usage.

an octree partitions your scene, which makes raytracing faster

essentially, your scene is put in one big cube
which is subdivided into 8 cubes
which is subdivided into 8 cubes
which is subdivided into 8 cubes
… on and on

the size in the octree settings specifies the total amount of subdivision along one side, so 256 means the scene is subdivided into 256 cells along each axis
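just to show the idea (a minimal sketch, not Blender source), the resolution is basically how finely the scene’s bounding box is addressed per axis:

```c
#include <stdio.h>

/* Minimal sketch, not Blender code: map a point inside the scene's
 * bounding box to integer cell coordinates in a uniform grid of
 * `res` cells per axis.  A real octree stores this grid sparsely
 * as a tree of 8-way subdivided nodes, but the addressing idea is
 * the same. */
typedef struct { float x, y, z; } Vec3;

static int clampi(int v, int lo, int hi)
{
    return v < lo ? lo : (v > hi ? hi : v);
}

static void point_to_cell(Vec3 p, Vec3 bb_min, Vec3 bb_max,
                          int res, int cell[3])
{
    /* Normalize each axis into [0,1), scale by the resolution, and
     * clamp so a point exactly on the max bound stays in range. */
    cell[0] = clampi((int)((p.x - bb_min.x) / (bb_max.x - bb_min.x) * res), 0, res - 1);
    cell[1] = clampi((int)((p.y - bb_min.y) / (bb_max.y - bb_min.y) * res), 0, res - 1);
    cell[2] = clampi((int)((p.z - bb_min.z) / (bb_max.z - bb_min.z) * res), 0, res - 1);
}

int main(void)
{
    Vec3 bb_min = {0.0f, 0.0f, 0.0f}, bb_max = {10.0f, 10.0f, 10.0f};
    Vec3 p = {2.5f, 7.5f, 5.0f};
    int cell[3];

    point_to_cell(p, bb_min, bb_max, 256, cell);
    printf("point maps to cell (%d, %d, %d)\n", cell[0], cell[1], cell[2]);
    return 0;
}
```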

in general, higher numbers are faster but eat up more RAM

if you have a low-poly scene, there will not be an increase in speed from a higher octree setting, and speed may even decrease because of the extra cost of calculating the octree

also it improves the raytracing capabilities - or so I have found! :wink:

It only affects speed, not the result of the calculations.

Martin

Thank you for your answer. Things are much clearer in my mind. So going with 256 is a good thing most of the time if you have, say, around 512 MB of RAM.

I understand, however, that increasing the octree size for small scenes will cost extra time for rebuilding the data according to the new octree cell size.

Thank you z3r0 d! And all who answered too :wink:

I suggest that you should leave such settings alone, unless you have a problem that you absolutely cannot ignore, and you discern, by actual experiment using very small steps, that the problem is solved in this way.

All computer algorithms exhibit one basic trade-off: speed vs. space. Furthermore, algorithms executing in a typical virtual-memory based operating system are also affected by the operating system “paging” information between RAM and the swap file based on frequency and age of use. When a VM system starts to “thrash,” it’s not a pretty sight.
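To put rough numbers on the space side of that trade-off (back-of-the-envelope only; as I understand it, Blender only stores data for the nodes that faces actually occupy, so real memory use is far lower than the full grid):

```c
#include <stdio.h>

/* Back-of-the-envelope only: the number of *potential* cells grows
 * as res^3, so each doubling of the resolution multiplies it by 8.
 * An actual octree is sparse, so real memory use is much lower,
 * but the growth trend is the point. */
int main(void)
{
    for (int res = 64; res <= 512; res *= 2) {
        long long cells = (long long)res * res * res;
        printf("res %4d -> %12lld potential cells\n", res, cells);
    }
    return 0;
}
```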

Octree Resolution = 512 is the key to LIFE man.

I wish I had known this earlier, as I already missed one deadline and cancelled my project because of slow speeds (now I know that an octree resolution of 64 may not have been such a good idea… )

Micah

This sounds cool. How do you change the resolution? I have 256 MB of RAM and my complex renders are very slow. Will this help?

olivS, I just want to say thanks for bringing octree resolution up. Just shows what you get from a fresh perspective.

I knew what an octree is; I even tried to test it out when it first arrived in Blender, but after your post I thought I’d look again. Results: dramatic. Without raytracing, the increased octree resolution actually slowed down the scene a bit; with raytracing enabled, the render times went from, I don’t know, way more than 5 minutes to about 50 seconds. A real jaw-dropper. The tradeoff was an increase in memory usage to the tune of about 250 megabytes (for a very small scene). Kind of expensive, but it makes a big difference. I’ll be keeping the tool in mind.

[sigh] I’ve just tried increasing the octree to 512 on a large scene with quite a few vertices, and Blender is crashing. Don’t know if it’s the vertex count or something else. Resolutions 64 through 256 are OK. The 512 setting causes Blender (official, Intel, et al.) to terminate without warning. At the time of render, system memory usage jumps to around 650 MB, which is about half of total RAM.

Good news is, the crash doesn’t seem to be because of the vertex count. I’m not really qualified to say, so no guessing from me. Here is the scene that is crashing. Just lower the octree resolution to render.
http://ebbstudios.ms11.net/octree%20crash%20test.zip

Would it be wise to post this to the bug tracker? Hope this helps.

http://ebbstudios.ms11.net/octree%20crash%20test.jpg

I have also heard that increasing your X and Y parts can increase render speed. Is this true? Also, where can I find the octree settings?

Dividing in parts won’t make it faster, but it will reduce memory usage.

Martin
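To illustrate Martin’s point with made-up numbers (a sketch of the idea as I understand it, not Blender’s actual code): the per-part image buffers shrink as the part count goes up, while the scene data, octree included, still has to fit in RAM for every part.

```c
#include <stdio.h>

/* Rough illustration only, with invented numbers: float pixel
 * buffers only need to cover one part at a time, so more parts
 * means a smaller peak buffer.  The scene itself (and its octree)
 * must still fit in RAM for every part, so rendering is no faster. */
int main(void)
{
    const long width = 1920, height = 1080;
    const long bytes_per_pixel = 4 * sizeof(float); /* RGBA floats */

    for (int parts = 1; parts <= 8; parts *= 2) {
        long part_pixels = (width / parts) * (height / parts);
        printf("%d x %d parts -> ~%ld KB buffer per part\n",
               parts, parts, part_pixels * bytes_per_pixel / 1024);
    }
    return 0;
}
```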

Crashes for me too; it says the octree branches are full

Just wanted to say that I decided to post the crash scene to the bug tracker. In fact, the bug has already been repaired and committed to CVS. Here’s the email that was returned:

>Comment By: Ton Roosendaal (ton)
Date: 2004-10-27 11:58

Message:
Logged In: YES
user_id=103

Great, a reproducable crash! :slight_smile:
Fix easily found, commited fix.

And from the commit logs:

Using octree resolution 512 easily could overflow fixed sized array that
holds all node branches. Had to jack that up…
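For anyone curious what that kind of bug looks like, here’s an illustrative sketch of the pattern (not the actual Blender code; the limit and the names are invented):

```c
#include <stdio.h>

/* Illustration of the bug pattern described in the commit log, not
 * the real Blender source: a fixed-size branch array overflows once
 * a high resolution produces more branches than were provisioned.
 * The fix is to provision (or grow) the array for the worst case. */
#define MAX_BRANCHES 4096  /* hypothetical old limit */

typedef struct { int children[8]; } Branch;

static Branch branches[MAX_BRANCHES];
static int num_branches = 0;

static int add_branch(void)
{
    if (num_branches >= MAX_BRANCHES) {
        /* Without this check, writing past the end of the array is
         * exactly the kind of silent crash reported in this thread
         * ("octree branches are full"). */
        fprintf(stderr, "octree branches are full\n");
        return -1;
    }
    return num_branches++;
}

int main(void)
{
    for (int i = 0; i < 5000; i++)
        if (add_branch() < 0) break;
    printf("allocated %d branches\n", num_branches);
    return 0;
}
```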

No fear.