rendering efficiently

Hi all,

I’m trying to build a better understanding of how to render efficiently for animation. I’ve done a fair share of research so far, but I was hoping some of you might have useful information for me, especially those with experience in professional production environments.

In particular, what is the deal with the octree resolution setting? Everything I’ve read tells me that a proper setting can cut down the raytree build time for a scene… and that the best value varies with how large the scene is.

But after several timed tests on various parts of my scene, my experiments showed the octree setting to have seemingly arbitrary results.
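As an aside, the general tradeoff behind an octree (or any spatial subdivision) resolution setting can be sketched with a toy cost model. Nothing here is Blender’s actual code; the resolutions and triangle counts are purely illustrative. The point is that both too-coarse and too-fine settings cost you, which is why the sweet spot depends on scene size and timings can look arbitrary:

```python
def octree_cost_model(resolution, triangle_count):
    """Toy cost model for a spatial subdivision structure.

    Build cost grows with the number of cells (resolution**3), while the
    number of candidate triangles a ray must test shrinks as triangles are
    spread over more cells -- until cells get so small that traversal
    overhead dominates.
    """
    cells = resolution ** 3
    build_cost = cells  # proportional to cells visited during build
    tris_per_cell = max(1.0, triangle_count / cells)
    trace_cost = resolution + tris_per_cell  # traversal steps + intersection tests
    return build_cost, trace_cost

# With an illustrative 100k-triangle scene, a mid resolution traces cheapest:
for res in (8, 64, 512):
    build, trace = octree_cost_model(res, 100_000)
    print(res, round(build), round(trace, 1))
```

Too low a resolution leaves hundreds of triangles per cell; too high a resolution explodes build time and memory while adding traversal steps. That non-monotonic curve is consistent with timed tests that look random if you only sample a few values.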

Basically, I want to find any reasonable way to cut render time for the animation I’m currently working on. At the moment one frame renders in about 4 minutes, which isn’t too bad, but with a production timeframe in mind it just won’t cut it.
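For context, a rough budget calculation shows why per-frame cost matters so much for animation. The numbers here are illustrative assumptions (25 fps, and the 4 minutes per frame quoted above):

```python
def total_render_hours(seconds_of_animation, fps=25, minutes_per_frame=4.0):
    """Total wall-clock hours to render an animation at a fixed per-frame cost."""
    frames = seconds_of_animation * fps
    return frames * minutes_per_frame / 60.0

# One minute of animation: 1500 frames -> 100 hours on a single machine.
print(total_render_hours(60))  # -> 100.0
```

Even shaving one minute per frame saves 25 hours on that one-minute shot, which is why settings tweaks, simpler lighting, and render layers pay off quickly.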

The project calls for the Yafray raytracer.

I am using OSA 8, I have avoided Ambient Occlusion to save some time, and I have modeled my meshes as economically as possible.

I understand there are a lot of tricks to setting up lights properly to save resources (like shadow buffers). But unfortunately the only one I am aware of (spotlight shadow buffers) isn’t available in the Yafray engine. If anyone can help me with that, it would be great.

Thanks

I think the octree setting in Blender does not affect Yafray at all.

Ahh, okay, interesting. That would explain it then. Thanks.

Hi decius

I’m trying to do the same, but I’ve tried to stay away from Yafray because I thought that with the advances in the internal renderer (render passes, etc.) I’d get better control, and that I could cut render times per frame by rendering passes and compositing afterwards.

But I also prefer Yafray for IBL and Skydome; those renders just look far more realistic than the internal engine’s, but they take too long per frame.

So I’m trying to get my head around lighting the scene for realism, using Jeremy Birn’s book Digital Lighting & Rendering, 2nd edition, but it seems even harder to light for animation than for stills.

I have a 20 min per frame time limit, since I’m using Respower’s render farm. The problem is getting realism within it.

Good Luck.

thanks,

Yeah, I totally understand. Elephants Dream is a prime example of the power of Blender’s internal engine. From what I’ve read and seen in video conferences, they hardly used much raytracing at all, relying instead on a lot of spot shadow buffers.

But unfortunately I’m not quite skilled enough yet to reach the depth of realism they achieved with so few resources. Plus I need camera DOF for part of the animation, which I understood was only possible for still images in Blender’s internal renderer.

Thanks

Yes. Well, no. DOF is done using the Z-buffer. Blender handles it beautifully, but there may be other renderers out there that do as well; use the OpenEXR format to save the Z-buffer info. So yes, it is easiest to do DOF in Blender, but not exclusively. I don’t know if Yafray cares about Z; I doubt the Z of every vertex is in the XML file.

I wrote the current DOF page in the wiki; comments and criticism appreciated. It works directly with a RenderOutput, not just still images, so you can render DOF on the fly. Usually it is done with your animation anyway, because you have rendered in layers. Layers, and thus rendering simpler scenes, speed up rendering, or allow you three times your 20-minute maximum, for three layers that are composited afterwards. There’s a Defocus node in 2.43 which should simplify the noodle greatly, but the idea remains the same.
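The Z-buffer idea can be sketched in a few lines. This is a toy model, not Blender’s actual compositor math: the depth values, `focus_z`, and `strength` parameter are all illustrative. Each pixel gets a blur radius (circle of confusion) from how far its depth is from the focal distance:

```python
def coc_radius(z, focus_z, strength=1.0, max_radius=10.0):
    """Approximate circle-of-confusion radius from a Z-buffer depth value.

    Pixels at the focal distance stay sharp; blur grows with the depth
    difference and is clamped so distant background stays manageable.
    """
    if z <= 0.0:
        return max_radius  # missing or background depth: fully blurred
    r = strength * abs(z - focus_z) / z
    return min(r, max_radius)

print(coc_radius(10.0, 10.0))                 # in focus -> 0.0
print(coc_radius(20.0, 10.0, strength=5.0))   # behind focus -> 2.5
```

A compositor (or a Defocus-style node) then blurs each pixel by its radius, which is why saving the Z pass with OpenEXR is enough to do DOF as a post-process on any render that provides depth.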

In order to render ED on my machines, I had to increase the X and Y parts to 35 and 10, render only the Emo layer, and kill every background process I could.
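For anyone unfamiliar with the Xparts/Yparts settings: they split the frame into a grid of tiles rendered one at a time, which lowers peak memory at the cost of some per-tile overhead. A minimal sketch of the splitting (the 1920×1080 resolution below is an illustrative assumption, and the rounding is mine, not Blender’s exact scheme):

```python
def tile_bounds(width, height, xparts, yparts):
    """Split a frame into xparts * yparts tile rectangles (x0, y0, x1, y1)."""
    tiles = []
    for ty in range(yparts):
        for tx in range(xparts):
            x0 = tx * width // xparts
            x1 = (tx + 1) * width // xparts
            y0 = ty * height // yparts
            y1 = (ty + 1) * height // yparts
            tiles.append((x0, y0, x1, y1))
    return tiles

tiles = tile_bounds(1920, 1080, 35, 10)
print(len(tiles))  # -> 350 tiles, each a fraction of the frame's memory
```

With 35 × 10 parts, each tile holds roughly 1/350th of the frame’s pixel data at once, which is what makes a heavy scene like ED fit on a modest machine.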

There is a renderer out there that just keeps refining the raytrace until you tell it to stop, so you could start it on a scene before bed and see what the morning brings. But if Yafray is a hard requirement, you’re stuck.

ED used a negative light underneath everyone. See shot 07_04, “Emo flips out”, for example.
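The negative-light trick is just additive shading with a lamp whose energy is below zero, giving a cheap contact shadow without raytracing. A minimal sketch under that assumption (the energy values are illustrative, not from the ED files):

```python
def shade_point(lamp_contributions):
    """Sum lamp contributions at a surface point, clamped at black.

    A lamp with negative energy subtracts light, darkening the floor under
    a character like a cheap, buffer-free contact shadow.
    """
    return max(0.0, sum(lamp_contributions))

# Key light 0.8 + fill 0.3, minus a negative lamp of energy 0.4 under the feet:
print(shade_point([0.8, 0.3, -0.4]))   # ~0.7: darker where the negative lamp falls
print(shade_point([0.8, 0.3]))         # ~1.1: same point without the negative lamp
```

Because the negative lamp has falloff like any other lamp, the darkening fades out naturally with distance, which is why it reads as soft shadow rather than a painted blob.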