Improving rendering times (AO)

Hi everybody!

Since my models have grown well beyond 800,000 vertices, my rendering times have grown exponentially.

Today I was rendering a 1M-vertex model at 1280x1024 with 16x OSA and 8-sample ambient occlusion (two point lights for fill, one 1-sample area light for the main shadow). After 6 hours, about 25 percent of my image had been rendered. I think this is way too much, since professional packages do the job in less than 2 hours.

Am I doing something wrong? Are there some options to improve the rendering? How long does it usually take to render with the options mentioned above? (P4 3GHz HT, 512MB RAM)

Maybe someone has a suggestion.

GreetZ

(the always impatient) ReSeT

EDIT: Without AO the job is done within 15 minutes!

I’m a bit envious that you can even do scenes that big. I think you may have answered your own question at the end. AO is pretty slow at rendering. I heard there were versions of Blender that made AO much faster (2x faster?) by fixing a bug or something - see blender.org. Also, getting a version optimized for your system will help.

I know that the Blender internal renderer is supposed to be really fast, but I’ve noticed that it’s slower at doing certain things. For example, in Maya I would render a 320x240 image with full OSA, raytracing, and true motion blur, and it would take just over a minute. In Blender there are no motion-blur sampling options, so you always get 8 renders per image - impossibly long for complex animations. Mental Ray gives some better options:

There are two alternative motion blur algorithms. The regular motion blur algorithm is based on adaptive temporal oversampling, and is enabled if the shutter period is nonzero (i.e. the shutter time is greater than the shutter delay time). The “fast motion blur” mode (mental ray 3.x) is based on coordinated spatial and temporal oversampling, which requires far fewer samples (typically by a factor of five) for motion-blurred pixels. Fast motion blur mode is enabled if the shutter period is nonzero, and if the time contrast is set to 0 and the min and max sampling values are set to the same value.
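For concreteness, the fast-motion-blur conditions described above map onto a few scene options. This is only a hedged sketch in mental ray’s .mi options syntax, with the option names written from memory - treat the exact spelling as an assumption and check the mental ray manual:

```
options "opt"
    shutter 0 1            # nonzero shutter period (delay 0, shutter 1)
    time contrast 0 0 0 0  # zero time contrast enables fast motion blur
    samples 2 2            # min and max sampling set to the same value
end options
```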

All I can say is: use less OSA or fewer samples… if you’re testing, turn AO off and set OSA to minimum or off.

Am I correct that you have gobs of RAM in this system? If you don’t, that’s gonna be a major problem with models that big. The screen-size is huge, the model is huge…

Do you have to use AO? Do you have to use AO for all of it? There are some really amazing things you can do with layers and lighting, even compositing, that might make an enormous difference in render times…

OK, since I have a very weak computer, I have found a ton of tricks to make renderings go a lot faster.

For this particular case, I would suggest getting rid of oversampling. Render it at 3 or 4x size, then scale it down in your favourite image editor. This has a couple of other advantages too… edges are much sharper, and the OSA hasn’t had a chance to kill any textures you might have.
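The supersample-then-downscale trick above is just box filtering: every block of pixels in the big render is averaged into one output pixel, which is what smooths the edges. A minimal pure-Python sketch (grayscale pixels as nested lists, 4x factor assumed):

```python
def downscale(img, factor):
    """Box-filter downscale: average each factor x factor block of pixels."""
    h, w = len(img), len(img[0])
    out = []
    for y in range(0, h, factor):
        row = []
        for x in range(0, w, factor):
            block = [img[y + dy][x + dx]
                     for dy in range(factor) for dx in range(factor)]
            row.append(sum(block) / len(block))
        out.append(row)
    return out

# A hard vertical edge "rendered" at 4x size (white from column 6 onward)...
big = [[255 if x >= 6 else 0 for x in range(16)] for _ in range(16)]
small = downscale(big, 4)
# ...comes out anti-aliased: the block straddling the edge becomes mid-grey.
print(small[0])  # [0.0, 127.5, 255.0, 255.0]
```

In practice an image editor’s bicubic resize does the same averaging with a nicer filter kernel.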

Second thing I would suggest is to get rid of the area lights; they are horribly slow. Try f00f’s/Blackmage’s distray version of Blender and use the soft shadow option instead (but this is only a suggestion).

Third thing: scale down the scene as much as humanly possible. If you have a huge ground plane that you might not necessarily need, the render times will shoot through the roof…

Any chance of a screenshot? I could figure out other methods if I knew exactly what you were doing.

cheers.

Are you using the Intel/AMD optimized version?

Change the octree resolution to 512 :smiley: YEY IT WORKED

Peace

If you’re using the official 2.34 build, there was a coding error that made the AO hellishly slow. Recent CVS builds have this corrected.

Thanks everybody for your suggestions, I’ll try them all to get better render times.

I think one problem is memory consumption, and it would surely be better if I upgraded to 1GB. (My hard disk is working constantly while rendering.)

Maybe try to split up your scene: render the areas that need AO with it, render those that don’t without it, and composite them in Photoshop later!

Or just use 3Delight as a render engine.

claas
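Compositing a separately rendered AO pass back over the colour pass, as suggested above, is essentially a per-pixel multiply (the usual “multiply” blend mode in Photoshop). A hedged sketch with made-up grayscale pixel values:

```python
def multiply_ao(color, ao):
    """Darken a colour pass by an AO pass (both 0-255 grayscale grids)."""
    return [[c * a / 255 for c, a in zip(crow, arow)]
            for crow, arow in zip(color, ao)]

# Made-up pixel values: white AO = unoccluded, black AO = fully occluded.
color = [[200, 200], [200, 200]]
ao    = [[255, 128], [255, 0]]
print(multiply_ao(color, ao))  # unoccluded pixels stay 200.0, occluded drop toward 0.0
```

A real compositing app does this per channel on full RGB images, but the arithmetic is the same.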

all i can say is:
https://blenderartists.org/forum/viewtopic.php?t=17626

Did you put up the octree resolution? If you have more than 500 MB of RAM, it really, really helps. A lot. For me, with AO and raytracing, it gives render times that are less than half of those at octree 128.

I vaguely remember, while reading through the Blender book, that you can render the image in parts - quadrants, to be exact. This might speed things up because Blender would have less to load into RAM, meaning you would use less hard disk as virtual memory, meaning you would render a whole lot faster on an image that size and at that level of complexity. This is just a theory, though; I’ll look it up when I get home. I also vaguely remember reading somewhere that command-line rendering speeds things up as well, because it doesn’t have to display the image and the Blender interface is not running. Again, I’m not sure, but this thread does interest me (my last two projects broke the 1 million mark, with two to three levels of subsurf on top of that!). I’m off to do some benchmarking.

Command-line rendering always increases render speed, sometimes by a lot (depending on the GFX card).

Rendering in parts only decreases memory usage. IIRC, it shouldn’t have much of an effect on speed unless rendering the image as a whole would cause memory swapping.

Martin
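For anyone who hasn’t tried the command-line rendering mentioned above: it just means launching Blender in background mode from a shell. A minimal sketch, with the flags written from memory (-b for background/no GUI, -f to render a single frame) and "scene.blend" as a placeholder file name:

```python
# Hedged sketch: building the command you would type in a shell to render
# frame 1 of a (placeholder) scene.blend without opening the Blender UI.
import subprocess

cmd = ["blender", "-b", "scene.blend", "-f", "1"]
print(" ".join(cmd))   # -> blender -b scene.blend -f 1
# subprocess.run(cmd)  # uncomment to launch the actual render
```

Because no window is drawn and the interface isn’t running, all the CPU goes to the renderer.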

Here are the results of the tests I did:

Official Blender 2.34 - Normal Settings - 5 min 6 sec
Official Blender 2.34 - Parts (8X8) - 6 min 33 sec
Official Blender 2.34 - Command Line - 5 min 13 sec

Blender Optimized - Normal Settings - 1 min 19 sec
Blender Optimized - Parts (8X8) - 2 min 43 sec
Blender Optimized - Command Line - 1 min 19 sec

So I came to the conclusion that the best option, for my computer and the file I was using, was the optimized Blender with normal settings. Of course, results will vary from computer to computer and from file to file.
The file I used was comprised of hundreds of meshes at subsurf levels 2 and 3, totaling 1.3 million vertices. No materials, no textures, AO level 4, 3 lamps, OSA 16. All were rendered with my Athlon XP Mobile 2500 chip and 512 megs of DDR400 RAM. Splitting up the image into parts did slow down the process quite a bit. The figures above use 64 parts (8x8), but I tried fewer parts (2x2 and 4x4) and it was still slower.

But what interests me is that maybe this technology could be used to render the image over a network. Just like we currently have network renderers that render animations a frame at a time on different computers, we could have a network renderer that renders parts over the network. You could split up the image into a grid labeled with letters on one side and numbers on the perpendicular side. Then tell one computer to render A1, tell another computer to render A2, and another to render A3. Then, when they’re done, each renders another part. I think this may work. Are there any coders listening? I’m going to go to blender.org to try and get some feedback.
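The grid-labelling scheme above is easy to sketch: split the image into lettered/numbered tiles and deal them out round-robin to the render nodes. A minimal sketch - the host names are made up, and a real tool would also send back results and hand out new tiles as nodes finish:

```python
import string

def assign_tiles(cols, rows, hosts):
    """Label tiles A1, A2, ... (letter = row, number = column) and
    deal them out to hosts round-robin. Assumes rows <= 26."""
    jobs = {h: [] for h in hosts}
    i = 0
    for r in range(rows):
        for c in range(cols):
            label = string.ascii_uppercase[r] + str(c + 1)
            jobs[hosts[i % len(hosts)]].append(label)
            i += 1
    return jobs

# A 2x2 grid dealt out to two (hypothetical) machines:
print(assign_tiles(2, 2, ["node1", "node2"]))
# {'node1': ['A1', 'B1'], 'node2': ['A2', 'B2']}
```

Each host would then render only its tiles (e.g. via a border/parts render) and the master would stitch the pieces back together.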