How to speed up renderings.

Sorry to bother you with all my problems regarding Blender, but I wonder if there is a way to speed up the render time, as my renderer is very slow, even for simple scenes. My latest project takes forever even when I am trying to make a preview at a small resolution (250×250 pixels). I had it going for many hours and I didn’t see any progress. I wonder if something is wrong with my computer or my Blender. I have reinstalled Blender twice to try to fix the problem. I have also tried to maximize the memory, as it needs a lot of memory. It peaks at 1.36 GB and stays there. The system window shows the renderer is running.

By the way, I’ve got an AMD 2200+ with 1 GB of RAM and about 1.5 GB of swap file. My graphics card is an Nvidia GeForce 6600 with 128 MB of RAM. I have installed WinMem Optimizer, which supposedly frees up some RAM by dumping unused programs and files from memory. The program is running in the background. Could that have something to do with it?



Hello and welcome to the forum. The indicator at the top of your 3D window shows that this is not a small scene at all; it has over 740,000 vertices. Is the hard drive thrashing when you try to render?

There’s no need for such simple objects to have such dense meshes; try remodeling them more efficiently, or you may have the Subsurf modifier set way too high. Also, turn off Ray in the render panels if you don’t need ray tracing.

To expand a bit on what CD38 said, add a Decimate modifier to each of those (two?) objects … or to each object if there are more than two, then adjust the Decimate Ratio downwards to the minimum that still gives you a satisfactory result. Then (optionally) press the Apply button beside the modifier to make the changes permanent.
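As a rough rule of thumb, the Ratio value you want in the Decimate modifier is just your target face count divided by the current one. A minimal sketch of that arithmetic in plain Python (the face counts below are made-up examples, not from the scene in this thread):

```python
def decimate_ratio(current_faces, target_faces):
    """Ratio to enter in the Decimate modifier to reduce a mesh
    from current_faces to roughly target_faces (clamped to [0, 1])."""
    if current_faces <= 0:
        raise ValueError("current_faces must be positive")
    return min(1.0, target_faces / current_faces)

# Example: reduce a 740,000-face scene to about 50,000 faces
print(round(decimate_ratio(740_000, 50_000), 3))  # 0.068
```

Start from a ratio like this and nudge it upwards until the silhouette stops visibly degrading.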

Also turn off the Ray (tracing) and OSA buttons to decrease the render time; Ray especially will make the biggest difference in reducing render times.

Mike

Hard drive thrashing? What do you mean by that? I am not very good with technical computer stuff, really. Sorry about that.

Welcome to the forums. That means that your hard drive is continually running.

cheers,

Bob

On Windows, with 1 GB of memory and 1.36 GB of memory usage, I’d say you’re thrashing! :smiley: You want to stay within your available memory, because otherwise the OS sends some things to the hard drive, and this is much, much slower. Windows seems to swap at the drop of a hat…
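To put a number on "stay within your available memory": the render starts paging as soon as its working set plus whatever the OS itself needs exceeds physical RAM. A tiny sketch (the OS overhead figure is a guess for illustration, not a measured value):

```python
def will_thrash(working_set_gb, physical_ram_gb, os_overhead_gb=0.3):
    """True if the render's working set plus an assumed OS overhead
    exceeds physical RAM, i.e. the OS must page to disk."""
    return working_set_gb + os_overhead_gb > physical_ram_gb

# The situation in this thread: 1.36 GB in use on a 1 GB machine
print(will_thrash(1.36, 1.0))  # True
# A scene that fits comfortably
print(will_thrash(0.5, 1.0))   # False
```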

I would disable WinMem. Your scene is big, but a rendering that size shouldn’t be locking you up. Try turning options off (raytracing, OSA, AO) until you start getting reasonable results, then restore things one by one. And don’t use so many polygons!

RS

This fits the topic best: do the different programs and renderers have the same limits regarding memory usage, or do some programs have advantages? How are high-poly scenes rendered, then? Or is the optimization to reduce polycount really that extreme?

Best regards,
Martin

The ultimate answer: It depends :smiley:
Raytracers often have very efficient ways to reference the same object thousands of times, so you can easily have a forest with a total of several billion polygons, but with all the trees the same. On the other hand, many scanline renderers can do efficient micro-triangle displacement, hair, fur, etc., something that is a lot more difficult (though not impossible) with raytracers.
Acceleration structures in raytracers are also a trade-off between memory usage, preparation time, and performance; developers may have very different priorities there… the ultimate solution has yet to be found.
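The instancing argument is easy to see in numbers: store one master tree plus a small transform per copy, instead of every copy's full geometry. A back-of-the-envelope sketch (the per-polygon and per-instance byte counts are assumed round figures, not any particular renderer's actual layout):

```python
def forest_memory_mb(trees, polys_per_tree, instanced,
                     bytes_per_poly=32, bytes_per_instance=64):
    """Rough geometry memory for a forest: either every tree's polygons
    stored explicitly, or one master tree plus a transform per instance."""
    if instanced:
        total = polys_per_tree * bytes_per_poly + trees * bytes_per_instance
    else:
        total = trees * polys_per_tree * bytes_per_poly
    return total / (1024 * 1024)

# 10,000 trees of 500,000 polygons each (5 billion polygons total)
print(round(forest_memory_mb(10_000, 500_000, instanced=False)))  # 152588 (MB)
print(round(forest_memory_mb(10_000, 500_000, instanced=True)))   # 16 (MB)
```

The non-instanced version needs ~150 GB; the instanced one fits in a few megabytes, which is how "several billion polygons" becomes tractable.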

But in general, if you need 0.7 million polygons for a scene like the one above, you’re clearly an inefficient modeler and will probably never be able to make scenes as detail-rich as good artists do, unless you compensate with very expensive hardware :wink:

Detail does not only come from raw polygon masses; in a static scene, having several times more polygons than your final rendering has pixels often indicates you’re wasting a lot of polys in places where no one will ever see them.
With animations, things are not so easy, of course… and some (only some?) people also do complex scenes for the pure love of detail, which can only be seen in detail renderings or at extreme resolutions.
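That polygons-versus-pixels heuristic is easy to quantify. A sketch comparing a scene's polygon count against the pixel budget of a render (purely illustrative; treating the thread's 740,000-vertex figure as a polygon count for the sake of the example):

```python
def polys_per_pixel(polygons, width, height):
    """How many polygons the scene carries per rendered pixel; values
    well above ~1 in a static scene suggest wasted, invisible detail."""
    return polygons / (width * height)

# 740,000 polygons rendered at a 250x250 preview
print(round(polys_per_pixel(740_000, 250, 250), 1))  # 11.8
```

Nearly twelve polygons competing for every pixel of the preview is a strong hint that most of that geometry can never show up on screen.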

Raytracers often have very efficient ways to reference the same object thousands of times, so you can easily have a forest with a total of several billion polygons, but with all the trees the same. On the other hand, many scanline renderers can do efficient micro-triangle displacement, hair, fur, etc., something that is a lot more difficult (though not impossible) with raytracers.
Acceleration structures in raytracers are also a trade-off between memory usage, preparation time, and performance; developers may have very different priorities there… the ultimate solution has yet to be found.
Any experience with Blender/Yafray so far on this? I have never observed anything like this in Blender; even linked duplicates always create more vertices.

But in general, if you need 0.7 million polygons for a scene like the one above, you’re clearly an inefficient modeler and will probably never be able to make scenes as detail-rich as good artists do, unless you compensate with very expensive hardware
Correct, and that is why I found this thread :D I did not build my models very smartly, which is why I had to start again from the beginning.

Detail does not only come from raw polygon masses; in a static scene, having several times more polygons than your final rendering has pixels often indicates you’re wasting a lot of polys in places where no one will ever see them.
So far so good for the internal calculation in Blender. But coming back to Yafray: only traced details get calculated, don’t they? For example: if I have a high-poly object behind a wall, is it taken into the calculation? I have a feeling the answer to my own question is yes, because this object can, of course, send indirect light onto something my camera sees. Obviously I am referring to radiosity/global illumination set-ups.

… and some (only some?) people also do complex scenes for the pure love of detail, which can only be seen in detail renderings or at extreme resolutions.
Got me … :evilgrin:

Best regards from Germany
Martin

First, let me say that you have to follow the advice given above; nothing I say will help such a dense mesh. BUT, for others who are looking at this thread for help:

In your Buttons window, Scene/Render buttons, RenderLayers panel, Layer buttons: disable Halo, Ztra, Sky, and Edge. In the Combined buttons, disable Z.

In the Render panel, disable Shadow, EnvMap, Ray. Disable OSA and MBlur. Render at 25% size.

Render only layers with visible objects. In answer to your ‘not visible high-poly’ question: in general, yes. Don’t take the chance that your renderer will waste a lot of time calculating an object’s colors only to throw them away when it runs into an opaque wall between it and the camera, or on something off-screen that does not cast a shadow or radiosity light into the scene.

You could raise the number of tiles in which the calculation is performed; this should free up some RAM. On the Scene panel (F10), under the OSA button, there are Xparts and Yparts buttons; play with those numbers until you are no longer using your swap, otherwise, with such a high polycount, things will get nasty.
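The intuition behind Xparts/Yparts is that splitting the frame shrinks the per-tile pixel buffers. A quick sketch of that arithmetic (assuming a float RGBA buffer at 16 bytes per pixel; actual buffer layouts vary by renderer version):

```python
def tile_buffer_mb(width, height, xparts, yparts, bytes_per_pixel=16):
    """Approximate memory for one tile's float RGBA buffer
    (4 channels x 4 bytes), with the render split into xparts x yparts."""
    tile_w = width / xparts
    tile_h = height / yparts
    return tile_w * tile_h * bytes_per_pixel / (1024 * 1024)

# A 1920x1080 render: one big tile vs an 8x8 split
print(round(tile_buffer_mb(1920, 1080, 1, 1), 1))  # 31.6 (MB)
print(round(tile_buffer_mb(1920, 1080, 8, 8), 2))  # 0.49 (MB)
```

As the posts below point out, though, this mainly helps with pixel buffers and textures, not with the geometry itself.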

No, that won’t work in this scene. I could be wrong, but I’m pretty sure that tiles are mostly useful when using a lot of textures - i.e. a texture is only loaded into memory when a tile that uses it is rendered. Geometry isn’t like this (yet) - all geometry gets pre-processed at the start of the render.

Looking at your attachment, I see no reason for you to have so many polygons in those objects; you could easily cut it to less than a tenth of the original number and not lose detail in any noticeable way.

YafRay does support “instancing”, and the export code should also export linked duplicates as such.

@broken I see, so this only works for textures. Good to know, thanks for the heads-up.

I am not disagreeing in any way with your explanation, but I have in fact seen noticeable reductions in memory usage by upping the number of X and Y tiles used for the render. In fact, I often prevent memory crashes (my laptop is pretty sensitive) by increasing the number of tiles!

I have no technical explanation for it, though. But it does not rely on textures, because it works even with untextured scenes.

Yeah, I think it also works for some other things, like shadow buffers and envmaps. Anyway, I’m pretty sure I heard that straight from Ton - the lack of geometry bucketing was one of the last things he wanted to do to the renderer to make it a bit more REYES-like, but he hasn’t been able to get around to it yet.

You can see how the geometry is all pre-calculated at the start of the render, while it says ‘Preparing Scene data’. The vertex and face counts increase and increase until they hit the total amount; then all the tiles start rendering with all the geometry in memory. You can also test it by making a scene that has very little geometry in the center of the frame (where the first tile renders) but lots around the outside edges. The ve/fa counts and memory usage will still climb right up even before that first, empty tile in the center has rendered.