Network rendering

Has anybody written a program that will split up the rendering of a single frame over a render farm?

http://www.drqueue.org

I didn’t mean splitting up an animation; I meant splitting up a single still, getting the network to render each part, and then stitching it back into a single image. Does this program do that?

it would be slower than on one computer

There isn’t anything to do that; you can have Blender do it (one instance) by using parts rendering.

You’re right, in most cases it would be slower. However, with raytracing I have still frames that take up to 3 hours to render at 1280×1024, while at 320×256 the same frame takes much less time (obviously). So I set up the blend with four cameras that, when put together, seamlessly show the entire picture, and render each camera on a separate computer. By GIMPing the pics together I’ve saved around two hours of render time.

Now let’s say I wanted to do a really high-res image for a poster. It would take hours or even days to render. By splitting it up into four 1280×1024 images and rendering them on different computers I could get a 2560×2048 image in three hours.
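For the stitching step, a small Python script with PIL could do the pasting instead of GIMP. Just a rough sketch, assuming all four tiles are the same size; the file names are made up:

```python
# Stitch four quadrant renders into one big image with PIL.
# File names are placeholders -- rename to whatever your renders are called.
from PIL import Image

tiles = {
    (0, 0): "top_left.png",
    (1, 0): "top_right.png",
    (0, 1): "bottom_left.png",
    (1, 1): "bottom_right.png",
}

tile_w, tile_h = Image.open(tiles[(0, 0)]).size   # size of one quadrant
poster = Image.new("RGB", (2 * tile_w, 2 * tile_h))

for (col, row), name in tiles.items():
    poster.paste(Image.open(name), (col * tile_w, row * tile_h))

poster.save("poster_2560x2048.png")
```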

Does “parts rendering” achieve this? How do I use it? And if it doesn’t, has anybody written a script for this purpose?

Sorry to keep bugging everyone.

As I tried to mention, parts rendering only works within one computer.

The problem you will have with rendering parts on different computers is that Blender doesn’t allow you to have a camera face one way and look another (I mean render only a part of the image), which means separate renders are only combinable if you use an orthographic camera.
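If you do go the orthographic route, the sub-camera setup is just arithmetic: halve the ortho scale and shift the camera toward each corner by a quarter of the original view size. A quick Python sketch of that math, with no Blender calls, and assuming the ortho scale maps to the horizontal view extent:

```python
# Compute the camera settings for a 2x2 orthographic split.
# ortho_scale : the original camera's orthographic scale (horizontal extent)
# aspect      : image height / width, e.g. 1024 / 1280.0
def quadrant_cameras(ortho_scale, aspect):
    off_x = ortho_scale / 4.0            # quarter of the full view width
    off_y = ortho_scale * aspect / 4.0   # quarter of the full view height
    new_scale = ortho_scale / 2.0        # each tile sees half the width
    # (dx, dy) shifts in the camera's local X/Y plane, one per quadrant
    offsets = [(-off_x,  off_y), ( off_x,  off_y),    # top-left,  top-right
               (-off_x, -off_y), ( off_x, -off_y)]    # bottom-left, bottom-right
    return new_scale, offsets

# Example: a 1280x1024 render with ortho scale 10 -> four 640x512 tiles.
print(quadrant_cameras(10.0, 1024 / 1280.0))
```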

As far as I know, the only render engine that handles that by default (split-frame rendering) is Mental Ray. It’s so amazing to see that in action… Really speeds things up! Works even on simple preview renders. If you’ve got many computers hooked up, it screams!

It would be great to have that in Blender. Maybe a hack on the parts rendering, so we could specify which part to render, then on the other machine specify another part.

For animation splitting, here’s another trick I used to do a lot in Lightwave: backwards rendering. One CPU renders frames 1 to 100 and another CPU renders 100 to 1; when they meet, I stop the render. I think this would be easy to implement in Blender.

Alexandre Rangel

What we’d need is command line access to Parts.

There’s a script for that, http://www.selleri.org/Blender

YAY!

Stefano

Thanks Stefano, but that’s kind of clumsy… how do you get rsh for Win?

“A Perl script for distributed animation rendering over a LAN of computers (ideally multiplatform, but you must have rsh, which is rarely the case with Windows).”

I think it would be easier to just tell Blender in the render tab to render from 100 to 1 on the second machine. Still a workaround, though.

Rangel

It’s for us, command-line men :wink:

And there should be rsh for win, I’ll have a look at winfiles…

You need a render daemon to do that!

The rsh daemon stays there and accepts commands from other machines; a render daemon accepts only Blender commands, hence is safer, but such a daemon is not (yet) working, while rsh is a UNIX standard :wink:

Anyway, you should always render your animation from the command line and never from the GUI tabs! Unless your animation is really fast to render, it is safer to use the command line and separate Targas!
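For example, a little launcher like this. It is only a sketch with a made-up file name and frame range; you’d run the same thing on the second machine with the other half of the range:

```python
# Launch a background Blender render that writes one Targa per frame.
# On a second machine, run the same thing with the other half of the
# range (e.g. -s 51 -e 100), as discussed above.
import subprocess

subprocess.call([
    "blender", "-b", "scene.blend",   # background mode, no GUI
    "-o", "//render_#####",           # numbered output files next to the .blend
    "-F", "TGA",                      # separate Targas, one per frame
    "-s", "1", "-e", "50",            # this machine's share of the frames
    "-a",                             # render the animation range
])
```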

Stefano

Hello, all:

Just to let everyone know, the Blender Render Daemon project (aka Network Render on the blender.org site) is active and working. There are several developers working on improving it and making it more robust and easy to use. At the moment you need to compile it yourself & configure it manually, but we are working on changing that.

It works on all supported Blender platforms as far as I have been able to test.

As for rendering in Parts, that’s a very cool idea, and one that bears looking at as we continue work on the Render Daemon.

-Bischofftep

Even command-line access to render by parts would rock!!! Can’t wait to start scripting in Linux, if it ever gets done. My comp and my dad’s will plow away, while those tons of crappy out-of-date computers I have won’t know what hit them!!

Lived,

In the short term, have you considered using the old-fashioned sneakernet method? Install Blender on four systems, copy the blend file to each, and manually reposition the camera to a different quadrant on each for rendering? It would still be a lot faster than waiting for a single system.

I haven’t looked into the Python and scripting interfaces at all (except for a quick perusal to get an idea of what was there), but it seems to me that you could come up with a small Python routine to reposition the camera as necessary, along with a small OS script or application to copy the blend file to all of the different systems and start up Blender with the appropriate script and parameters.
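Something along these lines might handle the copy-and-launch part. A very rough sketch only: it uses ssh/scp where the thread talks about rsh, and the host names, paths, and the quadrant.py camera script are all hypothetical:

```python
# Copy the .blend (and a camera-repositioning script) to each box and
# start a background render of frame 1 there.  Hosts, paths and the
# script name are placeholders; quadrant.py would move the camera to
# the right corner based on the QUADRANT variable.
import subprocess

BLEND = "scene.blend"
SCRIPT = "quadrant.py"                  # hypothetical camera-offset script
hosts = ["box1", "box2", "box3", "box4"]

for i, host in enumerate(hosts):
    for f in (BLEND, SCRIPT):
        subprocess.call(["scp", f, "%s:/tmp/%s" % (host, f)])
    # the remote shell picks up QUADRANT, so the script knows its corner
    remote = ("QUADRANT=%d blender -b /tmp/%s -P /tmp/%s "
              "-o /tmp/tile_%d -F PNG -f 1" % (i, BLEND, SCRIPT, i))
    subprocess.call(["ssh", host, remote])
```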

Just a thought.

Dave S.

When I have the time… which won’t be for a while, I might tinker with the source a bit and see if I can get command-line access to render by parts. It shouldn’t be too hard. That said, I’ve never even looked at Blender’s source before.

Also, as mentioned before, for that to work the camera would have to be orthographic.

Anything new here? I’m sure the Orange guys could use that… and so would I!

If you don’t mind Linux, you can try ParallelKnoppix:
http://pareto.uab.es/mcreel/ParallelKnoppix/

Well, I checked out the other program mentioned somewhere at the beginning of this thread. I use Windows and MacOS… so it should be OK for me. But a solution from Blender itself would be nice…