Affordable Blender cloud rendering with image sequence files. Brenda? Something else?

I’m looking to do affordable Blender rendering in the cloud. I’ve successfully experimented with Brenda for running low-cost render jobs with AWS’s spot market.

Brenda has almost everything I want from cloud rendering, except for one limitation: images need to be packed into the .blend file itself. And that’s the problem.

The render jobs I want to run involve a lot of image sequence files. Like, potentially, hundreds of megabytes of image files.

It wouldn’t be realistic to fit all these image sequence files into the .blend file even if it were possible. Nor would it be any fun to upload all of those image files every time I want to run a job – especially since the image sequence files rarely change once they’re referenced from a .blend file.

I’m not interested in a free distributed render farm where I have to gain credits by letting others render on my computer. I am absolutely willing (and expect) to pay to get fast multicomputer rendering. But I want to keep the price down as much as possible, hence the appeal of Brenda and AWS’s spot market.

If there is no reasonable turnkey solution for what I’m looking to do, I’m thinking of using Brenda for rendering, except I would create my own Blender AMI (AWS machine image). I would upload the image sequence files to the AMI separately, and the project I send to Brenda would then reference the images wherever they live on that AMI. A few months back, I began setting things up like this, but I was a long way from a working solution. If someone has done something like this already, I’m all ears.

Anyway, that’s what I’ve done so far and that’s what I’m looking to do. What are my options?

To my limited knowledge (I haven’t read the Brenda code much), it unzips the uploaded archive to some folder on each instance. You could add a bit of code there to download additional data from some other archive or bucket folder and unpack it into the same folder as the main .blend archive. If you use relative paths in the .blend, things should work OK.
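A minimal sketch of what that hook could look like, assuming boto3 is available on the instance and that Brenda has already unpacked the main project archive into `work_dir`. The bucket and key names are placeholders, not anything Brenda actually uses:

```python
import os
import zipfile

import boto3

def fetch_shared_assets(work_dir, bucket="my-render-assets",
                        key="shared/image_sequences.zip"):
    """Download a zip of rarely-changing assets from S3 and unpack it
    into the same folder where Brenda unpacked the main .blend archive,
    so relative paths in the .blend resolve correctly."""
    local_zip = os.path.join(work_dir, "shared_assets.zip")
    boto3.client("s3").download_file(bucket, key, local_zip)
    with zipfile.ZipFile(local_zip) as zf:
        zf.extractall(work_dir)
```

Since the image sequences rarely change, the same bucket object could be reused across many jobs, so you’d only pay the upload cost once.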

Another way would be, as you suggested, to prepare an AMI that already contains the data that does not change. It should be possible to do it the easy way: pack everything you need into one zip file and run the render script on one instance. This will launch the instance, download your data, and render something. While the instance is running, create a new AMI from it. Now you have an image that contains all the stuff you need in the folder where Brenda unpacks the archive, and whatever must change will simply be overwritten when the instance downloads the new data on each run.
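Creating the AMI from the running instance can be scripted too. A hedged sketch with boto3 (the instance ID and names below are placeholders):

```python
import boto3

ec2 = boto3.client("ec2")

# Snapshot the running render instance into a reusable AMI so the static
# image sequences don't have to be uploaded on every job.
response = ec2.create_image(
    InstanceId="i-0123456789abcdef0",   # placeholder: your instance's ID
    Name="brenda-blender-with-assets",
    Description="Brenda worker with image sequences pre-baked",
    NoReboot=True,  # take the image without stopping the render
)
print("New AMI:", response["ImageId"])
```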

Thanks for the great ideas. I will definitely look into both of these options if I ultimately end up going with Brenda.

One additional note, in case someone is planning to try the same thing. The reason my image sequences are so huge is that, to my knowledge, Blender does not let you switch an image based on the frame number. What I mean is, if you have a 2000-frame animation and an image needs to change 4 times (say, at frames 1, 500, 1000, and 1500), you need to supply 2000 images – one for each frame. So there are a lot of duplicate images when I have image sequences in my Blender projects. (If there is a way around this, please let me know!) A project with lots of image sequences can easily swell to a gigabyte of image sequence files for just 30 seconds of material.

Some archive formats detect duplicate files, store them only once when creating the archive, and then re-duplicate them (restoring the appropriate filenames and locations) when expanding it. This can save a lot of uploading time. I’ve found that WinZip’s Zipx format works this way, though I would be interested in finding a format that doesn’t rely on a paid shareware app. With WinZip’s Zipx format, I was able to squeeze a gigabyte of largely redundant image files down to something like 15 MB, which is quite reasonable.
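One free way to get the same effect would be to deduplicate the files yourself before zipping. A rough sketch of the idea – the `pack`/`unpack` functions and the manifest layout are my own invention, not any existing tool:

```python
import hashlib
import json
import os
import shutil

def pack(src_dir, out_dir):
    """Store one copy per unique file content, plus a manifest
    mapping each original relative path to its content hash."""
    os.makedirs(out_dir, exist_ok=True)
    manifest = {}
    for root, _, files in os.walk(src_dir):
        for name in files:
            path = os.path.join(root, name)
            with open(path, "rb") as f:
                digest = hashlib.sha256(f.read()).hexdigest()
            manifest[os.path.relpath(path, src_dir)] = digest
            blob = os.path.join(out_dir, digest)
            if not os.path.exists(blob):  # keep only one copy per hash
                shutil.copyfile(path, blob)
    with open(os.path.join(out_dir, "manifest.json"), "w") as f:
        json.dump(manifest, f)

def unpack(pack_dir, dest_dir):
    """Re-duplicate files back to their original names and locations."""
    with open(os.path.join(pack_dir, "manifest.json")) as f:
        manifest = json.load(f)
    for rel, digest in manifest.items():
        dest = os.path.join(dest_dir, rel)
        os.makedirs(os.path.dirname(dest), exist_ok=True)
        shutil.copyfile(os.path.join(pack_dir, digest), dest)
```

After `pack()`, an ordinary zip of the output folder stays small because the duplicates are already gone; running `unpack()` on the instance restores the full sequence.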

So if I go the Brenda route, there will need to be some sort of file compression consideration as well. Either I upload the compressed archive, uncompress it, and put the results in a bucket or AMI…or I include the highly compressed image sequences in the zip archive that is sent over to Brenda, and then alter Brenda to unpack that archive locally before proceeding. I’m not sold on any of these ideas yet, though. Just trying to figure out the best way to go about this. Any suggestions are welcome!

You can switch an image based on the frame; you need a little Python, but it’s certainly possible. You use something called a frame change handler, which is code that is executed every time the frame changes. Within that code block you test whether the current frame is one of your special frames and, if so, you swap out the image path of (presumably) some node. To test for 4 frames/images you would need about 8 lines of code.
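Something along these lines – an untested sketch where the material name, node name, and image paths are placeholders for your own setup (newer Blender versions pass an extra depsgraph argument to handlers, hence the `*args`):

```python
import bpy

# Which image to show from which frame on; paths are placeholders.
FRAME_IMAGES = {
    1:    "//textures/img_a.png",
    500:  "//textures/img_b.png",
    1000: "//textures/img_c.png",
    1500: "//textures/img_d.png",
}

def swap_image(scene, *args):
    path = FRAME_IMAGES.get(scene.frame_current)
    if path is None:
        return  # not one of the special frames
    # Assumed setup: an Image Texture node named "Image Texture"
    # in a material named "Material".
    node = bpy.data.materials["Material"].node_tree.nodes["Image Texture"]
    node.image = bpy.data.images.load(path, check_existing=True)

bpy.app.handlers.frame_change_pre.append(swap_image)
```

Register the handler before rendering and the swap happens automatically on every frame change, including during animation renders.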

You can use a Mix, Switch, or similar node in the node editor (shader or compositing, does not matter) to switch between still images on the appropriate frames. Just manually keyframe the factor value for Mix or the input value for Switch.
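If you’d rather do the keyframing from a script than by hand, a sketch (material and node names are placeholders; the constant interpolation at the end makes the switch a hard cut rather than a crossfade):

```python
import bpy

# Assumed setup: a MixRGB node named "Mix" with the two images plugged
# into its color inputs, inside a material named "Material".
fac = bpy.data.materials["Material"].node_tree.nodes["Mix"].inputs["Fac"]
fac.default_value = 0.0
fac.keyframe_insert("default_value", frame=1)
fac.default_value = 1.0
fac.keyframe_insert("default_value", frame=500)

# Constant interpolation so the image snaps at frame 500 instead of fading.
tree = bpy.data.materials["Material"].node_tree
for fcurve in tree.animation_data.action.fcurves:
    for point in fcurve.keyframe_points:
        point.interpolation = 'CONSTANT'
```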

Oh yeah, duh, you could just keyframe MixRGB nodes. If you had a lot of them it might get ugly and hacky. With Python you could even make it so the filenames of the images determine which frame they switch on.

It looks like a modified version of Brenda console/Brenda will work for this. I’ve been experimenting most of today with the Brenda console and Brenda, and if you send up a zipped version of the .blend file plus the image sequence files in their appropriate relative locations, you can indeed successfully render image sequences.
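For reference, packing the archive so the relative paths survive is straightforward; a sketch (the project layout is a placeholder for mine):

```python
import os
import zipfile

# Assumed layout: my_project/ contains scene.blend plus textures/seq/...
# referenced from the .blend via relative paths.
project_dir = "my_project"
with zipfile.ZipFile("job.zip", "w", zipfile.ZIP_DEFLATED) as zf:
    for root, _, files in os.walk(project_dir):
        for name in files:
            path = os.path.join(root, name)
            # Store paths relative to the project root so the .blend's
            # relative references still resolve after Brenda unpacks it.
            zf.write(path, os.path.relpath(path, project_dir))
```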

As for huge projects with lots of image sequences, upon looking more closely at the Brenda documentation, it seems that you can use EBS volumes instead of uploading a project file every time. I will most likely look into this option eventually.

Did you consider BitWrk? It is a distributed rendering service, but it is based on Bitcoin, so there is no need to ‘earn’ credits. It also tries hard to handle data upload well by exploiting redundancies in the data.