I have used Amazon EC2 virtual machines and James Yonan’s Brenda scripts for a few recent projects and thought I’d write down what I did, what I would do differently, and so on.
As I set up the scripts and the EC2 side some time ago, I no longer remember the details of that part. Unfortunately the brendapro.com forum, which had a very good walkthrough for setting up your own AMI, is down. I used that information to set up an AMI, install Brenda and Blender on it, and get everything running. If someone has these steps saved somewhere or can write them down again, please do!
First off, I’m on Windows, so to use the Brenda scripts I launch an Ubuntu virtual machine in Oracle VirtualBox. There are other ways, but I found this the easiest.
Where to get the scripts?
Which instance to use?
I have mostly used c4.8xlarge VMs because they seem to give the best bang for the buck (a 36-core machine for around $0.50 per hour). The new c5 instances are probably also nice, but I haven’t tried them. Always use spot instances! They are way cheaper than on-demand, and I have had only one occasion where spot instances were not available in the quantity I wanted. Linux instance runtime is now billed by the second, so the old worry of launching lots of instances and paying for full hours is gone.
How much does it cost?
Data transfer costs are relatively low, so the main cost is VM runtime. For example, if I want to render 60 seconds of material, that is 1500 frames at 25 fps; if one frame takes 5 minutes, the total runtime comes to 7500 minutes, or 125 hours. At $0.50 per hour that is $62.50, which is not bad for that amount of footage. And if I launch, let’s say, 100 instances, I get those 60 seconds rendered in 1 hour and 15 minutes.
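The arithmetic above is easy to wrap in a small calculator for planning a render session. The prices and per-frame times below are just the example figures from this post, not current AWS rates:

```python
def render_cost(seconds, fps, minutes_per_frame, price_per_hour, instances=1):
    """Estimate total spot cost and wall-clock time for a render job."""
    frames = seconds * fps
    total_hours = frames * minutes_per_frame / 60   # total compute hours
    cost = total_hours * price_per_hour             # spot billing is per second,
                                                    # so hours * rate is a fair estimate
    wall_clock_hours = total_hours / instances      # work splits across instances
    return frames, cost, wall_clock_hours

# The 60 s / 25 fps / 5 min per frame / $0.50 per hour example from the text:
frames, cost, wall = render_cost(60, 25, 5, 0.5, instances=100)
# -> 1500 frames, $62.50, 1.25 hours of wall-clock time on 100 instances
```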
How to get more instances?
Fill in the form in the Amazon admin panel to increase your spot instance limit. The default is pretty low (around 5–10 currently, I believe), but I had no problem raising it to 100. My request took about two days to process.
How to get data up and down?
Currently I use CloudBerry S3 Explorer, which is rather cheap software and supports multithreaded uploads and downloads. When downloading lots of frames (tens of GBs of data), that speed helps a lot.
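If you’d rather script the transfer than use a GUI client, boto3’s transfer manager can parallelise downloads too. This is a sketch under some assumptions: boto3 is installed, AWS credentials are configured, and the bucket/prefix names are placeholders for wherever your frames land:

```python
def local_name(key, prefix):
    # Strip the S3 prefix from a key to get a flat local file name.
    return key[len(prefix):].lstrip('/')

def download_frames(bucket, prefix, dest='.'):
    """Download every object under bucket/prefix with multithreaded transfers."""
    import os
    import boto3
    from boto3.s3.transfer import TransferConfig

    s3 = boto3.client('s3')
    cfg = TransferConfig(max_concurrency=10)  # parallel parts per file
    paginator = s3.get_paginator('list_objects_v2')
    for page in paginator.paginate(Bucket=bucket, Prefix=prefix):
        for obj in page.get('Contents', []):
            target = os.path.join(dest, local_name(obj['Key'], prefix))
            s3.download_file(bucket, obj['Key'], target, Config=cfg)

# e.g. download_frames('my-render-bucket', 'renders/shot01', dest='frames')
```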
How to render different scenes (I use a different scene for each shot in an animation)?
As the Brenda scripts don’t have an argument for setting the active scene, I prepared a frame template for each scene, named after that scene. In each template I set the active scene using Blender’s basic command-line scene argument. To render different scenes, I then simply create a job per scene using these frame templates.
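A small helper can generate these per-scene templates. The command line follows the usual Brenda frame-template form (the `$START`/`$END`/`$STEP`/`$OUTDIR` variables are substituted by Brenda itself); `-S` is Blender’s set-active-scene argument and must appear before `-a` so it takes effect. Treat the exact flags as a sketch to adapt to your own template:

```python
import os

# {scene} fills in both the -S argument and the output file prefix.
TEMPLATE = ('blender -b *.blend -S {scene} -F PNG '
            '-o $OUTDIR/{scene}_###### -s $START -e $END -j $STEP -t 0 -a\n')

def write_templates(scenes, directory='.'):
    """Write one Brenda frame template file per scene; return the paths."""
    paths = []
    for scene in scenes:
        path = os.path.join(directory, 'frame-template-' + scene)
        with open(path, 'w') as f:
            f.write(TEMPLATE.format(scene=scene))
        paths.append(path)
    return paths

# e.g. write_templates(['Shot01', 'Shot02']) then point each brenda-work
# job at its own template file.
```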
How to make sure the settings in each scene are correct for EC2 rendering?
Fiddling with settings, and above all switching between local GPU rendering and CPU rendering for EC2, can leave the render settings in a mess. There are two non-manual ways to straighten this out: 1) run a Python script that fixes the render settings for every scene in the blend file before you upload it to S3; or 2) use a Python render script (stored as a text block) that sets all the necessary settings, and run it via the Python script argument in the frame template. The second variant is actually pretty neat because it lets you modify the script for each render session as needed. The main things to set are the render device, tile size, sample count, and so on.
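A minimal sketch of such a settings script (the device, tile and sample values are example choices, not recommendations). It is written as a plain function so the same code can be pasted into a Blender text block and called with `bpy.context.scene`, or applied to every scene in `bpy.data.scenes` for variant 1:

```python
def apply_cpu_settings(scene):
    """Force one scene into CPU-friendly Cycles settings for EC2 rendering."""
    r = scene.render
    r.engine = 'CYCLES'
    scene.cycles.device = 'CPU'   # EC2 c4 instances render on CPU
    r.tile_x = 16                 # small tiles tend to suit CPU rendering
    r.tile_y = 16
    scene.cycles.samples = 500    # example sample count; set per project
    return scene

# Inside Blender this would be, e.g.:
#   import bpy
#   apply_cpu_settings(bpy.context.scene)
```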
What can go wrong?
Lots of things: the blend file doesn’t contain all linked data (textures not packed, for example), the frame templates are set up wrong, drivers do funny things (force Python script auto-run with the command-line argument in the template), and so on. Usually I launch one VM first and check the first frame or two to see that things actually render, and only then launch more instances. The tail-log command in brenda-tool is pretty handy, especially when one frame takes a long time to render; it lets you probe whether an instance is actually rendering or has thrown an error.
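Another way to gauge progress without logging into instances, as a sketch on top of the workflow above: Brenda keeps pending frames in an SQS work queue, so the approximate message count tells you roughly how many frames are still unclaimed. This assumes boto3 and AWS credentials, and the queue name below is a hypothetical placeholder for whatever `WORK_QUEUE` is set to in your Brenda config:

```python
def remaining(attrs):
    # Pull the approximate depth out of an SQS get_queue_attributes response.
    return int(attrs['Attributes']['ApproximateNumberOfMessages'])

def frames_left(queue_name='brenda-work-queue'):
    import boto3  # imported lazily so remaining() is usable without AWS access
    sqs = boto3.client('sqs')
    url = sqs.get_queue_url(QueueName=queue_name)['QueueUrl']
    return remaining(sqs.get_queue_attributes(
        QueueUrl=url, AttributeNames=['ApproximateNumberOfMessages']))
```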
That’s it for the first quick batch of thoughts; any ideas, suggestions, questions etc. are welcome!