Cloud rendering with Elastic Beanstalk

Hello,

I am planning to render 3D scenes in the AWS Cloud and am now trying to plan some infrastructure.

I know I can set up an Ubuntu server on EC2; however, I am wondering whether it is also possible to run Blender from a Python script and deploy that with Elastic Beanstalk.

Does anyone have any experience with this?

Best regards

Elastic Beanstalk appears to be built primarily around web applications, so I assume it scales on traffic rather than compute.

For Blender, and similarly heavy compute operations, you’ll likely want to use AWS Batch instead. AWS Lambda may be an alternative, and perhaps slightly easier to set up, but I’ve never needed anything but Batch.

For management you’ll likely want to read up on Blender Foundation’s render management software, Flamenco, if you don’t want to code something custom.

Keep in mind that unless you’re comfortable with Linux admin and Python, it’s going to be quite an uphill battle.


Thanks for your response!

I think it makes sense if I describe my intention in a bit more detail. I want to build a web app that can send a render request to the cloud via an HTTP request or the AWS SDK.

Ideally, it should be able to process multiple render jobs in parallel. Note that we are only ever talking about static images, not animations, so there is no need to split frames. I am reading up on Flamenco, as I think the manager would be very helpful for parallelism, but for now I will probably start without a manager.

I would like to use CUDA (or, better, OptiX), which is why I need access to an NVIDIA GPU. I’m not sure whether OptiX is even possible, but CUDA should definitely work. Unfortunately, Lambda functions don’t seem to be able to access a GPU, so this option is already out.
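For reference, something along these lines is what I have in mind for the Blender side. This is only a rough sketch for a headless render script (run with `blender -b scene.blend -P script.py`); the device type and output path are placeholders, and it assumes an OptiX-capable GPU is actually present:

import bpy

# Select the Cycles compute backend; fall back to "CUDA" if OptiX is unavailable.
prefs = bpy.context.preferences.addons["cycles"].preferences
prefs.compute_device_type = "OPTIX"
prefs.get_devices()            # refresh the detected device list

for device in prefs.devices:
    device.use = True          # enable every detected device

scene = bpy.context.scene
scene.render.engine = "CYCLES"
scene.cycles.device = "GPU"
scene.render.filepath = "/tmp/render_output.png"   # placeholder output path

bpy.ops.render.render(write_still=True)            # render a single still image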

I don’t have any experience with AWS Batch yet. However, since I only ever want to render individual images, I wonder whether I need it at all, or could just use EC2 directly. Of course, as I said, I would also like to be able to submit a new render job without having to wait for the first one to complete, but the jobs are independent of each other. How do you see this?

Of course I want to build the whole thing as cost-efficiently as possible. In terms of EC2, I’m leaning towards a g4dn.2xlarge. How fast it actually is, I will have to test.

I haven’t looked at prices for a while, but GPU instances are typically far worse in cost-to-performance unless you’re doing something related to machine learning. And as spot instances they’re incredibly unstable and illogically priced.

Batch is basically a render queue manager. You submit a ‘job’ (JSON) to it using the AWS SDK and it launches Docker-based instances based on the compute requirements of the job. You don’t need to think about instance types; you just tell it how many vCPUs/GPUs and how much RAM the job needs, and it handles the rest. I think Lambda is a more abstracted and easier-to-use version of Batch for rendering purposes, but I’ve not had a need for it yet.
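Roughly, submitting a job from Python looks something like this. This is just a sketch with boto3; the job queue, job definition, and file paths are placeholder names you would create beforehand, not real resources:

import boto3

batch = boto3.client("batch")

# Submit one render job; the job definition would point at a Docker image
# that has Blender installed. All names here are placeholders.
response = batch.submit_job(
    jobName="render-still-0001",
    jobQueue="render-queue",
    jobDefinition="blender-render:1",
    containerOverrides={
        "command": ["blender", "-b", "/data/scene.blend", "-f", "1"],
        "resourceRequirements": [
            {"type": "GPU", "value": "1"},   # request one GPU for the container
        ],
    },
)

print(response["jobId"])   # keep this ID to poll the job status later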


Thanks again!

I have been looking into AWS Batch for a while now, and I have learned that ECS or EKS would probably be better suited for my project. It’s important to me that, when rendering small images, I don’t have to wait minutes for AWS Batch just to start the container.

AWS Batch Dos and Don’ts: Best Practices in a Nutshell states:

However, not every workload is great for Batch, particularly:

If you need a response time ranging from milliseconds to seconds between job submissions, you may want to use a service-based approach using Amazon ECS or Amazon EKS instead of using a batch processing architecture.

It seems that AWS Batch is designed more for longer processes, like rendering animations.
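To illustrate what I picture on the web-app side with such a service-based approach, here is a rough sketch with boto3. The cluster, task definition, and container name are placeholders, and it assumes an ECS cluster backed by GPU EC2 instances already exists (Fargate is not an option for the GPU case, since Fargate tasks don’t get GPUs):

import boto3

ecs = boto3.client("ecs")

# Trigger one render task on an existing cluster; all names are placeholders.
response = ecs.run_task(
    cluster="render-cluster",            # placeholder cluster with GPU instances
    taskDefinition="blender-render:1",   # placeholder task definition
    launchType="EC2",
    count=1,
    overrides={
        "containerOverrides": [
            {
                "name": "blender",       # container name from the task definition
                "command": ["blender", "-b", "/data/scene.blend", "-f", "1"],
            }
        ]
    },
)

print(response["tasks"][0]["taskArn"])   # task ARN to poll for completion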

You said you’ve only used AWS Batch so far - so you have no experience with ECS?

How are your startup times with AWS Batch? Is rendering a small image in under 30 seconds simply not possible? Does it change anything if you use Fargate instead of EC2?