I’ve been struggling with my Cycles render times: I just completed an animation that took over 80 hours to render 90 s of video on my RTX 2080 SUPER + GTX 970.
So I wondered whether I could get better performance on AWS EC2, which offers some beefy GPUs. I thought I’d share my results, as they’re not as impressive as I’d hoped given the price tags of the K80 and V100 GPUs.
Time to render the reference frame (1920 × 1080, 512 samples):

- My PC (RTX 2080 SUPER + GTX 970): 2m41s
- p2.xlarge (Tesla K80): 8m52s
- p3.2xlarge (Tesla V100): 2m00s
Is this expected? Any hints for better performance on EC2? The cost of rendering a full animation could be quite significant on AWS (for me, a hobbyist, not Pixar).
Interestingly, I got this working with AWS Batch. It’s quite a neat workflow: my local script takes the .blend file path and the render parameters (scene, size, samples, frames, etc.), uploads the .blend to S3, and submits an AWS Batch job. AWS Batch takes care of creating the ECS cluster and EC2 instances, and scales up and down (to zero) as needed, depending on how many jobs you have submitted and your compute environment setup. It uses Spot Requests to minimize the EC2 costs. Then I just pick up the results from a different S3 bucket.
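For anyone curious what the submit side looks like, here’s a rough sketch of such a script using boto3. The bucket, job queue, and job definition names are hypothetical placeholders, and the Blender CLI flags assume the .blend is already set up for Cycles:

```python
# Rough sketch of a local submit script for the workflow described above.
# Bucket, queue, and job definition names are hypothetical -- adjust for
# your own AWS Batch setup.

def build_render_command(blend_name, scene, start, end, out_prefix):
    """Build the headless Blender CLI the Batch container would run.

    Note: -s/-e/-o must appear before -a, and arguments after `--` are
    passed through to the render engine (here: forcing Cycles onto CUDA).
    """
    return [
        "blender", "-b", f"/tmp/{blend_name}",  # -b = background (no UI)
        "-S", scene,                            # scene to render
        "-o", f"{out_prefix}/frame_####",       # output path pattern
        "-s", str(start), "-e", str(end),       # frame range
        "-a",                                   # render the animation
        "--", "--cycles-device", "CUDA",        # engine-specific args
    ]

def submit_render_job(blend_path, scene, start, end):
    import boto3  # imported here so the helper above stays dependency-free

    blend_name = blend_path.split("/")[-1]

    s3 = boto3.client("s3")
    s3.upload_file(blend_path, "my-render-input", blend_name)  # hypothetical bucket

    batch = boto3.client("batch")
    batch.submit_job(
        jobName=f"render-{blend_name.replace('.', '-')}",
        jobQueue="render-queue",        # hypothetical Spot-backed queue
        jobDefinition="blender-gpu",    # hypothetical GPU job definition
        containerOverrides={
            "command": build_render_command(blend_name, scene, start, end,
                                            "/tmp/out"),
        },
    )
```

The container itself then only needs an entrypoint that pulls the .blend from S3 before rendering and pushes the frames to the output bucket afterwards.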
The render actually runs in a Docker container on the EC2 instance, but I believe this uses Nvidia Docker (the NVIDIA Container Toolkit) for GPU passthrough, and I have confirmed it is actually using the GPU to render.
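If you want to check the GPU passthrough yourself, a quick sanity check (assuming the NVIDIA Container Toolkit is installed on the host; the image tag is just an example) is:

```shell
# Run nvidia-smi inside a CUDA base image; if the GPU is passed through
# correctly, it should be listed just as it is on the host.
docker run --rm --gpus all nvidia/cuda:12.2.0-base-ubuntu22.04 nvidia-smi
```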