Caution, wall of text ahoy.
Blender is great. Its goals seem to be set high, with a feature set encompassing most of the CG/video functionality usually found in separate commercial applications.
In Blender you can:
- Create, edit and animate 3D content
- Paint and sculpt in 3D
- Edit video
- Track cameras and planes
- Make games
So we are already aware (or should be) that Blender can't be the best in all of those fields if development resources don't cover the refinement needed. It is my opinion, however, that with small tweaks Blender can be an application where a project can be done from start to finish with little or no need for third-party applications.
You may ask: why would one want or need that? Well, as you will see, there are times when an import/export process eats up so much computer time and artist supervision that an integrated application is something one can really appreciate.
So where does Blender fall short? I will try to explain through real world commercial projects I’ve done in Blender and where I got stuck. Feel free to correct me if there are any inaccuracies regarding Blender’s capabilities.
Case 1. Closing titles for a film project
I've been tasked with creating some animated titles for a movie that has just been released at festivals, so I can't really show any images/video since it isn't widely released yet.
The idea was that it's supposed to feature photographs of the main actors on a dirty background, with names floating in 3D above them. The camera was supposed to emulate a handheld feel, with quick jumps between different photographs. The duration was about 3.5 minutes at 2K/24p. So I went to After Effects for this, as it seemed like a fit for such a project.
The setup was fast and easy, right up until I needed to animate the camera, and then the crash fest/slowness started. After Effects isn't good with high-res images, especially if motion blur is involved. And this was a 16K background with photographs at 4-5K pixels each.
Render times were about 5-7 hours for the 3.5 minutes at 2K. There was also the finicky graph editor for the camera animation, which had given me some grey hairs in the past, so I tried to use mostly expressions, but it soon became clear that it needed some tweaking by hand, and the deadline was approaching.
So I fired up Blender. At first it was for kicks only, just to see if it would crash or not with all the images loaded, and also to see how it would get the vector art (titles) in. An hour later I had the sequence completed and playing in realtime in Blender. A week later, the project was finished, rendered out of After Effects and edited in Premiere for the final deliverable.
Why go back to After Effects, you say? Go back with about 5500 2K frames × 3 layers (bg, photos, titles) that need to be rendered out of Blender, then again from After Effects, and then again as ProRes 4444 from Premiere…
So what happened? I needed one small effect that required the Blender compositor, with the output mapped back onto a distorted plane (the photographs). I needed to highlight faces in the photographs through a mask and blur the rest of the photo when the camera was over them. And I needed that to work in 2D, and then map the output back. But you can't do that: there is no texture output from the compositor.
The only way is to render out the sequence in whole and map it back. I'm not doing that, not with 5500 5K frames (remember, the photos were 4-5K). I don't need that; it is not the final result. And then there is the issue of sync and speed tweaks. I need all of that live and in context to be able to judge the final result.
So I did what hurt the least. I rendered everything out without the effect and imported it into After Effects, and thanks to the great After Effects exporter by Bartek Skorupa I got the Blender camera and location nulls (Empties, whatever) back into After Effects, parented the layer with the effects to them, and finished the title sequence that way. Now that After Effects only had to deal with 2K, it worked well.
So how much time was wasted by the Blender compositor not being able to feed an intermediate result back into a texture without going to disk first? About 3-4 days.
And it's silly, because the compositor already has the result in memory; it updates on every frame/value change. It's a freaking bitmap, dammit. Get the pointer to the result and assign it to an image texture, same as a disk-based image, jeez.
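For what it's worth, the compositor result can already be reached from Python: when a Viewer node is active, its output is exposed as an Image datablock named "Viewer Node", so a script can copy those pixels into another image and use that as a texture. A minimal sketch of the idea (the `viewer_to_texture` helper and the target image name are my own inventions, and this still has to be re-run on every frame change, which is exactly why a proper live texture output would be better):

```python
def copy_pixels(src_pixels, width, height):
    """Return a flat RGBA float list copied from src_pixels (len = w*h*4)."""
    expected = width * height * 4
    if len(src_pixels) != expected:
        raise ValueError(f"expected {expected} floats, got {len(src_pixels)}")
    return list(src_pixels)

def viewer_to_texture(target_name="CompositeTexture"):
    """Copy the compositor Viewer node result into an Image usable as a texture.

    Hypothetical helper; only works inside Blender with an active Viewer node.
    """
    import bpy  # only available inside Blender
    viewer = bpy.data.images["Viewer Node"]  # the Viewer node's result
    w, h = viewer.size
    target = bpy.data.images.get(target_name)
    if target is None or tuple(target.size) != (w, h):
        target = bpy.data.images.new(target_name, width=w, height=h,
                                     float_buffer=True)
    # Slice assignment writes the whole buffer in one go (much faster
    # than per-pixel assignment).
    target.pixels[:] = copy_pixels(viewer.pixels[:], w, h)
    return target
```

It's a band-aid, not a fix: the copy is manual, per-frame, and invisible to the dependency graph, so nothing downstream updates automatically the way a real compositor texture output would.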
It's not only for the limited use case I had. Think of the things you can create and animate in the compositor, and then think how they could be used as textures. Sculpting and texture-painting stencils, anyone? All-procedural textures using masks and every other compositor feature, with or without finite resolution (with an adaptive rasterizer, see the SVG project). Someone was talking about rigging with displacement (hey, I did this 10 years ago with the XSI compositor), or controlling wrinkles from the compositor with drivers. Motion graphics, hell yeah? Add baked input layers from the render, like curvature/AO, and you essentially have an integrated Substance Designer. BTW, that's also something I did in XSI 10 years ago: rendermapping the procedurals/AO on data-change events so I could use them in other materials/effects. Here is the proof:
Anyway, this is part 1 of Blender and the integration of features: my take on how to improve Blender with small steps, bringing big results.