Project workflows: a lesson from "real" film-making

If you ever get to watch “actual” filmmakers at work, you’ll see that they customarily have several cameras positioned to capture whatever action is taking place. (If any sort of camera-movement is required, today they usually have a robotic mechanism to do it so that each “take” is exactly the same.) They have a list of pre-planned “shots,” but they might also extemporaneously come up with another one, because “this is the one-and-only chance they have to get it ‘in the can.’”

Of course these days they’re not using “film,” but high-resolution video cameras. There is no plastic, and there is no “can.” So it goes.

In this way, they accumulate “footage.” And there are often other staff members elsewhere on set doing rough video edits on the fly, so that the director can see and evaluate the emerging product as it is being produced. (Quite a change from the old days, when they had to wait for “the dailies” the next day.)

There are certain camera moves and editing practices used in “real” filmmaking to which all of our eyes are accustomed. Even though we do not consciously notice them, we instantly (and subliminally) sense when they’re not there. We really don’t think about “how the magic was done.” It never occurs to us that the famous “shower scene” in Psycho required 78 camera setups and 52 editing cuts.

When you look at some – amateur – CG shorts, one thing that you might notice is that the camera moves and shot-sequence don’t “feel” right. You might have an entire scene that is viewed from only one camera angle in one continuous “take,” or a camera dolly-move that lasts for more than thirty seconds. The technical reason is probably that it took so damned long to render: the only render they did was the “final” one – laborious and expensive – so they worked everything out in advance.

I’d now like to propose a different way of doing it . . . a more efficient way which will lead to better results, faster: a strategy based on the “real movie” production workflow.

Sir Alfred Hitchcock referred to the editing process as “assembly.” I think he was on to something. Faced with – at that time – literally miles of very expensive celluloid, the editor is the one who actually “assembled” that infamous shower scene. The cameramen merely recorded it. (And, simple math tells us that some of those “camera setups” were never used. The point being that the editors had them to choose from.)

Yes. At the time they had “miles of plastic, a razor blade, and cellotape.” From this they made immortal movie magic, and scared the hell out of millions of ticket purchasers. :smiley:

What we’d really like to do, then, is to be able to take a lesson from what “real” filmmakers do – of course, “minus the plastic” – in order to make our computer-generated films more realistic and to avoid wasted(!) computer time. Here, then, is a workflow that works really well for me.

• Set up each “shot” on the set, and work out what the virtual actors are going to do. Next, place cameras on the set at various positions, and name each one. “The more cameras, the merrier.” Go ahead. Splurge.

• Through the mechanism of “linked scenes,” film the action from each of the cameras, storing the outputs separately. Shoot more film than you think you need – capture the actors walking into the scene and walking out of it. Use the “Workbench” renderer, and use the “Stamp” feature to identify each frame: filename, scene (camera) name, timecode, frame number, etc. The results will be comparatively crude, but they will be accurate, and that’s what matters at this point. When the “real” renders eventually replace (some but not all of) these quickly-rendered frames, they will match exactly.

• Now(!), go straight to the editing room. You are going to “edit, then shoot.” You can use Blender’s internal editor or something else. You’ll be quite surprised, initially, at how many creative decisions you make at this point.

  • Eventually, you wind up with a “shot list.” A list of shot-names, scene (camera) names, and frame-ranges. This is what you need to render. This is also what the music and sound-effects people will need.

• Render each frame-range that you need, using the renderer of your choice and of course without “stamps.” The output now replaces the original stand-in footage, using the same filenames. (Empty the destination directory first.) Go back to the video editor very frequently to check your work.
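The shot-list-driven “final render” step above can be sketched as a small script. Everything here is a hypothetical example – the `.blend` filename, scene names, and output pattern are made up – but the command-line flags (`-b` background, `-S` scene, `-o` output, `-s`/`-e` frame range, `-a` render animation) are Blender’s real batch-render options:

```python
# A minimal sketch: the shot list drives one background-render
# command per frame range. File and scene names are hypothetical.

SHOT_LIST = [
    # (shot name, scene/camera name, first frame, last frame)
    ("opening_wide",  "Cam_Wide",    1,  96),
    ("closeup_react", "Cam_CloseA", 40,  72),
    ("exit_dolly",    "Cam_Dolly",  97, 180),
]

def render_commands(blend_file, shot_list, out_dir):
    """Build one Blender background-render command per shot."""
    cmds = []
    for shot, scene, start, end in shot_list:
        cmds.append(
            f"blender -b {blend_file} -S {scene} "
            f"-o {out_dir}/{shot}/frame_#### "
            f"-s {start} -e {end} -a"
        )
    return cmds

for cmd in render_commands("film.blend", SHOT_LIST, "//renders"):
    print(cmd)
```

Because the shot list is plain data, the same structure can be handed to the music and sound-effects people, or re-run whenever a frame range changes.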

The key idea is: “edit, then shoot.” You very quickly shoot “miles of film,” not yet knowing how you’re going to “assemble” it, but giving yourself enough leeway to make good creative choices at that time. The “stamp” information tells you everything you need to know about exactly where each frame came from. (I always render to frame-per-file “EXR” files.) The final shot-list, which is completed before any “final” renders are done, tells you precisely what you need to “final render,” and the finals exactly replace the Workbench stand-ins in the emerging final video product that you are constantly reviewing.
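The “finals exactly replace the stand-ins” trick depends on one thing: both passes writing identical per-frame filenames. A sketch, assuming a `shot_####.exr` naming pattern (the pattern itself is my assumption, mirroring Blender’s default four-digit frame padding):

```python
# Sketch: frame-per-file naming so that final renders land on
# exactly the same filenames as the Workbench stand-ins.

def frame_filenames(shot, start, end, pad=4, ext="exr"):
    """One file per frame; same names for stand-in and final passes."""
    return [f"{shot}_{n:0{pad}d}.{ext}" for n in range(start, end + 1)]

standins = frame_filenames("closeup_react", 40, 72)
finals   = frame_filenames("closeup_react", 40, 72)
assert standins == finals  # identical names: the finals overwrite the stand-ins
print(standins[0], standins[-1])
```

As long as the video editor references files by name, it picks up the final frames automatically the moment they are rendered – no re-editing required.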

The “real-render cameraman” is a very expensive employee who costs “a thousand times Union Scale,” whereas “the Workbench cameraman” is an intern. That’s why you do it – to avoid paying the expensive one any more money (and time) than necessary.

Also – “you really don’t appreciate ‘the editing room’ until you’ve given yourself the chance to actually be there.” Choosing the exact frame-number at which to make the next “cut.” Scrubbing back-and-forth to find the “rhythm” of the sequence. This strategy will allow you to “actually be there” with your computer-generated film … without breaking the bank to get there.

“Hope this helps!”



Just as blocking out your model first is important, you should also “block out” your scene first.

“Big-Budget” studios tend to follow the phases of concept art, storyboarding and finally an animatic to determine shots well in advance in the pre-production phase.

Blender (and Grease Pencil) is a powerful concepting tool that can roll the traditional concept art, storyboard, animatic, production and post-production phases into one if you’re an indie creator. In the end, the animatic should dictate what cameras you use and where the cuts are; it is basically the “edited” version, which can then be iterated on to make the final animation.

Good luck!

Yes. And, if you are not a gifted animatic artist – as I unfortunately am not – you can also initially substitute “named polygons, each to scale.” Set them up as linked assets, but initially just let them be dummy geometry with the correct form-factor. Actors can be an assembly of cylinders and lozenges and cones, all to scale.

Set up a scene “as it might be,” and just shoot it. Then another and another and another. “When you can’t draw, this works.”

The human power of imagination is a wonderful tool – you can actually wrangle a dramatic performance from a (literal!) “cone head.” :smiley:

But – the extremely-important thing is: “to scale.” Exactly how big is this room? This prop? This actor? If you grabbed a measuring-tape to find the distance to the camera – as real-world filmmakers are constantly doing – exactly how far would it be?

Notice, also, that I am proposing much more than simple “blocking out.” You can actually cause your initial efforts to become(!) the final scene. (Or at least, a “final-scene candidate.”) One by one, you then replace the “to-scale polygon” linked objects with the real assets. The “final cut” is actually locked before any(!) “final rendering” takes place.
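One way to keep that one-by-one replacement honest is to check each real asset against its stand-in’s real-world dimensions before the swap. A pure-Python sketch – the names and numbers are invented, and in Blender itself you would read these values from an object’s dimensions rather than a hand-written table:

```python
# Sketch: verify a final asset fits its "to-scale polygon" stand-in.
# All dimension data here is hypothetical example data.

STANDINS = {                # width, depth, height in meters
    "actor_alice": (0.5, 0.3, 1.70),
    "prop_sofa":   (2.1, 0.9, 0.85),
}

def matches_standin(name, final_dims, tolerance=0.05):
    """True if the final asset matches the stand-in within `tolerance` meters."""
    return all(abs(a - b) <= tolerance
               for a, b in zip(STANDINS[name], final_dims))

assert matches_standin("prop_sofa", (2.08, 0.92, 0.84))      # close enough
assert not matches_standin("actor_alice", (0.5, 0.3, 1.95))  # too tall
```

If a swap fails this check, the camera framing and measured distances you committed to in the edit are no longer trustworthy – which is exactly what “to scale” is protecting.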

When you take the “initial final-cut version” and superimpose it upon “the finished film,” they match exactly, because you made all of the creative decisions in advance, then replaced the elements one-by-one.