"Edit, Then Shoot"

Yeah, that's just my preference… whatever works for you is what I should have implied… that just happens to work for me…

@MusicAmg Ah, I see it now. A big benefit of your workflow for me is memory usage. Besides that, we can now organize blend files into two categories: the scene in one file, and the modeling of separate objects in all the other blend files. A downside is the loading time each time I want to edit a linked library. A proxy is needed so you can move/translate the linked object.
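For anyone who wants to script that "link, then proxy" step, here is a minimal sketch. The library path and object name ("//assets/chair.blend", "Chair") are just placeholders, and it assumes the 2.7x/2.8-era API where proxies still exist (2.8+ gradually replaces them with library overrides):

```python
import bpy

asset_file = "//assets/chair.blend"  # hypothetical library file, relative to the scene .blend

# Link (not append) the object from the library file.
bpy.ops.wm.link(
    filepath=asset_file + "/Object/Chair",
    directory=asset_file + "/Object/",
    filename="Chair",
)

# A linked object cannot be transformed directly; make a local proxy for it.
linked = bpy.data.objects.get("Chair")
if linked is not None:
    bpy.context.view_layer.objects.active = linked
    bpy.ops.object.proxy_make()
```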

Thanks

The various kinds of “linking” have to do with exactly what is linked. (I’ve never particularly cared for the terminology.) After some looking around I found this description:

Each Blender object type (mesh, lamp, curve, camera, etc.) is composed of two parts: an Object and Object Data (sometimes abbreviated to ObData):

“Object”: holds information about the position, rotation, and size of a particular element.
“Object Data”: holds everything else. For example: […]

Each object has a link to its associated object-data, and a single object-data may be shared by many objects.
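You can see this split directly in Python. A small sketch (the object and mesh names are made up, and it assumes the 2.8+ collection API):

```python
import bpy

mesh = bpy.data.meshes.new("SharedMesh")        # the "Object Data" (ObData) block

obj_a = bpy.data.objects.new("Chair.A", mesh)   # two Objects...
obj_b = bpy.data.objects.new("Chair.B", mesh)   # ...sharing one mesh datablock

bpy.context.collection.objects.link(obj_a)      # (2.7x used scene.objects.link instead)
bpy.context.collection.objects.link(obj_b)

obj_b.location.x = 3.0                          # position/rotation/scale live on the Object...
print(obj_a.data is obj_b.data)                 # ...while the geometry is shared: prints True
print(mesh.users)                               # user count of the shared ObData: 2
```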

The referenced page goes on to describe the various kinds of linking that are available.

It is very(!) important that you have a clear conceptual understanding of the hierarchy of “data blocks” that make up the Blender “DNA and RNA,” how these things relate to one another, and how they may evolve in subsequent versions of the system. The “Outliner” actually gives a lot of insight into this, if you poke around with it long enough …

A key reason for using linked libraries is that you can make a change once and the change will by definition be reflected in every other file that links to it. As your project gets bigger this becomes an important consideration. The various “How It Was Done” videos that have been made for official Blender movies over the years always talk very extensively about this subject.


An important realization about this workflow is that “these are not mere ‘animatics.’” They will become the real shots, if they survive the editing process. You "get behind the camera and start shooting" as quickly as you can, knowing that what you are doing is not wasted effort. Because I basically can’t draw, this is a way that works for me to try things out. All it takes is a good imagination. It pushes you to think about choreography and cinematography, and, yeah, story-telling, very early on.

So very true… This version of Blender is going to be so much fun for me. I'm just waiting on the key things to function, like the audio baking in Animation Nodes. The things I will be able to do are just mind-blowing. Since I compose music, I will be able to trigger ANY animation through audio.

As an example, a piece of music, or a song, may be made up of several instruments, each recorded on its own “track”. I can render each instrument on its own and use that as the basis for triggering animations. (Each individual audio file would be the exact same length in time, regardless of whether it contains silence relative to the actual “song”.) So, since baking doesn’t use or require that audio for playback, all that is needed is the completed stereo “song” in the VSE timeline, which is the exact same length as the individual baked “tracks” (known as stems in audio production).
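Animation Nodes has its own sound-baking nodes, but the same per-stem idea can be roughed out with Blender's built-in “Bake Sound to F-Curves”. A sketch only; the stem filename, frequency band, and the driven property are hypothetical:

```python
import bpy

obj = bpy.context.active_object

# Keyframe the property the audio should drive, so an F-curve exists to bake into.
obj.keyframe_insert(data_path="scale", index=2, frame=1)

# Bake one stem onto that F-curve. This operator normally runs in the Graph Editor,
# so from a script you would typically need a context override.
bpy.ops.graph.sound_bake(
    filepath="//stems/drums.wav",  # one instrument, same length as the full mix
    low=60.0, high=250.0,          # frequency band to respond to (e.g. kick/snare range)
)
```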

oh what fun! lol


Blender 2.8:

Blender 2.8 makes some changes to the workflow described above – “OpenGL Preview” appears to be gone, but we have the very advanced EEVEE engine (based on real-time game technology), and we have a much less discussed but very fast renderer known as Workbench. It appears to me that Workbench is an OpenGL-based renderer that is much cleaner and better than Preview. (I’m still experimenting, as are we all.)

(The “Stamp” feature is now in the Output tab under Metadata.)

The basic workflow idea, though, remains the same: use the fastest available rendering method to shoot film, then use this footage to edit your show together before you settle down to the process of rendering the actual shots. Use stand-in objects that are to scale and perhaps display labels so you can immediately see what they are. Use asset-linking so that you can replace the stand-ins with the actual models. “Stamp” all the frames with relevant information.
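For what it's worth, the engine choice and the Stamp settings can also be set from Python. A small sketch assuming the 2.8 API (the note text is just an example):

```python
import bpy

scene = bpy.context.scene

# Use the fastest engine for the rough pass; swap to 'BLENDER_EEVEE' or 'CYCLES' later.
scene.render.engine = 'BLENDER_WORKBENCH'

# Stamp every frame with the metadata you'll want in the editing room.
scene.render.use_stamp = True
scene.render.use_stamp_frame = True
scene.render.use_stamp_camera = True
scene.render.use_stamp_scene = True
scene.render.use_stamp_note = True
scene.render.stamp_note_text = "rough pass, not final lighting"
```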

A key part of this workflow is that you “shoot film, and don’t care how much of it winds up on the cutting room floor.” Now, Eevee has made this considerably easier to do, and it quickly gives you renders that are much closer to “final.” Nonetheless, this can also be distracting – it tempts you to do too much too soon. There will be time to get each shot “looking just right” after you have finalized what the shots are going to be. You want to “get into the editing room” as quickly and as early as possible.

Yeah, often it is best to create a very abstract version of the final rendering first, not only because you want to get it to the compositor/cutter/reviewer guy as soon as possible, but also because it makes designing the image composition easier.

Now, some will probably say that this is nonsense because you usually work from concept images, which of course already have the image composition nailed down. Unfortunately, in reality there are a lot of jobs (usually lower-budget advertisements, but sometimes higher-budget ones as well) where your concept images consist of a few spoken words over the telephone. :)