Speed up the modelling-to-animation pipeline

Hello guys, wouldn't it be cool if we could continue modelling a mesh whilst using that same mesh to animate with…

For example, I am modelling a character which isn't finished, but I decide to link that character into my previz animation… whilst still being able to work on the original character…

Do you mean to model a character after you have rigged it? If so, then you just adjust the model and then adjust the weights.

If you limit the rigging to “bone envelopes” only, you can make substantial changes to the mesh without “re-weighting”.

It takes a lot of extra bones, since Blender has little to no ability to customize the shape of the envelope, but it can be a time saver if you have a base mesh that needs rigging and animation, but still requires some mesh editing at the same time.

I am not big on animation, but I believe that is possible through instancing, or by linking the work-in-progress character from another file. They did it in Sintel, I believe.

If that's the case, how was it done…

Just in case some may have misread the post…
I start modelling a character… the character is incomplete/a work in progress.
Despite this, I decide to use that uncompleted character in the previz/animatic.
The animatic is eventually completed with full shots and animations; however, the LINKED character is still incomplete… (in the blend file's properties window/panel there is an option to update meshes).
I then go back to the work-in-progress character and complete it…

I then return to that animatic/previz file and voilà: on load, all meshes are updated with the respective changes found in their original linked blend files…

Thus saving a lot of time by not re-animating the entire shots/animations again… all that is left to be done is final animation tweaks…
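The update-on-demand part of this workflow can also be scripted. As a minimal sketch, assuming a recent Blender Python API where `bpy.types.Library` exposes a `reload()` method, something like this would re-read every linked library so linked characters pick up changes saved in their source .blend files (run inside Blender, not standalone):

```python
# Hedged sketch: reload all linked libraries in the currently open .blend.
# Requires Blender's bundled Python; `lib.reload()` is assumed to be
# available in your Blender version.
import bpy

for lib in bpy.data.libraries:
    print("Reloading linked library:", lib.filepath)
    lib.reload()  # re-reads the source .blend from disk
```

Re-opening the animatic file achieves the same thing, since linked data is refreshed from the source files on load.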

I think this is what you are looking for:
And more in-depth:
Good luck.

I'm thinking for Sintel they used a low-poly generic mesh for much of the initial stuff. I remember for a community animation sprint, they had a low-poly mesh with the same armature that Sintel and the other characters had. You could do this as well: create a low-poly character and rig it with the same armature the real character has. Then, when you link in the character, turn off the visibility/renderability of the real character and work with the low-poly character. Once the real character is done, turn off the visibility/renderability of the low-poly character, enable the real character, and it'll work just fine. Of course, you must have the armature that you are planning to use on the real character finished before you create the low-poly character, and use that armature when creating the actions.
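The visibility swap described above can be done by hand in the outliner, or scripted. A minimal sketch, assuming Blender 2.8+ where objects have `hide_viewport` and `hide_render` attributes; the object names here are illustrative, not from any actual file (run inside Blender):

```python
# Hedged sketch: toggle between a low-poly stand-in and the final
# hi-poly character that share the same armature and actions.
# Object names "character_lowpoly" / "character_final" are assumptions.
import bpy

def use_final_character(final=True):
    lowpoly = bpy.data.objects["character_lowpoly"]
    hipoly = bpy.data.objects["character_final"]
    for obj, visible in ((hipoly, final), (lowpoly, not final)):
        obj.hide_viewport = not visible  # hide/show in the 3D viewport
        obj.hide_render = not visible    # exclude from / include in renders
```

Calling `use_final_character(False)` while animating keeps the viewport fast; calling `use_final_character(True)` before the final render swaps in the real mesh without touching the actions.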

I think this workflow is how all professional 3D CGI work is done. You work with low-poly characters when animating, so viewport playback of the animation is fast, and test renders of the animation finish quickly as well; then for the final render the hi-poly character is used. I just completed a project where I did much the same thing, but in my case the low-poly character was the same as the final character, just without materials. This allowed me to render the animation in about 2 hours for preview purposes. After adding materials, the final render took over 36 hours with 3 networked computers as a render farm.


A revival of one of my old threads…
Look, this is what I'm talking about:

We've linked in scenes from every blend file into a master timeline in the video sequence editor (making significant use of mapping cameras to timeline markers), which is both cool and not-so-cool. On the plus side, it gives immediate updates: whenever anyone updates an animation or a model, all we have to do is update SVN and bam! It's in the animatic.

The downside to this is that the VSE was never really designed to do this. It can, but from what I hear the feature was implemented more as a joke than anything. As the files have become increasingly complex, the time to open/save the timeline has increased to rather absurd lengths, and since it has to load each scene in order to play it in the timeline, there are often long lags between cuts, which makes judging timing difficult. I have to render the entire film in order to see how the timing actually works out, and it shouldn't surprise anyone that that can take a while. (Although, huge thanks to Campbell for putting in an 'OpenGL animation render' option in the VSE, like we have in the 3D view! Without it, this wouldn't be feasible at all.)

All that to say, we’re probably going to switch over to

This is the feature; hopefully they have this in the actual SVN rather than in their own repository's SVN.