Linking rig collection

Hello all. I have a separate blend file with a character rig I want to animate, and to keep my files small I've decided to try Blender's linking system. My character rig is pretty complex - it consists of many collections: different collections for meshes (cloth, body parts, hair, low poly and hi poly versions), physics simulation meshes, collider meshes, rigid bodies, some props, widgets, the source metarig and the rig itself (customized Rigify). It's pretty much a big mess, but I've done my best to turn it into a more or less organized system. I switched visibility for the collections that are supposed to be hidden from render/viewport, for example the physics collections. Everything works fine for me. But when I link the root collection of my character rig into another file, all collections and objects become visible. So I have to go to ID > Make Library Override Hierarchy and manually set visibility for the collections once again. Is this normal Blender behaviour, or am I doing something wrong? Also, are there any good and understandable tutorials on how to use linking for animating complex rigs?
Thanks in advance and peace to all!

Hello,

In your original file, did you set the visibility using the eye icon or the one that looks like a computer screen?
The screen icon should be used …
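If it helps, here is roughly what the two icons map to in Python (just a rough sketch, the collection/object names are placeholders):

```python
import bpy

coll = bpy.data.collections["PHYS_cloth"]  # hypothetical collection name

# Eye icon: stored in the scene's view layer, not in the collection datablock,
# so it does NOT travel with the collection when you link it into another file.
bpy.context.view_layer.layer_collection.children["PHYS_cloth"].hide_viewport = True

# Screen / camera icons: stored on the collection itself,
# so they carry over into files that link it.
coll.hide_viewport = True   # "Disable in Viewports" (screen icon)
coll.hide_render = True     # "Disable in Renders" (camera icon)

# Same idea for objects:
obj = bpy.data.objects["cloth_sim_proxy"]  # hypothetical object name
obj.hide_set(True)          # eye icon, per view layer
obj.hide_viewport = True    # screen icon, stored on the object
```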

As a side note, the metarig shouldn't be included; keep only what's useful for animation.
Including the props is a bit debatable: if they're attached to the rig, OK; if not, they may be better in another collection or another .blend file, maybe with their own rig.

But that’s another discussion !

1 Like

Thank you for the reply. Some collections I hide using the eye icon because I need them to participate in simulations. For example, I have a cloth mesh I want simulated, but I don't want to run the cloth sim on that object directly; instead I've made a low-poly mesh (physics mesh) for the cloth and added the cloth modifier to it, and my actual cloth uses a Surface Deform modifier to get its deformations from that physics mesh. As I understood it, the screen icon disables the object in the viewport so it won't affect the scene, which I don't want. But I'm not sure here, because recently pressing the monitor icon on the collection with the physics objects didn't remove the physics from the scene. Maybe it's something about the cache, I'm not sure.
As for the props, you are right, they are part of the rig; they are parented to specific bones so they can be easily animated inside one armature and one action. I did that intentionally.
Also, I have a question about all the library override entries in the right-click menu in the outliner. There are a lot of options and I'm a bit lost about what they all mean. I definitely need to learn more about linking.

P.S. I moved the metarig into a separate collection apart from the character root collection, so it doesn't get linked. Thanks for the advice.

1 Like

Hmm, you need to run some tests on that. Worst case scenario, the low poly used for the sim is only disabled at render time.
The eye icon is scene-dependent, so setting it in the rig file won't change it in the anim file.

That's worth trying; because the cloth depends on the low-poly object, the low-poly object may get evaluated even if it's disabled with the screen icon.
Anyway, you'll need to bake it at some point, so you'll have to change visibility in the anim file anyway. Or did I miss something?

To be honest I haven't tried library overrides that much, but I worked a lot with the old proxy system.
When I tried library overrides there were too many bugs for them to be useful, but they're probably getting solid by now.
Can you make a screen capture, or tell me which settings you want to know more about?

I think most of the time, once the rig is overridden you don't have to worry that much about the rest.
If you want to change a particular setting (like an object modifier), you can override it (not sure whether that's automated or you have to override it by hand). You also have the possibility to bring it back to its original state and remove the override.

Because you're dealing with simulation, you may need to go a bit deeper and override things at the object/modifier level, rather than only at the rig level.

It may be worth looking at the documentation to know more about these settings.
At least, if you can animate the rig in the shot file, you are on the right path.
Many options are just there to manage overrides, and it may not always be necessary to use them.
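If you ever want to script that first setup step (link the collection, make the override hierarchy, re-hide the physics collections), it's roughly this kind of thing. Completely untested sketch on my side - the path, the collection names and the exact override API are assumptions, so check the Python docs for your Blender version:

```python
import bpy

blend_path = "//assets/character_rig.blend"   # hypothetical path to the rig file
coll_name = "CH_character_root"               # hypothetical root collection name

# Link the root collection from the rig file
with bpy.data.libraries.load(blend_path, link=True) as (data_from, data_to):
    data_to.collections = [coll_name]

linked = bpy.data.collections[coll_name]
bpy.context.scene.collection.children.link(linked)

# Same idea as ID > Make Library Override Hierarchy in the outliner
override = linked.override_hierarchy_create(bpy.context.scene,
                                            bpy.context.view_layer)

# Re-hide the collections that should stay out of viewport/render
# (children_recursive needs a recent Blender; otherwise recurse over .children)
for child in override.children_recursive:
    if child.name.startswith("PHYS_"):        # hypothetical naming convention
        child.hide_viewport = True
        child.hide_render = True
```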

1 Like

Yeah. Some things got a bit clearer after some testing with the linked rig. If the eye icon is scene-dependent, that's OK; setting collection visibility isn't that big a deal for me - a few minutes to fix. I've played with these override thingies and managed to do some of the things I need.

But there are still some things I can't get. For example, in my rig I have a few rigid bodies and rigid body constraints attached to empties (working as joints), all connected to rig bones via parenting/constraints. I've made a basic setup for these rigid bodies and their joints, but what if I need to tweak their settings, like mass or angle limits? At the moment I can't get access to edit them. I go to the object's ID and choose Make Library Override Editable, but I still can't change the settings.

Thank you for helping me.

P.S. Interesting - when I do Make Library Override Editable for the mesh with the cloth modifier, it lets me tweak its settings.

But not with the rigid bodies and rigid body constraints.

1 Like

Cool !

I can’t help you much more here …
It could be that rigid bodies aren’t supported by overrides. Or that you need to make stuff more or less local.

Let's say you want to change the material on one object. At some point (maybe that's solved now) you need to make the object data (the mesh) local in order to change the material.
And then make the material local too if you need to change a setting inside the material.
That's a bit messy, because you end up with some data linked and some local - what happens when you change things in the original file?
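In script form that make-local chain is roughly this (again, only a sketch - the names are placeholders, and whether it's still needed depends on the Blender version):

```python
import bpy

obj = bpy.data.objects["CH_body"]        # hypothetical linked/overridden object

# Make the object data local first, then the material
mesh = obj.data.make_local()
obj.data = mesh                          # in case make_local returned a copy
mat = mesh.materials[0].make_local()
mesh.materials[0] = mat

# Now the material settings can be edited in the shot file
mat.diffuse_color = (0.8, 0.2, 0.2, 1.0)
```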

So in general these things are done later in the process, once animation is done and hopefully nothing needs to be changed on the original character.
Most of the time these changes are either last-minute design changes or rig updates, and by the rendering stage the characters tend to be final.

Bear in mind also that this is an advanced workflow aimed at production. It allows lighter files, broad changes to assets, collaborative work, etc… but it's a bit more complex to manage than just having everything local, or having the set linked and the character local.
In production there is usually someone with scripting abilities who will automate some tasks or add little tools to work around Blender's limitations.

Given that you mix cloth sims, rigid bodies and regular animation, some little cheats / automation may be needed. Maybe you're dealing with a realistic character, so things like cloth settings may not change that much.
In cartoon animation you may need to tweak these settings from one shot to another because of the extreme / unrealistic moves a cartoony character can do… That adds some work and production cost.
A cheap way is to animate that stuff by hand. It adds some time/cost too, but it's easier to handle and you don't have the back and forth between the animation and sim departments. So it takes less time to see the end result, and that's quite valuable for keeping the production smooth.

It would be great to know how you deal with that eventually!
I've managed to write so much just to say I don't have the answer… I should think about going into politics!

Good luck !

1 Like

Hm, thanks for sharing your thoughts. Well, I'm going to make cartoony animations. To make the animating process a bit easier for me, I've made all those rigid bodies for the arms and legs to get automatic secondary motion and/or physics-based animation for them - like legs hanging from an edge, or dragging animations, where the limb animation should be automatic. The good thing is that I'm able to mix the physics animation with IK/FK animation to smooth out or fix some extreme poses. But I still have a lot to learn about how to properly use the cache system for physics simulations. And here is the big problem for me, and it touches not only physics but the whole rig in general: when I do some animations, I always find ways to improve the rig and add some feature to it to make the result more satisfying. Then I just make changes to my metarig - add new bones, new constraints, etc. That's how my rig inflated to its current state (though I can't say I'm not satisfied with how it works), but the more challenging the animation task, the more changes I feel I need to add to my rig.

Thinking about your words on keeping my rig local (if I got you right) and animating it inside the same file, I believe that could be the best decision for me, considering how often I change or add features I need. Ideally I would like to have some kind of universal rig which works for most tasks inside my project, and that's the way it's going now. So, returning to physics: you mentioned baking the physics at some point. But here I have the same problem by its nature. When I'm making an animation, I always find ways to improve it - today I think it's OK, but tomorrow I notice that some things could be better, and I change some keyframes or F-Curves, and this process is constant. Thankfully the Blender developers made baking physics optional: after hitting play, my physics recalculates automatically each time I make changes to the animation or to the cloth/rigid body settings, and I don't have to bother managing all those cache files, baking and rebaking, etc. The only thing they could improve is automatic clearing of bakes; at the moment, to clear the caches I have to go to frame 0 and move some bone in my rig, or enter edit mode on a mesh with physics.

All in all, if linking doesn't support editing rigid body settings, I think it may bring me more headaches than it solves. Probably the best way for me would be to copy the rig blend, link some environments into it, and animate inside that blend. :slight_smile:

1 Like

OK! Those are complex decisions to make!
You may do some more tests and find the recipe that fits your project!
Don't rush it too much, because these decisions are what you'll have to deal with until the end of the project :smiley:

I would avoid involving rigid bodies to handle the bouncing of the limbs. You may end up doing too much counter-animation: you do a first animation, then come the rigid bodies, then you animate on top of that to smooth the rigid bodies, and then you want to tweak something. Maybe it's better to do it in the last step to avoid redoing everything, but if it's something global (that belongs to the first step) you're going to struggle a lot, rather than just adding keys / tweaking curves to improve the animation. You may lose too much control here. This stuff can be great for secondary animation like hair and clothes, where you can turn the simulation on or off.
But because I don’t know what you’re doing, I may be wrong.

About the rigid bodies and linking the rig:
You can use soft bodies with a very simple mesh (like an edge) and constrain the bones to that.
You may run into some dependency issues depending on how the rig is set up, but that should be manageable.
Then in the shot (where the linked character is) you make a little Python script that makes the soft-body mesh local and constrains the rig to it. That way you keep the character linked and the soft body local.
If you're a bit adventurous it's not that complicated to do with Python. It can be done manually, but that's a lot of monkey work for every shot.
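Very roughly the script could look like this (untested sketch - the object and bone names are made up, and here I copy the linked sim mesh instead of calling make_local, which amounts to the same idea):

```python
import bpy

sim_src = bpy.data.objects["SIM_arm_edge"]   # hypothetical linked soft-body mesh
rig = bpy.data.objects["RIG_character"]      # hypothetical overridden rig

# Local copy of the sim mesh so its soft-body settings can be tweaked per shot
sim = sim_src.copy()
sim.data = sim_src.data.copy()
bpy.context.scene.collection.objects.link(sim)

# Constrain a physics target bone of the rig to the local sim mesh
pbone = rig.pose.bones["arm_phys_target"]    # hypothetical bone name
con = pbone.constraints.new('COPY_LOCATION')
con.target = sim
con.subtarget = "tip"                        # hypothetical vertex group on the sim mesh
```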

About improving the rig all the time:
It's OK to improve the rig while in production, I see that often. But it's better to stick to small tweaks.
If you're not sure yet, maybe it's best to do animation tests, or a few actions that (if everything goes well) you can reuse later. If it's your own project you can delay the moment you start creating shots a bit, and make sure the rig (and character design) won't change that much.
Changing things without breaking the rig (and the animation) is complex. Better to save that for last-minute features.

From what I see, some animators are quite picky about rig features, others don't care, they just animate with what they have. But both types can manage a good animation with a crappy rig.
Of course it's great to provide something good so they can concentrate on doing good animation rather than working around the rig.
How do you modify the Rigify rig? Do you edit the code or edit the generated rig?

In general I tend to keep things as simple as possible. I think we tend to over-complicate things in the beginning, which is great for learning, but in the end it's better to go for a simple solution and keep complexity where it brings the most benefit. That leads to less wasted energy.
Keeping things simple leaves some room to add complexity later. When you already have all the dependencies and constraints that complexity requires, adding even a little thing tends to be quite complicated.

It's hard to give you good advice (that you didn't ask for in the first place) because I would do things differently; for me, avoiding baking physics is quite dangerous, or near impossible, when you render on a render farm. But that may work in your case. Same with the rigid-body limbs, etc…

What I do in general when setting up a pipeline for a project is write down a lot of stuff.
What are the different scenarios, like linking the character vs making it local? The pros and cons of both methods, what needs to be tested, etc… How to handle every step of the process from shot layout to rendering / comp? What are the most complex shots, and how will you handle them?

And within a few days I end up answering my own questions and moving forward to a good solution.
Or I ask a friend on a specific issue if I’m really stuck.

Good luck with all that, it’s not easy to make your own project !
But you’ll learn a ton of new stuff, it’s awesome !

Well, I've tried the soft body method. It worked well for some parts of the rig to add free secondary motion - it was good at joints which rotate on all 3 axes. But the problem with limbs is that the shin and forearm rotate on only one axis and have pretty limited angles on the X axis. Besides that, I wanted the limbs to have collisions, so they interact with objects in a more or less believable manner. With soft bodies I didn't manage to do that - soft bodies have no limits on rotation angles, they just bend in any direction without any constraint, and I found it hard to control by adding Limit Rotation constraints to the bones driven by the soft bodies. But maybe I did it the wrong way - I didn't find a good tutorial for this kind of thing. Creating the rigid body system was really tedious to set up, but in a few tests it showed satisfying results - arms and legs never bend the wrong direction when simulating. I've made a simple switch system with the help of constraints and drivers, so when I need the legs or hands (or both) to be simulated, I just switch to FK and turn on the bone controlling the rigid body physics. Then the FK chain is driven by the rigid bodies, and I can blend the simulation with the IK chain to stabilize extreme/unwanted poses. Something like that.

There was the problem of how to make this system flexible for edits, so I could easily regenerate the Rigify rig - and scripting helped a lot there. I wrote a huge and ugly post-generation script which does all the dirty work for me: creating the needed constraints, setting the needed flags and such. It wasn't a simple task for me, I'm not a programmer-minded person and there are too many abstract things in scripting for me, but in the end I got it working. The only thing I didn't manage to transfer from the metarig to the Rigify rig is drivers, so after generating I have to add about a dozen drivers manually. It's very annoying. I've also found that basic.raw.copy bones are very useful - they can relink constraints from the metarig; the only frustrating thing is that they don't respect the deform flag, so I had to script that too and just set deform to True. And it's a lot of bones I've added to the rig, and each bone is listed in my monster script (because I don't know a more elegant way to grab a bunch of bones and set their parameters in one line). So now if I rename some of these bones I have to rename them in my script too to make sure everything keeps working. But renaming bones without need is a bad idea anyway.
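To give an idea, the deform-flag part of it boils down to something like this (a stripped-down sketch - the rig and bone names here are just examples, not my real naming):

```python
import bpy

rig = bpy.data.objects["rig"]            # the generated rigify rig

# Example bone names -- my real script lists a lot more of them
RAW_COPY_BONES = ["phys_forearm.L", "phys_forearm.R", "prop_sword"]

for name in RAW_COPY_BONES:
    bone = rig.data.bones.get(name)
    if bone is not None:
        bone.use_deform = True           # basic.raw.copy doesn't keep the deform flag
```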

OMG, I agree with you 100%. I love having a simple but effective solution for any kind of task - easy to create, easy to manage, easy to animate. But it's not always possible; often if things are easy to create and manage, they will be hard to animate, or vice versa. I've seen many character tutorials and how much effort people spend animating subtle motion - especially secondary motion - and it's only a simple short cycle, a run or a walk. And what if you want to do something more complex? It would be hell to animate hundreds of bones to get a believable result, and it would take a lot of time. But talking about complexity: well, Rigify by itself is an overcomplex rig. For my project it seems like overkill, but I love some of the features it has and I couldn't bring them to the custom rig I used before - switchable parent, IK-FK snap, etc. That's why I decided to go with Rigify even though it seems overcomplicated to me, and just added the features I need on top. The only thing bothering me with Rigify at the moment is the UI. I've tried CloudRig - it's kind of a custom Rigify - and it has a very comfortable rig panel, everything is organized very well. I wish we had the ability to customize that panel so it stays even after regenerating.

Oh, I always take advice into account, because 3D is a really complex thing to do all your own way without enough knowledge.

Thank you for the conversation, it was very useful.

1 Like

Cool! Thanks for sharing a bit about your process!

Indeed now I see better why you used rigid bodies. It won’t work like that with soft bodies.

It's cool that you managed to do a bit of scripting! That's so useful when it comes to rigging.
It can come in handy when managing shots too, you may want to automate some things there as well.
With time you'll learn to write better code; at least it works! Many artists start with similar dirty scripts without completely knowing what they're doing, it's totally OK!

And about Rigify, yes, it's quite advanced with many features. Still simpler than making your own rig from scratch, but yeah, it can be a bit overkill.

And for animation, it takes time for sure.
It started with a bunch of people drawing, painting and filming every frame…
On a personal project I tried to include mocap to go faster with a better result, and made a system similar to yours to be able to animate on top of it. It turned out to be quite complicated to manage. I kept a few shots that are a mix of mocap and hand animation, and hand-animated the others; in the end that was simpler to manage and they ended up not that bad (for my level), some of them better than my (not so simple in the end) attempts with mocap.
Animation is long but it can be very fun to do.

I’m curious about the results you get with your system. If at some point you post animation tests or sequences I’ll be quite interested in seeing them !

1 Like

Well, I've made a quick test, nothing fancy but enough to check the parameters for the joints and the rigid body settings. I love that the rigid bodies collide with each other (and with other rigid body objects), not allowing too much intersection, and of course the angle limits - angle limits are a must-have in such setups for limbs.

I've animated the torso and chest to make a wavy animation:

keyframed animation.mkv (736.8 KB)

The rest is done by the rigid bodies:

result with rigid bodies simulation.mkv (785.9 KB)
At any time I can blend this simulation into the IK bones with those four bones visible in the video, controlling the left/right arms and legs separately. When I don't need simulation on the limbs I just move these to their rest position and the rigid bodies are set to Animated, so no computing time is wasted.
Rigid bodies are a bit annoying to set up, but once done they do their job very well, and in terms of performance they are really fast.

Pushing this setup forward, I was thinking about implementing a more procedural workflow for character animation; at least for short loops it may be really helpful. I remember when I started 3D I was using 3ds Max, which has a very cool rig - CAT - that can produce fully procedural cycles: walk, run, jump, etc. It was fun to play with. I think Blender can potentially do even better than CAT. We already have an addon called Balance Run, and it can do pretty interesting things; at least it could be used as a basis for further animating.

P.S. Not sure how to embed video into this page. It made a downloadable link instead. :slight_smile:

1 Like

Hey, that's cool! That makes a ragdoll!
I see how that can be useful in some cases; I'd be curious to see it in a real shot, with a cartoony character,
and how that helps or gets in the way of precise animation!

I've seen the CAT rig; indeed there are interesting tools there to mix and create animation more or less procedurally.

I looked at the demo of Balance Run, I didn't know about it. Well, I get kind of a mixed feeling.
It's awesome how it handles movement, yet it's a bit weird… You get something that kind of works quickly, but how do you deal with all the weird stuff afterwards?
But I can see how a more advanced version could be really cool!

Maybe try .mp4 instead of .mkv?

1 Like

Yeah, I even called the collection Ragdoll_rigid_bodies. But it's a somewhat advanced partial ragdoll compared to those I saw in a few tutorials: the rigid bodies can be posed and animated inside the Rigify rig, and they respect their initial position when the simulation starts. The only thing I can't achieve in my workflow, to get even more control over the rigid bodies, is that Rigify doesn't support transferring/relinking drivers from the metarig. That bothers me the most at the moment. I don't get what they were thinking - imagine someone using Rigify on a mesh with shape keys: how would they avoid destroying all the drivers they made in the Rigify rig when they need to regenerate it? That seems very limited and unhelpful, imo.
It would be very cool to have a system controlling the angle limits and other physics attributes all at once, or separately, depending on the situation. In that case I could use this system in many animations, adding subtle secondary motion to the chains - arms swaying a bit while walking, jumping, running and similar animations. It's possible even now, but it's a bit messy to change those parameters by hand on each limb. I'm also thinking of adding such a rigid body chain to the spine to get subtle automatic secondary motion; that will need additional tests. But the whole system looks satisfying to me so far. I wish the Blender developers would somehow integrate rigid bodies into armature bones to get rid of this annoying setup process.

BTW, did you try Cascadeur? It has a pretty advanced animation system: basically you set up key poses in the timeline and the rest is calculated by the physics engine. The results are pretty amazing.

Hey !

Yes, it's an interesting experiment you're running… I would find it more useful in VFX, or for classical use of ragdolls, but I'm still curious. And indeed it looks more refined than regular ragdolls without any constraints.

Yes, well, I think they've made some improvements at the Blender Foundation, like including lattices and other stuff; not sure about shape keys. Of course Rigify is limited, but great nonetheless. It lets you quickly rig a character of any shape, and that's awesome, especially when you don't know how to rig. But for sure, when you start digging and looking for a more refined result, you may find it's not the most suitable system. Autorigs always have limitations in one way or another.

I think a solution is to do just what you do: create a simple script that adds the missing things. And if you don't script, well, it's best not to regenerate the rig too much then :smiley:

In most projects I've worked on, rigs were custom and none of the available auto-rig solutions would fit.

I ended up making my own autorig system that is more in line with what I need. It works for me, but probably not for you, because you have other constraints.

Have you looked into the Wiggle Bones addon? It looks like something you'd like, but maybe not advanced enough.

I've seen the demo, it looks awesome, but I've never tried it!

The problem with wiggle bones is that the last time I tried it, after enabling the wiggle world my FPS dropped a lot. It was unusable on my system; I have a pretty old PC, so maybe it can't handle all the calculations behind that addon. Though I've tried another addon, Bone Dynamics or something, and that one worked much better; if I remember correctly it uses cloth physics to drive bones, similar to what you previously suggested with soft bodies. Well, maybe you're right, it's worth a try. About the drivers, I found a clumsy workaround to make them easier to set up: I made a temporary object where I created the drivers I need for the Rigify rig, and after regenerating I just copy-paste them to the corresponding Rigify bones.
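The copy step is roughly this, in case anyone wants to script it instead of pasting by hand (a rough, untested sketch - the object names are just examples, and I'm not sure the driver-copy call behaves the same in every Blender version):

```python
import bpy

helper = bpy.data.objects["DRV_helper"]   # temporary object holding my drivers
rig = bpy.data.objects["rig"]             # freshly regenerated rigify rig

rig.animation_data_create()
for fc in helper.animation_data.drivers:
    # Recreate each driver on the rig; the data paths stored on the helper must
    # already point at valid rig properties (e.g. pose.bones["..."] paths)
    rig.animation_data.drivers.from_existing(src_driver=fc)
```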
I've used custom rigs before but failed to implement the IK-FK snap function and the switchable parent. To me those are very useful features that save a lot of time. The good thing about a custom rig is that you control the complexity of every part of the rig, but at the cost of the time and effort of building one. Also, I recently found a really nice addon for Rigify called Animbox, with a lot of cool features for animation - it's free and really nice.

Also, I'm curious how you deal with hands, and especially fingers? I mean, in usual animations it's OK to use some poses to make basic deformations like curl, fist, loose, etc. But what if you need a close shot where the fingers interact with animated objects, like grabbing or touching a surface without intersecting the geometry? I've tried different ways and they all failed. I've tried to build an IK system for the fingers, but it's unusable - too much work to set up and manage a lot of IK chains, and the result was still unpleasant, because even if the fingertips are connected to an uneven surface, the rest of the bones can go through it or bend at a wrong angle. Let's say you want to animate a hand sliding over a character's animated face - how would you achieve that?

Hey !

Animbox looks super interesting, I didn't know about it; I'm curious to see how it's done internally.

Generally I use a rig similar to the one used in Rigify: one main controller to pose the finger with scale + rotation, and an FK chain for more precise animation.

Super interesting question! Sadly, I can't give you a simple answer.
I've got a little animation knowledge but I don't animate professionally. However, I've worked a lot with animators, as a rigger, CG sup and director. I'll give you my POV on that; it could be interesting to get the POV of a pro animator too.

First, a project always exists in relation to a budget and deadlines. That sets the tone for what we can expect from the rig, the animation, and how we handle the visual storytelling.

I've worked on small-budget projects (mostly series), where we tend to avoid things that are too complicated at very different levels (writing, storyboard, layout, animation).
Is what you describe complicated? Not that much, but it will probably be made simple and safe. The most important thing isn't the action in itself but how it relates to the story.

Animators may have some room for adaptation, and do it in a slightly simpler way.
Will there be interpenetration? Probably, but they'll manage to make it unnoticeable.
In the end, that action may work better if the hand passes slightly through the face rather than just sticking to the surface, to fake the sense of the hand pushing on the face.

They'll probably use the face controllers creatively to give a slight interaction between the hand and the face. It can probably work without it too…

If something gets really complicated to do and the rig doesn't allow animating the action properly, then we find an alternative. And probably the animator will naturally go for it without asking.

That's for a low-budget project; the short answer is that we need to stay on budget and can't afford too much time spent on a few seconds, especially if it's not an important story point. It's our job to avoid these things beforehand, or simplify the action to say the same thing in a simpler way.
But once again, in that case the animator will probably manage something, we won't go too close up with the framing; it may not be the best shot of the show but it will do the job!

Now let's talk about a “Pixar” movie, where every frame should be awesome. It's not about OK-ish animation; each shot, each single action needs to be at its best.
There, they can afford more time on the rigs - what about having an IK chain for the fingers on top of the other controllers? The face rig will probably be more refined and allow more interaction at the animation stage.
They can also afford to sculpt the animation as a last step, or do an FX pass on the face, like making the face local and adding a few shape keys just for that shot.
It's totally OK to have someone spend a day dealing with that kind of detail on that shot; someone may already have spent a few days animating it.
Whereas on a low budget the animation rate is between 5 and 15 seconds of animation per day, and if there are 10 characters in the shot it's the same rate as when there is only one.

In the end, animators manage interactions very well; it's part of their job and they do a quality check pass on these. At least on the shows I've worked on, there are always small interpenetrations, but they are controlled. The animator knows where we'll be looking. The model may be completely wrong, but it will look good from the camera view.
You can't look everywhere, so most of the time you follow the action and don't notice details. And of course in a Pixar movie they have a lot more room to fix things and QC the work.

This is a shot from a show I’ve worked on :


The blue character sitting on the floor: that was surprisingly complicated to handle.
For various reasons we couldn’t change the action or the framing.
But the animator managed something great. Still, a few interpenetrations remain: the bottoms of the feet pass through the floor. But if you show the episode to someone, I'm sure they won't notice until they know where to look.
Other than that, his body is quite distorted, but it looks natural. His backside is probably completely through the floor, and his left foot goes through the red door in the set.
That was a funny one - there is far more complex action/posing on that show, but this one was hard to make and didn't end up as polished as other, more complex shots.

Hope that helps.

1 Like

Thanks for sharing your experience, very interesting. And what about your own projects - do you have any apart from your work? I mean, making something without the budget and deadline pressures, just for fun or to express yourself. Recently I made a thread here about indie artwork, asking people why most artworks are just still images, not animations or 3D comics or even 3D short movies, but it got only one answer, saying that creating something more than a still render is too complicated for one person and can barely be monetized. He said that creating a 3D comic is almost as hard as creating a 3D short movie: it has to have a good plot and needs a lot of work to be complete enough to tell a story. So it turns out that we (hobbyists and single artists) are stuck with the option of making still renders to tell our stories or express our ideas? That sounds very gloomy and sarcastic to me, considering how much technology has advanced these years, allowing us to render really great pictures on our home PCs. What is the hardest and most time-consuming part of producing a short 3D movie, in your opinion? :slight_smile:

Hahaha, yet another interesting question that needs an elaborate answer!

I have a few of them, it’s hard to finish them !
I managed to finish this one : https://vimeo.com/204653491
And that one : https://www.dailymotion.com/video/x5ns60 (damn 14 years ago …)

Some people are better at doing personal shorts, I’m not the best example.

Indeed making still images is much simpler. It can take from a day to a few weeks.
Or to put it differently, what's the goal here?
You like modeling characters: you take a few days to do one, render a turntable with good lighting, and you've just made a personal project.
You like animating: you grab a rig on the web, do a cool animation, bim! You're done in a day.
You're an environment artist: you model a sci-fi factory, grab some Megascans assets and put the factory in the middle of the jungle. In a few days you're done.

Now, when you're a generalist / filmmaker who wants to touch everything, you run into a lot of caveats:
You need to be equally good in many fields. Who wants super-rendered images with poor animation?
You also have to handle sound, editing, visual storytelling / cinematography, voice recording (or Charlie Chaplin mode?), etc…
That's not something you can do in a few days, and because you need to work too, the project may span a very long time period.
I've got one quite close to finished (that was ~5 years ago), but because I waited too long (too much work), Blender is different now (rigs don't import well anymore), I'm different, and I'm tempted to redo many shots. Or go back to 2.79/2.8 :S which I'd be quite sad about. So maybe in a few years I'll be done with it :smiley:

But I think the biggest caveat isn't so much the amount of work it represents, because that can be managed to some extent.
Most of the time, we do personal projects for two reasons :
1/ we have an idea, or a visual universe that we want to express.
2/ we want to improve our blender / art skills.

Because of the second, we may lose sight of the original goal, which was simply telling a story.
Instead of finding a simple technique that lets us work fast, we tend to become one-person animation studios and learn/improve while doing.

A 5-10 minute CG short film may take 6 months to make (for production) with a team of 4-10 people.
That’s 600/1200 days of work.
Going for the same quality, alone, on weekends… I'll let you do the math… And you should add extra days for learning while doing…

When I work professionally , the first thing I’m presented with is the idea. Then I ask for the budget.
And from that we work to make the best of the idea, while staying on the budget.
If I like the project I may end up working 12-16 hours a day instead of 8, but the deadline still won't move; it's bad to undercharge, so I stay pragmatic. And I always deliver projects on time, even very tricky ones.

Somehow, when I work on personal projects, because I aim for the best of my abilities and I want to learn, I tend to forget about all that and fail to finish the project.
That leads me to the conclusion that I may not be ready yet to tell stories or do my own shorts. But that's something I really look forward to, maybe later in my career.

So one may ask what matters most: telling stories or improving technically.
If it's telling a story, what about doing it with a boring technique that is well mastered and easy to do? That doesn't mean the result should be bad visually - it's just a style, and the story matters most.
If learning/improving prevails, then even if it's not finished, the goal is already achieved.

Now, so as not to leave my reply too pessimistic - because I do see very cool personal shorts from time to time - I'd like to share this one:

Look how simple it is, yet super cool. Great cinematography, and from someone quite new to Blender.
But yeah, the guy behind it is an experienced cinematographer / director, so even if he is new to Blender, there are years of experience with live action and visual storytelling behind it.

And some people manage to stay organized and motivated and make their “Pixar” movies:

I see some cool shorts done by individuals from time to time, but we can open a philosophical bet here:
given that making these shorts takes a long time, many stay unfinished, and not every CG artist does shorts…
for how many good still renders and simple personal projects will you see one short movie?

Hope that helps, because I find it super cool to make your own shorts. It's awesome that you can share them on the internet, and you have all the technology to make them. Yet the issues may remain with the human behind the computer!

Good luck with your project !

1 Like

Another good example that I like very much :
https://www.davidoreilly.com/please-say-something

Yeah, it's a very, very complicated thing to create a 3D movie. It demands that whoever wants to create a short movie has knowledge of so many aspects, from artistic to technical - even programming is needed at some level. It's a bit overwhelming. I came to Blender from 3ds Max, and to 3ds Max I came from game modding. Game modding is difficult too, but it now seems to me much simpler than creating something believable in 3D art, in terms of telling stories. There are a lot of great game modifications that can without doubt be called masterpieces, and many of them are made by single modders. An incredible amount of work goes into such projects - music, voices, animations, level design, quests, visual art, LOTS of scripting, clothes, armor, weapons, etc. But it still looks easier to me than creating even a short film of a few minutes, while in a mod you can spend hours and hours playing, enjoying the story, animations and immersion made by some talented people. I used to participate in some modding projects and even started my own - it's still WIP - and managed to do a lot of things, even quests. It was hard, but fun and satisfying to see the mod working and to be able to express myself by telling the story I have in mind.
Thanks for the kind words. We'll see if I manage to finish my project at some point, because I've already spent a lot of time rigging my characters and even creating some environments, but I feel it will be a long way to go. I'll also need to upgrade my PC; I'm getting tired of lags and low FPS in the viewport, even in solid mode.
On a positive note, at least during this project I've learned a lot of things, and there are even more left to learn. :grinning:

1 Like