ANIMAT: Widgetless, On-Model Animation in Blender UPDATE 1: 13 Oct. 2014

Thanks for the heads up. I definitely need to study the linking system a bit more to make sure there aren’t any glaring bugs. I would be thrilled if they moved forward with a more standard system though :smiley:

Love the concept. Here’s a thought for the free version: make it so that zones don’t repopulate, or something similar. Keep something that’s worth paying for if you’re a professional/full-timer. You want free users to feel the power of the plug-in and (eventually) step up, but at the same time they have to feel it’s worth investing in.

G

All I have to say is:


I could hug you. Many studios are doing this and it’s wonderful. Imagine using a Cintiq and “sculpting” your poses. Thank you so much for tackling this.

So wait… do you plan on having animators set keys while they’re in ANIMAT mode? Or are we supposed to set the keys when the operator ends?

It seems to me that the latter would be a less troublesome route to take… especially if we’re not able to change the current frame while we’re in the middle of ANIMAT mode.

But let me back up and talk general workflow. Generally speaking, my preference is to take a multi-pass, layered approach to animating. The first pass is rough, consisting of large, obvious movements. And each subsequent pass works on finer and finer detailed movements. My concern as it relates to this tool is that it might be difficult to do tweaks on those later passes because you have no idea if the control you’re moving is already keyed or not. For realistic animation, this may not be such a big deal, but for a pose-to-pose style, it could be problematic… ending with an over-animated feel that’s either really mushy or kind of spazzy.

Of course, maybe I’m overthinking this a bit… it’s hard to know for sure without actually playing with the tool. The best I can do is tell you how I like to work and ask questions. :slight_smile:

You have full access to all animation stepping in ANIMAT mode (aside from timeline mouse scrubbing, which I’m still trying to sort out) so my current suggested workflow is to keyframe while in the mode. If this isn’t agreeable to people I’m certainly open to suggestion. I feel like this flows better than having to jump in and out of the mode every time you want to keyframe. My personal workflow is very similar to yours (set big poses at key times, gradually refine in between these frames). My ANIMAT workflow so far has been no different than that, and then when needed exiting the mode to manually adjust f-curves if desired.

Thank you so much! This looks really promising! Me, my cintiq and my wallet are ready! :slight_smile:

As for testing I think you should be taking pointers from the success of Sergey and Sebastian when developing the Matchmoving functionality. It’s an awesome Coding/Artist dream team and the results speak for themselves.

Pick any prominent Blender animator and work in collaboration with them to get some solid feedback with every iteration… Fweeb, are you up to the job? :)

I’m not a rigger or animator but I think with a tool like this it will become much more accessible.

This tool looks amazing. It’s tools like this that get Blender noticed by the 3d community-at-large… as a matter of fact, I found out about this over at the Lightwave forum!

This looks amazing. Can’t wait to try it. I’m curious about linked rigs too. Shapekeys are a bit of a pain to me in the setup, and often I set up custom property drivers to get quick access to a couple of SKs on a simple rig. Would this plugin effectively give access to shapekeys in linked rigs? After 1.0 I mean…

Hugs and $ from me :slight_smile:

This looks great!
However, until autokeying and linked libraries are working, it won’t be that usable for production.
I guess I’ll have to wait until the depsgraph and library issues are solved.

The good news is I have autokey working!

YESS! That is fantastic to hear.

I’m working more in stepped blocking nowadays, but I used to do more layered blocking like Fweeb. The main break in the layered workflow I see is being unable to use the graph editor while in the mode. My current stepped workflow brings in the dopesheet to rough out timing while creating key and breakdown poses and the graph editor to do tweaks to a pose. Once I’m out of stepped, the graph editor is my best friend and I always have it up on my second monitor. Without access to these, this workflow could get pretty tedious pretty quickly, since I still use viewport and graph together in all phases of animation.

Perhaps it would be enough to just be able to jump keyframes while in the mode then use ANIMAT as the manipulator to adjust the control while looking at the graph to get a smooth curve. I generally have the graph up all the time anyway, so whichever way the controls are manipulated doesn’t really matter too much to me. I’ve just found the graph editor faster than trying to fiddle with the manipulator, which is why I enjoy blender’s viewport: it doesn’t need a manipulator.

I’m just getting thoughts out there at the moment. I’ve spent the morning writing about hot topics, so this post may come across as a little counter-productive.

I have a common use case that would require controlling multiple bones with one zone:

Elbow bend and forearm twist mapped to one zone. Reason: I spend a LOT of time setting up my rigs to emulate the ease of posing action figures. To pose the arm, you only need to grab the forearm and pull up/down to bend the elbow, and (while holding that same area) twist it left/right to rotate the arm. I often use rotation locks and drivers to recreate this efficiency.

Mapping multiple bones to one zone would allow great efficiency whenever a swivel joint is near a hinge or ball joint. This would be the greatest benefit to me as a character animator. I think this combo would be essential (Zone: LowerArm. Left/right twists the forearm swivel, up/down rotates the elbow hinge).

Question: do the mouse bindings consider the mouse’s movement direction relative to the camera and the bone axes they’re mapped to? This is important, because the mouse bindings that control the bones would quickly become nonsensical if they remained static. Imagine using left/right to control elbow rotation from the T-pose; once you rotate the shoulders to put the arms at their sides in a rest position, left/right is no longer relevant for bending the elbow (at least, not in an expected human way :lol:). What about viewport camera location? If you’re viewing the character from an angle other than the front, the gestures would also cause unexpected behavior.

I think in many workflows, it would need to keep the gestures local to the bones/zones, factoring in their world positions and localizing the gestures to the bones’ local axes.
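For illustration, the view-relative localization being suggested here could be sketched like this. This is plain Python, not the addon’s actual code; the function names, the sensitivity value, and the row-major view matrix convention are all made up for the sake of the example:

```python
# Sketch of view-relative gesture mapping (hypothetical, not ANIMAT's code).
# The bone's local axis is projected into screen space, and the mouse delta is
# dotted against that projected axis, so "left/right" keeps tracking the bone
# no matter how the bone or the viewport camera is oriented.
from math import hypot

def project_axis_to_screen(axis_world, view_matrix):
    """Rotate a world-space 3D axis into view space and drop depth (z)."""
    vx = sum(a * b for a, b in zip(view_matrix[0], axis_world))
    vy = sum(a * b for a, b in zip(view_matrix[1], axis_world))
    length = hypot(vx, vy)
    if length < 1e-6:          # axis points straight at the camera
        return (0.0, 0.0)
    return (vx / length, vy / length)

def gesture_amount(mouse_delta, axis_world, view_matrix, sensitivity=0.01):
    """Signed amount to apply to the bound channel for this mouse move."""
    sx, sy = project_axis_to_screen(axis_world, view_matrix)
    dx, dy = mouse_delta
    return (dx * sx + dy * sy) * sensitivity

# Identity view: camera down -Z, screen x = world x, screen y = world y.
view = [(1, 0, 0), (0, 1, 0), (0, 0, 1)]
# Bone axis along world +X: dragging the mouse right drives it positively.
amount = gesture_amount((10, 0), (1, 0, 0), view)
```

The cost of this approach is exactly what the author raises below: a projection and a dot product on every mouse move, per bound channel.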

Keep in mind that zones can be as many or as few faces as desired. The way this is handled in Presto (and the 3DS addon) is to have one section of the forearm for bend, and another for twist. On Sintel I have the wrist zone for twist, and the rest of the forearm for bend, and it’s never a problem picking which is which. You could indeed map twist to left and right as well, there’s nothing stopping that at all since the forearm really only needs a single control for bending.

It would be next to impossible to map multiple bones to a single face/zone. It would, at best, impose a workflow hiccup that slows down animating, and at worst completely break in some weird corner cases.

Mouse bindings are (and short of some kind of python/matrix math sorcery can ONLY be) camera dependent. It might sound confusing, but unless you’re working with an upside-down camera it really never ends up being confusing. The goal of the addon is to get rid of the need for widget clutter and GUI hunting, not to be a 1-to-1 translation of mouse movement to on-screen action. It’s just a matter of learning what the controls for your character do. It would be easy enough to cover these corner cases for a human character, but this is a general purpose animation tool. Up won’t always be up, down won’t always be down, and tweak distance will match up for some shape keys better than others depending on how much the mesh deforms. These are things that, while technically possible to check, turn this from an animation system with almost 0 runtime overhead, to one where you’re going to be losing substantial frames on the calculation of each movement. I’d much rather release a tool that is fast and adaptable than one that is rigid and slow.
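For contrast, the static camera-dependent mapping described above amounts to a single multiply-add per mouse move, which is where the near-zero overhead comes from. A hypothetical sketch, not ANIMAT’s code; the function name and sensitivity are invented:

```python
# Hypothetical sketch of a static, camera-dependent binding: the raw mouse
# delta drives the bound channel directly. Nothing is reprojected per move,
# so the cost is one multiply-add, but screen-left always means the same sign
# regardless of how the bone or viewport is oriented.
def apply_binding(channel_value, mouse_delta_px, sensitivity=0.01):
    return channel_value + mouse_delta_px * sensitivity

elbow_bend = 0.0
elbow_bend = apply_binding(elbow_bend, 25)   # drag right 25 px
```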

On a related note, so as not to flood the Market with two big releases in one week (looking at you, MotionTool!) and to facilitate extra time to make sure everything’s working correctly/get some quality feedback from a couple of seasoned Blender animators, I’m tentatively pushing the release back a week. I want to make sure that this is both useful to animators and as bug-free as I can get it before I release it out into the world!

I see I forgot to write the question that would have provided context to my first question.

When setting axis rotations/translations to a mouse binding, can you specify the transform orientation that it happens on?
Can I set mouse left/right to rotate the armature along the global Z axis, while having up/down rotate along the view Y axis?
That would be enough to make me need this, because I want a clean UI for animating, and I know the 3D widget is laughing at me because it knows I need it… I might have imagined that part.

Yep: you set the rig, then choose a bone, a transform, and a channel, and you can set one binding for X-axis mouse movement and another for Y-axis movement.
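In other words, each zone carries two bindings, one per mouse axis. A minimal sketch of that data layout (the field names and example values are my guesses, not the addon’s actual structures):

```python
# Hypothetical sketch of the per-zone binding described above: each zone
# stores one (bone, transform, channel) target for horizontal mouse movement
# and one for vertical, so a single face region can drive two channels.
from dataclasses import dataclass

@dataclass
class Binding:
    bone: str        # e.g. "forearm.L"
    transform: str   # "rotation" or "location"
    channel: str     # "x", "y", or "z"

@dataclass
class Zone:
    name: str
    mouse_x: Binding  # driven by horizontal mouse movement
    mouse_y: Binding  # driven by vertical mouse movement

lower_arm = Zone(
    name="LowerArm",
    mouse_x=Binding("forearm.L", "rotation", "y"),  # left/right -> twist
    mouse_y=Binding("forearm.L", "rotation", "x"),  # up/down   -> elbow bend
)
```

Note both bindings here target the same bone, matching the one-bone-per-zone design discussed earlier in the thread.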

This does look promising!

Is there any way I could look this over? I’m sure you’re getting lots of requests for that…

I think I’m decent at rigging, my character animation skills need work, (need to study acting…), but I would like to see how this would work with other rigs. I’ve been using the Eleven rig recently. It’s a nice rig, easy to use, has fine tuning controls in the transforms panel, finger rotations are numeric sliders…

Personally, I’d rather use widgets in the 3D viewport over numeric sliders on the side. Widgetless would be even better, of course.

So is there any way I could try this out on the Eleven rig? I’m willing to spend the time setting it up on the Eleven rig and give you feedback. All work will be confidential and via e-mail.

Randy

I’m also keen to check this out. The fact that autokey is now working tips the balance for me. The linked library thing is a minor annoyance, but I can understand you not wanting to put too much effort into it; the current system is really not ideal.

Is the first place we get to test this the Blender marketplace?