[non-blender] SketchiMo: Sketch-based Motion Editing for Articulated Characters

If you don’t mind, please don’t simply do a linkbomb. Your subject line says a lot, but it would be nicer to include some context about what’s in the video and why you think it would be interesting to Blender artists.

This area has been underdeveloped in Blender. Nice.

I’m not an expert in the animation field, but aren’t those tools 2 totally different things?

PS: SketchiMo is cool. There was a similar tool in Blender a while ago called Motion Trails or something like that. Dunno what happened with that project, though. If I remember correctly it wasn’t very stable and had some performance issues at the time.

These kinds of things are all cool and all, but seem silly without many basics addressed. As an animator, I’d much rather have a more robust basic animation system than a flaky one with loads of bells and whistles.

Here are some basic things where Blender is way behind:

- Mesh highlighting for bone selection (for invisible rigs; really helps with selection and previewing)
- Usable performance for animated Subdiv models (OpenSubdiv?)
- Ability to link multiple versions of the same rig (Depsgraph?)
- Robust, working motion retargeting
- A usable, working NLA system with working offsets, so that you can chain, say, a walk and then a run
- Ability to edit motion curves in the 3D view

Right now the Blender animation system has a bunch of holes and areas that don’t really work in practice, such as the NLA. Those should be fixed before pose sculpting or bone sketching etc.

Could you specify how working on this is stopping/hindering other animation tool development? I don’t see the connection. Is the same developer also responsible for making the other animation tools?

I think this is a really great tool. If nothing else, just watching that chain being stroked into a pose convinces me that this is a very handy posing tool.

It is the same developer, who’s done some pretty great things over the years. It’s a question of focus. With a given amount of time and effort, I’d much prefer that it was used to fix the glaring issues in the animation system before adding whimsical features like this. You need a strong foundation before adding ornaments.

Ok. So you would like the developer to change his focus from something he really likes to do to something he doesn’t feel as much passion for? Is he getting any official payment? If not, it’s really important that he can do whatever he needs/wants to do.

Don’t get me totally wrong, I too would like to see repairs to the animation tools, and William, you had a great list. But I feel that telling someone to change his focus away from what he wants to do is not going to help anybody.

(I’ve split my original response into two parts, since for the first time I managed to make a post which hit the maximum 10000 character limit XD)

Hi William,

While I agree with some of these, I also beg to differ on a few of the others.

On the ones I agree with (mesh highlighting, animated subdiv performance, depsgraph/linking/instancing), work is either already happening or is blocked on other things needing to happen first. To be more specific:

  • Mesh Highlighting - Julian’s work on Face Maps is already underway. Otherwise, in the meantime, there’s still the Mask Modifier way of doing it.
  • Subdiv Performance - That one’s pretty much exclusively Sergey’s area. IIRC, it was either the OpenSubdiv library missing some features, or our viewport refactor (or depsgraph-geometry handling) not being far enough along to support everything else it can give us.
  • Depsgraph Stuff - Yes, this one is important. That said, IIRC we don’t yet have a consensus on the best way to approach this, and consensus is needed, since the next steps here will require quite a lot of backend rework that again needs multiple people working on it. (Personally, I’ve already dealt with the parts that I’m most familiar with and am most aware of the issues with: the new depsgraph should now have sufficient flexibility to deal with the rig setups that were previously causing problems.)

Now, on to the ones that I don’t entirely agree with:

  • Motion Retargeting - Is it really that useful? Maybe/maybe not. TBH, personally it falls into the category of stuff which I have zero interest in working on. Just like finding a more accurate way to do single-viewpoint motion capture using a video feed + depth camera; other people can work on that - heck, heaps do in academia.

  • NLA - While I do intend to look into this again at some point, I have also come to the conclusion that this is by and large a somewhat “failed” idea, and a bit of a dead end. My impression is that NLA/layering is one of those things which sounds cool on paper, but then animators come along, try to work that way, and then find that it really doesn’t offer the kinds/level of control + quality desired, and that there are things in there that end up being hard to get their heads around.


  • Motion Path Editing - My current stance on this is as follows:
    1. On a technical level, solving those paths is often incredibly difficult when we have to do it for the kinds of rigs animators currently use in Blender. In fact, in some regards, there are some setups where I think it might be impossible to find an actual solution that fits, because the combination of constraints + IK + transform limits/locks means that the position simply can’t be reached (see my comments earlier about big complex cores). Maybe we can eventually solve some of these. But we can also solve this in the context of getting sketching/sculpting working - stuff that other traditional “big packages” don’t have, but that in-house software at high-end studios increasingly has equivalents for.

    2. This SketchiMo paper is in fact the “second part” I’ve been referring to when talking about these Sketch + Sculpt tools. Pose Sketching solves the problem of providing a more fluid way of quickly blocking in the major poses. Pose Sculpting solves the problem of tweaking/adjusting/fine-tuning those poses, and also serves as a way of selectively applying other effects to your poses/animation. What SketchiMo tackles, then, is the problem of timing - or more accurately, how we can provide better ways of capturing/expressing what we want the temporal aspects of the performance to be like. It provides a “broader strokes” view for quickly capturing the animator’s timing intentions (and from the videos/examples, it also looks like it can support ways to refine those guidelines).

But how does SketchiMo have anything to do with Motion Path Editing then? Well, one of the things you may notice from their demo is that one of the prime things it is being used for is to control/affect the arcs being traced by the rigs. In effect, it’s doing what Motion Path Editing is aimed at doing. BUT instead of taking a manual vertex-by-vertex approach to tweaking the paths, we are taking a more cohesive look at the motion as a whole. In effect, this is akin to the Box Modelling vs ZBrush/Sculpting shift we’ve seen, except this time it is the animation workflow that we’re focussing on.
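The reachability problem mentioned in point 1 above can be illustrated with a minimal sketch (a hypothetical illustration, not Blender code - the function name, limits, and planar setup are all assumptions made for the example): even for the simplest possible chain, a planar two-bone IK solve via the law of cosines, there is an annulus of reachable targets, and adding a rotation limit on one joint shrinks that workspace further. Real rigs stack many more constraints on top, so "no valid solution exists for this path point" becomes a genuinely common outcome.

```python
import math

def two_bone_ik(l1, l2, tx, ty, elbow_limits=(0.0, math.pi)):
    """Solve a planar two-bone IK chain rooted at the origin.

    Returns (shoulder_angle, elbow_angle) in radians, or None when the
    target is unreachable - either outside the annulus the chain can
    cover, or requiring an elbow angle outside the joint limits.
    """
    d = math.hypot(tx, ty)
    # Reachability: the chain only covers an annulus [|l1 - l2|, l1 + l2].
    if d > l1 + l2 or d < abs(l1 - l2):
        return None
    # Law of cosines: elbow = 0 is fully extended, pi is fully folded.
    cos_elbow = (d * d - l1 * l1 - l2 * l2) / (2 * l1 * l2)
    elbow = math.acos(max(-1.0, min(1.0, cos_elbow)))
    # A rotation limit on the elbow joint shrinks the workspace further.
    if not (elbow_limits[0] <= elbow <= elbow_limits[1]):
        return None
    shoulder = math.atan2(ty, tx) - math.atan2(l2 * math.sin(elbow),
                                               l1 + l2 * math.cos(elbow))
    return shoulder, elbow

# Reachable target:
print(two_bone_ik(1.0, 1.0, 1.5, 0.0) is not None)  # True
# Too far away - no solution exists:
print(two_bone_ik(1.0, 1.0, 2.5, 0.0))              # None
# Geometrically reachable, but blocked by an elbow rotation limit:
print(two_bone_ik(1.0, 1.0, 1.9, 0.0, elbow_limits=(math.pi / 2, math.pi)))  # None
```

Dragging a motion-path vertex to a spot where the solver returns None is exactly the "position can't be reached" failure described above; a sketch-based tool can instead reshape the whole arc within the reachable region.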

IMO, wouldn’t it be better to put in place something more advanced - potentially an improvement over what others have already done - than to be perpetually in a position where you’re “slavishly copying” the competition, constantly chasing an agenda set by someone else, and only trying to mirror/imitate what they did, in the box they set, on their terms? I know what I’d prefer to do! At worst, by pitching for the hills, we just end up failing, maybe “wasting” some effort in the process - but what we pick up in the meantime could well be enough to then fall back to just doing the same as everyone else…


(Continued from above…)

All this discussion ties in a bit with the broader strategy of where I currently think animation tools/systems should be going in the mid-long term:
“Simple core evaluation engine + A multitude of complex, specialised tools for getting particular things done”

From what I’ve seen - both in developing Blender, and in what’s happening in other parts of the industry - having monolithic evaluation engines which try to be smart ends up being trouble. It often boils down to the following problem: artists initially find it easy to use the engine to do X, then they try to push it to Y, but it can only ever give them Z (as the available inputs/controls can only go so far). But, because this is the core engine powering everything, you cannot just turn it off to “art direct” it: you’ve got a complicated basic structure that needs all that special machinery to function, and any tools you try to build on top of this engine must then also be made to work with that machinery. Some examples of things which fall into this category: Full Body IK, current ragdoll implementations, constraints (to a certain degree), procedural motion in the core engine, NLA.

Another prominent example is the problem of trying to animate ropes and strings - AFAIK, we can all pretty much agree that those things are hard to get right, and that trying to do them with rigs - where we’re still constrained by directionality, hierarchy, and a fixed control structure - ends up being trouble eventually. I’ve heard of quite a few examples of animators (again, not just Blender animators) struggling with the rigs for these, and spending time fighting their rigs to get the desired results. IMO, if we just stripped the core rigs here down to the bare basics needed to deform the geometry, and then focussed the rest of our attention on replaceable tools for controlling these rigs, we may be in a better situation. The rig being simple, and the tools being replaceable, are important here: it means that if a particular tool isn’t doing the job, we can always specially craft one that is up to the task, and have it work in that situation.

Pose Sculpting + Pose Sketching are my current attempt at putting in place a set of modern + more powerful tools to fill this void. Now, while the “sculpting” part is something new/different from all the other attempts out there (at least according to all the academic literature, industry news/discussions/demos I’ve come across, etc.), the sketching part is something that has been gaining traction and interest in industry over the past year or so. For reference, at Siggraph last year, Pixar gave two talks about sketch-based tools they’d been putting into their animation workflow: 1) one used for posing rigs - specifically long, chain-like appendages like necks + antennae + tails (e.g. Arlo’s neck), and 2) one for adjusting the silhouettes of deformed characters (e.g. much like the AniSculpt etc. stuff we’ve been seeing for years). I would also be surprised if the team developing Premo didn’t at least consider the possibility of putting similar tools in place - they did after all reportedly do quite a bit of brainstorming at the start about what kinds of things their animators would like for making their workflows better, and/or how the tech could help artists focus more on the art.

At this point, there isn’t much that can be done to persuade me from continuing work on Pose Sculpt / Sketch. I’ve waited several years to have enough time/skill/knowledge to give it a good shot - to satisfy a personal curiosity/belief that a certain kind of workflow can be put in place. For anyone who says, “that doesn’t look useful - it looks too imprecise/clumsy/etc.” - you’re right! I’d agree with you that what you’re seeing now isn’t what it really needs to be, and/or what it could be, and/or what I actually have in mind. It’s getting a hell of a lot closer now than it was during my first attempts a few years ago. The February (“Adjust” brush) update for Pose Sculpting brought that goal a lot closer (it solves one of the types of behaviour that we want, but it doesn’t solve the rest). The Pose Sketching (Direct Sketch -> Chain mapping) I’ve gotten working now brings us another step closer, and probably solves quite a few more common cases too. But again, in total, it’s still not the thing I’m really after. Adding Line of Action sketching will bring us closer, but even then, I think there’s still something missing - what exactly, I haven’t figured out, but that’s what this type of work is always about. Exploring the unknown. Challenging yourself to find solutions to problems you didn’t know existed until the point when they appear. No one said doing pioneering work is easy; and many haven’t seen it being done, so they probably won’t realise that setbacks, and living with a certain degree of uncertainty about whether the thing will ultimately succeed, are a natural part of the game.

As for other Blender animation maintenance things - I’ll continue to work on those. Smaller things that can be quickly accomplished, which give the highest amount of benefit vs amount of work required will be prioritised, and dealt with chunk by chunk.

When will we start seeing this beginning to get merged into Blender?
Wouldn’t it be better if people started testing it in real scenarios? :slight_smile: Some good feedback and ideas might arise.

The silhouette curve editing used in “Feast” would be incredibly useful to have. I can’t find the video of it anymore :frowning:

The core stuff will be merged when enough of it becomes usable/stable. An example of something that may still change is whether I keep this stuff in Pose Mode (to enable quicker tweaking of poses, while retaining use of all the other useful pose tools without having to make them work in other modes too), or whether it should go into a fully blown “Sculpt Mode”. At the moment, it’s almost reaching the point where it would make sense to put it into its own mode (to act as a drop-in replacement for things like grab, rotate, and scale, but with a different style of interaction).

As for when it will become available for more widespread testing: there are currently 3-4 technical issues which must be resolved before I think it’ll be suitable for people to test. For example, what may/may not have been visible in the video was that I was being extremely careful about how I used it in places; the reason is that the brush tool isn’t aware of custom shapes on bones, so on some rigs it may end up accidentally latching on to and affecting a bone you’re not even aware is in that vicinity. I’ve found that it is glitches like this which often turn out to be dealbreakers, but they also turn out to be simple things that I can quite easily pick up on myself.

But yes, you’re right that more real-world testing of these tools is something that will need to happen :slight_smile: Hjalti’s comment about why animators often end up just manually tweaking things channel by channel via the manipulator is a good example of a useful insight. What can we take from it? Well, it looks like we’ll need to pay a bit more attention to what’s going on with interpolation issues - not just for these tools, but for the animation tools in general (e.g. perhaps we need a tool to check for and flip rotations if they’re going to rotate through clearly invalid regions, such as intersecting another bone. Something like the “flip quaternions” tool, but which actually goes through detecting these glitches).
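The flip-detection half of that idea can be sketched pretty compactly (a minimal, hypothetical sketch in plain Python, not the actual Blender tool - function and variable names are invented for the example). The standard trick: q and -q encode the same rotation, so when two neighbouring quaternion keyframes land on opposite hemispheres (negative dot product), naive interpolation between them spins the long way around; negating the offending key restores shortest-path interpolation without changing any pose.

```python
def fix_quaternion_flips(keys):
    """Make a sequence of rotation keyframes interpolation-friendly.

    Each key is a (w, x, y, z) quaternion. q and -q describe the same
    rotation, but if two neighbouring keys sit on opposite hemispheres
    (negative dot product), interpolating between them takes the long
    way around. Negating the offending key fixes the glitch without
    changing any actual pose. Returns the corrected list plus the
    indices that were flipped (useful for reporting to the user).
    """
    fixed = [tuple(keys[0])]
    flipped = []
    for i, q in enumerate(keys[1:], start=1):
        prev = fixed[-1]
        dot = sum(a * b for a, b in zip(prev, q))
        if dot < 0.0:
            q = tuple(-c for c in q)  # same rotation, opposite sign
            flipped.append(i)
        fixed.append(tuple(q))
    return fixed, flipped

# Identity, then the "same" rotation stored with the opposite sign:
keys = [(1.0, 0.0, 0.0, 0.0), (-0.999, 0.01, 0.0, 0.0)]
fixed, flipped = fix_quaternion_flips(keys)
print(flipped)          # [1] - the second key was sign-flipped
print(fixed[1][0] > 0)  # True
```

Detecting the other kind of glitch mentioned above (rotating through a physically invalid region, like intersecting another bone) would need actual collision/pose checks per sampled frame, which is where the real work lies.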

EDIT: Hmm… I didn’t hear about such a tool being used on Feast. Perhaps it uses similar tech to the tool Pixar was using, which was based on the “SilSketch” paper.

@Aligorith > Here is the tool I am referring to:

he calls it “a little maya plugin we wrote”

Do you think it would be possible to implement in blender?



here is the technical paper:

hope it helps

SilSketch would be awesome to have, both for sculpting and for animated silhouette editing.

Silhouette is super important in animation, and such a feature would open a lot of doors for Blender animators.

I’m actually surprised that after 9 years nobody has implemented this in Blender. Meanwhile it is available in Maya - but only internally, to Disney animators.

Nice tool. I enjoyed Patrick’s presentation too. In passing, I think the attention to realism he spoke of is a trap that recent BI movies have also fallen into, at the expense of time spent telling a story and on ‘art’. Instead of fretting about how Frank’s wool was or wasn’t, for example, or seeking highly detailed materials, it would be better for a small studio to go for a project that has a degree of ‘artistic license’. Anyhow, carry on… just an armchair opinion :wink:

Wow, interesting thread! :slight_smile: I hadn’t seen SketchiMo nor Patrick Osborne’s presentation yet, so thank you for those - very cool. :slight_smile:

I’m following Aligorith’s developments with great interest and fully agree that it would be great to see animation workflows develop towards a more artistic, haptic and intuitive direction - as modelling has done with the rise of ZBrush.

What it boils down to is a hierarchy of needs, in which intuitiveness is important but control and reliability are of highest importance.
What animators want is to have as much precise control as possible within a humanly manageable number of variables (channels/controls/bones). Any developments to make the management of a rig more artistic and intuitive will be hugely helpful and beneficial to them, but they’ll be just as quickly discarded if it means giving up control, or if they work unreliably in a meaningful percentage of cases.

I’d love to do my posing or my motion trails via a sketching/sculpting tool, and it could save tons of time. But if those tools caused unwanted/unexpected flipping or other weirdness that I wouldn’t get via traditional rig manipulation methods - even if only 10% or even just 5% of the time - I’d most likely go back to the more time-intensive but more predictable (i.e. it behaves in a way that I expect it to) tool instead.

But I’m all in favour of exploring novel workflow paradigms. And I’m also in agreement that mocap retargeting and NLA aren’t really of great priority to me personally. They are very important tools in game animation pipelines, but not really used in feature animation (DreamWorks, Pixar, Disney) productions - and the latter is what I’m personally much more interested in.