GSoC 2011 - Improving Motion Capture workflow - feedback and updates

Hello everyone,
I’m opening this thread to connect with the community regarding my GSoC 2011 project - Improving Motion Capture workflow within Blender. This thread will contain updates on my progress and releases of early functionality, probably in the form of Python add-ons for Blender.
Most importantly, I hope the community will be unabashedly forthcoming with comments and feedback. Blender as a whole and GSoC projects in particular are for you, the users, and I believe in this approach.

My goal in this project is to provide semi-automatic tools for artists to deal more easily with motion capture data. While motion capture is very realistic, I feel artists need tools to easily change the original data so it has the look and feel they want for their project and characters. I see mocap in this context as a tool and base for animators, and do not want to get into the realistic physical and biological models that Blender’s more scientific users might be interested in.

What users can expect from the project is a comprehensive system for dealing with motion capture. The GUI component will give you easy access to:

  • Remapping the original “Performer” mocap rig to a user-created rig, allowing you to retarget animations to your character.
  • Converting the heavily-keyframed mocap animation to less dense keyframes, reflecting how a human animator works.
  • Auto-detecting loops within the animation. For example, if you import a walk cycle, at the touch of a button the system will convert this series of walks into a single walk that loops well.
  • Stride bone/object - You will be able to transfer the displacement of the root bone from the animation to a chosen stride bone or empty (see the sketch after this list). Most mocap anims move the Hip bone (the usual root) around world space, and often this movement is important for realism (the up and down movement in a walk cycle, jumps, etc.). However, being tied to the root bone is incompatible with most workflows, especially those using stride bones to move a character along a curve. Therefore the pace and tempo of this displacement can be transferred to a chosen bone or object, which handles the world displacement of the rig.
  • A layered animation system, where the raw retargeting data sits on the bottom layer, with various constraints applied on top of it to deal with artifacts arising from the retarget - for example, ensuring that the target’s feet do not cross the ground plane, or using IK so the target character can use the same footplant locations as the original anim, interact with objects, etc. On top of this constraint layer will be a “user-tweak” layer, allowing the artist to layer on custom keyframes with ease. My hope is that this layered approach will result in a non-destructive system, letting you go back to the original animation, turn constraints on and off with ease, etc. It is not fully decided how this will be implemented, but I’m leaning towards using the NLA system to create a usable action that contains more than one layer of animation.
  • Handling of batch importing/retargeting - if you have a number of animations that use the same original hierarchy, you will be able to save and load your remapping of the bones, enabling quick importing of other animations from the same mocap session. I also hope to include some presets for popular hierarchies, such as CMU’s huge mocap library.
  • Use for the BGE - The Blender Game Engine is a big part of Blender in my opinion, and as such I want to ensure that as much of the above functionality as possible will be compatible with animating for games and the BGE.
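To make the stride bone idea concrete, here is a minimal sketch of the kind of transfer I have in mind. The names (“Armature”, “Hip”, “Stride”) are placeholders, and a real implementation would also handle the bone-space to world-space conversion, which this ignores:

    import bpy

    # Move the root bone's location FCurves onto an action for a stride
    # empty, so the empty carries the world displacement instead of the hip.
    rig = bpy.data.objects["Armature"]      # placeholder rig name
    stride = bpy.data.objects["Stride"]     # placeholder empty, made beforehand

    src = rig.animation_data.action
    dst = bpy.data.actions.new("StrideAction")
    for fc in list(src.fcurves):
        if fc.data_path == 'pose.bones["Hip"].location':
            new_fc = dst.fcurves.new("location", fc.array_index)
            for kp in fc.keyframe_points:
                new_fc.keyframe_points.insert(kp.co[0], kp.co[1])
            src.fcurves.remove(fc)          # strip the displacement from the rig
    stride.animation_data_create().action = dst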

The looping and curve simplification features have already been coded in the form of Python scripts, and I’m planning to release them in add-on form in the coming days.

In sum, user feedback is very important to me. If you have any comments or suggestions regarding the above ideas and features, or how they should be implemented within Blender, please don’t hesitate to comment here or PM me. I’m also interested in your animation workflows, to ensure that my changes will mesh well with the popular methods of working in Blender.
A copy of my proposal and current progress is available at: http://wiki.blender.org/index.php/User:Benjycook

Benjy

Converting the heavily-keyframed mocap animation to less dense keyframes, reflecting how a human animator works.

There is an add-on for that:
http://www.blendercookie.com/2010/07/26/tip-simplify-animation-curves/

You can look into it and hardcode it in.

Wishing you the best of luck with this.

Are you planning to add support for Kinect? That would be awesome. If not, a good way to reuse a rig for several motion capture files would be welcome as well - an easy and fast tool to remap the bones.

+1 for the layered animation system… Maya has it and it’s SUPER useful for doing muscle jiggles / very finicky stuff that you don’t want to mess up your main curves with.

Congrats! Your proposal is very impressive.
And wow, what an ambitious and much-needed set of tools to implement!

You mention the use of the BGE. I am very interested in this, as I have been working on dynamic/physics-based character control for a while:
wiimote marionette (2.49), progranimation (2.5)
I’d be happy to help out with testing/reviewing code (not that my code’s that great or anything; it just sometimes helps to have another pair of eyes). Just let me know.

I also tried to get the Brekel-Kinect software working with the BGE. Brekel has a socket connection that sends joint data; I got a demo working, but the shoulder rotations are mixed up and I couldn’t tell if it was my code or Brekel’s. I assume it’s me - I have some kind of dysfunction when it comes to 3D angles. Not that you don’t have enough on your plate, but if you have access to a Kinect and want to give this a whack, I’d appreciate it.

- kind of a side rant:
Do you find the BVH format annoying? It’s not just that the acronym is irrelevant now that Biovision is dead, or that it has about 100 other uses. Mostly it’s just old. I really don’t like the one-frame-per-line thing; it seems crazy. The joint section isn’t horrible, but it could be so much more. I don’t know Maya/3ds Max, but there has to be a similar enough set of features to allow some rigging standards, constraints, IK, etc.
A new format might include things like loops and maybe even animation layers/constraint information. Of course this is way beyond the scope of your project; I’m just curious if anybody else feels this way.

Also - when importing BVHs from DANCE (Ari Shapiro’s dynamic animation thing), I found that Blender’s importer (and most other BVH programs I tried) freaks out if the BVH doesn’t have 3 channels of rotation for every joint. DANCE’s export has fewer than 3 on some bones (e.g. wrist, knee). I wrote a horrible script to add in the missing channels.


If you create an importer can you set it up to allow <3 channels?
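For reference, a rough sketch of what such a channel-padding script might look like (illustrative only, not my actual script). It assumes one CHANNELS statement per line and whitespace-separated motion rows, and relies on zero rotations being identity, so appending the missing channels at the end of a joint’s channel list leaves the pose unchanged:

    # Append missing rotation channels to each joint's CHANNELS line and
    # insert matching zero columns into every motion frame row.
    def pad_bvh(path_in, path_out):
        out, zero_cols, col = [], [], 0
        in_motion = False
        for line in open(path_in).read().splitlines():
            s = line.split()
            if s and s[0] == "CHANNELS":
                indent = line[:len(line) - len(line.lstrip())]
                n = int(s[1])
                have = set(c for c in s[2:] if c.endswith("rotation"))
                missing = [a + "rotation" for a in "XYZ"
                           if a + "rotation" not in have]
                # final column indices that need a zero in every frame row
                zero_cols += [col + n + i for i in range(len(missing))]
                col += n + len(missing)
                line = indent + "CHANNELS %d %s" % (
                    n + len(missing), " ".join(s[2:] + missing))
            elif s and s[0] == "Frame":   # the "Frame Time:" header line
                in_motion = True
            elif in_motion and s:         # a motion data row
                vals = s
                for c in zero_cols:       # ascending final indices, so
                    vals.insert(c, "0.000000")   # plain inserts land right
                line = " ".join(vals)
            out.append(line)
        open(path_out, "w").write("\n".join(out) + "\n")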

It’s very awesome that you’ll be releasing code along the way! I look forward to seeing this develop!

Benjy, are you doing mocap data cleaning also? Cleaning up footskate, marker noise, and other artifacts.

Freemind - I’m aware of the existing Simplify Curves add-on, but it’s not as accurate as I’d like. It was originally written for simplifying 3D curves and was then ported over to FCurves because the underlying math is the same. My code, while a bit slower than the existing one, boasts better overall accuracy and is tailored for FCurves. For example, it has an option to place the keyframes for all 3 rotations of a single bone/object on the same frame each time - the objective function it uses to evaluate the error takes into account all 3 curves at once, not one at a time.
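Roughly, the grouped objective works like this (a toy illustration, not the actual branch code):

    # Toy illustration of a grouped error metric: a candidate keyframe set
    # is evaluated against all channels of a property at once, so keys end
    # up on the same frames for, e.g., a bone's X, Y and Z rotation curves.
    def group_error(originals, simplified, frames):
        """originals/simplified: lists of callables mapping frame -> value."""
        worst = 0.0
        for f in frames:
            for orig, simp in zip(originals, simplified):
                worst = max(worst, abs(orig(f) - simp(f)))
        return worst  # accept the keyframe set only if worst <= allowed error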

SamCameron, teldredge - I do not own a Kinect, and my proposal is not specifically geared towards using it. That said, I understand there are free (if not open-source) tools that can convert Kinect captures to BVH. Obviously, once the data is in that format, Kinect users will benefit from the planned improvements. I recall a GSoC proposal regarding Kinect interfacing that was ultimately not accepted. If you’re interested, check the bf-committers mailing list; I believe it was posted there. Perhaps you could contact the student and work on it anyway.
Regarding the BVH format, I’ve been approached by a number of people who use other formats to store anim/mocap data. My plan is that these improvements will be available whether you use BVH or not. A possible solution is importing the data however you like, selecting the mocap rig, then the user rig, and running some kind of “retarget to active rig” operator.

teldredge - Regarding your work in the BGE - it’s certainly very interesting, but as I mentioned in my original post, I’m aiming towards an artist/keyframing-based approach to animation. My main concern with the BGE is making my tools work with its animation system as well.
I’m unsure how well the BGE currently supports things like drivers, NLA, and constraints. When I worked with it during 2.4x, its animation system was more limited than “regular” Blender’s. That said, I have no idea what the current state of things is, especially once fellow GSoC’er Mitchell Stokes finishes his project. No matter how it turns out, I’m sure there will be a “bake” option or something for BGE users - motion capture is too valuable in game development for these improvements not to apply to the BGE. If you’re interested in combining motion capture and procedural animation, check this out: http://www.youtube.com/watch?v=WPoXNL_8Z5w (my work has nothing to do with this, but maybe when I’m done with this :wink: )

LetterRip - Marker cleanup will be addressed in the FCurve simplification step. If the algorithm detects a very sharp turn that exists on only one or a few channels, it will smooth it out, perhaps asking for user confirmation. Footskate can be repaired within the outlined constraint system, via IK constraints that keep the foot planted on the ground when it “should” be. This could be done easily and quickly by the user, or automatically (there is some literature in my proposal dealing specifically with this issue). Footskate is also heavily tied to the stride bone: it shows up when the animation and its world displacement are not in sync. I’m currently examining ways of dealing with this, but first I’m coding the actual retargeting and testing ideas.
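As a rough illustration of the foot-plant idea (placeholder names and made-up frame numbers, not project code):

    import bpy

    # Pin "Foot.L" to an empty at the plant location by keying a Copy
    # Location constraint's influence on during the plant, off elsewhere.
    rig = bpy.data.objects["Armature"]
    foot = rig.pose.bones["Foot.L"]
    con = foot.constraints.new('COPY_LOCATION')
    con.target = bpy.data.objects["Plant.L"]    # empty at the plant location

    for frame, influence in [(9, 0.0), (10, 1.0), (24, 1.0), (25, 0.0)]:
        con.influence = influence
        con.keyframe_insert("influence", frame=frame)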

Benjy, I’d keep simplification and reduction as separate steps.

Regarding Kinect, one of the GSoC proposals accepted for OpenCV will create an open-source library for estimating pose from depth information. The student selected for it has a very strong background in the area, so there’s a high chance of success.

Animation layers would be awesome! It all sounds ambitious, good luck.

@teldredge Just curious - did you show your work to Brekel? Maybe he could be interested.

He sent me the socket info. I posted the script in his forum but I think this needs attention from another blenderhead.

@Benjy - thanks for the link. Cool stuff.

Update:
The handy tools I mentioned regarding looping and curve simplification have been coded and are available on the Pepper branch. You can either build it yourself or just download mocap_tools.py (https://svn.blender.org/svnroot/bf-blender/branches/soc-2011-pepper/release/scripts/modules/mocap_tools.py) from svn and put it in your 2.57/scripts/modules folder.

The functions you should use are fcurves_simplify and autoloop_anim. autoloop_anim requires the fcurves you want looped to be selected (for a walk anim, select all of them except the root location curves). It accepts no arguments.
fcurves_simplify has 3 arguments:

  • sel_opt: either “sel” or “all”, for which curves to affect - the selected ones or all of them.
  • error: the maximum error allowed, expressed as the desired percentage divided by 10000 (so 20% = 0.0020).
  • group_mode: boolean, whether to analyze each curve separately or in groups, where a group is all curves that affect the same property (e.g. a bone’s X, Y, Z rotation).

The defaults are sel_opt=“all”, error=0.002, group_mode=True, which will simplify all curves within 20% maximum error, with group mode active. 20% sounds like a lot, but remember that we are talking about maximum error, and I find that the animation still looks the same at even higher levels.
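Put together, a minimal usage example from the Python console (assuming mocap_tools.py sits in 2.57/scripts/modules as described above):

    from mocap_tools import fcurves_simplify, autoloop_anim

    # Simplify all FCurves within 20% maximum error, with group mode active.
    fcurves_simplify(sel_opt="all", error=0.002, group_mode=True)

    # Select the curves to loop first (e.g. everything except the root
    # location curves of a walk cycle), then call with no arguments:
    autoloop_anim()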

Retargeting work has begun and continues this week. UI work for all the functionality comes last, but I hope to be putting up screenies and vids soon, and whatever functionality is done will be committed to Pepper, so keep an eye on that. GraphicAll.org has been hosting builds for Pepper and the other GSoC branches. Builders: your work is really appreciated - thank you! (http://www.graphicall.org/gsoc)

I made a mocap tool for MakeHuman a while ago. Originally it only worked with MH’s mhx rig, but it was later generalized to fairly arbitrary rigs; the Sintel light and George rigs have been tested. I’m not sure if the scripts still work with the most up-to-date builds, but I have tested with Blender 2.57. Info can be found at:

http://makehuman.blogspot.com/2011/03/mocap-tool-for-custom-armatures.html
http://sites.google.com/site/makehumandocs/blender-export-and-mhx/mocap-tool
http://sites.google.com/site/makehumandocs/blender-export-and-mhx/mocap-tool/custom-rigs

The last page also contains links to some quick animations. The code is not entirely transparent, but perhaps you can find some ideas in it.

I had some problems making retargeting work with file linking. The problem is that to do retargeting you need to know the roll angles of the target rig, but those are EditBone attributes, which are not available in a proxified rig. I solved that by storing the EditBone roll angle in a Bone property in the asset file.
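In sketch form, the workaround looks something like this (run on the armature in the asset file before linking; “TargetRig” is a placeholder name):

    import bpy

    # Copy each EditBone's roll into a custom property on the matching
    # Bone, which, unlike EditBone data, is still readable via a proxy.
    obj = bpy.data.objects["TargetRig"]
    bpy.context.scene.objects.active = obj      # 2.5x API
    bpy.ops.object.mode_set(mode='EDIT')
    rolls = dict((eb.name, eb.roll) for eb in obj.data.edit_bones)
    bpy.ops.object.mode_set(mode='OBJECT')
    for name, roll in rolls.items():
        obj.data.bones[name]["roll"] = roll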

May I ask you to implement something?
Many animators (me included) have asked where they can find operations called “Add/Remove inbetween” (like in Maya). They are for fine-tuning timing.
I am not a programmer, but I don’t think these functions would be very difficult to implement.

@BenjyCook
+1 for layered animation. I’ve used it in LightWave and MotionBuilder; it would be an excellent addition to Blender.
If I understand Shader2 correctly, the ability to edit keyframes directly from the timeline, as opposed to editing them in the dope sheet, would be pretty handy.

When working with animation layers in LightWave or MotionBuilder, I’ve always used the timeline or f-curves for editing. In Blender you can’t edit the timeline directly, so I tend to edit in the dope sheet. How will animation layers tie in with the dope sheet? For example, would the layers be placed in a hierarchy for each object, or would each layer have its own dope sheet?

ThomasL: I am familiar with your work on MakeHuman (mentioned in my proposal!). Thanks for your offer about getting ideas from it, I definitely will. With regards to the file linking, I’ll talk to the devs, perhaps we could expose the needed functions/data in the API.

Shader2: General animation tools are not part of my project’s goal. You should look at Joshua Leung’s (Aligorith) work (http://aligorith.blogspot.com/) this summer.

sx-1: I plan on using the NLA system to create a stack of layers with various blending modes between them. Thus, each layer would have its own dope sheet.
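Roughly along these lines (a sketch of the direction, not final code; the action names are placeholders):

    import bpy

    # Stack two actions in the NLA: the raw retarget on the bottom track,
    # user tweaks blended additively on top of it.
    obj = bpy.context.active_object
    ad = obj.animation_data or obj.animation_data_create()

    base = ad.nla_tracks.new()
    base.name = "Retarget (base layer)"
    base.strips.new("retarget", 1, bpy.data.actions["RetargetAction"])

    tweaks = ad.nla_tracks.new()
    tweaks.name = "User tweaks"
    strip = tweaks.strips.new("tweaks", 1, bpy.data.actions["TweakAction"])
    strip.blend_type = 'ADD'    # layer on top instead of replacing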

@BenjyCook
I suggested something similar on another thread here. At the time I believed it wasn’t possible to blend actions on the fly. Is this “blending actions on the fly” what you hope to implement?

Hello everyone,
I’ve been coding away; you can see some early results in the Pepper branch.
To see what’ll be happening next and get more details on the project, check out http://wiki.blender.org/index.php/User:Benjycook/GSOC/Progress
That contains some screenshot mockups, some vids of existing and future work, and lots of implementation details.
I am particularly interested in feedback concerning a certain feature I’m debating whether to include:

Qualitative changes (if included - depending on available development time and usefulness): a system similar to a sound equalizer that increases or decreases certain frequencies in the animation, modifying its tempo and character via an “equalizer” interface.

You can see and read more about this idea here (warning: contains math!), and let me know whether you feel this would be worth my time and effort, or whether I should focus on other issues instead.
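To give a feel for the idea, here is a toy version in NumPy (illustrative only; the band splits and gains are arbitrary):

    import numpy as np

    # Sample a curve once per frame, scale its frequency bands like an
    # equalizer, and resynthesize the samples.
    def equalize(samples, gains):
        """samples: 1D array of curve values; gains: per-band multipliers,
        ordered from low to high frequency."""
        spectrum = np.fft.rfft(samples)
        bands = np.array_split(np.arange(len(spectrum)), len(gains))
        for band, gain in zip(bands, gains):
            spectrum[band] *= gain
        return np.fft.irfft(spectrum, n=len(samples))

    # e.g. boosting mid frequencies to exaggerate secondary motion:
    # new_values = equalize(values, gains=[1.0, 1.5, 1.2, 1.0])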

Benjy

This is great news.

I have been working a lot with mocap. Unfortunately I am in the middle of another project, so I am not sure how much time I can devote. But the list you gave above is pretty much my list of what I want to accomplish with it, along with a few other things (I don’t mean programming, I mean rigging and using the NLA). My main inspiration (technically) comes from what they have in Motion Builder, and it sounds a lot like what you are proposing. I had already started on a rig built around the CMU rig that I could control with IK, using IK/FK switching to take over control of the mocap (FK) rig and replace it or have additive animation on top of it. It is not the best solution, but in essence that is how the MB setup works, and it is pretty much what you are proposing with retargeting. This was something I was working to accomplish manually by building my own rig.

I hope I can get some time to provide some feedback. I think this would be a great asset to Blender.

One thing I want to get going is the ability to access mocap data as a library of motions, and thus be able to simply map and/or blend various motions onto different parts of a character.

Ideally I see this as being able to work in a non-linear non destructive way. And also to maintain the ability to further tweak the animation in layers.

There is a lot of potential here and I could go on. But I’ll leave it at that for now.