Face capture test

+1 for that :smiley:

Haha, agreed! :stuck_out_tongue:

What you did here is definitely the best face animation done in Blender. It is on the same level as the Avatar facial animation.

Thanks for the link to the video of the source material and the transcript.

Thank you all for being so supportive of my personal project. I was actually aiming for two things: getting decent results with the animation, and demonstrating that the interface was good enough for professional facial animation. I think you guys are the ones who can answer whether that goal has been achieved, by using the interface in your own pipelines.

See you around, guys! :slight_smile: On to the next development/project.

I have just come back from a motion capture demonstration in Buenos Aires performed by my studio. I am starting to realize how important it is to get the tracking algorithms working in real time in order to turn this prototype into a professional animation tool. I'll explain:

  • Imagine you could record a take and review it right after the shot: you would get a whole animation take with the possibility of checking the facial acting just after your actor performed the action, or even while they are performing it. That could save a lot of time in the animation pipeline.
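As a rough illustration of the real-time idea (this is my own hypothetical sketch, not anything from Blender's tracker): each incoming frame would need to be scanned for marker positions fast enough to preview the take immediately. A minimal pure-Python pass that finds the centroids of bright marker blobs in one frame could look like this:

```python
# Hypothetical sketch: per-frame marker detection for a real-time
# capture loop. A frame is a 2D list of brightness values (0-255);
# bright facial markers become connected components above a threshold.

def find_markers(frame, threshold=200):
    """Return (row, col) centroids of bright blobs in one frame."""
    h, w = len(frame), len(frame[0])
    seen = [[False] * w for _ in range(h)]
    centroids = []
    for r in range(h):
        for c in range(w):
            if frame[r][c] >= threshold and not seen[r][c]:
                # Flood-fill one connected blob, collecting its pixels.
                stack, pixels = [(r, c)], []
                seen[r][c] = True
                while stack:
                    y, x = stack.pop()
                    pixels.append((y, x))
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if (0 <= ny < h and 0 <= nx < w
                                and frame[ny][nx] >= threshold
                                and not seen[ny][nx]):
                            seen[ny][nx] = True
                            stack.append((ny, nx))
                cy = sum(p[0] for p in pixels) / len(pixels)
                cx = sum(p[1] for p in pixels) / len(pixels)
                centroids.append((cy, cx))
    return centroids

# One synthetic 8x8 frame with two 2x2 "markers".
frame = [[0] * 8 for _ in range(8)]
for y, x in ((1, 1), (1, 2), (2, 1), (2, 2), (5, 5), (5, 6), (6, 5), (6, 6)):
    frame[y][x] = 255
print(find_markers(frame))  # → [(1.5, 1.5), (5.5, 5.5)]
```

In a real pipeline this per-frame step would run on the live camera feed, so the centroid list for each frame could drive the rig while the actor is still performing.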

Let me know your thoughts on this subject, guys. I actually think this could be something BIG in the animation industry; I think I have a vision.

I wonder how much front-end technology integration is required. Is the camera or source media platform agnostic? Can you solve the camera location issue to record performances in a somewhat flexible way? I agree that open, interactive performance capture is cool. Could you develop a library of detailed facial motion that is triggered by a less robust tracker?

The head compensation has been solved by Christopher Chrerrett; let me show you what he has achieved: http://www.youtube.com/watch?v=filTu_7mkbw

The library is an excellent idea, I think, and I am excited about producing it. With the armature system of the Face Rig Prototype that could be possible, but for expressions, not for motion.
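To make the expression-library idea concrete (a hypothetical sketch of mine, with invented shape-key names, not the actual Face Rig Prototype): a coarse tracker would only need to emit a rough label and an intensity, and each label would trigger a pre-authored set of detailed shape-key weights from the library.

```python
# Hypothetical expression library: coarse tracker labels map to
# pre-authored, detailed shape-key weight sets (names are invented).

EXPRESSION_LIBRARY = {
    "smile": {"mouth_corner_up": 0.8, "cheek_raise": 0.5, "eye_squint": 0.3},
    "frown": {"mouth_corner_down": 0.7, "brow_lower": 0.6},
    "neutral": {},
}

def weights_for(label, intensity=1.0):
    """Scale the stored detailed expression by the tracker's intensity."""
    base = EXPRESSION_LIBRARY.get(label, {})
    return {key: value * intensity for key, value in base.items()}

# A half-strength smile from a low-confidence tracker reading:
print(weights_for("smile", 0.5))
```

The point is that the tracker itself stays simple; all the detail lives in the authored library, which fits the "expressions, not motion" limitation above.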

Hello again, guys. I am working on a helmet with a camera for facial motion capture. Here are the results of a test with the Blender tracking algorithm; I hope you enjoy it.

http://s20.postimg.org/jdowhx6jd/2013_07_18_18_56_06.jpg
http://s20.postimg.org/ajy00tjkp/2013_07_18_18_54_30.jpg

Looking good so far, militoarg! I have a link for you that may be helpful, if you are still working on a helmet cam rig.
http://1k0.blogspot.com/

I don't believe they're using Blender, but perhaps some of the information may still be useful. Can't wait to see more from you!

Thank you very much James, I really appreciate your support.
I agree with you; that does not look like Blender. Nevertheless, the helmet design is really interesting. The most important difference is that I added a powerful light source, because I need contrast, but unfortunately that is what makes the helmet quite heavy.

Are you using an LED on the helmet rig because of the lighting of the room (for example, if the windows do not let in enough sunlight, or the lamps in the room are not bright enough)? Would it be possible to set up lights on stands to illuminate your face instead? I would assume the ideal (cheapest) method would be to record outside during the day, if you already had a laptop.

You are right, the intensity of the light source is an issue, but there are actually two important reasons:
1 Stable light intensity, to keep the recorded image clean.
2 Stable light position, to prevent marker occlusion by shadows.

I see what you mean. As an alternative, you could use fluorescent shop lights. They can be bought very cheaply, and provide very soft, even lighting to an area the size of your face. If using one fixture, I would place it directly in front of you, perhaps angled slightly if your helmet rig casts a shadow on your face. A more ideal method would be to use two fixtures, with a foot or two of separation between them, with both angled towards your face. This would provide both shadowless light, and very even illumination.

If using one light, it would work best in a horizontal position. Using two lights, vertically positioned would likely be best.

Of course, you may prefer the light on the helmet for other reasons, I'm just trying to think of how you could move the light source off of it to lessen the strain on the neck.

It is absolutely necessary to do that, because the helmet is very uncomfortable. The image is quite good, but I doubt anyone would want to wear it for more than 10 minutes of capture.

awesome tut thanks man :yes:

Thank you very much. I'll try to produce a little scene as part of my demo reel, combining facial motion capture and keyframe animation. So there will be much more from me for a while :slight_smile:

Looks better than the cameras they used in Avatar.