Lip sync plans for Blender...

For all you people who want to make a movie of rigidly chatterboxing MH models, that’s fine with me; it’ll make the rest of us look better.

Just because you used MakeHuman to speed things up does not mean your work will be rigid or chatterboxing, or even “look” like a MakeHuman model. You can put in a little effort after you’ve created your model in MakeHuman to reach the same quality in less time.

As long as you can edit what is generated by automated lip sync, it would be useful for background work and “organic attention”. Don’t you think?

That is all I want to say.

Koba

I said ‘rigidly chatterboxing’ to refer to automated lip sync, not to the MH models themselves. It is entirely possible that good, fluid automated lip sync will come out of this feature; it’s just that I have never seen that happen before.

EDIT: By the way, just because I’m regarding this cynically doesn’t mean I have to spoil anyone else’s fun. It’s just my opinion, is all. :stuck_out_tongue:

I’m just saying people are almost asking for the option to type in the console…:
" >blender -mycharacter - animate - dancing…" and wait for the program to make the animation for you…

“>blender -Character -frog -model it – Thank you”

">blender -texture -please make the UV’s -do it quickly I gotta go to the grocery!!!"

If that’s what you need… good for you…

I choose to make my own models, animation, rigs…

Good luck!!!

Mike_S - I agree with everything you said. Thank you for saving me the time to say it :wink:

Koba - We seem to be on the same wavelength, too :slight_smile:

On the matter of “automation = no art /bad quality”, the point of automation IS to allow the user more focus on quality! Any artist worth his/her salt spends long hours picking out the proper tools to do more, and really good artists shape even their tools to let them do what they do easier, allowing them to put more effort into advancing their field. Claiming that lipsync will produce bad results by definition is like saying computers ruined scientific research; bad lipsyncing will ruin animation, but automation in itself is a plus.

As for the button that creates your movie, don’t think it impossible; just as Amazon can (attempt to) figure out what books you would like, a simple algorithm based on your earlier favorite shows and movies could do it for films. Now we just need the function that automates the production of a manuscript and creates the animation. And voices… oh, those horrible computer voices :stuck_out_tongue:

I’m sure it’ll be a great help in that it’ll enable people who are actually GOOD at lip sync to do it faster. But the people who don’t know how to animate will just use it raw, so that’s what we’ll see the most of. :stuck_out_tongue:

Anyway, please do not mentally associate me with bataraza, because, though I am sort of on the side of his viewpoint in a few circumstances, I’m not bringing that up here.

Are my eyes deceiving me? A group of nice people is going out of their way to add a feature that will allow better animations to be made faster, and people are complaining about things like purity and organic-ness? Purity is for drinking water, not art.

Automated lip sync. Sort of like the Poser or MakeHuman of animation. Great for quick background stuff, but crap for anything that requires any sort of care or organic attention.
Glad to see you’ve tested and tried it. It, like Poser or MakeHuman, is an optional tool, one whose quality cannot be tested at this stage.

I agree with BlackBoe… I think it would be perfect to make a movie like Monster House… where everything is mechanical and automated… but where everything loses the “art”(?) of making animation…
Like I said, it is a tool. By your logic, you should not be allowed to use the walk cycle features. Or subsurf. Or the compositor. They do all the work for you! You should have to manually create the vertices for subdivision; no sense letting a tool do it for you! You should have to manually pose every single vertex, render, move again, render, etc., then splice all your rendered frames together to make your movie. Otherwise, it isn’t art! It’s mechanical and automated!

I don’t understand people waiting for this kind of stuff when they can’t make a simple bone animation…??? No offence
Speaking only on behalf of myself, I can say that making bone rigs is exceedingly complicated, and outdated thanks to shape keys. And even using shape keys, or drivers, or whatevers, for lip sync is tedious and generally gives awful results. At least in my experience. I welcome a tool that is there to reduce stress and provide better results.

I’m gonna see, in a couple of years, someone asking for a button that makes the whole animation for you…

What a fantastic exaggeration. I mean, we’re somewhat there already, what with things like Spore, but I doubt anything like that will ever replace hand-created animation.

To the people saying this is important and necessary… try to make ManCandy speak with stuff like that without losing personality…
Who said using this will make things lose personality? This only does the lip sync; it’s still up to the artist to do facial expressions, eye movement, hand movement, etc., etc. I hardly believe it will detract from the personality at all.

I think Blender needs a loooooot of improvements on the tools it already has…

Examples???
Improve lattices (modelled lattices, following the shape you need…); hooks (deformations and parenting aren’t working properly right now); crazy-space vertices (not solved); bones parented to a vertex or similar… etc., etc., and a large etc.…
If these things are so necessary for you (seem fine to me), learn C++ and develop them! The people behind this lip sync project have learned C++ and are using it to develop something that they feel will greatly benefit our community. Not to mention in the PDF they state they aren’t quite up to par with Blender’s current coders, and therefore would probably be unable to fix some of those features you feel are broken.

I think sometime we should stop requesting features and request improvements as I said before… don’t you think…?
This is an improvement. If you feel otherwise, well, you do not have to use it!

But I understand it may be interesting for someone who wants fast, easy, prefabricated products.
Does this mean you’re opposed to fluid and softbody solvers? Fast, easy, and prefabricated. It sure does make sense to have artists spend lots of time trying to recreate something like cloth or fluid.

Also, you are aware the user will still have to make the model and the different shape keys before they’ll be able to use this? All it’s doing is putting the things you made together in order for you, right? Kind of like the NLA editor. You must be opposed to that, too, right?

sometimes it’s more important to finish a project than to be painstakingly artistically brilliant over every detail.
I agree! I imagine Elephant’s Dream could have been much better (animation wise) if they didn’t have to spend so much time on lip sync.

I just said it almost definitely wouldn’t be worth half the same quality as a properly done hand-animated piece. For all you people who want to make a movie of rigidly chatterboxing MH models, that’s fine with me; it’ll make the rest of us look better. :stuck_out_tongue:
See an above response of mine. Just because you didn’t go through the hellish process of hand animating lip sync, which has been enough to turn me off from several projects, doesn’t mean your project is tainted, and certainly doesn’t have to mean it will look mechanized and ugly.

I said ‘rigidly chatterboxing’ to refer to automated lip sync, not to the MH models themselves. It is entirely possible that good, fluid automated lip sync will come out of this feature; it’s just that I have never seen that happen before.

Because Blender never really had the feature anyway. Besides, how many projects with automated lip sync have you seen coming from Blender? I honestly cannot think of many.

I’m just saying people are almost asking for the option to type in the console…:
" >blender -mycharacter - animate - dancing…" and wait for the program to make the animation for you…
Again, a fantastic exaggeration.

">blender -texture -please make the UV’s -do it quickly I gotta go to the grocery!!!"
Manually unwrapping a model (I mean without calculations) would probably be the most hellish process ever, and I don’t think anyone would honestly rather Blender did not have its UV mapping features.

I choose to make my own models, animation, rigs…
Good, I guess it’s important to you. It does not mean you have to oppose things that are important to us.

I’m sure it’ll be a great help in that it’ll enable people who are actually GOOD at lip sync to do it faster. But the people who don’t know how to animate will just use it raw, so that’s what we’ll see the most of. :stuck_out_tongue:
In the same way that Blender allows people who are actually good at modeling to make good models, and faster.

Summary: This is a good feature, and many people want it. If you’re too hung up on the ‘purity’ of your art, you do not have to use it.

I’m not ‘hung up’. Anyway, I’ve never seen it done in blender, but I’ve seen it done elsewhere. Please do not assume that I am an idiot. I never mentioned purity either.

Also, you have far, far too much time on your hands. :stuck_out_tongue:

No, not really. I just spent about fifteen minutes on that.

Just to tell someone you disagree with them? Heh, as I said, far too much free time on your hands.

As to the project, good on them. I hope they learn lots and find a way to use driven shape keys.

It wasn’t ad hominem; I had already addressed my answer to his argument before I said he took too much time. The taking-too-much-time thing was an aside.

EDIT: I should probably mention that the thing about it not ending up very good was an opinion. A lot of people have opinions, a lot of people think theirs are the truth, and I happen to think mine is true, through observation of what computers are capable of. Other people may think I am wrong, and that is their prerogative, but if I fight more rigorously for my point, it’s because I’m pretty much alone in a sea of people who think otherwise. Anyway, if we’re talking about ‘ad hominem’, fuzzmaster’s inclusion of me in his line-by-line ‘you and your arguments all suck’ festival could be construed as not altogether undirected. Anyway, I said what I said: that so far as I can tell, most of these efforts aren’t going to be very good, so don’t count on them as the do-all-end-all (which quite a few people have been doing, you’ll notice). So I really see no more point in drawing out my masochistic quest to explain myself. w007.

Also note: I don’t bear grudges outside of threads, and I hope you won’t either.

I’ll quote Han Solo and finish with: “Sorry about the mess.” :stuck_out_tongue: But I reiterate: it wasn’t a matter of ‘purity’ for me, so I hope you’ll at least attempt to consider what I said.

No offence taken or meant. :slight_smile: I was just trying to nudge the discussion back into what the thing may actually do and what people in the community may want it to do. IMO, your point is valid, as a lipsync tool is really only a starting point, it won’t do it all. With that said, if it can cut down some drudgery work, all the better.

Anyhoo, from reading the PDF it doesn’t look like they’ve addressed the issue of driven shape keys. Unless they do something about that, it doesn’t seem much different from BlenderLipSynchro, apart from integration into the UI.

Hi everybody,

thanks for your interest in our lip sync project.
We have read your comments, and we really want our feature to be useful, so any feedback is welcome.

We have decided to use actions for the animation, to allow everyone to use their own rigging (shape actions, driven shape keys, lattices…)

Our wiki page has been updated:

http://wiki.blender.org/index.php/Requests/lipSync

We also have a temporary demo of our current progress.

It uses only a few basic shapes to show how the project is going so far; many things are still under development.

Right-click >> “save target as”, and open it with VLC: http://discoveryblender.tuxfamily.org/lipsync-wip-demo.mov

bye

A few years back I worked on a project to automate medical records. The doctors started a rebellion fearing that they’d be replaced.

Things went more smoothly when we explained that our research went into getting rid of redundant data requests and making their work more efficient, not trying replace them. Not that we would know how to do that, anyway.

Auto-lipsynch is kind of the same. If we rely on it blindly, as BlackBoe points out, we end up with mechanical, unrepresentative animation. As one quickly finds out, we need to soften or enhance our RVKs by hand to match certain accents and correct undue emphasis (our mouths tend to move as little as possible).

Do you need to use auto-lipsynching? Well, I may be doing this the wrong way, but I make my phoneme RVKs distinctive so I can auto-synch and then lower (or raise!) curve points in the key track as necessary to humanize the character’s speech. I’m sparing myself from that bit of mechanical work and the exposure sheet, even if I still end up visiting every keyframe.
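For anyone curious what that curve-point tweaking amounts to, here is a minimal sketch in plain Python (not Blender’s API; the keyframe values and scale factors are invented for illustration) of auto-generated RVK keys being softened by hand-chosen per-key factors:

```python
# Hypothetical auto-synched lip-sync keyframes: (frame, RVK value 0..1).
# An auto-sync pass tends to drive every phoneme to its full pose.
auto_keys = [(1, 0.0), (4, 1.0), (7, 0.9), (10, 1.0), (13, 0.0)]

def humanize(keys, scales):
    """Lower (or raise) each key's amplitude by a per-key factor,
    mimicking manual tweaks to curve points in the key track."""
    return [(frame, round(value * scale, 3))
            for (frame, value), scale in zip(keys, scales)]

# Damp the middle of a sustained sound so the mouth does not pump
# to full open on every single phoneme.
tweaked = humanize(auto_keys, [1.0, 0.8, 0.5, 0.7, 1.0])
print(tweaked)  # the mechanical 1.0 peaks become softer, varied values
```

The point is only that the auto-sync output is a starting curve, not a finished performance; every key can still be visited and reshaped by hand.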

I can understand people who’d only use Magpie or Papagayo to generate an X-sheet and then proceed to animate and give their characters their own, personal, artificial-ingredient-free touch. Since one ends up visiting every keyframe, they might as well…

Honestly, neither of these two approaches seems wrong to me; it sounds rather like a matter of choice.

Looks awesome. Thanks for your effort!

Many hobbyists have limited time, and not everyone minds limitations if it means they can be productive. I use every shortcut and ‘speed up’ tool I can! I am not trying to rival big-studio quality; I just want to actually produce stuff!
I know my stuff’s not that great, but I enjoy it! Kev: a simple lip sync thrown together quickly in Papagayo
http://video.google.co.uk/videoplay?docid=-5753017105216370276&q=kev+the+caterpillar&total=1&start=0&num=10&so=0&type=search&plindex=0

I have seen the SWF in that project; only the lip area is moving. I have one commercial lip sync studio, and in it the eyes, mouth, and neck move along with the lips during speech. The problem is that that product is not compatible with Blender. I want to use Blender for everything, because here everything is open. With that commercial lip sync studio, you can only use that company’s 3D characters, with a lot of legal bindings. Please keep in mind that in lip sync, not only the lips move; the eyes and neck also move according to the speech.

wow, let’s hope this project gets off the ground solidly.
Scripts often give us the opportunity to delve into areas that some normally wouldn’t touch, thus enhancing our learning, knowledge, and potential job skill set.
keep up the good work on all levels of tools that we can use in Blender.

On a side note, here is a hypothetical example:
If I were to work at a major animation studio, some basic skills in all areas would be helpful but not necessary. I would create my models and then have someone else rig them,
or rig someone else’s models, then send them to the lip sync person, who would send them to the coders to customize the lip sync code per character for tweaking, then off to the advanced lip sync person (or back to me) for final adjustments, then to the animator, who would send it to and fro with the lip sync person, and then to and fro with the texture person… etc.
As I said, that was hypothetical.
The point is that each person is using different tools, building upon others’ work, and specializing, often, in a single area of 3D, building upon other people’s work to create the final product.
There are few people that can “do it all themselves”.
Fewer people that can do it all themselves well.
As I don’t work for a large 3D company, the more tools I/you/we can have in our kits to help us through, the better. I understand that now.

Did one person create EDream? Of course not; many people (some specializing in certain areas, most or all with a good basic working knowledge of all areas as well) probably used whatever tools they had at their disposal to get through to the final product.

So good luck to the lip sync people, and I hope they succeed.
I look forward to testing it on my sculpted, custom-rigged MH models.:stuck_out_tongue:
m.a.:evilgrin:

We are stabilising the current implementation.
But after that, we will work on avoiding mechanical animation:
In the code there is a constant that controls the smoothing between phonemes. We can drive it with an IPO curve, so the animation will be less uniform.
In the same way, we will try to control the mouth amplitude with an IPO curve, as was suggested to us in an email.
We haven’t found another way yet, but we hope this is a good one.
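As a rough illustration of what those two controls could do (plain Python, not the actual Blender code; the function, its parameters, and the values are invented for this sketch), blending between two phoneme pose values with a variable smoothing amount and an amplitude multiplier looks something like this:

```python
def blend_phonemes(a, b, t, smooth, amplitude):
    """Interpolate from phoneme pose value `a` to `b`.
    t:         0..1 position between the two phonemes.
    smooth:    0..1; larger means softer ease-in/ease-out. Instead of
               one hard-coded constant, this could be sampled from a
               curve per frame, making the transitions less uniform.
    amplitude: overall mouth-opening multiplier, also curve-driven.
    """
    eased = t * t * (3.0 - 2.0 * t)          # smoothstep easing
    w = (1.0 - smooth) * t + smooth * eased  # mix linear and eased
    return amplitude * (a + (b - a) * w)

# Fully linear, full amplitude: halfway is the plain midpoint.
print(blend_phonemes(0.0, 1.0, 0.5, smooth=0.0, amplitude=1.0))  # prints 0.5

# Damped amplitude narrows the whole motion, softening mechanical peaks.
print(blend_phonemes(0.0, 1.0, 0.5, smooth=1.0, amplitude=0.6))  # prints 0.3
```

Varying `smooth` and `amplitude` over time, rather than keeping them constant, is exactly what makes the mouth motion stop looking uniform.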

About the extra face movements, we can imagine the user adding an “eye” strip in the LipSync editor. But for now, we prefer not to attempt too much and do it badly. Development in Blender takes a lot of motivation, since the design is… :@ … good :smiley:

The alternative is an action strip in the NLA to manage the extra face movements. I have no ideas for improving the workflow if it were integrated into the lipsync editor, but maybe you have some (for further development…)?

ps: I just forgot to log out… Normally I would have posted this message from the sharpteam shared account, for clarity.