PDA

View Full Version : Lip sync plans for Blender...



Koba
01-Apr-07, 07:16
Hi, I just thought you may like to know that there are plans for lip synch in Blender:


http://wiki.blender.org/index.php/Requests/lipSync

There is a .pdf about it here:


http://marc.gardent.free.fr/files/Lipsync-Documentation.pdf

It is just in the planning stages and will probably take a while to do, so don't get too excited yet!

Koba

Lamoot
01-Apr-07, 09:50
No, we don't believe it, it must be some kind of april fools!







p.s. I read about this yesterday at the dev forums, so it is real ;)

Koba
01-Apr-07, 10:12
Good point. I shouldn't have posted today.

If you don't believe me, here is the thread on blender.org that was started on the 27th:

http://www.blender.org/forum/viewtopic.php?t=11038&highlight=

See, not April fools!

Koba

AndyD
01-Apr-07, 10:27
Blecchh! Where's the fun in that?

rexprime
01-Apr-07, 10:32
It's a fairly good idea (even if it were an April Fools' joke)

BlackBoe
01-Apr-07, 10:53
Automated lip sync. Sort of like the Poser or MakeHuman of animation. Great for quick background stuff, but crap for anything that requires any sort of care or organic attention.

Cognis
01-Apr-07, 11:06
Very interesting. One plea, though: no arbitrarily fixed set of phoneme shapes! I have been using Papagayo and the LipSynchro script, and not only is the user limited by the fixed design of 8(?) phoneme-to-shape directors, but tweaking is also clumsier if extra Shape IPOs have to be set manually; in fact, it kind of makes the whole lipsync feature redundant.

So please let the number of phoneme/viseme Shapes be user-defined. Beyond that, I wholeheartedly support the effort. And after all, those disliking the feature are still free to do it by hand ;)


It is just in the planning stages and will probably take a while to do, so don't get too excited yet!

Um, doesn't it say "deadline in May"? I don't mind waiting for quality, but it seems to be a deadline set by their university, and not just themselves....

Koba
01-Apr-07, 16:15
Um, doesn't it say "deadline in May"? I don't mind waiting for quality, but it seems to be a deadline set by their university, and not just themselves....

You are right. I was worried that people would jump on this thread and start asking for builds and patches. Thought it was best to tone it down.

The deadline is in May, but that doesn't mean it will be done, or done well. The chaps doing it seem to think it is ambitious themselves. That said, I wish them luck!

Koba

Cognis
01-Apr-07, 16:19
That said, I wish them luck!
I definitely second that. Also, they are advised to look at the LipSynchro script for inspiration! Are you a part of the team, or just well-informed, Koba?

Koba
01-Apr-07, 22:13
Just well informed. ;-)

Read the .pdf, it details their project well.

Koba

FuzzMaster
02-Apr-07, 01:14
Very interesting and exciting

bataraza
02-Apr-07, 01:41
I agree with BlackBoe... I think it would be perfect to make a movie like Monster House... where everything is mechanical and automated... but where everything loses the "art"(?) of making animation....

I don't understand people waiting for this kind of stuff while they can't make a simple bone animation... No offense.

The same thing happens with other software... Why do you need tools like Character Studio to make steps for your character automatically...? C'mon, guys!

I'm gonna see in a couple of years someone asking for a button that makes the whole animation for you...

To the people saying this is important and necessary... try to make Mancandy speak with stuff like that without losing personality...

I think Blender needs a looooot of improvements to the tools it already has...

Examples?
Improved lattices (modelled lattices, following the shape you need...), hooks (deformations, parenting... these things aren't working properly right now), CrazySpace vertices (not solved), bones parented to a vertex or similar... etc. etc. and a large etc...

I think sometimes we should stop requesting features and request improvements instead, as I said before... don't you think? ;)

JiriH
02-Apr-07, 03:28
I agree with bataraza and BlackBoe and AndyD. Focusing on enhancement of old stuff may be more appropriate than automatization and robotization of creative processes. I wish Blender would never take the approach of MakeHuman, where the lack of art, creativity and imagination is quite obvious.

But I understand it may be interesting for someone who wants fast, easy and prefabricated products.

Alltaken
02-Apr-07, 04:11
Wow it sounds like the last feature that basse has been waiting for before he goes pro.....

shoepie
02-Apr-07, 04:13
Looks very interesting, good luck.

I for one welcome new automation tools like this, for a noob's development! (Like myself!) Sometimes it's more important to finish a project than to be painstakingly artistically brilliant over every detail. That all comes later on. I just want to make stuff talk!

And I'm sure the results will look pretty good....or be tweakable.

Mike_S
02-Apr-07, 04:49
I agree with BlackBoe... I think it would be perfect to make a movie like Monster House... where everything is mechanical and automated... but where everything loses the "art"(?) of making animation....


This project looks interesting, and if it can streamline some of the process for lip sync, I think it would be great to have in Blender.

Some things will always need to be manually adjusted, but why start from scratch every time if some of the steps can be automated?



I don't understand people waiting for this kind of stuff while they can't make a simple bone animation... No offense.

The same thing happens with other software... Why do you need tools like Character Studio to make steps for your character automatically...? C'mon, guys!


Maybe they have no desire to make a simple ... or any other animation?

For some applications, e.g. simulations, or other animations where the overall story / situation is the goal, the specific way a character talks or walks, or performs any other action, may not be important.



I'm gonna see in a couple of years someone asking for a button that makes the whole animation for you...


So why would that be a bad thing?

Again, it depends on what the goal is. If someone is more interested in getting the final result, and is not interested in the actual process, a "make movie" application is just what they would be looking for. There are a number of those programs starting to appear now. We even have a Blender version in the works with Didu's "Movie Maker" project.



Improved lattices (modelled lattices, following the shape you need...), hooks (deformations, parenting... these things aren't working properly right now), CrazySpace vertices (not solved), bones parented to a vertex or similar... etc. etc. and a large etc...


Those features are obviously important ... to you. Auto-lip sync, or tools to help with lip sync is important to others.

For instance, right now I'm playing around with a relatively simple script that will add dialogue text to the timeline, so that I don't have to enter it manually every time.
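Mike_S's script isn't shown in the thread, so purely as a hedged illustration of the idea, here is a small Python sketch (the function name, the transcript format, and the 25 fps rate are all my own assumptions) that turns a timed dialogue transcript into frame-stamped marker tuples:

```python
# Hypothetical sketch (not Mike_S's actual script): turn a timed
# transcript into (frame, text) pairs that could then become Blender
# timeline markers.  Assumes lines like "MM:SS.s text" and a fixed
# frame rate of 25 fps -- both are assumptions for illustration.

FPS = 25

def transcript_to_markers(transcript, fps=FPS):
    """Parse "MM:SS.s text" lines into (frame, text) tuples."""
    markers = []
    for line in transcript.strip().splitlines():
        stamp, text = line.split(None, 1)
        minutes, seconds = stamp.split(":")
        frame = int(round((int(minutes) * 60 + float(seconds)) * fps))
        markers.append((frame, text))
    return markers

dialogue = """\
00:01.0 Hello there!
00:03.5 Nice weather today."""

print(transcript_to_markers(dialogue))
# Inside Blender, each pair could then be added to the timeline
# (e.g. via the timeline markers API); here it runs as plain Python.
```

Outside Blender the function is just plain Python, which is why the actual marker creation is only mentioned in a comment.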

If you happen to program those, or get someone to improve them, I'm certainly not going to complain and say they shouldn't be in the program... unless it slows things down or affects something else :)

I don't understand the mindset of dismissing / bashing a good idea that will provide a useful tool.



I think sometimes we should stop requesting features and request improvements instead, as I said before... don't you think?

This was not a feature request, and is not being produced by Blender developers, it's an outside university project.

Mike

Koba
02-Apr-07, 05:13
I agree with Mike_S.

Sometimes it seems that people view any progress as cheating. It is all about enabling the expressiveness of the artist. And frankly, I am so tired of hearing that Makehuman is cheating. It is a tool, and a great one at that. If you feel you are cheating:

- Heavily modify a Makehuman mesh after it is created. Has Makehuman stopped you from doing that? Does Makehuman instill a fear of edit mode? How exactly is Makehuman anything other than a tool to realise your artistic vision? Such a lame argument.

- Go make your own Makehuman targets and stop complaining. Now you realise that Makehuman is simply a mesh morphing program with a couple of presets - ever use the Blender presets, the icosphere for example? Just another tool, in the same way that subsurf is a tool in Blender. Is subsurf cheating? After all, you lose artistic control over every little polygon your mesh is "pretending" to have!

- Create your art and state *very clearly* that you used Makehuman. What is wrong with that?

My artistic freedom has been increased manyfold since the last release of Makehuman. Coupled with Blender I can create humanoids to my satisfaction in a reasonable timeframe with a reasonable effort. I can now focus on my composition, lighting and texturing instead of *just* modelling. With many, many characters, my current project is more than just a still or five seconds of animation and would simply be impossible without Makehuman. It has *enabled* me as an artist.

Back to the cheating issue - some people spend ages creating astounding images in MS Paint. If I rendered a shiny sphere in Blender and claimed that I had drawn it pixel by pixel in MS Paint, I would be both cheating and lying. Similarly, I would not try passing off a Makehuman model as my own model created from scratch. So you state the tools you used, present the final artwork, and let other people judge. Allowing people to express themselves at any level quicker and with less effort is progress, not cheating. Art is about vision and dedication - are movies "cheating" because they use special effects? It is the impact of the final product that counts.

There are images that used Makehuman in the Blender.org gallery. Go tell those artists that their integrity has been compromised by using a tool that began life as a Blender script. I hope you don't use Python scripts in Blender written by other people, because that is "cheating" too.



I'm gonna see in a couple of years someone asking for a button that makes the whole animation for you...

What kind of button will be able to see inside your head and, with a single click, specify everything in your vision? Even if one day it does take a single click to draw out something in your mind, there is still effort. Effort in imagining the scene, the timings, the lighting, the details. There would still be art in such a world, because not all people's minds are the same.

Sorry for the rant but I've heard this nonsense for years now. The same argument applies to adding lipsync. If you believe any major CG movie (eg. from Pixar) was done without lip sync tools, you must be out of your mind. If tools make your life easier as an artist, use them and to hell with "cheating".

Koba

EDIT> Just for the record, I probably wouldn't use Makehuman exclusively for a single still render where the character is the main focus of the piece. That said, I may use a base mesh for guidance and prototyping in Makehuman.

BlueSpider
02-Apr-07, 05:30
I'd be happy with something that would just be able to highlight the phonenomes (I'm sure that's spelled wrong) in a loaded wav file... that would be great. As for a whole system, I'm not sure how that would work out; I have a feeling all your characters will have the same uniform expressions. But who knows, if it's done right it could be useful. Anything that helps increase the speed of the workflow is a good thing, as long as you don't have to sacrifice quality.

Mike_S
02-Apr-07, 06:23
I'd be happy with something that would just be able to highlight the phonenomes (I'm sure that's spelled wrong) in a loaded wav file... that would be great. As for a whole system, I'm not sure how that would work out; I have a feeling all your characters will have the same uniform expressions. But who knows, if it's done right it could be useful. Anything that helps increase the speed of the workflow is a good thing, as long as you don't have to sacrifice quality.

That's what Papagayo does:

http://www.lostmarble.com/papagayo/index.shtml

As for any automation, it can be just a starting point, which you can tweak afterward, as the output of any automation is going to be IPO curves driving either bones or shape keys.
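For readers who haven't seen Papagayo's output: as far as I recall it can export a Moho-style switch file, a "MohoSwitch1" header followed by "frame phoneme" lines. A minimal parser, written here as a hedged sketch rather than a reference implementation, might look like this:

```python
# Rough sketch of the Papagayo side of this workflow: its Moho
# export is (to my knowledge) a "MohoSwitch1" header followed by
# "frame phoneme" lines.  Parsing it gives the raw material for
# IPO curves on bones or shape keys.

def parse_moho(text):
    """Return a list of (frame, phoneme) pairs from a Moho export."""
    events = []
    for line in text.strip().splitlines():
        if line.startswith("MohoSwitch"):
            continue  # header line, no timing data
        frame, phoneme = line.split()
        events.append((int(frame), phoneme))
    return events

sample = """MohoSwitch1
1 rest
5 E
9 O
14 rest"""

print(parse_moho(sample))
# -> [(1, 'rest'), (5, 'E'), (9, 'O'), (14, 'rest')]
```

From that event list it is a short step to keying a shape for each phoneme at its frame.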

Mike

BlackBoe
02-Apr-07, 09:15
Holy crap. I never meant to turn this into a huge discussion. Nor did I say it was BAD, or cheating. I just said it almost definitely wouldn't be half the quality of a properly done hand-animated piece. For all you people who want to make a movie of rigidly chatterboxing MH models, that's fine with me, it'll make the rest of us look better. :P

Koba
02-Apr-07, 09:22
For all you people who want to make a movie of rigidly chatterboxing MH models, that's fine with me, it'll make the rest of us look better.

Just because you used Makehuman to speed things up does not mean it will be rigid or chatterboxing or even "look" like a Makehuman model. You can put a little effort in after you've created your model in Makehuman to get to the same quality in less time.

As long as you can edit what is generated by automated lip synch, it would be useful for background *and* "organic attention". Don't you think?

That is all I want to say.

Koba

BlackBoe
02-Apr-07, 09:55
I said 'rigidly chatterboxing' to refer to the automated lip sync, not to the MH models. It is entirely possible that there will be good, fluid automated lipsync coming out of this feature; it's just that I have never seen that happen before.


EDIT: By the way, just because I'm regarding this cynically doesn't mean I have to spoil anyone else's fun. It's just my opinion, is all. :P

bataraza
02-Apr-07, 11:56
I'm just saying people will soon ask for the option to write in the console:
" >blender -mycharacter - animate - dancing...." and wait for the program to make the animation for you...

">blender -Character -frog -model it -- Thank you"

">blender -texture -please make the UV's -do it quickly I gotta go to the grocery!!!

If that's what you need... good for you...

I choose to make my own models, animation, rigs....

Good luck!!!

Cognis
02-Apr-07, 12:22
Mike_S - I agree with everything you said. Thank you for saving me the time to say it ;)

Koba - We seem to be on the same wavelength, too :)

On the matter of "automation = no art / bad quality", the point of automation IS to allow the user to focus more on quality! Any artist worth his/her salt spends long hours picking out the proper tools to do more, and really good artists even shape their tools to make their work easier, allowing them to put more effort into advancing their field. Claiming that lipsync will produce bad results by definition is like saying computers ruined scientific research; bad lipsyncing will ruin animation, but automation in itself is a plus.

As for the button that creates your movie, don't think it impossible; just as Amazon can (attempt to) figure out what books you would like, a simple algorithm based on your earlier favorite shows/movies could do it for films. Now we just need a function that automates the production of a manuscript and creates the animation. And voices... oh, those horrible computer voices :p

BlackBoe
02-Apr-07, 12:37
I'm sure it'll be a great help in that it'll enable people who are actually GOOD at lip sync to do it faster. But the people who don't know how to animate will just use it raw, so that's what we'll see the most of. :P

Anyway, please do not mentally associate me with bataraza, because--though I am sort of on the side of his view point in a few circumstances, I'm not bringing them up here.

FuzzMaster
02-Apr-07, 21:25
Are my eyes deceiving me? A group of nice people are going out of their way to add a feature that will allow better animations to be made faster, and people are complaining about things like purity and organicness? Purity is for drinking water, not art.


Automated lip sync. Sort of like the Poser or MakeHuman of animation. Great for quick background stuff, but crap for anything that requires any sort of care or organic attention.

Glad to see you've tested and tried it. It, like Poser or MakeHuman, is an optional tool, one whose quality cannot be tested at this stage.


I agree with BlackBoe... I think it would be perfect to make a movie like Monster House... where everything is mechanical and automated... but where everything loses the "art"(?) of making animation....

Like I said, it is a tool. By your logic, you should not be allowed to use the walk cycle features. Or subsurf. Or the compositor. They do all the work for you! You should have to manually create the vertices for subdivision; no sense letting a tool do it for you! You should have to manually pose every single vertex, render, move again, render, etc., then splice all your rendered frames together to make your movie. Otherwise, it isn't art! It's mechanical and automated!


I don't understand people waiting for this kind of stuff while they can't make a simple bone animation... No offense.

Speaking only on behalf of myself, I can say that making bone rigs is exceedingly complicated, and outdated thanks to shape keys. And even using shape keys, or drivers, or whatever, lip sync is tedious and generally gives awful results. At least in my experience. I welcome a tool that is there to reduce stress and provide better results.


I'm gonna see in a couple of years someone asking for a button that makes the whole animation for you...

What a fantastic exaggeration. I mean, we're somewhat there already, what with things like Spore, but I doubt anything like that will ever replace hand created animations.


To the people saying this is important and necessary... try to make Mancandy speak with stuff like that without losing personality...

Who said using this will make things lose personality? This only does the lip sync; it's still up to the artist to do facial expressions, eye movement, hand movement, etc., etc. I hardly believe it will detract from the personality at all.


I think Blender needs a looooot of improvements to the tools it already has...

Examples?
Improved lattices (modelled lattices, following the shape you need...), hooks (deformations, parenting... these things aren't working properly right now), CrazySpace vertices (not solved), bones parented to a vertex or similar... etc. etc. and a large etc...

If these things are so necessary for you (they seem fine to me), learn C++ and develop them! The people behind this lip sync project have learned C++ and are using it to develop something that they feel will greatly benefit our community. Not to mention that in the PDF they state they aren't quite up to par with Blender's current coders, and therefore would probably be unable to fix some of those features you feel are broken.


I think sometimes we should stop requesting features and request improvements instead, as I said before... don't you think?

This is an improvement. If you feel otherwise, well, you do not have to use it!


But I understand it may be interesting for someone who wants fast, easy and prefabricated products.

Does this mean you're opposed to fluid and softbody solvers? Fast, easy, and prefabricated. It sure does make sense to have artists spend lots of time trying to recreate something like cloth or fluid.

Also, you are aware the user will still have to make the model, and make the different shape keys, before they'll be able to use this? All it's doing is putting the things you made together in order for you, right? Kind of like the NLA editor. You must be opposed to that, too, right?


sometimes it's more important to finish a project than to be painstakingly artistically brilliant over every detail.

I agree! I imagine Elephants Dream could have been much better (animation-wise) if they didn't have to spend so much time on lip sync.


I just said it almost definitely wouldn't be half the quality of a properly done hand-animated piece. For all you people who want to make a movie of rigidly chatterboxing MH models, that's fine with me, it'll make the rest of us look better. :P

See an above response of mine. Just because you didn't go through the hellish process of hand-animating lip sync, which has been enough to turn me off from several projects, doesn't mean your project is tainted, and it certainly doesn't have to mean it will look mechanized and ugly.


I said 'rigidly chatterboxing' to refer to the automated lip sync, not to the MH models. It is entirely possible that there will be good, fluid automated lipsync coming out of this feature; it's just that I have never seen that happen before.

Because Blender never really had the feature anyway. Not to mention, how many projects with automated lip sync have you seen coming from Blender anyway? I honestly cannot think of many.


I'm just saying people will soon ask for the option to write in the console:
" >blender -mycharacter - animate - dancing...." and wait for the program to make the animation for you...

Again, a fantastic exaggeration.


">blender -texture -please make the UV's -do it quickly I gotta go to the grocery!!!

Manually unwrapping a model (I mean without calculations) would probably be the most hellish process ever, and I don't think anyone honestly would rather Blender did not have its UV mapping features.


I choose to make my own models, animation, rigs....

Good, I guess it's important to you. That does not mean you have to oppose things that are important to us.


I'm sure it'll be a great help in that it'll enable people who are actually GOOD at lip sync to do it faster. But the people who don't know how to animate will just use it raw, so that's what we'll see the most of. :P

In the same way that Blender allows people who are actually good at modeling to make good models, and faster.

Summary: This is a good feature, and many people want it. If you're too hung up on the 'purity' of your art, you do not have to use it.

BlackBoe
02-Apr-07, 21:33
I'm not 'hung up'. Anyway, I've never seen it done in blender, but I've seen it done elsewhere. Please do not assume that I am an idiot. I never mentioned purity either.

Also, you have far, far too much time on your hands. :P

FuzzMaster
02-Apr-07, 21:38
No, not really. I just spent about fifteen minutes on that.

BlackBoe
02-Apr-07, 21:40
Just to tell someone you disagree with them? Heh, as I said, far too much free time on your hands.

fatfinger
02-Apr-07, 22:01
Just to tell someone you disagree with them? Heh, as I said, far too much free time on your hands.

http://en.wikipedia.org/wiki/Ad_hominem

As to the project, good on them. i hope they learn lots and find a way to use driven shape keys.

BlackBoe
02-Apr-07, 23:06
It wasn't ad hominem; I had already addressed my answer to his argument before I said he took too much time. The taking-too-much-time thing was an aside.

EDIT: I should probably mention that the thing about it not ending up very good was an opinion. A lot of people have them, a lot of people think theirs is the truth; I happen to think mine is true, through observation of what computers are capable of. Other people may think I am wrong, and that is their prerogative, but if I fight more rigorously for my point, it's because I'm pretty much alone in a sea of people who think otherwise. Anyway, if we're talking about 'ad hominem', fuzzmaster's inclusion of me in his line-by-line 'you and your arguments all suck' festival could be construed as not altogether undirected. Anyway, I said what I said: that so far as I can tell, most of these efforts aren't going to be very good, so don't count on it as the be-all-end-all (which quite a few people have been doing, you'll notice). So I really see no more point in drawing out my masochistic quest to explain myself. w007.

Also note: I don't bear grudges outside of threads, so I hope you won't do the same.

I'll quote Han Solo and finish with: "Sorry about the mess." :P But I re-iterate, it wasn't a matter of 'purity' for me, so I hope you'll at least attempt to consider what I said.

fatfinger
03-Apr-07, 00:22
No offence taken or meant. :) I was just trying to nudge the discussion back into what the thing may actually do and what people in the community may want it to do. IMO, your point is valid, as a lipsync tool is really only a starting point, it won't do it all. With that said, if it can cut down some drudgery work, all the better.

Anyhoo, from reading the pdf it doesn't look like they've addressed the issue of driven shape keys. Unless they do something about that, it doesn't seem much different from BlenderLipSynchro apart from integration in the UI.

sharpteam
15-Jun-07, 19:23
Hi everybody,

thanks for your interest in our lipsync project.
We have read your comments, and we really want our feature to be useful, so any comments are welcome.

We have decided to use actions for the animation, to allow everyone to use their own rigging (shape actions, driven shape keys, lattices...)

Our wiki page has been updated:

http://wiki.blender.org/index.php/Requests/lipSync

We also have a temporary demo of our current progress.

It's only using a few basic shapes, to show how our project is going so far, and many things are still under development.

Right-click >> "save target as", and open it with VLC: http://discoveryblender.tuxfamily.org/lipsync-wip-demo.mov

bye

CubOfJudahsLion
15-Jun-07, 22:38
A few years back I worked on a project to automate medical records. The doctors started a rebellion fearing that they'd be replaced.

Things went more smoothly when we explained that our research went into getting rid of redundant data requests and making their work more efficient, not trying to replace them. Not that we would have known how to do that, anyway.

Auto-lipsynch is kind of the same. If we rely on it blindly, as BlackBoe points out, we end up with mechanical, unrepresentative animation. As one quickly finds out, we need to soften or enhance our RVKs by hand to match certain accents and correct undue emphasis (our mouths tend to move as little as possible).

Do you need to use auto-lipsynching? Well, I may be doing this the wrong way, but I make my phoneme RVKs distinctive so I can auto-synch and then lower (or raise!) curve points in the key track as necessary to humanize the character's speech. I'm sparing myself from that bit of mechanical work and the exposure sheet, even if I still end up visiting every keyframe.

I can understand people who'd only use Magpie or Papagayo to generate an X-sheet and then proceed to animate and give their characters their own, personal, artificial-ingredient-free touch. Since one ends up visiting every keyframe, they might as well...

Honestly, neither of these two approaches seems wrong to me -- it sounds rather like a matter of choice.

moh taia
16-Jun-07, 04:03
looks awesome thanks for your effort

Roy
16-Jun-07, 04:39
Many hobbyists have limited time; not everyone minds limitations if it means they can be productive. I use every shortcut and 'speed up' tool I can! I am not trying to rival big studio quality, I just want to actually produce stuff!
I know my stuff's not that great, but I enjoy it! Kev: a simple lip sync thrown together quickly in Papagayo:
http://video.google.co.uk/videoplay?docid=-5753017105216370276&q=kev+the+caterpillar&total=1&start=0&num=10&so=0&type=search&plindex=0

patricia3d
16-Jun-07, 05:09
I have seen the swf (http://marc.gardent.free.fr/videos/lipsync.swf) in that project; only the lip area is moving. I have a commercial lip sync studio in which the eyes, mouth and neck move along with the lips during speech. The problem is that that product is not compatible with Blender. I want to use Blender for everything, because here everything is open. With that commercial lip sync studio, you can only use the 3D characters of that company, with a lot of legal bindings. Please keep in mind that in lip sync not only the lips move; the eyes and neck also move according to the speech.

Meta-Androcto
16-Jun-07, 06:26
wow, let's hope this project gets off the ground solidly.
scripts often give the opportunity to delve into areas that normally some wouldn't touch, thus enhancing our learning & knowledge and potential job skill set.
keep up the good work on all levels of tools that we can use in Blender.

On a side note, a hypothetical example>
If I was to work at a major animation studio, some basic skills in all areas would be helpful but not necessary. I would create my models, then have someone else rig them,
or rig someone else's models, then send it to the lip sync person, who would send it to the coders to customize the lip sync code per character for tweaking, then off to the advanced lip sync person (or back to me) for final adjustments, then to the animator, who would send it to and fro with the lip sync person, then the animator would send it to and fro with the texture person... etc.
As I said, that was hypothetical.
The point is that each person is using different tools, or building upon others' work, and specializing, often, in a single area of 3d. Building upon other people's work to create the final product.
There are few people who can "do it all themselves".
Fewer people who can do it all themselves well.
As I don't work for a large 3d company, the more tools I/you/we can have in our kits to help us through, the better. I understand that now.

Did one person create EDream? Of course not; many people (some specializing in certain areas, most/all with a good basic working knowledge of all areas as well), probably using whatever tools they had at their disposal to get them through to the final product.

So good luck to the lip sync people, and I hope they succeed.
I look forward to testing it on my sculpted, custom-rigged MH models. :p
m.a.:evilgrin:

Marco
16-Jun-07, 12:13
We are stabilising the current implementation.
But after that, we will work to avoid mechanical animation:
In the code there is a constant that controls the smoothing between the phonemes. We can use an IPO curve for that, so the animation will be less uniform.
In the same way, we will try to control the mouth amplitude with an IPO curve, as was suggested to us in an email.
We haven't got another way, but we can hope it's a good way.


About the extra face movements, we can imagine that the user adds an "eye" strip in the LipSync Editor. But for now, we prefer not to attempt too much and do it badly. Development in Blender demands a lot of motivation, since the conception is... :@ ... good :D


The alternative is an action strip in the NLA to manage the extra face movements. I have no ideas for improving the workflow if it should be integrated in the LipSync Editor. But maybe you have some ideas (for further development...)?
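To make the smoothing idea concrete, here is a toy Python sketch (my own construction, not the project's code) of blending into a phoneme's shape-key weight over an adjustable window; replacing the single global `smooth` constant with per-transition values, as an IPO curve could supply, is exactly what would make the motion less uniform:

```python
# Toy illustration of the smoothing constant described above (not
# the actual patch code).  Each phoneme holds its shape-key weight,
# and the transition into a new phoneme is blended over a window
# whose size is controlled by `smooth`.

def weight_at(frame, key_start, key_end, smooth):
    """Weight (0..1) of the incoming phoneme shape at `frame`.

    The previous phoneme's key sits at key_start, the new phoneme's
    key at key_end; `smooth` (0..1) is the fraction of the interval
    spent blending.  A per-transition `smooth`, driven by an IPO
    curve, would vary this from phoneme to phoneme.
    """
    blend_frames = max(1, int((key_end - key_start) * smooth))
    blend_start = key_end - blend_frames
    if frame <= blend_start:
        return 0.0
    if frame >= key_end:
        return 1.0
    return (frame - blend_start) / blend_frames

# A constant -> every transition eases identically (uniform feel):
print([round(weight_at(f, 0, 10, 0.5), 2) for f in range(11)])
# A different per-transition value -> a snappier, varied transition:
print([round(weight_at(f, 0, 10, 0.2), 2) for f in range(11)])
```

Driving `smooth` (and a second amplitude factor, as mentioned for the mouth) from curves the animator can edit is what keeps the result from looking mechanical.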

sharpteam
16-Jun-07, 12:24
We are stabilising the current implementation.
But after that, we will work to avoid mechanical animation:
In the code there is a constant that controls the smoothing between the phonemes. We can use an IPO curve for that, so the animation will be less uniform.
In the same way, we will try to control the mouth amplitude with an IPO curve, as was suggested to us in an email.
We haven't got another way, but we can hope it's a good way.


About the extra face movements, we can imagine that the user adds an "eye" strip in the LipSync Editor. But for now, we prefer not to attempt too much and do it badly. Development in Blender demands a lot of motivation, since the conception is... :@ ... good :D


The alternative is an action strip in the NLA to manage the extra face movements. I have no ideas for improving the workflow if it should be integrated in the LipSync Editor. But maybe you have some ideas (for further development...)?

ps: I just forgot to logout... Normally I should post this message with sharpteam shared account for more clearness.

Koba
16-Jun-07, 13:28
Only words of support from me.

Keep up the good work.

Blender needs this sort of thing!

Koba

macouno
18-Jun-07, 05:56
Yes the same from me.

I've been using papagayo some and I love that sort of simple approach, even though their system feels like it's half finished. This sort of thing can provide a real nice starting point for an animation.

yogyog
18-Jun-07, 07:41
I'm not too great at lip-sync. Unless you're autistic, or checking lip-sync, you don't tend to stare at someone's mouth as they talk. Using something like this for lip-sync, especially if you can define your own face shapes (as many as you like) would be great. You can then touch up, add other keys for the rest of the face, and keyframe hand and body movement. THAT's the important bit!

Yes - this lip-sync thing is amazing - great idea.

Felix_Kütt
18-Jun-07, 09:44
Only words of support from me.

Keep up the good work.

Blender needs this sort of thing!

Koba

+1 to that!

Klepoth
18-Jun-07, 09:48
There is an old saying in the CG world: "Technical directors build rigs and animators break them." Automation may sound like a good thing in animation, and it is when it comes to background characters and such, but when it comes to animating the "head characters" all the automated animation will need to be modified and "dirtied up" to make the movements look natural and non-robotic. It doesn't matter if it's automated lipsync or walk cycles.
For example, when talking, every time you say an "o" your lips won't be o-shaped to the same extent. It depends on where in the word the o is, how intensely the word is spoken, etc. And going back after the automated animation to fix things like this takes a whole lot of time, sometimes more time than if it was animated from scratch. And with some automated things (I don't know if this applies to this lipsync plug-in) it isn't possible to fix things like this without totally breaking it (and this is the main reason why IK is seldom used for animating things like arms in a real production environment; it just takes too much time to correct the IK animation).
For these automated things in animation to be useful, they need to be easily combined with manual animation; otherwise the end results will look amateurish, and even if one is an amateur, amateurish results should never be a goal.

fatfinger
19-Jun-07, 08:01
Good to hear you're making progress. Any idea when we might see a working version or a patch?

Anyhoo, keep up the good work.

Toldi
15-Jun-08, 20:14
What happened to this project? Updates seem to have stopped.
It still seems too manual... we could automate this even more with steps 2 and 3 below...


1) create the character's usual 6-8 mouth shapes for the different sounds of speech

2) use existing "voice recognition software" (they exist with 99% accuracy, which will definitely be good enough for most lipsync) to generate "tokens" (with time info) from the audio track. These can be stored simply as an array of the shapes from point 1) and the times at which they occur

3) simply convert these tokens into Blender "NLA phoneme strips" automatically.
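Step 3 could be sketched roughly as follows; the token format, the 25 fps frame rate, and the function name are all assumptions for illustration, and a real implementation would create actual NLA strips through Blender's API:

```python
# Hypothetical sketch of step 3: turning recognizer tokens
# (time in seconds + mouth shape) into strip-like tuples of
# (shape, start_frame, end_frame).  Each token runs until the
# next one starts; the last runs to the end of the clip.

FPS = 25  # assumed frame rate

def tokens_to_strips(tokens, clip_end, fps=FPS):
    """tokens: list of (seconds, shape); clip_end: end time in seconds."""
    strips = []
    for i, (start, shape) in enumerate(tokens):
        end = tokens[i + 1][0] if i + 1 < len(tokens) else clip_end
        strips.append((shape, int(round(start * fps)), int(round(end * fps))))
    return strips

tokens = [(0.0, "rest"), (0.4, "O"), (0.8, "MBP"), (1.2, "rest")]
print(tokens_to_strips(tokens, clip_end=2.0))
# -> [('rest', 0, 10), ('O', 10, 20), ('MBP', 20, 30), ('rest', 30, 50)]
```

Each resulting tuple maps directly onto one strip in the NLA, which is what makes the conversion mechanical enough to automate.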

freeblender
15-Jul-08, 07:12
Lip sync proposal

What happened to this project?
Is there something to download and use?
Lip sync in Blender - how do you do it?
:cool:
Thanks in advance!

SHABA1
04-Oct-08, 19:54
Is the project still active?

bongox
20-Dec-08, 10:49
This is a priceless addition to Blender; I certainly hope it's still progressing.