I’ve been studying the CG Cookie Blender Animation Toolkit for over a week and a half because I wanted to improve my animation skills. I used a model I already had set up and tried the techniques out on a character I created. I’ve watched this clip over 10 times and I feel there may be something wrong with it, so I’d like your thoughts on the video and what I need to do to improve it. I can deal with the other movements later, but my main concern is the lip sync.
Looks like you’ve got the timing and keyframing down pretty well. I’ll be the first to say that I’m no expert on this matter, but in attempting lip sync myself, a few things gave me issues that I think I see in your animation as well:
A constant movement of the mouth between syllables in an almost machine-like manner. When we speak, if there is emphasis on a word or syllable, there is a slight pause in the movement of our mouths, a slight timing hold before moving on to the next syllable. If you look in the mirror and sing the song you have there, you’ll notice the pause in movement I’m speaking of. If you are animating from keyframe to keyframe, it might help to extend some of the keyframes beyond their initial points to mimic this slight pause.
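To picture the extend-the-keyframe idea in code, here is a toy model of a single animation channel, just a list of (frame, value) pairs. The function names and numbers are made up for illustration; in Blender you’d do this by hand in the Dope Sheet or Graph Editor.

```python
# Sketch of the "extend the keyframe" idea: turning a single keyframe
# into a short hold by duplicating its value a few frames later.
# Keyframes are modeled as (frame, value) pairs; names are illustrative.

def add_hold(keyframes, frame, hold_length):
    """Duplicate the keyframe at `frame` `hold_length` frames later,
    so the mouth pauses on that shape before moving on."""
    held = dict(keyframes)
    if frame not in held:
        raise ValueError("no keyframe at frame %d" % frame)
    held[frame + hold_length] = held[frame]  # same value = no movement
    return sorted(held.items())

# Jaw-open channel: 0.0 closed, 1.0 fully open.
jaw = [(1, 0.0), (5, 1.0), (9, 0.0)]
print(add_hold(jaw, 5, 2))  # the open pose now holds until frame 7
```

Because the duplicated key has the same value as the original, the channel is flat between them, which is exactly the little timing hold you see in a mirror.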
Shape keys can sometimes throw off your lip sync attempt if there isn’t enough variation in your shapes. I’m assuming you were going for something subtle, but much like other areas of animation, in some instances exaggeration helps.
As an alternative to the keyframe stretch method I mentioned in #1, you could also try closing his mouth between syllables, or a combination, whatever seems to work best for you.
Well, that’s all I’ve got, and that’s just from personal experience as a beginner lip syncer myself. You might also want to look online at other phoneme charts. They’re not all the same, and given the situation, some work better than others. Anyhow, I hope something here helps you out.
I see. A pause in between syllables. So should I use a rest position after every one?
Well, I would use pausing to tweak it some, to make the movement look a little more natural. When to pause and how much depends on how quickly or slowly you need to slide into the next syllable. It’s a tool for movement variation. Use it at your discretion.
Alright, I’ll try it and post an update later.
So after 2 days, I’m bringing in this animation in hopes of showing some improvement over the last clip I posted.
I see slight differences, but it still looks pretty much the same. Also, I don’t recognize any M phonemes with the song; I think they would make a difference. If I were to be nitpicky, I would say the timing is a little off, which normally wouldn’t be a big deal, but when syncing to a song with music and a timed beat, unfortunately it matters.

I’m aware of the method you used to generate your keyframes (by way of drivers moving armatures moving shape keys). I’ll admit it can become tedious at times. Have you tried the Blender Lipsync add-on ( http://wiki.blender.org/index.php/Extensions:2.6/Py/Scripts/Import-Export/Lipsync_Importer )? Outside of the fact that you have to use Papagayo or JLipsync in conjunction with it, I found it to be pretty useful. The big issue here is using Papagayo or JLipsync. I found, got an understanding of, and ultimately used JLipsync. It’s a pain because it requires that you convert your wav file to 8-bit first (it can’t read anything else, so use Audacity to convert if you don’t have a better method).

Once you have JLipsync on your computer (note that JLipsync requires Java to install), load the wav file into JLipsync and simply (ha ha) scrub through the track using the arrow keys, placing a keyframe for each frame using your keyboard letter keys (it is important that you keyframe each frame, because the mouth is set to the closed position for anything you don’t keyframe). JLipsync is set up with letters to define the keyframes, but not all of them. That doesn’t matter. If JLipsync has no letter S, but you type the letter S for a keyframe, it will store an S even though there’s no reference for it, and the Blender Lipsync add-on will still recognize it if you have a corresponding shape key with the same name in Blender. I know this sounds confusing right now, but if you attempt to use it, at some point you’ll begin to understand why I’m including so much detail.
As I just mentioned, the shape keys that you create inside Blender must have names identical to those you use in JLipsync. I can’t tell you about Papagayo because I found it cumbersome to use with longer animations and quickly pulled the plug there. Anyhow, once I got this set up, I found it to be a much easier process. That’s my two cents. Oh, also, the manner in which you set up your interface with JLipsync will dictate the ease or difficulty of your scrub. If you zoom in on the track for a better view of the wave shape on the screen and use your keyboard for navigation, things will flow a lot more smoothly.
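Since the add-on matches JLipsync letters to Blender shape keys purely by name, a quick pre-flight check can save a confusing failed import. This is just a sketch; the letter and shape-key lists here are illustrative, not any official mapping.

```python
# The add-on matches JLipsync letters to Blender shape keys by name,
# so checking for missing names up front avoids silent mismatches.

def missing_shape_keys(used_letters, blender_shape_keys):
    """Return the JLipsync letters that have no same-named shape key."""
    have = {name.upper() for name in blender_shape_keys}
    return sorted({letter.upper() for letter in used_letters} - have)

letters_in_track = ["A", "O", "M", "E", "S"]
shape_keys = ["A", "O", "M", "E"]          # "S" was never created
print(missing_shape_keys(letters_in_track, shape_keys))  # ['S']
```

Anything this reports is a letter you typed in JLipsync that Blender has no shape key for, which is exactly the case described above.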
The mouth and lip movement is very linear. The mouth usually opens a lot faster than it closes, and the lips don’t always fully separate. Take a mirror and make different shapes with your mouth. When your lower jaw moves just a bit and you try to talk, only the middle of the mouth will be open, and the more you open the mouth, the more the corners will separate.
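The opens-fast, closes-slow point can be modeled with two differently shaped easing curves for the same 0-to-1 jaw motion. The cubic curves here are made-up illustrations; in Blender you’d get this effect by shaping the F-Curve handles instead of leaving them linear.

```python
# A tiny model of "opens fast, closes slow": the same 0-to-1 jaw
# motion eased differently in each direction.

def ease_open(t):
    """Fast attack: most of the opening happens early (0 <= t <= 1)."""
    return 1.0 - (1.0 - t) ** 3

def ease_close(t):
    """Slow release: the mouth lingers near open before settling."""
    return 1.0 - t ** 3  # value falls off gently at first

print([round(ease_open(t / 4), 2) for t in range(5)])   # big jumps early
print([round(ease_close(t / 4), 2) for t in range(5)])  # small changes early
```

Compare the two printed sequences: the opening curve covers more than half its range in the first quarter of the time, while the closing curve barely moves at first, which reads as much less machine-like than a straight line.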
The letter “M” wasn’t noticeable? OK, I guess I’ll have to watch some animations with one of their characters singing. I tried the Papagayo method and saving was annoying. In fact, I couldn’t save the file; I couldn’t even export anything out! I’ll check out JLipsync and go back over the video to see where I went wrong. And if that doesn’t work, I’ll find a way somehow.
OK, I’m having some trouble converting the file into an 8-bit format. Can’t believe it’s causing this much trouble…
JLipsync downloads as a .jar file. It’s associated with Java. If you have Java on your machine, once downloaded, all you should have to do is click on it to run the program. The link is here: http://sourceforge.net/projects/jlipsync/
If you don’t have Java, you can download it here: http://java.com/en/download/index.jsp
As I said before, this method worked well for me. Once you have it on your computer:
- Import your 8-bit wav file
- Set the number of frames based on the final render length of your scene
- View the wav image in the black window (magnify it if necessary to see where the peaks and valleys in the wav begin and end)
- Use the up/down arrows to scroll through each frame.
- Tap the space bar to listen to each frame.
- Select the letter you wish to use by typing it on your keyboard in that frame’s “Mouth” column (note: before it will recognize the Mouth column for input, you must first click on the column with your mouse pointer, otherwise you will simply be placing letters in the Comments column).
- I believe once you’re done you have to go back and click through each letter in the “Key” column to mark the frames as active; you’ll have to check on that to make sure.
- Save your work and then export it as a Moho file.
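For what it’s worth, the exported Moho file is just plain text, roughly a header line followed by one “frame phoneme” pair per line. I’m writing the layout from memory here, so treat the exact format as an assumption and check it against a real export before relying on it.

```python
# A minimal parser for Moho-style switch data: a header line,
# then "frame phoneme" pairs. The sample and exact layout are
# assumptions based on memory of the exported files.

SAMPLE = """MohoSwitch1
1 rest
5 O
9 M
"""

def parse_moho(text):
    """Return a list of (frame, phoneme) pairs from Moho switch data."""
    pairs = []
    for line in text.splitlines()[1:]:  # skip the header line
        frame, phoneme = line.split()
        pairs.append((int(frame), phoneme))
    return pairs

print(parse_moho(SAMPLE))  # [(1, 'rest'), (5, 'O'), (9, 'M')]
```

Being able to open the export in a text editor like this is also a handy sanity check if the Blender add-on refuses to import it.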
I know it sounds like a lot, but once set up, I’ve found this to be the simplest and most efficient method I’ve come across. I use it over and over again. I can understand if you don’t feel up to it right now; you’ve been doing a lot of lip syncing lately, but I would encourage you to try it at some point, because once you get it down, it’s just another tool in your arsenal. That’s how I like to think of it. By the way, I have used the method that you used here, and it works too. There are a few other methods I know as well, but this one is my preferred because it’s fast and does a decent job. At the same time, to each his own. You have to find what works for you. Good luck, and if you need any help, I’ll do what I can.
To convert to 8 bit in Audacity:
- In the “Save File Type” dropdown, select “Other uncompressed files”
- Click on the “Options” button on the right
- In the Encoding dropdown box, select “Unsigned 8 bit PCM”
- Click OK
- Click Save
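If Audacity keeps fighting you, the conversion itself is simple enough to script: each signed 16-bit sample maps to unsigned 8-bit by taking the top byte and adding 128. Here is a sketch using only the Python standard library; the filenames are examples, and it assumes a standard 16-bit PCM source on a little-endian machine (the common case).

```python
# Convert a 16-bit PCM wav to unsigned 8-bit PCM with the stdlib only.
# Assumes the source is 16-bit; filenames below are just examples.

import wave, array

def to_8bit(samples_16bit):
    """Map signed 16-bit samples to unsigned 8-bit PCM values."""
    return array.array("B", [(s >> 8) + 128 for s in samples_16bit])

def convert_wav(src_path, dst_path):
    with wave.open(src_path, "rb") as src:
        params = src.getparams()
        data = array.array("h")  # signed 16-bit samples
        data.frombytes(src.readframes(src.getnframes()))
    with wave.open(dst_path, "wb") as dst:
        dst.setnchannels(params.nchannels)
        dst.setsampwidth(1)                  # 1 byte = 8-bit
        dst.setframerate(params.framerate)
        dst.writeframes(to_8bit(data).tobytes())

# convert_wav("song.wav", "song_8bit.wav")
print(list(to_8bit([0, 32767, -32768])))  # [128, 255, 0]
```

The three test samples show the mapping: silence becomes the 8-bit midpoint (128) and the extremes land on 255 and 0, which is what “Unsigned 8 bit PCM” means.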
Don’t get me wrong, I don’t mind doing it. I just wish I could get it right! I tried the conversion method in Audacity, but it just didn’t work. The option to export to a Moho file isn’t there.
My workflow for this video was to create a pose library of phonemes, using this picture as a reference:
I then used the pose library and inserted poses where I heard the sounds associated with each pose. If I heard an ‘m’ or a ‘strong o’, I inserted that pose. After that pass, I went back and tweaked everything: if it was a loud ‘strong o’, I opened up the mouth more; if it was a quiet ‘m’, I closed the mouth off a bit more to add variance.
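That two-pass idea could be sketched like this. The phoneme list, pose values, and tweak factors are all made-up examples, not from any chart; in Blender the passes happen interactively with the pose library rather than in a script.

```python
# Sketch of the two-pass workflow: first drop in the library pose
# wherever a phoneme is heard, then scale it per instance by how
# loud or quiet that sound is. All values here are illustrative.

heard = [(10, "M", "quiet"), (14, "strong O", "loud"), (20, "M", "quiet")]
base_open = {"M": 0.0, "strong O": 0.8}   # jaw-open value per pose
tweak = {"loud": 1.25, "quiet": 0.8}      # second-pass adjustment

def two_pass(phonemes):
    keys = []
    for frame, pose, volume in phonemes:
        value = base_open[pose] * tweak[volume]  # exaggerate or soften
        keys.append((frame, pose, round(value, 2)))
    return keys

print(two_pass(heard))
```

The point of splitting it into two passes is that the first pass is fast and mechanical, and all the judgment calls (louder, quieter) happen in the cheap second pass.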
Because I dreaded doing the lip sync part of that video, I did it first, thinking it would take a long time. It didn’t take long at all once the pose library was created. A couple of Blender sessions and I was done, surprised at how easy it was…
I’m going to do the same workflow for my current WIP for this month’s 11 second club. If you want me to describe this method, let me know and I’ll document it as I do it and post it someplace…
Well, I only said what I did because I know by now I’d probably be frustrated and need to put it down for a while. Anyhow, in Audacity you are just converting your wav file to a lower bit depth. After you convert it, it should still be a wav file. The Moho file is the ultimate result of what you create in JLipsync. The reason you convert your wav file to 8-bit is that JLipsync can’t use anything else. So do this:
- Convert your wav file to an 8-bit wav file using the instructions in post #12 above
- Import that wav file into JLipsync and follow the instructions in post #11 above
- Once you have created your Moho file, it will ultimately be imported into Blender by way of the Blender Lipsync add-on (there is a place within the user interface of its panel in which you can do this); for this, follow the instructions at the URL linked in post #7. I believe there is also a link to a tutorial for the add-on at the bottom of that page.
I would also check out what revolt_randy has to offer; it sounds promising.
Ok, I’ll try it.
And by the way, I use a lip sync chart similar to the one posted by revolt_randy in conjunction with JLipsync. The only difference is that for each letter I create a separate shape key, even if it’s a duplicate shape. In other words, the shape keys for D, N, S, X, T, and Z would all be copies of the same shape, but separate keys with different labels for each, for ease of use.
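One way to organize that duplicate-shapes, separate-labels idea is a map from each letter to the shape it copies, so all six letters share one mouth shape but keep their own keys. The grouping follows the post above; the other entries are illustrative, not from any official chart.

```python
# Map each phoneme letter to the underlying mouth shape it copies.
# D/N/S/X/T/Z share one shape per the post; other names are examples.

SHAPE_SOURCE = {letter: "DNSXTZ" for letter in "DNSXTZ"}
SHAPE_SOURCE.update({"A": "A", "O": "O", "M": "MBP", "B": "MBP", "P": "MBP"})

def shape_for(letter):
    """Return the underlying mouth shape a letter key copies."""
    return SHAPE_SOURCE[letter.upper()]

print(shape_for("s"), shape_for("t"), shape_for("b"))  # DNSXTZ DNSXTZ MBP
```

A table like this is also a handy checklist when creating the duplicate shape keys in Blender, since every letter you might type in JLipsync needs a same-named key.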
I keep getting an error message stating that the file doesn’t exist.
I need more information; I’m not really sure what you’re referring to.