Hello everyone! Let's say that I want to create something like this:
This can be done easily with CrazyTalk Animator, but we like to use Blender, right? Let's say also that I already know how to blink the eyes and how to move the mouth in sync with the sound. What I don't know is how to make the head, the shoulders and the chest swing a little, in keeping with the movements of the lips and eyes. I have been thinking about how to do that for some time, and so far I haven't found any tutorial that teaches it. This tutorial, for example, teaches how to sync the lips, and it makes the character speak nicely, BUT while it does so, his head and shoulders don't move, so it's not very realistic.
Lipsinc for the Lazy in Blender 2: Realism
Maybe I can apply the same technique that I use to sync the lips with the sound, but this time, instead of driving the lips, driving the head and the shoulders?
In that tutorial I go into a quick way of recording the head and shoulder movement… it’s not perfect, but it’s quick and can work quite well.
I was wondering about automating this by…
Using the volume of the voice to advance through a smooth animation (can be done via import sound to f-curves)
Converting voice to midi data and then importing this to trigger poses (no idea how to do this)
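For idea 1, the raw ingredient is a per-frame volume envelope, essentially what Bake Sound to F-Curves computes internally. As a sketch outside Blender (the function name and frame size are my own, not a Blender API), it is just the RMS of the samples that fall inside each frame:

```python
import math

def rms_envelope(samples, samples_per_frame):
    """Per-frame loudness: root-mean-square of each frame's samples."""
    env = []
    for start in range(0, len(samples), samples_per_frame):
        chunk = samples[start:start + samples_per_frame]
        env.append(math.sqrt(sum(s * s for s in chunk) / len(chunk)))
    return env

# silence gives 0.0, a constant square wave of amplitude 0.5 gives 0.5
print(rms_envelope([0.0] * 4 + [0.5, -0.5, 0.5, -0.5], 4))  # → [0.0, 0.5]
```

Inside Blender you would skip this step and just bake the sound onto an F-curve, but having the envelope as plain numbers makes it easier to experiment with the mappings discussed below.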
For sure I can move the head and shoulders slightly by hand, adding new keyframes on the timeline, or even by using a BVH file, but as I said I would like to find a way to make them swing autonomously and automatically. Maybe I can use the "bake sound to f-curve" technique again, but this time moving the head and the shoulders instead of the lips. The trick could work like this: I create a set of shapekeys that move the head and the shoulders a little (like the phonemes for the lips), and I drive them with the F-curve baked from the voice, so I should obtain some kind of oscillation. Which shapekey is used correlates with the meaning of the speech and of the body movement in general, while the spikes of sound determine how many times per second the shapekey is applied. So the idea is to create a new set of shapekeys to move the head and the shoulders, where how often they move depends on the spikes of sound. Really, I don't know what the final result would be.
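A rough sketch of that oscillation idea (the names and the smoothing factor are my own assumptions, not a Bastioni or Blender API): smooth the volume envelope and use the result as the value you key onto a head-sway shapekey, so the head leans with loud passages instead of jittering with every spike:

```python
def sway_curve(envelope, smoothing=0.8, strength=1.0):
    """Exponentially smooth a volume envelope into a gentle sway value,
    one 0.0-1.0 number per frame, to key onto a head/shoulder shapekey."""
    value = 0.0
    curve = []
    for v in envelope:
        # higher smoothing = lazier head, lower = twitchier head
        value = smoothing * value + (1.0 - smoothing) * v
        curve.append(min(1.0, value * strength))
    return curve

# a sustained loud passage makes the sway build up gradually
print(sway_curve([1.0, 1.0, 1.0], smoothing=0.5))  # → [0.5, 0.75, 0.875]
```

The same curve could drive a bone rotation through a driver instead of a shapekey, which fits the later point in this thread that head movement shouldn't deform the mesh.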
One tickbox in Bake Sound to F-Curve allows the curve to be purely additive - to only ever go up, never down, but to rise faster during loud moments. In theory, this curve can be the 'playhead' of an action.
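That additive curve is just a running sum of the volume, rescaled to the action's frame range. A sketch of the idea (this is not the actual Bake Sound to F-Curve code, just the math behind it):

```python
def playhead_curve(envelope, action_frames):
    """Accumulate volume so the curve only ever rises, faster when loud,
    then rescale it to sweep frames 0..action_frames of an action."""
    total = 0.0
    acc = []
    for v in envelope:
        total += v
        acc.append(total)
    if total == 0:
        return [0.0] * len(envelope)  # pure silence: stay on the first pose
    return [a * action_frames / total for a in acc]

# loud moments advance the playhead faster than quiet ones
print(playhead_curve([1, 1, 4, 4], 100))  # → [10.0, 20.0, 60.0, 100.0]
```

In Blender you would map this value onto an Action Constraint's frame range, so loud speech scrubs quickly through the gesture action and silence holds the pose.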
I still think that, without code, recording the movement of the head and shoulders while listening to the voice in real time is a good way to swiftly create body movement for dialogue.
I also have plans to improve the mouth movement method again… Using Rhubarb is great - you get good F, W and L mouth shapes that you don't get just by using sound to trigger the mouth movement - but I think I can improve it again: use Rhubarb to determine the minimum and maximum mouth openness and to trigger the F, W and L shapes, and use volume to move within those boundaries.
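That min/max idea might look something like this: per Rhubarb mouth shape, fix an openness range, and let the volume slide the mouth inside it. The ranges below are invented for illustration; Rhubarb itself only outputs shape letters with timestamps:

```python
# invented openness ranges per Rhubarb mouth shape: (min, max)
OPENNESS = {
    'A': (0.0, 0.0),   # closed (P, B, M)
    'B': (0.1, 0.3),   # slightly open
    'C': (0.3, 0.6),   # open
    'D': (0.6, 1.0),   # wide open
    'X': (0.0, 0.0),   # idle / rest
}

def mouth_openness(shape, volume):
    """Clamp volume (0..1) into the openness range of the current shape."""
    lo, hi = OPENNESS.get(shape, (0.0, 1.0))
    return lo + (hi - lo) * max(0.0, min(1.0, volume))

# a closed shape stays closed no matter how loud the audio is
print(mouth_openness('A', 0.9))  # → 0.0
```

The F, W and L shapes would be triggered directly from Rhubarb's timestamps as before, while this function keys the jaw-open shapekey between them.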
Thinking more about what I want to do, I would like to make some corrections to what I said before. In fact, instead of talking about moving the shoulders, it's better to talk about moving the diaphragm: it doesn't make much sense to create a set of new shapekeys to simulate the movement of the shoulders. Bastioni lab already has shapekeys that can be used for this: Expressions_abdomExpansion_max/min and Expressions_chestExpansion_max/min, and they can be repeated, because breathing is cyclic. This tutorial can be used to create cyclic breathing:
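The cyclic part is easy to sketch: a cosine keyed onto one of those expansion shapekeys. The 4-second period is an arbitrary choice of mine, and I'm assuming the shapekey takes a 0-1 value:

```python
import math

def breathing_value(frame, period=96, fps=24):
    """0..1 expansion value for a breath cycle of `period` frames
    (96 frames at 24 fps = one breath every 4 seconds)."""
    phase = (frame % period) / period
    return 0.5 - 0.5 * math.cos(2 * math.pi * phase)

# start of the cycle: fully exhaled; halfway through: fully inhaled
print(breathing_value(0), breathing_value(48))  # → 0.0 1.0
```

Keying this every few frames (or wiring it into a driver with `frame` as the variable) gives the repeating chest/abdomen motion without any manual keyframing.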
If we want to create a realistic talking head, there are some other little movements to simulate, for example a random deglutition (swallowing). Bastioni has the shapekey Expressions_deglutition_max/min for that. For moving the head I can't make any new shapekey, because the head movement does not imply a deformation of the mesh; for this a BVH file can be used.
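The random deglutition could be scheduled the same way: pick swallow times at random intervals and key the Expressions_deglutition shapekey there. A sketch (the interval range is my guess at what looks natural):

```python
import random

def swallow_frames(total_frames, min_gap=120, max_gap=400, seed=None):
    """Frames at which to trigger a swallow, spaced at random intervals
    of min_gap..max_gap frames (120-400 frames = 5-16 s at 24 fps)."""
    rng = random.Random(seed)  # seedable, so the result is reproducible
    frames = []
    frame = rng.randint(min_gap, max_gap)
    while frame < total_frames:
        frames.append(frame)
        frame += rng.randint(min_gap, max_gap)
    return frames
```

At each of these frames you would key the deglutition shapekey up and back down over a handful of frames; the same scheduler could also trigger the occasional extra blink.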
All these things help, but for anything longer than about 5 seconds I think it's best if the head moves… maybe only a little. I would use the rig to move the head, as this is what it's for. How does CrazyTalk handle head movement?
I am very finicky, so I think that an animation becomes good only if we take care of all the details. Regarding the breathing, I found this tutorial:
You are right, I don't like to use the wave modifier either. Regarding how CrazyTalk handles the movement of the head, I don't know, but a good idea could be to create several types of head movement using action constraints and then randomize them…
Or even to create a Python script which generates different head movements each time. Or to create different head poses and, again, randomize them…
You may be interested in this:
# I can already tell you you won't like the results.
import math
import random
import bpy

BONE_NAME = 'head'        # the bone to nudge; change to suit your rig
LIMIT1, LIMIT2 = -15, 15  # rotation limits in degrees

ob = bpy.data.objects['metarig']
bpy.context.scene.objects.active = ob  # 2.7x API; in 2.8+ use bpy.context.view_layer.objects.active
bone = ob.pose.bones[BONE_NAME]
bone.rotation_mode = 'XYZ'

# pick a random axis (0 = X, 1 = Y, 2 = Z)...
axis = random.randint(0, 2)
# ...and a random angle within the limits, and apply it
angle = random.randint(LIMIT1, LIMIT2)
bone.rotation_euler[axis] = math.radians(angle)

def cripple():  # does everything: a random rotation on every bone
    ob = bpy.context.object
    if ob.type == 'ARMATURE':
        for bone in ob.pose.bones:
            bone.rotation_mode = 'XYZ'
            bone.rotation_euler[random.randint(0, 2)] = math.radians(
                random.randint(LIMIT1, LIMIT2))

# And since you know what every bone is, you can make your own list of
# (bone name, limits) pairs and generate a random pose from it.
# This is just a terrible way to animate, but can be a lot of fun if you
# just want to see random effects and apply limits to your bones. Note that
# the limits here are not the total limit but the limits as to what can be
# applied. You are far better off going into Rigify and creating your own
# rig by lining a metarig up, or by using someone else's human rig.