Here is a first draft of a script that uses Papagayo input files to output a lip-sync NLA track with phoneme action strips. It sits in the properties panel of the NLA editor.
To use it you will need an armature; I use a faceControl armature that drives bones and shape keys on the target armature/mesh, with all the phonemes set up as actions. You will also need to create an empty NLA track first, as I haven't worked out how to add a track when none exists.
Any fixes or additions to the script would be most appreciated.
# Papagayo lip sync using NLA strips
# Needs a face or faceControl armature with actions already set up for each
# phoneme: AI, E, MBP, etc, rest, O, U, FV, WQ
# Might extend this to take an offset frame... although selecting the track
# and moving it isn't the hardest thing.
# Needs an active track in the NLA editor... how do you use add_track
# without one existing?
import bpy
import os

def readPapagayoFile(context, filepath):
    fileName = os.path.realpath(os.path.expanduser(filepath))
    (shortName, ext) = os.path.splitext(fileName)
    if ext.lower() != ".dat":
        raise NameError("Not a Papagayo data file: " + fileName)
    print("Loading Papagayo file " + fileName)
    f = open(fileName, 'r')
    print(context.active_object)
    f.readline()  # skip the header line
    bpy.ops.nla.tracks_add()
    tracklist = context.active_object.animation_data.nla_tracks
    lipsynctrack = tracklist[len(tracklist) - 1]
    # name the track after the file name
    lipsynctrack.name = 'LipSync (' + os.path.basename(filepath) + ')'
    line = f.readline()  # read the first phoneme
    start_frame = float(line.split()[0])
    context.scene.frame_set(int(start_frame))
    action = line.split()[1]
    bpy.ops.nla.actionclip_add(action=action)
    lipsynctrack.strips[0].frame_start = start_frame
    i = 1  # counter for strips
    for line in f:
        start_frame = float(line.split()[0])
        action = line.split()[1]
        # end the previous strip where this one starts
        lipsynctrack.strips[i - 1].frame_end = start_frame
        context.scene.frame_set(int(start_frame))  # set the scene frame to the start frame
        bpy.ops.nla.actionclip_add(action=action)
        lipsynctrack.strips[i].frame_start = start_frame
        print(line)
        i += 1
    f.close()

class My_NLA_PT(bpy.types.Panel):
    bl_space_type = 'NLA_EDITOR'
    bl_region_type = 'UI'
    bl_label = "NLA lip sync"

    def draw(self, context):
        layout = self.layout
        row = layout.row()
        row.operator("object.load_lipsync")

class OBJECT_OT_LoadLipSync(bpy.types.Operator):
    # operator idnames must be lower-case dotted names; the panel button
    # above refers to this same idname
    bl_idname = "object.load_lipsync"
    bl_label = "Load Papagayo Data File"
    filepath = bpy.props.StringProperty(name="File Path", maxlen=1024, default="")

    def execute(self, context):
        readPapagayoFile(context, self.properties.filepath)
        return {'FINISHED'}

    def invoke(self, context, event):
        context.window_manager.add_fileselect(self)
        return {'RUNNING_MODAL'}
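For reference, the parsing half of the script can be tried outside Blender. Papagayo's .dat export (the Moho switch format) is a header line followed by "frame phoneme" pairs; a minimal standalone sketch of that parsing, with made-up sample data for illustration:

```python
def parse_papagayo(lines):
    """Parse Papagayo/Moho switch data into (frame, phoneme) pairs.

    The first line is a header (e.g. "MohoSwitch1") and is skipped,
    just as the script above does; each following line is
    "<frame> <phoneme>".
    """
    keys = []
    for line in list(lines)[1:]:   # skip the header line
        parts = line.split()
        if len(parts) < 2:
            continue               # ignore blank or malformed lines
        keys.append((int(parts[0]), parts[1]))
    return keys

# made-up sample data for illustration
sample = ["MohoSwitch1\n", "1 rest\n", "4 MBP\n", "7 AI\n", "12 rest\n"]
print(parse_papagayo(sample))
# [(1, 'rest'), (4, 'MBP'), (7, 'AI'), (12, 'rest')]
```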
It sometimes throws an error where the actions enum is empty… if this is the case, use the interface to add an action to a track and then it seems to work… any ideas on how to fix this?
Might be a good idea to provide the faceControl armature as well, dude.
It'll work on any armature that has actions named after the visemes: AI, O, rest, etc.
Using a try/except pair at the place where something may go wrong?
Will do. It is a context thing, I think. It bombs out at the actionclip_add operator when the blend file is newly opened and the script is run. Add an action strip to any NLA track using the UI and voilà, it works.
Also, is there a method for setting a unique name from the API… e.g. the way the UI gives you NLATrack, NLATrack.001, etc.?
TypeError: Converting py args to operator properties 'etc' not in enum <>
Do I need to define my own add-strip operator class? The script works OK after I have used the add-strip operator in the UI. I have a feeling it's something obvious and simple… but at the moment I'm missing it.
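One way to sidestep the operator-context problem entirely might be to skip bpy.ops and build the track and strips through the data API, which doesn't depend on the active editor. An untested sketch, assuming the nla_tracks.new() and strips.new(name, start, action) calls are available in your build (they are in later 2.5x/2.6 APIs):

```python
import bpy

def add_lipsync_track(obj, keys, track_name="LipSync"):
    """Build an NLA track from (frame, phoneme) pairs without bpy.ops.

    Assumes obj already has animation_data, and that an Action named
    after each phoneme exists in bpy.data.actions.
    """
    anim = obj.animation_data
    track = anim.nla_tracks.new()   # works even when no track exists yet
    track.name = track_name
    for n, (frame, phoneme) in enumerate(keys):
        if n > 0:
            # trim the previous strip first so the new one has room
            track.strips[n - 1].frame_end = frame
        action = bpy.data.actions[phoneme]
        track.strips.new(phoneme, int(frame), action)
    return track
```

This would replace both the tracks_add and actionclip_add operator calls in the script, so the "not in enum" error can't occur.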
Most objects have a name property, which you can set:
cube = bpy.data.objects["Cube"]
cube.name = "This is my name of the Cube"
changes the name of the 'Cube'.
cube2.name = "This is my name of the Cube"
ends up with two cubes with the same name. But if I duplicate the cube above in the UI, it is given the name plus a .nnn suffix that is unique. I was wondering if there was a method to assign a unique name.
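As far as I can tell, Blender itself uniquifies ID names on assignment (a clashing name gets a .001-style suffix). If you want the same scheme for your own strings, e.g. when naming several lip-sync tracks, a minimal sketch of that suffixing logic:

```python
def unique_name(base, existing):
    """Return base, or base plus a .NNN suffix, so it is not in existing."""
    if base not in existing:
        return base
    n = 1
    while "%s.%03d" % (base, n) in existing:
        n += 1
    return "%s.%03d" % (base, n)

names = {"Cube", "Cube.001"}
print(unique_name("Cube", names))    # Cube.002
print(unique_name("Sphere", names))  # Sphere
```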
So what this script does is map Papagayo's phoneme file onto armature bones that have already been rigged to a mesh. Nice :)
Mine does the same but with shape keys. I want to save that news for later, after doing more research, but I want to start a new branch about automatic lip syncing, like XSI has, and it will support shape keys at first, as they're more accurate than bones and easier too.
So it would be good to have someone port armatures to it.
Hmm, could debate you on that one. Given the new API, where everything can drive everything, it's a bit chicken-and-egg… but I'll stick to lip syncing a control armature that drives a combination of shapes and bones on multiple objects. For instance, I have a head mesh and a body mesh.
I was about to start developing a script for Blender that does exactly the same thing.
Searching the net for references on the Blender Python API and Papagayo, I luckily found your thread. Thanks a lot for starting it. I believe it will be great for the community.
I have created just a simple file with a mouth (a line with depth) and teeth (a box), and created some shape keys for each phoneme. I also created a three-keyframe action for each phoneme and renamed them accordingly.
When I run your script in Blender 2.55, the title of your script (NLA lip sync) shows up in the NLA panel, but with no options available. Am I doing something wrong?
I would love to collaborate to improve your script and get it tested.
Would you mind giving more instructions on how to use your script and, if possible, saying how I should set up my animation file? I am doing just a simple test with shape keys and NLA actions at the moment, just to see it working.
After running the script, it shows up as a button in the NLA editor properties panel… hit N in the NLA editor.
If the button doesn't work the first time and you get a "TypeError: Converting py args to operator properties 'etc' not in enum <>" error, just make a "junk" track and add an action to it… any action. Press the button again and that seems to get it to work…
The result should be a track with all your visemes as actions.
Note I haven't put any checking on it yet. You will need an action named after each of the visemes (or phonemes).
If you are doing shapes for the visemes, have a look at bat3's post, as it looks like you are set up to do them that way.
For the file that you set up… if you want to use it for NLA, you will need to set up a controlling armature. Get your hands on the Ludwig rig… this is the first rig I "deconstructed". Its faceControl armature drives not only bones in the face armature but shape keys as well.
Set up custom shapes for your bones and you get a nice controller look. There was a demo file somewhere that created the controller shapes in 2.4 or below, with things like sliders and the four-way sliders you might use for the tongue's x/y position.
Have sliders for mouth open, pout closed, etc.
Once you have that, you can set up a different action to emulate each viseme/phoneme. For example, have the tongue touch the roof of the mouth for an L.
Thanks for your reply.
I couldn't make your script work in my Blender 2.55 (Mac). It still doesn't show the button, only the title of the script on the panel when I press N.
I will try it later with a proper scene that I am creating right now, with a character, shape keys, armature and rig.
If you have some extra time, it would be nice to release a quick how-to video demonstrating how you made your script work with your basic scene, since yours is the only Blender-Papagayo script around.
If I succeed in using your script with my scene, I am more than happy to help you with that.
I will get back to this thread as soon as I have something showable.
I believe FlashAmp is a much faster way.
Perhaps you can do a script working with FlashAmp?
It is so fast compared to Papagayo and jlipsync: they give you seconds of lip sync where FlashAmp gives you minutes.
So have a try with FlashAmp.