Lipsync Importer & Blinker - 02/03 armature support

hi guys, this is my attempt to learn Blender's Python API, in the shape of a script similar to the lipsynchro script from Blender 2.4x.

i needed it to work on the 2.5x versions, tried to convert the old one and couldn't :), so once again i made my own.

i welcome any comments on the script :slight_smile:

latest updates are bold

DOWNLOAD:
from Here or grab the latest build with contrib-addons on graphicall:

WIKI page:
http://wiki.blender.org/index.php/Extensions:2.6/Py/Scripts/Import-Export/Lipsync_Importer

TUTORIAL link (for v2.2):
http://www.vimeo.com/15442522

for bugs please report here, i will fix them :slight_smile:

CHANGES log:
*version 0.20
-added automatic blink options.
-fixed compatibility with 2.56, added Gap option to hold keys for slow lipsync.
-separated the blinker from the lipsync with a menu.
*version 0.26 30/03/2011
-fixed API renaming issue.
-added JLipsync exported Moho file support
*version 0.30 15/08/2011
-Updated the UI a bit.
-Added support for Yolo exported Moho file.
-Added random generator for blinks.
*version 0.31 04/09/2011
-quick bug fix update
*version 0.40 05/09/2011
-refactored the underlying code to simplify things
-changed the workflow slightly (see the script's page)
*version 0.50 15/02/2012
-armature support
-now it shows in tools panel
=== Latest Version 0.51 04/01/2013 works with blender 2.65 ===
-Bug fixes only

come on guys, where are you? what a crowded place ???

It worked for me. Thank you, I’ve been looking for something like this.

salam (hello) bat3a… i tried your script in Blender 2.5.4 and it returns an error. how can i use this script (i'm a newbie)? thank you very much

hi again, i finished the UI part, i hope you like it

the WORKFLOW changed a bit, so here it is again:

1. export the file from Papagayo as a voice file
2. open your .blend file containing the character you want to lipsync
3. rename your mouth shape keys to the corresponding phonemes (AI, O, E, U, etc, L, WQ, MBP, FV, rest)
4. save your scene
5. copy the lipsync_importer.py file to the addons folder
6. start Blender and go to Preferences -> Add-Ons -> enable lipsync_importer; you'll find it in the tool shelf
7. now select the voice file and set the offset frame
8. change the shape key value (how strongly your lipsync will be pronounced)
9. set the ease in and ease out curves (how smooth your lipsync is)
10. press Plot Keys to plot the animation keys
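For reference, the voice file Papagayo exports (the Moho switch format) is plain text: a "MohoSwitch1" header line followed by "frame phoneme" pairs, which is exactly what the script's file splitter walks through. A minimal stand-alone sketch of parsing such a file, using the same split the script uses (the sample data and helper name here are mine, just for illustration):

```python
import re

# Hypothetical sample of a Papagayo/Moho voice file export
SAMPLE = """MohoSwitch1
1 rest
5 MBP
9 AI
14 rest
"""

def parse_voice_file(text):
    """Return a list of (frame, phoneme) pairs, skipping the header line."""
    pairs = []
    for line in text.splitlines()[1:]:          # first line is the MohoSwitch1 header
        parts = re.split(":? ", line.strip())   # same split pattern the importer uses
        if len(parts) >= 2:
            pairs.append((int(parts[0]), parts[1]))
    return pairs

print(parse_voice_file(SAMPLE))
# [(1, 'rest'), (5, 'MBP'), (9, 'AI'), (14, 'rest')]
```

Each pair then becomes one set of shape-key keyframes at that frame.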

i'll do a video tutorial on the script very soon :slight_smile:

for any bugs, report here in this thread; an error msg from the console would be helpful :slight_smile:

any ideas to improve it are welcome


# ##### BEGIN GPL LICENSE BLOCK #####
#
#  This program is free software; you can redistribute it and/or
#  modify it under the terms of the GNU General Public License
#  as published by the Free Software Foundation; either version 2
#  of the License, or (at your option) any later version.
#
#  This program is distributed in the hope that it will be useful,
#  but WITHOUT ANY WARRANTY; without even the implied warranty of
#  MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
#  GNU General Public License for more details.
#
#  You should have received a copy of the GNU General Public License
#  along with this program; if not, write to the Free Software Foundation,
#  Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
#
# ##### END GPL LICENSE BLOCK #####

bl_addon_info = {
    "name": "LipSync Importer",
    "author": "Yousef Harfoush - bat3a ;)",
    "version": (0,2,0),
    "blender": (2, 5, 4),
    "api": 32037,
    "location": "3D window > Tool Shelf",
    "description": "Plot Papagayo's voice file to frames",
    "warning": "",
    "wiki_url": "",
    "tracker_url": "",
    "category": "Import/Export"}


import bpy, re

# initializing variables
obj = bpy.context.object
scn = bpy.context.scene
typ = bpy.types.Scene
var = bpy.props

scn['offset']=0
scn['skscale']=0.8    
scn['easeIn']=3
scn['easeOut']=3

if bpy.context.user_preferences.filepaths.use_relative_paths == True:
   bpy.context.user_preferences.filepaths.use_relative_paths = False

# testing if a mesh object with shape keys is selected
def ini():
    
    if obj!=None and obj.type=="MESH":
        if obj.data.shape_keys!=None:
            if scn.fpath!='': mapper()
            else: print ("select a voice file")
        else: print("add shape keys PLEASE")
    else: print("Object is not a mesh, or no object is selected")
    

# mapping shape keys to phonemes vars.
def mapper():
    
    global AI, O, E, U, etc, L, WQ, MBP, FV, rest
    global AIphnm, Ophnm, Ephnm, Uphnm, etcphnm, Lphnm
    global WQphnm, MBPphnm, FVphnm, restphnm
    
    AI="off"; O="off"; E="off"; U="off"; etc="off"; L="off"
    WQ="off"; MBP="off"; FV="off"; rest="off"
      
    sk=len(obj.data.shape_keys.keys)
    
    for x in range(sk):
        
        obj.active_shape_key_index = x
        
        if obj.active_shape_key.name=="AI": AI="on"; AIphnm=x
        elif obj.active_shape_key.name=="O": O="on"; Ophnm=x  
        elif obj.active_shape_key.name=="E": E="on"; Ephnm=x   
        elif obj.active_shape_key.name=="U": U="on"; Uphnm=x
        elif obj.active_shape_key.name=="etc": etc="on"; etcphnm=x
        elif obj.active_shape_key.name=="L": L="on"; Lphnm=x
        elif obj.active_shape_key.name=="WQ": WQ="on"; WQphnm=x
        elif obj.active_shape_key.name=="MBP": MBP="on"; MBPphnm=x
        elif obj.active_shape_key.name=="FV": FV="on"; FVphnm=x
        elif obj.active_shape_key.name=="rest": rest="on"; restphnm=x
    
    # calling file splitter
    spltr()


# reading imported file & creating keys
def spltr():
    
    f=open(scn.fpath) # importing file
    f.readline() # reading the 1st line that we don't need
    
    for line in f:
        
        # removing new lines
        lsta = re.split("\n+", line)
        
        # building a list of frames & shapes indexes
        lst = re.split(":? ", lsta[0])
        frm = int(lst[0])
        
        # creating keys
        if lst[1]=="AI" and AI=="on": crtkey(AIphnm, frm)
        elif lst[1]=="O" and O=="on": crtkey(Ophnm, frm)
        elif lst[1]=="E" and E=="on": crtkey(Ephnm, frm)
        elif lst[1]=="U" and U=="on": crtkey(Uphnm, frm)
        elif lst[1]=="etc" and etc=="on": crtkey(etcphnm, frm)
        elif lst[1]=="L" and L=="on": crtkey(Lphnm, frm)
        elif lst[1]=="WQ" and WQ=="on": crtkey(WQphnm, frm)
        elif lst[1]=="MBP" and MBP=="on": crtkey(MBPphnm, frm)
        elif lst[1]=="FV" and FV=="on": crtkey(FVphnm, frm)
        elif lst[1]=="rest" and rest=="on": crtkey(restphnm, frm)

    f.close() # closing the voice file when done

# creating keys with offset and eases
def crtkey(phoneme, Skey):
    
    objSK=obj.data.shape_keys
    obj.active_shape_key_index=phoneme
    
    offst=scn['offset']        # offset value
    skVlu=scn['skscale']       # shape key value
    frmIn=scn['easeIn']        # ease in value
    frmOut=scn['easeOut']      # ease out value
        
    obj.active_shape_key.value=0.0
    objSK.keys[phoneme].keyframe_insert("value",
        -1, offst+Skey-frmIn, "Lipsync")

    obj.active_shape_key.value=skVlu
    objSK.keys[phoneme].keyframe_insert("value", 
        -1, offst+Skey, "Lipsync")
    
    obj.active_shape_key.value=0.0
    objSK.keys[phoneme].keyframe_insert("value", 
        -1, offst+Skey+frmOut, "Lipsync")


# creating the ui button that runs things
class LipSync_go(bpy.types.Operator):
    bl_idname = 'LipSync_go'
    bl_label = 'Start Processing'
    bl_description = 'Plots the voice file keys to timeline'
    
    def invoke(self, context, event):
        ini()
        return {'FINISHED'} # operators must return a status set

# drawing the user interface
class LipSync_viewer(bpy.types.Panel):
    bl_space_type = "VIEW_3D"
    bl_region_type = "TOOL_PROPS"
    bl_label = "LipSync Importer"
    
    def draw(self, context):
        
        # note: defining the properties inside draw() is not ideal,
        # but re-assigning them on every redraw is harmless in 2.5x
        typ.fpath = var.StringProperty(name="Import File ", description="Select your voice file", subtype="FILE_PATH", default="")
        typ.skscale = var.FloatProperty(description="Smoothing shape key values", min=0.1, max=1.0)
        typ.offset = var.IntProperty(description="Offset your frames")
        typ.easeIn = var.IntProperty(description="Smoothing In curve", min=1)
        typ.easeOut = var.IntProperty(description="Smoothing Out curve", min=1)
        
        layout = self.layout
        
        if obj != None:
            if obj.type == "MESH":
                col = layout.column()
                split = col.split(align=True)
                split.label(text="Selected object is: ", icon="OBJECT_DATA")
                split.label(obj.name, icon="EDITMODE_HLT")
            elif obj.type!="MESH":
                layout.row().label(text="Object is not a Mesh", icon="OBJECT_DATA")
        else:
            layout.label(text="No object is selected", icon="OBJECT_DATA")    
        
        layout.prop(context.scene, "fpath")

        col = layout.column()
        split = col.split(align=True)
        split.label("Shape Key Value :")
        split.prop(context.scene, "skscale")
        
        col = layout.column()
        split = col.split(align=True)
        split.label("Frame Offset :")
        split.prop(context.scene, "offset")

        col = layout.column()
        split = col.split(align=True)
        split.prop(context.scene, "easeIn", "Ease In")
        split.prop(context.scene, "easeOut", "Ease Out")
        
        
        col = layout.column()
        col.separator()
        col.operator('LipSync_go', text='Plot Keys PLEASE')
        
        col.separator()
        col.label("Version 0.20 Updated 23/9/2010")
        col.label("Yousef Harfoush")

# registering the script
def register():
    pass
def unregister():
    pass
if __name__ == "__main__":
    register()
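To see what the script's `crtkey()` actually plots without opening Blender: for each phoneme hit it inserts three keyframes on the shape key's value channel: zero at `frame - easeIn`, the full shape-key value at the frame itself, and zero again at `frame + easeOut`. A small bpy-free sketch of that pattern (the function name is mine; the parameter names mirror the scene settings):

```python
# Sketch of the keyframe pattern crtkey() plots, without bpy.
# Defaults mirror the script's initial scene settings (skscale=0.8, easeIn/Out=3).
def plan_keys(frame, offset=0, skscale=0.8, ease_in=3, ease_out=3):
    """Return the (frame, value) triplet the script inserts for one phoneme."""
    return [
        (offset + frame - ease_in, 0.0),   # ease-in: shape key off
        (offset + frame, skscale),         # the phoneme frame: shape key on
        (offset + frame + ease_out, 0.0),  # ease-out: back to off
    ]

print(plan_keys(10))
# [(7, 0.0), (10, 0.8), (13, 0.0)]
```

So a larger ease in/out spreads the mouth shape over more frames, which is what makes the sync look smoother.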

hi, i added a tutorial to the script

if you have any new ideas, i'm willing to help
:slight_smile:

OK, another link on YouTube

YouTube says: "An error occurred, try again later"
???

Vimeo works :wink:

Hi bat3a… I have been coming at lip sync from another angle, using NLA strips for phonemes rather than shape keys… http://blenderartists.org/forum/showthread.php/201490-NLA-lip-sync-using-papagayo-data-file-for-2.54?p=1732043&highlight=#post1732043… and hey, become the first to comment on it LOL…

I chose to go with an NLA track for lipsync. This way i can use a control armature to pose the visemes in the action editor. Of course some bones in the face-control armature may just drive shapes on the mesh. I did this because i like to have a jawbone on my mesh, and making sure the jawbone matched the viseme (back then… i can see now it probably wouldn't be too hard) drove me nuts.

Perhaps we could put our heads together on this. The script in my link works ok, but there are some glitches when the file is first opened… perhaps one day a "guru" may give me a hand… but the file-reading code may be of some assistance to you

Works great for me on 2.54

thanks very much!
my creature is talking!

nice to hear that someone is using it, that means my efforts haven't gone to waste :slight_smile:

Haven’t used it yet, I’ll try it out in a few weeks. Doing non-talking game animations right now.

Thank you Yousef! I am testing this now, it is a dream come true that this script exists.

Edit: Ahhh! it works like a charm. I have wanted a tool like this for blender for so long, thank you kindly. I will take this over an automated solution anyday! I think that we should work to expand this script, to include some random head eye movements, and you will have an extremely robust solution for large scale lip syncing.

nice to hear that your dream came true 3dmentia, and i'm doing the random eye script (that's inshallah easy) :slight_smile:

I think that we should work to expand this script, to include some random head eye movements, and you will have an extremely robust solution for large scale lip syncing

and doing random eye script (that's inshallah easy)

Very good idea, I’ll need this in a very near future project :wink:

hi all, i added automatic blinking options

Thanks again for this script, I will test the blinking option this weekend. Here is my first test of the lip sync script; of course the results are also affected by my own skill at creating morph targets and my ability with Papagayo. The script can probably produce even better results than this.

http://blenderartists.org/forum/showthread.php?t=204920&p=1757408#post1757408

Thanks Yousef!

you're welcome :slight_smile:

if you have any other ideas to improve my script, i welcome them.

it only detects English, Spanish and Italian… it should be able to handle any kind of words…

you mean Papagayo ??? that's what i hate about that program, it syncs the words based on a dictionary, while there might be a much simpler way: recognize only the letters (which would make it language independent). i'm trying to make that happen :slight_smile:
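To make the letter-based idea concrete: a dictionary-free approach could start from a simple map of letters to the Preston Blair phoneme set the script already uses. The mapping below is only a rough, hypothetical illustration (real speech doesn't map one letter to one mouth shape, and neither Papagayo nor the script does this):

```python
# Rough, hypothetical letter-to-phoneme map (Preston Blair set used by the script).
# Real pronunciation is context-dependent; this is only a naive starting point.
LETTER_TO_PHONEME = {
    "a": "AI", "i": "AI",
    "o": "O",
    "e": "E",
    "u": "U",
    "l": "L",
    "w": "WQ", "q": "WQ",
    "m": "MBP", "b": "MBP", "p": "MBP",
    "f": "FV", "v": "FV",
}

def letters_to_phonemes(word):
    """Map each letter to a phoneme, falling back to 'etc' for other consonants."""
    return [LETTER_TO_PHONEME.get(ch, "etc") for ch in word.lower()]

print(letters_to_phonemes("hola"))
# ['etc', 'O', 'L', 'AI']
```

Since it never consults a word list, it would work the same on any language written in the Latin alphabet, which is the language-independence the post is after.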

1 question: there was a branch in 2.4x that created a new window in Blender that looked like the Papagayo editor, i wonder what happened??? did they stop, or did something go wrong??