How do I play a MIDI file through a piano animation?

I composed a piece for piano.

I want to make a piano inside Blender (not done yet; first I want to know whether playing a MIDI file and converting it to an animation in Blender is even possible).

When that is done, what I want to know is: How can I play my composition on this piano?

If it makes any difference: since my piece is composed and played electronically, I can render each melody line (polyphonies, chords, etc.) as an independent MIDI file, so I only have to focus on one part of the music at a time.

But I still want to know how to make the keys on my piano move according to the notes played.

How do I do that?

You’ll have to convert the MIDI to an audio format (WAV or somesuch). As for the MIDI controlling the animation, I have no idea if that is possible.

But if you convert it to WAV you lose all the data that the MIDI contains. All you can do with a WAV file is find peak points, not the actual notes played. A MIDI file is essentially sheet music, in pure code; that is the difference. For example, you can edit a MIDI file and change the notes played in music software, but you cannot do that with a WAV file, because a WAV file contains only sound. A MIDI file contains no sound, just instructions.

The sound you hear when you play a MIDI file is actually the computer performing the sheet music. The file itself contains no audio, unlike MP3 or WAV; the result depends 100 % on the sound card and its MIDI channels.

If Blender does not have this already, I think it is a big shame that Blender cannot use the MIDI data and convert the digital sheet music directly into keyframes. It is technically possible, but unfortunately I cannot write Python myself.

For anyone interested in knowing more about the difference between MIDI and WAV, read here:
http://www.danmusic.com/midaudio.html

MIDI files contain no sound. Repeat, MIDI files contain no sound! They contain only performance data.
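
To make this concrete, here is a minimal sketch (using the third-party mido library, installed with pip install mido; the file name is hypothetical) that prints the raw performance data inside a MIDI file:

import mido  # third-party MIDI library: pip install mido

mid = mido.MidiFile('composition.mid')  # hypothetical file name
for msg in mid:
    # note_on/note_off messages carry pitch, velocity and timing -- no audio at all
    if msg.type in ('note_on', 'note_off'):
        print(msg.type, 'note:', msg.note, 'velocity:', msg.velocity, 'dt:', msg.time)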

Blender should definitely take advantage of this!

There you go then! It has documentation so you should just read it and try it out on something simple first.

Ok, will give it a try.

I have now read the documentation. But one thing stands out to me and prevents me from using it…

How on Earth am I supposed to tell the SCRIPT which objects in my scene are keys (on my piano) and which are not? How does the script know? This is definitely poor documentation!

Besides: This script gives me a Syntax Error when I run it.

Any help would be appreciated. I have seen so many videos of pianos playing, made in Blender, but none of them tells you how it was made.

Please help me!

This is proof that it can be done:

But of course, nobody wants to tell you how… or do they?

That video you showed had some problems. If it was converted to animation from a MIDI file, the script did not know anything except which keys the file was using. It does not look realistic as long as the length of each key press is not animated.
I don’t know how this can be achieved, but I’m also very interested. That’s the reason I made this comment. :stuck_out_tongue:

I may have finally figured out a way to do this and have the keys press accordingly… but it is very tiresome.

  1. First, compose a melody for piano in your music software. When done, select all the notes on the same pitch (for example, all the notes on A6) and render them out as a WAV file. But remember to give them a sound that is consistent. A piano sound is not consistent, as it changes and vibrates; give the notes something like an organ instead, and disable all reverb. To keep the sound consistent for Blender, move all the selected notes to a single pitch, say C5. Do this for every pitch, no matter what it is playing: what we want to capture is WHEN the note plays, not which note plays. Render out the WAV file. It should sound like a “BEEP” whenever the note is played, which is loud enough for Blender to recognize.

  2. In Blender, apply the Array modifiers of your piano keys, and give each and every piano key a Shape Key. It is best to know how much influence the baked sound (Graph Editor → Key → Bake Sound to F-Curves) has before doing anything, so test it with one note. If the sound wave is not loud enough, the baked curve may only drive the shape key to a value of around 0.1, which is not enough for the key press to be visible. Any fix for this would be appreciated, if you know one (see the driver sketch after this list). For now, I have my shape key push the key down to just below the piano at 100 % influence, so when the sound plays with an influence much less than 1, the key is still pressed by a visible amount and for the right length.

  3. Do the above for all your keys/notes. You may end up with about 12–14 sound files regardless of the length of your melody, one file for each distinct pitch played.
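
If the baked curve is too weak, one possible fix is a driver with a gain factor. This is a minimal sketch, not a tested recipe: the object name "Key.A6", the shape key name "Pressed", and the custom property "baked" are all hypothetical, and it assumes the sound was baked onto that custom property rather than onto the shape key value directly:

import bpy

obj = bpy.data.objects["Key.A6"]                     # hypothetical key object
pressed = obj.data.shape_keys.key_blocks["Pressed"]  # hypothetical shape key

# drive the shape key value from the baked custom property, amplified and clamped
driver = pressed.driver_add("value").driver
var = driver.variables.new()
var.name = "amp"
var.targets[0].id = obj
var.targets[0].data_path = '["baked"]'   # custom property carrying the baked curve
driver.expression = "min(amp * 10.0, 1.0)"   # gain of 10, clamped to a full press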

Any other ideas would be welcomed in this thread.

Looking at the script, it depends on the config file to connect it to the objects… Take a look at the config.example.yml and the readme.md.
It’s a bit of work to set everything up (the author also says so in the readme.md), but once you have your armature set, and the config correctly connected to its actions, it’s supposed to be straightforward…

What I would do is use the csv file from midicsv and parse it with Python. It has everything we need, and it shouldn’t be difficult to take that info and animate your objects.
Here’s the documentation of the csv format: http://www.fourmilab.ch/webtools/midicsv/

The most important information is this:

Track, Time, Note_on_c, Channel, Note, Velocity
Send a command to play the specified Note (Middle C is defined as Note number 60; all other notes are relative in the MIDI specification, but most instruments conform to the well-tempered scale) on the given Channel with Velocity (0 to 127). A Note_on_c event with Velocity zero is equivalent to a Note_off_c.

Track, Time, Note_off_c, Channel, Note, Velocity
Stop playing the specified Note on the given Channel. The Velocity should be zero, but you never know what you’ll find in a MIDI file.

Creating a simple Python script that can understand these lines is quite simple, and creating keyframes from them should be easy.
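
For example, after converting your file with the midicsv tool linked above (midicsv song.mid song.csv), a minimal sketch in plain Python (the file name is hypothetical) that extracts just the note on/off events could look like this:

events = []
with open('song.csv') as f:   # hypothetical midicsv output file
    for line in f:
        fields = [x.strip() for x in line.split(',')]
        if len(fields) == 6 and fields[2] in ('Note_on_c', 'Note_off_c'):
            time, note, velocity = int(fields[1]), int(fields[4]), int(fields[5])
            # per the spec, a Note_on_c with velocity 0 counts as a note-off
            ison = fields[2] == 'Note_on_c' and velocity > 0
            events.append((time, note, ison))

print(events[:10])   # (tick, note number, pressed?) tuples, ready for keyframing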

Nothing is simple. I’m an artist, not a coder; I don’t know Python. I completely gave up on the script, simply because I can’t even run the most basic thing. Blender can’t even open the script: it says Syntax Error at line 1 when it tries to run it. That alone tells me the script must be broken.

Also, I never got an answer on how to setup the rig according to the script.

If you don’t need it urgently, I may try to work something out over the weekend…

Thank you so much. I do not need it urgently, but I’d like it over the weekend, or maybe Monday?

But in any case, I would like a good tutorial on how to use any of this. Without a tutorial, I am completely lost. I am not used to using scripts, only addons, and even then I need a tutorial on how to use them.

In this case I want a tutorial (just a text tutorial) on how to get the MIDI to play on a mesh-keyboard.

I’ll try to keep it simple, with step-by-step info. I’ll tell you something when I start. :wink:

Did anything ever come of this?

Well, I wrote my own additional nodes for AN (Animation Nodes) to do this, and am now writing my own Node Editor and Node set to do it. It is not finished yet, and there is no manual either…

One of quite a few I have done over the last few years… All animations are done directly from a MIDI .csv file.

I had something written at that time, but it wasn’t calculating the times correctly and I forgot to continue.
It is still just sketch code, as it only reads note on/off events and time-related stuff (no velocities, controllers, key signatures, etc.).
All you need is to have the csv file linked in the Text Editor, and the note objects (named xx.000 to xx.127).

import bpy
from collections import defaultdict

# the csv file (linked in Blender's Text Editor)
csvfile = bpy.data.texts['pmc.csv']

# the prefix for the key objects (named from *.000 to *.127)
objectprefix = 'Cube'

# the track whose notes will be animated
# (the script selects by csv track number, not by MIDI channel)
tracknum = 2


class tempo():
    """Maps MIDI clock ticks to animation frames, honouring tempo changes."""
    def __init__(self, lst, clk, fps):
        self._clk = clk*1e6     # division (ticks per quarter note) scaled to microseconds
        self._lst = lst
        self._tbl = self._buildtable()
        self._fps = fps

    def _calc(self, clocknow, clockstart, curtick, timestart):
        # seconds elapsed since 'clockstart' at 'curtick' microseconds per quarter note
        return (((clocknow-clockstart)*curtick) / self._clk) + timestart

    def _buildtable(self):
        # build (start tick, end tick, tempo, start time) slices, one per tempo region
        clockstart = 0
        clockspeed = 500000     # MIDI default tempo: 500000 us/quarter (120 BPM)
        timestart = 0
        table = []
        for tempoevt in self._lst:
            if tempoevt[0] > clockstart:
                table.append((clockstart, tempoevt[0], clockspeed, timestart))
                timestart = self._calc(tempoevt[0], clockstart, clockspeed, timestart)
            clockstart = tempoevt[0]
            clockspeed = tempoevt[2]
        table.append((clockstart, float('inf'), clockspeed, timestart))
        return table

    def _getslice(self, value):
        return [sl for sl in self._tbl if sl[0] <= value < sl[1]][0]

    def calc(self, value):
        sl = self._getslice(value)
        time = self._calc(value, sl[0], sl[2], sl[3])
        return int(time * self._fps)

def readtracks(file):
    # group the csv records we care about by track number
    tracks = defaultdict(list)
    for line in file.lines:
        splits = [x.strip() for x in line.body.split(',')]
        if len(splits) > 1:
            track = int(splits[0])
            evttime = int(splits[1])
            evttype = splits[2]
            if evttype.lower() in ['header', 'tempo', 'note_on_c', 'note_off_c']:
                args = [int(s) for s in splits[3:]]
                tracks[track].append([evttime, evttype] + args)
    return tracks

def processtrack(track):
    # pair each note-on with its matching note-off: {note: [(start, end), ...]}
    notes = defaultdict(list)
    activenotes = {}
    for evt in track:
        evttype = evt[1].lower()
        if evttype not in ('note_on_c', 'note_off_c'):
            continue
        note, velocity = evt[3], evt[4]
        if evttype == 'note_on_c' and velocity > 0:
            # close a note that is still sounding before restarting it
            if note in activenotes:
                notes[note].append((activenotes.pop(note), evt[0]))
            activenotes[note] = evt[0]
        else:
            # Note_off_c, or Note_on_c with velocity 0 (equivalent, per the spec)
            if note in activenotes:
                notes[note].append((activenotes.pop(note), evt[0]))
    return notes

def processtempos(track):
    # collect all tempo change events from the track
    tempos = []
    for evt in track:
        if evt[1].lower() == 'tempo':
            tempos.append(evt)
    return tempos

def processheader(track):
    # the last field of the Header record is the division (ticks per quarter note)
    return track[0][-1]

def parsetracks(trackdict):
    # in a format-1 midicsv dump: track 0 = Header, track 1 = tempo map, rest = notes
    clock = 0
    notes = {}
    tempos = []
    for trackidx in trackdict.keys():
        if trackidx == 0:
            clock = processheader(trackdict[trackidx])
        elif trackidx == 1:
            tempos = processtempos(trackdict[trackidx])
        else:
            pnotes = processtrack(trackdict[trackidx])
            if len(pnotes):
                notes[trackidx] = pnotes
    return clock, notes, tempos


# scale times to animation
def scenespeed():
    rndr = bpy.context.scene.render
    return rndr.fps / rndr.fps_base   # the real frame rate is fps divided by fps_base

def timefy(clock, events, tempos):
    # convert note (start, end) ticks into (start, end) animation frames
    scaledevents = defaultdict(list)
    tempostat = tempo(tempos, clock, scenespeed())
    for note, evts in events.items():
        for evt in evts:
            sttime = tempostat.calc(evt[0])
            entime = tempostat.calc(evt[1])
            scaledevents[note].append((sttime, entime))
    return scaledevents


#object related stuff

def pressevt(note, start, end):
    # rotate the key down at 'start' and back up at 'end'
    obj = bpy.data.objects['{:s}.{:03d}'.format(objectprefix, note)]
    obj.rotation_mode = "XYZ"
    obj.rotation_euler.x = -0.20
    obj.keyframe_insert("rotation_euler", frame=start)
    obj.rotation_euler.x = 0.00
    obj.keyframe_insert("rotation_euler", frame=end)

def resetobjs(insertkf=-1):
    # put every key back to rest, optionally keyframing it at frame 'insertkf'
    for i in range(128):
        obj = bpy.data.objects.get('{:s}.{:03d}'.format(objectprefix, i))
        if obj is None:
            continue
        obj.rotation_euler.x = 0.0
        if insertkf >= 0:
            obj.keyframe_insert("rotation_euler", frame=insertkf)

def setinterpolation():
    # constant interpolation holds a key fully pressed until it is released
    for i in range(128):
        obj = bpy.data.objects.get('{:s}.{:03d}'.format(objectprefix, i))
        if obj is None or obj.animation_data is None:
            continue
        for fcurve in obj.animation_data.action.fcurves:
            for kf in fcurve.keyframe_points:
                kf.interpolation = 'CONSTANT'

def transfermotion(notes):
    # insert press/release keyframes for every note occurrence
    for note, presses in notes.items():
        for start, end in presses:
            pressevt(note, start, end)


def main():
    tracks = readtracks(csvfile)
    clock, notesEvents, tempoEvts = parsetracks(tracks)
    scaledtimes = timefy(clock, notesEvents[tracknum], tempoEvts)
    resetobjs(0)            # keyframe every key at rest on frame 0
    transfermotion(scaledtimes)
    setinterpolation()
    resetobjs()             # leave the scene with all keys released

main()

That looks great, and very much like Animusic. I came across this recently, which I think does what you’re doing:

A couple of years ago (during the Covid lockdown :slightly_smiling_face:) my son did a science fair project on this very same topic. The idea was to render the animation of the piano keys based on the MIDI file of the piece being played, as a way to help with remote teaching of music. It was a fun project, but my son has moved on. I am now trying to bring it back up some. Not sure how I can add a video here for a sample, though.

Ok, it looks like I cannot put in attachments here. Will try to share them on YouTube perhaps and give a link.

Animating the piano keys with music/midi