So here it is. I wanted to test MIDI music, but I don't know how to play the music directly in the stage. I wanted to move away from MP3, separate the sound of the instruments, and do it like this.
There is a way to read music files (SF2) in version 36.1; I'm looking into how it works, but there is no tutorial.
I tried with Python, but it leads to an error…
Do you mean a kind of MP3-to-MIDI conversion?
That is not possible here, and it's a very difficult job anyway. You could search the internet for software like AnthemScore, Melodyne, audio2score, etc.
So you want to import a MIDI file and then do two things:
play the MIDI … and
visualize it … ?
Well, a MIDI file is just the information about when each music instrument has to play which note… it doesn't say anything about the instrument itself… so you have to "select" one for every "channel".
And the visualization is… well, just another "notation", but graphical…
For this there are usually special programs out there… but Blender cannot read them natively… alas…
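The point above can be shown with plain Python: the event tuples and .wav file names below are invented for the sketch, but they illustrate how a MIDI-style stream only carries timed (channel, note) events, while the instrument per channel is entirely your own choice:

```python
# A MIDI-style stream is just timed note events per channel; the
# instrument is something *you* assign to each channel. The event
# tuples and .wav names below are made up for illustration.
EVENTS = [
    (0.0, 0, 60),   # (time in seconds, channel, MIDI note number; 60 = C4)
    (0.5, 0, 64),   # E4 on the same channel
    (0.5, 9, 38),   # channel 9 is conventionally percussion
]

CHANNEL_SAMPLES = {0: "piano_c4.wav", 9: "snare.wav"}  # your own mapping

def render_plan(events, samples):
    """Pair every note event with the sample chosen for its channel."""
    return [(t, samples[ch], note) for t, ch, note in events]
```

A real .mid file could be decoded into such tuples with a library like mido, but the idea stays the same: the file gives you the events, you supply the sounds.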
I have no experience with MIDI files, but .wav files can be managed through the aud module that is already available in UPBGE/Range. I had already sent Kevin the link to the official documentation, where he can get a complete overview of the functions that can be called, so that he can try to implement something like an editor/player similar to what I did some time ago.
Here is the link to the documentation: https://upbge.org/docs/v0.2.5/api/aud.html
To create an instrument you just need the sound of a note in .wav format; I used a C note, if I remember correctly.
The aud module allows you to modulate the pitch, volume etc…
Through pitch modulation, for example, a piano C can be made to sound like a different note.
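That pitch trick follows equal temperament: shifting by n semitones multiplies the playback rate by 2^(n/12). A minimal sketch against the linked 0.2.5 aud API (`aud.device`, `aud.Factory`, `.pitch`, `.volume` are from those docs; the `play_note` helper and its arguments are my own invention):

```python
def semitone_pitch(semitones):
    """Pitch factor for a shift of n semitones (equal temperament)."""
    return 2 ** (semitones / 12.0)

def play_note(wav_path, semitones, volume=1.0):
    # aud is only available inside Blender/UPBGE, hence the local import.
    import aud
    device = aud.device()
    factory = aud.Factory(wav_path)                     # load the .wav sample
    factory = factory.pitch(semitone_pitch(semitones))  # re-pitch the recorded note
    factory = factory.volume(volume)
    return device.play(factory)                         # handle: stop/pause/fade later
```

So playing the piano C sample with semitones=7 would sound a G, seven semitones up.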
You will have to write a Python script that makes the .wav file play when the progress line hits the bricks representing the notes, modulating the pitch to raise or lower the sound by a certain number of semitones deduced from the height (worldPosition.y) of the brick.
The length of the note bricks (worldScale.x) is instead correlated to the sustain, meaning that as a function of this value you will reduce the volume of the sound more or less quickly (fadeOut) until it reaches zero.
*It is important to make the collision-search algorithm you implement performant; otherwise, when you have many notes (bricks) in your editor, you could get slowdowns and the music would not be played with perfect timing.
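The triggering logic described above can be sketched engine-independently; inside UPBGE the 'x', 'y' and 'scale_x' values would come from each brick's worldPosition and worldScale (the dict layout and function name here are my own, not from the original project):

```python
def notes_to_trigger(playhead_x, bricks):
    """Return (semitones, fade_seconds) for every brick the progress
    line has just crossed, and mark each one so it only fires once.

    bricks: list of dicts with keys 'x' (position along the timeline),
    'y' (height, i.e. the semitone offset), 'scale_x' (note length,
    i.e. the sustain/fade-out time) and 'played'.
    """
    triggered = []
    for brick in bricks:
        if not brick["played"] and brick["x"] <= playhead_x:
            brick["played"] = True
            triggered.append((round(brick["y"]), brick["scale_x"]))
    return triggered
```

Each returned pair can then be fed to an aud playback call: the pitch from the semitone offset, and a fade-out spread over fade_seconds for the sustain.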
It's been a while since I've played with this experiment, but I think this is all you need to know about how it works. If you have some experience writing Python scripts, it shouldn't be too challenging! Only at the end will you need to look at improving performance.
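On that performance point: instead of scanning every brick each frame, one cheap approach is to sort the bricks by x once and only look at the slice the playhead has newly crossed. A sketch with the standard bisect module (the tuple layout is an assumption, as above):

```python
import bisect

def make_schedule(bricks):
    """Sort (x, semitones, sustain) tuples by x once, at edit time."""
    return sorted(bricks)

def due_notes(schedule, start_idx, playhead_x):
    """Bricks crossed since the last frame, plus the new start index.
    bisect finds the cut-off in O(log n) instead of touching every brick."""
    end = bisect.bisect_right(schedule, (playhead_x, float("inf"), float("inf")))
    return schedule[start_idx:end], end
```

Keeping the returned index between frames means each brick is visited exactly once, no matter how many notes the editor contains.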
Otherwise, if you are simply interested in composing music in a similar way, you could think about using software like LMMS. Bye