Idea: Generating audio from the physics engine

The idea
Hello, everyone. I was just watching this video: http://www.youtube.com/watch?v=FIPu9_OGFgc and was quite amazed by it. Being an audio engineer and audio plug-in developer, I got this idea: what if you could generate sound based on the many objects hitting each other? I can’t count how many times I’ve done sound effects for collisions and similar things in computer games while thinking, “A large part of this could be done by a computer.” I admit I am not a Blender user, but I do have some insight into other 3D apps and 3D engines in general, so here’s my idea, which I hope someone will find interesting:

Assumptions

  • Blender knows exactly when two objects hit each other (with at least per-frame precision).
  • Blender knows at what velocity objects are colliding.
  • Blender might even know how large each object is?
  • Blender probably knows whether a surface / texture looks hard or soft (based on specularity settings). A rough detection sketch follows right after this list.
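To make this concrete, here is a rough Python sketch of how that data might be pulled out of Blender. I am not a Blender scripter, so treat the details as assumptions: instead of asking the physics engine directly, it steps through the frames and flags contacts by simple bounding-sphere overlap (so it will re-fire while objects stay in contact):

    import bpy

    def find_collisions(fps=24):
        # Very rough per-frame contact detection: sample object positions
        # each frame and flag pairs whose bounding spheres overlap.
        scene = bpy.context.scene
        objs = [o for o in scene.objects if o.rigid_body is not None]
        prev_pos = {}
        events = []  # (time_in_seconds, obj_a, obj_b, relative_speed)
        for frame in range(scene.frame_start, scene.frame_end + 1):
            scene.frame_set(frame)
            pos = {o: o.matrix_world.translation.copy() for o in objs}
            for i, a in enumerate(objs):
                for b in objs[i + 1:]:
                    # Bounding-sphere radii from the objects' dimensions.
                    ra = max(a.dimensions) / 2
                    rb = max(b.dimensions) / 2
                    if (pos[a] - pos[b]).length <= ra + rb and prev_pos:
                        # Relative speed from frame-to-frame differences.
                        va = (pos[a] - prev_pos[a]) * fps
                        vb = (pos[b] - prev_pos[b]) * fps
                        events.append((frame / fps, a, b, (va - vb).length))
            prev_pos = pos
        return events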

Necessary additions to the UI
What would have to be added to Blender is that each surface type / texture should be able to have a user-selected .wav file associated with it. If you create a glass texture, you should attach the sound of two little pieces of glass hitting each other; if you create a wood texture, the sound of two pieces of wood hitting each other, and so on.
There should also be a way to specify an output folder for the resulting audio file, along with the desired output format (or maybe just hardcode it to 96,000 Hz / 24-bit; people can normalize and downsample from there).
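Until there is real UI for this, the mapping could even live in Blender’s existing per-material custom properties; the property name below is my own invention:

    import bpy

    # Hypothetical convention: each material carries the path of its
    # impact sample in a custom property (these already exist in Blender).
    for mat in bpy.data.materials:
        if "impact_wav" not in mat:
            mat["impact_wav"] = ""  # e.g. "//sounds/glass.wav"

    def impact_sample(obj):
        # Return the sample path of an object's first material, if any.
        if obj.material_slots and obj.material_slots[0].material:
            return obj.material_slots[0].material.get("impact_wav", "")
        return ""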

The actual calculation
Every time two objects collide with each other (the physics engine should know this), we calculate how much kinetic energy is involved in the collision. I’m not a physics expert, but the standard reduced-mass form should do:
energy = 0.5 * (mass1 * mass2) / (mass1 + mass2) * collision_velocity^2
where collision_velocity is the relative speed of the two objects at impact.
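Or as a tiny Python helper, simply restating the formula above:

    def collision_energy(mass1, mass2, collision_velocity):
        # Kinetic energy available in the collision (reduced-mass form).
        return 0.5 * (mass1 * mass2) / (mass1 + mass2) * collision_velocity ** 2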

The trick is that every time two objects collide, we mix the sounds of these two surfaces 50/50 and then scale the volume to match the “energy” we just calculated. For example, if a glass fragment hits a wooden table, we mix glass.wav with wood.wav and adjust the volume of the result by the collision energy.

We then mix this resulting “click” or “bonk” sound into the final output.wav, at the position that corresponds in time to when the collision happened. The resulting .wav will perfectly fit the animation we rendered out.
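Here is a minimal sketch of that mixdown, assuming mono samples, the third-party numpy and soundfile Python libraries, and the hypothetical event list from the detection sketch above, reduced to (time, sample path, sample path, energy) tuples:

    import numpy as np
    import soundfile as sf

    SR = 96000  # hardcoded output rate, as suggested above

    def render_collisions(events, duration_s, out_path="output.wav"):
        # events: list of (time_s, wav_a, wav_b, energy) tuples.
        out = np.zeros(int(duration_s * SR))
        for time_s, wav_a, wav_b, energy in events:
            a, _ = sf.read(wav_a)      # assumed mono and already at SR
            b, _ = sf.read(wav_b)
            n = max(len(a), len(b))
            click = np.zeros(n)
            click[:len(a)] += 0.5 * a  # the 50/50 mix of the two surfaces
            click[:len(b)] += 0.5 * b
            click *= energy            # louder collisions carry more energy
            start = int(time_s * SR)
            end = min(start + n, len(out))
            out[start:end] += click[:end - start]
        sf.write(out_path, out, SR, subtype="PCM_24")  # 24-bit output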

The user can then decide to add a bit of reverb or similar to simulate the perceived acoustics of the scene (I suggest leaving this to third-party audio software, as reverb simulation is horribly complicated to code). The dynamic range of the resulting output will be unusually high, and manually limiting / waveshaping / saturating the resulting audio file will be necessary for it to sound natural, which is why 24-bit output is a requirement.

Lowpass filtering (not sure this part is even necessary)
We might also be able to deduce how soft the collision is by looking at the specularity of the two colliding objects:

  • Shiny hits shiny (both have a lot of specularity): no low pass filtering.
  • Shiny hits soft: a lot of lowpass filtering.
  • Soft hits soft: a little bit more lowpassing than “shiny hits soft”.
    Assuming the lowpass frequency runs from 0…1 (1 = filter wide open, i.e. no filtering) and specularity is also 0…1, the formula could be something like:
    lowpass_frequency = ((obj1_specularity + obj2_specularity) / 2)^2
    Averaging keeps the value in range, and squaring keeps “soft hits soft” only a little darker than “shiny hits soft”, as the list above suggests. A small filtering sketch follows below.
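A sketch of how that could be applied with the third-party scipy library; mapping the 0…1 value onto a 200 Hz…20 kHz cutoff is my own arbitrary choice:

    from scipy.signal import butter, lfilter

    SR = 96000

    def apply_collision_lowpass(click, spec1, spec2):
        # Normalized cutoff from the formula above: 1.0 = wide open.
        lp = ((spec1 + spec2) / 2.0) ** 2
        if lp >= 1.0:
            return click  # shiny hits shiny: no filtering at all
        # Map 0..1 logarithmically onto 200 Hz .. 20 kHz.
        cutoff_hz = 200.0 * (20000.0 / 200.0) ** lp
        b, a = butter(2, cutoff_hz / (SR / 2), btype="low")
        return lfilter(b, a, click)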

Possible ways to improve the result

  • Randomizing the pitch of the texture .wav files (the glass/stone/wood click sounds) before mixing them in will make it sound more natural (see the sketch after this list).
  • We might simulate differently sized objects better by pitching down the texture .wav files depending on the size of the object.
  • Alternatively, we could allow the user to associate multiple .wav files with each texture, along with a way to sort them by size. That way they could provide files like woodblock_large.wav, woodblock_medium.wav and woodblock_small.wav, and Blender would then choose the proper one.
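A minimal sketch of the pitch randomization on a mono numpy buffer; for short percussive clicks, crude resampling (which also changes the length) should be good enough:

    import numpy as np

    def randomize_pitch(sample, max_semitones=2.0):
        # Pick a random detune and resample; ratio > 1 reads the sample
        # faster, i.e. plays it back higher at the original sample rate.
        semitones = np.random.uniform(-max_semitones, max_semitones)
        ratio = 2.0 ** (semitones / 12.0)
        positions = np.arange(0, len(sample) - 1, ratio)
        return np.interp(positions, np.arange(len(sample)), sample)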

A simpler solution requiring less code
In case the above seems overwhelming, simply exporting a MIDI file might still be somewhat useful, if not entertaining. We could export a MIDI file containing a note-on / note-off pair for each collision, with each note-on’s velocity corresponding to the energy of the collision (see the formula above). That way Blender wouldn’t have to deal with .wav files at all. The result would be much less precise, though.
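For instance, with the third-party mido Python library (my choice of library, and assuming the energies have been normalized to 0…1 so they fit MIDI’s 1–127 velocity range):

    import mido

    def export_collision_midi(events, path="collisions.mid"):
        # events: list of (time_s, energy) tuples, sorted by time,
        # with energy normalized to 0..1.
        mid = mido.MidiFile(ticks_per_beat=480)  # default 120 BPM: 960 ticks/s
        track = mido.MidiTrack()
        mid.tracks.append(track)
        prev = 0
        for time_s, energy in events:
            tick = int(time_s * 960)
            velocity = max(1, min(127, int(energy * 127)))
            track.append(mido.Message("note_on", note=60, velocity=velocity,
                                      time=max(0, tick - prev)))
            track.append(mido.Message("note_off", note=60, velocity=0, time=1))
            prev = tick + 1
        mid.save(path)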


Nice idea! Made me remember this one from SIGGRAPH 2010… sounds like a good proposal for a Bullet build?

Due to the flowing transitions between single impacts and continuous contacts, sound synthesis might work better than using samples.
Reference:
Game Programming Gems 4, “Controlling Real-Time Sound Synthesis from Game Physics” (Frank Luchs, 2004)

I also toyed with this idea. The assumptions you made about Blender are all possible!
The game logic has collision sensors.
Velocities can be calculated with Python.
Mass, volume, and scale are easy to get at.
Texture or material properties can be read from the materials assigned to an object.
I’ve never worked with audio, but this idea is doable in Blender.

It’s a great idea!!!