Camera and Sound interaction

Hi,

I’m working on a project in which people are guided by sound to solve a maze. For this I need realistic 3D sound. As far as I know, Blender’s 3D sound function only adjusts the intensity (volume) difference between the left and right ear to spatialize sound.
It would be nice if it also reproduced the time difference between the left and right ear. So when a sound comes from the left side of the camera, not only is the intensity lower in the right ear, there is also a small delay in the right ear (up to roughly 0.6 ms for a human head). I think this would enhance the experience of 3D sound.
Does anyone know how to do this, or how I can access the left and right channel in the active camera?

Kind Regards

Not sure, but Audaspace has a pretty extensive audio library. You might want to check that out.

The audaspace module does have functions for delaying sound. You would need to adjust the time difference between the left and right speakers according to the angle between the sound source and the listener. An easier approach would be to measure the distance the sound has to travel to each ‘ear’ and from that calculate the time the sound takes to reach each ear, then delay the left and right channels accordingly (a sketch of that calculation follows below). This gives you the interaural delay used to help determine direction, but also a delay between the source emitting the sound and the sound reaching the listener. That would make the sound appear much more 3D, as the delay decreases as you approach the source. This would help guide the player, since I don’t think Blender’s 3D sound takes sound travel time into consideration. Furthermore, just to complicate matters: the delay between the ears differs across frequencies and is affected by ambient temperature.
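For the geometry part, here is a minimal sketch in plain Python. Everything in it is illustrative: the function name, the 0.09 m head radius, and the assumption that the head faces along +y with the ears offset along x.

import math

SPEED_OF_SOUND = 343.0  # metres per second in air at roughly 20 °C

def ear_delays(source, head, head_radius=0.09):
    # Hypothetical helper: 2D positions are (x, y) tuples in metres,
    # the head faces along +y and the ears sit head_radius to either
    # side along x. Returns the travel time to each ear in seconds.
    left_ear = (head[0] - head_radius, head[1])
    right_ear = (head[0] + head_radius, head[1])
    t_left = math.dist(source, left_ear) / SPEED_OF_SOUND
    t_right = math.dist(source, right_ear) / SPEED_OF_SOUND
    return t_left, t_right

# A source 2 m to the listener's left arrives about 0.5 ms earlier at
# the left ear; that difference is the interaural delay to apply.
print(ear_delays(source=(-2.0, 0.0), head=(0.0, 0.0)))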

The human ear doesn’t just use interaural time differences (ITD) to detect the direction of a sound. The ear and brain also use frequency content to determine direction. For example, if a sound comes directly from the listener’s left, the head will block out frequencies with a short wavelength (above roughly 1 kHz, though this depends on the listener’s head size). So the listener’s left ear picks up more high frequencies (treble) than the right ear. The audaspace module comes equipped with filters you could use to dampen the higher frequencies coming out of the speaker opposite the sound source; see the sketch below.
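As a concrete example, in Blender 2.8+ the aud module exposes lowpass(frequency, Q) and volume(factor) on sounds, so the head-shadow effect for the far ear might be faked like this (the file path, 1 kHz cutoff and 0.7 gain are illustrative placeholders):

import aud

# Head-shadow sketch: the ear facing away from the source hears a
# quieter, duller copy of the sound.
near = aud.Sound('step.wav')            # placeholder file path
far = near.lowpass(1000.0).volume(0.7)  # cut highs above ~1 kHz, attenuate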

Dealing with sounds coming from above/below or behind/in front is a bit trickier, as those sounds can reach both ears at the same time. Here, tone and frequency content are used to determine where the sound is coming from. The shape of the ear filters frequencies depending on the sound’s location: the pinna (the outer part of the ear) is directional, so sound coming from behind has certain frequencies attenuated that are not blocked when the sound comes from the front. Again, audaspace’s filters could help simulate this effect (a rough sketch follows below), though the real effect depends entirely on the listener’s ear shape.
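audaspace also exposes a generic filter(b, a) method that takes raw IIR coefficients, so a crude pinna cue could be approximated with a biquad notch built from the standard RBJ cookbook formulas. This is only a sketch: the 8 kHz centre frequency, the Q and the sample rate are made-up defaults, since the real notch is listener-specific.

import math
import aud

def pinna_notch(sound, freq=8000.0, q=2.0, rate=44100):
    # Crude stand-in for a pinna cue on sounds from behind, using the
    # standard RBJ biquad notch coefficients. freq, q and rate are
    # illustrative defaults, not measured values.
    w0 = 2.0 * math.pi * freq / rate
    alpha = math.sin(w0) / (2.0 * q)
    b = (1.0, -2.0 * math.cos(w0), 1.0)
    a = (1.0 + alpha, -2.0 * math.cos(w0), 1.0 - alpha)
    return sound.filter(b, a)  # apply the IIR filter to the sound

behind = pinna_notch(aud.Sound('step.wav'))  # placeholder file path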

Finally, sound reflections are also used to determine a sound’s location and the kind of space a person is in. The brain compares the intensity of the initial sound with the intensity of any reflections to help determine direction. A crude version of this could again be built with audaspace, as sketched below.
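A very rough first approximation mixes the dry sound with one quieter, delayed copy. delay(), volume() and mix() are documented aud.Sound methods; the 30 ms delay and 0.3 gain are illustrative, room-dependent values.

import aud

# Early-reflection sketch: dry signal plus a single delayed echo.
dry = aud.Sound('step.wav')         # placeholder file path
wet = dry.delay(0.030).volume(0.3)  # one echo, 30 ms later, quieter
aud.Device().play(dry.mix(wet))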

Not the most useful of replies, but hopefully it gives you a bit of insight into how humans determine sound direction and some of the considerations needed for realistic 3D sound. (My degree’s in psychology and neuroscience; this post is the most use I’ve made of my learning in a few years!)

Thanks for your help, I checked out audaspace. However, my problem is that I don’t know how to access the left and right speaker separately. Can anyone help with how to modify the left and right channel of a sound?
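In case it helps: one route that should work in Blender 2.8+ is to skip per-channel playback entirely and edit the samples yourself. According to the aud documentation, Sound.data() returns the samples as a numpy array and aud.Sound.buffer(data, rate) builds a playable sound from one, so each channel is just a column you can delay or attenuate. This is an untested sketch; the file path, 0.5 ms delay and 0.7 gain are placeholders.

import aud
import numpy as np

sound = aud.Sound('step.wav')      # placeholder file path
rate, channels = sound.specs       # specs is (sample rate, channel count)
samples = sound.data()             # numpy array, shape (frames, channels)
if channels == 1:
    samples = np.repeat(samples, 2, axis=1)  # duplicate mono to stereo

delay = max(1, int(0.0005 * rate))  # ~0.5 ms interaural delay in samples
left = samples[:, 0]
right = np.concatenate((np.zeros(delay, dtype=samples.dtype),
                        samples[:-delay, 1])) * 0.7  # delay and attenuate

stereo = np.ascontiguousarray(np.stack((left, right), axis=1),
                              dtype=np.float32)
aud.Device().play(aud.Sound.buffer(stereo, rate))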