Global Audio System - Concept?

Lately I’ve been trying to plan out the audio system for my current game project. I realized that I want one global system to manage all the sounds (footsteps, damage hits, water splashes, loops). However, I’m not sure how to handle this. Using logic bricks for sound is out of the question, because that would end up being more confusing than building a global system.

I’ve been thinking of making an audio class with multiple components: functions that handle playing sounds, organizing them, receiving them, checking which sounds have played, etc.

Let me give a couple of examples…
I want there to be footstep sounds when the player’s foot touches the ground. When the foot hits the ground, I add the footstep sound to the global queue as a one-time play; the audio system plays the sound and then removes it from the queue.
Maybe the player is severely hurt and a beeping alert sound starts to loop. The sound is added to the queue as a loop type and the audio system plays it indefinitely. Later the system gets a signal to stop the sound; the sound stops and is removed from the queue.
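The queue idea above could be sketched roughly like this. All the names here (AudioQueue, QueuedSound, the mode constants) are hypothetical, and the actual playback call is left as a comment so the structure runs standalone; in the BGE you would plug in the aud device at the marked spot.

```python
# Hypothetical sketch of the one-shot / loop queue described above.
ONE_SHOT = "one_shot"
LOOP = "loop"

class QueuedSound:
    def __init__(self, name, mode):
        self.name = name
        self.mode = mode
        self.playing = False

class AudioQueue:
    def __init__(self):
        self.queue = []

    def enqueue(self, name, mode=ONE_SHOT):
        """Add a sound request to the global queue."""
        self.queue.append(QueuedSound(name, mode))

    def update(self, finished_names):
        """Call once per frame. finished_names lists one-shots that ended."""
        for snd in self.queue:
            if not snd.playing:
                snd.playing = True  # here you'd actually call device.play(...)
        # Drop one-shots that finished; loops stay until stop() is called.
        self.queue = [s for s in self.queue
                      if not (s.mode == ONE_SHOT and s.name in finished_names)]

    def stop(self, name):
        """Signal a (looping) sound to stop and remove it from the queue."""
        self.queue = [s for s in self.queue if s.name != name]
```

So a footstep would be `enqueue("footstep")` and the hurt alert `enqueue("alert", LOOP)`, stopped later with `stop("alert")`.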

I’m not sure if this is the best way to do it, and I’m not sure how I’d actually go about it. I’m thinking of using the BGE’s sound system (aud?).

How would you guys go about making a sound system?

I use Audaspace (aud). It works pretty well. It handles looping, volume, pitch bends (particularly useful when playing a single sound over and over again and you want it to sound less grating), and other things for you. I load all of the sounds via Audaspace into a global dictionary (`logic.sounds`, for example), store a reference to the audio device (`logic.auddevice`), and simply play them when I want (`logic.auddevice.play(logic.sounds['Explosion'])`, for example).
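A minimal sketch of that load-once-into-a-dict pattern. The factory constructor is injected as a parameter so the sketch runs without aud installed; the in-game lines at the bottom are an assumption based on the post above, not tested code.

```python
import os

def load_sounds(filepaths, make_factory):
    """Build a {name: factory} dict from a list of sound file paths,
    keyed by the file name without its extension."""
    sounds = {}
    for path in filepaths:
        name = os.path.splitext(os.path.basename(path))[0]
        sounds[name] = make_factory(path)
    return sounds

# In the BGE you would presumably do something like (untested assumption):
#   import aud
#   from bge import logic
#   logic.sounds = load_sounds(["//sounds/Explosion.ogg"], aud.Factory)
#   logic.auddevice = aud.device()
#   logic.auddevice.play(logic.sounds["Explosion"])
```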

Yeah, I remember trying to do sound reversing with that.
I’ve been thinking: what if I want to lower the volume for everything, or instantly stop all the sounds at the exact same time?

In other words, how can I control everything from one place?

I find the best way is to write a class that wraps aud (or just offers a nicer frontend to it). I could show you my source tomorrow evening.

The Audaspace device has a stopAll() function for stopping all playing sounds, and a volume property for altering the volume of all playing sounds. Agoose’s method would be good for expanding, though.

Ah, I see. I’ll try to build on what agoose was suggesting, then look over his source code tomorrow evening.

Thanks.

Edit:
@SolarLune, I meant things like stopping specific types of sounds, or lowering the volume of specific types of sounds (I should have made that clearer). So yes, exactly: for expanding (:
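Per-type control could look something like the sketch below: playback handles are grouped by a category string, so one call can mute or stop a whole group. `CategoryMixer` and its method names are hypothetical; with Audaspace, a handle would be whatever `device.play()` returns, and setting `handle.volume` or calling `handle.stop()` works on it the same way.

```python
# Hypothetical sketch: group playback handles by category so whole
# groups can be adjusted or stopped at once.
class CategoryMixer:
    def __init__(self):
        self.handles = {}  # category name -> list of playback handles

    def register(self, category, handle):
        """Record a handle under a category when a sound starts playing."""
        self.handles.setdefault(category, []).append(handle)

    def set_volume(self, category, volume):
        """Lower (or raise) the volume of every sound in one category."""
        for handle in self.handles.get(category, []):
            handle.volume = volume

    def stop_category(self, category):
        """Stop every sound in one category and forget its handles."""
        for handle in self.handles.pop(category, []):
            handle.stop()
```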

Feel free to check out the sound_manager we coded for Novus: http://code.google.com/p/novus-terra-code/source/browse/trunk/src/sound_manager.py

Sadly, towards the end of development I started adding features and functions quickly, so it’s not quite as clean as I’d want. But it might give you some ideas?

Ex.

Edit: Also some features are half baked (for the same reason) :o

Cool! Thanks Ex.
Why do you use semi-colons? (C++, Java syntax?)
Also, I see some code repetition between methods. Is there any reason the recurring code within those methods couldn’t be factored out into helper methods and called from each of them? Or does the class run faster if the code is just rewritten in place?
Just wondering. :stuck_out_tongue:

Agoose, are you going to post your source?

Is there a way to buffer music, sounds, or any other type of audio using Audaspace without suspending the entire game?
While music is being buffered I want, at the very least, to be able to show an animated loading screen so it doesn’t look like the game has crashed on startup.

Any ideas?
What I mean is… does the buffering need to happen in a single frame? Or can it be spread out over multiple frames so the screen doesn’t freeze while the audio is loading?
Edit:
I was thinking maybe something with scene.post_draw?
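One way to spread the work out, sketched below: keep a list of pending sound paths and buffer at most one per logic tick (or per `pre_draw`/`post_draw` callback), so the loading screen keeps animating between steps. The buffering function is injected because the exact aud call depends on your version (something like `lambda path: aud.Factory(path).buffer()` is an assumption you would need to verify); `IncrementalLoader` itself is a hypothetical name.

```python
# Hypothetical sketch: buffer one sound per frame instead of all at once.
class IncrementalLoader:
    def __init__(self, paths, buffer_one):
        self.pending = list(paths)   # sound file paths still to buffer
        self.buffer_one = buffer_one # callable: path -> buffered sound
        self.loaded = {}             # path -> buffered result

    def step(self):
        """Buffer at most one sound. Call once per frame from your
        loading-screen logic; returns True once everything is loaded."""
        if self.pending:
            path = self.pending.pop(0)
            self.loaded[path] = self.buffer_one(path)
        return not self.pending
```

Each frame the loading screen draws, then `step()` runs; once it returns True you swap to the game scene.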

You can buffer a sound with aud.Factory.buffer() (which returns a Factory, not a buffered playback handle). It shouldn’t be a noticeable amount of time to buffer a sound - I think you would generally buffer a short sound that you play often (like a footstep sound), and leave a longer sound (like music) for Audaspace to load and unload (or possibly stream?) as necessary.

EDIT: After re-reading your question, I see that you are buffering. I don’t know about partially buffering a sound - I think it’s either all or nothing. However, is it noticeable that your sounds are buffering? See if you’re using an efficient sound file type (OGG Vorbis files are small and, while lossy, sound close to transparent at reasonable bitrates, so they may be efficient enough not to take long to buffer).

Yeah, I’m using .ogg… (I’m just testing my code with a 6-minute song.) I was thinking you buffered long music and let short sound effects just load and unload when needed, because longer = more info to load and shorter = less info to load. Heh… not sure if that’s correct reasoning.

I’m not even totally sure of the point of buffering sounds, because it doesn’t seem to make a difference in my tests; I just see many code examples use it to set up their factories. I’m assuming it just helps with FPS in the long run.