[sdlsound] SDL_sound 2.0 details revealed!

Ryan C. Gordon icculus at clutteredmind.org
Thu Jul 29 18:51:25 EDT 2004

> Does this mean we're abandoning small devices without FPUs, like my ipaq?  
> 44100*2 pointless software-emulated conversions to floating point and back 
> per second would grind it to a halt.

Well, with the exception of the callbacks, this all happens behind the
scenes, so we could have an integer path for embedded devices that
doesn't make promises about clipping. Is it reasonable to have 32-bit
integer mixing, or should we target 16-bit for these devices? This is a
question of efficiency, mostly, since extreme sound quality is probably
secondary to making noise at all on, say, a Palm Pilot.
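For concreteness, here's a minimal sketch of what such an integer path might look like (illustrative only, not SDL_sound code): mix two 16-bit streams through a 32-bit accumulator and clip once at the end, so the cheap devices never touch floating point.

```c
#include <stdint.h>
#include <stddef.h>

/* Hypothetical 16-bit integer mix path: accumulate each frame in 32 bits,
   then clip to the 16-bit range. No FPU required. */
static void mix_s16(int16_t *dst, const int16_t *src, size_t frames)
{
    for (size_t i = 0; i < frames; i++) {
        int32_t mixed = (int32_t) dst[i] + (int32_t) src[i];
        if (mixed > 32767)
            mixed = 32767;
        else if (mixed < -32768)
            mixed = -32768;
        dst[i] = (int16_t) mixed;
    }
}
```

A 16-bit path like this keeps the working set small, which matters more on a Palm Pilot than the extra headroom a 32-bit mix buffer would give.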

Ironically, it's the float-to-int conversion that is slow on MacOS
(since the PowerPC can't move data between integer and floating-point
registers directly, so the conversion has to bounce through memory, and
memory access outside the CPU cache is painfully slow).
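The conversion itself is trivial in C; the PowerPC cost comes from the trip through memory, not the arithmetic. A sketch (clipping to [-1, 1] and scaling symmetrically, so -1.0f maps to -32767):

```c
#include <stdint.h>

/* Convert one float sample in [-1.0, 1.0] to signed 16-bit PCM.
   Out-of-range input is clipped first. */
static int16_t f32_to_s16(float f)
{
    if (f > 1.0f)
        f = 1.0f;
    else if (f < -1.0f)
        f = -1.0f;
    return (int16_t) (f * 32767.0f);
}
```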

We'd have to figure out something with the post-mix callbacks, though.
Part of the reason for the float32 mixer is CoreAudio, but part of it is
application convenience...there's nothing worse than having to write 12
codepaths for all the different audio formats you might have to process.
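To illustrate the convenience argument: with a float32 mixer, an effect in a post-mix callback needs exactly one codepath, regardless of what the decoder or the hardware speak. The callback signature below is hypothetical, not the actual SDL_sound 2.0 API.

```c
#include <stddef.h>

/* Hypothetical post-mix callback shape: all samples arrive as float32,
   so a gain stage like this is the only codepath you ever write. */
typedef void (*postmix_fn)(void *userdata, float *samples, size_t count);

static void halve_gain(void *userdata, float *samples, size_t count)
{
    (void) userdata;  /* unused in this example */
    for (size_t i = 0; i < count; i++)
        samples[i] *= 0.5f;
}
```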

> Can you limit the maximum number of sounds playing at the same time?  I don't 
> see the ability to mix 1000 simultaneous sounds as something to be desired, 
> since every individual sample would be completely inaudible.

You can assign priority to sounds, and it'll cull the rest out. This is
kinda fuzzy right now, so it's mentioned but not really covered in the
tutorial doc. Some amount of application-level logic to decide what
should be played is going to be desirable in any case; having a priority
system is the cheapest way to let the mixer worry about this, but a more
robust app will probably want to cull on its own.
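As a sketch of what priority culling means (not SDL_sound's actual algorithm): when there are more active sounds than mixer voices, only the highest-priority ones stay audible, with earlier sounds winning ties.

```c
#include <stddef.h>

/* Mark which of n sounds are audible, given at most max_voices mixer
   voices. A sound plays if fewer than max_voices sounds outrank it
   (higher priority, or equal priority but started earlier). */
static void cull(const int *prio, int *audible, size_t n, size_t max_voices)
{
    for (size_t i = 0; i < n; i++) {
        size_t outranked_by = 0;
        for (size_t j = 0; j < n; j++) {
            if (prio[j] > prio[i] || (prio[j] == prio[i] && j < i))
                outranked_by++;
        }
        audible[i] = (outranked_by < max_voices);
    }
}
```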

> Just as devil's advocate, if it's not integral to SDL_sound, why build it into 
> it instead of making it an add-on library?

Because more add-on libraries mean more external dependencies. Mostly it
_should_ be there, but things that need an extremely small library
footprint will want to leave it out (case in point: using SDL_sound just
to decode audio into raw PCM for feeding to another mixer, like OpenAL).
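That decode-to-PCM use case looks roughly like this with the existing SDL_sound 1.0 API (the file name is a placeholder and error handling is trimmed; the OpenAL buffer is assumed to come from alGenBuffers elsewhere):

```c
#include <AL/al.h>
#include "SDL_sound.h"

/* Decode an audio file to raw PCM with SDL_sound and hand the whole
   thing to an OpenAL buffer. Sketch only; "sound.ogg" is a placeholder. */
int load_into_openal(ALuint albuf)
{
    Sound_AudioInfo desired;
    Sound_Sample *sample;

    desired.format = AUDIO_S16SYS;  /* signed 16-bit, native byte order */
    desired.channels = 2;
    desired.rate = 44100;

    Sound_Init();
    sample = Sound_NewSampleFromFile("sound.ogg", &desired, 65536);
    if (sample == NULL)
        return -1;

    Sound_DecodeAll(sample);  /* decode everything into sample->buffer */
    alBufferData(albuf, AL_FORMAT_STEREO16, sample->buffer,
                 sample->buffer_size, desired.rate);

    Sound_FreeSample(sample);
    Sound_Quit();
    return 0;
}
```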

The SDL_sound mixer will benefit from the tight integration with the 2.0
API in ways that an external library can't (i.e. - we can add
mixer-specific internal data per-sample, and know when to lock the mixer
callback when calling 2.0 APIs on samples that are currently playing,
etc.).

> Is Sound_MixInit calling SDL_Init and related calls for us, or are they 
> omitted for clarity?

I haven't decided, honestly. It'll definitely call SDL_OpenAudio() for
obvious reasons. The "NULL" is there in case you want to open the device
in a specific format (but then, do we force SDL to emulate that format
if the hardware doesn't support it directly? etc.).

These are important questions that still need good answers.

> That is an elegant and powerful way to tie SDL_sound and the mixer together, 
> but I see one big limitation:  What if you want to have several instances of 
> 'hello' being played at the same time?  Will you have to load it into memory 
> several times to get different Sound_Samples to tell them apart?

See the "hardcore" section...there are ways to pool all this to be CPU-
and memory-efficient for multiple copies of the same sound data. It
probably shouldn't be in the "hardcore" section of the tutorial, to be
honest.

> I'd suggest allowing a range of -2.0f to 2.0f, or perhaps -32767 to +32767 if 
> we do integer mixing.  You can do some nifty pseudo-surround effects by 
> inverting one stereo channel.

Hmm...interesting. Sure, why not?
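The trick being suggested: a per-channel gain of -1.0f flips the phase of one stereo channel, which the ear hears as a cheap pseudo-surround effect. A sketch on interleaved stereo float samples:

```c
#include <stddef.h>

/* Apply independent gains to the left and right channels of an
   interleaved stereo float buffer. Passing right_gain = -1.0f inverts
   the right channel's phase (the pseudo-surround trick). */
static void apply_gain_stereo(float *interleaved, size_t frames,
                              float left_gain, float right_gain)
{
    for (size_t i = 0; i < frames; i++) {
        interleaved[i * 2]     *= left_gain;
        interleaved[i * 2 + 1] *= right_gain;
    }
}
```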

