[sdlsound] SDL_sound v2's internal mixer format...

Tyler Montbriand tsm at accesscomm.ca
Wed Aug 11 16:48:22 EDT 2004


On Wednesday 11 August 2004 05:19, Ryan C. Gordon wrote:
> 2) I reject the mixing to int32
Ouch.  All right.  If these points don't convince you, I'll drop it.

> this is extremely cache-unfriendly to keep jumping between different buffers
> in memory, and it makes it difficult or impossible to use SIMD effectively.
I'm not sure we need different buffers; converting in place works fine:

  #include <stdio.h>
  #include <SDL/SDL_types.h>
  typedef union audioptr
  {
    Sint32     *s32_mono;       /*   s32_mono[sample]            */
    Sint32    (*s32_stereo)[2]; /* s32_stereo[sample][channel]   */
    Sint16     *s16_mono;       /*   s16_mono[sample]            */
    Sint16    (*s16_stereo)[2]; /* s16_stereo[sample][channel]   */
                                /*        ... etc ...            */
    const void *cv;             /* Can set with any pointer type */
  } audioptr;

  /* Convert 32-bit to 16-bit in place */
  void buf32_to_16(audioptr buf, int len)
  {
    int n;
    for(n=0; n<len; n++)    buf.s16_mono[n]=buf.s32_mono[n]>>16;
  }

  /* Convert 16-bit to 32-bit in place, buffer must be large enough */
  void buf16_to_32(audioptr buf, int len)
  {
    int n;
    for(n=len-1; n>=0; n--) buf.s32_mono[n]=buf.s16_mono[n]<<16;
  }

  int main()
  {
    Sint16 data[]={-4,-3,-2,-1,0,1,2,3,0,0,0,0,0,0,0,0},n;
    audioptr p={(void *)data};

    for(n=0; n<8; n++)
      printf("%+d ",p.s16_mono[n]);
    printf("\n");

    buf16_to_32(p,8);
    for(n=0; n<8; n++)
      printf("%+d ",p.s32_mono[n]);
    printf("\n");

    buf32_to_16(p,8);
    for(n=0; n<8; n++)
      printf("%+d ",p.s16_mono[n]);
    printf("\n");

    return(0);
  }
prints:
  -4 -3 -2 -1 +0 +1 +2 +3
  -262144 -196608 -131072 -65536 +0 +65536 +131072 +196608
  -4 -3 -2 -1 +0 +1 +2 +3

> It's probably best to say that clipping isn't a wildly important concern
> in most games
I can't agree with this.  Clipping is a non-issue only because things like 
SDL_mixer *don't* clip.  They sidestepped the problem by using a fixed (albeit 
configurable) number of channels.

The only time I ever "succeeded" in getting SDL_mixer to clip was with an 
echo-effect callback.  It mostly worked with only the occasional snap, but 
when the music got loud it was dreadful.

Some assembly lets integers saturate instead of wrapping, but quality is still 
audibly lost.  Besides, we can't count on assembly solutions;  who are we 
going to get to hand-optimize the mixing routines for every platform 
known to SDL?  Or we could brute-force it:

  /* Saturating 16-bit add: clamp to [-32768, 32767] instead of wrapping */
  Sint16 s16_add_saturate(Sint16 buf, Sint16 val)
  {
    if(buf>=0)
    {
      if(val>(32767-buf))   /* sum would overflow upward  */
        return(32767);
      else
        return(buf+val);
    }
    else
    {
      if(-val>(32768+buf))  /* sum would overflow downward */
        return(-32768);
      else
        return(buf+val);
    }
  }

If the overhead of 32-bit mixing is bad, this has got to be worse.

> First, we have to have multiple backends inside the mixer. There's just
> no way around it. There's no good reason to piss away CPU time on a
> handheld to make a nice MacOS path, and vice versa.
Mixing in float gives us the opposite problem:  suddenly, integer samples 
incur a hell of a lot of overhead.

I'm curious if my direct int-float conversion hack is faster...  probably not.

-Tyler
