[physfs] Replace malloc/strcpy by strdup

Ryan C. Gordon icculus at icculus.org
Mon Oct 4 01:52:31 EDT 2010


> I postulate that the cost of this memory allocation is negligible in
> comparison to the cost of the real I/O. So, I'm out of this discussion.
> Nevertheless, a good benchmark would be interesting.

I tend to agree...most of the overhead of PhysicsFS would almost 
certainly be either disk I/O or decompression (but I haven't 
benchmarked it, either).


Other notes:

allocator.Malloc() allows people to swap in their own allocators if, 
say, memory fragmentation is an issue. I've spent a lot of time 
reducing malloc pressure in PhysicsFS 2.0...a lot of things have moved 
to the stack, things aren't recalculated unnecessarily, etc. There's 
still a lot to do, though, since I think of this as something that 
should run well on embedded systems.
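
For reference, swapping one in goes through the public 
PHYSFS_setAllocator() hook, called before PHYSFS_init(). Something 
like this (untested sketch; the pool_* functions are just stand-ins 
for whatever allocator you'd really plug in):

    #include <stdlib.h>
    #include "physfs.h"

    /* Hypothetical pool_* hooks -- placeholders that just forward to
       the C runtime; a real pool allocator would go here. */
    static int pool_init(void) { return 1; }   /* nonzero == success */
    static void pool_deinit(void) {}

    static void *pool_malloc(PHYSFS_uint64 len)
    {
        return malloc((size_t) len);
    }

    static void *pool_realloc(void *ptr, PHYSFS_uint64 len)
    {
        return realloc(ptr, (size_t) len);
    }

    static void pool_free(void *ptr)
    {
        free(ptr);
    }

    static const PHYSFS_Allocator pool_allocator = {
        pool_init, pool_deinit, pool_malloc, pool_realloc, pool_free
    };

    /* Call before PHYSFS_init() so internal allocations use the hooks. */
    int use_pool_allocator(void)
    {
        return PHYSFS_setAllocator(&pool_allocator);
    }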

PATH_MAX (MAX_PATH?) is used in a few places, but shouldn't be. It's 
not good to assume a hardcoded filename length limit at compile time 
(and in the case of the Hurd, there IS no PATH_MAX).
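
The usual alternative is to size the buffer from the strings you 
actually have instead of a compile-time constant. Roughly this 
(untested; inside PhysicsFS it would go through allocator.Malloc(), or 
smallAlloc for temporaries, rather than plain malloc()):

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    /* Sketch: build "dir/name" sized from the actual strings, with no
       PATH_MAX-style limit baked in at compile time. Caller frees the
       result. */
    static char *build_path(const char *dir, const char *name)
    {
        const size_t len = strlen(dir) + strlen(name) + 2;  /* '/', '\0' */
        char *retval = (char *) malloc(len);
        if (retval == NULL)
            return NULL;  /* real code would report out-of-memory here */
        snprintf(retval, len, "%s/%s", dir, name);
        return retval;
    } /* build_path */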

Most of the places we're allocating temporary paths should be using 
__PHYSFS_smallAlloc(), which is a macro: it'll use alloca() to 
stack-allocate the data if it's small enough, and allocator.Malloc() 
otherwise (and __PHYSFS_smallFree() knows what to do with either). I'm 
not confident that the stack is that much faster for our purposes--at 
least until you have pathological heap fragmentation--but it _does_ help 
prevent fragmentation in the first place. Note that the allocation in 
that original patch doesn't use smallAlloc() because the data has to 
outlive the function, so it can't ever be on the stack.
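
If anyone's curious about the general shape of that trick: put a small 
header on each block recording where it came from, so the matching 
free knows what to do. A simplified, standalone sketch--NOT the actual 
physfs_internal.h code, and using plain malloc() instead of 
allocator.Malloc():

    #include <stdlib.h>
    #include <alloca.h>   /* some platforms want <malloc.h> for alloca() */

    #define SMALL_ALLOC_THRESHOLD 256  /* illustrative cutoff only */

    /* One pointer-sized header keeps the returned pointer aligned and
       records whether the block lives on the stack (1) or heap (0). */
    static void *small_alloc_finish(void *ptr, char on_stack)
    {
        char *block = (char *) ptr;
        if (block == NULL)
            return NULL;
        *block = on_stack;
        return block + sizeof (void *);
    } /* small_alloc_finish */

    /* Must be a macro: alloca() memory only lives until the *caller*
       returns, so the allocation has to happen in the caller's frame. */
    #define small_alloc(len) ( \
        ((len) < SMALL_ALLOC_THRESHOLD) \
            ? small_alloc_finish(alloca((len) + sizeof (void *)), 1) \
            : small_alloc_finish(malloc((len) + sizeof (void *)), 0) )

    static void small_free(void *ptr)
    {
        if (ptr != NULL)
        {
            char *block = ((char *) ptr) - sizeof (void *);
            if (*block == 0)      /* heap block: hand it back */
                free(block);
            /* stack block: nothing to do, it dies with the frame */
        } /* if */
    } /* small_free */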

Tolga's fix for the NULL check is good, though, and it's now in revision 
control (thanks!).

--ryan.


