I have been looking at some custom memory allocation schemes with respect to MUD drivers, and I was curious what techniques exist for assessing the efficacy of one allocation scheme (malloc replacement) versus another, or how, for example, to measure the effectiveness of a garbage collector. I guess the basic technique would be to add some sort of tracking system to the allocator, but I am curious what sort of things one would be looking for. Would Knuth's analysis of best, first, and worst fit be a place to start, or is that material too dated? Do more up-to-date techniques exist (since, for example, modern malloc implementations maintain multiple free lists for different size classes)?
A pretty decent way of comparing efficiency is to check how much space the allocator has claimed from the heap versus how much space the program is actively using; the gap between the two is your fragmentation and bookkeeping overhead. Compare your scheme against a hypothetical perfect allocator on one end, and against a naive one that always places new requests at the end of the heap on the other.
Another useful metric is how much time is spent in the allocator itself. You might have a scheme with wonderful space efficiency that takes far too long to service each request.
To be honest, though, the better question here is why you need to do this in the first place. Unless you have reason to believe that your MUD is spending inordinate amounts of time allocating memory or is wasting heap space left and right, you can probably avoid this entirely.
Thanks for the help. It's just something I have been wondering about since I've been poking through the FluffOS source code again, and historically it contains a number of custom allocation schemes. (I was also wondering how the current Linux malloc implementation compares to certain other schemes, and how one determines how effective those schemes actually are.)
This kind of micro-management is probably 99% pointless for applications running on modern operating systems. Back when memory was tight and disks were extremely slow, it was essential for getting any kind of performance, but these days you may not know how many pages of your application's data set are swapped in or out at any given moment, and to further complicate matters, you may not even know whether you are running in a virtual machine, which itself may be partly swapped out by the hypervisor.
In short, there are too many layers outside the application's control for this level of optimization to really matter. You could spend months devising a slightly better scheme, only to find that in real-world deployment your "RAM" is getting swapped to disk anyway.
I believe the saying is "penny wise and pound foolish." You'll see a lot more bang for the buck if you concentrate on minimizing the amount of memory you use, or on grouping together structures that get used frequently so they're more likely to stay swapped in.