17 Nov, 2010, Silenus wrote in the 1st comment:
Votes: 0
Hi,

I have been looking at some custom memory allocation schemes with respect to MUD drivers, and I was curious what techniques exist for assessing the efficacy of one allocation scheme (malloc replacement) versus another, or, for example, how to measure the effectiveness of a garbage collector. I guess the basic technique would be to add some sort of tracking system into the allocator, but I am curious what sort of things one would be looking for. Would Knuth's analysis of best, first, and worst fit be a place to start, or is that material too dated and do more up-to-date techniques exist (given that, for example, modern malloc implementations keep multiple free lists of different sizes)?
17 Nov, 2010, David Haley wrote in the 2nd comment:
Votes: 0
A pretty decent way of comparing efficiency is to check how much space has been allocated on the heap vs. how much space is actively in use; the gap between the two is fragmentation and bookkeeping overhead. Compare a hypothetical perfect allocator against one that always puts new memory requests at the end of the heap and never reuses freed space.
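For what it's worth, here is a minimal (untested) sketch in C of that kind of bookkeeping, assuming you can route the driver's allocations through a wrapper; the tracked_malloc/tracked_free names are hypothetical, and the heap footprint you compare against would come from the allocator's own statistics (e.g. mallinfo on glibc) or from the OS:

    #include <stdio.h>
    #include <stdlib.h>
    #include <stddef.h>

    static size_t live_bytes  = 0;  /* bytes the program asked for and still holds */
    static size_t live_blocks = 0;  /* number of outstanding allocations */

    /* header stores the requested size; the union keeps the user block aligned */
    typedef union { size_t size; max_align_t align; } header_t;

    void *tracked_malloc(size_t n)
    {
        header_t *h = malloc(sizeof(header_t) + n);
        if (!h)
            return NULL;
        h->size = n;
        live_bytes += n;
        live_blocks++;
        return h + 1;
    }

    void tracked_free(void *ptr)
    {
        if (!ptr)
            return;
        header_t *h = (header_t *)ptr - 1;
        live_bytes -= h->size;
        live_blocks--;
        free(h);
    }

    /* heap_footprint is whatever the allocator/OS reports it is actually holding;
     * the ratio shows how much of the heap is serving live requests */
    void report_utilization(size_t heap_footprint)
    {
        printf("live: %zu bytes in %zu blocks; heap: %zu bytes; utilization: %.1f%%\n",
               live_bytes, live_blocks, heap_footprint,
               heap_footprint ? 100.0 * live_bytes / heap_footprint : 0.0);
    }

The closer the utilization figure stays to 100% over a long run, the less the allocator is losing to fragmentation and per-block overhead.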

Another useful metric is how much time is spent in the allocator. You might have a wonderfully space-efficient allocation scheme that simply takes far too long to service each request.
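As a rough illustration of measuring that (again just a sketch, with a synthetic request mix standing in for a trace of the MUD's real allocation sizes), something like this times a batch of malloc/free pairs:

    #include <stdio.h>
    #include <stdlib.h>
    #include <time.h>

    #define N_REQUESTS 1000000

    int main(void)
    {
        static void *ptrs[N_REQUESTS];
        struct timespec start, end;

        clock_gettime(CLOCK_MONOTONIC, &start);
        for (int i = 0; i < N_REQUESTS; i++)
            ptrs[i] = malloc((size_t)(i % 512) + 16);  /* synthetic size mix */
        for (int i = 0; i < N_REQUESTS; i++)
            free(ptrs[i]);
        clock_gettime(CLOCK_MONOTONIC, &end);

        double secs = (end.tv_sec - start.tv_sec)
                    + (end.tv_nsec - start.tv_nsec) / 1e9;
        printf("%d malloc/free pairs in %.3f s (%.0f ns per pair)\n",
               N_REQUESTS, secs, secs * 1e9 / N_REQUESTS);
        return 0;
    }

Run the same harness against each allocator you're comparing and the difference in per-request cost shows up immediately.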

To be honest, though, the better question here is why you need to do this in the first place. Unless you have reason to believe that your MUD is spending inordinate amounts of time allocating memory or is wasting heap space left and right, you can probably avoid this entirely.
19 Nov, 2010, Silenus wrote in the 3rd comment:
Votes: 0
Hi David,

Thanks for the help. It is just something I have been wondering about, since I have been poking through the fluffos source code again, and historically it has contained a number of custom allocation schemes. (I was also wondering how the current Linux malloc implementation compares to certain other schemes, and how one determines how effective these schemes actually are.)
21 Nov, 2010, quixadhal wrote in the 4th comment:
Votes: 0
This kind of micro-management is probably 99% pointless for applications running on modern operating systems. Back when memory was tight and disks were extremely slow, it was essential for getting any kind of performance; these days, you may not know how many pages of your application's data set are swapped in or out at any given moment, and to further complicate matters, you may not even know whether you are running in a virtual machine, which may itself be partly swapped out by the hypervisor.

In short, there are too many layers outside the application's control for this level of optimization to really matter. You could spend months devising a slightly better scheme, only to see that in real-world deployment, your "RAM" is getting swapped to disk anyways.

I believe the saying is, penny wise and pound foolish. You'll see a lot more bang for the buck if you concentrate on minimizing the amount of memory you use, or on grouping frequently used structures together so they're more likely to stay swapped in.
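As a contrived illustration of that last point (the struct and field names here are invented, not from any particular codebase), splitting rarely touched data out of a structure the game loop scans every pulse keeps the hot path touching fewer pages and cache lines:

    /* Hypothetical example: the update loop reads only a couple of fields
     * per object each pulse, so keep those in a small "hot" struct and
     * push the bulky, rarely read data behind a pointer. */
    struct obj_cold {              /* rarely touched: descriptions, flags */
        char *long_desc;
        char *keywords;
        int   extra_flags[16];
    };

    struct obj_hot {               /* scanned every pulse */
        int              vnum;
        short            timer;
        struct obj_cold *cold;     /* bulky data lives elsewhere */
    };

Whether a split like that actually pays off depends entirely on the real access pattern, of course.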

All, IMHO, of course.
21 Nov, 2010, David Haley wrote in the 5th comment:
Votes: 0
Actually, memory allocation management can matter a very great deal for certain kinds of applications — except that MUDs are not one of those certain kinds of applications. :smile: