15 Jul, 2009, Silenus wrote in the 1st comment:
Votes: 0
I have been trying to design a pool allocator for allocating small objects (all of the same type) and I was curious how best to do this. My only reference on this topic is the book Modern C++ Design, and the allocator I wrote is loosely based on the one in the book (probably hopelessly misinterpreted). The question I have is: are there better strategies for dealing with situations like this? The code I have constructed is as follows and assumes I preallocate a huge block all at once. I don't like the fact that it exposes the next pointer to the client classes.

#include <cstdint>

template<class T>
struct Element
{
    T element;
    Element<T>* next;
};

template<class T, uint32_t N>
class Pool
{
public:
    enum { SIZE = N };
    Pool();
    Element<T>* allocate();
    void deallocate(Element<T>* element);
private:
    Element<T>* _empty;
    Element<T> _pool[N];
};

template<class T, uint32_t N>
Pool<T, N>::Pool()
{
    _empty = _pool;
    for(uint32_t i = 0; i < SIZE - 1; ++i)
        _pool[i].next = &_pool[i + 1];
    _pool[SIZE - 1].next = 0;
}

template<class T, uint32_t N>
Element<T>* Pool<T, N>::allocate()
{
    if(_empty == 0)
        return 0;
    Element<T>* ret = _empty;
    _empty = _empty->next;
    return ret;
}

template<class T, uint32_t N>
void Pool<T, N>::deallocate(Element<T>* element)
{
    element->next = _empty;
    _empty = element;
}
15 Jul, 2009, David Haley wrote in the 2nd comment:
Votes: 0
If you don't like that it exposes 'next', why don't you just make it private/protected?
15 Jul, 2009, Silenus wrote in the 3rd comment:
Votes: 0
Ah good point I guess I could friend the Pool class and privatize the next field. Thanks David :D.
15 Jul, 2009, Runter wrote in the 4th comment:
Votes: 0
Any reason not to use boost::pool?
15 Jul, 2009, Silenus wrote in the 5th comment:
Votes: 0
Thanks Runter. I have actually looked at that but haven't looked at the source code yet so I am not sure what it does.
15 Jul, 2009, Scandum wrote in the 6th comment:
Votes: 0
Silenus said:
I have been trying to design a pool allocator for allocating small objects (of the same type) and I was curious how best to do this.

Not fully on topic, but pool allocation stops debugging tools (like Valgrind) from functioning properly, and I think the speed gains are questionable as well.
15 Jul, 2009, David Haley wrote in the 7th comment:
Votes: 0
The usefulness depends very heavily on your allocation patterns, yes.
15 Jul, 2009, Runter wrote in the 8th comment:
Votes: 0
There are reasons other than speed to use pool allocation. However, as said already, it fits fairly niche conditions that common designs often avoid inherently.
22 Jul, 2009, Silenus wrote in the 9th comment:
Votes: 0
I hope I am not resurrecting a dead thread here, but my question is: shouldn't a pool allocator "in principle" be more efficient (in both time and space) than the general-purpose malloc allocator, if coded properly? I am not sure precisely how much you would save, but "in principle" you do not need to check and compare block sizes, since all the objects allocated are the same size. Or am I wrong about this?
22 Jul, 2009, David Haley wrote in the 10th comment:
Votes: 0
Well, you can answer your question like so: if it were always more efficient when "coded properly", and if it were always appropriate to use it, people would have coded it properly and used it as the standard allocator. The point is that there are some cases where yes, it is more appropriate to use a memory pool, and other cases where no, it is not as good. In particular, pool allocators are not appropriate when you have a bunch of objects of different sizes.
22 Jul, 2009, Silenus wrote in the 11th comment:
Votes: 0
Thanks David. I was talking about the same size case.
22 Jul, 2009, David Haley wrote in the 12th comment:
Votes: 0
It depends on the allocation pattern. If you are allocating a lot of things very quickly, alongside other things of different sizes, then it probably is better to use a pool for those things that are allocated quickly.

Variations on this pattern are kind of common, actually. For example, some dynamic vector implementations will allocate more than they actually need, for instance doubling their size with each growth. The idea is that the more you add, the more you are likely to add. So, if you have 100 elements, and you add another, it makes sense to allocate another 100 slots in preparation for the (likely) future additions. This is basically like a small, collection-local memory pool, with the added simplicity that elements are basically added sequentially and relatively rarely destroyed.
22 Jul, 2009, Runter wrote in the 13th comment:
Votes: 0
It can also make more sense when freeing data. Usually in these libraries you can dump the whole allocator in a single free (which makes sense if you have a lot of localized data). Like quests, containers, rooms, worlds, galaxies, inventories, whatever. It makes destroying them far simpler, provided it all must be destroyed.