bluemoon wrote:This is what I call a flat, auto-sized array, and here are a few improvements/suggestions for you:
1. you may pre-allocate in chunks to avoid frequent allocations
2. you may break the allocation down into pieces to reduce stress from realloc (imagine you expand from 10000 to 10001 elements)
This is what my array looks like. And guess what, this is one of the classes I have never used in any scenario; in every case I used a linked list, a map (hash table), a bi-directional map, or just Boost.

Code:
template <class T>
class libprefix_Array {
public:
    libprefix_Array();
    ~libprefix_Array();          // not suppose to be inherited, no virtual
    void clear();
    T*  get(int index);
    int count() { return counter; }

protected:
    static const int kChunkSize = 16;
    int item_node_count, item_node_allocated, counter;
    T** item;
};

template <class T>
T* libprefix_Array<T>::get(int index)
{
    int chunk = index / kChunkSize;
    if (chunk >= item_node_allocated) {
        // expand item pointer table
    }
    if (item[chunk] == NULL) {
        // allocate payload
    }
    return &item[chunk][index % kChunkSize];
}
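Just to make the two stub comments concrete, here is a rough sketch of how I'd imagine filling them in. This is my guess, not bluemoon's actual code; it assumes the pointer table is malloc'd/NULL in the constructor, that T is a plain (POD) type since calloc skips constructors, and it leaves out error checking.

Code:
// sketch only; assumes <cstdlib> for realloc/calloc
template <class T>
T* libprefix_Array<T>::get(int index)
{
    int chunk = index / kChunkSize;

    if (chunk >= item_node_allocated) {
        // Grow only the table of chunk pointers, with a little headroom so we
        // don't realloc on every new chunk. No element data is copied here.
        int new_size = chunk + kChunkSize;
        item = (T**)realloc(item, new_size * sizeof(T*));
        for (int i = item_node_allocated; i < new_size; i++)
            item[i] = NULL;                       // mark as not yet allocated
        item_node_allocated = new_size;
    }

    if (item[chunk] == NULL) {
        // Allocate one kChunkSize-element payload chunk on demand.
        item[chunk] = (T*)calloc(kChunkSize, sizeof(T));
        item_node_count++;
    }

    return &item[chunk][index % kChunkSize];
}

The nice property is exactly bluemoon's point: growing from 10000 to 10001 elements only touches the small pointer table and one new 16-element chunk; the existing 10000 elements never move.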
There's a difference, though: my code is C and yours is C++.
Obviously, it was kinda stupid for me to allocate so much, but at the time I was writing it I was more concerned with size than speed (no idea why).
EDIT:
Then again, I doubt people are going to be constantly resizing lists. Obviously, this isn't suited for something like memory management. My intention when writing this was to use it to attach events to the components in my GUI.
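For what it's worth, the kind of thing I had in mind was roughly this. All the names here (Event, attach_event, the component id scheme) are made up just for illustration, and I'm reusing bluemoon's class rather than my own C list:

Code:
// hypothetical usage sketch, not my real code
struct Event {
    int   type;                   // e.g. click, key press
    void (*handler)(void *ctx);   // callback to invoke
    void *ctx;                    // the GUI component it is attached to
};

libprefix_Array<Event> g_events;  // one slot per component, grown a chunk at a time

void attach_event(int component_id, int type, void (*fn)(void *), void *component)
{
    Event *e   = g_events.get(component_id); // allocates the chunk on first use
    e->type    = type;
    e->handler = fn;
    e->ctx     = component;
}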