Quote Originally Posted by phibermon
Take a parser that loaded lines of text - it might be asked to load 10 lines and then it might be asked to load 10 million lines - a fixed increment would have to be huge to make this close to efficient in terms of performance and it would then become inefficient in terms of storage for smaller numbers of lines.
You could combine both approaches. In fact, I have seen this somewhere in Delphi code; I'm not sure whether it was in TList, one of its descendants, or perhaps in TMemoryStream. But I do know I have seen an approach where the capacity initially grows exponentially up to a certain point, and from then on it grows in fixed increments.
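Something like the following captures that hybrid policy. This is only a rough sketch, not the actual TList/TMemoryStream code; the threshold and step values are made-up examples:

Code:
function NextCapacity(CurrentCapacity, RequiredCount: NativeInt): NativeInt;
const
  GrowThreshold = 64 * 1024 * 1024; // switch point between the two phases (example value)
  FixedStep     = 16 * 1024 * 1024; // increment used once past the threshold (example value)
begin
  Result := CurrentCapacity;
  if Result < 16 then
    Result := 16;                   // small initial allocation
  while Result < RequiredCount do
    if Result < GrowThreshold then
      Result := Result * 2          // exponential phase: double while still small
    else
      Result := Result + FixedStep; // linear phase: grow in fixed increments
end;

That way small lists stay small and cheap, while huge lists stop doubling (and stop wasting up to half the allocation) once they pass the threshold.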

if you *know* you're storing 1GB of data in memory then you *never* want to be resizing an array of that size anyway - that's just silly - you'd pick a more efficient access pattern in the first place - one that doesn't rely on continuous blocks across the entire set.
That is true. I was exaggerating a bit with my example to make the point more obvious.

In my projects I generally try to avoid any array that would exceed 100 MB in size. If needed, I split such arrays into multiple parts and wrap them in a class that still lets me access the data as if it were stored in a single array. This way I lose a bit of performance, but I avoid most of the problems that can arise from memory fragmentation (not being able to find a large enough contiguous block of memory).
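For what it's worth, such a wrapper can be fairly simple. The following is only an illustration of the idea under my own assumptions (the class name, element type and chunk size are all made up, not my actual code): the data lives in many small blocks, and a default indexed property translates a flat index into chunk + offset.

Code:
const
  ChunkSize = 1024 * 1024;                   // elements per chunk (example value)

type
  TChunkedDoubleArray = class
  private
    FChunks: array of array of Double;       // many small blocks instead of one huge one
    FCount: NativeInt;
    function GetItem(Index: NativeInt): Double;
    procedure SetItem(Index: NativeInt; const Value: Double);
  public
    procedure SetCount(NewCount: NativeInt);
    property Count: NativeInt read FCount;
    property Items[Index: NativeInt]: Double read GetItem write SetItem; default;
  end;

procedure TChunkedDoubleArray.SetCount(NewCount: NativeInt);
var
  ChunkCount, i: NativeInt;
begin
  ChunkCount := (NewCount + ChunkSize - 1) div ChunkSize;
  SetLength(FChunks, ChunkCount);
  for i := 0 to ChunkCount - 1 do
    SetLength(FChunks[i], ChunkSize);        // each allocation stays small
  FCount := NewCount;
end;

function TChunkedDoubleArray.GetItem(Index: NativeInt): Double;
begin
  Result := FChunks[Index div ChunkSize][Index mod ChunkSize];
end;

procedure TChunkedDoubleArray.SetItem(Index: NativeInt; const Value: Double);
begin
  FChunks[Index div ChunkSize][Index mod ChunkSize] := Value;
end;

From the caller's side nothing changes: A[I] still works as if it were one big array, it just costs one extra div/mod and indirection per access.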

Also, I'm starting to use classes and lists more and more. One reason is to avoid having large arrays. The second reason is much faster sorting, since you only move references around instead of whole records.
I even have a special list that provides multiple indexes and therefore lets me keep my data sorted by different criteria at all times.
Of course, this list is also designed so that whenever I add or remove data, all indexes are updated right away (sorted inserts). A stripped-down sketch of the idea follows below.
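In case anyone is curious what that can look like, here is a rough sketch (the TPerson record and all names are just examples, not my actual implementation, and the growth strategy is ignored for brevity): the records are stored only once, and each index is simply an array of positions kept sorted by a different key via binary-search inserts.

Code:
type
  TPerson = record
    Name: string;
    Age: Integer;
  end;

  TIntArray = array of Integer;

  TMultiIndexList = class
  private
    FItems: array of TPerson;   // the data itself, stored only once
    FByName: TIntArray;         // positions into FItems, kept sorted by Name
    FByAge: TIntArray;          // positions into FItems, kept sorted by Age
  public
    procedure Add(const Item: TPerson);
    function ByName(I: Integer): TPerson;   // I-th item in Name order
    function ByAge(I: Integer): TPerson;    // I-th item in Age order
  end;

procedure TMultiIndexList.Add(const Item: TPerson);
var
  Pos: Integer;

  // sorted (binary-search) insert of position Pos into one index array
  procedure InsertInto(var Index: TIntArray; ByAge: Boolean);
  var
    Lo, Hi, Mid, j: Integer;
    Less: Boolean;
  begin
    Lo := 0;
    Hi := Length(Index);
    while Lo < Hi do
    begin
      Mid := (Lo + Hi) div 2;
      if ByAge then
        Less := FItems[Index[Mid]].Age < Item.Age
      else
        Less := FItems[Index[Mid]].Name < Item.Name;
      if Less then Lo := Mid + 1 else Hi := Mid;
    end;
    SetLength(Index, Length(Index) + 1);
    for j := High(Index) downto Lo + 1 do
      Index[j] := Index[j - 1];             // shift to make room
    Index[Lo] := Pos;
  end;

begin
  Pos := Length(FItems);
  SetLength(FItems, Pos + 1);
  FItems[Pos] := Item;
  InsertInto(FByName, False);   // both indexes are updated right away,
  InsertInto(FByAge, True);     // so the data stays sorted by every criterion
end;

function TMultiIndexList.ByName(I: Integer): TPerson;
begin
  Result := FItems[FByName[I]];
end;

function TMultiIndexList.ByAge(I: Integer): TPerson;
begin
  Result := FItems[FByAge[I]];
end;

Removal works the same way in reverse: find the position in each index and delete it there, so no full re-sort is ever needed.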