
Thread: Dynamic in-game object creation and manipulation

  1. #1
    phibermon (PGD Staff / News Reporter)
    It's just an example - exponential array growth is a *good* solution. If you're using dynamic arrays in such a way, it means you have no idea of the maximum number of elements you'll need - ergo exponential growth is a good default mode of operation.

    Take a parser that loads lines of text - it might be asked to load 10 lines, or it might be asked to load 10 million. A fixed increment would have to be huge to come close to being efficient in terms of performance, and it would then become inefficient in terms of storage for smaller numbers of lines.

    Exponential growth is the best trade-off you can make between performance and storage in a number of scenarios - resizes stay rare enough that appends are cheap on average, and the wasted space is at most the amount you're actually using, so the detriment stays roughly balanced across both domains.

    In fact a better solution is to have an affordable but reasonably large initial size (say 4096 elements) and then grow exponentially from that - you lose a little bit of storage on lots of small collections, but you minimize the number of resizes, since with exponential growth they're all bunched at the lower end.
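
    As a quick Free Pascal / Delphi style sketch of that policy (the type and constant names are just made up for illustration):

    Code:
    const
      GROW_INITIAL = 4096; // affordable but reasonably large starting capacity

    type
      TGrowableIntArray = record
        Items: array of Integer; // capacity = Length(Items)
        Count: Integer;          // elements actually in use
      end;

    procedure Append(var A: TGrowableIntArray; Value: Integer);
    begin
      if A.Count = Length(A.Items) then
      begin
        // resizes are bunched at the lower end: 4096, 8192, 16384, ...
        if Length(A.Items) = 0 then
          SetLength(A.Items, GROW_INITIAL)
        else
          SetLength(A.Items, Length(A.Items) * 2);
      end;
      A.Items[A.Count] := Value;
      Inc(A.Count);
    end;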

    If you *know* you're storing 1GB of data in memory then you *never* want to be resizing an array of that size anyway - that's just silly - you'd pick a more efficient access pattern in the first place, one that doesn't rely on contiguous blocks across the entire set.

    Memory is plentiful, you can afford to waste some for the sake of performance.

  2. #2
    Quote Originally Posted by phibermon:
    Take a parser that loaded lines of text - it might be asked to load 10 lines and then it might be asked to load 10 million lines - a fixed increment would have to be huge to make this close to efficient in terms of performance and it would then become inefficient in terms of storage for smaller numbers of lines.
    You could combine both approaches. In fact I have seen this somewhere in Delphi code - I'm not sure whether it was in TList, one of its descendants, or perhaps in TMemoryStream - but I do know I have seen an approach where the size is initially increased exponentially up to some point, and from then on it is increased in fixed increments.
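
    Something along those lines could look like this - the switch-over point and the fixed step here are arbitrary numbers for illustration, not whatever TList or TMemoryStream actually use:

    Code:
    const
      GROW_SWITCH_LIMIT = 64 * 1024 * 1024; // stop doubling at 64M elements
      GROW_FIXED_STEP   = 16 * 1024 * 1024; // then grow in fixed 16M steps

    function NextCapacity(CurrentCapacity, NeededCapacity: Integer): Integer;
    begin
      if CurrentCapacity = 0 then
        Result := 4096
      else if CurrentCapacity < GROW_SWITCH_LIMIT then
        Result := CurrentCapacity * 2                // exponential phase
      else
        Result := CurrentCapacity + GROW_FIXED_STEP; // fixed-increment phase
      // make sure a big request is still satisfied in one step
      if Result < NeededCapacity then
        Result := NeededCapacity;
    end;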

    Quote Originally Posted by phibermon:
    if you *know* you're storing 1GB of data in memory then you *never* want to be resizing an array of that size anyway - that's just silly - you'd pick a more efficient access pattern in the first place - one that doesn't rely on continuous blocks across the entire set.
    That is true. I was exaggerating a bit with my example to make the point more obvious.

    In my projects I generally try to avoid any array that would exceed 100 MB in size. If needed, I split such arrays into multiple parts and wrap them in a class which still allows me to access their data as if it were stored in a single array. This way I lose a bit of performance, but I avoid most of the problems that can arise from memory fragmentation (not being able to find a big enough contiguous block of memory).
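
    Stripped down, such a wrapper looks roughly like this (the names are made up, and it uses Integer elements for simplicity):

    Code:
    const
      CHUNK_SIZE = 4 * 1024 * 1024; // elements per chunk, not bytes

    type
      TChunkedArray = class
      private
        FChunks: array of array of Integer; // each chunk allocated separately
        FCount: Integer;
        function GetItem(Index: Integer): Integer;
        procedure SetItem(Index: Integer; Value: Integer);
      public
        procedure Add(Value: Integer);
        property Items[Index: Integer]: Integer read GetItem write SetItem; default;
        property Count: Integer read FCount;
      end;

    function TChunkedArray.GetItem(Index: Integer): Integer;
    begin
      Result := FChunks[Index div CHUNK_SIZE][Index mod CHUNK_SIZE];
    end;

    procedure TChunkedArray.SetItem(Index: Integer; Value: Integer);
    begin
      FChunks[Index div CHUNK_SIZE][Index mod CHUNK_SIZE] := Value;
    end;

    procedure TChunkedArray.Add(Value: Integer);
    begin
      if FCount mod CHUNK_SIZE = 0 then
      begin
        // start a new chunk; only this allocation has to be contiguous
        SetLength(FChunks, Length(FChunks) + 1);
        SetLength(FChunks[High(FChunks)], CHUNK_SIZE);
      end;
      FChunks[FCount div CHUNK_SIZE][FCount mod CHUNK_SIZE] := Value;
      Inc(FCount);
    end;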

    Also I'm starting to use classes and lists more and more. One reason is to avoid having large arrays. The second reason is much faster data sorting, since you are moving less data around in the array.
    I even have a special list that provides multiple indexes and therefore allows me to have my data sorted by different criteria at all times.
    Of course, this list is also designed so that when I add new data to it or remove data from it, all indexes are updated right away (sorted inserts).
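
    In very stripped-down form the idea looks like this - TEntity, its fields and the linear scan are all made up for illustration; a real version would use binary search to find the insert position:

    Code:
    type
      TIntArray = array of Integer;

      TEntity = record
        Name: string;
        Score: Integer;
      end;

      TMultiIndexList = class
      private
        FItems: array of TEntity;
        FByName: TIntArray;  // positions into FItems, kept sorted by Name
        FByScore: TIntArray; // positions into FItems, kept sorted by Score
      public
        procedure Add(const Entity: TEntity);
        function ByName(I: Integer): TEntity;  // I-th item in name order
        function ByScore(I: Integer): TEntity; // I-th item in score order
      end;

    procedure InsertAt(var Arr: TIntArray; Pos, Value: Integer);
    var
      I: Integer;
    begin
      SetLength(Arr, Length(Arr) + 1);
      for I := High(Arr) downto Pos + 1 do
        Arr[I] := Arr[I - 1];
      Arr[Pos] := Value;
    end;

    procedure TMultiIndexList.Add(const Entity: TEntity);
    var
      ItemPos, P: Integer;
    begin
      // the records themselves are just appended; only the indexes stay sorted
      ItemPos := Length(FItems);
      SetLength(FItems, ItemPos + 1);
      FItems[ItemPos] := Entity;

      // sorted insert into the name index
      P := 0;
      while (P < Length(FByName)) and (FItems[FByName[P]].Name <= Entity.Name) do
        Inc(P);
      InsertAt(FByName, P, ItemPos);

      // sorted insert into the score index
      P := 0;
      while (P < Length(FByScore)) and (FItems[FByScore[P]].Score <= Entity.Score) do
        Inc(P);
      InsertAt(FByScore, P, ItemPos);
    end;

    function TMultiIndexList.ByName(I: Integer): TEntity;
    begin
      Result := FItems[FByName[I]];
    end;

    function TMultiIndexList.ByScore(I: Integer): TEntity;
    begin
      Result := FItems[FByScore[I]];
    end;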
