Thread: CLASS vs. OBJECT and memory management

    PGD Staff / News Reporter phibermon
    Join Date: Sep 2009
    Location: England
    Posts: 524
    Yes - pre-allocate storage for data, pre-construct object instances, and then just fetch them from the pools. You can grow the pools as needed and build doubly linked lists for quick insertion and deletion if required.
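    A minimal sketch of that pooling idea in Python (the `Particle` class, pool size, and growth step are all hypothetical; the post itself doesn't specify them):

```python
class Particle:
    """Hypothetical pooled object; anything with reset-able state works."""
    __slots__ = ("x", "y", "alive")

    def __init__(self):
        self.x = 0.0
        self.y = 0.0
        self.alive = False


class ObjectPool:
    """Pre-constructs instances up front so the hot path never allocates;
    grows in chunks if the pool runs dry."""

    def __init__(self, factory, initial=64, grow_by=16):
        self._factory = factory
        self._grow_by = grow_by
        self._free = [factory() for _ in range(initial)]

    def acquire(self):
        if not self._free:
            # Grow as needed rather than failing.
            self._free.extend(self._factory() for _ in range(self._grow_by))
        obj = self._free.pop()
        obj.alive = True
        return obj

    def release(self, obj):
        # Flag unused and return to the free list; nothing is deallocated.
        obj.alive = False
        self._free.append(obj)
```

    Releasing an object just returns it to the free list, so a later `acquire` hands the same instance back without touching the allocator.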

    I usually sync my linked-list operations with a hash table or spatial-partitioning structure where appropriate; it's much faster to keep multiple structures in sync per operation than it is to keep them in sync by scanning the whole collection.
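    The per-operation sync might look like this sketch (names like `SyncedCollection` are my own; a plain list stands in for the doubly linked list, and a dict for the hash table):

```python
class SyncedCollection:
    """Maintains a hash index and an ordered sequence together: every
    insert/remove updates both, so neither ever needs a full rescan."""

    def __init__(self):
        self._index = {}   # key -> object, O(1) lookup
        self._order = []   # stand-in for the doubly linked list

    def insert(self, key, obj):
        self._index[key] = obj
        self._order.append(obj)

    def remove(self, key):
        obj = self._index.pop(key)
        # With a real linked list you'd store the node in the index
        # and unlink it here in O(1); list.remove is the simple stand-in.
        self._order.remove(obj)
        return obj

    def find(self, key):
        return self._index.get(key)
```

    The point is that each mutation pays a small constant cost to keep both views current, instead of one view occasionally paying an O(n) rescan.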

    I usually add a thread-safe reference-count operation that flags the data/object as unused, rather than handing the job over to reference-counted interfaces. In games we only really care about getting images, geometry, sounds and so on out of memory; compared to the size of that stuff, we don't care about instance sizes or constant data pools that rarely grow past a few dozen MB. Leave dynamic allocation for the really big data.
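    A sketch of that flag-instead-of-free idea, assuming a simple lock-guarded counter (the `PooledRef` name and the `unused` flag are illustrative, not the post's actual types):

```python
import threading


class PooledRef:
    """Thread-safe reference count that flags its payload 'unused' at
    zero instead of freeing it; a pool can then recycle the slot."""

    def __init__(self):
        self._lock = threading.Lock()
        self._count = 0
        self.unused = True

    def retain(self):
        with self._lock:
            self._count += 1
            self.unused = False

    def release(self):
        with self._lock:
            self._count -= 1
            if self._count == 0:
                # No deallocation: just mark the slot reclaimable.
                self.unused = True
```

    Unlike interface-based reference counting, hitting zero here deallocates nothing; the pool sweeps up `unused` slots whenever it likes.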

    For data/objects that are written and read from multiple threads, I wrap the work in a task that gets passed along in chains across priority queues in each thread, ensuring single-threaded access and a defined operation order. Combined with lock-free queues, this means I don't have to maintain slow locks on individual bits of data or objects; I just make it impossible for them to be accessed at the same time. (The same goes for rendering: I don't lock any data or structures, I just pause the threads processing queues that might touch the data the render thread needs, the so-called 'render window'.)
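    The ownership-by-handoff idea can be sketched like this (queue names, the task dict layout, and the sentinel shutdown are all my assumptions; Python's `queue.PriorityQueue` stands in for the lock-free queues):

```python
import queue
import threading


def worker(inbox, results):
    """Drain a per-thread priority queue. A task's data is only ever
    touched by the thread whose queue currently holds it, so no locks
    are needed on the data itself."""
    while True:
        priority, task = inbox.get()
        if task is None:          # shutdown sentinel
            break
        step = task["steps"].pop(0)
        task["value"] = step(task["value"])
        if task["steps"]:
            # Hand ownership to the next thread in the chain.
            task["chain"].pop(0).put((priority, task))
        else:
            results.put(task["value"])


q1, q2 = queue.PriorityQueue(), queue.PriorityQueue()
results = queue.Queue()
t1 = threading.Thread(target=worker, args=(q1, results))
t2 = threading.Thread(target=worker, args=(q2, results))
t1.start()
t2.start()

# One task: double on thread 1, then add one on thread 2.
q1.put((0, {"value": 5,
            "steps": [lambda v: v * 2, lambda v: v + 1],
            "chain": [q2]}))
result = results.get()

# Shut both workers down once the result is back.
q1.put((1, None))
q2.put((1, None))
t1.join()
t2.join()
```

    Because the task moves from queue to queue, exactly one thread can see its data at any moment, and the chain order fixes the operation order without any per-object locking.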

    Tasks can be started and stopped at any point to enforce a maximum task time per cycle, or flagged as frame-critical to ensure the task completes in time for rendering to finish before the start of the next v-synced frame. (So it's like a crude OS scheduler, but instead of sharing time on a processor, I'm sharing the time available per frame: the frame period, minus the time to render the frame, plus jitter overhead.)
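    A toy version of that frame-budget scheduling, assuming tasks are resumable generators (the `run_tasks`/`counter` names and the generator approach are my own way of modelling "stopped at any point"):

```python
import time


def run_tasks(tasks, budget_s):
    """Run resumable tasks (generators) against a per-frame time budget.
    Frame-critical tasks always run to completion; the rest are parked
    mid-work when the budget runs out and resumed next frame."""
    deadline = time.perf_counter() + budget_s
    carried_over = []
    for task, critical in tasks:
        try:
            while True:
                next(task)   # advance one slice of work
                if not critical and time.perf_counter() >= deadline:
                    carried_over.append((task, critical))
                    break
        except StopIteration:
            pass             # task finished within this frame
    return carried_over


def counter(n, out):
    """Toy task: does n slices of 'work', yielding between slices."""
    for i in range(n):
        out.append(i)
        yield


done = []
leftover = run_tasks([(counter(3, done), True)], budget_s=0.0)
```

    A frame-critical task finishes even with a zero budget, while a non-critical one is suspended after its first slice and handed back for the next frame.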

    This is the best way I know of handling time wasted idling in the swap operation. Disabling v-sync is a stupid thing to do. You can't show more frames than the refresh rate of the screen - you only have to minimise time spent in the swap operation and time things carefully so you don't run over into the next window and cause an uneven framerate.

    You should be measuring performance by the time it takes to render each frame, not by how many frames per second you can push through - FPS doesn't tell you anything useful at all except whether one computer is faster or slower than another on a given static task. Pipelines are too complex to rely on FPS as an indicator during optimisation; high-precision timers on actual operations are best. You can use the GL timer API to get true frame render times rather than putting a flag on either side of the flush and swap - the card may have already started by then, so you really want GL timers. (I'm sure DirectX and Vulkan have something similar)
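    As a CPU-side sketch of measuring frame times rather than FPS (the workload lambda is a hypothetical stand-in for a frame's render calls; for true GPU-side times you'd use the GL timer queries mentioned above, e.g. `GL_TIME_ELAPSED` with `glBeginQuery`/`glEndQuery`):

```python
import time


def measure_frames(render, n_frames):
    """Collect per-frame times with a high-resolution timer. The
    distribution (worst case, spikes) is what matters for a smooth
    framerate; an FPS average hides exactly those spikes."""
    times = []
    for _ in range(n_frames):
        t0 = time.perf_counter()
        render()                               # the work being measured
        times.append(time.perf_counter() - t0)
    return times


# Hypothetical workload standing in for one frame's rendering.
frame_times = measure_frames(lambda: sum(range(10000)), 100)
worst_ms = max(frame_times) * 1000.0
```

    Looking at `worst_ms` (or a high percentile) tells you whether you'll blow a v-sync window; an FPS average over the same run would not.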

    Sorry, I digress. When don't I?
    Last edited by phibermon; 12-07-2017 at 01:29 AM.
    When the moon hits your eye like a big pizza pie - that's an extinction level impact event.
