Quote Originally Posted by Ultra
I haven't looked at Alimonster's code; it might be that his is much more efficient, but this has worked for me so far at least.
It looks like your code does pretty much the same thing, but with one problem. You mentioned the word "efficient," which has jogged my memory about what I missed in the previous post.

I don't know how 2D dynamic arrays are stored in memory. If the whole thing is laid out consecutively (rather than each row living in a different area of memory), as with normal static 2D arrays, then you only need one WriteBuffer/ReadBuffer for the entire map - sod all the loops.

[pascal]type
  TTile = record
    // stuff goes here
  end;

  T2DArray = array of array of TTile;

procedure SaveMe(const arr: T2DArray; const mem: TStream);
var
  SizeX, SizeY: Integer;
begin
  SizeX := Length(arr);
  if SizeX > 0 then
    SizeY := Length(arr[0])
  else
    SizeY := 0;

  // write the map size
  mem.WriteBuffer(SizeX, SizeOf(SizeX));
  mem.WriteBuffer(SizeY, SizeOf(SizeY));

  // ...and the *entire map in one go* - this relies on the whole
  // dynamic array being one contiguous block (see below)
  if (SizeX > 0) and (SizeY > 0) then
    mem.WriteBuffer(arr[0][0], SizeX * SizeY * SizeOf(arr[0][0]));
end;[/pascal]

...and the same deal for loading.
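
Something along these lines would do it (untested, the name is made up, and it relies on the same "whole array in one block" assumption as the save code):

[pascal]procedure LoadMe(var arr: T2DArray; const mem: TStream);
var
  SizeX, SizeY: Integer;
begin
  // read the map size back in the same order it was written
  mem.ReadBuffer(SizeX, SizeOf(SizeX));
  mem.ReadBuffer(SizeY, SizeOf(SizeY));

  // allocate the array, then pull the whole map in with one read
  SetLength(arr, SizeX, SizeY);
  if (SizeX > 0) and (SizeY > 0) then
    mem.ReadBuffer(arr[0][0], SizeX * SizeY * SizeOf(arr[0][0]));
end;[/pascal]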

What's important here: the hard disc is bloody slow, as are most I/O devices (think about when your computer thrashes the hard disc for virtual memory; that's much slower than using RAM). As a result, you want to do as few reads as you possibly can, reading in big chunks. This minimizes the slowness of the hard disc I/O. RAM is very quick - read the stuff into memory, and parse it *there*.

Your code, Ultra, seems to be doing a read/write per tile (and there will be a lot of 'em, remember!). Think about this: the tiles are tightly packed together in the map, which means you can write many of them at the same time. If you write Tile[y,x] with a size of SizeOf(Tile[y,x]), then you're writing one tile. If you do the same but with 2 * SizeOf(Tile[y,x]), then you're writing two tiles in one call (the expected one, plus the next one)! Remember that all the elements in the array are, by definition, the same type and hence the same size.
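
In code terms (Map, y and x here are just made-up placeholders to show the size arithmetic):

[pascal]// one tile:
mem.WriteBuffer(Map[y, x], SizeOf(Map[y, x]));

// two tiles in a single call - the one at [y, x] plus its neighbour at [y, x + 1]:
mem.WriteBuffer(Map[y, x], 2 * SizeOf(Map[y, x]));[/pascal]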

My previous code snippet was written that way because I'm not sure whether dynamic 2D arrays are sequential in memory. I'd guess they are, but without checking, I couldn't be certain. Given that, the least disc activity you can safely get away with is one write per row in the outer loop. I'm absolutely sure that each row of a 2D array is stored sequentially (anything else is unthinkable!). However, if my new assumption that the entire array is contiguous holds, then no loops are necessary at all.
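
Roughly what I mean by the per-row version - a sketch rather than my exact earlier code, and it only assumes each individual row is contiguous:

[pascal]procedure SaveMeByRow(const arr: T2DArray; const mem: TStream);
var
  SizeX, SizeY, y: Integer;
begin
  SizeX := Length(arr);
  if SizeX > 0 then
    SizeY := Length(arr[0])
  else
    SizeY := 0;

  // write the map size
  mem.WriteBuffer(SizeX, SizeOf(SizeX));
  mem.WriteBuffer(SizeY, SizeOf(SizeY));

  // one WriteBuffer per row - this only relies on each *row* being
  // one contiguous block, not the whole array
  if SizeY > 0 then
    for y := 0 to SizeX - 1 do
      mem.WriteBuffer(arr[y][0], SizeY * SizeOf(arr[y][0]));
end;[/pascal]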

If you're planning on doing lots of fiddling with a file, then read as much of it into memory as you can each time. For example, you could use a TMemoryStream and a TFileStream together: create your file stream, use the memory stream's CopyFrom to pull the file in, and then work on the *in-memory* version. This will give big speed-ups (big, big ones!).
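
A quick sketch of what I mean (needs Classes and SysUtils in your uses clause; the LoadMapFromFile name is made up, it calls the hypothetical LoadMe from above, and error handling is left out):

[pascal]procedure LoadMapFromFile(const FileName: string; var Map: T2DArray);
var
  FileStrm: TFileStream;
  MemStrm: TMemoryStream;
begin
  MemStrm := TMemoryStream.Create;
  try
    // slurp the whole file into RAM in one go...
    FileStrm := TFileStream.Create(FileName, fmOpenRead or fmShareDenyWrite);
    try
      MemStrm.CopyFrom(FileStrm, 0); // a count of 0 copies the entire stream
    finally
      FileStrm.Free;
    end;

    // ...then do all the parsing against the fast in-memory copy
    MemStrm.Position := 0;
    LoadMe(Map, MemStrm);
  finally
    MemStrm.Free;
  end;
end;[/pascal]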