Just a theory, but how about this?

In your Header, put a variable to hold the file offset of the "index". As the header is a fixed size, you can update this on every addition/deletion. For each chunk, also store the size of that chunk; then at the end of the file, after the raw data, you store the index, which contains the number of chunks, the file offset of each chunk, and maybe also a flag to show whether it's active or not.
Your file structure would look like:
[Header] [Chunk0] .. [Chunkn] [Index]
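
Not part of the original idea, but just to make the layout concrete, here's a minimal sketch of what the header and index entries could look like in C++. The field names, types and fixed sizes are my own assumptions:

```cpp
#include <cstdint>
#include <vector>

#pragma pack(push, 1)
struct Header {
    uint32_t chunkCount;   // number of chunks in the file
    uint64_t indexOffset;  // file offset of the index (updated on every add/delete)
};

struct IndexEntry {
    uint64_t offset;  // file offset of the chunk's raw data
    uint32_t size;    // size of that chunk in bytes
    uint8_t  active;  // 1 = live, 0 = deleted (kept so it can be restored)
};
#pragma pack(pop)

// The index kept in memory: one entry per chunk, written at the end of the file.
using Index = std::vector<IndexEntry>;
```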

For adding (a code sketch follows this list):
  • Go to the index file offset stored in the Header.
  • Add the new chunk's size and file offset to the in-memory index.
  • Overwrite the old index with the new chunk's data.
  • Store the new index file offset in the Header.
  • Write the index to disk, straight after the new chunk.
  • Go to the beginning of the file and re-write the Header.
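
A rough, hypothetical version of those steps, assuming the Header/IndexEntry/Index types from the sketch above and a std::fstream opened in binary read/write mode:

```cpp
#include <fstream>

void addChunk(std::fstream& file, Header& header, Index& index,
              const char* data, uint32_t size)
{
    // The new chunk goes where the old index started, overwriting it.
    uint64_t chunkOffset = header.indexOffset;
    index.push_back(IndexEntry{chunkOffset, size, 1});

    file.seekp(static_cast<std::streamoff>(chunkOffset));
    file.write(data, size);

    // The index is re-written straight after the new chunk.
    header.indexOffset = chunkOffset + size;
    header.chunkCount  = static_cast<uint32_t>(index.size());
    file.write(reinterpret_cast<const char*>(index.data()),
               static_cast<std::streamsize>(index.size() * sizeof(IndexEntry)));

    // Finally, re-write the header at the start of the file.
    file.seekp(0);
    file.write(reinterpret_cast<const char*>(&header), sizeof(header));
    file.flush();
}
```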


For deleting (sketched below):
  • Change the "active" flag for that chunk in the in-memory index.
  • Go to the index file offset.
  • Re-write the index.
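
Same caveats as before; assuming the same types and stream, deletion is just flipping the flag and re-writing the index in place:

```cpp
#include <cstddef>
#include <fstream>

void deleteChunk(std::fstream& file, const Header& header, Index& index,
                 std::size_t chunkNumber)
{
    index[chunkNumber].active = 0;  // mark inactive instead of removing the data

    file.seekp(static_cast<std::streamoff>(header.indexOffset));
    file.write(reinterpret_cast<const char*>(index.data()),
               static_cast<std::streamsize>(index.size() * sizeof(IndexEntry)));
    file.flush();
}
```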


I know this would waste a lot of space, but you could write a procedure to delete all "inactive" chunks from the file (kinda like compacting), which only has to be done every so often (rough sketch below). It also gives you the option of restoring deleted data just by changing the flag back.
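
One way that compacting pass could look, again assuming the earlier types; it copies only the active chunks into a fresh file and rebuilds the offsets as it goes (names and error handling are made up):

```cpp
#include <fstream>
#include <string>
#include <vector>

void compact(std::fstream& in, Header header, const Index& index,
             const std::string& outPath)
{
    std::ofstream out(outPath, std::ios::binary | std::ios::trunc);

    // Write a placeholder header first; it gets re-written with the final
    // offsets once all active chunks have been copied.
    out.write(reinterpret_cast<const char*>(&header), sizeof(header));

    Index newIndex;
    uint64_t writeOffset = sizeof(Header);
    std::vector<char> buffer;

    for (const IndexEntry& entry : index) {
        if (!entry.active)
            continue;  // deleted chunks are dropped entirely
        buffer.resize(entry.size);
        in.seekg(static_cast<std::streamoff>(entry.offset));
        in.read(buffer.data(), entry.size);
        out.write(buffer.data(), entry.size);
        newIndex.push_back(IndexEntry{writeOffset, entry.size, 1});
        writeOffset += entry.size;
    }

    // The rebuilt index follows the last chunk; the header points at it.
    header.indexOffset = writeOffset;
    header.chunkCount  = static_cast<uint32_t>(newIndex.size());
    out.write(reinterpret_cast<const char*>(newIndex.data()),
              static_cast<std::streamsize>(newIndex.size() * sizeof(IndexEntry)));
    out.seekp(0);
    out.write(reinterpret_cast<const char*>(&header), sizeof(header));
}
```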

I don't know if this is how databases work, but if you're looking for speed, this is how I'd do it.

edit:
Just re-read the post (properly this time). The above should still work if the data shrinks, but not if it grows. How often will the data get larger? I thought databases were fast because they used fixed-size records.