In a library I'm starting on I have the requirement of handling fairly large buffers passed frequently from the program to the library. The question of methodology is this: would it not be more efficient for the program to pass ownership of the buffer to the library, rather than use the standard method of copying every single buffer? All this dual-buffer stuff seems frightfully wasteful to me.

To illustrate, consider a network library through which you pump about 10 MB/sec (bytes, not bits), i.e. roughly what you would get from a typical 100 Mbps network card. The normal method for a non-blocking call is that you populate your buffer with data and call the library, which creates its own buffer, copies your buffer into it, and then returns to you, at which point you can reuse or discard your buffer.
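
To make the copy-per-call version concrete, here is a minimal Delphi-style sketch. The TNetSender class, its FQueue, and the Send signature are all hypothetical, just to illustrate the call pattern, not taken from any real library:

    unit NetSenderSketch;

    interface

    uses
      System.SysUtils, System.Generics.Collections;

    type
      TNetSender = class
      private
        FQueue: TQueue<TBytes>;  // buffers waiting to be written to the socket
      public
        constructor Create;
        destructor Destroy; override;
        procedure Send(const Buf: TBytes);
      end;

    implementation

    constructor TNetSender.Create;
    begin
      inherited Create;
      FQueue := TQueue<TBytes>.Create;
    end;

    destructor TNetSender.Destroy;
    begin
      FQueue.Free;
      inherited;
    end;

    procedure TNetSender.Send(const Buf: TBytes);
    var
      LibBuf: TBytes;
    begin
      // The library allocates its own buffer and copies the caller's data into
      // it, so the caller may reuse or discard Buf as soon as Send returns.
      SetLength(LibBuf, Length(Buf));
      if Length(Buf) > 0 then
        Move(Buf[0], LibBuf[0], Length(Buf));
      FQueue.Enqueue(LibBuf);
    end;

    end.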

This is very simple, very easy, and very encapsulated. But it is not efficient.

The alternative is that the program calls the library and passes ownership of the existing buffer to the library, which is then free to use it and discard it when it has finished. Of course the caller must not touch the buffer after calling the library, so setting buffer := nil; immediately after the call is probably a good idea.
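
For contrast, a sketch of the ownership-passing variant, assuming the same hypothetical TNetSender above with one extra method added to its declaration. The var parameter lets the library take the reference and clear the caller's variable in one step; with a managed type like TBytes this is just a reference handoff, while with a raw GetMem'd pointer the library would also have to be the one calling FreeMem:

    procedure TNetSender.SendOwned(var Buf: TBytes);
    begin
      // Take the caller's reference as-is: no second buffer, no copy of the data.
      FQueue.Enqueue(Buf);
      // Clear the caller's variable so it can no longer touch the buffer;
      // the library now holds the only reference and drops it after the send.
      Buf := nil;
    end;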

This is not without drawbacks. The library and the program have to share the same memory manager. That is not an issue for an embedded library whose source you pull into your program, but it is a factor when you use it as a linked library. Secondly, you can't pool your buffers, so there is the overhead of repeated buffer creation and destruction. Though not critical, this can increase memory fragmentation.

Your thoughts?