We stream the results to the server directly, so while the client is
still sending results, we are already flushing them to disk.
To make things even more interesting, we aren’t using standard GZip
compression over the whole request. Instead, each batch is
compressed independently, which means we don’t take a dependency on
the compression routine’s internal buffering. It also means that we
get each batch much faster.
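To make the idea concrete, here is a minimal sketch of per-batch compression. The framing (a 4-byte length prefix per compressed batch) and all names are our own illustration, not RavenDB's actual wire format; the point is that each frame can be decompressed and flushed the moment it arrives, without waiting for the rest of the request.

```python
import gzip
import io

def compress_batches(batches):
    """Yield one independently gzip-compressed frame per batch."""
    for batch in batches:
        payload = "\n".join(batch).encode("utf-8")
        frame = gzip.compress(payload)
        # Length-prefix each frame so the reader knows where it ends.
        yield len(frame).to_bytes(4, "big") + frame

def read_batches(stream):
    """Decompress frames one at a time as they come off the wire."""
    while header := stream.read(4):
        size = int.from_bytes(header, "big")
        yield gzip.decompress(stream.read(size)).decode("utf-8").split("\n")

batches = [["doc-1", "doc-2"], ["doc-3"]]
wire = io.BytesIO(b"".join(compress_batches(batches)))
print(list(read_batches(wire)))  # round-trips the original batches
```

Because every frame is a complete gzip stream, the server never has to reason about whether the compressor has flushed its internal buffers before a batch boundary.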
There are, of course, rate limits built in, to protect ourselves
from flooding the buffers, but for the most part, you will have a
hard time hitting them.
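The post doesn't say how the rate limits are implemented; one common way to protect write buffers from a fast producer is a token bucket, sketched below. All names and numbers here are illustrative assumptions, not RavenDB internals.

```python
import time

class TokenBucket:
    """Hypothetical admission control: admit batches while tokens
    remain; a too-fast producer is throttled instead of flooding
    the server's write buffers."""

    def __init__(self, rate, capacity):
        self.rate = rate          # tokens replenished per second
        self.capacity = capacity  # maximum burst size
        self.tokens = capacity
        self.last = time.monotonic()

    def try_acquire(self, n=1):
        now = time.monotonic()
        # Refill proportionally to elapsed time, capped at capacity.
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= n:
            self.tokens -= n
            return True
        return False

bucket = TokenBucket(rate=100, capacity=10)
admitted = sum(bucket.try_acquire() for _ in range(25))
print(admitted)  # roughly the burst capacity; the rest are throttled
```

A well-tuned bucket is invisible to ordinary clients, which matches the claim that you will rarely hit the limits in practice.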
Bulk inserts and data import are two interesting topics in the world of NoSQL databases, where there are no ACID guarantees. What is the state of the database if the data stream is cut midway? What is the state of the database if the import fails midway? What is the state of the database if some insert/update operations fail? I’m not aware of any good answers to these questions.
Original title and link: RavenDB Bulk Inserts: Implementation Details ( ©myNoSQL)