I initially bookmarked this article because it mentioned the same strategy for migrating data with zero downtime.
So during the update period, there could be writes sent to the old service, which is writing to the old, single MongoDB cluster. After updating, there’s a period of time where both servers are writing before the second machine…
But then I realized that this is all about MongoDB. Tony Tam of Reverb:
To partition your data with the standard MongoDB toolset, significant downtime is unavoidable. You’ll either need to write a bunch of application logic, or get creative with some third-party tools. This is a problem that we’ve hit at Reverb more than once, and these are the exact same tools and techniques that we used to migrate across datacenters (see From the Cloud and Back).
Isn’t MongoDB’s autosharding supposed to address exactly this scenario? What am I missing?
Original title and link: Partitioning MongoDB Data on the Fly