Migrating Live Memcached Servers

Ian Winter describes the possible approaches for migrating their memcached server farm:

  1. Big Bang. Update all the configuration files and simply push that code live. This takes a huge hit on performance, as all the new instances are cold; it is, however, the quickest option.
  2. Migration instance by instance. Each instance would be moved and the relevant configuration files pushed live, so that only one (or a handful of) instances are “cold” at any moment. This limits the impact, but it also takes time, as only one instance can be migrated at a time.
  3. Write to both. The application code could be altered to send all writes to both the existing set of instances and the new one (see the dual-write sketch after this list). This isn’t great, as we’d need a full QA cycle to validate that the code works. It’s also something we’d have to implement and then pull back out down the line.
  4. Mirror traffic. Similar to the above, but this time lower down the stack: essentially, duplicate all the TCP-level traffic so the new set warms in parallel and stays in sync with the existing one, meaning new writes and deletes occur on both the existing and new sets (a minimal mirroring proxy is sketched below).
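
To make approach 3 concrete, here is a minimal dual-write sketch. It assumes the python-memcached client and hypothetical pool addresses; it is not the actual code from the article.

```python
# Dual-write sketch for approach 3: reads are served from the old
# (warm) pool, while writes and deletes go to both pools so the new
# pool warms up and stays in sync. Hosts below are hypothetical.
import memcache

OLD_POOL = ["10.0.0.1:11211", "10.0.0.2:11211"]  # existing farm
NEW_POOL = ["10.0.1.1:11211", "10.0.1.2:11211"]  # new farm

class DualWriteCache:
    def __init__(self, old_servers, new_servers):
        self.old = memcache.Client(old_servers)
        self.new = memcache.Client(new_servers)

    def get(self, key):
        # Reads come only from the old pool, which holds the full data set.
        return self.old.get(key)

    def set(self, key, value, time=0):
        # Every write is mirrored so both pools end up identical.
        self.old.set(key, value, time)
        self.new.set(key, value, time)

    def delete(self, key):
        # Deletes must hit both pools, or the new one would serve stale data.
        self.old.delete(key)
        self.new.delete(key)

cache = DualWriteCache(OLD_POOL, NEW_POOL)
cache.set("session:42", "payload")
print(cache.get("session:42"))
```

The drawback the post calls out is exactly this: the wrapper has to be written, QA’d, and later removed once the old pool is retired.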
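
Approach 4 can be illustrated with a small TCP mirroring proxy, again only a sketch under assumed addresses (a production setup would more likely use dedicated wire-level tooling): every byte the client sends is relayed to the existing server, whose replies go back to the client, and copied to the new server, whose replies are discarded.

```python
# TCP mirroring sketch for approach 4 (Python 3.8+, asyncio).
# Clients connect to this proxy instead of memcached directly.
# All addresses below are hypothetical.
import asyncio

LISTEN = ("0.0.0.0", 11211)    # where clients connect
PRIMARY = ("10.0.0.1", 11211)  # existing memcached
MIRROR = ("10.0.1.1", 11211)   # new memcached being warmed

async def handle(client_r, client_w):
    prim_r, prim_w = await asyncio.open_connection(*PRIMARY)
    mirr_r, mirr_w = await asyncio.open_connection(*MIRROR)

    async def client_to_servers():
        # Duplicate every request to both servers.
        try:
            while data := await client_r.read(65536):
                prim_w.write(data)
                mirr_w.write(data)
                await prim_w.drain()
                await mirr_w.drain()
        finally:
            prim_w.close()
            mirr_w.close()

    async def primary_to_client():
        # Only the primary's replies are returned to the client.
        try:
            while data := await prim_r.read(65536):
                client_w.write(data)
                await client_w.drain()
        finally:
            client_w.close()

    async def drain_mirror():
        # Discard the mirror's replies so its socket buffer never fills.
        while await mirr_r.read(65536):
            pass

    await asyncio.gather(client_to_servers(), primary_to_client(),
                         drain_mirror())

async def main():
    server = await asyncio.start_server(handle, *LISTEN)
    async with server:
        await server.serve_forever()

if __name__ == "__main__":
    asyncio.run(main())
```

Gets mirrored to the cold set simply miss there, while sets and deletes keep it in sync, which is why the new instances warm up without any application changes.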

Can you tell which approach they used?

PS: For the first two approaches I’d use different names: Stop the World and Rolling Upgrade, respectively.

Original title and link: Migrating Live Memcached Servers (NoSQL database©myNoSQL)

via: http://globaldev.co.uk/2013/01/migrating-memcached/