


Using MySQL as NoSQL: A Story for exceeding 750k qps

How many times do you need to run PK lookups per second? […] These are “SQL” overhead. It’s obvious that the performance drops were caused mostly by the SQL layer, not by the “InnoDB (storage)” layer. MySQL has to do a lot of things like the below, while memcached/NoSQL do not need to:

  • Parsing SQL statements
  • Opening, locking tables
  • Making SQL execution plans
  • Unlocking, closing tables

MySQL also has to do a lot of concurrency control.
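The per-query overhead listed above is easy to see in miniature. The sketch below is my own illustration, not from the article: it uses Python's stdlib `sqlite3` as a stand-in for a SQL engine and a plain dict as a stand-in for a memcached-style key-value store, timing the same primary-key lookups through each path.

```python
import sqlite3
import time

# An in-memory SQL table with a primary key, standing in for an InnoDB table.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE user (id INTEGER PRIMARY KEY, name TEXT)")
conn.executemany("INSERT INTO user VALUES (?, ?)",
                 [(i, f"user{i}") for i in range(10000)])

# A plain dict stands in for a memcached-style key-value store.
kv = {i: f"user{i}" for i in range(10000)}

N = 50000

# Full SQL path: every lookup pays for statement handling, execution
# planning, and result marshalling.
t0 = time.perf_counter()
for i in range(N):
    conn.execute("SELECT name FROM user WHERE id = ?", (i % 10000,)).fetchone()
sql_time = time.perf_counter() - t0

# Direct key access: no SQL layer at all.
t0 = time.perf_counter()
for i in range(N):
    kv[i % 10000]
kv_time = time.perf_counter() - t0

print(f"SQL path: {sql_time:.3f}s, key-value path: {kv_time:.3f}s")
```

The absolute numbers are meaningless for MySQL, but the gap between the two paths is the same shape of overhead the article measured: both end in the same B-tree-style lookup, and the difference is everything layered on top of it.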

The story has been out for a couple of weeks already, so I won’t go into the details. But I’d like to add a couple of comments:

  • existing RDBMS storage engines are, most of the time, very well designed and battle-tested
  • some NoSQL databases have realized that and allow plugging in such storage engines in their systems:
    • Project Voldemort supports Berkeley DB (and MySQL, though I’m not sure it bypasses the SQL layer)
    • Riak comes with Innostore, an InnoDB-based storage engine
  • many of the findings in this article sound very close to the rationale behind VoltDB, including its pre-compiled, cluster-deployed stored procedures
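The parse-once, execute-many idea behind those stored procedures can be sketched in the same stdlib `sqlite3` terms (again my own illustration, not VoltDB's or the article's code): a distinct SQL string on every call defeats the engine's statement cache and forces a fresh parse/plan, while one parameterized statement is compiled once and replayed.

```python
import sqlite3
import time

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (id INTEGER PRIMARY KEY, v TEXT)")
conn.executemany("INSERT INTO t VALUES (?, ?)",
                 [(i, str(i)) for i in range(1000)])

N = 50000

# Re-parse every time: each SQL string is distinct, so the statement
# cache never hits and the parser/planner run on every call.
t0 = time.perf_counter()
for i in range(N):
    conn.execute(f"SELECT v FROM t WHERE id = {i % 1000}").fetchone()
reparse = time.perf_counter() - t0

# Parse once, execute many: a single parameterized statement is compiled
# once and reused -- the stored-procedure / prepared-statement idea.
t0 = time.perf_counter()
for i in range(N):
    conn.execute("SELECT v FROM t WHERE id = ?", (i % 1000,)).fetchone()
prepared = time.perf_counter() - t0

print(f"re-parsed each call: {reparse:.3f}s, prepared once: {prepared:.3f}s")
```

Pre-compiling and pre-deploying the procedures just moves that one-time parse/plan cost out of the request path entirely, which is the same overhead HandlerSocket-style access avoids.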

Original title and link: Using MySQL as NoSQL: A Story for exceeding 750k qps (NoSQL databases © myNoSQL)