NoSQL benchmark: All content tagged as NoSQL benchmark in NoSQL databases and polyglot persistence

What HBase Learned From the Hypertable vs HBase Benchmark

Every decent benchmark can reveal not only performance or stability problems, but also subtler issues: little-known or undocumented options, common misconfigurations, or plain misunderstandings. Sometimes it reveals scenarios a product hasn't considered before, or for which it has different solutions.

So even if I don't agree with the purpose of the Hypertable vs HBase benchmark, I think it is well designed and shows no intention to favor one product over the other.

I went back to two long-time HBase committers and users, Michael Stack and Jean-Daniel Cryans, to find out what the HBase community could learn from this benchmark.

What can be learned from the Hypertable vs HBase benchmark, from the HBase perspective?

Michael Stack: That we need to work on our usability; even a smart fellow like Doug Judd can get it really wrong.

We haven’t done his sustained upload in a good while. Our defaults need some tweaking.

We need to do more documentation around JVM tuning; you'd think fellas would have grokked by now that big Java apps need their JVMs tweaked, but it looks like the message still hasn't gotten out there.

That we need a well-funded PR dept. to work on responses to the likes of Doug’s article (well-funded because Doug claims he spent four months on his comparison).

Jean-Daniel Cryans: I already opened a few JIRAs after running HT's test on a cluster I have here with almost the same hardware and node count; they're mostly about usability and performance for that type of use case:

  • Automagically tweak global memstore and block cache sizes based on workload

    Hypertable does a neat thing where it changes the size given to the CellCache (our MemStores) and Block Cache based on the workload. If you need an image, scroll to the bottom of this link:

    Hypertable adaptive memory allocation

  • Soft limit for eager region splitting of young tables

    Coming out of HBASE-2375, we need new functionality much like Hypertable's, where we would have a lower split size for new tables that grows up to a certain hard limit. This helps usability in different ways:

    • With that we can set the default split size much higher and users will still get good data distribution
    • No more messing with forced splits
    • No need to pre-split your table to get good out-of-the-box performance

    The way Doug Judd described it, they start with a low value and then double it on every split. For example, if we started with a soft size of 32MB and a hard size of 2GB, you wouldn't hit the ceiling until the table has 64 regions (see the sketch after this list).

    On the implementation side, we could add a new qualifier in .META. that holds that soft limit. When the field doesn't exist, the feature doesn't kick in. It would be written by the region servers after a split and by the master when the table is created with one region.

  • Consider splitting after flushing

    Spawning from HBASE-2375, I saw that it was much more efficient compaction-wise to check whether we can split right after flushing. Much like the ideas Jon spelled out in that jira's description, the window is smaller because you don't have to compact, then split right away, only to compact again when the daughters open.
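
To make the doubling scheme concrete, here is a toy sketch of the soft/hard limit logic; the names and structure are illustrative, not HBase code:

```java
// Toy sketch of the soft/hard split-size scheme described above (illustrative,
// not HBase API): the split threshold starts at a soft limit and doubles with
// each split until it reaches the hard ceiling.
public class SplitThreshold {
    static final long SOFT_LIMIT = 32L << 20; // 32MB starting threshold
    static final long HARD_LIMIT = 2L << 30;  // 2GB ceiling

    // splits = how many times the table's regions have split so far
    static long thresholdAfter(int splits) {
        return Math.min(SOFT_LIMIT << splits, HARD_LIMIT);
    }

    public static void main(String[] args) {
        // 32MB doubles up to 2GB after 6 splits, i.e. once the table
        // has 2^6 = 64 regions, matching the example above.
        for (int splits = 0; splits <= 7; splits++) {
            System.out.printf("after %d splits: %d MB%n",
                    splits, thresholdAfter(splits) >> 20);
        }
    }
}
```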

If someone is faced with similar scenarios, are there workarounds or different solutions?

Michael Stack: There are tunings of HBase configs over in our reference guide for the sustained upload, both in HBase and in the JVM.

Then there is our bulk load facility, which bypasses this scenario altogether; it's what we'd encourage folks to use because it's 10x to 100x faster at getting your data in there.
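
For reference, the client side of that bulk load path is small. A minimal sketch against the 0.90-era API, with hypothetical table and directory names (the HFiles are typically produced beforehand by a MapReduce job using HFileOutputFormat):

```java
// Hedged sketch of HBase's bulk load client (0.90-era API; the table name and
// HFile directory are hypothetical). This moves pre-built HFiles into the
// table instead of pushing every row through the normal write path.
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.HTable;
import org.apache.hadoop.hbase.mapreduce.LoadIncrementalHFiles;

public class BulkLoadSketch {
    public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        HTable table = new HTable(conf, "mytable");        // hypothetical table
        LoadIncrementalHFiles loader = new LoadIncrementalHFiles(conf);
        loader.doBulkLoad(new Path("/tmp/hfiles"), table); // prepared HFiles
    }
}
```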

Jean-Daniel Cryans: You can import 5TB into HBase with sane configs; I've done it a few times already since I started using his test. The second time he ran his test he fixed mslab but still kept the crazy-ass other settings, like 80% of the memory dedicated to memstores. My testing also shows that you need to keep the eden space under control; 64MB seems a good value (he didn't set one in his test, and the first time I ran mine without setting it I got the concurrent mode failure too).

The answer he gave this week to Todd's email on the Hadoop mailing list is about a constant stream of updates, and that's what he's trying to test. Considering that the test imports 5TB in ~16h (on my cluster), you run out of disk space in about 3 days. I seriously don't know what he's aiming for here.

Quoting him: “Bulk loading isn’t always an option when data is streaming in from a live application. Many big data use cases involve massive amounts of smaller items in the size range of 10-100 bytes, for example URLs, sensor readings, genome sequence reads, network traffic logs, etc.”

What are the most common places to look when improving the performance of an HBase cluster?

Michael Stack: This is what we point folks at when they ask questions like the above: HBase Performance Tuning

If that chapter doesn't have it, it's a bug and we need to fix up our documentation.

Jean-Daniel Cryans: What Stack said. Also, if you run into GC issues like he did, then you're doing it wrong.

Michael Stack also pointed me to a comment by Andrew Purtell (nb: you need to be logged in on LinkedIn and a member of the group to see it):

I think HBase should find all of this challenging and flattering. Challenging because we know how we can do better along the dimensions of your testing and you are kicking us pretty hard. Flattering because by inference we seem to be worth kicking.

But this misses the point, and reduces what should be a serious discussion of the tradeoffs between Java and C++ to a caricature. Furthermore, nobody sells HBase. (Not in the Hypertable or DataStax sense. Commercial companies bundle HBase but they do so by including a totally free and zero cost software distribution.) Instead it is voluntarily chosen for hundreds of large installations all over the world, some of them built and run by the smartest guys I have ever encountered in my life. Hypertable would have us believe we are all making foolish choices. While it is true that we all on some level have to deal with the Java heap, only Hypertable seems to not be able to make it work. I find that unsurprising. After all, until you can find some way to break it, you don't have any kind of marketing story.

This reminded me of a quote from Jonathan Ellis's Dealing With JVM Limitations in Apache Cassandra:

Cliff Click: Many concurrent algorithms are very easy to write with a GC and totally hard (to downright impossible) using explicit free.

As I was expecting, quite a few good things will come out of this benchmark, both for long-time HBase users and for new adopters.

Original title and link: What HBase Learned From the Hypertable vs HBase Benchmark (NoSQL databases © myNoSQL)


LevelDB and Kyoto Cabinet Benchmark

I've been pretty excited about Google's LevelDB, not to mention there are some really old tanks already on the battlefield like BDB, Tokyo Cabinet (and its newer sibling Kyoto Cabinet), HamsterDB, etc. Fortunately I've already worked with Kyoto Cabinet, and when I looked at the benchmarks I was totally blown away.

His benchmark results are radically different from the ones published in the LevelDB benchmark.

Original title and link: LevelDB and Kyoto Cabinet Benchmark (NoSQL databases © myNoSQL)

via: http://maxpert.tumblr.com/post/8330476086/leveldb-vs-kyoto-cabinet-my-findings


Redis vs H2 Performance in Grails 1.4

I wondered just how much faster read/write operations could be with Redis (if at all) over the H2 database so I set out to write a little test app to see for myself.

It is an apples-to-apples comparison: both Redis and H2 are in-memory databases. But it is not a comparison of Redis vs H2 performance; rather it compares the Grails integration for Redis and H2, Grails' object-to-Redis vs object-to-relational mapping, the Redis and H2 drivers, and only lastly Redis and H2 performance.

You could argue that for real applications that's what matters, and you would be correct. But then the title should be the one I used.
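
To actually compare the databases rather than the stacks, you would have to strip those layers away. A minimal sketch, assuming the Jedis client and a made-up key count, that times raw Redis writes with no object mapping in between:

```java
// Hedged sketch (Jedis client): time raw Redis SETs, bypassing Grails'
// object mapping, to isolate driver-plus-server cost from framework cost.
import redis.clients.jedis.Jedis;

public class RawRedisTiming {
    public static void main(String[] args) {
        try (Jedis jedis = new Jedis("localhost", 6379)) {
            int n = 100_000;
            long start = System.nanoTime();
            for (int i = 0; i < n; i++) {
                jedis.set("key:" + i, "value-" + i);
            }
            long ms = (System.nanoTime() - start) / 1_000_000;
            System.out.printf("%d raw SETs in %d ms%n", n, ms);
        }
    }
}
```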

Redis vs H2 in Grails 1.4

Original title and link: Redis vs H2 Performance in Grails 1.4 (NoSQL databases © myNoSQL)

via: http://www.christianoestreich.com/2011/06/redis-vs-h2/


The 11 Commandments of Benchmarking

Mark Nottingham has a great post about benchmarking HTTP servers. All 11 rules in the post apply as-is to NoSQL benchmarks and, more generally, to storage benchmarks:

  1. Consistency
  2. One machine, one job
  3. Check the network
  4. Remove OS limitations
  5. Don’t test the client
  6. Overload is not capacity
  7. Thirty Seconds isn’t a test
  8. Do more than Hello world
  9. Not just averages (see the sketch after this list)
  10. Publish it all
  11. Try different tools
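
Rule 9 deserves a concrete illustration. A tiny sketch with made-up sample data that reports percentiles and the max instead of a single mean:

```java
// Sketch for rule 9 ("Not just averages"): a single slow outlier barely moves
// the mean but shows up clearly in p99 and max. Sample data is made up.
import java.util.Arrays;

public class LatencyReport {
    static double percentileMs(long[] sortedNanos, double p) {
        int idx = (int) Math.ceil(p / 100.0 * sortedNanos.length) - 1;
        return sortedNanos[Math.max(idx, 0)] / 1e6;
    }

    public static void main(String[] args) {
        long[] nanos = {1_200_000, 1_300_000, 1_350_000, 1_400_000,
                        1_450_000, 1_500_000, 1_600_000, 1_700_000,
                        1_800_000, 9_800_000}; // one ugly outlier
        Arrays.sort(nanos);
        System.out.printf("p50=%.2f ms  p99=%.2f ms  max=%.2f ms%n",
                percentileMs(nanos, 50), percentileMs(nanos, 99),
                nanos[nanos.length - 1] / 1e6);
    }
}
```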

Go read the post now before creating yet another irrelevant benchmark.

Original title and link: The 11 Commandments of Benchmarking (NoSQL databases © myNoSQL)

via: http://www.mnot.net/blog/2011/05/18/http_benchmark_rules


Multi-tenancy and Cloud Storage Performance

Adrian Cockcroft[1] has a great explanation of the impact of multi-tenancy on cloud storage performance. The connection with NoSQL databases is not necessarily in the Amazon EBS and SSD Price, Performance, QoS comparison, but in this warning:

If you ever see public benchmarks of AWS that only use m1.small, they are useless; it shows that the people running the benchmark either didn't know what they were doing or are deliberately trying to make some other system look better. You cannot expect to get consistent measurements of a system that has a very high probability of multi-tenant interference.


  1. Adrian Cockcroft: Netflix, @adrianco  

Original title and link: Multi-tenancy and Cloud Storage Performance (NoSQL databases © myNoSQL)

via: http://perfcap.blogspot.com/2011/03/understanding-and-using-amazon-ebs.html


NoSQL Benchmark Source Code Available

The code of the NoSQL benchmark I mentioned a couple of days ago — the one comparing Cassandra 0.6.10, HBase 0.20.6, MongoDB 1.6.5, and Riak 0.14.0 with some weird results — is now available on GitHub.

The benchmark is part of the Master's thesis of Thibault Dory, a Belgian CS student. Thibault has also set up a website for the benchmark: NoSQLBenchmarking.com.

I’m wondering how soon we will start seeing corrections to the initial results for each of the included NoSQL databases.

Jean-Daniel Cryans

Original title and link: NoSQL Benchmark Source Code Available (NoSQL databases © myNoSQL)


YCSB Benchmark Results for Cassandra, HBase, MongoDB, Riak

A recent slide deck presents the results of a YCSB benchmark run against the latest versions of Cassandra (0.6.10), HBase (0.20.6), MongoDB (1.6.5), and Riak (0.14.0):

Some of the results are striking, so I can't help wondering whether there weren't some configuration issues.

Update: A few users who had more luck reading the details on the slides have pointed out that this is not the YCSB benchmark, but a new one developed by the presenter. Another important detail: the data set used was rather small and could easily fit in memory.

Original title and link: YCSB Benchmark Results for Cassandra, HBase, MongoDB, Riak (NoSQL databases © myNoSQL)


MongoDB vs Clustrix Performance Comparison

This made the rounds yesterday and drew some long comment threads on both Hacker News and Reddit.

While I haven’t gone through the benchmark details, the first thing that made me raise an eyebrow was this comment early in the post:

Well, that’s just bullshit. There is absolutely nothing about SQL or the relational model preventing it from scaling out.

I’m afraid I’ll have to disagree with the second part.

Original title and link: MongoDB vs Clustrix Performance Comparison (NoSQL databases © myNoSQL)

via: http://sergeitsar.blogspot.com/2011/01/mongodb-vs-clustrix-comparison-part-1.html


VoltDB: 3 Concepts That Make It Fast

John Hugg lists the 3 concepts that make VoltDB fast:

  1. Exploit repeatable workloads: VoltDB exclusively uses a stored procedure interface.
  2. Partition data to horizontally scale: VoltDB divides data among a set of machines (or nodes) in a cluster to achieve parallelization of work and near-linear scale-out.
  3. Build a SQL executor specialized for the problem you're trying to solve: if stored procedures take microseconds, why interleave their execution with a complex system of row and table locks and thread synchronization? It's much faster and simpler just to execute work serially.

Let’s take a quick look at these.

Using stored procedures, instead of allowing free-form queries, would allow the system (a sketch follows the list):

  1. to completely skip query parsing and the runtime creation and optimization of execution plans
  2. to generate the appropriate indexes, by analyzing the set of stored procedures at deploy time
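
For reference, this is roughly what such a stored procedure looks like with VoltDB's public Java API; the table, columns, and procedure name here are made up for illustration:

```java
// Hedged sketch of a VoltDB stored procedure (VoltDB's Java API; the table,
// columns, and procedure name are illustrative). The SQL is declared up front,
// so it can be parsed and planned ahead of time rather than at call time.
import org.voltdb.SQLStmt;
import org.voltdb.VoltProcedure;
import org.voltdb.VoltTable;

public class AddUser extends VoltProcedure {
    // Compiled and planned at build/deploy time, not when the procedure runs.
    public final SQLStmt insertUser =
        new SQLStmt("INSERT INTO users (id, name) VALUES (?, ?);");

    public VoltTable[] run(long id, String name) throws VoltAbortException {
        voltQueueSQL(insertUser, id, name);
        return voltExecuteSQL(true); // true marks the final batch
    }
}
```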

The benefits of horizontally partitioned data are well understood: parallelization, plus easier and more cost-effective hardware usage.
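
The routing side of partitioning fits in a few lines. A toy illustration of the idea, not VoltDB code:

```java
// Toy sketch: pick a partition for a row by hashing its partition key.
// Each node then works on its own slice of the data in parallel.
public class PartitionRouter {
    static int partitionFor(String key, int partitions) {
        return Math.floorMod(key.hashCode(), partitions);
    }

    public static void main(String[] args) {
        // The same key always lands on the same partition.
        System.out.println(partitionFor("user:42", 8));
    }
}
```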

Single-threaded execution can also help by removing the need for locking and reducing data-access contention.

While these 3 ideas make a lot of sense and can definitely make a system faster, one major aspect of VoltDB is missing from the list, and I think it is critical to explaining its speed: VoltDB is an in-memory storage solution.

Here are a couple of examples of other NoSQL databases that benefit from being in memory (or as close to it as possible). MongoDB, while a lot more liberal in the queries it accepts, can deliver very fast results by keeping as much data in memory as possible — remember what happened when it had to hit the disk more often? — and by using appropriate indexes where needed. Redis and Memcached deliver amazingly fast results because they keep all data in memory. And Redis is single-threaded while Memcached is not.

Original title and link: VoltDB: 3 Concepts That Make It Fast (NoSQL databases © myNoSQL)

via: https://voltdb.com/blog/why-voltdb-so-fast


Mongo Vs Redis, The Increment Battle

The Hacker News thread points out all the flaws in the test:

  • measuring a mix of client library latency and round-trip time (see the sketch after this list)
  • single threaded
  • no durability requirements
  • wrong way to compute and present stats
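
The first point is worth spelling out: a loop of blocking INCRs mostly measures network round trips, which pipelining largely removes. A hedged sketch using the Jedis client (host, port, and counts are made up):

```java
// Hedged sketch (Jedis client): a blocking INCR loop pays one round trip per
// call; a pipeline batches the same commands into far fewer network exchanges.
import redis.clients.jedis.Jedis;
import redis.clients.jedis.Pipeline;

public class IncrementBattle {
    public static void main(String[] args) {
        try (Jedis jedis = new Jedis("localhost", 6379)) {
            Pipeline pipe = jedis.pipelined();
            for (int i = 0; i < 100_000; i++) {
                pipe.incr("counter"); // buffered; no per-call round trip
            }
            pipe.sync(); // flush and read all replies at once
        }
    }
}
```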

Original title and link: Mongo Vs Redis, The Increment Battle (NoSQL databases © myNoSQL)

via: http://devdazed.com/post/2737860983/mongo-vs-redis-the-increment-battle


Graph Database DEX Benchmark

Sparsity, producers of the DEX graph database, have published the results of a benchmark measuring:

  • How many nodes and edges could be created?
  • What was the size of the database created?
  • How long did the database load take?
  • How many traversals could be made per unit of time?

DEX graph database benchmark

I don't know how to interpret these numbers, so I'll let graph database experts comment.

Benchmark aside, from this post I learned about the "Scalable Graph Analysis Benchmark" paper, which can be downloaded here (PDF). Which makes me wonder: has any other graph database producer used this paper for benchmarking their product?

Original title and link: Graph Database DEX Benchmark (NoSQL databases © myNoSQL)

via: http://sparsity-technologies.com/blog/?p=196


CouchDB: Measuring Read Request Throughput

I am trying to measure max Couch throughput; for these tests I'm happy with just repeatedly requesting the same document. I have some reasonable boxes to perform these tests: they have dual quad-core X5550 CPUs with HyperThreading enabled and 24GB RAM. These boxes have a stock install of Oracle Enterprise Linux 5 on them (which is pretty much RHEL5). The Oracle-supplied Erlang version is R12B5 and I am using CouchDB 1.0.1 built from source.

The database is pretty small (just under 100K docs) and I am querying a view that includes some other docs (the request contains include_docs=true), using JMeter on another identical box to generate the traffic. The total amount of data returned from the request is 1467 bytes. For all of my tests I capture system state using sadc, and there is nothing else happening on these boxes.

Leave the numbers aside for a moment and read how he built the test: a clear scenario and objectives, a clear setup, lots of detail about it, then experiment, tweak, repeat.
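
The core of such a test is small enough to sketch. A single-threaded toy version with hypothetical database and view names (the quoted setup drives a real view with JMeter from a separate, identical box):

```java
// Toy single-threaded version of the quoted test (hypothetical db/design/view
// names): repeatedly request the same view with include_docs=true and report
// throughput.
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class CouchReadLoop {
    public static void main(String[] args) throws Exception {
        HttpClient client = HttpClient.newHttpClient();
        HttpRequest req = HttpRequest.newBuilder(URI.create(
                "http://localhost:5984/mydb/_design/app/_view/by_id?include_docs=true"))
                .GET().build();
        int n = 10_000;
        long start = System.nanoTime();
        for (int i = 0; i < n; i++) {
            client.send(req, HttpResponse.BodyHandlers.ofString());
        }
        double secs = (System.nanoTime() - start) / 1e9;
        System.out.printf("%d requests in %.1f s (%.0f req/s)%n", n, secs, n / secs);
    }
}
```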

Original title and link: CouchDB: Measuring Read Request Throughput (NoSQL databases © myNoSQL)

via: http://comments.gmane.org/gmane.comp.db.couchdb.user/11515