NoSQL benchmark: All content tagged as NoSQL benchmark in NoSQL databases and polyglot persistence

Railo Cache Benchmark - CouchDB, MongoDB, RAM

They’re all fast, but what amazes me is how little difference there is between RAM vs MongoDB performance!

Not sure why that would be amazing, considering MongoDB keeps all that data in memory. In fact, I’d say the interesting part is CouchDB’s performance, considering it goes to disk for each read.

Original title and link: Railo Cache Benchmark - CouchDB, MongoDB, RAM (NoSQL databases © myNoSQL)

via: http://zefer.posterous.com/railo-cache-benchmark-couchdb-mongodb-ram


Benchmarks or Which is faster?

Evan Weaver, answering a question on Quora:

Whether a benchmark is relevant to you always depends on the use case. Very few systems are ever categorically better than others, but many are situationally better (but your situation may change).

The shortest and one of the best comments you can make.

Of all the NoSQL benchmarks I’ve covered so far, very few represent solid performance evaluations.

Original title and link: Benchmarks or Which is faster? (NoSQL databases © myNoSQL)

via: http://www.quora.com/Which-is-faster-MySQL-or-MongoDB-Does-it-depend-on-the-use-case


Measuring Redis SINTER/Set Intersection Performance

Redis ☞ SINTER (set intersection) operation benchmarked. An O(N * M) op:

The complete set of benchmark results and the program i ran is at the bottom, but the results i care about are these:

taking the intersection of

  • 50,000 x 5,000 took 4 ms
  • 50,000 x 400 took 0.7 ms
  • 50,000 x 30 took 0.4 ms

☞ sorenbs.com

The question is: how often do you need to perform set intersections in real time (at read time) instead of pre-computing them?
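
For reference, here is what the two approaches look like with the redis-py client; a minimal sketch with made-up key names and set sizes, assuming a local Redis server:

    import redis  # redis-py client, assumed installed; key names below are hypothetical

    r = redis.Redis(host="localhost", port=6379)

    # Two sets whose intersection we care about, e.g. users carrying two different tags
    r.sadd("tag:nosql", *[f"user:{i}" for i in range(50_000)])
    r.sadd("tag:benchmark", *[f"user:{i}" for i in range(0, 50_000, 100)])

    # Real-time: compute the O(N * M) intersection on every read
    common = r.sinter("tag:nosql", "tag:benchmark")

    # Pre-computed: store the intersection once, then serve cheap SMEMBERS reads
    r.sinterstore("tag:nosql+benchmark", "tag:nosql", "tag:benchmark")
    common = r.smembers("tag:nosql+benchmark")

If the underlying sets change rarely, the SINTERSTORE route trades a bit of staleness (or some invalidation logic) for much cheaper reads.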

Original title and link: Measuring Redis SINTER/Set Intersection Performance (NoSQL databases © myNoSQL)


Redis and Memcached Benchmarks

A long and interesting discussion on comparing Redis and Memcached performance. It all started ☞ here:

After crunching all of these numbers and screwing around with the annoying intricacies of OpenOffice, I’m giving Redis a big thumbs down. My initial sexual arousal from the feature list is long gone. Granted, Redis might have its place in a large architecture, but certainly not a replacement to memcache. When your site is hammering 20,000 keys per second and memcache latency is heavily dependent on delivery times, it makes no business sense to transparently drop in Redis. The features are neat, and the extra data structures could be used to offload more RDBMS activity… but 20% is just too much to gamble on the heart of your architecture.

Salvatore Sanfilippo ☞ followed up:

[…] this is why the sys/toilet benchmark is ill conceived.

  • All the tests are run using a single client into a busy loop.
  • when you run single clients benchmarks what you are metering actually is, also: the round trip time between the client and the server, and all the other kind of latencies involved, and of course, the speed of the client library implementation.
  • The test was performed with very different client libraries

But he also published a new benchmark. And Dormando ☞ published an update picking on the previous two:

The “toilet” bench and antirez’s benches both share a common issue; they’re busy-looping a single client process against a single daemon server. The antirez benchmark is written much better than the original one; it tries to be asynchronous and is much more efficient.

And it didn’t stop here, as Salvatore felt ☞ something was still missing:

The test performed by @dormando was missing an interesting benchmark, that is, given that Redis is single threaded, what happens if I run an instance of Redis per core?
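
For what it’s worth, here is a minimal sketch of that per-core setup, assuming redis-server is on the PATH and the redis-py client is installed; the ports and the sharding function are arbitrary choices of mine, not anything from the benchmarks above:

    import subprocess
    import time
    import zlib

    import redis

    ports = [6380, 6381, 6382, 6383]  # e.g. one instance per core
    servers = [subprocess.Popen(["redis-server", "--port", str(p)]) for p in ports]
    time.sleep(1)  # give the instances a moment to start accepting connections

    clients = [redis.Redis(port=p) for p in ports]

    def client_for(key: str) -> redis.Redis:
        # Stable client-side sharding: a given key always lands on the same instance
        return clients[zlib.crc32(key.encode()) % len(clients)]

    client_for("user:42").set("user:42", "some value")

A real benchmark would then drive all instances in parallel and sum the per-instance throughput.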

I assume everyone is asking by now: which one of Redis and Memcached performed better? And the answer is: it depends (even if some would like to believe differently).

But why is this the “answer”? Firstly, because creating good benchmarks is really difficult. Most benchmarks focus on the wrong thing, or cover problems that bear little resemblance to real life.

This would be my very simple advice:

  • basic benchmarks will not give you real answers
  • you are better off testing for your very specific scenario (data size, concurrency level, etc.); see the sketch below
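
As a minimal sketch of what “testing for your very specific scenario” can look like, assuming the redis-py and pymemcache clients with both servers running locally on their default ports (the value size and operation count are placeholders to be replaced with your own numbers):

    import time

    import redis
    from pymemcache.client.base import Client as MemcacheClient

    N = 10_000
    value = b"x" * 1024  # use the value size your application actually stores

    def bench(label, set_fn, get_fn):
        start = time.perf_counter()
        for i in range(N):
            set_fn(f"key:{i}", value)
            get_fn(f"key:{i}")
        elapsed = time.perf_counter() - start
        print(f"{label}: {2 * N / elapsed:.0f} ops/sec")

    r = redis.Redis()
    mc = MemcacheClient(("localhost", 11211))
    bench("redis", r.set, r.get)
    bench("memcached", mc.set, mc.get)

Even a toy loop like this, run with your own data sizes and concurrency levels, says more about your deployment than someone else’s chart.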

Original title and link: Redis and Memcached Benchmarks (NoSQL databases © myNoSQL)


Neo4j and OrientDB Performance Compared

Sort of a benchmark, based on running the ☞ TinkerPop test suite against Neo4j and OrientDB (nb: we’ve recently learned that OrientDB is a document-graph database).

[Chart: OrientDB vs Neo4j performance, NoSQL benchmark]

A couple of notes:

  • I don’t think the test suite addresses the concurrency behavior of these graph databases
  • Neo4j is fully ACID compliant, and transactions can have a huge impact on performance, at least for bulk operations; see the sketch after this list
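
To make the transaction point concrete: batching writes into fewer, larger transactions typically changes bulk-load numbers dramatically. A minimal sketch, assuming the official Neo4j Python driver and a local server (the connection details and batch size are placeholders):

    from neo4j import GraphDatabase

    driver = GraphDatabase.driver("bolt://localhost:7687", auth=("neo4j", "password"))

    def load_one_tx_per_create(session, n):
        # Worst case for bulk loads: every CREATE pays the full commit cost
        for i in range(n):
            session.run("CREATE (:Item {id: $id})", id=i)

    def load_batched(session, n, batch=1000):
        # Amortize the commit cost over many CREATEs
        for start in range(0, n, batch):
            with session.begin_transaction() as tx:
                for i in range(start, min(start + batch, n)):
                    tx.run("CREATE (:Item {id: $id})", id=i)
                tx.commit()

    with driver.session() as session:
        load_batched(session, 10_000)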

If I’m not mistaken, this is the first data comparing the performance of two graph databases. That doesn’t mean it is a relevant NoSQL benchmark or performance evaluation, though.

Original title and link: Neo4j and OrientDB Performance Compared (NoSQL databases © myNoSQL)

via: http://zion-city.blogspot.com/2010/09/orientdb-fastest-graphdb-available.html


Redis Performance on EC2

Michał Frąckowiak:

Anyway, initially I wanted to deploy the project at Amazon EC2 - because of hyped scalability, price etc. But here comes a surprise — the performance simply sucks

Considering also the results of running MongoDB on Amazon EC2, I’d actually say the real surprise lies elsewhere (Riak’s benchmarks in Joyent’s cloud).

Update: after writing this, I noticed that the original post is a bit old, but that doesn’t necessarily make it irrelevant.

Original title and link: Redis Performance on EC2 (NoSQL databases © myNoSQL)

via: http://michalf.me/blog:redis-performance


Graph Database Benchmark: TinkerPop's GraphDB-Bench

GraphDB-Bench is a collection of benchmarks for analyzing the performance of various graph frameworks

Anyone reviewed it? Anyone tried it? Is it a solid NoSQL benchmark?

Original title and link: Graph Database Benchmark: TinkerPop’s GraphDB-Bench (NoSQL databases © myNoSQL)

via: http://github.com/tinkerpop/graphdb-bench


New HBase YCSB changes - improves speed drastically

Ryan Rawson:

There is a new commit to YCSB […] This fixes performance problems in the HBase DB adapter. In my own tests I found that my short scans, which were configured to read 100-column rows, 1-300 in zipfian, went from 60ms to 35ms.

Also there is column selection pushdown enabled, which will improve the speed of any tests that are doing single column gets on a wide row (eg: readallfields=false, fieldcount=X). This is all due to changing how YCSB uses the Result object. Check out the commit for some hints. I have a longer email and patch about this stuff coming really soon.

☞ mail thread

YCSB is probably the most complete and correct NoSQL benchmark. And that’s basically a 40% speed improvement.
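
For context, knobs like readallfields and fieldcount come from YCSB’s workload property files. An illustrative (not verbatim) workload for single-column gets and short zipfian scans over wide rows might look like this:

    # illustrative YCSB CoreWorkload properties; the values are examples, not Ryan's setup
    workload=com.yahoo.ycsb.workloads.CoreWorkload
    recordcount=1000000
    operationcount=1000000
    # wide rows: 100 columns per row
    fieldcount=100
    # fetch a single column instead of the whole row (exercises column-selection pushdown)
    readallfields=false
    readproportion=0.5
    scanproportion=0.5
    updateproportion=0
    insertproportion=0
    requestdistribution=zipfian
    # scan lengths between 1 and 300
    maxscanlength=300
    scanlengthdistribution=zipfian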

Original title and link: New HBase YCSB changes - improves speed drastically (NoSQL databases © myNoSQL)


Riak in the Cloud with Joyent SmartMachines

I usually don’t trust vendor benchmarks, but these Riak benchmarks look pretty much in line with Mozilla’s Riak benchmark. What is even more impressive is that these results come from running Riak on virtualized machines (the Joyent SmartMachines[1]).

Watch it for yourself. Slides can be downloaded from ☞ here.


  1. You can read more about Joyent SmartMachines ☞ here.

Original title and link: Riak in the Cloud with Joyent SmartMachines (NoSQL databases © myNoSQL)


Redis Benchmark Ported to Memcached

Salvatore Sanfilippo:

This is a straightforward port of redis-benchmark to memcache protocol.

This way it is possible to test Redis and Memcache with not just an apple to apple comparison, but also using the exactly same mouth… :)

Does it mean that Redis is going after Memcached? I guess Membase is after the same users, so we will have some interesting competition.

Original title and link: Redis Benchmark Ported to Memcached (NoSQL databases © myNoSQL)

via: http://github.com/antirez/mc-benchmark


Redis: A Concurrency Benchmark

It looks like this is one of those days dedicated to NoSQL benchmarks: Jak Sprats shared his concurrency benchmark on the Redis group. Even if the thread doesn’t give details about the hardware or the size of keys and values, the results are impressive.

This blew my mind, there is minimal performance degradation starting at 4000 concurrent requests and at 26000 concurrent requests the performance is 87.6K/s .. unbelievably good

[Chart: Redis concurrency benchmark results]

As far as I know, Redis serializes all operations, so this is even more impressive. The part I’d be interested to see included in this benchmark is Redis virtual memory.
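
The thread doesn’t include enough detail to reproduce the setup, but a rough sketch of this kind of client-concurrency test with redis-py looks like the following; the connection and operation counts are placeholders, and Python’s GIL means this measures a pessimistic lower bound:

    import threading
    import time

    import redis

    CLIENTS = 100          # number of concurrent connections
    OPS_PER_CLIENT = 1000

    def worker(idx):
        r = redis.Redis()  # one connection per thread
        for i in range(OPS_PER_CLIENT):
            r.set(f"c{idx}:k{i}", "v")

    threads = [threading.Thread(target=worker, args=(i,)) for i in range(CLIENTS)]
    start = time.perf_counter()
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    elapsed = time.perf_counter() - start
    print(f"{CLIENTS * OPS_PER_CLIENT / elapsed:.0f} requests/sec over {CLIENTS} connections")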

Original title and link for this post: Redis: A Concurrency Benchmark (published on the NoSQL blog: myNoSQL)

via: http://groups.google.com/group/redis-db/browse_thread/thread/1fc6fd6c8937dda7


NoSQL benchmarks and performance evaluations

Some say it is the right time to start having these around. Others say it’s way too early to start the “battle”. Users do want to see them, and when they’re lacking they create their own, most of the time using incomplete or wrong approaches.

But what am I talking about? As some of you might have guessed already:

NoSQL benchmarks and performance evaluations!

With the recent release of Riak 0.11.0, the Basho guys have also published their internal ☞ benchmarking code. Similar internal benchmark code is ☞ available for MongoDB.

But users are more interested in seeing cross product benchmarks, even if most of the time constructing these is extremely complicated and they end up comparing apples with oranges.

All this being said, and accepting that most of the time someone will figure out a way to invalidate the results, let’s see what cross-product benchmarks we have in the NoSQL space.

Yahoo! Cloud Serving Benchmark

The Yahoo! Cloud Serving Benchmark’s goal is to facilitate performance comparisons of the new generation of cloud data serving systems. The source code is available on ☞ GitHub and Yahoo! has also published ☞ the results of running this benchmark against Cassandra, HBase, Yahoo!’s PNUTS, and a simple sharded MySQL implementation.
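
To give a feel for what the core workload exercises, here is a toy sketch of its read/update mix with zipfian key popularity; pure Python, with illustrative parameters rather than YCSB’s actual defaults:

    import bisect
    import itertools
    import random

    NUM_KEYS = 100_000
    ZIPF_S = 0.99            # skew: a few hot keys receive most of the traffic
    READ_PROPORTION = 0.95   # e.g. a read-heavy mix

    weights = [1.0 / (rank ** ZIPF_S) for rank in range(1, NUM_KEYS + 1)]
    cumulative = list(itertools.accumulate(weights))

    def next_key():
        # Low-rank (popular) keys are chosen far more often than the long tail
        x = random.uniform(0, cumulative[-1])
        return f"user{bisect.bisect_left(cumulative, x)}"

    def next_operation():
        op = "read" if random.random() < READ_PROPORTION else "update"
        return op, next_key()

    print([next_operation() for _ in range(5)])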

VoltDB Benchmark

VoltDB, a new storage solution that calls itself the next-generation SQL RDBMS with ACID for fast-scaling OLTP applications, has recently ☞ published the results of its benchmark comparing VoltDB and Cassandra.

It is worth noting that, while this is one of those apples-to-oranges comparisons (nb: the authors are well aware of it), there are still a couple of interesting and useful things to be learned from it (e.g. the benchmarking procedure, the tested scenarios, etc.).

Unfortunately at this time the source code is not yet available, but hopefully we will see it soon:

Going forward, we’re planning to release the code we used to do these benchmarks. We’d also like to try a few other storage layers

Hypertable and HBase Performance Evaluation

The guys behind Hypertable ☞ have published their results of comparing Hypertable with HBase, using a benchmark based on the Google BigTable paper[1] from which both HBase and Hypertable inherit their architecture. Unfortunately, the benchmark code is not available at this moment.

Thanks to Stu Hood, now I know the code for this benchmark is available in the Hypertable distribution available ☞ here (tar.gz) and the configuration files are also available ☞ here (tar.gz)

So, as far as I could gather, we have: the Yahoo! Cloud Serving Benchmark, the VoltDB vs. Cassandra benchmark, and the Hypertable vs. HBase performance evaluation.

Did I miss any?


  1. The BigTable paper is available ☞ here.