benchmark: All content tagged as benchmark in NoSQL databases and polyglot persistence

HBase Benchmarking for Multi-Threaded Environment

This weekend I attempted to figure out how HBase writes perform in multi-threaded environments.  […]

I used three variants to write records into HBase:

  1. Use one HTable for every write
    1. HTable is created using a singleton instance of HBaseConfiguration
    2. HTable is created using new instance of HBaseConfiguration (Why? - I wanted to check how the behavior changes if connections to servers are not shared.  Reference: HTable and HConnectionManager documentation)
  2. Use HTablePool

Not really a benchmark, but you could probably use the GitHub code to build one.
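The three write variants above can be sketched as a small harness. This is a hypothetical Python version with an in-memory stand-in for HTable (the post's actual code is Java and lives on GitHub); it only illustrates the shared-instance vs. per-writer-instance setups, not real HBase behavior:

```python
import threading
import time

# Stand-in for an HBase table: a thread-safe in-memory store. In the real
# benchmark this would be an HTable (singleton vs. new HBaseConfiguration)
# or a checkout from HTablePool -- the harness only needs a `put` callable.
class FakeTable:
    def __init__(self):
        self._lock = threading.Lock()
        self._rows = {}

    def put(self, row_key, value):
        with self._lock:
            self._rows[row_key] = value

def run_writers(table_factory, n_threads=4, writes_per_thread=1000):
    """Spawn n_threads writers; each gets a table from table_factory,
    mirroring the 'shared instance' vs. 'one instance per writer' variants."""
    def worker(tid, table):
        for i in range(writes_per_thread):
            table.put(f"row-{tid}-{i}", b"payload")

    threads = []
    start = time.perf_counter()
    for tid in range(n_threads):
        t = threading.Thread(target=worker, args=(tid, table_factory()))
        t.start()
        threads.append(t)
    for t in threads:
        t.join()
    return time.perf_counter() - start

# Variant 1: every writer shares one table (singleton configuration).
shared = FakeTable()
elapsed_shared = run_writers(lambda: shared)
# Variant 2: every writer gets its own table (new configuration each time).
elapsed_private = run_writers(lambda: FakeTable())
print(f"shared: {elapsed_shared:.3f}s, private: {elapsed_private:.3f}s")
```

Swapping the stand-in for a real client is the part where the connection-sharing behavior the post investigates would actually show up in the numbers.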

Original title and link: HBase Benchmarking for Multi-Threaded Environment (NoSQL databases © myNoSQL)

via: http://www.srikanthps.com/2011/06/hbase-benchmarking-for-multi-threaded.html


Amazon RDS TPC-C Benchmark

A paper by Md. Borhan Uddin, Bo He, and Radu Sion:

Experiments were performed to benchmark the Amazon Relational Database Service (RDS) within a TPC-C benchmarking framework. The TPC-C benchmark is one of the most widely adopted database performance benchmarking frameworks comparing OLTP performance of online transaction processing systems. Two types of Amazon RDS services were tested, namely the standard RDS (single availability zone) and the Multi-AZ RDS (synchronous ‘standby’ replica in multiple availability zones). For each service type, five different RDS instances were tested: Small, Large, Extra Large (XLarge), Double Extra Large (2XLarge), and Quadruple Extra Large (4XLarge).

Results are interesting to say the least:

Overall, we observed that at a very low load, the resulting throughput was also relatively low; at medium load, the throughput increased to a peak; at very high loads, the throughput decreased again.

You can get the paper from here. Some independent comments—independent in the sense of not belonging to the authors, but to a company offering a scaling solution for MySQL—about the results here.

Original title and link: Amazon RDS TPC-C Benchmark (NoSQL databases © myNoSQL)


The 11 Commandments of Benchmarking

Mark Nottingham has a great post about benchmarking HTTP servers. All 11 rules laid out in the post apply just as well to NoSQL benchmarks, and to storage benchmarks in general:

  1. Consistency
  2. One machine, one job
  3. Check the network
  4. Remove OS limitations
  5. Don’t test the client
  6. Overload is not capacity
  7. Thirty Seconds isn’t a test
  8. Do more than Hello world
  9. Not just averages
  10. Publish it all
  11. Try different tools

Go read the post now before creating yet another irrelevant benchmark.
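Rule 9 ("Not just averages") is worth a concrete illustration. A minimal nearest-rank percentile report (illustrative code, not from the post) shows how a mean hides the tail:

```python
def percentile(samples, p):
    """Nearest-rank percentile (p in [0, 100]) of a list of latency samples."""
    if not samples:
        raise ValueError("no samples")
    ordered = sorted(samples)
    # nearest-rank: ceil(p/100 * n); the double negation is ceil without math.ceil
    rank = max(1, -(-p * len(ordered) // 100))
    return ordered[min(rank, len(ordered)) - 1]

latencies_ms = [1, 1, 1, 2, 2, 3, 3, 5, 8, 250]  # one slow outlier
mean = sum(latencies_ms) / len(latencies_ms)
print(f"mean={mean:.1f}ms p50={percentile(latencies_ms, 50)}ms "
      f"p99={percentile(latencies_ms, 99)}ms")
# The mean (27.6ms) suggests every request is slow-ish, while the
# percentiles show most requests finish in a few ms and the tail is
# two orders of magnitude slower -- two very different stories.
```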

Original title and link: The 11 Commandments of Benchmarking (NoSQL databases © myNoSQL)

via: http://www.mnot.net/blog/2011/05/18/http_benchmark_rules


Benchmarking MongoDB

The code is purposely a naive implementation, to test how fast each back end is without resorting to optimizations, hacks or tricks. There are probably ways of making it much faster. And even though the production code will be very different to this early experiment, it is not an evil, synthetic micro-benchmark: on the contrary, it is a real application!

You could say that, being a benchmark for a specific scenario, the results are relevant in that context. But I’d also include the following two checks:

  • insert some rogue data and try to recover
  • run a kill -9 midway through the import
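The kill -9 check can be automated. A rough sketch, with a stand-in Python writer playing the role of the real import process (the file-appending child and its record format are illustrative, not from the post):

```python
import os
import signal
import subprocess
import sys
import tempfile
import time

# Child process: a stand-in for the import, appending records to a file
# and flushing after each one so we can count what survives the kill.
child_src = r"""
import sys, time
with open(sys.argv[1], "a") as f:
    for i in range(100000):
        f.write(f"record-{i}\n")
        f.flush()
        time.sleep(0.0005)
"""

fd, path = tempfile.mkstemp()
os.close(fd)

proc = subprocess.Popen([sys.executable, "-c", child_src, path])
time.sleep(0.5)                      # let the import get partway through
os.kill(proc.pid, signal.SIGKILL)    # the 'kill -9 midway' check
proc.wait()

with open(path) as f:
    recovered = sum(1 for _ in f)
print(f"recovered {recovered} of 100000 records after SIGKILL")
```

Against a real database the interesting part is what the count looks like after restart and recovery, not just what made it to the wire.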

Original title and link: Benchmarking MongoDB (NoSQL databases © myNoSQL)

via: http://tobami.wordpress.com/2011/02/28/benchmarking-mongodb/


Railo Cache Benchmark - CouchDB, MongoDB, RAM

They’re all fast, but what amazes me is how little difference there is between RAM vs MongoDB performance!

Not sure why that would be amazing considering MongoDB keeps all that data in memory. In fact, I’d say the interesting part is CouchDB’s performance, considering it goes to disk for each read.

Original title and link: Railo Cache Benchmark - CouchDB, MongoDB, RAM (NoSQL databases © myNoSQL)

via: http://zefer.posterous.com/railo-cache-benchmark-couchdb-mongodb-ram


Benchmarks or Which is faster?

Evan Weaver answering a question on quora:

Whether a benchmark is relevant to you always depends on the use case. Very few systems are ever categorically better than others, but many are situationally better (but your situation may change).

The shortest and one of the best comments you can make.

From all NoSQL benchmarks I’ve covered so far, very few represent solid performance evaluations.

Original title and link: Benchmarks or Which is faster? (NoSQL databases © myNoSQL)

via: http://www.quora.com/Which-is-faster-MySQL-or-MongoDB-Does-it-depend-on-the-use-case


Measuring Redis SINTER/Set Intersection Performance

Redis ☞ SINTER (set intersection) operation benchmarked. An O(N * M) op:

The complete set of benchmark results and the program i ran is at the bottom, but the results i care about are these:

taking the intersection of

  • 50,000 x 5,000 took 4 ms
  • 50,000 x 400 took 0.7 ms
  • 50,000 x 30 took 0.4 ms

☞ sorenbs.com

The question is: how often do you need to perform set intersections in real time, at read time, instead of pre-computing them?
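The quoted timings track the size of the smaller set, which is what the O(N * M) complexity predicts (N being the cardinality of the smallest set, M the number of sets). A pure-Python version of the same measurement, leaving out the network round trip and server-side work a real SINTER call would add:

```python
import time

# Intersect a 50,000-member set with progressively smaller sets,
# mirroring the sizes from the quoted benchmark.
big = set(range(50_000))
for small_size in (5_000, 400, 30):
    small = set(range(small_size))
    start = time.perf_counter()
    for _ in range(100):
        result = big & small          # cost is driven by the smaller set
    elapsed_ms = (time.perf_counter() - start) * 1000 / 100
    print(f"50,000 x {small_size:>5}: {elapsed_ms:.4f} ms per intersection")
```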

Original title and link: Measuring Redis SINTER/Set Intersection Performance (NoSQL databases © myNoSQL)


Redis and Memcached Benchmarks

A long and interesting discussion on comparing Redis and Memcached performance. It all started ☞ here:

After crunching all of these numbers and screwing around with the annoying intricacies of OpenOffice, I’m giving Redis a big thumbs down. My initial sexual arousal from the feature list is long gone. Granted, Redis might have its place in a large architecture, but certainly not a replacement to memcache. When your site is hammering 20,000 keys per second and memcache latency is heavily dependent on delivery times, it makes no business sense to transparently drop in Redis. The features are neat, and the extra data structures could be used to offload more RDBMS activity… but 20% is just too much to gamble on the heart of your architecture.

Salvatore Sanfilippo ☞ followed up:

[…] this is why the sys/toilet benchmark is ill conceived.

  • All the tests are run using a single client into a busy loop.
  • when you run single clients benchmarks what you are metering actually is, also: the round trip time between the client and the server, and all the other kind of latencies involved, and of course, the speed of the client library implementation.
  • The test was performed with very different client libraries

But he also published a new benchmark. And Dormando ☞ published an update picking on the previous two:

The “toilet” bench and antirez’s benches both share a common issue; they’re busy-looping a single client process against a single daemon server. The antirez benchmark is written much better than the original one; it tries to be asyncronous and is much more efficient.

And it didn’t stop here, as Salvatore felt ☞ something was still missing:

The test performed by @dormando was missing an interesting benchmark, that is, given that Redis is single threaded, what happens if I run an instance of Redis per core?
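Running one single-threaded instance per core implies the client has to shard keys across the instances. A minimal sketch of such a router (the class and its dict-backed "instances" are illustrative stand-ins, not code from any of the posts):

```python
from zlib import crc32

class ShardedClient:
    """Route each key to one of several single-threaded server instances
    (e.g. one Redis per core) by hashing the key. Plain dicts stand in
    for the per-core instances here."""
    def __init__(self, n_shards=4):
        self.shards = [{} for _ in range(n_shards)]

    def _shard(self, key):
        # Same key always hashes to the same shard.
        return self.shards[crc32(key.encode()) % len(self.shards)]

    def set(self, key, value):
        self._shard(key)[key] = value

    def get(self, key):
        return self._shard(key).get(key)

client = ShardedClient(n_shards=4)
for i in range(1000):
    client.set(f"key-{i}", i)
counts = [len(s) for s in client.shards]
print("keys per shard:", counts)  # roughly even distribution
```

The catch, of course, is that multi-key operations (like the set intersections above) stop working transparently once the keys live on different instances.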

I assume everyone is asking by now: which one of Redis and Memcached performed better? And the answer is: it depends (even if some would like to believe differently).

But why is this the “answer”? Firstly, because creating good benchmarks is really difficult. Most benchmarks focus on the wrong thing, or cover problems that aren’t very close to real life.

This would be my very simple advice:

  • basic benchmarks will not give you real answers
  • you are better off testing for your very specific scenario (data size, concurrency level, etc.)

Original title and link: Redis and Memcached Benchmarks (NoSQL databases © myNoSQL)


Neo4j and OrientDB Performance Compared

Sort of a benchmark based on running the ☞ TinkerPop test suite against Neo4j and OrientDB (nb: we’ve learned recently that OrientDB is a document-graph database).

[Chart: OrientDB vs Neo4j performance, NoSQL benchmark results]

A couple of notes:

  • I don’t think the test suite addresses the concurrency angle of these graph databases
  • Neo4j is fully ACID compliant and transactions can have a huge impact on the performance, at least for bulk operations

If I’m not mistaken, this is the first data comparing the performance of two graph databases. That doesn’t mean it is a relevant NoSQL benchmark or performance evaluation, though.

Original title and link: Neo4j and OrientDB Performance Compared (NoSQL databases © myNoSQL)

via: http://zion-city.blogspot.com/2010/09/orientdb-fastest-graphdb-available.html


Redis Performance on EC2

Michał Frąckowiak:

Anyway, initially I wanted to deploy the project at Amazon EC2 - because of hyped scalability, price etc. But here comes a surprise — the performance simply sucks

Considering also the results of running MongoDB on Amazon EC2, I’d actually say this is the real surprise (see also Riak’s benchmarks in Joyent’s cloud).

Update: after writing this I noticed that the original post is a bit old, but that doesn’t necessarily make it irrelevant.

Original title and link: Redis Performance on EC2 (NoSQL databases © myNoSQL)

via: http://michalf.me/blog:redis-performance


Graph Database Benchmark: TinkerPop's GraphDB-Bench

GraphDB-Bench is a collection of benchmarks for analyzing the performance of various graph frameworks

Anyone reviewed it? Anyone tried it? Is it a solid NoSQL benchmark?

Original title and link: Graph Database Benchmark: TinkerPop’s GraphDB-Bench (NoSQL databases © myNoSQL)

via: http://github.com/tinkerpop/graphdb-bench


New HBase YCSB changes - improves speed drastically

Ryan Rawson:

There is a new commit to YCSB […] This fixes performance problems in the HBase DB adapter. In my own tests I found that my short scans, which were configured to read 100-column rows, 1-300 in zipfian, went from 60ms to 35ms.

Also there is column selection pushdown enabled, which will improve the speed of any tests that are doing single column gets on a wide row (eg: readallfields=false, fieldcount=X). This is all due to changing how YCSB uses the Result object. Check out the commit for some hints. I have a longer email and patch about this stuff coming really soon.

☞ mail thread

YCSB is probably the most complete and correct NoSQL benchmark. And that’s basically a 40% speed improvement.

Original title and link: New HBase YCSB changes - improves speed drastically (NoSQL databases © myNoSQL)