Redis Memory Usage

You probably already know by now that Redis is that super-fast in-memory key-value store providing smart data types, with disk persistence available through snapshots or the append-only mode. In the past we’ve also talked about Redis virtual memory as a solution for improving memory efficiency.

Rich Schumacher and Will Larson ran an experiment to better understand the memory costs of Redis data structures. The testing methodology and all results have been published ☞ here, but you can also see the data summarized below:

[Chart: Redis memory costs for data structures]
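If you want to get a feel for these numbers on your own instance, a minimal sketch is to sample the used_memory field from the INFO command before and after loading a batch of values. The snippet below uses the redis-py client against a local server; the key names, value sizes, and the 10,000-key batch are illustrative assumptions, not the methodology of the experiment above.

```python
import redis

r = redis.Redis()  # assumes a Redis server on localhost:6379

def memory_used() -> int:
    """Current memory usage in bytes, as reported by INFO."""
    return r.info()["used_memory"]

r.flushdb()  # start from an empty database (destroys its data!)
before = memory_used()

# Illustrative workload: 10,000 plain string keys with 10-byte values.
for i in range(10_000):
    r.set(f"key:{i}", "x" * 10)

after = memory_used()
print(f"~{(after - before) / 10_000:.0f} bytes per key")
```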

In his latest ☞ Redis update, Salvatore Sanfilippo explains why hashes are more memory-efficient:

If you store an object of N fields (with N small) as independent keys (one field per key), versus the same object as a hash with five fields, the latter will require on average five times less memory. No magic involved: hashes (and lists and sets in 2.2) are simply designed to be encoded in a much more efficient fashion when they are composed of a few elements.

In practice, instead of using a real hash table with all the overhead involved (the hash table itself, more pointers, Redis objects for every field name and value), we store it as a single “blob” of length-prefixed strings. Speed will not suffer, since we apply this trick only to small fields and small numbers of elements, and there is more cache locality; the memory reduction, however, is impressive.
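To make the difference concrete, here is a minimal sketch of the two layouts using the redis-py client (the user:1 key names and field values are invented for illustration):

```python
import redis

r = redis.Redis()  # assumes a Redis server on localhost:6379

# Layout 1: one top-level key per field. Every field pays the full
# key-space overhead (a dict entry, pointers, a Redis object per value).
r.set("user:1:name", "Ada")
r.set("user:1:email", "ada@example.com")
r.set("user:1:country", "UK")

# Layout 2: the same object as one small hash. In Redis 2.2 a hash with
# few, short fields is encoded as a single compact blob, governed by the
# hash-max-zipmap-entries and hash-max-zipmap-value settings.
r.hset("user:1", "name", "Ada")
r.hset("user:1", "email", "ada@example.com")
r.hset("user:1", "country", "UK")
```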

Salvatore goes on to outline his plans for improving Redis memory consumption across the complete Redis key space:

What is the memory gain here? It is simply huge!

  • Space needed to store 10 million keys as normal keys: 1.7 GB (RSS)
  • Space needed to store 10 million keys with this trick: 300 MB (RSS)

Note that the dump file size is 200 MB, so we are very near the theoretical overhead limit.

In the future we’ll make this transparent by changing the key space implementation to perform this trick automatically, but I guess client library hackers may want to implement a similar interface already. Note that since hashes also provide the HINCRBY command, it’s possible to have tons of atomic counters using very little memory.
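Until that becomes transparent, the client-side version of the trick is to partition logical keys into small hash buckets so each bucket keeps the compact encoding. A minimal sketch, assuming redis-py; the obj:/hits: naming, the helper functions, and the bucket size of 100 are all illustrative choices:

```python
import redis

r = redis.Redis()  # assumes a Redis server on localhost:6379

BUCKET_SIZE = 100  # keep buckets small enough for the compact hash encoding

def bucketed_set(key_id, value):
    """Store logical key <key_id> as a field inside a shared hash bucket."""
    bucket, slot = divmod(key_id, BUCKET_SIZE)
    r.hset(f"obj:{bucket}", str(slot), value)

def bucketed_get(key_id):
    """Fetch the value back from its bucket (None if missing)."""
    bucket, slot = divmod(key_id, BUCKET_SIZE)
    return r.hget(f"obj:{bucket}", str(slot))

def count_hit(counter_id):
    """Atomic counter via HINCRBY: tons of counters, very little memory."""
    bucket, slot = divmod(counter_id, BUCKET_SIZE)
    return r.hincrby(f"hits:{bucket}", str(slot), 1)
```

For the memory gain to hold, BUCKET_SIZE must stay below the hash-max-zipmap-entries threshold; past it, Redis falls back to a real hash table and the overhead returns.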

Interesting days ahead for Redis.

Original title and link for this post: Redis Memory Usage (published on the NoSQL blog: myNoSQL)