Unfortunately both of them are just new examples of useless benchmarks:
- only 1000 keys
- the benchmark doesn’t vary the size of keys and values
- no concurrency
- no mixed reads/writes
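A more representative benchmark would address exactly these points: a large key space, varied value sizes, concurrency, and mixed reads and writes. A minimal sketch of such a workload generator (a plain dict stands in for a real memcached client; the ratios and size ranges are illustrative):

```python
import random
import string
import threading

def random_value(min_len, max_len):
    """Generate a value of random size to avoid fixed-size bias."""
    n = random.randint(min_len, max_len)
    return ''.join(random.choices(string.ascii_letters, k=n))

def run_workload(store, n_ops=10_000, read_ratio=0.9, n_keys=100_000):
    """Run a mixed read/write workload over a large, varied key space."""
    lock = threading.Lock()
    stats = {"reads": 0, "writes": 0}

    def worker():
        for _ in range(n_ops):
            key = f"key:{random.randint(0, n_keys - 1)}"
            if random.random() < read_ratio:
                store.get(key)  # a miss is fine: the key may be cold
                with lock:
                    stats["reads"] += 1
            else:
                store[key] = random_value(16, 4096)  # 16 B .. 4 KB values
                with lock:
                    stats["writes"] += 1

    # concurrency: several client threads hammering the same store
    threads = [threading.Thread(target=worker) for _ in range(4)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return stats

stats = run_workload({})
```

Swapping the dict for an actual client and timing the loop would already say more than a 1000-key, single-threaded test.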
It is difficult to piece together a complete description of what Membase is, as the signal-to-noise ratio in today's announcement is still very low. Anyway, here is what I've been able to put together:
- a cache using memcached protocol
- Apache licensed open source version of NorthScale Membase Server
- project homepage is membase.org and (some) code can be found on GitHub
- can persist data
- supports replication (note: source code repository contains a reference to master-slave setup)
- elastic, allowing addition and removal of new nodes and automatic rebalancing
- used by Zynga and NHN, which are also listed as project contributors
While details are extremely scarce, this sounds a lot like Gear6 Memcached.
According to this paper, the execution of a write operation involves the following steps:
- The set arrives at the Membase listener-receiver.
- Membase immediately replicates the data to replica servers – the number of replica copies is user defined. Upon arrival at replica servers, the data is persisted.
- The data is cached in main memory.
- The data is queued for persistence and de-duplicated if a write is already pending. Once the pending write is pulled from the queue, the value is retrieved from cache and written to disk (or SSD).
- A set acknowledgment is returned to the application.
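The steps above can be sketched as a toy, single-process model. Names like `replicas`, `pending`, and `flush_one` are illustrative, not actual Membase internals; the interesting part is the de-duplication of queued writes:

```python
from collections import OrderedDict

class WritePath:
    """Toy model of the described write path: replicate, cache,
    then queue for persistence with de-duplication."""

    def __init__(self, replicas):
        self.replicas = replicas      # stand-ins for replica servers
        self.cache = {}               # in-memory cache
        self.pending = OrderedDict()  # keys queued for persistence
        self.disk = {}                # stand-in for disk/SSD

    def set(self, key, value):
        # 1. replicate synchronously; replicas persist on arrival
        for r in self.replicas:
            r[key] = value
        # 2. cache the data in main memory
        self.cache[key] = value
        # 3. queue for persistence; re-queuing an already-pending key
        #    is a no-op (de-duplication of pending writes)
        self.pending[key] = True
        # 4. acknowledge to the application
        return "STORED"

    def flush_one(self):
        # persistence worker: pull a pending key, read the *current*
        # cached value, and write it to disk
        key, _ = self.pending.popitem(last=False)
        self.disk[key] = self.cache[key]
```

Note that because the disk write reads the value from cache at flush time, two quick sets of the same key result in a single disk write carrying the latest value.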
There is also:
In membase 1.6, data migration is based on an LRU algorithm, keeping recently used items in low-latency media while “aging out” colder items; first to SSD (if available) and then to spinning media.
A couple of comments:
- it looks like a write operation blocks until the data is completely replicated
- it is not completely clear whether “hot data” is persisted to disk on a write operation or only once it becomes “cold”
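The LRU-based migration described above can be illustrated with a two-tier sketch: hot items stay in a bounded "RAM" tier, and the least recently used item is "aged out" to a slower tier when RAM is full. The tier names and capacity are illustrative, not Membase internals:

```python
from collections import OrderedDict

class TieredStore:
    """Keep hot items in a bounded 'RAM' tier; age the least
    recently used item out to a slower tier when RAM is full."""

    def __init__(self, ram_capacity):
        self.ram = OrderedDict()  # most recently used last
        self.slow = {}            # stand-in for SSD / spinning media
        self.capacity = ram_capacity

    def get(self, key):
        if key in self.ram:
            self.ram.move_to_end(key)  # refresh recency
            return self.ram[key]
        value = self.slow.pop(key)     # cold hit: promote back to RAM
        self.set(key, value)
        return value

    def set(self, key, value):
        self.ram[key] = value
        self.ram.move_to_end(key)
        if len(self.ram) > self.capacity:
            # "age out" the coldest item to the slower tier
            cold_key, cold_val = self.ram.popitem(last=False)
            self.slow[cold_key] = cold_val
```

A real implementation would add a second demotion step from SSD to spinning media, but the recency bookkeeping is the same.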
Membase uses the notion of virtual buckets or vBucket (currently it supports up to 4096) which contains or owns a subset of the key space (note this is similar to Riak Vnodes). Each vBucket replication can be configured independently, but at any time there is only 1 master node that coordinates reads and writes.
Membase runs on each node a couple of “processes” that are dealing with data rebalancing (part of a so called: cluster manager). Once it is determined that a master node (the coordinator for all reads and writes for a particular virtual bucket) becomes unavailable, a Rebalance Orchestrator process will coordinate the migration of the virtual buckets (note: both master and replica data of the virtual bucket will be moved).
When machines are scheduled to join or leave the cluster, these are placed in a pending operation set that is used upon the next rebalancing operation. I’m not sure, but I think it is possible to manually trigger a rebalancing op.
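A minimal sketch of the vBucket idea described above: every key hashes deterministically to one of a fixed number of vBuckets, each vBucket has exactly one master node that coordinates its reads and writes, and rebalancing applies pending membership changes by reassigning vBuckets rather than individual keys. The `crc32` hash and round-robin assignment are illustrative choices, and replicas are omitted:

```python
import zlib

NUM_VBUCKETS = 4096  # Membase currently supports up to 4096

def vbucket_of(key):
    """Every key deterministically belongs to exactly one vBucket."""
    return zlib.crc32(key.encode()) % NUM_VBUCKETS

class Cluster:
    def __init__(self, nodes):
        self.nodes = list(nodes)
        # vBucket -> master node map, initially round-robin
        self.master = {vb: self.nodes[vb % len(self.nodes)]
                       for vb in range(NUM_VBUCKETS)}

    def node_for(self, key):
        """All reads/writes for a key go through its vBucket's master."""
        return self.master[vbucket_of(key)]

    def rebalance(self, joining=(), leaving=()):
        """Apply pending membership changes, then reassign vBuckets."""
        self.nodes = ([n for n in self.nodes if n not in leaving]
                      + list(joining))
        for vb in range(NUM_VBUCKETS):
            self.master[vb] = self.nodes[vb % len(self.nodes)]
```

The point of the indirection is that clients only need the vBucket map; when a node fails or leaves, only the map changes, not the key-to-vBucket assignment.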
- NorthScale Unleashes Membase Server (NorthScale blog)
- NorthScale, Zynga team up on NoSQL (CNET)
- Open Sourced Membase Joins NoSQL Party (GigaOm)
- NorthScale Releases High-Performance NoSQL Database (marketwire.com)
- NorthScale Membase Server web page (↩)
- While I read that “Membase is currently serving data for some of the busiest web applications on the planet.”, I couldn’t find any other users besides Zynga and NHN. (↩)
- Riak uses a similar notion: vnode. While the term is the same, you should not confuse Riak buckets with Membase buckets though. (↩)
- Gear6 memcached provides an enhanced API that allows querying the key/value space
- Gear6 memcached is looking to support more data types by using Redis support for types like lists, sets, ordered sets, hashes
- or Gear6 is looking to provide commercial support for Redis
These left me with the question: why would you use memcached on top of Redis?
- if the integration preserves the same memcached API (nb: I am not sure this would be possible), then:
  - such a product might be useful for projects needing both an RDBMS and Redis (note: but in the end the project would still need to be aware of both storage APIs)
  - such a product might be useful for transitioning towards Redis alone
- the integration would just add features missing from the current version of Redis (f.e. elastic scaling, sharding, etc.)
Do you see any other reasons for using memcached on top of Redis?
- ☞ NoSQL player questions big data (nb the title has pretty much nothing to do with the article)
- ☞ Gear6 Enhances Memcached to Include Native Query Support and Redis Integration
The only documentation I’ve found about cache query is ☞ here, and the only mention of Redis integration, found ☞ here, talks only about support for Redis: (↩)
Gear6 currently offers commercial support for Memcached. If you are interested in purchasing support for Redis please contact us.
Gear6 will soon contribute a number of enhancements to the Redis community.
-  You can read more about Redis data types ☞ here (↩)
A couple of days before 2009 ended, Salvatore Sanfilippo (@antirez) announced his intention to implement virtual memory in Redis. In his message to the Redis user group, he also mentioned some of the goals and advantages of virtual memory in Redis:
- If the dataset access pattern is not random, but there is a bias towards a subset of keys (let’s call this subset of keys the “hot spot”), with VM Redis can deliver performance similar to the case where you have in memory only the hot spot, using only the memory required to hold the hot spot.
- Your hotspot is much bigger than your RAM, but you are willing to pay a performance penalty because you want to use Redis.
Today, Salvatore has reported that the first phase of implementing virtual memory in Redis was completed and the Redis Twitter-clone app is already running on this new version.
According to the initial plan, the first phase is a blocking VM implementation.
This means that Redis will work as usual, but will have a new layer for accessing keys that understands whether a key is in memory or swapped out to disk: when Redis tries to access an on-disk key, it will block to load the key from disk into memory (this includes not only the I/O, but also the CPU time needed to convert the serialized object into its in-memory representation).
Right now it is not yet decided whether this is just an intermediate step before implementing a non-blocking VM or whether it will become part of a release.
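A toy model of such a blocking VM layer: every key access goes through a check that, if the key has been swapped out, synchronously loads and deserializes it before proceeding. Here `pickle` stands in for Redis' object serialization, and the class names are illustrative, not Redis internals:

```python
import pickle

class BlockingVM:
    """Toy key space with a blocking swap: cold values live
    serialized 'on disk'; accessing them blocks the caller
    while they are loaded back."""

    def __init__(self):
        self.memory = {}   # live, in-memory objects
        self.swapped = {}  # serialized values, stand-in for a swap file

    def swap_out(self, key):
        # serialize and evict: this is where memory is reclaimed
        self.swapped[key] = pickle.dumps(self.memory.pop(key))

    def get(self, key):
        if key not in self.memory and key in self.swapped:
            # the blocking step: the caller waits while the value is
            # read back and converted to its in-memory representation
            self.memory[key] = pickle.loads(self.swapped.pop(key))
        return self.memory[key]
```

In a single-threaded server like Redis, every client waits during that load, which is exactly why a non-blocking VM is the more interesting (and harder) second phase.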
While I am neither a concurrency nor a Redis expert, I must confess that my previous experience with a solution similar to Redis' single-threaded approach was disappointing: I am referring to Jackrabbit, the Apache JCR implementation, where we had to circumvent the serialized, single-threaded access for read-only clients. On the other hand, there are other well-known systems (f.e. memcached) using the same approach (some will point out that, as opposed to Redis, memcached never touches the disk, while Jackrabbit's behavior is much closer to Redis').
Anyway, we will always have these Redis benchmarks around for sanity checks.
Terrastore is a very young, Apache-licensed document store built on top of Terracotta (an in-memory clustering technology); it released its 0.2 version a couple of days ago.
I had the opportunity to chat with Sergio Bossa (@sbtourist) and have him answer a couple of questions about Terrastore.
Alex: What is it that made you create Terrastore in the first place?
Sergio: I wanted a scalable document store with consistency features, because I think that’s an uncovered topic/space in current implementations, which are all geared toward BASE.
Being a document database, Terrastore belongs to the same category as CouchDB, MongoDB, and Riak. In some regards (f.e. partitioning), Terrastore is similar to Riak. You should also check  to find out more about Terrastore and the CAP theorem.
Terracotta replication is not full replication to all nodes, but only to those actually requiring the replicated data. Terrastore optimizes this further: thanks to consistent hashing and partitioning, data is not duplicated at all. Terrastore also guarantees that data will never be duplicated among nodes, unless new nodes are joining or old nodes are leaving, thus requiring data redistribution. A Terrastore client doesn’t need to know where the data is: it can contact any Terrastore node and requests will be routed to the proper node holding the value (note: this is similar to the way Dynamo, Project Voldemort, Cassandra, and other distributed stores work).
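That routing behavior can be sketched with a small consistent-hash ring: a key's owner is the first node clockwise from the key's position, and any node receiving a request it does not own simply forwards it. This is a generic sketch of the technique, not Terrastore's actual implementation (the `Ring`/`Node` names and the `md5` hash are illustrative):

```python
import bisect
import hashlib

def h(s):
    """Hash a string to a point on the ring."""
    return int(hashlib.md5(s.encode()).hexdigest(), 16)

class Ring:
    """Consistent-hash ring: each key is owned by the first node
    clockwise from the key's position."""

    def __init__(self, nodes):
        self.points = sorted((h(n), n) for n in nodes)

    def owner(self, key):
        points = [p for p, _ in self.points]
        i = bisect.bisect(points, h(key)) % len(self.points)
        return self.points[i][1]

class Node:
    def __init__(self, name, ring, directory):
        self.name, self.ring = name, ring
        self.directory = directory  # name -> Node, for forwarding
        self.data = {}

    def put(self, key, value):
        owner = self.ring.owner(key)
        if owner == self.name:
            self.data[key] = value  # this node owns the key
        else:
            # not the owner: transparently forward to the node that is
            self.directory[owner].put(key, value)
```

The client-facing consequence is exactly what Sergio describes: you can talk to any node, and each value still lives on exactly one node.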
At this point, more people had joined the chat, so more interesting questions and answers came up.
Alex: Considering Terrastore is built on top of Terracotta, is it an in-memory storage making it somehow similar to Redis?
Sergio: Correct, it stores everything in memory, but it is persistent as well. It is not as fast as Redis mainly due to some overhead related to its distributed features.
Paulo Gaspar: Terrastore looks very much like a persistent, transactional Memcached service.
Sergio: Persistent, transactional, and partitioned/sharded. An interesting difference is that afaik Memcached partitioning is done client side, while Terrastore has built-in support for data partitioning, distribution, and access routing.
Terrastore is already HTTP and JSON friendly, and the future might bring support for the memcached protocol too.
Please see the following resources to learn more about Terrastore: