Earlier today, when writing about a benchmark of MongoDB-, Redis-, and Memcached-based Rails caches, I referred to eviction policies and their possible impact on cache performance. But since I wasn’t sure how Memcached works in a couple of areas, I did a bit of research and found a tweet from Tim de Pater pointing to 2 great articles about Memcached.
Joshua Thijssen’s post covers 4 topics: the Big-O complexity of Memcached operations, the LRU eviction policy, memory allocation, and consistent hashing.
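To make the consistent hashing idea concrete, here is a minimal sketch (my own illustration, not Memcached or client-library code): each server is placed at many points on a hash ring, and a key goes to the first server point at or after the key’s hash, so adding or removing a server only remaps a fraction of the keys.

```python
import hashlib
from bisect import bisect

class HashRing:
    """Toy consistent-hash ring; server names and point counts are made up."""

    def __init__(self, servers, points_per_server=100):
        self.ring = []  # sorted list of (hash, server) points
        for server in servers:
            # place each server at many "virtual" points for balance
            for i in range(points_per_server):
                self.ring.append((self._hash(f"{server}#{i}"), server))
        self.ring.sort()

    @staticmethod
    def _hash(key):
        return int(hashlib.md5(key.encode()).hexdigest(), 16)

    def server_for(self, key):
        # first point at or after the key's hash, wrapping around the ring
        idx = bisect(self.ring, (self._hash(key),)) % len(self.ring)
        return self.ring[idx][1]

ring = HashRing(["cache1:11211", "cache2:11211", "cache3:11211"])
server = ring.server_for("user:42")
```

The same key always lands on the same server, which is what lets a pool of clients agree on placement without coordination.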
Now, in order to combat this “malloc()” problem, memcache does its own memory management by default (you can let memcache use the standard malloc() function, but that would not be advisable). Memcache’s memory manager will allocate the maximum amount of memory from the operating system that you have set (for instance, 64Mb, but probably more) through one malloc() call. From that point on, it will use its own memory manager system called the slab allocator.
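The slab allocator’s sizing scheme can be sketched roughly like this (a simplified illustration with assumed base size and growth factor, not Memcached’s actual source): chunk sizes grow geometrically, and an item is stored in the smallest slab class whose chunk can hold it, which is why some memory in each chunk goes unused.

```python
def build_slab_classes(base=80, growth=1.25, max_size=1024 * 1024):
    """Chunk sizes growing by a factor until the maximum item size."""
    sizes, size = [], float(base)
    while size < max_size:
        sizes.append(int(size))
        size *= growth
    sizes.append(max_size)  # the largest class holds one max-size item
    return sizes

def slab_class_for(item_size, classes):
    """Smallest chunk that fits the item; None if the item is too large."""
    for chunk in classes:
        if item_size <= chunk:
            return chunk
    return None

classes = build_slab_classes()
chunk = slab_class_for(100, classes)  # a 100-byte item gets a larger chunk
```

Note the consequence: a 100-byte item occupies a whole chunk from its class, so the gap between the item size and the chunk size is wasted, a trade-off the slab allocator accepts to avoid malloc() fragmentation.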
Then this post describes in great detail how Memcached’s eviction policy works:
The other day I was chatting with a colleague about Memcached. Eviction policy came up, and I casually mentioned that Memcache isn’t strictly LRU. But a quick Bing search said Memcache is LRU, like this Wikipedia entry. Hmm, I was 99.9% sure Memcache is not LRU, something to do with how it manages memory, but maybe I was wrong all these years. After reading through some Danga mailing lists and documentation, the answer is, Memcached is LRU per slab class, but not globally LRU.
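The “LRU per slab class” behavior can be illustrated with a toy model (my own sketch with invented sizes and capacities, not Memcached internals): each slab class keeps its own LRU list, so a new item only evicts the least recently used item from its *own* class, and items in other classes are untouched even if they are older.

```python
from collections import OrderedDict

class SlabLRUCache:
    """Toy cache with one LRU list per slab class (illustrative only)."""

    def __init__(self, chunks_per_class, chunk_sizes=(64, 128, 256)):
        self.chunk_sizes = chunk_sizes
        self.capacity = chunks_per_class
        # one ordered dict per slab class; oldest entries first
        self.lru = {size: OrderedDict() for size in chunk_sizes}

    def _class_for(self, value):
        for size in self.chunk_sizes:
            if len(value) <= size:
                return size
        raise ValueError("item too large for any slab class")

    def set(self, key, value):
        bucket = self.lru[self._class_for(value)]
        if key in bucket:
            bucket.pop(key)
        elif len(bucket) >= self.capacity:
            bucket.popitem(last=False)  # evict LRU item of THIS class only
        bucket[key] = value

    def get(self, key):
        for bucket in self.lru.values():
            if key in bucket:
                bucket.move_to_end(key)  # mark as most recently used
                return bucket[key]
        return None
```

So a flood of small items can evict other small items while large, rarely used items in another slab class survive, which is exactly why Memcached is LRU per slab class but not globally LRU.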
Original title and link: Memcached Internals: Memory Allocation, Eviction Policy, Consistent Hashing ( ©myNoSQL)