For people who don’t really grok what’s been said in this post (maybe because it was just too long to read), my recommended setup is: “Use Redis for small datasets that don’t grow fast (stay far less than 1GB). Have at least 2x the memory of your dataset. Use default snapshotting and disable AOF.”
Considering that this time I was one of those who didn’t really follow the first part of the article, filing it under “not sure what all these have to do with the Redis persistence implementation”, I found Jeff Darcy’s ☞ follow up adding a bit more context(!) to the discussion:
I’d rephrase the above as “Use Redis for small datasets (less than 50GB this year) that don’t need to be highly available, have memory at least 2x your actual dataset (until the snapshot implementation improves), use frequent snapshotting or AOF (depending on your need for performance vs. durability – not both) and always avoid overcommit.” I also have nothing against Redis; it’s a fine tool for what it does, but I think its durability story is a bit confused and its reinvented VM can only serve a need that it’s not good for anyway. As always, the real answer is to use multiple data stores to serve multiple needs, with careful consideration of the tradeoffs each represents.
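For readers wondering what the snapshotting-vs-AOF choice looks like in practice, it maps to a handful of `redis.conf` directives. This is a sketch with illustrative values (the `save` thresholds shown are Redis’s shipped defaults), not a recommendation from either author:

```
# RDB snapshotting: dump to disk if N keys changed within M seconds.
# These are the stock defaults; tighten them for more frequent snapshots.
save 900 1
save 300 10
save 60 10000

# Append-only file: better durability at some performance cost.
# Set appendonly to "no" to follow the snapshot-only recommendation above.
appendonly yes
appendfsync everysec
```

The performance-vs-durability tradeoff Darcy mentions lives mostly in `appendfsync`: `always` is the most durable and slowest, `everysec` loses at most about a second of writes, and `no` leaves flushing to the OS.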