Here at RapidRabbit we deliver many thousands of requests per second. To do this with only a handful of servers and Ruby on Rails, we employ some very clever caching using Redis and nginx. Once the cache is written, it is accessed directly by nginx via a module, which makes it around 500-2,000 times faster than any Rails controller.
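A minimal sketch of the shape this takes in nginx, using the HttpRedis module the quote alludes to. The key scheme, port, and the `@rails` fallback name are illustrative assumptions, not details from the post:

```nginx
# nginx reads the cached page straight out of Redis; only a cache
# miss (Redis returns no value -> 404) falls through to Rails.
location / {
    set $redis_key "cache:$uri";      # assumed key scheme
    redis_pass 127.0.0.1:6379;        # HttpRedis: plain GET only
    default_type text/html;
    error_page 404 = @rails;          # miss -> hand off to the app
}

location @rails {
    proxy_pass http://127.0.0.1:3000; # Rails writes the cache entry
}
```

The Rails side then only needs to `SET` the rendered response under the same key for nginx to serve it on the next hit.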
Bypassing the slowest component in your stack with a cache can be a good idea, but then you need to decide how to control the lifecycle of your cached data. Taking inspiration from HTTP's declarative caching mechanisms could be a start.
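One way to borrow HTTP's declarative model is to translate a `Cache-Control` lifetime into a Redis TTL, so expiry is decided where the data is produced rather than by ad-hoc invalidation code. A small hypothetical helper (the name and the `setex` usage below are illustrative, not from the post):

```ruby
# Derive a Redis TTL (in seconds) from an HTTP-style Cache-Control
# value; nil means "do not cache".
def ttl_from_cache_control(header)
  return nil if header.nil? || header.include?("no-store")
  match = header.match(/max-age=(\d+)/)
  match ? match[1].to_i : nil
end

# Writing the cache entry with a declarative lifetime might then look
# like (assuming a `redis` client and a rendered `body`):
#   ttl = ttl_from_cache_control("public, max-age=300")
#   redis.setex(cache_key, ttl, body) if ttl
```

With this approach the cache never needs explicit purging for time-based expiry: Redis drops the key itself when the declared lifetime runs out.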
Original title and link: High Performance Rails Caching with Redis and nginx (© myNoSQL)
Enter the nginx modules Redis2 and Lua. The former allows you to make any call you like to Redis, as opposed to HttpRedis, which only supports the plain old GET command. The latter allows you to embed Lua scripts in your nginx config, effectively giving nginx a bigger brain and allowing you to do some pretty fancy stuff. In this post we'll set up nginx and Redis to serve as a reverse-proxy LRU cache.
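A rough, untested sketch of the setup the post describes. The LRU part comes from Redis itself (`maxmemory` with `maxmemory-policy allkeys-lru`, so cold keys are evicted automatically); the Lua part glues nginx to the Redis2 subrequest. Location names and the `@app` fallback are assumptions for illustration:

```nginx
# Internal location that issues a raw Redis GET via the redis2 module.
location = /redis_get {
    internal;
    redis2_query get $args;
    redis2_pass 127.0.0.1:6379;
}

location / {
    default_type text/html;
    content_by_lua '
        local res = ngx.location.capture("/redis_get",
                                         { args = ngx.var.uri })
        -- redis2 returns the raw Redis protocol reply; a "$-1"
        -- bulk reply means the key is not in the cache.
        if res.status == 200 and res.body:sub(1, 3) ~= "$-1" then
            ngx.print(res.body)   -- note: framing would need stripping
        else
            ngx.exec("@app")      -- cache miss: fall through to backend
        end
    ';
}
```

In a real config the Lua code would also strip the Redis protocol framing from the reply and write misses back to Redis, which is exactly the kind of logic Redis2 plus Lua makes possible inside nginx.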
Don't forget there's also a branch of Redis with Lua scripting support.
Original title and link: LRU Reverse-proxy with Nginx, Redis, and Lua (NoSQL databases © myNoSQL)
So, MongoDB presented us with two problems:
- The basic security it offers is not adequate for allowing external servers to connect to MongoDB.
- When sharding, we can't even use the basic security it does support.
Let's just review: it doesn't sound easy or "out of the box" anymore.
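For concreteness, the "basic security" in question amounts to a couple of mongod settings of that era. A hedged sketch (paths and values are illustrative):

```
# mongod.conf -- the basic security knobs the post refers to
auth = true            # require username/password authentication
bind_ip = 127.0.0.1    # don't listen on public interfaces
# keyFile = /path/to/key   # shared secret for replica set members;
#                          # at the time of this post, authentication
#                          # was not available for sharded clusters
```

Which is precisely the limitation behind the first bullet above: sharded deployments had to fall back on network-level controls (firewalls, private interfaces) instead.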
Original title and link: Securing MongoDB (NoSQL databases © myNoSQL)