



Apache Kafka: Next generation distributed messaging system

Abhishek Sharma in a 3,000-word article on InfoQ:

Its architecture consists of the following components:

  • A stream of messages of a particular type is defined as a topic. A Message is defined as a payload of bytes and a Topic is a category or feed name to which messages are published.
  • A Producer can be anyone who can publish messages to a Topic.
  • The published messages are then stored at a set of servers called Brokers or Kafka Cluster.
  • A Consumer can subscribe to one or more Topics and consume the published Messages by pulling data from the Brokers.

Producers can choose their favorite serialization method to encode the message content. For efficiency, a producer can send a set of messages in a single publish request. The following code example shows how to create a Producer to send messages.
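The component roles the article defines (topics, producers, brokers, pull-based consumers) can be pictured with a minimal in-process sketch. This is not the Kafka API — all class names and the `page-views` topic are illustrative stand-ins — but it captures the pull model: the broker stores messages per topic, and each consumer advances its own position.

```python
from collections import defaultdict

class Broker:
    """In-process stand-in for a Kafka broker: an append-only log of
    messages (payloads of bytes) kept per topic."""
    def __init__(self):
        self.topics = defaultdict(list)

    def append(self, topic, message: bytes):
        self.topics[topic].append(message)

    def read(self, topic, offset, max_messages=10):
        # Consumers pull from the broker starting at an offset.
        return self.topics[topic][offset:offset + max_messages]

class Producer:
    def __init__(self, broker):
        self.broker = broker

    def send(self, topic, *messages):
        # A single publish request may carry a batch of messages.
        for m in messages:
            self.broker.append(topic, m)

class Consumer:
    def __init__(self, broker, topic):
        self.broker, self.topic, self.offset = broker, topic, 0

    def poll(self, max_messages=10):
        batch = self.broker.read(self.topic, self.offset, max_messages)
        self.offset += len(batch)  # the consumer tracks its own position
        return batch

broker = Broker()
Producer(broker).send("page-views", b'{"url": "/a"}', b'{"url": "/b"}')
consumer = Consumer(broker, "page-views")
print(consumer.poll())   # [b'{"url": "/a"}', b'{"url": "/b"}']
print(consumer.poll())   # [] -- nothing new since the last pull
```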

Kafka is an amazing system. I just wish the article had actually looked into what makes it unique and how it compares to systems like RabbitMQ or ActiveMQ.

✚ Cameron Purdy in one of the comments:

If you carefully read the article, you’ll note that Kafka is not actually a message queue. It’s just a specialized database with some messaging semantics in its API. That means if you need the behaviors that you would associate with a message queue, you can’t get them with Kafka (or if you can, the performance will plummet.)
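One way to picture the distinction Purdy is drawing — a log-like database with messaging semantics versus a true message queue — is that a queue deletes a message once a consumer takes it, while a Kafka-style log keeps every message and lets each reader advance its own offset independently. The sketch below is a toy illustration of that difference, not Kafka or any broker's API; the consumer names are made up.

```python
from collections import deque

# A message queue hands each message to one consumer and forgets it:
queue = deque(["job-1", "job-2"])
assert queue.popleft() == "job-1"   # destructive: gone after the pop

# A log-style store keeps every message; each reader only advances its
# own offset, so two consumers see the same data independently.
log = ["job-1", "job-2"]
offsets = {"billing": 0, "audit": 0}

def consume(reader):
    msg = log[offsets[reader]]
    offsets[reader] += 1
    return msg

assert consume("billing") == "job-1"
assert consume("audit") == "job-1"   # same message, replayed from the log
print(offsets)   # {'billing': 1, 'audit': 1}
```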

Original title and link: Apache Kafka: Next generation distributed messaging system (NoSQL database©myNoSQL)


MongoDB Work Queues: Techniques to Easily Store and Process Complex Jobs

David Berube’s article opens with a very good overview of the different approaches to creating and managing work queues:

There are many approaches to creating work queues. One option, though naive, is to use a relational database management system (RDBMS). This is simple to implement because many architectures already have a database system such as MySQL. However, performance is less than optimal compared with other approaches. The atomicity, consistency, isolation, and durability (ACID) compliance required for RDBMS is not necessary for this scenario and negatively impacts performance. A simpler system can perform better.

One system that has gained in popularity for this use is Redis. It’s a key-value data store, like the highly popular memcached, but with more features. For example, Redis has support for pushing and popping elements off lists in a highly scalable and efficient way. Resque, often used with Ruby on Rails, is a system built on top of Redis (see Resources for more details). However, Redis supports only simple primitives. You can’t insert complex objects into the lists, and it has relatively limited support for managing items in those lists.
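The Resque-style pattern described above — and the limitation the quote points at — can be sketched briefly. Redis lists only hold strings or bytes, so complex jobs must be serialized (typically to JSON) on the way in and parsed on the way out; once enqueued, a job is opaque to Redis and cannot be queried or updated in place. The sketch uses a plain deque as a stand-in for a Redis list; with redis-py against a live server, the two operations would be `LPUSH` and `BRPOP`.

```python
import json
from collections import deque

# Stand-in for a Redis list; with redis-py these would be
# r.lpush("queue", ...) and r.brpop("queue") against a live server.
redis_list = deque()

def enqueue(job: dict):
    # Redis lists only store strings/bytes, so a complex job must be
    # serialized on the way in (this is what Resque does with JSON)...
    redis_list.appendleft(json.dumps(job))

def dequeue() -> dict:
    # ...and parsed on the way out. While queued, the job is opaque:
    # Redis offers no way to query or modify it by its fields.
    return json.loads(redis_list.pop())

enqueue({"class": "ResizeImage", "args": ["photo.png", 800]})
job = dequeue()
print(job["class"])   # ResizeImage
```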

Alternatively, many systems use a message broker such as Apache ActiveMQ or RabbitMQ. Although these systems are fast and scalable, they’re designed for simple messages. If you want to perform nontrivial reporting on your work queues or modify items in the queues, you are stuck because message brokers rarely offer those features. Fortunately, a powerful, scalable solution is available: MongoDB.

MongoDB allows you to create queues that contain complex nested data. Its locking semantics guarantee you won’t experience problems with concurrency, and its scalability ensures you can run large systems. Because MongoDB is a powerful, full-featured database, you can also run robust reporting on your queue and prioritize by complex criteria. However, MongoDB is not a traditional RDBMS. For instance, it does not support Structured Query Language (SQL) queries.

MongoDB has many appealing features in addition to excellent performance for work queues, such as a flexible, schemaless approach. It supports nested data structures, meaning you can even store subdocuments. Because it is a more full-featured data store than Redis, it provides a richer set of management functions so you can easily view, query, update, and delete jobs on any arbitrary criteria.
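The concurrency guarantee the article leans on is the atomic find-and-modify pattern: a worker finds a ready job and flips its status in a single operation, so no two workers can claim the same document. The sketch below models that pattern against an in-memory list of job documents (the lock plays the role of MongoDB's atomic update); with pymongo, the claim would be a single `find_one_and_update` call, and the priority field is an illustrative assumption.

```python
import threading

# In-memory stand-in for a MongoDB collection of job documents. With
# pymongo the claim below would be one atomic call, roughly:
#   coll.find_one_and_update({"status": "ready"},
#                            {"$set": {"status": "running"}},
#                            sort=[("priority", -1)])
jobs = [
    {"_id": 1, "status": "ready", "priority": 2, "payload": {"user": "a"}},
    {"_id": 2, "status": "ready", "priority": 5, "payload": {"user": "b"}},
]
_lock = threading.Lock()  # plays the role of MongoDB's atomic update

def claim_job():
    """Atomically pick the highest-priority ready job and mark it
    running, so no two workers can grab the same document."""
    with _lock:
        ready = [j for j in jobs if j["status"] == "ready"]
        if not ready:
            return None
        job = max(ready, key=lambda j: j["priority"])
        job["status"] = "running"
        return job

print(claim_job()["_id"])   # 2 -- highest priority is claimed first
print(claim_job()["_id"])   # 1
print(claim_job())          # None -- queue drained
```

Because the jobs are ordinary documents, they can also be queried, reported on, or reprioritized by any field while they sit in the queue — the management capability the article contrasts with Redis lists.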

Using MongoDB as a queueing system is in many regards as good and as wrong as using a relational database for this type of functionality. Both completely lack the semantics and features required by queues and pub/sub. Redis (and obviously the dedicated MOMs) natively supports both queue and pub/sub semantics.

So even though the article lists a couple of reasons why MongoDB could be used as a queueing system, consider this solution if and only if the only system you are allowed to run in your environment is MongoDB.

Original title and link: MongoDB Work Queues: Techniques to Easily Store and Process Complex Jobs (NoSQL database©myNoSQL)


Building an Ad Network Ready for Failure

The architecture of a fault-tolerant ad network built on top of HAProxy, Apache with mod_wsgi and Python, Redis, a bit of PostgreSQL and ActiveMQ deployed on AWS:

The real workhorse of our ad targeting platform was Redis. Each box slaved from a master Redis, and on failure of the master (which happened once), a couple “slaveof” calls got us back on track after the creation of a new master. A combination of set unions/intersections with algorithmically updated targeting parameters (this is where experimentation in our setup was useful) gave us a 1 round-trip ad targeting call for arbitrary targeting parameters. The 1 round-trip thing may not seem important, but our internal latency was dominated by network round-trips in EC2. The targeting was similar in concept to the search engine example I described last year, but had quite a bit more thought regarding ad targeting. It relied on the fact that you can write to Redis slaves without affecting the master or other slaves. Cute and effective. On the Python side of things, I optimized the redis-py client we were using for a 2-3x speedup in network IO for the ad targeting results.
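The set union/intersection idea behind the one-round-trip targeting call can be illustrated briefly. This is not the author's production code — the parameter names and ad ids are made up — but it shows why the approach is cheap: each targeting parameter maps to a set of eligible ad ids (Redis SETs), and matching arbitrary parameters is a single intersection, which on Redis is one `SINTER` command and hence one network round trip.

```python
# Each targeting parameter maps to the set of eligible ad ids. In
# Redis these would be SETs, and the intersection below a single
# SINTER call -- one network round trip regardless of parameter count.
ads_by_param = {
    "geo:us":        {101, 102, 103},
    "device:mobile": {102, 103, 104},
    "interest:cars": {103, 104, 105},
}

def target(params):
    """Return the ads eligible under all requested parameters."""
    return set.intersection(*(ads_by_param[p] for p in params))

print(target(["geo:us", "device:mobile", "interest:cars"]))  # {103}
```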

Original title and link: Building an Ad Network Ready for Failure (NoSQL database©myNoSQL)