Zookeeper: All content tagged as Zookeeper in NoSQL databases and polyglot persistence

Last NoSQL Releases in 2011: MongoDB, Hive, ZooKeeper, Whirr, HBase, Redis, and Hadoop 1.0.0

Let’s start the year with a quick review of the latest releases that happened in December. Make sure that you scroll to the end as there are quite a few important ones.

MongoDB 2.0.2

Announced on Dec.15th, MongoDB 2.0.2 is a bug fix release:

  • Hit the config server only once per mongos on metadata changes, to avoid overwhelming it
  • Removed an unnecessary connection close and open between mongos and mongod after getLastError
  • Replica set primaries close all sockets on stepDown()
  • Do not require authentication for the buildInfo command
  • scons option for using system libraries

Apache Hive 0.8.0

Apache Hive 0.8.0 came out on Dec.19th. The list of new features, improvements, and bug fixes is extremely long.

Just as a side note, who came up with the idea of having a Hive fans’ page on Facebook?

Apache ZooKeeper 3.4.2

ZooKeeper 3.4.0 was quickly followed by two minor version updates fixing some critical bugs. The list of issues fixed in ZooKeeper 3.4.1 can be found here, and the 2 bugs fixed in ZooKeeper 3.4.2 are listed here.

As with ZooKeeper 3.4.0, these versions are not yet production ready.

Apache Whirr 0.7.0

Apache Whirr 0.7.0 was released on Dec.21st, featuring 56 improvements and bug fixes, including support for Puppet and Chef, and for Mahout and Ganglia as a service. The complete list can be found here.

Some more details about Whirr 0.7.0 can be found here.

Apache HBase 0.90.5

Released Dec.23rd, HBase 0.90.5 packs 81 bug fixes. The complete list can be found here.

Redis 2.4.5

Redis 2.4.5 was released on Dec.23rd and provides 4 bug fixes:

  • [BUGFIX] Fixed a ZUNIONSTORE/ZINTERSTORE bug that can cause a NaN to be inserted as a sorted set element score. This happens when one of the elements has +inf/-inf score and the weight used is 0.
  • [BUGFIX] Fixed memory leak in CLIENT INFO.
  • [BUGFIX] Fixed a non critical SORT bug (Issue 224).
  • [BUGFIX] Fixed a replication bug: now the timeout configuration is respected during the connection with the master.
  • --quiet option implemented in the Redis test.
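The ZUNIONSTORE/ZINTERSTORE bug comes down to IEEE 754 arithmetic: +inf multiplied by a weight of 0 yields NaN, which then pollutes the sorted set. The sketch below (in Python, with hypothetical function names — not Redis’s actual C implementation) shows the failure mode and one plausible guard; the actual Redis patch may handle the edge case differently.

```python
import math

def weighted_union_score(scores, weights):
    """SUM aggregation over sorted-set scores, ZUNIONSTORE-style,
    guarding against inf * 0 producing a NaN element score."""
    total = 0.0
    for score, weight in zip(scores, weights):
        term = score * weight
        if math.isnan(term):  # inf * 0 -> nan; treat the term as 0
            term = 0.0
        total += term
    return total

# Unguarded arithmetic is exactly the reported bug:
assert math.isnan(float("inf") * 0.0)
# Guarded aggregation keeps the result a valid score:
assert weighted_union_score([float("inf"), 2.0], [0.0, 3.0]) == 6.0
```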

Last but definitely one of the most important announcements that came in December:

Hadoop 1.0.0

Based on the 0.20-security code line, Hadoop 1.0.0 was announced on Dec.29th. This release includes support for:

  • HBase (append/hsynch/hflush) and Security
  • Webhdfs (with full support for security)
  • Performance enhanced access to local files for HBase
  • Other performance enhancements, bug fixes, and features
  • All version 0.20.205 and prior 0.20.2xx features

Complete release notes are available here.

Stéphane Fréchette, Ryan Slobojan, Duane Moore, Arun C. Murthy

And with this we are ready for 2012.

Original title and link: Last NoSQL Releases in 2011: MongoDB, Hive, ZooKeeper, Whirr, HBase, Redis, and Hadoop 1.0.0 (NoSQL database©myNoSQL)


Cassandra, Zookeeper, Scribe, and Node.js Powering Rackspace Cloud Monitoring

Paul Querna describes the original architecture of Cloudkick and the one that powers the recently announced Rackspace Cloud Monitoring service:

Development framework: from Twisted Python and Django to Node.js

Cloudkick was primarily written in Python. Most backend services were written in Twisted Python. The API endpoints and web server were written in Django, and used mod_wsgi. […] Cloud Monitoring is primarily written in Node.js.

Storage: from master-slave MySQL to Cassandra

Cloudkick was reliant upon a MySQL master and slaves for most of its configuration storage. This severely limited scalability, performance, and multi-region durability. These issues aren’t necessarily a property of MySQL, but Cloudkick’s use of the Django ORM made it very difficult to use MySQL radically differently. The use of MySQL was not continued in Cloud Monitoring, where metadata is stored in Apache Cassandra.

Even more Cassandra:

Cloudkick used Apache Cassandra primarily for metrics storage. This was a key element in keeping up with metrics processing, and providing a high quality user experience, with fast loading graphs. Cassandra’s role was expanded in Cloud Monitoring to include both configuration data and metrics storage.

Event processing: from RabbitMQ to Zookeeper and a bit more Cassandra

RabbitMQ is not used by Cloud Monitoring. Its use cases are being filled by a combination of Apache Zookeeper, point to point REST or Thrift APIs, state storage in Cassandra and changes in architecture.

And finally Scribe:

Cloudkick used an internal fork of Facebook’s Scribe for transporting certain types of high volume messages and data. Scribe’s simple configuration model and API made it easy to extend for our bulk messaging needs. Cloudkick extended Scribe to include a write ahead journal and other features to improve durability. Cloud Monitoring continues to use Scribe for some of our event processing flows.

Original title and link: Cassandra, Zookeeper, Scribe, and Node.js Powering Rackspace Cloud Monitoring (NoSQL database©myNoSQL)

via: http://journal.paul.querna.org/articles/2011/12/17/technology-cloud-monitoring/


Netflix Open Sources Curator ZooKeeper Library

I’m excited to read that Netflix is getting even more involved in the open source world and that the team there is starting to open source some of the tools developed internally. As some of you may already know, Netflix has been experimenting with quite a few NoSQL databases, is a heavy user of them, and runs the majority of its services in the cloud.

The first project announced, and already available on GitHub, is Curator, a ZooKeeper client wrapper and rich ZooKeeper framework (nb: ZooKeeper just released version 3.4.0).

Curator deals with ZooKeeper complexity in the following ways:

  • Retry Mechanism: Curator supports a pluggable retry mechanism. All ZooKeeper operations that generate a recoverable error get retried per the configured retry policy. Curator comes bundled with several standard retry policies (e.g. exponential backoff).
  • Connection State Monitoring: Curator constantly monitors the ZooKeeper connection. Curator users can listen for state changes in the connection and respond accordingly.
  • ZooKeeper Instance Management: Curator manages the actual connection to the ZooKeeper cluster using the standard ZooKeeper class. However, the instance is managed internally (though you can access it if needed) and recreated as needed. Thus, Curator provides a reliable handle to the ZooKeeper cluster (unlike the built-in implementation).
  • Correct, Reliable Recipes: Curator comes bundled with implementations of most of the important ZooKeeper recipes (and some additional recipes as well). The implementations are written using ZooKeeper best practices and take account of all known edge cases (as mentioned above).
  • Curator’s focus on recipes makes your code more resilient as you can focus strictly on the ZooKeeper feature you’re interested in without worrying about correctly implementing ZooKeeper housekeeping requirements.
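The retry mechanism described above is easy to picture with a small sketch. Curator itself is a Java library; the Python below is only an illustration of the exponential-backoff idea behind a pluggable retry policy, with names (`exponential_backoff_sleeps`, `with_retries`) that are my own, not Curator’s API.

```python
import random

def exponential_backoff_sleeps(base_ms=100, max_retries=5):
    """Yield sleep intervals in the spirit of an exponential backoff
    retry policy: base * random(1 .. 2**(retry+1)), for a bounded
    number of retries."""
    for retry in range(max_retries):
        yield base_ms * random.randint(1, 2 ** (retry + 1))

def with_retries(op, policy, is_recoverable):
    """Run op(), retrying recoverable errors once per interval the
    policy yields; non-recoverable errors propagate immediately."""
    last_exc = None
    for sleep_ms in policy:
        try:
            return op()
        except Exception as exc:
            if not is_recoverable(exc):
                raise
            last_exc = exc
            # time.sleep(sleep_ms / 1000) in real code; omitted here
    raise last_exc
```

A quick usage example: an operation that fails twice with a recoverable error succeeds on the third attempt under the default policy.

```python
attempts = []
def flaky():
    attempts.append(1)
    if len(attempts) < 3:
        raise ConnectionError("transient")
    return "ok"

assert with_retries(flaky, exponential_backoff_sleeps(),
                    lambda e: isinstance(e, ConnectionError)) == "ok"
assert len(attempts) == 3
```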

Next on the list of open source projects from Netflix:

  • Astyanax - The Netflix Cassandra Client
  • Priam - Co-Process for backup/recovery, Token Management, and Centralized Configuration management for Cassandra
  • CassJMeter - JMeter plugin to run Cassandra tests

Original title and link: Netflix Open Sources Curator ZooKeeper Library (NoSQL database©myNoSQL)

via: http://techblog.netflix.com/2011/11/introducing-curator-netflix-zookeeper.html


Apache ZooKeeper 3.4.0 Released to Be Followed Soon by Production-Ready Version

Apache ZooKeeper, the high-performance coordination service exposing services like naming, configuration management, synchronization, etc. for distributed applications, has reached version 3.4.0.

Even if the official announcement was laconic, ZooKeeper 3.4.0 features over 150 fixes.

The most important ones are summarized by Patrick Hunt in this Cloudera blog post:

  • ZooKeeper 3.3.3 clients are compatible with 3.4.0 servers
  • Native Windows version of C client
  • Support Kerberos authentication of clients
  • Improved REST Interface
  • Existing monitoring support has been extended through the introduction of a new ‘mntr’ 4 letter word
  • Add tools and recipes for monitoring as a contrib
  • Web-based Administrative Interface
  • Automating log and snapshot cleaning
  • Add logging/stats to identify production deployment issues
  • Support for building RPM and DEB packages

Something to keep in mind though: ZooKeeper 3.4.0 is not production ready yet. After extensive testing, it will be followed soon by a minor release that will be production-ready.

Original title and link: Apache ZooKeeper 3.4.0 Released to Be Followed Soon by Production-Ready Version (NoSQL database©myNoSQL)


Experimenting with Hadoop using Cloudera VirtualBox Demo

CDH Mac OS X VirtualBox VM

If you don’t count the download, you’ll get this up and running in 5 minutes tops. At the end you’ll have Hadoop, Sqoop, Pig, Hive, HBase, ZooKeeper, Oozie, Hue, Flume, and Whirr all configured and ready to experiment with.

Making it easy for users to experiment with these tools increases the chances for adoption. Adoption means business.

Original title and link: Experimenting with Hadoop using Cloudera VirtualBox Demo (NoSQL databases © myNoSQL)

via: http://www.cloudera.com/blog/2011/06/cloudera-distribution-including-apache-hadoop-3-demo-vm-installation-on-mac-os-x-using-virtualbox-cdh/


riak_zab: ZooKeeper’s Zab Protocol for Riak

Last night, Mark Phillips[1] sent me the following message:

Yesterday a Riak community member, Joseph Blumstedt, released a pair of repos: riak_zab and riak_zab_example. In short, riak_zab is an extension for riak_core that provides totally ordered atomic broadcast capabilities.

If your first reaction was something along the lines of “so what?”, then you are not alone. Anyway, I hope I’ve done my homework, and now I understand why this is both interesting and important.

ZooKeeper is a coordination service created by Yahoo! to allow large scale applications to perform coordination tasks such as leader election, status propagation, and rendezvous. Embedded into ZooKeeper is a totally ordered broadcast protocol: Zab[2].

ZooKeeper makes the following requirements on the Zab broadcast protocol:

Reliable delivery
If a message, m, is delivered by one server, then it will be eventually delivered by all correct servers.
Total order
If message a is delivered before message b by one server, then every server that delivers a and b delivers a before b.
Causal order
If message a causally precedes message b and both messages are delivered, then a must be ordered before b.
Prefix property
If m is the last message delivered for a leader L, any message proposed before m by L must also be delivered.
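The total-order requirement above can be checked mechanically: for every pair of messages that two servers both deliver, their relative order must agree everywhere. A small illustrative checker in Python (my own sketch, not part of ZooKeeper or riak_zab):

```python
def total_order_holds(delivery_logs):
    """Check Zab's total-order property: if one server delivers a
    before b, every server that delivers both must deliver a before b.
    delivery_logs is a list of per-server delivery sequences."""
    seen_order = {}  # frozenset({a, b}) -> agreed (first, second)
    for log in delivery_logs:
        position = {msg: i for i, msg in enumerate(log)}
        for a in log:
            for b in log:
                if a == b or position[a] >= position[b]:
                    continue
                key = frozenset((a, b))
                if key in seen_order and seen_order[key] != (a, b):
                    return False  # two servers disagree on the order
                seen_order[key] = (a, b)
    return True

# A server may deliver a prefix/subset, but never a conflicting order:
assert total_order_holds([["m1", "m2", "m3"], ["m1", "m3"]])
assert not total_order_holds([["m1", "m2"], ["m2", "m1"]])
```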

Putting it together:

riak_zab is an extension for riak_core that provides totally ordered atomic broadcast capabilities. This is accomplished through a pure Erlang implementation of Zab, the Zookeeper Atomic Broadcast protocol invented by Yahoo! Research.

Basically this means that many operations that were typically hard to do well using the Dynamo eventual consistency model, which Riak implements with riak_core, are now theoretically possible. In case you’d like to read more about riak_core, I’d suggest these posts: Building blocks of Dynamo-like distributed systems, riak_core: building distributed applications without shared state, and Where to start with riak_core.

Indeed, there are other Dynamo aspects, like synchronization, scaling, ring rebalancing, and handoff, that riak_zab had to address; its approach is documented on the project page.

Technically, riak_zab isn’t focused on message delivery. Rather, riak_zab provides consistent replicated state.

So why is this important? While parts of an application can deal with eventual consistency, there are typically a few aspects of an app where stronger consistency is needed for various reasons. A few such examples:

riak_zab will theoretically allow users to reap the availability of the standard eventually consistent model Riak provides and opt in to stronger consistency for the parts of the app that require it. In other words, it expands the number of CAP trade offs that a developer can make beyond the R, W and N model and makes possible functionality that relies on consistency guarantees.
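The R, W, and N model mentioned above has a well-known consistency rule: reads and writes are guaranteed to overlap on at least one replica only when R + W > N. A one-line check (illustrative Python, names mine):

```python
def is_strongly_consistent(r, w, n):
    """Dynamo-style quorum rule: a read quorum of R and a write quorum
    of W over N replicas intersect in at least one replica exactly when
    R + W > N, so every read sees the latest acknowledged write."""
    return r + w > n

# N=3 with quorum reads and writes (R=2, W=2) overlaps:
assert is_strongly_consistent(r=2, w=2, n=3)
# R=1, W=1 over N=3 favors availability: only eventually consistent.
assert not is_strongly_consistent(r=1, w=1, n=3)
```

riak_zab’s contribution is precisely that it adds a consistency option outside this quorum arithmetic, via totally ordered atomic broadcast.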

After connecting all these dots, I’ve finally got it (nb: thanks Mark for sending this out!). And it definitely sounds like an exciting addition to riak_core. The caveat here is that even if this code has been tested in an academic environment, the project is still an alpha version. Hopefully other members of the Riak community will pick it up and, together with the Basho guys, make it production ready.


  1. Mark Phillips: Community Manager at Basho Technologies, @pharkmillups  

  2. A simple totally ordered broadcast protocol  

Original title and link: riak_zab: ZooKeeper’s Zab Protocol for Riak (NoSQL databases © myNoSQL)


Recipe for a Distributed Realtime Tweet Search System

Ingredients:

Method:

  1. Place Voldemort, Kafka, and Sensei on a couple of servers.
  2. Arrange them with taste:

    Chirp Architecture

  3. Spray a large quantity of tweets on the system

Preparation time:

24 hours

Notes:

For more servings, add the appropriate number of servers.

Result:

Chirper on Github

Reviews

  • One design choice was letting the process that writes to Voldemort also be a Kafka consumer. Although this would be cleaner, we would risk a data race where search may return a hit array before the tweets are added to Voldemort. By making sure a tweet is first added to Voldemort, we can rely on it being the authoritative storage for our tweets.
  • You may have already realized Kafka is acting as a proxy for the Twitter stream, and we could have also streamed tweets directly into the search systems, bypassing the Kafka layer. What we would be missing is the ability to play back tweet events from a specific checkpoint. One really nice feature about Kafka is that you can keep a consumption point to have data replayed. This makes reindexing for cases such as data corruption, schema changes, etc., possible. Furthermore, to scale search, we would have a growing number of search nodes consume from the same Kafka stream. The fact that Kafka is written in a way where adding consumers does not affect the throughput of the system really helps in scaling the entire system.
  • Another important design decision was on using Voldemort for storage. One solution would be to instead store tweets in the search index, e.g. Lucene stored fields. The benefits of this approach would be stronger consistency between search and store, and also that the stored data would follow the retention policy defined by the search system. However, other than the fact that Lucene stored fields are nowhere near as optimal compared to a Voldemort cluster (an implementation issue), there are more convincing reasons:
    • We can first see that the consistency benefit of having search and store together is negligible. Actually, if we follow our assumption of tweets being append-only and we always write to Voldemort first, we really wouldn’t have consistency issues. Yet, having data storage reside on the same search system would disproportionately introduce contention for IO bandwidth and OS cache; as data volume increases, search performance can be negatively impacted.
    • The point about retention is rather valid. As the search index guarantees older tweets to be expired, the Voldemort store would continue to grow. Our decision ultimately came down to two points: 1) Voldemort’s growth factor is very different, e.g. adding new records into the system is much cheaper, so it is feasible to have a much longer data retention policy. 2) Having a cluster of tweet storage allows us to integrate with other systems if desired, for analytics, display, etc.
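The write-ordering discipline described in these reviews — persist to the authoritative store before making the record searchable, so a search hit can always be resolved — can be sketched in a few lines. This is an illustrative Python toy with dictionaries standing in for Voldemort and the search index; the function names are my own.

```python
def ingest_tweet(tweet, store, index):
    """Write-through ordering from the design notes above: write to the
    authoritative store first, then index, so the index never points at
    a tweet the store cannot return."""
    store[tweet["id"]] = tweet  # 1. authoritative storage first
    index.setdefault(tweet["text"], []).append(tweet["id"])  # 2. index last

def search(term, store, index):
    # Every hit resolves, because indexing happens after the store write.
    return [store[tid] for tid in index.get(term, [])]

store, index = {}, {}
ingest_tweet({"id": 1, "text": "hello"}, store, index)
assert search("hello", store, index) == [{"id": 1, "text": "hello"}]
assert search("missing", store, index) == []
```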

Original title and link: Recipe for a Distributed Realtime Tweet Search System (NoSQL databases © myNoSQL)

via: http://sna-projects.com/blog/2011/02/build-a-distributed-realtime-tweet-search-system-in-no-time-part-12/


Rainbird: Twitter’s ZooKeeper + Cassandra Based Realtime Analytics Solution

Kevin Weil[1] presented Twitter’s ZooKeeper and Cassandra based solution for realtime analytics named Rainbird at Strata 2011:

Until recently, counters were a unique feature of HBase. While the latest released version of Cassandra does not include distributed counters, this feature is available in Cassandra’s trunk.


  1. Kevin Weil: Product Lead for Revenue, Twitter, @kevinweil  

Original title and link: Rainbird: Twitter’s ZooKeeper + Cassandra Based Realtime Analytics Solution (NoSQL databases © myNoSQL)


The Backstory of Yahoo and Hadoop

We currently have nearly 100 people working on Apache Hadoop and related projects, such as Pig, ZooKeeper, Hive, Howl, HBase and Oozie. Over the last 5 years, we’ve invested nearly 300 person-years into these projects. […] Today Yahoo runs on over 40,000 Hadoop machines (>300k cores). They are used by over a thousand regular users from our science and development teams. Hadoop is at the center of our research in search, advertising, spam detection, personalization and many other topics.

I assume it’s no surprise to anyone that I’m a big fan of Yahoo!’s open source initiatives.

Original title and link: The Backstory of Yahoo and Hadoop (NoSQL databases © myNoSQL)

via: http://developer.yahoo.com/blogs/hadoop/posts/2011/01/the-backstory-of-yahoo-a


Apache Whirr: Cloud Common Service API

Whirr:

  • A cloud-neutral way to run services. You don’t have to worry about the idiosyncrasies of each provider.
  • A common service API. The details of provisioning are particular to the service.
  • Smart defaults for services. You can get a properly configured system running quickly, while still being able to override settings as needed.

For now, Whirr supports only Hadoop, Cassandra, and ZooKeeper on Amazon EC2, but the roadmap specifies support for both more products and more cloud providers.

While understanding the advantages of abstracting away each cloud’s custom API, I’m wondering why this project has been started from scratch rather than built on top of a solution like Chef or Puppet[1].


  1. Could anyone please remind me what similar projects are available in Python?  

Original title and link: Apache Whirr: Cloud Common Service API (NoSQL databases © myNoSQL)


Quick Reference: Hadoop Tools Ecosystem

Just a quick reference of the continuously growing Hadoop tools ecosystem.

Hadoop

The Apache Hadoop project develops open-source software for reliable, scalable, distributed computing.

hadoop.apache.org

HDFS

A distributed file system that provides high throughput access to application data.

hadoop.apache.org/hdfs/

MapReduce

A software framework for distributed processing of large data sets on compute clusters.

Amazon Elastic MapReduce

Amazon Elastic MapReduce is a web service that enables businesses, researchers, data analysts, and developers to easily and cost-effectively process vast amounts of data. It utilizes a hosted Hadoop framework running on the web-scale infrastructure of Amazon Elastic Compute Cloud (Amazon EC2) and Amazon Simple Storage Service (Amazon S3).

aws.amazon.com/elasticmapreduce/

Cloudera Distribution for Hadoop (CDH)

Cloudera’s Distribution for Hadoop (CDH) sets a new standard for Hadoop-based data management platforms.

cloudera.com/hadoop

ZooKeeper

A high-performance coordination service for distributed applications. ZooKeeper is a centralized service for maintaining configuration information, naming, providing distributed synchronization, and providing group services.

hadoop.apache.org/zookeeper/

HBase

A scalable, distributed database that supports structured data storage for large tables.

hbase.apache.org

Avro

A data serialization system. Similar to ☞ Thrift and ☞ Protocol Buffers.

avro.apache.org

Sqoop

Sqoop (“SQL-to-Hadoop”) is a straightforward command-line tool with the following capabilities:

  • Imports individual tables or entire databases to files in HDFS
  • Generates Java classes to allow you to interact with your imported data
  • Provides the ability to import from SQL databases straight into your Hive data warehouse

cloudera.com/downloads/sqoop/

Flume

Flume is a distributed, reliable, and available service for efficiently moving large amounts of data soon after the data is produced.

archive.cloudera.com/cdh/3/flume/

Hive

Hive is a data warehouse infrastructure built on top of Hadoop that provides tools to enable easy data summarization, ad-hoc querying, and analysis of large datasets stored in Hadoop files. It provides a mechanism to put structure on this data, and it also provides a simple query language called Hive QL, which is based on SQL and enables users familiar with SQL to query this data. At the same time, this language also allows traditional map/reduce programmers to plug in their custom mappers and reducers to do more sophisticated analysis that may not be supported by the built-in capabilities of the language.

hive.apache.org

Pig

A high-level data-flow language and execution framework for parallel computation. Apache Pig is a platform for analyzing large data sets that consists of a high-level language for expressing data analysis programs, coupled with infrastructure for evaluating these programs. The salient property of Pig programs is that their structure is amenable to substantial parallelization, which in turn enables them to handle very large data sets.

pig.apache.org

Oozie

Oozie is a workflow/coordination service to manage data processing jobs for Apache Hadoop. It is an extensible, scalable and data-aware service to orchestrate dependencies between jobs running on Hadoop (including HDFS, Pig and MapReduce).

yahoo.github.com/oozie

Cascading

Cascading is a Query API and Query Planner used for defining and executing complex, scale-free, and fault tolerant data processing workflows on a Hadoop cluster.

cascading.org

Cascalog

Cascalog is a tool for processing data on Hadoop with Clojure in a concise and expressive manner. Cascalog combines two cutting edge technologies in Clojure and Hadoop and resurrects an old one in Datalog. Cascalog is high performance, flexible, and robust.

github.com/nathanmarz/cascalog

HUE

Hue is a graphical user interface to operate and develop applications for Hadoop. Hue applications are collected into a desktop-style environment and delivered as a Web application, requiring no additional installation for individual users.

archive.cloudera.com/cdh3/hue

You can read more about HUE on ☞ Cloudera blog.

Chukwa

Chukwa is a data collection system for monitoring large distributed systems. Chukwa is built on top of the Hadoop Distributed File System (HDFS) and Map/Reduce framework and inherits Hadoop’s scalability and robustness. Chukwa also includes a flexible and powerful toolkit for displaying, monitoring and analyzing results to make the best use of the collected data.

incubator.apache.org/chukwa/

Mahout

A Scalable machine learning and data mining library.

mahout.apache.org

Integration with Relational databases

Integration with Data Warehouses

The only list I have is MapReduce, RDBMS, and Data Warehouse, but I’m afraid it is quite old. So maybe someone could help me update it.

Anything else? Once we validate this list, I’ll probably have to move it to the NoSQL reference.

Original title and link: Quick Reference: Hadoop Tools Ecosystem (NoSQL databases © myNoSQL)


Cloudera: All Your Big Data Are Belong to Us

Matt Asay (GigaOm):

Where Cloudera shines, however, is in taking these different contributions and making Hadoop relevant for enterprise IT[8], where data mining has waxed and waned over the years. […] Cloudera, in other words, is banking on the complexity of Hadoop to drive enterprise IT to its own Cloudera Enterprise tools.

Additionally, I think what Cloudera is “selling” is a good set of tools — Hadoop, HBase, Hive, Pig, Oozie, Sqoop, Flume, Zookeeper, Hue — put together based on their expertise in the field.

Original title and link: Cloudera: All Your Big Data Are Belong to Us (NoSQL databases © myNoSQL)

via: http://cloud.gigaom.com/2010/09/14/cloudera-all-your-big-data-are-belong-to-us/