facebook: All content tagged as facebook in NoSQL databases and polyglot persistence

How Do Flume and Scribe Compare?

I read this ☞ post about Cloudera’s Flume with much interest. Flume sounds like a very interesting tool, not to mention that from Cloudera’s business perspective it makes a lot of sense:

We’ve seen our customers have great success using Hadoop for processing their data, but the question of how to get the data there to process in the first place was often significantly more challenging.

Just in case you haven’t had the time to read about Flume yet, here’s a short description from the ☞ GitHub project page:

Flume is a distributed, reliable, and available service for efficiently collecting, aggregating, and moving large amounts of log data. It has a simple and flexible architecture based on streaming data flows. It is robust and fault tolerant with tunable reliability mechanisms and many failover and recovery mechanisms. The system is centrally managed and allows for intelligent dynamic management. It uses a simple extensible data model that allows for online analytic applications.
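To make the “simple extensible data model” concrete: a Flume event is just a byte-array body plus a map of string headers. The post above describes the original Cloudera release, so the sketch below uses the later Apache Flume (NG) Java API purely as an illustration; the header names and values are made up.

```java
import java.nio.charset.StandardCharsets;
import java.util.HashMap;
import java.util.Map;

import org.apache.flume.Event;
import org.apache.flume.event.EventBuilder;

public class FlumeEventExample {
    public static void main(String[] args) {
        // An event is a byte payload plus string headers; headers are what
        // sources, sinks, and interceptors use for routing and annotation.
        Map<String, String> headers = new HashMap<>();
        headers.put("host", "web01.example.com");   // hypothetical header values
        headers.put("logtype", "access");

        Event event = EventBuilder.withBody(
                "GET /index.html 200".getBytes(StandardCharsets.UTF_8), headers);

        System.out.println("headers: " + event.getHeaders());
        System.out.println("body:    " + new String(event.getBody(), StandardCharsets.UTF_8));
    }
}
```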

In a way this sounded a bit familiar; I thought I’d seen something similar before: ☞ Scribe:

Scribe is a server for aggregating streaming log data. It is designed to scale to a very large number of nodes and be robust to network and node failures. There is a scribe server running on every node in the system, configured to aggregate messages and send them to a central scribe server (or servers) in larger groups. If the central scribe server isn’t available the local scribe server writes the messages to a file on local disk and sends them when the central server recovers. The central scribe server(s) can write the messages to the files that are their final destination, typically on an nfs filer or a distributed filesystem, or send them to another layer of scribe servers.
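For comparison, applications hand messages to the local Scribe daemon over Thrift. Below is a minimal Java sketch assuming client classes generated by the Thrift compiler from scribe.thrift; the generated package name, the port, and the log category are placeholders and assumptions, not details taken from the post.

```java
import java.util.Arrays;

import org.apache.thrift.protocol.TBinaryProtocol;
import org.apache.thrift.transport.TFramedTransport;
import org.apache.thrift.transport.TSocket;
import org.apache.thrift.transport.TTransport;

// LogEntry and the scribe service client are generated from scribe.thrift;
// the package below is a placeholder and depends on how the generator is run.
import scribe.thrift.LogEntry;
import scribe.thrift.scribe;

public class ScribeClientSketch {
    public static void main(String[] args) throws Exception {
        // Scribe speaks Thrift over a framed binary transport; the client talks to
        // the local scribe daemon, which forwards (or spools) messages upstream.
        TTransport transport = new TFramedTransport(new TSocket("localhost", 1463));
        TBinaryProtocol protocol = new TBinaryProtocol(transport);
        scribe.Client client = new scribe.Client(protocol);

        transport.open();
        // Each message carries a category, which Scribe uses to bucket and route it.
        LogEntry entry = new LogEntry("access_log", "GET /index.html 200");
        client.Log(Arrays.asList(entry));
        transport.close();
    }
}
```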

So my question is: how do Flume and Scribe compare? What are the major differences, and which scenarios is each of them better suited for?

If you have the answer to any of these questions, please drop a comment or send me an email.

Update: It looks like I failed to find this ☞ useful thread, but thanks to this comment the mistake has been corrected:

1. Flume allows you to configure your Flume installation from a central point, without having to ssh into every machine, update a configuration variable and restart a daemon or two. You can start, stop, create, delete and reconfigure logical nodes on any machine running Flume from any command line in your network with the Flume jar available.

2. Flume also has centralised liveness monitoring. We’ve heard a couple of stories of Scribe processes silently failing, but lying undiscovered for days until the rest of the Scribe installation starts creaking under the increased load. Flume allows you to see the health of all your logical nodes in one place (note that this is different from machine liveness monitoring; often the machine stays up while the process might fail).

3. Flume supports three distinct types of reliability guarantees, allowing you to make tradeoffs between resource usage and reliability. In particular, Flume supports fully ACKed reliability, with the guarantee that all events will eventually make their way through the event flow.

4. Flume’s also really extensible - it’s really easy to write your own source or sink and integrate most any system with Flume. If rolling your own is impractical, it’s often very straightforward to have your applications output events in a form that Flume can understand (Flume can run Unix processes, for example, so if you can use shell script to get at your data, you’re golden).

— Henry Robinson
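Regarding point 4 above: the comment refers to the original Cloudera Flume, whose plugin API differed, but as a rough sketch of how small a custom sink can be, here is a hypothetical sink written against the current Apache Flume (NG) sink API that simply prints event bodies.

```java
import org.apache.flume.Channel;
import org.apache.flume.Context;
import org.apache.flume.Event;
import org.apache.flume.EventDeliveryException;
import org.apache.flume.Transaction;
import org.apache.flume.conf.Configurable;
import org.apache.flume.sink.AbstractSink;

// A do-nothing sink that prints event bodies; a real sink would write to
// HDFS, a database, a search index, etc.
public class ConsoleSink extends AbstractSink implements Configurable {

    @Override
    public void configure(Context context) {
        // Read sink-specific settings from the agent configuration here.
    }

    @Override
    public Status process() throws EventDeliveryException {
        Channel channel = getChannel();
        Transaction tx = channel.getTransaction();
        try {
            tx.begin();
            Event event = channel.take();
            if (event == null) {
                // Nothing available right now; tell Flume to back off briefly.
                tx.commit();
                return Status.BACKOFF;
            }
            System.out.println(new String(event.getBody()));
            tx.commit();
            return Status.READY;
        } catch (Throwable t) {
            tx.rollback();
            throw new EventDeliveryException("Failed to deliver event", t);
        } finally {
            tx.close();
        }
    }
}
```

To wire such a sink into an agent you reference its class name from the agent configuration, which is exactly the kind of extension point the comment describes.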

In the same thread, I read about another tool, ☞ Chukwa:

Chukwa is a Hadoop subproject devoted to large-scale log collection and analysis. Chukwa is built on top of the Hadoop distributed filesystem (HDFS) and MapReduce framework and inherits Hadoop’s scalability and robustness. Chukwa also includes a flexible and powerful toolkit for displaying, monitoring, and analyzing results, in order to make the best use of this collected data.


Cassandra Status Inside Facebook, Twitter, Digg, and More

As everyone probably knows by now, Cassandra originated at Facebook as a solution for inbox search and was then open sourced under the ASF umbrella and an Apache license. Since then, Twitter, Digg, Reddit, and quite a few others have started using it, but not much has been heard from Facebook.

So, in case you are wondering ☞ what’s up with Cassandra, here’s a very concise update:

  1. Twitter and Digg are not planning to fork the project. In fact there are clear plans to contribute back their work on Cassandra (see this for more details)
  2. Facebook is still using Cassandra internally for the inbox search, but they are using their own version
  3. Even though Facebook has stopped contributing to the Cassandra project beyond the initial code donation, the community at the ASF is doing well (read: growing)
  4. Riptano, the company founded by Cassandra project lead Jonathan Ellis and Matt Pfeil, is offering technical support, professional services, and training for Cassandra

Update: an interesting ☞ note (dated July 7th) from Twitter engineer Nick Kallen:

Twitter no longer intends to use Cassandra for any critical data-stores in the near term future.


Integrating Hive and HBase at Facebook

While definitely interesting, something doesn’t seem to add up:

It (nb HBase) sidesteps Hadoop’s append-only constraint by keeping recently updated data in memory and incrementally rewriting data to new files, splitting and merging intelligently based on data distribution changes. Since it is based on Hadoop, making HBase interoperate with Hive is straightforward, meaning HBase tables can be accessed as if they were native Hive tables. As a result, a single Hive query can now perform complex operations such as join, union, and aggregation across combinations of HBase and native Hive tables. Likewise, Hive’s INSERT statement can be used to move data between HBase and native Hive tables, or to reorganize data within HBase itself.
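Before getting to my question, some context: the interoperability described above is exposed through Hive’s HBase storage handler. Below is a rough sketch of what this can look like from Java over Hive JDBC; the table names, columns, and HBase column mapping (hbase_users, profiles, stats:clicks) are made up, and the same statements could just as well be run from the hive CLI.

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class HiveHBaseSketch {
    public static void main(String[] args) throws Exception {
        // Connect to HiveServer2; host, port, and database are placeholders.
        Class.forName("org.apache.hive.jdbc.HiveDriver");
        Connection conn = DriverManager.getConnection(
                "jdbc:hive2://localhost:10000/default", "", "");
        Statement stmt = conn.createStatement();

        // Expose an existing HBase table to Hive through the HBase storage handler,
        // mapping the row key and one column family/qualifier to Hive columns.
        stmt.execute(
            "CREATE EXTERNAL TABLE hbase_users (id STRING, clicks BIGINT) " +
            "STORED BY 'org.apache.hadoop.hive.hbase.HBaseStorageHandler' " +
            "WITH SERDEPROPERTIES ('hbase.columns.mapping' = ':key,stats:clicks') " +
            "TBLPROPERTIES ('hbase.table.name' = 'users')");

        // A single Hive query can now join the HBase-backed table with a native Hive table.
        ResultSet rs = stmt.executeQuery(
            "SELECT u.id, u.clicks, p.country " +
            "FROM hbase_users u JOIN profiles p ON u.id = p.id " +
            "LIMIT 10");
        while (rs.next()) {
            System.out.println(rs.getString(1) + "\t" + rs.getLong(2) + "\t" + rs.getString(3));
        }
        conn.close();
    }
}
```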

What I don’t seem to understand is:

So why HBase?

via: http://www.cloudera.com/blog/2010/06/integrating-hive-and-hbase/


Presentation: Hive - A Petabyte Scale Data Warehouse Using Hadoop

Lately I’ve been mentioning Hive quite a few times when writing about working with NoSQL data, but I was missing a good slide deck covering the Hive architecture, usage scenarios, and other interesting details.

The presentation embedded below, coming from the Facebook Data Infrastructure team, provides all these details and much more (e.g. Hive usage at Facebook, the Hadoop and Hive clusters, etc.).