Facebook: All content tagged as Facebook in NoSQL databases and polyglot persistence

HDFS: Realtime Hadoop and HBase Usage at Facebook

Dhruba Borthakur started a series of posts — part 1 and part 2 — describing not only the process that led Facebook to adopt HBase and Hadoop, but also the projects where they are used and their requirements:

After considerable research and experimentation, we chose Hadoop and HBase as the foundational storage technology for these next generation applications. The decision was based on the state of HBase at the point of evaluation as well as our confidence in addressing the features that were lacking at that point via in-house engineering. HBase already provided a highly consistent, high write-throughput key-value store. The HDFS NameNode stood out as a central point of failure, but we were confident that our HDFS team could build a highly-available NameNode (AvatarNode) in a reasonable time-frame, and this would be useful for our warehouse operations as well. Good disk read-efficiency seemed to be within striking reach (pending adding Bloom filters to HBase’s version of LSM Trees, making local DataNode reads efficient and caching NameNode metadata). Based on our experience operating the Hive/Hadoop warehouse, we knew HDFS was stellar in tolerating and isolating faults in the disk subsystem. The failure of entire large HBase/HDFS clusters was a scenario that ran against the goal of fault-isolation, but could be considerably mitigated by storing data in smaller HBase clusters. Wide area replication projects, both in-house and within the HBase community, seemed to provide a promising path to achieving disaster recovery.

The second part describes three problems Facebook is solving with HBase and Hadoop and provides further details on the requirements of each.

The two posts represent a great resource for understanding not only where HBase and Hadoop can be used, but also how to formulate the requirements (and non-requirements) for new systems.

A Facebook team will present the paper “Apache Hadoop Goes Realtime at Facebook” at ACM SIGMOD. I’m looking forward to the moment the paper becomes available.

Original title and link: HDFS: Realtime Hadoop and HBase Usage at Facebook (NoSQL databases © myNoSQL)


Firefox Downloads Visualization Powered by HBase

Not only is Mozilla celebrating the release of Firefox 4, but they also took the time to set up a nice visualization of downloads.

glow.mozilla.org is powered by tailing logs and streaming data into HBase:

  1. The various load balancing clusters that host download.mozilla.org are configured to log download requests to a remote syslog server.
  2. The remote server is running rsyslog and has a config that specifically filters those remote syslog events into a dedicated file that rolls over hourly.
  3. SQLStream is installed on that server and it is tailing those log files as they appear.
  4. The SQLStream pipeline does the following for each request:
    • filters out anything other than valid download requests
    • uses MaxMind GeoIP to get a geographic location from the IP address
    • uses a streaming group by to aggregate the number of downloads by product, location, and timestamp
    • every 10 seconds, sends a stream of counter increments to HBase for the timestamp row with the column qualifiers being each distinct location that had downloads in that time interval
  5. The glow backend is a python app that pulls the data out of HBase using the Python Thrift interface and writes a file containing a JSON representation of the data every minute.
  6. That JSON file can be cached on the front-end forever since each minute of data has a distinct filename
  7. The glow website pulls down that data and plays back the downloads or allows you to browse the geographic totals in the arc chart view
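For a rough idea of what steps 4 and 5 look like in code, here is a minimal sketch using the happybase Python client for HBase’s Thrift gateway. The table name, column family, and helper functions are assumptions made for the example, and in production it is SQLStream, not Python, that sends the increments.

```python
import json
import struct

import happybase  # Python client for the HBase Thrift gateway

# Hypothetical connection, table, and column family -- illustrative only.
conn = happybase.Connection("hbase-thrift.example.org")
downloads = conn.table("firefox_downloads")

def record_download(ts_bucket, product, location):
    """Step 4 (done by SQLStream in production): bump a counter in the row
    for this 10-second bucket, one column qualifier per product/location."""
    row_key = "%010d" % ts_bucket
    downloads.counter_inc(row_key, "counts:%s|%s" % (product, location))

def dump_minute_to_json(minute_start):
    """Step 5: pull the 10-second buckets for one minute out of HBase and
    write a JSON file the front-end can cache forever (distinct filename)."""
    totals = {}
    for bucket in range(minute_start, minute_start + 60, 10):
        for column, raw in downloads.row("%010d" % bucket).items():
            qualifier = column.split(b":", 1)[1].decode()
            # HBase counters are stored as 8-byte big-endian longs.
            totals[qualifier] = totals.get(qualifier, 0) + struct.unpack(">q", raw)[0]
    with open("glow-%d.json" % minute_start, "w") as fh:
        json.dump({"minute": minute_start, "downloads": totals}, fh)
```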

This sounds a lot like what Facebook is doing for the new Real-Time Analytics system. The parts missing are Scribe and ptail.

Original title and link: Firefox Downloads Visualization Powered by HBase (NoSQL databases © myNoSQL)

via: http://blog.mozilla.com/data/2011/03/22/how-glow-mozilla-org-gets-its-data/


Facebook Builds HBase-based Real-Time Analytics

More applications of HBase at Facebook, after the new messaging system:

If you are interested in reading more about Facebook Messages, here’s a list of posts:

Original title and link: Facebook Builds HBase-based Real-Time Analytics (NoSQL databases © myNoSQL)


Facebook Messages: FOSDEM NoSQL Event

From this year’s FOSDEM, Facebook talking about the technology behind the messaging platform:

Original title and link: Facebook Messages: FOSDEM NoSQL Event (NoSQL databases © myNoSQL)


HBase at Facebook: The Underlying Technology of Messages

There have been lots of discussions and speculations after the announcement that Facebook is using HBase for the new messaging system. In case you missed it, here are the most important bits:

  • Kannan Muthukkaruppan: The underlying Technology of Messages (facebook.com)

    We spent a few weeks setting up a test framework to evaluate clusters of MySQL, Apache Cassandra, Apache HBase, and a couple of other systems. We ultimately chose HBase. MySQL proved to not handle the long tail of data well; as indexes and data sets grew large, performance suffered. We found Cassandra’s eventual consistency model to be a difficult pattern to reconcile for our new Messages infrastructure.

  • Quora.com: How does HBase write performance differ from write performance in Cassandra with consistency level ALL

    While setting a write consistency level of ALL with a read level of ONE in Cassandra provides a strong consistency model similar to what HBase provides (and in fact using quorum writes and reads would as well), the two operations are actually semantically different and lead to different durability and availability guarantees.

  • Cassandra mailing list: Facebook messaging and choice of HBase over Cassandra

  • Todd Hoff: Facebook’s New Real-Time Messaging System: HBase to Store 135+ Billion Messages a Month (highscalability.com)

    HBase is a scaleout table store supporting very high rates of row-level updates over massive amounts of data. Exactly what is needed for a Messaging system. HBase is also a column based key-value store built on the BigTable model. It’s good at fetching rows by key or scanning ranges of rows and filtering. Also what is needed for a Messaging system. Complex queries are not supported however. Queries are generally given over to an analytics tool like Hive, which Facebook created to make sense of their multi-petabyte data warehouse, and Hive is based on Hadoop’s file system, HDFS, which is also used by HBase.

  • Jeremiah Peschka: Facebook messaging - HBase Comes of Age (facility9.com)

    Existing expertise: The technology behind HBase – Hadoop and HDFS – is very well understood and has been used previously at Facebook. […] Since Hive makes use of Hadoop and HDFS, these shared technologies are well understood by Facebook’s operations teams. As a result, the same technology that allows Facebook to scale their data will be the technology that allows Facebook to scale their Social Messaging feature. The operations team already understands many of the problems they will encounter.

  • Quora.com: What version of HBase is Facebook using for its new messaging platform?

    Facebook has an internal branch of HBase which periodically updates from the Apache SVN. As far as I know, the current version in production is very similar to the 0.89.20100924 development release with a couple more patches pulled in from trunk.

    Facebook engineers continue to actively contribute to the open source trunk, though - it’s not an internal “fork”

    Todd Lipcon (HBase committer)
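The consistency-level point from the Quora answer above is easy to see in code. Below is a minimal sketch using today’s DataStax Python driver purely for illustration; the keyspace, table, and queries are made up, and Facebook’s 2010 evaluation would have gone through the older Thrift API. Note the answer’s caveat: writing at ALL gives strong consistency for subsequent reads at ONE, but the write fails if even a single replica is unavailable, which is part of the availability difference.

```python
from cassandra import ConsistencyLevel
from cassandra.cluster import Cluster
from cassandra.query import SimpleStatement

# Hypothetical keyspace and table -- purely to illustrate consistency levels.
session = Cluster(["127.0.0.1"]).connect("inbox_ks")

# Writing at ALL: acknowledged only once every replica has the write...
insert = SimpleStatement(
    "INSERT INTO messages (user_id, msg_id, body) VALUES (%s, %s, %s)",
    consistency_level=ConsistencyLevel.ALL,
)
session.execute(insert, (42, 1001, "hello"))

# ...so reading at ONE is enough to observe it (W + R > replication factor).
# The catch: the write fails outright if even one replica is down.
select = SimpleStatement(
    "SELECT body FROM messages WHERE user_id = %s AND msg_id = %s",
    consistency_level=ConsistencyLevel.ONE,
)
row = session.execute(select, (42, 1001)).one()
```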

The engineering team behind Facebook’s new messaging system has now posted a video talking more about their choice of HBase. You can watch the video, which runs a bit over an hour, here.

The engineering team behind Facebook Messages spent the past year building out a robust, scalable infrastructure. We shared some details about the technology on our Engineering Blog (http://fb.me/95OQ8YaD2rkb3r). This tech talk digs deeper into some of the twenty different infrastructure services we created for the project as well as how we’re using Apache HBase.

I’m still watching the video, so my notes will follow.

Why HBase?

Choosing HBase at Facebook

  • Strong consistency model
  • Automatic failover
  • Multiple shards per server for load balancing
  • Prevents cascading failures
  • Compression: saves disk space and network bandwidth
  • Read-modify-write operation support, like counter increments (sketched after this list)
  • MapReduce supported out of the box
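The counter support is worth a quick illustration. Here is a minimal sketch using the happybase Python client against HBase’s Thrift gateway; the table and column names are hypothetical, not from Facebook’s talk.

```python
import happybase

conn = happybase.Connection("localhost")  # assumed HBase Thrift gateway
counters = conn.table("metrics")          # hypothetical table

# Each call is a single atomic read-modify-write on the region server,
# so concurrent clients never overwrite each other's updates.
counters.counter_inc(b"page:home", b"stats:views")           # +1
counters.counter_inc(b"page:home", b"stats:views", value=5)  # +5
print(counters.counter_get(b"page:home", b"stats:views"))    # 6, if it started at 0
```

Because the increment is applied on the server, concurrent writers never need a racy client-side read/add/write cycle, which is exactly the property a counting-heavy messaging or analytics workload relies on.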

I’m still not sure why one needs a strong consistency model for messages (and that’s the part missing from all these articles).

As a side note, I feel like the decision was based not on a few major facts, but rather on a sum of small but important features that HBase offers compared to other solutions (e.g. consistent increments, tight integration with Hadoop, etc.).

Original title and link: HBase at Facebook: The Underlying Technology of Messages (NoSQL databases © myNoSQL)


MySQL at Facebook with Mark Callaghan

When Facebook talks MySQL, it usually means BigData MySQL, highly available and scalable MySQL, and, last but not least, NoSQLized MySQL. Mark Callaghan:

And another one from O’Reilly MySQL Conference:


HBase at Facebook and Why Not MySQL or Cassandra

Jeremiah Peschka on:

What makes this decision interesting is not just the reasons that Apache HBase was chosen, but also the reasons that MySQL and Cassandra were not chosen.

Briefly:

  • not MySQL: because sharding can be very difficult at the scale of Facebook’s messaging system
  • not Cassandra: because of its replication behavior
Credit: Jeremiah Peschka

Update: A few readers pointed out that the original article is inaccurate. Todd Lipcon’s comment provides corrections that pretty much refute all the technical arguments in the post. Todd concludes:

That said, the point about existing operational experience with Hadoop and HDFS is absolutely correct. There is a large Hadoop team at Facebook and they are truly experts in the technology. The HBase team there is also growing quickly and have been great contributors in the last several months.

Original title and link: HBase at Facebook and Why Not MySQL or Cassandra (NoSQL databases © myNoSQL)

via: http://facility9.com/2010/11/18/facebook-messaging-hbase-comes-of-age


Facebook Replacing Cassandra with HBase In New Messaging System

Minutes ago Facebook hosted a press conference about their upcoming messaging system, a combination of email and IM[1]. There weren’t many details about the technical solution, except one slide mentioning that while rebuilding the messaging solution:

  • Cassandra was replaced by HBase
  • Haystack was extended to be used for message attachments
  • Thrift, ZooKeeper, and memcached are used by the product

Ryan Rawson[2] tweeted:

Facebook’s choice of #HBase is a validation of a superior scalable architecture. Congrats to the team on some hard, excellent work.

I still need to gather more details before commenting on whether it was a scalability issue or other reasons that led to replacing Cassandra with HBase for Facebook messaging.

Update: more details about the underlying technology of Facebook messaging based on HBase.


  1. It sounded a lot like Google Wave, but that sort of discussion doesn’t fit this blog.
  2. Ryan Rawson, Systems Architect at StumbleUpon, HBase committer, @ryanobjc

Original title and link: Facebook Replacing Cassandra with HBase In New Messaging System (NoSQL databases © myNoSQL)


Videos from Hadoop World

There was one NoSQL conference that I missed, and I was really pissed off about it: Hadoop World. Even though I followed and curated the Twitter feed, resulting in Hadoop World in tweets, the feeling of not being there made me really sad. But now, thanks to Cloudera, I’ll be able to watch most of the presentations. Many of them have already been published and the complete list can be found ☞ here.

Based on the Twitter activity that day, I’ve selected below the ones that seemed to have generated the most buzz. The list contains names like Facebook, Twitter, eBay, Yahoo!, StumbleUpon, comScore, Mozilla, and AOL. And there are quite a few more …


MySQL: An Online Schema Change Tool from Facebook

Most NoSQL databases are either schema-less (key-value stores, document databases, and graph databases) or allow their schemas to be updated easily. Updating a relational database’s schema, on the other hand, can be a challenging operation.

Facebook has ☞ open sourced a tool used internally for online (nb as in no downtime required) MySQL schema updates.

OSC (Online Schema Change) algorithms typically have several phases:

  • copy - where they make a copy of the table
  • build - where they work on the copy until the copy is ready with the new schema
  • replay - where they propagate the changes that happened on the original table to the copy table. This assumes that there is a mechanism for capturing changes.
  • cut-over - where they switch the tables, i.e. rename the copy table as the original. There is typically a small amount of downtime while switching the tables. A small amount of replay is also typically needed during the cut-over.

Note that the above operations can be done within the storage engine itself, or using an external (PHP) script. We followed the latter approach as it can be implemented much faster. An advantage of doing it within the storage engine is that some optimizations can be done that are not available when doing these operations externally.

Quite smart!
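To make the four phases concrete, here is a heavily simplified sketch of the same copy/build/replay/cut-over flow in Python. The table names, the run_sql() stand-in, and the trigger-based change capture are illustrative assumptions only; Facebook’s actual tool is a far more careful PHP script.

```python
# Simplified outline of the copy/build/replay/cut-over phases.
# Not Facebook's actual OSC script; run_sql() is a stand-in for a real
# MySQL connection (e.g. a MySQLdb cursor.execute call).

def run_sql(statement):
    print(statement)  # placeholder: execute against MySQL here

# copy + build: create a shadow table and apply the new schema to it.
run_sql("CREATE TABLE users_new LIKE users")
run_sql("ALTER TABLE users_new ADD COLUMN last_login DATETIME")

# change capture: record writes to the original table while copying
# (triggers into a changelog table are one common mechanism).
run_sql("CREATE TABLE users_changelog (id BIGINT, op CHAR(1))")
run_sql("""CREATE TRIGGER users_osc_ins AFTER INSERT ON users
           FOR EACH ROW INSERT INTO users_changelog VALUES (NEW.id, 'I')""")

# copy existing rows in bounded chunks to avoid long locks.
run_sql("INSERT INTO users_new SELECT u.*, NULL FROM users u WHERE u.id BETWEEN 1 AND 10000")

# replay: re-apply the rows recorded in the changelog to the shadow table
# until it has caught up with the original.

# cut-over: swap the tables atomically; a short final replay (and a small
# window of downtime) happens around this step.
run_sql("RENAME TABLE users TO users_old, users_new TO users")
```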

Original title and link: MySQL: An Online Schema Change Tool from Facebook (NoSQL databases © myNoSQL)

via: http://www.facebook.com/note.php?note_id=430801045932


Diaspora, The Open Source Social Network, Uses MongoDB

Diaspora — the project started as an open source alternative to Facebook at the time Facebook was facing user complaints about its changes to user privacy — has published its first alpha version on GitHub. According to the README, it sounds like Diaspora is using MongoDB.

I am pretty sure that the decision was not based on the recent MongoDB scaling features, but rather on a feature set that made the initial developers feel comfortable and familiar enough to build this first alpha version. On the other hand, seeing Rails 3 in the same list may just mean they tried their hand at the latest and greatest.

Original title and link: Diaspora, The Open Source Social Network, Uses MongoDB (NoSQL databases © myNoSQL)


Hive integrations at Facebook: HBase & RCFile

Recently I’ve written about more integrations for Hive, mentioning that Facebook is working on integrating Hive with HBase.

Yahoo! Developer Network Blog has ☞ published the video of John Sichi and Yongqiang He of Facebook presenting on the Hive and HBase integration at the Hadoop Summit[1]:


  1. You can find some of the news coming from the 2010 Hadoop Summit in this post: Hadoop and HBase status updates after Hadoop Summit