More applications of HBase at Facebook, after the new messaging system:
If you are interested in reading more about Facebook Messages, here’s a list of posts:
- Facebook replacing Cassandra with HBase in new messaging system
- The underlying technology of messages using HBase
- HBase at Facebook and why not MySQL or Cassandra
- HBase at Facebook: A technical presentation of the underlying technology
- Facebook messages: a presentation from FOSDEM
There have been lots of discussions and speculations after the announcement that Facebook is using HBase for the new messaging system. In case you missed it, here are the most important bits:
Kannan Muthukkaruppan: The underlying Technology of Messages (facebook.com)
We spent a few weeks setting up a test framework to evaluate clusters of MySQL, Apache Cassandra, Apache HBase, and a couple of other systems. We ultimately chose HBase. MySQL proved to not handle the long tail of data well; as indexes and data sets grew large, performance suffered. We found Cassandra’s eventual consistency model to be a difficult pattern to reconcile for our new Messages infrastructure.
While setting a write consistency level of ALL with a read level of ONE in Cassandra provides a strong consistency model similar to what HBase provides (and in fact using quorum writes and reads would as well), the two operations are actually semantically different and lead to different durability and availability guarantees.
Cassandra mailing list: Facebook messaging and choice of HBase over Cassandra
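To make the consistency-level trade-off above concrete, here is a minimal sketch of the two configurations using the DataStax Python driver. Treat it purely as an illustration: the keyspace, table, and queries are hypothetical, the driver (and CQL itself) post-dates Facebook’s evaluation, and the code only shows how the consistency levels are expressed, not anything Facebook ran.

```python
# Illustration only: expressing write-ALL/read-ONE vs. quorum reads and writes
# in Cassandra with the DataStax Python driver. Keyspace and table are made up.
from cassandra import ConsistencyLevel
from cassandra.cluster import Cluster
from cassandra.query import SimpleStatement

session = Cluster(["127.0.0.1"]).connect("inbox")

# Write to ALL replicas, read from ONE: reads always see the latest write,
# but a single unavailable replica fails every write.
write_all = SimpleStatement(
    "INSERT INTO messages (user_id, msg_id, body) VALUES (%s, %s, %s)",
    consistency_level=ConsistencyLevel.ALL)
read_one = SimpleStatement(
    "SELECT body FROM messages WHERE user_id = %s AND msg_id = %s",
    consistency_level=ConsistencyLevel.ONE)

# Quorum writes and reads also overlap (W + R > N), so a read sees the latest
# acknowledged write while tolerating a minority of replicas being down.
write_quorum = SimpleStatement(
    "INSERT INTO messages (user_id, msg_id, body) VALUES (%s, %s, %s)",
    consistency_level=ConsistencyLevel.QUORUM)
read_quorum = SimpleStatement(
    "SELECT body FROM messages WHERE user_id = %s AND msg_id = %s",
    consistency_level=ConsistencyLevel.QUORUM)

session.execute(write_quorum, ("user42", "msg-001", "hello"))
row = session.execute(read_quorum, ("user42", "msg-001")).one()
```

The availability difference is exactly the semantic gap the quote refers to: both setups give read-your-writes behavior, but they fail in different ways when replicas go down.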
Todd Hoff: Facebook’s New Real-Time Messaging System: HBase to Store 135+ Billion Messages a Month (highscalability.com)
HBase is a scaleout table store supporting very high rates of row-level updates over massive amounts of data. Exactly what is needed for a Messaging system. HBase is also a column based key-value store built on the BigTable model. It’s good at fetching rows by key or scanning ranges of rows and filtering. Also what is needed for a Messaging system. Complex queries are not supported however. Queries are generally given over to an analytics tool like Hive, which Facebook created to make sense of their multi-petabyte data warehouse, and Hive is based on Hadoop’s file system, HDFS, which is also used by HBase.
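As a rough illustration of the access patterns described above (fetch a row by key, scan a contiguous range of keys), here is a sketch using the happybase Python client against a hypothetical messages table whose row keys are prefixed by user. It shows the shape of the HBase API, not Facebook’s actual schema.

```python
# Sketch of HBase's row-key access patterns via the happybase client
# (talks to the HBase Thrift gateway). Table name and key layout are made up.
import happybase

connection = happybase.Connection("hbase-thrift-host")
table = connection.table("messages")

# Fetch a single row by key: all columns of one message.
message = table.row(b"user42|2010-11-15|msg-001")

# Scan a range of row keys: every message of one user, relying on row keys
# being stored in lexicographic order so a shared prefix is a contiguous range.
for key, columns in table.scan(row_prefix=b"user42|"):
    print(key, columns.get(b"body:text"))
```

Anything more complex than key lookups and range scans is, as the quote notes, pushed out to Hive and MapReduce over the same HDFS data.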
Jeremiah Peschka: Facebook messaging - HBase Comes of Age (facility9.com)
Existing expertise: The technology behind HBase – Hadoop and HDFS – is very well understood and has been used previously at Facebook. […] Since Hive makes use of Hadoop and HDFS, these shared technologies are well understood by Facebook’s operations teams. As a result, the same technology that allows Facebook to scale their data will be the technology that allows Facebook to scale their Social Messaging feature. The operations team already understands many of the problems they will encounter.
Facebook has an internal branch of HBase which periodically updates from the Apache SVN. As far as I know, the current version in production is very similar to the 0.89.20100924 development release with a couple more patches pulled in from trunk.
Facebook engineers continue to actively contribute to the open source trunk, though - it’s not an internal “fork”.
Todd Lipcon (HBase committer)
The engineering team behind Facebook’s new messaging system has now posted a video talking more about their choice of HBase. You can watch the video, a bit over an hour long, here.
The engineering team behind Facebook Messages spent the past year building out a robust, scalable infrastructure. We shared some details about the technology on our Engineering Blog (http://fb.me/95OQ8YaD2rkb3r). This tech talk digs deeper into some of the twenty different infrastructure services we created for the project as well as how we’re using Apache HBase.
I’m still watching the video, so my notes will follow.
- Strong consistency model
- Automatic failover
- Multiple shards per server for load balancing
- Prevents cascading failures
- Compression: saves disk space and network bandwidth
- Read-modify-write operation support, like counter increments (see the sketch after this list)
- MapReduce supported out of the box
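To show what the read-modify-write point means in practice, here is a small sketch of HBase’s atomic counter increments, again using the happybase client and made-up row and column names (an unread-message counter is just a plausible use, not a confirmed detail of Facebook’s schema).

```python
# Atomic read-modify-write in HBase: increment a counter column server-side,
# without a client-side read/compare/write cycle. Names are hypothetical.
import happybase

table = happybase.Connection("hbase-thrift-host").table("messages")

# Bump the unread-message counter for a user's inbox by one; the increment is
# applied atomically on the region server that owns the row.
new_value = table.counter_inc(b"user42", b"stats:unread", value=1)

# Read the current counter value back.
current = table.counter_get(b"user42", b"stats:unread")
```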
I’m still not sure why one needs a strong consistency model for messages (and that’s the part missing from all these articles).
As a side note, I feel like the decision was based not on some major facts, but rather on a sum of small but important features that HBase was offering compared to other solutions (e.g. consistent increments, perfect integration with Hadoop, etc.).
Original title and link: HBase at Facebook: The Underlying Technology of Messages (NoSQL databases © myNoSQL)
When Facebook talks MySQL, it usually means BigData MySQL, high availability and scalable MySQL, and, last but not least, NoSQLized MySQL. Mark Callaghan:
And another one from O’Reilly MySQL Conference:
Minutes ago, Facebook hosted a press conference about their upcoming messaging system, a combination of email and IM. There weren’t many details about the technical solution, except one slide mentioning that while rebuilding the messaging solution:
- Cassandra was replaced by HBase
- Haystack was extended to be used for message attachments
- Thrift, ZooKeeper, and memcached are used by the product
Facebook’s choice of #HBase is a validation of a superior scalable architecture. Congrats to the team on some hard, excellent work.
I still need to gather more details about this before commenting on whether it was a scalability issue or other reasons that led to replacing Cassandra with HBase for Facebook messaging.
Update: more details about the underlying technology of Facebook messaging based on HBase.
Original title and link: Facebook Replacing Cassandra with HBase In New Messaging System (NoSQL databases © myNoSQL)
There was one NoSQL conference that I missed, and I was really pissed off about it: Hadoop World. Even though I followed and curated the Twitter feed, resulting in Hadoop World in tweets, the feeling of not being there made me really sad. But now, thanks to Cloudera, I’ll be able to watch most of the presentations. Many of them have already been published and the complete list can be found ☞ here.
Based on the Twitter activity that day, I’ve selected below the ones that seemed to have generated the most buzz. The list contains names like Facebook, Twitter, eBay, Yahoo!, StumbleUpon, comScore, Mozilla, AOL. And there are quite a few more …
Diaspora — the project that started as an open source alternative to Facebook at a time when Facebook was facing user complaints about its changes to user privacy — has published its first alpha version on GitHub. According to the README, it sounds like Diaspora is using MongoDB.
I am pretty sure that the decision was not made based on the recent MongoDB scaling features, but rather on the feature set that made the initial developers feel comfortable and familiar while building this first alpha version. On the other hand, seeing Rails 3 in the same list may just mean they tried their hand at the latest and greatest.
Original title and link: Diaspora, The Open Source Social Network, Uses MongoDB (NoSQL databases © myNoSQL)
Recently I wrote about more integration for Hive, mentioning that Facebook is working on integrating Hive with HBase.
- You can find some of the news coming from the Hadoop 2010 Summit in this post: Hadoop and HBase status updates after Hadoop Summit
I read this ☞ post about Cloudera’s Flume with much interest. Flume sounds like a very interesting tool, not to mention that from Cloudera’s business perspective it makes a lot of sense:
We’ve seen our customers have great success using Hadoop for processing their data, but the question of how to get the data there to process in the first place was often significantly more challenging.
Just in case you didn’t have the time to read about Flume yet, here’s a short description from the ☞ GitHub project page:
Flume is a distributed, reliable, and available service for efficiently collecting, aggregating, and moving large amounts of log data. It has a simple and flexible architecture based on streaming data flows. It is robust and fault tolerant with tunable reliability mechanisms and many failover and recovery mechanisms. The system is centrally managed and allows for intelligent dynamic management. It uses a simple extensible data model that allows for online analytic applications.
In a way this sounded a bit familiar. I thought I’d seen something similar before: ☞ Scribe:
Scribe is a server for aggregating streaming log data. It is designed to scale to a very large number of nodes and be robust to network and node failures. There is a scribe server running on every node in the system, configured to aggregate messages and send them to a central scribe server (or servers) in larger groups. If the central scribe server isn’t available the local scribe server writes the messages to a file on local disk and sends them when the central server recovers. The central scribe server(s) can write the messages to the files that are their final destination, typically on an nfs filer or a distributed filesystem, or send them to another layer of scribe servers.
So my question is: how do Flume and Scribe compare? What are the major differences, and which scenarios are better suited to one or the other?
If you have the answer to any of these questions, please drop a comment or send me an email.
1. Flume allows you to configure your Flume installation from a central point, without having to ssh into every machine, update a configuration variable and restart a daemon or two. You can start, stop, create, delete and reconfigure logical nodes on any machine running Flume from any command line in your network with the Flume jar available.
2. Flume also has centralised liveness monitoring. We’ve heard a couple of stories of Scribe processes silently failing, but lying undiscovered for days until the rest of the Scribe installation starts creaking under the increased load. Flume allows you to see the health of all your logical nodes in one place (note that this is different from machine liveness monitoring; often the machine stays up while the process might fail).
3. Flume supports three distinct types of reliability guarantees, allowing you to make tradeoffs between resource usage and reliability. In particular, Flume supports fully ACKed reliability, with the guarantee that all events will eventually make their way through the event flow.
4. Flume’s also really extensible - it’s really easy to write your own source or sink and integrate most any system with Flume. If rolling your own is impractical, it’s often very straightforward to have your applications output events in a form that Flume can understand (Flume can run Unix processes, for example, so if you can use shell script to get at your data, you’re golden).
— Henry Robinson
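For a feel of how a Flume data flow is wired together (a source feeding a sink through a channel, with the reliability knobs Henry mentions set per flow), here is a minimal configuration sketch. Two caveats: it uses the properties-file syntax of the later Flume NG rather than the logical-node shell of the original Cloudera Flume described above, and every name and path in it is made up.

```
# Hypothetical Flume NG agent: tail a local log file and land events in HDFS.
agent.sources  = tail-src
agent.channels = mem-ch
agent.sinks    = hdfs-sink

# Source: run a Unix process and turn its output lines into events.
agent.sources.tail-src.type = exec
agent.sources.tail-src.command = tail -F /var/log/app/access.log
agent.sources.tail-src.channels = mem-ch

# Channel: in-memory buffer between source and sink (reliability/throughput knob).
agent.channels.mem-ch.type = memory
agent.channels.mem-ch.capacity = 10000

# Sink: write events into date-partitioned directories on HDFS.
agent.sinks.hdfs-sink.type = hdfs
agent.sinks.hdfs-sink.hdfs.path = hdfs://namenode/flume/events/%Y-%m-%d
agent.sinks.hdfs-sink.hdfs.useLocalTimeStamp = true
agent.sinks.hdfs-sink.channel = mem-ch
```

A single agent like this covers the simplest case; the centralized configuration, liveness monitoring, and acknowledged delivery modes from the list above sit on top of flows of this shape.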
In the same thread, I’m reading about another tool, ☞ Chukwa:
Chukwa is a Hadoop subproject devoted to large-scale log collection and analysis. Chukwa is built on top of the Hadoop distributed filesystem (HDFS) and MapReduce framework and inherits Hadoop’s scalability and robustness. Chukwa also includes a flexible and powerful toolkit for displaying, monitoring and analyzing results, in order to make the best use of this collected data.