In 4 years of writing this blog I haven’t seen such a prolific month:
- Apache Hadoop 2.2.0 (more links here)
- Apache HBase 0.96 (here and here)
- Apache Hive 0.12 (more links here)
- Apache Ambari 1.4.1
- Apache Pig 0.12
- Apache Oozie 4.0.0
- Plus Presto.
Actually, I don’t think I’ve ever seen an ecosystem quite like the one created around Hadoop.
Original title and link: A prolific season for Hadoop and its ecosystem ( ©myNoSQL)
Authored by a mixed team from the University of Southern California, the University of Texas, and Facebook, a paper about a new family of erasure codes more efficient than Reed-Solomon codes:
Distributed storage systems for large clusters typically use replication to provide reliability. Recently, erasure codes have been used to reduce the large storage overhead of three-replicated systems. Reed-Solomon codes are the standard design choice and their high repair cost is often considered an unavoidable price to pay for high storage efficiency and high reliability.
This paper shows how to overcome this limitation. We present a novel family of erasure codes that are efficiently repairable and offer higher reliability compared to Reed-Solomon codes. We show analytically that our codes are optimal on a recently identified tradeoff between locality and minimum distance.
We implement our new codes in Hadoop HDFS and compare to a currently deployed HDFS module that uses Reed-Solomon codes. Our modified HDFS implementation shows a reduction of approximately 2× on the repair disk I/O and repair network traffic. The disadvantage of the new coding scheme is that it requires 14% more storage compared to Reed-Solomon codes, an overhead shown to be information theoretically optimal to obtain locality. Because the new codes repair failures faster, this provides higher reliability, which is orders of magnitude higher compared to replication.
✚ Robin Harris has a good summary of the paper on StorageMojo:
LRC [Locally Repairable Codes] testing found several key results.
- Disk I/O and network traffic were reduced by half compared to RS codes.
- The LRC required 14% more storage than RS, information theoretically optimal for the obtained locality.
- Repair times were much lower thanks to the local repair codes.
- Much greater reliability thanks to fast repairs.
- Reduced network traffic makes them suitable for geographic distribution.
✚ While erasure codes are meant to reduce storage requirements, it seems to me that they also introduce a limitation for distributed data processing systems like Hadoop: having multiple copies of the data available in the cluster allows for better I/O performance than clusters using erasure codes, where there’s only a single copy of the data.
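✚ To put numbers on that tradeoff, here is a back-of-the-envelope sketch comparing 3x replication, RS(10,4), and the paper’s LRC-style code (four global plus two local parities) on storage overhead and blocks read per repair. The block counts are my reading of the paper’s setup, so treat them as illustrative rather than exact:

```python
# Back-of-the-envelope comparison of the schemes discussed above.
# Parameters approximate the paper's HDFS-Xorbas setup; illustrative only.

SCHEMES = {
    # name: (blocks stored per 10 data blocks, blocks read to repair 1 lost block)
    "3x replication":      (30, 1),
    "Reed-Solomon (10,4)": (14, 10),
    "LRC (10,6,5)":        (16, 5),  # 4 global + 2 local parities; repair stays in a group of 5
}

for name, (stored, repair_reads) in SCHEMES.items():
    print(f"{name:20s} storage overhead={stored / 10:.1f}x  repair reads={repair_reads} blocks")

# Relative to RS(10,4): 16/14 ≈ 1.14, i.e. the 14% extra storage quoted above,
# while repair reads drop from 10 to 5 blocks, i.e. the ~2x repair I/O reduction.
```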
✚ There’s also a study of erasure codes on the Facebook warehouse cluster, authored by a mixed team from Berkeley and Facebook: A solution to the network challenges of data recovery in erasure-coded distributed storage systems: a study on the Facebook warehouse cluster:
Our study reveals that recovery of RS-coded [Reed-Solomon] data results in a significant increase in network traffic, more than a hundred terabytes per day, in a cluster storing multiple petabytes of RS-coded data.
To address this issue, we present a new storage code using our recently proposed Piggybacking framework, that reduces the network and disk usage during recovery by 30% in theory, while also being storage optimal and supporting arbitrary design parameters.
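✚ For a sense of how that recovery traffic accumulates, here is a quick arithmetic sketch; the block size and daily repair count are assumptions of mine, not figures from the paper:

```python
# Rough illustration of why RS repair traffic adds up at warehouse scale.
# Block size and repair counts below are assumed for illustration only.

BLOCK_MB = 256                     # assumed HDFS block size
K = 10                             # data blocks per RS(10,4) stripe
repaired_blocks_per_day = 50_000   # hypothetical number of lost blocks per day

per_repair_gb = K * BLOCK_MB / 1024              # RS reads k full blocks per lost block
daily_traffic_tb = repaired_blocks_per_day * per_repair_gb / 1024

print(f"per-block repair transfer: {per_repair_gb:.1f} GB")
print(f"estimated daily recovery traffic: {daily_traffic_tb:.0f} TB")
# ~2.5 GB per lost block means a few tens of thousands of block repairs a day
# already lands in the 'hundred terabytes per day' range quoted above, so a
# 30% reduction from Piggybacking is a substantial saving at that scale.
```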
Original title and link: Papers: Novel Erasure Codes for Big Data from Facebook ( ©myNoSQL)
Another weekend read, this time from Facebook and The Ohio State University and closer to the hot topic of the last two weeks: SQL, MapReduce, Hadoop:
MapReduce has become an effective approach to big data analytics in large cluster systems, where SQL-like queries play important roles to interface between users and systems. However, based on our Facebook daily operation results, certain types of queries are executed at an unacceptably low speed by Hive (a production SQL-to-MapReduce translator). In this paper, we demonstrate that existing SQL-to-MapReduce translators that operate in a one-operation-to-one-job mode and do not consider query correlations cannot generate high-performance MapReduce programs for certain queries, due to the mismatch between complex SQL structures and simple MapReduce framework. We propose and develop a system called YSmart, a correlation aware SQL-to-MapReduce translator. YSmart applies a set of rules to use the minimal number of MapReduce jobs to execute multiple correlated operations in a complex query. YSmart can significantly reduce redundant computations, I/O operations and network transfers compared to existing translators. We have implemented YSmart with intensive evaluation for complex queries on two Amazon EC2 clusters and one Facebook production cluster. The results show that YSmart can outperform Hive and Pig, two widely used SQL-to-MapReduce translators, by more than four times for query execution.
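The core idea is easier to see with a toy example. The sketch below is not YSmart’s actual algorithm, and the query shape and names are invented, but it shows how operations that share the same input and partition key can be collapsed into fewer MapReduce jobs than a one-operation-to-one-job translator would emit:

```python
# Minimal sketch of the idea behind correlation-aware translation
# (not YSmart's implementation): operations that scan the same input
# and partition on the same key can share one MapReduce job.

from collections import defaultdict

# Hypothetical query plan: three aggregations over the same clicks table,
# all grouped by user_id, followed by a join of their results on user_id.
operations = [
    {"name": "agg_clicks",      "input": "clicks",     "key": "user_id"},
    {"name": "agg_impressions", "input": "clicks",     "key": "user_id"},
    {"name": "agg_spend",       "input": "clicks",     "key": "user_id"},
    {"name": "join_results",    "input": "aggregates", "key": "user_id"},
]

# Naive one-operation-to-one-job translation: one MR job per operation.
print(f"naive plan: {len(operations)} MapReduce jobs")

# Correlation-aware translation: merge operations sharing (input, key).
groups = defaultdict(list)
for op in operations:
    groups[(op["input"], op["key"])].append(op["name"])
print(f"correlation-aware plan: {len(groups)} MapReduce jobs")
# YSmart goes further and can also fold the final join into the same reduce
# phase when it partitions on the same key, saving yet another job.
```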
Facebook engineering: Under the Hood: Scheduling MapReduce jobs more efficiently with Corona:
It was pretty clear that we would ultimately need a better scheduling framework that would improve this situation in the following ways:
- Better scalability and cluster utilization
- Lower latency for small jobs
- Ability to upgrade without disruption
- Scheduling based on actual task resource requirements rather than a count of map and reduce tasks
Hadoop deployment at Facebook:
- 60,000 Hive queries/day
- used by > 1,000 people
Is Hive the preferred way Hadoop is used at Facebook?
Facebook is running its own version of HDFS. Once you fork, integrating upstream changes becomes a nightmare.
- How to deploy and test new features at scale: rank types of users and roll out the new feature starting with the less critical scenarios. You must be able to correctly route traffic or users.
- At scale, cluster utilization is a critical metric. All the improvements in Corona are derived from this.
- Traditional analytic databases have advanced resource-based scheduling for a long time. Hadoop needs this.
- Open source at Facebook:
- create a tool that addresses an internal problem
- open source it: throw it out in the wild (nb: is there any Facebook open source project they continued to maintain?)
- Option 1: continue to develop it internally. Option 2: drop it
- if by any chance the open source project survives and becomes a standalone project, catch up from time to time
- re-fork it
- why not YARN? The best answer I could find is Joydeep Sen Sarma’s on Quora. Summarized:
- Corona uses a push-based, event-driven, callback oriented message flow
- Corona’s JobTracker can run in the same VM with the Job Client
- Corona is integrated with the Hadoop trunk Fair-Scheduler, which was rewritten at Facebook
- Corona’s resource manager uses optimistic locking
- Corona uses Thrift, while others are looking at Protobuf or Avro
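To illustrate the first point, here is a toy sketch of the difference between the classic pull-based heartbeat model and a push-based, callback-driven flow. It only illustrates the message pattern; it is not Corona’s code, and the class and method names are invented:

```python
# Toy contrast between pull-based heartbeat scheduling (the classic Hadoop
# JobTracker model) and a push-based, callback-driven flow like Corona's.

class PullScheduler:
    """Tasks are handed out only when a tasktracker's heartbeat polls in,
    so assignment latency is bounded below by the heartbeat interval."""
    def __init__(self, tasks):
        self.tasks = list(tasks)

    def heartbeat(self, tracker):
        return self.tasks.pop(0) if self.tasks else None

class PushScheduler:
    """Resource grants are pushed to a registered callback as soon as a
    slot frees up, removing the heartbeat round-trip from the hot path."""
    def __init__(self):
        self.waiting = []              # callbacks of jobs waiting for slots

    def register(self, callback):
        self.waiting.append(callback)

    def slot_freed(self, slot):
        if self.waiting:
            self.waiting.pop(0)(slot)  # event-driven: notify immediately

# Pull model: nothing is assigned until the next heartbeat arrives.
pull = PullScheduler(tasks=["map_0", "map_1"])
print("heartbeat ->", pull.heartbeat("tracker-a"))

# Push model: the grant goes out the moment the slot is free.
push = PushScheduler()
push.register(lambda slot: print("granted ->", slot))
push.slot_freed("node42:slot3")
```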
Original title and link: Facebook Corona: A Different Approach to Job Scheduling and Resource Management ( ©myNoSQL)
Nice screenshot by the TechCrunch people of the slide describing the data lifecycle at Facebook:
Based on this, you’ll now have a better picture of how Facebook’s data ingestion numbers correlate with their architecture.
Original title and link: Life of Data at Facebook ( ©myNoSQL)