
Papers: Novel Erasure Codes for Big Data from Facebook

Authored by a mixed team from the University of Southern California, the University of Texas, and Facebook, this paper presents a new family of erasure codes more efficient than Reed-Solomon codes:

Distributed storage systems for large clusters typically use replication to provide reliability. Recently, erasure codes have been used to reduce the large storage overhead of three-replicated systems. Reed-Solomon codes are the standard design choice and their high repair cost is often considered an unavoidable price to pay for high storage efficiency and high reliability.

This paper shows how to overcome this limitation. We present a novel family of erasure codes that are efficiently repairable and offer higher reliability compared to Reed-Solomon codes. We show analytically that our codes are optimal on a recently identified tradeoff between locality and minimum distance.

We implement our new codes in Hadoop HDFS and compare to a currently deployed HDFS module that uses Reed-Solomon codes. Our modified HDFS implementation shows a reduction of approximately 2× on the repair disk I/O and repair network traffic. The disadvantage of the new coding scheme is that it requires 14% more storage compared to Reed-Solomon codes, an overhead shown to be information theoretically optimal to obtain locality. Because the new codes repair failures faster, this provides higher reliability, which is orders of magnitude higher compared to replication.
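
✚ To put the abstract's numbers in perspective, here is a back-of-envelope comparison, sketched in Python, of the three schemes discussed: 3-way replication, the RS(10, 4) code deployed in Facebook's HDFS-RAID, and the paper's LRC, which adds two local parity blocks on top of RS(10, 4). The code parameters are the ones reported in the paper; the repair-read counts are my own simplified accounting, not the paper's exact measurements.

    # Back-of-envelope comparison of storage overhead and single-block repair
    # cost. The (10, 4) Reed-Solomon layout and the two extra local parities
    # of the LRC are the parameters reported in the paper; the repair-read
    # counts below are a simplified sketch.

    DATA_BLOCKS = 10  # data blocks per stripe

    schemes = {
        # name: (blocks stored per stripe, blocks read to repair one lost data block)
        "3-way replication": (3 * DATA_BLOCKS, 1),        # just copy a surviving replica
        "RS(10, 4)": (DATA_BLOCKS + 4, 10),               # decoding needs a full stripe's worth
        "LRC: RS(10, 4) + 2 local parities": (DATA_BLOCKS + 4 + 2, 5),  # repair inside a 5-block group
    }

    for name, (stored, repair_reads) in schemes.items():
        print(f"{name:35s} overhead {stored / DATA_BLOCKS:.1f}x, "
              f"repair reads {repair_reads} block(s)")

    # Relative cost of the LRC versus plain RS(10, 4): ~14% more storage,
    # in exchange for roughly half the repair disk I/O and network traffic.
    print(f"extra storage vs RS(10, 4): {16 / 14 - 1:.0%}")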

✚ Robin Harris has a good summary of the paper on StorageMojo:

LRC [Locally Repairable Codes] testing produced several key results.

  • Disk I/O and network traffic were reduced by half compared to RS codes.
  • The LRC required 14% more storage than RS, an overhead that is information-theoretically optimal for the obtained locality.
  • Repair times were much lower thanks to the local repair codes.
  • Much greater reliability thanks to fast repairs.
  • Reduced network traffic makes them suitable for geographic distribution.
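
✚ The halved repair traffic in the first bullet comes from locality: a lost block is rebuilt from its small local group instead of from a full Reed-Solomon stripe. The toy Python sketch below illustrates the idea with a plain XOR parity per group of five data blocks (the grouping the paper uses for its stripes); the paper's actual LRCs use linear combinations over a finite field rather than XOR, but the locality argument, five reads instead of ten, is the same.

    import os
    from functools import reduce

    GROUP_SIZE = 5   # data blocks per local group
    BLOCK_BYTES = 8  # tiny blocks, only for the demo

    def xor_blocks(blocks):
        """XOR a list of equal-length byte strings together."""
        return reduce(lambda a, b: bytes(x ^ y for x, y in zip(a, b)), blocks)

    # One local group of five data blocks plus its local parity.
    group = [os.urandom(BLOCK_BYTES) for _ in range(GROUP_SIZE)]
    local_parity = xor_blocks(group)

    # Lose one block and repair it from the four surviving group members plus
    # the local parity: 5 reads, instead of the 10 a full RS(10, 4) decode needs.
    lost = 2
    survivors = [b for i, b in enumerate(group) if i != lost]
    repaired = xor_blocks(survivors + [local_parity])

    assert repaired == group[lost]
    print(f"repaired block {lost} by reading {len(survivors) + 1} blocks")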

✚ While erasure codes are meant to reduce storage requirements, it also seems to me that they introduce a limitation for distributed data processing systems like Hadoop: having multiple copies of the data available in the cluster allows for better I/O performance than in erasure-coded clusters, where only a single copy of the data exists.

✚ There is also a paper studying erasure codes on the Facebook warehouse cluster, authored by a mixed team from Berkeley and Facebook: A solution to the network challenges of data recovery in erasure-coded distributed storage systems: a study on the Facebook warehouse cluster:

Our study reveals that recovery of RS-coded [Reed-Solomon] data results in a significant increase in network traffic, more than a hundred terabytes per day, in a cluster storing multiple petabytes of RS-coded data.

To address this issue, we present a new storage code using our recently proposed Piggybacking framework, that reduces the network and disk usage during recovery by 30% in theory, while also being storage optimal and supporting arbitrary design parameters.

Original title and link: Papers: Novel Erasure Codes for Big Data from Facebook (NoSQL database©myNoSQL)