Facebook: All content tagged as Facebook in NoSQL databases and polyglot persistence

Scaling the Facebook data warehouse to 300 PB

Fascinating read, raising interesting observations on different levels:

  1. At Facebook, data warehouse means Hadoop and Hive.

    Our warehouse stores upwards of 300 PB of Hive data, with an incoming daily rate of about 600 TB.

  2. I don’t see how in-memory solutions, like Hana, will expand their market.

    In “Enterprise Data Warehouses and the first Hadoop squeeze”, Rob Klopp predicted a squeeze of the EDW market under the pressure of in-memory DBMSs and Hadoop. I still think that in-memory will become just a custom engine in the Hadoop toolkit and in existing EDW products.

    On the always-mentioned argument that “not everybody is Facebook”, I think the part that gets swept under the rug is that today’s data size is the smallest you’ll ever have (a quick back-of-the-envelope sketch of what this means follows this list).

    In the last year, the warehouse has seen a 3x growth in the amount of data stored. Given this growth trajectory, storage efficiency is and will continue to be a focus for our warehouse infrastructure.

  3. At Facebook’s scale, balancing availability and costs is again a challenge. But there’s no mention of network attached storage.

    There are many areas we are innovating in to improve storage efficiency for the warehouse – building cold storage data centers, adopting techniques like RAID in HDFS to reduce replication ratios (while maintaining high availability), and using compression for data reduction before it’s written to HDFS.

  4. For the nuts and bolts of effectively optimizing compression, read the rest of the post, which covers the optimizations Facebook brought to the ORCFile format.

    There seem to be two competing formats at play: ORCFile (with support from Hortonworks and Facebook) and Parquet (with support from Twitter and Cloudera). Unfortunately I don’t have a good comparison of the two, and I couldn’t find one either (why?).
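
To make the growth point concrete, here is a quick back-of-the-envelope sketch in Python. It is mine, not Facebook’s: the 300 PB, 600 TB/day and 3x figures are quoted from the post, while the ~1.4x erasure-coding overhead is an assumption loosely based on published HDFS-RAID configurations.

    # Back-of-the-envelope sketch: what the quoted numbers imply for raw capacity
    # under plain 3-way replication versus an assumed erasure-coded layout.
    LOGICAL_PB = 300        # warehouse size quoted in the post
    DAILY_INGEST_TB = 600   # incoming daily rate quoted in the post
    YEARLY_GROWTH = 3       # growth over the last year, quoted in the post

    def raw_pb(logical_pb, overhead):
        """Raw disk needed for a given storage overhead (3.0 = 3-way replication)."""
        return logical_pb * overhead

    for label, overhead in [("3-way replication", 3.0),
                            ("erasure coded (assumed ~1.4x)", 1.4)]:
        print(f"{label:30s} ~{raw_pb(LOGICAL_PB, overhead):.0f} PB raw today, "
              f"~{raw_pb(LOGICAL_PB * YEARLY_GROWTH, overhead):.0f} PB raw after another 3x year")

    # Daily raw write volume under each scheme:
    print(f"daily raw writes: {DAILY_INGEST_TB * 3.0:.0f} TB replicated vs "
          f"{DAILY_INGEST_TB * 1.4:.0f} TB erasure coded (assumed)")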

Original title and link: Scaling the Facebook data warehouse to 300 PB (NoSQL database©myNoSQL)

via: https://code.facebook.com/posts/229861827208629/scaling-the-facebook-data-warehouse-to-300-pb/


A prolific season for Hadoop and its ecosystem

In 4 years of writing this blog I haven’t seen such a prolific month:

  • Apache Hadoop 2.2.0 (more links here)
  • Apache HBase 0.96 (here and here)
  • Apache Hive 0.12 (more links here)
  • Apache Ambari 1.4.1
  • Apache Pig 0.12
  • Apache Oozie 4.0.0
  • Plus Presto.

Actually, I don’t think I’ve ever seen an ecosystem like the one created around Hadoop.

Original title and link: A prolific season for Hadoop and its ecosystem (NoSQL database©myNoSQL)


Papers: Novel Erasure Codes for Big Data from Facebook

Authored by a mixed team from the University of Southern California, the University of Texas, and Facebook, this is a paper about a new family of erasure codes more efficient than Reed-Solomon codes:

Distributed storage systems for large clusters typically use replication to provide reliability. Recently, erasure codes have been used to reduce the large storage overhead of three-replicated systems. Reed-Solomon codes are the standard design choice and their high repair cost is often considered an unavoidable price to pay for high storage efficiency and high reliability.

This paper shows how to overcome this limitation. We present a novel family of erasure codes that are efficiently repairable and offer higher reliability compared to Reed-Solomon codes. We show analytically that our codes are optimal on a recently identified tradeoff between locality and minimum distance.

We implement our new codes in Hadoop HDFS and compare to a currently deployed HDFS module that uses Reed-Solomon codes. Our modified HDFS implementation shows a reduction of approximately 2× on the repair disk I/O and repair network traffic. The disadvantage of the new coding scheme is that it requires 14% more storage compared to Reed-Solomon codes, an overhead shown to be information theoretically optimal to obtain locality. Because the new codes repair failures faster, this provides higher reliability, which is orders of magnitude higher compared to replication.

✚ Robin Harris has a good summary of the paper on StorageMojo:

LRC [Locally Repairable Codes] test results found several key results.

  • Disk I/O and network traffic were reduced by half compared to RS codes.
  • The LRC required 14% more storage than RS, information theoretically optimal for the obtained locality.
  • Repair times were much lower thanks to the local repair codes.
  • Much greater reliability thanks to fast repairs.
  • Reduced network traffic makes them suitable for geographic distribution.

✚ While erasure codes are meant to reduce storage requirements, it seems to me that they also introduce a limitation for distributed data processing systems like Hadoop: having multiple copies of the data available in the cluster allows better I/O parallelism than in clusters using erasure codes, where there’s only a single copy of the data.
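
To see where the ~2× repair reduction and the 14% extra storage come from, here is a toy model of my own (not the paper’s code), using the (10, 4) layout typically discussed for HDFS-RAID; the 256 MB block size is an assumption.

    # Toy repair-cost model: blocks stored per 10 data blocks, and blocks that must
    # be read to rebuild a single lost block, for three layouts.
    BLOCK_MB = 256  # assumed HDFS block size, illustrative only

    schemes = {
        # name: (blocks stored per 10 data blocks, blocks read to repair 1 lost block)
        "3-way replication":    (30, 1),   # just copy a surviving replica
        "Reed-Solomon (10,4)":  (14, 10),  # decode from any 10 of the remaining 13
        "LRC (10,4) + 2 local": (16, 5),   # rebuild inside a 5-block local group
    }

    for name, (stored, reads) in schemes.items():
        print(f"{name:22s} overhead {stored / 10:.1f}x, "
              f"repair reads ~{reads * BLOCK_MB} MB per lost {BLOCK_MB} MB block")

In this model, Reed-Solomon reads 10 blocks to repair one; the two extra local parities (one per group of five data blocks) cut that to 5, which is the roughly 2× reduction, while 16 blocks per stripe instead of 14 is the ~14% storage premium.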

✚ There’s also a study of erasure codes on the Facebook warehouse cluster, authored by a mixed team from Berkeley and Facebook: “A solution to the network challenges of data recovery in erasure-coded distributed storage systems: a study on the Facebook warehouse cluster”:

Our study reveals that recovery of RS-coded [Reed-Solomon] data results in a significant increase in network traffic, more than a hundred terabytes per day, in a cluster storing multiple petabytes of RS-coded data.

To address this issue, we present a new storage code using our recently proposed Piggybacking framework, that reduces the network and disk usage during recovery by 30% in theory, while also being storage optimal and supporting arbitrary design parameters.

Original title and link: Papers: Novel Erasure Codes for Big Data from Facebook (NoSQL database©myNoSQL)


Paper: YSmart - Yet Another SQL-to-MapReduce Translator

Another weekend read, this time from Facebook and The Ohio State University, and closer to the hot topic of the last two weeks (SQL, MapReduce, Hadoop):

MapReduce has become an effective approach to big data analytics in large cluster systems, where SQL-like queries play important roles to interface between users and systems. However, based on our Facebook daily operation results, certain types of queries are executed at an unacceptable low speed by Hive (a production SQL-to-MapReduce translator). In this paper, we demonstrate that existing SQL-to-MapReduce translators that operate in a one-operation-to-one-job mode and do not consider query correlations cannot generate high-performance MapReduce programs for certain queries, due to the mismatch between complex SQL structures and simple MapReduce framework. We propose and develop a system called YSmart, a correlation aware SQL-to-MapReduce translator. YSmart applies a set of rules to use the minimal number of MapReduce jobs to execute multiple correlated operations in a complex query. YSmart can significantly reduce redundant computations, I/O operations and network transfers compared to existing translators. We have implemented YSmart with intensive evaluation for complex queries on two Amazon EC2 clusters and one Facebook production cluster. The results show that YSmart can outperform Hive and Pig, two widely used SQL-to-MapReduce translators, by more than four times for query execution.
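
The core idea, correlation-aware job merging, can be illustrated with a small sketch of my own (this is not YSmart code): two aggregates that share the same input and the same GROUP BY key can be computed in one map/reduce pass instead of one job per operation.

    from collections import defaultdict

    # Hypothetical click-log records: (user_id, bytes_served)
    records = [("alice", 120), ("bob", 40), ("alice", 80), ("bob", 10), ("bob", 5)]

    def naive_two_jobs(recs):
        """One-operation-to-one-job: the input is scanned once per aggregate."""
        counts = defaultdict(int)
        for user, _ in recs:               # "job 1": COUNT(*) GROUP BY user
            counts[user] += 1
        totals = defaultdict(int)
        for user, b in recs:               # "job 2": SUM(bytes) GROUP BY user
            totals[user] += b
        return dict(counts), dict(totals)  # two full scans, two shuffles

    def fused_single_job(recs):
        """Correlation-aware: one scan, one shuffle, both aggregates in the reducer."""
        agg = defaultdict(lambda: [0, 0])  # user -> [count, sum]
        for user, b in recs:
            agg[user][0] += 1
            agg[user][1] += b
        return dict(agg)

    print(naive_two_jobs(records))
    print(fused_single_job(records))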


A Key-Value Cache for Flash Storage: Facebook's McDipper and What Preceded It

A post on Facebook Engineering’s blog:

The outgrowth of this was McDipper, a highly performant flash-based cache server that is Memcache protocol compatible. The main design goals of McDipper are to make efficient use of flash storage (i.e. to deliver performance as close to that of the underlying device as possible) and to be a drop-in replacement for Memcached. McDipper has been in active use in production at Facebook for nearly a year.
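
Being memcache-protocol compatible means any existing memcached client should work unchanged, which is what makes it a drop-in replacement. Here is a minimal sketch using the pymemcache Python library; the host name and key are placeholders, and nothing below is specific to McDipper.

    from pymemcache.client.base import Client

    # Point an ordinary memcached client at the cache endpoint (hypothetical host).
    cache = Client(("cache.example.com", 11211))
    cache.set("user:42:profile", b'{"name": "Ada"}', expire=300)  # 5-minute TTL
    print(cache.get("user:42:profile"))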

I know at least 3 companies that have attacked this problem with different approaches and different results:

  1. Couchbase (ex-Membase, ex-NorthScale) started as a persistent clustered Memcached implementation, though it was not optimized for flash storage. Today’s Couchbase product is still based on the memcache protocol, but it has been adding new features inspired by CouchDB.
  2. RethinkDB, a YC company and the company I work for, released in 2011 a Memcache-compatible storage engine optimized for SSDs. Since then, RethinkDB has built and released an enhanced product, a distributed JSON store with advanced data manipulation support.
  3. Aerospike (ex-Citrusleaf) sells a storage engine for flash drives, though its API is not Memcache compatible.

People interested in this market segment have something to learn from this.

Original title and link: A Key-Value Cache for Flash Storage: Facebook’s McDipper and What Preceded It (NoSQL database©myNoSQL)

via: http://www.facebook.com/notes/facebook-engineering/mcdipper-a-key-value-cache-for-flash-storage/10151347090423920


Which Big Data Company Has the World's Biggest Hadoop Cluster?

Jimmy Wong:

Which companies use Hadoop for analyzing big data? How big are their clusters? I thought it would be fun to compare companies by the size of their Hadoop installations. The size would indicate the company’s investment in Hadoop, and subsequently their appetite to buy big data products and services from vendors, as well as their hiring needs to support their analytics infrastructure.

[Chart: companies ranked by the size of their Hadoop clusters, in nodes]

Unfortunately, the available data is so scarce and so old.

Original title and link: Which Big Data Company Has the World’s Biggest Hadoop Cluster? (NoSQL database©myNoSQL)

via: http://www.hadoopwizard.com/which-big-data-company-has-the-worlds-biggest-hadoop-cluster/


Facebook: From Function-Based to Resource-Based Servers With the Disaggregated Rack

Most servers are configured depending on the functionality they’ll provide: here are the web servers, here are the cache servers, and these are the database servers. Facebook has been using the same approach, but according to Mark Hackman’s post, “How Facebook Will Power Graph Search”, it will start looking into resource-based racks: here’s the compute power, here’s the RAM, and here’s the storage. If you think about it, this is exactly how Amazon has structured its cloud infrastructure services.

Original title and link: Facebook: From Function-Based to Resource-Based Servers With the Disaggregated Rack (NoSQL database©myNoSQL)

via: http://slashdot.org/topic/datacenter/how-facebook-will-power-graph-search/


Facebook Open Compute: A New Database Server Design

Facebook has released, under the Open Compute umbrella, the design of a new database server it has introduced in one of its datacenters. The bit that caught my eye is that this is not about more disk space or more CPU, but about redundant power supplies:

According to Frankovsky, for certain database functions at Facebook, it was more important to have redundant power supplies for a database node than it was to have multiple compute nodes in an Open Compute V2 chassis sharing a single power supply. […] Frankovsky said that by doubling up the power supplies and making an Open Compute-style database server, it was able to cut the costs over its current database servers by 40 per cent.

The spec can be found here (PDF).

Original title and link: Facebook Open Compute: A New Database Server Design (NoSQL database©myNoSQL)

via: http://www.theregister.co.uk/2013/01/17/open_compute_facebook_servers/


Automating MySQL Backups at Facebook Scale

Eric Barrett (Facebook) describes the process used for backing up Facebook’s MySQL cluster¹:

Backups are not the most glamorous type of engineering. They are technical, repetitive, and when everything works, nobody notices. They are also cross-discipline, requiring systems, network, and software expertise from multiple teams. But ensuring your memories and connections are safe is incredibly important, and at the end of the day, incredibly rewarding.

If you wanted to make it sound simple, you’d just enumerate the steps:

  1. Binary logs and mysqldump
  2. Hadoop DFS
  3. Long-term storage

Then start asking how you’d accomplish this. With 1 server. With more servers. With more servers while maintaining the availability of the system. See how far you’d be able to answer these questions. At least theoretically.
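
For the flavor of it, here is a deliberately naive Python sketch of those steps for a single server; host names and paths are placeholders, and none of this reflects Facebook’s actual tooling. The interesting questions start once you multiply it by thousands of servers.

    import datetime
    import subprocess

    host = "db001.example.com"   # hypothetical replica to dump from
    stamp = datetime.date.today().isoformat()
    dump_file = f"/backups/{host}-{stamp}.sql.gz"

    # 1. Logical dump (run against a replica so the primary keeps serving traffic).
    subprocess.run(
        f"mysqldump --single-transaction --all-databases -h {host} | gzip > {dump_file}",
        shell=True, check=True)

    # 2. Stage the dump into HDFS.
    subprocess.run(["hdfs", "dfs", "-put", dump_file, f"/backups/mysql/{stamp}/"],
                   check=True)

    # 3. Long-term storage is left as a placeholder; retention, verification and
    #    restore drills are where the real work lives.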


  1. As a side note, in “Fun with numbers: How much data is Facebook ingesting”, I guesstimated the number of MySQL servers to be in the 20k range. This post mentions “thousands of database servers in multiple regions”.

Original title and link: Automating MySQL Backups at Facebook Scale (NoSQL database©myNoSQL)

via: https://www.facebook.com/notes/facebook-engineering/under-the-hood-automated-backups/10151239431923920


Hadoop Implementers, Take Some Advice From My Grandmother

Paige Roberts on Pervasive blog:

The Hadoop distributed computing concept is inherently parallel and, therefore, should be friendly to better utilization models. But parallel programming, beyond the basic data level, the embarrassingly parallel level, requires different habits. MapReduce is already heading us in the wrong direction. Most Hadoop data centers aren’t doing any better when it comes to usage levels than traditional data centers. There’s still a tremendous amount of energy and compute power going to waste.

YARN gives us the option to use other compute models in Hadoop clusters; better, more efficient compute models, if we can create them.

People running Hadoop at scale always want to optimize power consumption. The first example that comes to mind: in November, Facebook, which most probably runs the largest Hadoop cluster, open sourced its work on improving MapReduce job scheduling in a project named Corona, meant to increase the efficiency of using the resources available in its Hadoop clusters:

In heavy workloads during our testing, the utilization in the Hadoop MapReduce system topped out at 70%. Corona was able to reach more than 95%.
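
Some rough arithmetic on what that jump is worth; the cluster size below is my assumption, only the utilization figures come from the quote.

    # What going from ~70% to ~95% utilization buys on a fixed-size cluster.
    nodes = 1000  # assumed cluster size, purely illustrative

    for util in (0.70, 0.95):
        print(f"{util:.0%} utilization -> {nodes * util:.0f} node-equivalents of useful work")

    print(f"relative gain: {0.95 / 0.70 - 1:.0%} more work from the same hardware")
    print(f"fleet needed for the same work: {0.70 / 0.95:.0%} of the original")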

Original title and link: Hadoop Implementers, Take Some Advice From My Grandmother (NoSQL database©myNoSQL)

via: http://bigdata.pervasive.com/Blog/Big-Data-Blog/EntryId/1131/Hadoop-Implementers-Take-Some-Advice-From-My-Grandmother.aspx


Facebook Corona: A Different Approach to Job Scheduling and Resource Management

Facebook engineering: Under the Hood: Scheduling MapReduce jobs more efficiently with Corona:

It was pretty clear that we would ultimately need a better scheduling framework that would improve this situation in the following ways:

  • Better scalability and cluster utilization
  • Lower latency for small jobs
  • Ability to upgrade without disruption
  • Scheduling based on actual task resource requirements rather than a count of map and reduce tasks

A few notes:

  1. Hadoop deployment at Facebook:

    • 100 PB
    • 60,000 Hive queries/day
    • used by > 1,000 people

    Is Hive the preferred way Hadoop is used at Facebook?

  2. Facebook is running its own version of HDFS. Once you fork, integrating upstream changes becomes a nightmare.

  3. How to deploy and test new features at scale: rank types of users and roll out the new feature starting with the less critical scenarios. You must be able to correctly route traffic or users.
  4. At scale, cluster utilization is a critical metric. All the improvements in Corona are derived from this.
  5. Traditional analytic databases have had advanced resource-based scheduling for a long time. Hadoop needs this.
  6. Open source at Facebook:
    1. create a tool that addresses an internal problem
    2. open source it, i.e. throw it out into the wild (nb: is there any Facebook open source project they continued to maintain?)
    3. Option 1: continue to develop it internally. Option 2: drop it
    4. if by any chance the open source project survives and becomes a standalone project, catch up from time to time
    5. re-fork it
  7. Why not YARN? The best answer I could find is Joydeep Sen Sarma’s on Quora. Summarized:
    1. Corona uses a push-based, event-driven, callback oriented message flow
    2. Corona’s JobTracker can run in the same VM with the Job Client
    3. Corona integrated with the Hadoop trunk Fair-Scheduler which got rewritten at Facebook
    4. Corona’s resource manager uses optimistic locking (a generic sketch of the pattern follows this list)
    5. Corona’s using Thrift, while others are looking at using Protobuf or Avro
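
On point 7.4, here is a generic sketch of optimistic locking (my own illustration, not Corona code): reads take no lock, and a write commits only if nobody else has bumped the version since the snapshot was taken.

    class VersionConflict(Exception):
        pass

    class ResourceTable:
        """Cluster resource state guarded by a version number instead of a lock."""
        def __init__(self, free_slots):
            self.free_slots = free_slots
            self.version = 0

        def snapshot(self):
            return self.version, self.free_slots        # read without locking

        def commit_grant(self, seen_version, slots_taken):
            if seen_version != self.version:            # someone committed first
                raise VersionConflict("retry with a fresh snapshot")
            self.free_slots -= slots_taken
            self.version += 1

    table = ResourceTable(free_slots=100)
    v, free = table.snapshot()
    table.commit_grant(v, slots_taken=10)               # succeeds
    try:
        table.commit_grant(v, slots_taken=5)            # stale version -> rejected
    except VersionConflict as e:
        print("conflict:", e)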

Original title and link: Facebook Corona: A Different Approach to Job Scheduling and Resource Management (NoSQL database©myNoSQL)


Improving HBase Read Performance at Facebook

Starting from the Hypertable vs. HBase benchmark and building on the things HBase could learn from it, the Facebook team set out to improve read performance in HBase. And they’ve accomplished it:

[Chart: HBase vs. Hypertable read performance, before and after the improvements]

Original title and link: Improving HBase Read Performance at Facebook (NoSQL database©myNoSQL)

via: http://hadoopstack.com/hbase-versus-hypertable/