Another weekend read, this time from Facebook and The Ohio State University, and closer to the hot topic of the last two weeks: SQL, MapReduce, Hadoop:
MapReduce has become an effective approach to big data analytics in large cluster systems, where SQL-like queries play important roles to interface between users and systems. However, based on our Facebook daily operation results, certain types of queries are executed at an unacceptably low speed by Hive (a production SQL-to-MapReduce translator). In this paper, we demonstrate that existing SQL-to-MapReduce translators that operate in a one-operation-to-one-job mode and do not consider query correlations cannot generate high-performance MapReduce programs for certain queries, due to the mismatch between complex SQL structures and simple MapReduce framework. We propose and develop a system called YSmart, a correlation-aware SQL-to-MapReduce translator. YSmart applies a set of rules to use the minimal number of MapReduce jobs to execute multiple correlated operations in a complex query. YSmart can significantly reduce redundant computations, I/O operations and network transfers compared to existing translators. We have implemented YSmart with intensive evaluation for complex queries on two Amazon EC2 clusters and one Facebook production cluster. The results show that YSmart can outperform Hive and Pig, two widely used SQL-to-MapReduce translators, by more than four times for query execution.
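To make the "correlated operations" point a bit more concrete, here is a toy sketch, mine and not YSmart's code, of why two aggregations over the same table, grouped on the same key, can share a single map/shuffle/reduce pass instead of two jobs plus a join. The table and column names are made up for illustration:

```python
# A minimal sketch of the idea behind correlation-aware translation:
# two aggregations over the same input, grouped on the same key, can share
# one map/shuffle/reduce pass instead of two jobs plus a join.
from collections import defaultdict

# SELECT a.user_id, a.total, b.cnt
# FROM (SELECT user_id, SUM(amount) AS total FROM orders GROUP BY user_id) a
# JOIN (SELECT user_id, COUNT(*)    AS cnt   FROM orders GROUP BY user_id) b
#   ON a.user_id = b.user_id
# A one-operation-per-job translator runs two aggregation jobs plus a join job;
# because both subqueries partition on user_id, one pass is enough.

orders = [("u1", 10.0), ("u1", 5.0), ("u2", 7.5)]  # toy stand-in for the table

def map_phase(records):
    # Emit each row once, keyed by user_id, so the reducer can compute both aggregates.
    for user_id, amount in records:
        yield user_id, amount

def reduce_phase(shuffled):
    # One reducer invocation per user_id produces SUM and COUNT together,
    # which is exactly the joined result of the two subqueries.
    for user_id, amounts in shuffled.items():
        yield user_id, sum(amounts), len(amounts)

shuffled = defaultdict(list)
for key, value in map_phase(orders):
    shuffled[key].append(value)

for row in reduce_phase(shuffled):
    print(row)   # ('u1', 15.0, 2), ('u2', 7.5, 1)
```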
Facebook engineering: Under the Hood: Scheduling MapReduce jobs more efficiently with Corona:
It was pretty clear that we would ultimately need a better scheduling framework that would improve this situation in the following ways:
- Better scalability and cluster utilization
- Lower latency for small jobs
- Ability to upgrade without disruption
- Scheduling based on actual task resource requirements rather than a count of map and reduce tasks
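The last requirement on that list is the most interesting one to me. Here is a minimal, hypothetical sketch of what scheduling by declared resource requirements, rather than counting fixed map and reduce slots, can look like; the node capacities, task sizes, and function names are all invented for illustration and are not Corona's actual scheduler:

```python
# Toy illustration: pack tasks onto nodes by declared memory need instead of
# checking whether a fixed-size map/reduce slot happens to be free.

nodes = {"node1": 8192, "node2": 8192}          # free memory per node, in MB
tasks = [("t1", 1024), ("t2", 6144), ("t3", 2048), ("t4", 4096)]  # (task, MB needed)

def schedule(nodes, tasks):
    placements = []
    for task, mem in sorted(tasks, key=lambda t: -t[1]):   # largest tasks first
        # Pick the node with the most free memory that still fits the task.
        candidates = [(free, name) for name, free in nodes.items() if free >= mem]
        if not candidates:
            placements.append((task, None))                 # stays queued
            continue
        free, node = max(candidates)
        nodes[node] -= mem
        placements.append((task, node))
    return placements

print(schedule(nodes, tasks))
```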
Hadoop deployment at Facebook:
- 60,000 Hive queries/day
- used by more than 1,000 people
- Is Hive the preferred way Hadoop is used at Facebook?
- Facebook is running its own version of HDFS. Once you fork, integrating upstream changes becomes a nightmare.
- How to deploy and test new features at scale: rank user types and roll out the new feature starting with the least critical scenarios. You must be able to correctly route traffic or users.
- At scale, cluster utilization is a critical metric. All the improvements in Corona are derived from this.
- Traditional analytic databases have had advanced resource-based scheduling for a long time. Hadoop needs this.
- Open source at Facebook:
- create a tool that addresses an internal problem
- open source it, throw it out in the wild (nb: is there any Facebook open source project they continued to maintain?)
- Option 1: continue to develop it internally. Option 2: drop it
- if by any chance the open source project survives and becomes a standalone project, catch up from time to time
- re-fork it
- why not YARN? The best answer I could find is Joydeep Sen Sarma's on Quora. Summarized:
- Corona uses a push-based, event-driven, callback oriented message flow
- Corona's JobTracker can run in the same JVM as the Job Client
- Corona is integrated with the Hadoop trunk Fair-Scheduler, which got rewritten at Facebook
- Corona's resource manager uses optimistic locking (see the sketch after this list)
- Corona’s using Thrift, while others are looking at using Protobuf or Avro
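On the optimistic-locking point, here is a rough sketch of the general idea as I read it, not Corona's actual code: the scheduler works off a snapshot of cluster state and only validates a version stamp at commit time, instead of holding a lock while it makes decisions. The class and method names are mine:

```python
# Hedged sketch of optimistic locking in a resource manager: grants are
# validated against a version stamp; a stale reader simply retries.
import threading

class ClusterState:
    def __init__(self, free_cpus):
        self.free_cpus = free_cpus
        self.version = 0
        self._commit_lock = threading.Lock()   # held only for the short commit

    def snapshot(self):
        # Read without blocking other schedulers.
        return self.version, self.free_cpus

    def try_grant(self, seen_version, cpus):
        # Commit succeeds only if nothing changed since the snapshot;
        # otherwise the caller re-reads and retries.
        with self._commit_lock:
            if self.version != seen_version or self.free_cpus < cpus:
                return False
            self.free_cpus -= cpus
            self.version += 1
            return True

state = ClusterState(free_cpus=100)
version, free = state.snapshot()
print(state.try_grant(version, 8))    # True: state unchanged since the snapshot
print(state.try_grant(version, 8))    # False: stale version, caller would retry
```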
Original title and link: Facebook Corona: A Different Approach to Job Scheduling and Resource Management ( ©myNoSQL)
Nice screenshot by the TechCrunch people of the slide describing the data lifecycle at Facebook:
Based on this you'll now have a better picture of how Facebook's data ingestion numbers correlate with its architecture.
Original title and link: Life of Data at Facebook ( ©myNoSQL)
A recent GigaOM article provides some interesting data points about how much data Facebook is handling:
- 2.5 billion content items shared per day
- 2.7 billion likes per day
- 300 million photos uploaded per day
- 500+ terabytes of data ingested per day
The numbers above do not include any details about how many data points Facebook collects for analytics purposes. But I don't think I'd be far off assuming that number is a sizeable multiple of the figures above. We'll go with 10 to keep things simple.
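Spelling that arithmetic out with only the figures quoted above (the 10x multiplier is my assumption, not anything Facebook has published):

```python
# Back-of-the-envelope check on the assumed 10x analytics multiplier.
shared_items = 2.5e9      # content items shared per day
likes        = 2.7e9      # likes per day
photos       = 0.3e9      # photos uploaded per day

raw_events = shared_items + likes + photos   # ~5.5 billion user actions/day
tracked    = raw_events * 10                 # the assumed 10x for analytics
print(f"~{raw_events / 1e9:.1f} billion raw events/day")
print(f"~{tracked / 1e9:.0f} billion tracked data points/day")
```

That lands around 55 billion per day, which is roughly where the "50+ billion calls" guess further down comes from.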
A couple of days ago, James Hamilton posted an analysis of Facebook’s Carbon and Energy Impact:
Using the Facebook PUE number of 1.07, we know they are delivering 54.27MW to the IT load (servers and storage). We don't know the average server draw at Facebook but they have excellent server designs (see Open Compute Server Design) so they likely average at or below 300W per server. Since 300W is an estimate, let's also look at 250W and 350W per server:
- 250W/server: 217,080 servers
- 300W/server: 180,900 servers
- 350W/server: 155,057 servers
It's difficult to determine how many of the 180k servers are databases, but assuming a 1:10 ratio of database to front-end plus cache servers, that would give us roughly 18k database servers ingesting 500+ terabytes of data through a guesstimated 50+ billion calls.
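For completeness, here is the arithmetic behind those estimates; the per-server wattage and the 1:10 ratio are guesses, as noted above, not published server counts:

```python
# Reproducing the server-count estimates from the quoted analysis.
it_load_watts = 54.27e6                      # IT load quoted above (54.27MW)

for watts_per_server in (250, 300, 350):
    servers = it_load_watts / watts_per_server
    print(f"{watts_per_server}W/server -> {servers:,.0f} servers")

servers_at_300w = it_load_watts / 300        # ~180,900 servers
db_servers = servers_at_300w / 10            # the 1:10 ratio guess -> ~18,090
print(f"~{db_servers:,.0f} database servers")
```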
There's also something that confuses me about these numbers. If Facebook is getting 300mil. photo uploads per day and ingests 500+ terabytes, that could mean that either 1) the average photo size is very low (500 terabytes spread over 300 million photos leaves at most about 1.7MB per photo, before counting any other ingested data); or 2) Facebook doesn't count photos when mentioning the ingested data size.
Original title and link: Fun With Numbers: How Much Data Is Facebook Ingesting ( ©myNoSQL)
The middle part of this Wired article talks a bit about the way Facebook is storing its Open Graph data:
We have an object store, which stores things like users and events and groups and photos, and then we have an edge store that stores the relationship between objects. With Open Graph, we built a layer on top of those systems that allowed developers to define what their objects look like and what their edges look like and then publish those third party objects and edges into the same infrastructure that we used to store all of the first party objects and edges.
Couple of thoughts:
- this data is a good example of a multigraph
- I don't think Facebook is actually using a graph database for storing this data. Considering the size of the data Facebook is handling, that would be understandable
- There’s no mention of how the metadata, the description of the objects and edges, is stored. I assume this should somehow be connected to historical data to allow the evolution of the data while maintaining its original meaning over time.
- The processing happening on this multigraph data sounds like cluster analysis
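To make the quoted description a bit more concrete, here is a minimal sketch of what an object store plus edge store split could look like. It is only an illustration of the model described in the quote, not Facebook's implementation, and all class and method names are invented:

```python
# Toy object store / edge store: first-party and third-party (Open Graph)
# objects and edges go through the same two stores.
import itertools

class ObjectStore:
    def __init__(self):
        self._objects, self._ids = {}, itertools.count(1)

    def put(self, obj_type, data):
        oid = next(self._ids)
        self._objects[oid] = {"type": obj_type, **data}
        return oid

class EdgeStore:
    def __init__(self):
        self._edges = []    # a multigraph: repeated (src, type, dst) triples are allowed

    def add(self, src, edge_type, dst):
        self._edges.append((src, edge_type, dst))

    def neighbors(self, src, edge_type):
        return [d for s, t, d in self._edges if s == src and t == edge_type]

objects, edges = ObjectStore(), EdgeStore()
user  = objects.put("user", {"name": "alice"})
photo = objects.put("photo", {"url": "..."})
song  = objects.put("myapp:song", {"title": "..."})   # a third-party Open Graph type
edges.add(user, "uploaded", photo)                     # first-party edge
edges.add(user, "myapp:listened", song)                # third-party edge, same store
print(edges.neighbors(user, "myapp:listened"))         # [3]
```

The point the quote makes, and the sketch tries to mirror, is that developer-defined Open Graph objects and edges flow into the same infrastructure that stores the first-party ones.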
Original title and link: Inside Facebook’s Open Graph ( ©myNoSQL)