Facebook engineering: Under the Hood: Scheduling MapReduce jobs more efficiently with Corona:
It was pretty clear that we would ultimately need a better scheduling framework that would improve this situation in the following ways:
- Better scalability and cluster utilization
- Lower latency for small jobs
- Ability to upgrade without disruption
- Scheduling based on actual task resource requirements rather than a count of map and reduce tasks
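The last point is the crux of resource-based scheduling: admit a task only if the node can actually satisfy its declared demands, instead of counting abstract map/reduce slots. A minimal sketch of the difference, with illustrative names and resource keys (not Corona's actual API):

```python
def fits_slot_based(node_free_slots, task_demand):
    # Classic Hadoop MapReduce: a task just needs one free map or
    # reduce slot, regardless of its real CPU or memory footprint.
    return node_free_slots > 0

def fits_resource_based(node_free, task_demand):
    # Resource-aware: admit a task only if every requested resource
    # (e.g. CPU cores, memory in MB) fits in the node's free capacity.
    return all(node_free.get(r, 0) >= need
               for r, need in task_demand.items())

node = {"cpu": 4, "mem_mb": 8192}
small_task = {"cpu": 1, "mem_mb": 1024}
huge_task = {"cpu": 2, "mem_mb": 16384}

fits_resource_based(node, small_task)  # True
fits_resource_based(node, huge_task)   # False: memory demand exceeds capacity
```

Under slot counting, both tasks above look identical; resource-based matching is what lets the scheduler pack nodes tightly without over-committing memory, which is why it drives cluster utilization up.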
Hadoop deployment at Facebook:
60000 Hive queries/day
- used by > 1000 people
Is Hive the preferred way Hadoop is used at Facebook?
Facebook is running its own version of HDFS. Once you fork, integrating upstream changes becomes a nightmare.
- How to deploy and test new features at scale: rank user types and roll out the new feature starting with the least critical scenarios. You must be able to correctly route traffic or users.
- At scale, cluster utilization is a critical metric. All the improvements in Corona are derived from this.
- Traditional analytic databases have advanced resource-based scheduling for a long time. Hadoop needs this.
- Open source at Facebook:
- create a tool that addresses an internal problem
- open source it, i.e. throw it out in the wild (nb: is there any Facebook open source project they continued to maintain?)
- Option 1: continue to develop it internally. Option 2: drop it
- if by any chance the open source project survives and becomes a standalone project, catch up from time to time
- re-fork it
- why not YARN? The best answer I could find is Joydeep Sen Sarma’s on Quora. Summarized:
- Corona uses a push-based, event-driven, callback oriented message flow
- Corona’s JobTracker can run in the same VM with the Job Client
- Corona integrated with the Hadoop trunk Fair Scheduler, which was rewritten at Facebook
- Corona’s resource manager uses optimistic locking
- Corona uses Thrift, while others are looking at Protobuf or Avro
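Optimistic locking in the resource manager means the scheduler can compute grants from an unlocked snapshot of cluster state and only takes a lock at commit time, retrying if the state changed underneath it. A toy sketch of that pattern, assuming a single versioned CPU pool (illustrative only, not Corona's actual implementation):

```python
import threading

class OptimisticResourceManager:
    """Toy versioned resource pool for optimistic concurrency control."""

    def __init__(self, free_cpus):
        self._lock = threading.Lock()
        self._free = free_cpus
        self._version = 0  # bumped on every successful commit

    def snapshot(self):
        # Cheap read of (free capacity, version); the expensive grant
        # computation happens outside the lock, against this snapshot.
        with self._lock:
            return self._free, self._version

    def try_grant(self, cpus, seen_version):
        # Commit succeeds only if nobody changed state since the snapshot.
        with self._lock:
            if self._version != seen_version or self._free < cpus:
                return False
            self._free -= cpus
            self._version += 1
            return True

def grant(mgr, cpus, retries=5):
    # Optimistic loop: snapshot, decide, attempt commit, retry on conflict.
    for _ in range(retries):
        free, version = mgr.snapshot()
        if free < cpus:
            return False  # genuinely not enough capacity
        if mgr.try_grant(cpus, version):
            return True   # committed without holding the lock while deciding
    return False

mgr = OptimisticResourceManager(free_cpus=8)
grant(mgr, 3)   # True
grant(mgr, 10)  # False
```

The payoff is throughput: scheduling decisions, which dominate the work, never serialize on the lock; only the short commit does, and conflicts just retry.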
Original title and link: Facebook Corona: A Different Approach to Job Scheduling and Resource Management ( ©myNoSQL)