Corona: All content tagged as Corona in NoSQL databases and polyglot persistence

Hadoop Implementers, Take Some Advice From My Grandmother

Paige Roberts on Pervasive blog:

The Hadoop distributed computing concept is inherently parallel and, therefore, should be friendly to better utilization models. But parallel programming, beyond the basic data level, the embarrassingly parallel level, requires different habits. MapReduce is already heading us in the wrong direction. Most Hadoop data centers aren’t doing any better when it comes to usage levels than traditional data centers. There’s still a tremendous amount of energy and compute power going to waste.

YARN gives us the option to use other compute models in Hadoop clusters; better, more efficient compute models, if we can create them.

People running Hadoop at scale always want to optimize power consumption. The first example that comes to mind: in November, Facebook, which most probably runs the largest Hadoop cluster, open sourced its work on improving MapReduce job scheduling in a project named Corona, meant to increase the efficiency with which the resources available in its Hadoop clusters are used:

In heavy workloads during our testing, the utilization in the Hadoop MapReduce system topped out at 70%. Corona was able to reach more than 95%.
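To put those two utilization figures in concrete terms, here is a rough back-of-the-envelope sketch of the idle capacity each level implies. Only the 70% and 95% numbers come from Facebook's post; the cluster size is a made-up assumption for illustration.

```python
# Hypothetical illustration of what moving from 70% to 95% utilization
# means in recovered compute capacity. The cluster size below is an
# assumption, not a figure from Facebook's post.
CLUSTER_SLOTS = 10_000          # assumed number of task slots
HOURS_PER_DAY = 24

def wasted_slot_hours(utilization: float) -> float:
    """Slot-hours per day left idle at a given utilization level."""
    return CLUSTER_SLOTS * HOURS_PER_DAY * (1.0 - utilization)

before = wasted_slot_hours(0.70)   # classic MapReduce scheduler
after = wasted_slot_hours(0.95)    # Corona, per Facebook's testing

print(f"idle slot-hours/day before: {before:,.0f}")
print(f"idle slot-hours/day after:  {after:,.0f}")
print(f"capacity recovered: {before - after:,.0f} slot-hours/day")
```

On the assumed 10,000-slot cluster, the jump from 70% to 95% utilization recovers 60,000 slot-hours a day, which is why utilization is the headline metric of the project.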

Original title and link: Hadoop Implementers, Take Some Advice From My Grandmother (NoSQL database©myNoSQL)

via: http://bigdata.pervasive.com/Blog/Big-Data-Blog/EntryId/1131/Hadoop-Implementers-Take-Some-Advice-From-My-Grandmother.aspx


Facebook Corona: A Different Approach to Job Scheduling and Resource Management

Facebook engineering: Under the Hood: Scheduling MapReduce jobs more efficiently with Corona:

It was pretty clear that we would ultimately need a better scheduling framework that would improve this situation in the following ways:

  • Better scalability and cluster utilization
  • Lower latency for small jobs
  • Ability to upgrade without disruption
  • Scheduling based on actual task resource requirements rather than a count of map and reduce tasks
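The last bullet is the key departure from the classic JobTracker, which only counts free map and reduce slots. A minimal sketch of the difference, with all names and numbers invented for illustration (this is not Corona's actual API):

```python
# Hedged sketch contrasting slot-count scheduling (classic MapReduce)
# with scheduling on actual task resource requirements. All classes,
# fields, and numbers here are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class Task:
    cpu_cores: float
    memory_gb: float

@dataclass
class Node:
    free_cores: float
    free_memory_gb: float

def fits_slot_based(node_free_slots: int) -> bool:
    # Classic JobTracker view: every task costs exactly one slot,
    # regardless of what it actually consumes.
    return node_free_slots > 0

def fits_resource_based(node: Node, task: Task) -> bool:
    # Resource-aware view: admit the task only if its real CPU and
    # memory demand fit on the node.
    return (task.cpu_cores <= node.free_cores
            and task.memory_gb <= node.free_memory_gb)

node = Node(free_cores=2.0, free_memory_gb=4.0)
small = Task(cpu_cores=1.0, memory_gb=2.0)
big = Task(cpu_cores=4.0, memory_gb=16.0)

print(fits_resource_based(node, small))  # True
print(fits_resource_based(node, big))    # False: would overcommit the node
```

Under slot counting, both tasks look identical; under resource-based scheduling, the memory-hungry task is kept off a node it would overcommit.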
  1. Hadoop deployment at Facebook:

    • 100PB

    • 60000 Hive queries/day

    • used by > 1000 people

    Is Hive the preferred way Hadoop is used at Facebook?

  2. Facebook is running its own version of HDFS. Once you fork, integrating upstream changes becomes a nightmare.

  3. How to deploy and test new features at scale: rank user types and roll out the new feature starting with the least critical scenarios. This requires being able to correctly route traffic or users.
  4. At scale, cluster utilization is a critical metric. All the improvements in Corona are derived from this.
  5. Traditional analytic databases have advanced resource-based scheduling for a long time. Hadoop needs this.
  6. Open source at Facebook:
    1. create a tool that addresses an internal problem
    2. open source it, throwing it out into the wild (nb: is there any Facebook open source project they continued to maintain?)
    3. Option 1: continue to develop it internally. Option 2: drop it
    4. if by any chance the open source project survives and becomes a standalone project, catch up from time to time
    5. re-fork it
  7. why not YARN? The best answer I could find is Joydeep Sen Sarma’s on Quora. Summarized:
    1. Corona uses a push-based, event-driven, callback-oriented message flow
    2. Corona’s JobTracker can run in the same VM as the Job Client
    3. Corona integrates with the Hadoop trunk Fair Scheduler, which got rewritten at Facebook
    4. Corona’s resource manager uses optimistic locking
    5. Corona uses Thrift, while others are looking at using Protobuf or Avro

Original title and link: Facebook Corona: A Different Approach to Job Scheduling and Resource Management (NoSQL database©myNoSQL)