Moving Away From Amazon’s EMR Service to an In-House Hadoop Cluster

Many of our systems use Amazon’s S3 as a backup repository for log data. Our data grew too large to process with traditional techniques, so we started using Amazon’s Elastic MapReduce (EMR) to run more expensive queries against the data stored in S3. The major advantage of EMR for us was the lack of operational overhead: with a simple API call, we could have a 20- or 40-node cluster running to crunch our data, which we shut down at the conclusion of the run.

We had two systems interacting with EMR. The first consisted of shell scripts that started an EMR cluster, ran a Pig script, and loaded the output data from S3 into our data warehousing system. The second was a Java application that launched Pig jobs on an EMR cluster via the Java API and consumed the data produced by EMR in S3.
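As a rough illustration of that second setup, here is a minimal sketch of launching a Pig script on EMR through the Java API (AWS SDK for Java, using the EMR StepFactory helpers). The credentials, bucket names, script path, and instance types are placeholders, not details from the original system.

    import com.amazonaws.auth.BasicAWSCredentials;
    import com.amazonaws.services.elasticmapreduce.AmazonElasticMapReduceClient;
    import com.amazonaws.services.elasticmapreduce.model.JobFlowInstancesConfig;
    import com.amazonaws.services.elasticmapreduce.model.RunJobFlowRequest;
    import com.amazonaws.services.elasticmapreduce.model.RunJobFlowResult;
    import com.amazonaws.services.elasticmapreduce.model.StepConfig;
    import com.amazonaws.services.elasticmapreduce.util.StepFactory;

    public class EmrPigJobLauncher {

        public static void main(String[] args) {
            // Placeholder credentials; a real system would load these from configuration.
            AmazonElasticMapReduceClient emr = new AmazonElasticMapReduceClient(
                    new BasicAWSCredentials("ACCESS_KEY", "SECRET_KEY"));

            StepFactory stepFactory = new StepFactory();

            // Step 1: install Pig on the cluster.
            StepConfig installPig = new StepConfig()
                    .withName("Install Pig")
                    .withActionOnFailure("TERMINATE_JOB_FLOW")
                    .withHadoopJarStep(stepFactory.newInstallPigStep());

            // Step 2: run a Pig script stored in S3 over log data in S3.
            StepConfig runPigScript = new StepConfig()
                    .withName("Run log-processing Pig script")
                    .withActionOnFailure("TERMINATE_JOB_FLOW")
                    .withHadoopJarStep(stepFactory.newRunPigScriptStep(
                            "s3://my-bucket/scripts/process-logs.pig",
                            "-p", "INPUT=s3://my-bucket/logs/",
                            "-p", "OUTPUT=s3://my-bucket/output/"));

            RunJobFlowRequest request = new RunJobFlowRequest()
                    .withName("Nightly log crunching")
                    .withLogUri("s3://my-bucket/emr-logs/")
                    .withSteps(installPig, runPigScript)
                    .withInstances(new JobFlowInstancesConfig()
                            .withInstanceCount(20)
                            .withMasterInstanceType("m1.large")
                            .withSlaveInstanceType("m1.large")
                            .withHadoopVersion("0.20")
                            // Let the cluster terminate itself once the steps finish.
                            .withKeepJobFlowAliveWhenNoSteps(false));

            RunJobFlowResult result = emr.runJobFlow(request);
            System.out.println("Started job flow: " + result.getJobFlowId());
        }
    }

Because KeepJobFlowAliveWhenNoSteps is false, the cluster shuts itself down after the last step, mirroring the start-crunch-shutdown pattern described above.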

What might make you consider moving from Amazon Elastic MapReduce, the cloud version of MapReduce, to an on-premise Hadoop cluster:

  1. performance and tuning (see the sketch after this list)
  2. monitoring
  3. API access
  4. lack of the latest Hadoop features
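To make the first point concrete, the sketch below shows the kind of low-level Hadoop knobs you control directly once you run your own cluster. The property names are the Hadoop 0.20-era ones; the specific values and the job itself (an identity pass over log files) are illustrative assumptions, not settings from the original article.

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.io.LongWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapreduce.Job;
    import org.apache.hadoop.mapreduce.Mapper;
    import org.apache.hadoop.mapreduce.Reducer;
    import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
    import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

    public class TunedLogJob {

        public static void main(String[] args) throws Exception {
            Configuration conf = new Configuration();

            // Tuning you control directly on your own hardware
            // (illustrative values, Hadoop 0.20-era property names).
            conf.setInt("io.sort.mb", 256);                       // bigger map-side sort buffer
            conf.set("mapred.child.java.opts", "-Xmx1024m");      // task JVM heap size
            conf.setBoolean("mapred.compress.map.output", true);  // compress intermediate output
            conf.setInt("mapred.reduce.tasks", 40);               // reducer parallelism

            Job job = new Job(conf, "tuned log pass");
            job.setJarByClass(TunedLogJob.class);
            job.setMapperClass(Mapper.class);    // identity mapper
            job.setReducerClass(Reducer.class);  // identity reducer
            job.setOutputKeyClass(LongWritable.class);
            job.setOutputValueClass(Text.class);

            FileInputFormat.addInputPath(job, new Path(args[0]));
            FileOutputFormat.setOutputPath(job, new Path(args[1]));
            System.exit(job.waitForCompletion(true) ? 0 : 1);
        }
    }

On EMR these settings had to be pushed through bootstrap actions or job flow arguments; on an in-house cluster they live in your own mapred-site.xml or job configuration, which is the essence of the performance-and-tuning argument.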

Original title and link: Moving Away From Amazon’s EMR Service to an In-House Hadoop Cluster (NoSQL database©myNoSQL)

via: http://www.cloudera.com/blog/2011/06/migrating-from-elastic-mapreduce-to-a-cloudera?s-distribution-including-apache-hadoop-cluster/