Many of our systems use Amazon's S3 as a backup repository for log data. Our data grew too large to process with traditional techniques, so we started using Amazon's Elastic MapReduce (EMR) to run more expensive queries over the data stored in S3. The major advantage of EMR for us was the lack of operational overhead: with a simple API call, we could have a 20- or 40-node cluster running to crunch our data, which we shut down at the conclusion of the run.
We had two systems interacting with EMR. The first consisted of shell scripts that started an EMR cluster, ran a Pig script, and loaded the output data from S3 into our data warehousing system. The second was a Java application that launched Pig jobs on an EMR cluster via the Java API and consumed the data produced by EMR in S3.
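The original article does not show the scripts, but the first workflow can be sketched with the modern AWS CLI (the original scripts predate it and likely used Amazon's older `elastic-mapreduce` Ruby tool). Bucket names, script paths, instance types, and the warehouse load step are all placeholders:

```shell
#!/bin/sh
# Hypothetical sketch of the shell-script workflow: launch a transient
# EMR cluster, run a Pig script over logs in S3, then fetch the output.
# All S3 paths and sizing below are illustrative, not from the article.

aws emr create-cluster \
  --name "log-crunch" \
  --release-label emr-5.36.0 \
  --applications Name=Pig \
  --instance-type m4.xlarge \
  --instance-count 20 \
  --use-default-roles \
  --log-uri s3://example-logs/emr/ \
  --auto-terminate \
  --steps Type=PIG,Name="crunch logs",ActionOnFailure=TERMINATE_CLUSTER,Args=[-f,s3://example-bucket/scripts/crunch.pig,-p,INPUT=s3://example-bucket/raw/,-p,OUTPUT=s3://example-bucket/out/]

# Once the step completes, --auto-terminate shuts the cluster down.
# The output would then be pulled from S3 for loading into the warehouse:
aws s3 cp --recursive s3://example-bucket/out/ /tmp/emr-output/
```

The `--auto-terminate` flag matches the pattern described above: the cluster exists only for the duration of the run, which is what kept the operational overhead low.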
Reasons that might make you consider moving from the cloud version of MapReduce, Amazon Elastic MapReduce, to an on-premise Hadoop cluster:
- performance and tuning
- API access
- lack of the latest features
Original title and link: Moving Away From Amazon's EMR Service to an In-House Hadoop Cluster (myNoSQL)