


MapReduce: All content tagged as MapReduce in NoSQL databases and polyglot persistence

Pig cheat sheet

Cheat sheet? Check. Pig? Check. Where do I get it?


The Hadoop as ETL part in migrating from MongoDB to Cassandra at FullContact

While I found the whole post very educational (and very balanced, considering the topic), the part I’m linking to is about integrating MongoDB with Hadoop. After reading the story of integrating MongoDB and Hadoop at Foursquare, quite a few questions were bugging me. This post doesn’t answer any of them, but it brings in some more details about existing tools, a completely different solution, and what seems to be an overarching theme whenever Hadoop and MongoDB appear in the same sentence:

We’re big users of Hadoop MapReduce and tend to lean on it whenever we need to make large scale migrations, especially ones with lots of transformation. That fact along with our existing conversion project from before, we used 10gen’s mongo-hadoop project which has input and output formats for Hadoop. We immediately realized that the InputFormat which connected to a MongoDB cluster was ill-suited to our usage. We had 3TB of partially-overlapping data across 2 clusters. After calculating input splits for a few hours, it began pulling documents at an uncomfortably slow pace. It was slow enough, in fact, that we developed an alternative plan.

You’ll have to read the post to learn how they’ve accomplished their goal, but as a spoiler, it was once again more of an ETL process rather than an integration.
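Without spoiling their actual pipeline, the ETL-over-integration theme usually boils down to something like this: instead of pointing an InputFormat at a live MongoDB cluster, you export documents to a line-oriented format that Hadoop can split and process natively. A minimal sketch (all names here are hypothetical, not FullContact’s code; the `pymongo` usage in the comment is the standard driver API):

```python
import json

def docs_to_json_lines(docs):
    """Serialize an iterable of documents to newline-delimited JSON,
    a format Hadoop jobs can split and consume line by line from HDFS."""
    return "\n".join(json.dumps(doc, sort_keys=True) for doc in docs)

# In a real export you would stream from MongoDB instead, e.g.:
#   from pymongo import MongoClient
#   docs = MongoClient("mongodb://host:27017").mydb.contacts.find()
# and push the resulting file to the cluster with `hadoop fs -put`.

if __name__ == "__main__":
    sample = [{"_id": 1, "email": "a@example.com"},
              {"_id": 2, "email": "b@example.com"}]
    print(docs_to_json_lines(sample))
```

The trade-off is the usual one: an export step costs disk and time up front, but it decouples the Hadoop job from the live cluster’s read performance, which is exactly the pain point the quote above describes.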

✚ The corresponding HN thread; it’s focused mostly on the MongoDB-to-Cassandra migration parts.

Original title and link: The Hadoop as ETL part in migrating from MongoDB to Cassandra at FullContact (NoSQL database©myNoSQL)


Hadoop vs Redshift

This is how Yaniv Mor’s “Hadoop vs. Redshift” ends:

We have a tie! Huh!? Didn’t Hadoop win most of the rounds? Yes, it did, but Big Data’s superheroes are better off working together as a team rather than fighting. Turn on the Hadoop-Signal when you need relatively cheap data storage, batch processing of petabytes, or processing data in non-relational formats. Call out to red-caped Redshift for analytics, fast performance for terabytes, and an easier transition for your PostgreSQL team. As Airbnb concluded in their benchmark: “We don’t think Redshift is a replacement of the Hadoop family due to its limitations, but rather it is a very good complement to Hadoop for interactive analytics”. We Agree.

I’m wondering why anyone would waste 1337 words on an apples-to-oranges comparison.

Original title and link: Hadoop vs Redshift (NoSQL database©myNoSQL)


A guide to write and run Giraph jobs on Hadoop

A good setup guide by Mirko Kämpf:

In this how-to, you will learn how to use Giraph 1.0.0 on top of CDH 4.x using a simple example dataset, and run example jobs that are already implemented in Giraph. You will also learn how to set up your own Giraph-based development environment. The end result will be a setup (not intended for production) for writing and testing Giraph jobs, or just for playing around with Giraph and small sample datasets.
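For readers new to Giraph, the programming model it implements is Pregel-style vertex-centric BSP: each vertex runs a small compute function once per superstep, exchanging messages with its neighbors until nothing changes. A toy Python sketch of that model (this is an illustration of the idea, not Giraph’s Java API; Giraph jobs are written against its `Computation` classes):

```python
def min_label_components(edges, vertices):
    """Toy vertex-centric computation in the Giraph/Pregel style:
    each vertex repeatedly adopts the smallest label seen among its
    neighbors until no vertex changes, yielding connected components."""
    neighbors = {v: set() for v in vertices}
    for u, v in edges:
        neighbors[u].add(v)
        neighbors[v].add(u)
    label = {v: v for v in vertices}  # initial label = own vertex id
    changed = True
    while changed:  # one loop iteration == one BSP superstep
        changed = False
        # "messages" delivered this superstep: neighbor labels from the
        # previous superstep (snapshot taken before any updates)
        incoming = {v: [label[u] for u in neighbors[v]] for v in vertices}
        for v in vertices:
            best = min(incoming[v] + [label[v]])
            if best < label[v]:
                label[v] = best
                changed = True
    return label

if __name__ == "__main__":
    print(min_label_components([(1, 2), (2, 3), (4, 5)], [1, 2, 3, 4, 5]))
    # {1: 1, 2: 1, 3: 1, 4: 4, 5: 4}
```

In Giraph proper, the snapshot-then-update discipline is enforced by the framework’s superstep barrier rather than hand-rolled as above.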


Anatomy of the Giraph data flow

Original title and link: A guide to write and run Giraph jobs on Hadoop (NoSQL database©myNoSQL)


Hadoop and Teradata’s business

Earlier today I posted about Teradata’s take on the evolution of databases. As expected, everything is safe and under control. Now this report from Larry Dignan at ZDNet about Teradata’s Q4 earnings call presents Teradata’s perspective on Hadoop:

Teradata’s fourth quarter earnings were solid, but analysts peppered management with questions about Hadoop as data warehouse revenue worries persist.

Teradata CEO Mike Koehler and CFO Steve Scheppmann talked Hadoop throughout the company’s conference call. Was Hadoop taking Teradata’s business away? What’s the revenue hit? Can Teradata co-exist?

Once again, everything is safe, with a bright future. Until it isn’t anymore and Hadoop eats the enterprise data warehouse space. In Teradata’s defense, they were one of the first companies to look seriously at Hadoop and come up with a coherent positioning.

Original title and link: Hadoop and Teradata’s business (NoSQL database©myNoSQL)

5 achievements of Hadoop

John Santaferraro (Actian) posted a great list of 5 major changes that Hadoop brought to the data market:

  1. Enormous, affordable scale compared to old storage paradigms
  2. Capture and store all data without first determining its value
  3. New types of data present new opportunities for analytics
  4. Data discovery and data provisioning now common in most organizations
  5. Analytics applications emerge as the number one use for big data

Original title and link: 5 achievements of Hadoop (NoSQL database©myNoSQL)


Does Hadoop replace or augment the enterprise data warehouse?

Wayne Eckerson:

For Cloudera, the first vendor to offer a Hadoop distribution, the answer is an unequivocal yes. Last November, Cloudera finally exposed its true sentiments by introducing the Enterprise Data Hub in which Hadoop replaces the data warehouse, among other things, as the center of an organization’s data management strategy. In contrast, Hortonworks takes a hybrid approach, partnering with leading commercial data management and analytics vendors to create a data environment that blends the best of Hadoop and commercial software. In short, Cloudera offers revolution, Hortonworks evolution.

You know what? Both are right. To replace existing enterprise data warehouses, the first step is cohabiting with them.

Original title and link: Does Hadoop replace or augment the enterprise data warehouse? (NoSQL database©myNoSQL)

Investments in the Hadoop market in 2013

A post looking at the investments made in the Hadoop market in 2013:

In 2013, the zeitgeist around big data hit a fever pitch, and with that surge came venture capital love for the Hadoop ecosystem – to the tune of $270 million. On a year-over-year basis, Hadoop VC funding grew 50% while deal activity rose 30%.

To see the investment relationships, you can use the beautiful Big Data investment map 2014; I wish it were easier to navigate though.

Back to the original post, there were two aspects that caught my attention:

  1. The majority of funding growth in 2013 came from Series A rounds. This could mean two things: 1) investors consider the market still open; or 2) many investors realized the potential of this market quite late and are trying to make up for their late reaction. I’d go with the first option though.
  2. There seems to be quite a bit of variability (or inconsistency) in the investments made in the big data market since 2012. This chart shows exactly what I mean:

    [Chart: Hadoop investments]

Original title and link: Investments in the Hadoop market in 2013 (NoSQL database©myNoSQL)


HDFS Explorer: Accessing HDFS from Windows Explorer

HDFS Explorer, by Red Gate Big Data:

At Red Gate we have been working on some query tools for Hadoop for a while and while testing we found ourselves endlessly typing hadoop fs. Getting data sets from our Windows desktops, to the cluster, or inspecting job output files was just taking too many steps. It should be as easy for us to access files on HDFS as files on my local drive. So we created HDFS Explorer, which works just like Windows Explorer, but connects to the WebHDFS APIs so we can browse files on our clusters.

Solving a pain point. Making HDFS more accessible and thus friendlier. Very good reasons for such a tool.
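The WebHDFS approach the quote mentions is worth a closer look: every file operation becomes a plain REST call against the NameNode, which is what makes a desktop client like this possible without shelling out to `hadoop fs`. A minimal sketch of building such a call (hostname and path are placeholders; `50070` was the default NameNode HTTP port in that era, and `LISTSTATUS` is a real WebHDFS operation):

```python
def webhdfs_url(host, path, op, port=50070):
    """Build a WebHDFS REST URL of the kind a browser-style HDFS client
    issues instead of shelling out to `hadoop fs`."""
    return "http://{0}:{1}/webhdfs/v1{2}?op={3}".format(host, port, path, op)

# Listing a directory is then a plain HTTP GET, e.g. with urllib:
#   import urllib.request, json
#   url = webhdfs_url("namenode", "/user/data", "LISTSTATUS")
#   with urllib.request.urlopen(url) as r:
#       statuses = json.load(r)["FileStatuses"]["FileStatus"]

if __name__ == "__main__":
    print(webhdfs_url("namenode", "/user/data", "LISTSTATUS"))
    # http://namenode:50070/webhdfs/v1/user/data?op=LISTSTATUS
```

Because the API is just HTTP plus JSON, any Explorer-like tool can browse, download, and upload without a local Hadoop installation, which is exactly the accessibility argument Red Gate makes.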

Original title and link: HDFS Explorer: Accessing HDFS from Windows Explorer (NoSQL database©myNoSQL)


Enterprise Hadoop Market in 2013: Reflections and Directions

At the end of last year, Shaun Connolly (Hortonworks) posted a fantastic blog post looking at the Hadoop market and its future, reflecting on the open source community and its ability to continuously innovate at a fast pace, and putting all of this in business perspective using the story of Red Hat.

It is a must read.

Peter Goldmacher (analyst, Cowen & Co):

“We believe Hadoop is a big opportunity and we can envision a small number of billion dollar companies based on Hadoop. We think the bigger opportunity is Apps and Analytics companies selling products that abstract the complexity of working with Hadoop from end users and sell solutions into a much larger end market of business users. The biggest opportunity in our mind, by far, is the Big Data Practitioners that create entirely new business opportunities based on data where $1M spent on Hadoop is the backbone of a $1B business.”

Original title and link: Enterprise Hadoop Market in 2013: Reflections and Directions (NoSQL database©myNoSQL)


2013 and 2014 for Hadoop adoption

Syncsort’s Keith Kohl, in a guest post on Hortonworks’s blog (on an unrelated topic):

I heard a quote the other day that really made me think about the experiences I hear from our customers and partners: 2013 was the year companies tried to find budget for Hadoop, 2014 is the year they ARE budgeting for Hadoop projects.

If I remember correctly, Gartner’s data doesn’t fully support this, but on the other hand I’m convinced that more projects using Hadoop will be rolled into production this year. The only questions to be answered:

  1. will this number grow significantly?
  2. which distributions will see most of the growth?

Original title and link: 2013 and 2014 for Hadoop adoption (NoSQL database©myNoSQL)


Big Data's 2 big years is actually Hadoop

Doug Henschen makes two great points:

  1. Everyone wants to sell Hadoop:

    Practically every vendor out there has embraced Hadoop, going well beyond the fledgling announcements and primitive “connectors” that were prevalent two years ago. Industry heavyweights IBM, Microsoft, Oracle, Pivotal, SAP, and Teradata are all selling and supporting Hadoop distributions — partnering, in some cases, with Cloudera and Hortonworks. Four of these six have vendor-specific distributions, Hadoop appliances, or both.

  2. Then everyone is building SQL-on-Hadoop.

Original title and link: Big Data’s 2 big years is actually Hadoop (NoSQL database©myNoSQL)