


How to Escape the Dark Valley of Your Hadoop Journey

The conclusion of the post is more balanced than its beginning, which reads like it's doomsday1:

The power of big data has been established, but our understanding of how to exploit it in the most productive way is still maturing. The initial toolset that came with Hadoop didn’t anticipate the kinds of enterprise applications and powerful analyses that businesses would want to build on it. Thus, many have fallen into the Dark Valley. But a new breed of middleware (APIs and DSLs) has arrived. They keep track of all the variables and peculiarities of Hadoop, abstract them away from development, and offer better reliability, sustainability and operational characteristics so that enterprises can find their way back out into the light.

Anyone without extensive experience with Hadoop will notice its complexity right away. But…

Is this complexity insurmountable? No. Does addressing Hadoop’s complexity really require huge budgets? No. Is it the fault of Hadoop that other tools aren’t working well with it? Definitely not. Can Hadoop and vendors offer a better experience? The answer is a resounding Yes.
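The "complexity" both quotes refer to is Hadoop's low-level MapReduce programming model, which the new middleware (APIs and DSLs such as Concurrent's Cascading) hides behind higher-level abstractions. As a rough illustration only, here is a minimal pure-Python sketch of the map/shuffle/reduce model itself, using no Hadoop APIs; all function names here are made up for the example:

```python
from collections import defaultdict

def map_phase(records, mapper):
    """Apply the user-supplied mapper to every input record."""
    for record in records:
        yield from mapper(record)

def shuffle(pairs):
    """Group intermediate (key, value) pairs by key, as Hadoop's shuffle does."""
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

def reduce_phase(groups, reducer):
    """Apply the user-supplied reducer to each key's list of values."""
    return {key: reducer(key, values) for key, values in groups.items()}

# Word count: the canonical MapReduce example.
def wc_mapper(line):
    for word in line.split():
        yield word, 1

def wc_reducer(word, counts):
    return sum(counts)

lines = ["the dark valley", "the hadoop journey", "the valley"]
result = reduce_phase(shuffle(map_phase(lines, wc_mapper)), wc_reducer)
print(result)  # {'the': 3, 'dark': 1, 'valley': 2, 'hadoop': 1, 'journey': 1}
```

Even this toy version shows why the model is awkward for enterprise applications: every job must be decomposed into mapper/reducer pairs by hand, which is exactly the boilerplate the higher-level DSLs take off developers' plates.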

  1. Keep in mind that the article was written by the CEO of Concurrent, a company that promotes better tools for Hadoop. 

Original title and link: How to Escape the Dark Valley of Your Hadoop Journey (NoSQL database©myNoSQL)