Mike Miller names three issues that Hadoop and its ecosystem are facing: investment, data complexity, and
Batch. I find this issue the most compelling. Big data, for better or worse, is still generally confined to the data warehouse. That largely means “offline” data that is subject to the classic extract-transform-load (ETL) workflow. Hadoop helps minimize the turnaround time for ETL, but batch processing still means something more akin to “tomorrow” than “real-time.” In contrast, Google’s original MapReduce paper gave an inspiring example of inserting MapReduce directly into a sequential C++ program for efficient real-time computation. Unfortunately, we find far too few real-time examples in production.
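The quoted passage refers to the word-count example from Google's MapReduce paper, where the map and reduce phases are ordinary functions invoked from a sequential program. A minimal sketch of that pattern in Python (the function names here are illustrative, not from the paper's C++ code):

```python
def map_phase(documents):
    # Map step: emit a (word, 1) pair for every word in every document.
    for doc in documents:
        for word in doc.split():
            yield (word, 1)

def reduce_phase(pairs):
    # Reduce step: group intermediate pairs by key and sum the counts.
    counts = {}
    for word, count in pairs:
        counts[word] = counts.get(word, 0) + count
    return counts

# Called inline from sequential code, no cluster or batch job required.
docs = ["the quick brown fox", "the lazy dog", "the fox"]
word_counts = reduce_phase(map_phase(docs))
print(word_counts)
```

In a real Hadoop job the same two functions would be distributed across a cluster with a shuffle between them; the point of the paper's example is that the programming model itself is lightweight enough to embed in ordinary code.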
There have been at least a dozen news stories in recent months proving that investment in Hadoop is a non-issue. And I'm sure that sooner rather than later we will tackle the remaining problems that do not fit nicely into Hadoop's area of expertise. Here is just the latest example.
Original title and link: Is Hadoop Our Only Hope? ( ©myNoSQL)