Cloudant’s Mike Miller’s post published by GigaOm, arguing that Hadoop is a technology whose days have already passed, generated quite a stir on the internet:
In summary, Hadoop is an incredible tool for large-scale data processing on clusters of commodity hardware. But if you’re trying to process dynamic data sets, ad-hoc analytics or graph data structures, Google’s own actions clearly demonstrate better alternatives to the MapReduce paradigm. Percolator, Dremel and Pregel make an impressive trio and comprise the new canon of big data. I would be shocked if they don’t have a similar impact on IT as Google’s original big three of GFS, GMR, and BigTable have had.
However, Hadoop, MapReduce, and the related ecosystem are still going to be around as core functions for a long time (even the GigaOm article says ‘at least another decade’, which is forever in technology years) as they are still fundamentally good ways of processing a lot of data cheaply and effectively.
And others used the opportunity to look at the pros and cons of Hadoop:
The question I want to address is: for a new big-data project, or for companies about to embark on building a data platform, which technology should I base my data platform on?
Mike Miller’s article is correct on every point it makes. It’s only the title that’s psychologically sensationalist: 1) all technologies are sooner or later replaced; 2) there’s no technology that can solve all problems.
Blogging is dead. RSS is dead. Hadoop has its days numbered. But all of them are still around because they actually solve a problem.
Original title and link: Hadoop’s Time: Present and Future ( ©myNoSQL)