Ben Werther in the post announcing Platfora:
The questions that these swirling datasets would one day support couldn’t be known yet. And yet to build a data warehouse I’d be expected to perfectly predict what data would be important and how I’d want to question it, years in advance, or spend months rearchitecting every time I was wrong. This is actually considered ‘best practice’.
The brilliance of what Hadoop does differently is that it doesn’t ask for any of these decisions up front. You can land raw data, in any format and at any size, in Hadoop with virtually no friction. You don’t have to think twice about how you are going to use the data when you write it. No more throwing away data because of cost, friction or politics.
You cannot do much without the right data models. But if having strict data models means letting data slip through your fingers, or ignoring the datasets that do not perfectly match your well-designed models, then strictness is not really an option.
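The trade-off between the two approaches can be illustrated with a minimal sketch. The example below is hypothetical (the event records and column set are invented for illustration): a strict schema-on-write model silently drops records that don't match the columns predicted up front, while landing raw data and projecting at query time keeps everything available for questions asked later.

```python
import json

# Hypothetical events arriving in evolving, inconsistent shapes.
raw_events = [
    '{"user": "alice", "action": "click", "page": "/home"}',
    '{"user": "bob", "action": "purchase", "amount": 19.99}',
    '{"user": "carol", "action": "click", "referrer": "news"}',
]

# Schema-on-write: a strict model forces us to drop records that
# do not fit the columns we predicted in advance.
STRICT_COLUMNS = ("user", "action", "page")

def load_strict(lines):
    table = []
    for line in lines:
        record = json.loads(line)
        if set(record) != set(STRICT_COLUMNS):
            continue  # data lost: it did not fit the model
        table.append(record)
    return table

# Schema-on-read: land the raw lines untouched; apply whatever
# projection a future question needs at query time.
def query_actions(lines):
    return [json.loads(line).get("action") for line in lines]

print(len(load_strict(raw_events)))  # only 1 record survives the strict model
print(query_actions(raw_events))     # all 3 records remain queryable
```

Only one of the three events fits the strict model; the raw store answers the later question using all of them.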
Original title and link: The Brilliance of Hadoop ( ©myNoSQL)