In the series of big announcements coming out this month, Cloudera and Revolution Analytics, the enterprise provider of R software, have announced a partnership to integrate Cloudera’s Hadoop distribution with the Revolution R Enterprise platform, offering R developers direct access to Hadoop data stores and the ability to write MapReduce jobs directly in R.
The integration packages, named RevoConnectR for Apache Hadoop, are already freely available on GitHub, and they will also receive commercial support with Revolution R Enterprise 5.0 Server for Linux.
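To give an idea of what “MapReduce jobs directly in R” looks like, here is a hypothetical sketch of a word-count job in the style of the rmr package bundled with RevoConnectR. The function names and signatures (`mapreduce`, `keyval`) are assumptions based on that package’s general shape, not taken from the announcement:

```r
# Hypothetical sketch: a word-count MapReduce job written directly in R,
# in the style of the rmr package. Function names and signatures are
# assumptions, not confirmed by the announcement.
library(rmr)

wordcount <- function(input, output) {
  mapreduce(
    input  = input,
    output = output,
    # map: split each line into words and emit (word, 1) pairs
    map = function(k, line) {
      words <- unlist(strsplit(line, "\\s+"))
      lapply(words, function(w) keyval(w, 1))
    },
    # reduce: sum the counts collected for each word
    reduce = function(word, counts) {
      keyval(word, sum(unlist(counts)))
    })
}
```

The appeal is that both the map and reduce steps are ordinary R functions, so an analyst can stay inside R instead of dropping down to Java.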
You can read more about this announcement on:
Original title and link: R and Hadoop: Revolution Analytics and Cloudera Partnership Announced ( ©myNoSQL)
Over two million analysts worldwide use R, and they come from an extremely diverse pool of industries that ranges from journalism to financial services to life sciences.
If you replace R with data analytics in general, this could be seen as a very appealing future for Big Data and data analytics: something like data analytics at work, everywhere.
But before losing myself in this perspective, I thought I should take a look at the present and see how what is being done now could lead to that amazing tomorrow:
- Tim O’Reilly said a couple of years ago “Data is the Intel inside” and since then we’re seeing lots and lots of companies trying to materialize this slogan.
- More new technologies for storage, processing, and analysis are being developed and reaching the market than in the previous 10 years.
- People are starting to embrace big data, overcoming their fears of privacy invasion.
These are promising signs that could form a solid basis for that future. On the other hand, the past and today’s reality tell a different story:
- Even though technology costs have decreased over time, the investment required to create data startups is still high.
- Financial institutions are not investing (much) in data technology companies.
- There are only a few companies that are able to accumulate significant amounts of useful data.
- There are even fewer companies that are able to effectively use these huge amounts of data.
What worries me is that even if we continue to see both the commoditization and the impressive improvement of data solutions, by the time all the tools are in place and accessible to everyone, as per the opening paragraph, the really valuable data will reside in just a few private, well-locked silos.
Original title and link: The Appealing Future of Big Data and Data Analytics ( ©myNoSQL)
In the blue corner we have IBM with Netezza as the analytic database, Cognos for BI, and SPSS for predictive analytics. In the green corner we have EMC with Greenplum and its partnership with SAS. And in the open source corner we have Hadoop and R.
Update: there’s also another corner I don’t know how to color where Teradata and its recently acquired Aster Data partner with SAS.
Who is ready to bet on which of these platforms will be processing the most data in the coming years?
The last couple of posts were about big data, and Jeffrey Horner’s presentation is in line with this topic:
If there is ever a time to learn R and web application development, it is now…in the age of Big Data. The upcoming release of R 2.13 will provide basic functionality for developing R web applications on the desktop via the internal HTTP server, but the interface is incompatible with rApache. Jeffrey will talk about Rack, a web server interface and package for R, and how you can start creating your own Big Data stories from the comfort of your own desktop.
Note: The video is missing the beginning and it is not a generic talk about R, so it will be of interest mostly to those already using R and planning to develop web applications directly from it.
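To make the Rack idea concrete, here is a hypothetical sketch of a minimal R web application in the style Jeffrey Horner describes, served through R’s internal HTTP server. The class and method names (`Rhttpd`, `$add`, `$start`) follow the later Rook package and are assumptions, not code from the talk:

```r
# Hypothetical sketch: a minimal Rack-style R web application,
# assuming a Rook-like interface to R's internal HTTP server.
library(Rook)

# A Rack application is just an R function of the request
# environment that returns a status, headers, and a body.
app <- function(env) {
  list(status  = 200L,
       headers = list('Content-Type' = 'text/plain'),
       body    = 'Hello from R')
}

# Register the app with R's built-in HTTP server and start serving.
server <- Rhttpd$new()
server$add(app = app, name = 'hello')
server$start()
```

The point of the design is that the application is a plain R function, so the same code that computes a result on your Big Data can hand it directly to the web, all from the desktop.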