Data science: All content tagged as Data science in NoSQL databases and polyglot persistence

R: What Is and How Can It Help?

From Loraine Lawson's interview with Jeff Erhardt[1].

What is R?

R is an open source statistical programming language. The easiest way to think about it is that its largest commercial competitor in the States is a company called SAS, and while it's not a perfect analogy, one way to think about R is as an open source version of SAS. It's not perfectly correct, but for people who have not heard of R, that's one way to explain it.

Where can R help?

  • analyzing and gaining meaning from collected data
  • developing models and extracting insight from data (see the sketch after this list)
  • implementing these analytics within an enterprise and disseminating the knowledge across the enterprise
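To make the second bullet a bit more concrete: in R, fitting a simple model to collected data is essentially a one-liner such as lm(y ~ x). For readers who want to see the idea without R installed, here is a comparable sketch using only Python's standard library (Python 3.10+); the data and variable names are invented for illustration, and none of this comes from the interview:

```python
# A minimal "extract insight from collected data" sketch: fit a linear trend
# and report slope, intercept, and correlation. Requires Python 3.10+ for
# statistics.linear_regression; the numbers are invented sample data.
import statistics

ad_spend = [1.0, 2.0, 3.0, 4.0, 5.0]   # hypothetical weekly spend
revenue  = [2.1, 3.9, 6.2, 8.1, 9.8]   # hypothetical weekly revenue

fit = statistics.linear_regression(ad_spend, revenue)
r = statistics.correlation(ad_spend, revenue)

print(f"revenue ~ {fit.slope:.2f} * spend + {fit.intercept:.2f} (r = {r:.2f})")
```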

Now, are you ready to bet on what the data processing platform of tomorrow will be?


  1. Jeff Erhardt: COO of Revolution Analytics, the company offering products and services for R  

Original title and link: R: What Is and How Can It Help? (NoSQL databases © myNoSQL)

via: http://www.itbusinessedge.com/cm/community/features/interviews/blog/the-power-of-r-more-companies-using-language-for-business-analytics/?cs=46325


The Data Processing Platform for Tomorrow

In the blue corner we have IBM, with Netezza as the analytic database, Cognos for BI, and SPSS for predictive analytics. In the green corner we have EMC, with Greenplum and the partnership with SAS[1]. And in the open source corner we have Hadoop and R.

Update: there’s also another corner, one I don’t know how to color, where Teradata and its recently acquired Aster Data partner with SAS.

Who is ready to bet on which of these platforms will be processing more data in the coming years?


  1. GigaOm has a good article on this subject here  

Original title and link: The Data Processing Platform for Tomorrow (NoSQL databases © myNoSQL)


Origin of BigData and How Hadoop Can Help

Michael Olson[1] on the origins of BigData, in an interview with ODBMS Industry Watch:

It used to be that data was generated at human scale. You’d buy or sell something and a transaction record would happen. You’d hire or fire someone and you’d hit the “employee” table in your database.

These days, data comes from machines talking to machines. The servers, switches, routers and disks on your LAN are all furiously conversing. The content of their messages is interesting, and also the patterns and timing of the messages that they send to one another. (In fact, if you can capture all that data and do some pattern detection and machine learning, you have a pretty good tool for finding bad guys breaking into your network.) Same is true for programmed trading on Wall Street, mobile telephony and many other pieces of technology infrastructure we rely on.

and how Hadoop can help:

Hadoop knows how to capture and store that data cheaply and reliably, even if you get to petabytes. More importantly, Hadoop knows how to process that data — it can run different algorithms and analytic tools, spread across its massively parallel infrastructure, to answer hard questions on enormous amounts of information very quickly.
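None of this is from the interview, but to make the "run different algorithms spread across its massively parallel infrastructure" point concrete, here is a minimal sketch in the Hadoop Streaming style: a mapper and a reducer that count log messages per source host. The log format (whitespace-separated fields with the source host in the second column) is an assumption made purely for illustration.

```python
# Minimal MapReduce sketch in the Hadoop Streaming style: count messages per
# source host from machine-generated log lines. Hadoop runs many copies of the
# mapper and reducer in parallel across the cluster; lines arrive on stdin and
# key/value pairs are emitted on stdout, tab-separated. The assumed log layout
# ("timestamp source destination message") is made up for illustration.
import sys
from itertools import groupby

def mapper(lines):
    for line in lines:
        fields = line.split()
        if len(fields) >= 2:
            source = fields[1]
            print(f"{source}\t1")

def reducer(lines):
    # Hadoop sorts mapper output by key before the reduce phase,
    # so all counts for one host arrive together.
    keyed = (line.rstrip("\n").split("\t") for line in lines)
    for host, group in groupby(keyed, key=lambda kv: kv[0]):
        total = sum(int(count) for _, count in group)
        print(f"{host}\t{total}")

if __name__ == "__main__":
    mode = sys.argv[1] if len(sys.argv) > 1 else "map"
    (mapper if mode == "map" else reducer)(sys.stdin)
```

With Hadoop Streaming the same script would be passed as the mapper and reducer commands; locally you can approximate a job with `cat logs | python script.py map | sort | python script.py reduce`, since the sort stands in for Hadoop's shuffle phase.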


  1. Michael Olson: CEO of Cloudera, former CEO of Sleepycat Software, makers of Berkeley DB (acquired by Oracle), @mikeolson  

Original title and link: Origin of BigData and How Hadoop Can Help (NoSQL databases © myNoSQL)


Stop Trying to Put a Monetary Value on Data - It's the Wrong Path

Rob Karel:

data in and of itself has no value!
The only value data/information has to offer – and the reason I do still consider it an “asset” at all – is in the context of the business processes, decisions, customer experiences, and competitive differentiators it can enable.

Just a different way to correctly say that BigData is snake oil.

Original title and link: Stop Trying to Put a Monetary Value on Data - It’s the Wrong Path (NoSQL databases © myNoSQL)

via: http://www.information-management.com/blogs/data_management_business_intelligence_BPM_ROI-10020057-1.html


The Birth of a Word: The Future of Data Science

Even if the name of this TED talk is “The birth of a word”, I would have called it anything from “the future of data science” to “extreme data analysis” to “brilliant information visualization”. Anyway, it is a must see:

Original title and link: The Birth of a Word: The Future of Data Science (NoSQL databases © myNoSQL)


Does Big Data Need Big Budgets?

If you asked me this question, I’m sure my initial answer would be: “absolutely”. And I guess I would not be alone. But is that the right answer?

While watching GigaOm’s Structure Big Data event, there were two talks that gave me a different perspective on this question.

First, there was the interview with Kevin Krim, the Global Head of Bloomberg Digital, who told the story of adopting, mining, and materializing Big Data inside a corporation that didn’t believe in it and didn’t allocate large budgets to it. The result: collecting more than a terabyte of data every day, from 100 data points for every pageview, and running 15 different parallel algorithms to make recommendations that sometimes led to 10x clickthrough rates. The interview is embedded at the end of this post.

The second story, from Pete Warden, founder of OpenHeatMap, is even more exciting. Pete used a combination of the right tools deployed in the cloud to mine Facebook data: 500 million pages for $100 (that was the cost before being sued by Facebook).

Pete Warden distilled his experience with these tools and has made available at datasciencetoolkit.org a collection of data tools and open APIs, both as an Amazon AMI to run in the cloud and as a VMware image to run locally. I highly recommend watching Pete’s talk, which I’ve embedded below.

While it depends on which definition of BigData we use, both these talks lead to a simple conclusion:

  • you need imagination to get started with Big Data
  • you need to use the right tools for getting good results

Is this going to work at the scale of Twitter, LinkedIn, Facebook, or Google? Probably not. But before getting to that size, you need to start somewhere. And both these talks suggest a clear answer to the question “does big data need big budgets?”: not always.


R and the web in 2011

The last couple of posts were about BigData, and Jeffrey Horner’s presentation is in line with this topic:

If there is ever a time to learn R and web application development, it is now…in the age of Big Data. The upcoming release of R 2.13 will provide basic functionality for developing R web applications on the desktop via the internal HTTP server, but the interface is incompatible with rApache. Jeffrey will talk about Rack, a web server interface and package for R, and how you can start creating your own Big Data stories from the comfort of your own desktop.

Note: The video is missing the beginning and it is not a generic talk about R, so it will be interesting mostly to those using R and planning to develop web applications directly from R.
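The talk (and Rack for R) is R-specific, so there is no R code here; but since these notes stick to a single example language, here is a conceptual sketch of the same "web server interface" idea using Python's standard-library WSGI support. It is an analogy to what a Rack-style interface gives R developers, not the rApache/Rack API itself:

```python
# Conceptual analogue of a Rack/rApache-style web interface, sketched with
# Python's standard-library WSGI support. The point is the shape of the
# contract: the "application" is a plain function that receives the request
# environment and returns the response body, which is what makes it easy to
# expose analysis code over HTTP from the desktop.
from wsgiref.simple_server import make_server

def app(environ, start_response):
    # In an R/Rack setting this is where a request would be dispatched to
    # analysis code; here we simply echo the requested path.
    body = f"You asked for {environ['PATH_INFO']}\n".encode("utf-8")
    start_response("200 OK", [("Content-Type", "text/plain"),
                              ("Content-Length", str(len(body)))])
    return [body]

if __name__ == "__main__":
    with make_server("127.0.0.1", 8000, app) as server:  # local demo only
        server.serve_forever()
```

The appeal described in the quote is the same: once analysis code can be called like app() above, serving a Big Data story from your own desktop is just a matter of pointing a browser at it.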

Original title and link: R and the web in 2011 (NoSQL databases © myNoSQL)


Strategies for Exploiting Large-scale Data

In a guest post hosted on the Cloudera blog, Bob Gourley[1] enumerates the characteristics of working with Big Data from the perspective of federal agencies.

I think these can be generalized to all businesses and problems that require big data:

Federal IT leaders are increasingly sharing lessons learned across agencies. But approaches vary from agency to agency.

For a long time each business worked in its own silo.

Yesterday, tools and algorithms represented the competitive advantage. Today the competitive advantage is in data. Sharing algorithms, experience, and ideas is safe.

federal thought leaders across all agencies are confronted with more data from more sources, and a need for more powerful analytic capabilities

If you are not confronted with this problem, it is just because you haven’t realized it yet. If you think single sources of data are good enough, your business might be at risk.

Large-scale distributed analysis over large data sets is often expected to return results almost instantly.

Name a single manager, business, or problem solver who wouldn’t like to get immediate answers.

  • Most agencies face challenges that involve combining multiple data sets — some structured, some complex — in order to answer mission questions.

  • increasingly seeking automated tools, more advanced models and means of leveraging commodity hardware and open source software to conduct distributed analysis over distributed data stores

Ditto

considering ways of enhancing the ability of citizens to contribute to government understanding by use of crowd-sourcing type models

Werner Vogels mentioned in his Strata talk using Amazon Mechanical Turk for adding human-based processing for data control, data validation and correction, and data enrichment.


  1. Bob Gourley: editor of CTOvision.com and a former Defense Intelligence Agency (DIA) CTO, @bobgourley  

Original title and link: Strategies for Exploiting Large-scale Data (NoSQL databases © myNoSQL)


The Fourth Paradigm: Data-Intensive Scientific Discovery

This book is about a new, fourth paradigm for science based on data-intensive computing. In such scientific research, we are at a stage of development that is analogous to when the printing press was invented. Printing took a thousand years to develop and evolve into the many forms it takes today. Using computers to gain understanding from data created and stored in our electronic data stores will likely take decades — or less.

In Jim Gray’s last talk to the Computer Science and Telecommunications Board on January 11, 2007, he described his vision of the fourth paradigm of scientific research. He outlined a two-part plea for the funding of tools for data capture, curation, and analysis, and for a communication and publication infrastructure. He argued for the establishment of modern stores for data and documents that are on par with traditional libraries.

Microsoft Research has made the book available for free here. During his Strata conference presentation, Werner Vogels encouraged everyone to read it.

Original title and link: The Fourth Paradigm: Data-Intensive Scientific Discovery (NoSQL databases © myNoSQL)


Big Data: Millionfold Mashups and the Shape of Data

Philip (flip) Kromer (infochimps.com) talking about the origins of big data, generating big data, and some ideas on using big data. A very interesting talk.

Original title and link: Big Data: Millionfold Mashups and the Shape of Data (NoSQL databases © myNoSQL)


Big Data Analysis at BackType

RWW has a nice post diving into the data flow and the tools used by BackType, a company with only 3 engineers, to manage and analyze large amounts of data.

They’ve invented their own language, Cascalog, to make analysis easy, and their own database, ElephantDB, to simplify delivering the results of their analysis to users. They’ve even written a system to update traditional batch processing of massive data sets with new information in near real-time.

Some highlights:

  • 25 terabytes of compressed binary data, over 100 billion individual records
  • all services and data storage are on Amazon S3 and EC2
  • between 60 and 150 EC2 instances servicing an average of 400 requests/s
  • Clojure and Python as platform languages
  • Hadoop, Cascading and Cascalog are central pieces of BackType’s platform
  • Cascalog, a Clojure-based query language for Hadoop, was created and open sourced by BackType’s engineer Nathan Marz
  • ElephantDB, the storage solution, is a read-only cluster built on top of BerkeleyDB files
  • Crawlers place data in Gearman queues for processing and storing
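The quoted paragraph's last point, a system that updates batch results with new information in near real-time, is the architecturally interesting part (Nathan Marz later popularized this batch-view-plus-realtime-delta idea as the Lambda architecture). Here is a purely illustrative Python sketch of the pattern; all names and data structures are invented, and none of this is BackType's code:

```python
# Illustrative-only sketch of serving queries from a precomputed batch view
# merged with a small real-time delta. In BackType's actual system the batch
# views are built with Hadoop/Cascalog and served from ElephantDB; here both
# sides are plain in-memory counters.
from collections import Counter

batch_view = Counter()      # rebuilt periodically by the slow batch job
realtime_delta = Counter()  # updated as new records arrive between batch runs

def rebuild_batch_view(all_records):
    """Full recomputation over the complete data set (the thorough path)."""
    global batch_view, realtime_delta
    batch_view = Counter(r["url"] for r in all_records)
    realtime_delta = Counter()  # the new batch view absorbs the old delta

def ingest(record):
    """Cheap incremental update applied as soon as a record arrives."""
    realtime_delta[record["url"]] += 1

def mention_count(url):
    """Queries see batch results plus whatever arrived since the last run."""
    return batch_view[url] + realtime_delta[url]

if __name__ == "__main__":
    rebuild_batch_view([{"url": "example.com/a"}, {"url": "example.com/a"}])
    ingest({"url": "example.com/a"})
    print(mention_count("example.com/a"))  # 3
```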

BackType’s data flow is presented in the following diagram:

[Diagram: BackType data flow]

Included below is an interview with Nathan about Cascalog:


Original title and link: Big Data Analysis at BackType (NoSQL databases © myNoSQL)

via: http://www.readwriteweb.com/hack/2011/01/secrets-of-backtypes-data-engineers.php


The Beauty of Data Visualization

David McCandless talking at TED about data visualization:

Data science is the future, and there cannot be data science without data visualization (and vice versa).

Or, in Al Bundy’s Frank Sinatra words: you can’t have one without the other.

Original title and link: The Beauty of Data Visualization (NoSQL databases © myNoSQL)