7 quick facts about R

Based on a slide deck by David Smith:

  1. R is the highest paid IT skill (Dice.com survey, January 2014)
  2. R is the most-used data science language after SQL (O’Reilly survey, January 2014)
  3. R is used by 70% of data miners (Rexer survey, October 2013)
  4. R is #15 of all programming languages (RedMonk language rankings, January 2014)
  5. R is growing faster than any other data science language (KDNuggets survey, August 2013)
  6. R is the #1 Google Search for Advanced Analytics software (Google Trends, March 2014)
  7. R has more than 2 million users worldwide (Oracle estimate, February 2012)

I can see a couple of actionable items based on this list:

  1. if you’re interested in data science, you should consider R
  2. if you are already using R, ask for a raise

Original title and link: 7 quick facts about R (NoSQL database©myNoSQL)


Three opinions about the future of Hadoop and Data Warehouse

Building on the same data coming from Gartner and the very same talk from Hadoop Summit, Matt Asay [1] and Timo Elliott [2] place Hadoop on the data warehouse map.

Matt Asay writes in the ReadWrite article that Hadoop is not replacing existing data warehouses, but it’s taking all new projects:

Hadoop (and its kissing cousin, the NoSQL database) isn’t replacing legacy technology so much as it’s usurping its place in modern workloads. This means enterprises will end up supporting both legacy technology and Hadoop/NoSQL to manage both existing and new workloads […]

Of course, given “the effective price of core Hadoop distribution software and support services is nearly zero” at this point, as Jeff Kelly highlights, more and more workloads will gravitate to Hadoop. So while data warehouse vendors aren’t dead—they’re not even gasping for breath—they risk being left behind for modern data workloads if they don’t quickly embrace Hadoop and other 21st Century data infrastructure.

On his blog, Timo Elliott makes sure that there’s some SAP in that future picture and uses their Hadoop partner, Hortonworks, to depict it:

No. Ignoring the many advantages of Hadoop would be dumb. But it would be just as dumb to ignore the other revolutionary technology breakthroughs in the DW space. In particular, new in-memory processing opportunities have created a brand-new category that Gartner calls “hybrid transactional/analytic platforms” (HTAP).

[Figure: Hortonworks’ modern data architecture with Hadoop]

The future I’d like to see is the one where:

  1. there is an integrated data platform. Note that in this ideal world, integrated does not mean any form of ETL
  2. it supports, and runs in isolation, different workloads: from online transactions and bulk uploads to various forms of analytics
  3. data is stored on dedicated mediums (spinning disks, flash, memory) depending on the workloads that touch it
  4. data moves between these storage mediums automatically, but the platform allows fine-tuning to maintain the SLAs of the different components
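Point 4 above can be sketched as a tiny placement policy. This is my own illustrative toy, not any existing platform’s API: workload types map to default storage media, with a hook for the manual fine-tuning the wish list mentions.

```java
import java.util.EnumMap;
import java.util.Map;

// Toy sketch of the automatic-placement idea: defaults per workload,
// plus an operator override ("fine-tuning") to protect SLAs.
public class TieredPlacement {
    enum Medium { MEMORY, FLASH, SPINNING_DISK }
    enum Workload { ONLINE_TRANSACTION, INTERACTIVE_ANALYTICS, BULK_LOAD }

    private final Map<Workload, Medium> policy = new EnumMap<>(Workload.class);

    TieredPlacement() {
        // Defaults: hot transactional data in memory, analytics on flash,
        // bulk-loaded data on cheap spinning disks.
        policy.put(Workload.ONLINE_TRANSACTION, Medium.MEMORY);
        policy.put(Workload.INTERACTIVE_ANALYTICS, Medium.FLASH);
        policy.put(Workload.BULK_LOAD, Medium.SPINNING_DISK);
    }

    // Operator override: pin a workload's data to a specific medium.
    void pin(Workload w, Medium m) { policy.put(w, m); }

    Medium placementFor(Workload w) { return policy.get(w); }
}
```

A real platform would of course decide placement from observed access patterns, not a static table; the point is only that the policy and its override are separable concerns.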

  1. Matt Asay is VP of business development and corporate strategy at MongoDB 

  2. Timo Elliott is an Innovation Evangelist for SAP 

Original title and link: Three opinions about the future of Hadoop and Data Warehouse (NoSQL database©myNoSQL)


When is MongoDB the Right Tool for the Job?

This puts me in a quandary, because my recent stint on the job market has shown that just about everybody is using MongoDB, and I’ve just never been in any situation that I have needed to use it.

I also can’t foresee any situation where there is a solid technical reason for choosing MongoDB over its competitors either, and the last thing I want to do is lead people astray or foist my preconceptions onto them.


Then there’s the top comment on reddit.

Original title and link: When is MongoDB the Right Tool for the Job? (NoSQL database©myNoSQL)

via: http://daemon.co.za/2014/04/when-is-mongodb-the-right-tool/


How to choose the best tool for your Big Data project

A decision matrix by Wenming Ye, Microsoft Research senior research program manager:

[Figure: Wenming Ye’s decision matrix for Big Data tools]

If things were that easy, we wouldn’t even have a buzzword for Big Data.

Original title and link: How to choose the best tool for your Big Data project (NoSQL database©myNoSQL)

via: http://www.lifehacker.com.au/2014/04/how-to-choose-the-best-tool-for-your-big-data-project/


A genetic census of America

Nice little study and visualization example:

Where did the ancestors of today’s Americans come from? Do Americans in the Midwest hail from similar places of the world as in the Northeast, or as in the South?

[Visualization: A Genetic Census of America]

Original title and link: A genetic census of America (NoSQL database©myNoSQL)

via: http://blogs.ancestry.com/ancestry/2014/04/04/a-genetic-census-of-america/


Aerospike has C and Node.js Clients for Mac OS X Available [sponsor]

Aerospike, sponsor of myNoSQL, has a quick announcement for developers using OS X:


In case you missed it, Aerospike C client 3.0.51 and Node.js client for the Mac OS X are now available. These client libraries also run on CentOS 6, RHEL 6, Debian 6+ and Ubuntu 12.04.

Check these out.

Original title and link: Aerospike has C and Node.js Clients for Mac OS X Available [sponsor] (NoSQL database©myNoSQL)


Big data: are we making a big mistake?

In a very entertaining article for FT.com, Tim Harford writes:

Cheerleaders for big data have made four exciting claims, each one reflected in the success of Google Flu Trends: that data analysis produces uncannily accurate results; that every single data point can be captured, making old statistical sampling techniques obsolete; that it is passé to fret about what causes what, because statistical correlation tells us what we need to know; and that scientific or statistical models aren’t needed because, to quote “The End of Theory”, a provocative essay published in Wired in 2008, “with enough data, the numbers speak for themselves”.

Unfortunately, these four articles of faith are at best optimistic oversimplifications.

  1. I’m very sure that the first wheel was not a Pirelli.
  2. If I’m yelling that I want a blue unicorn, I’m pretty sure sooner rather than later a bunch of people will try to sell me one.
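On the second of Harford’s four claims, that capturing every data point makes sampling obsolete: a quick simulation shows why statisticians aren’t worried. This is my own toy demonstration, not from the article; a 1% random sample estimates a population mean almost as well as scanning all of it.

```java
import java.util.Random;

// Sanity check: a small random sample recovers a population statistic
// without touching "every single data point".
public class SamplingDemo {
    public static double sampleMean(double[] population, int n, long seed) {
        Random rnd = new Random(seed);
        double sum = 0;
        for (int i = 0; i < n; i++) {
            // sample with replacement from the population
            sum += population[rnd.nextInt(population.length)];
        }
        return sum / n;
    }

    public static void main(String[] args) {
        // "Population" of one million values in [0, 1); true mean near 0.5.
        Random rnd = new Random(42);
        double[] population = new double[1_000_000];
        double total = 0;
        for (int i = 0; i < population.length; i++) {
            population[i] = rnd.nextDouble();
            total += population[i];
        }
        double trueMean = total / population.length;
        // A 10,000-element sample: 1% of the data, standard error ~0.003.
        double estimate = sampleMean(population, 10_000, 7L);
        System.out.printf("true=%.4f sample=%.4f%n", trueMean, estimate);
    }
}
```

The catch Harford actually makes is not about sample size but sample bias (“N = All” is rarely all); no amount of data volume fixes a skewed collection process.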

✚ As you’d expect the Hacker News thread is also highly entertaining:

Another conclusion to draw from this article (which I really enjoyed, by the way) is that Big Data has been turned into one of the most abstract buzzwords ever. You thought “cloud” was bad? “Big Data” is far worse in its specificity.

Original title and link: Big data: are we making a big mistake? (NoSQL database©myNoSQL)

via: http://www.ft.com/intl/cms/s/2/21a6e7d8-b479-11e3-a09a-00144feabdc0.html


Thoughts on The Future of Hadoop in Enterprise Environments

In case you are looking for some sort of reassurance that big companies are into Hadoop, check SAP Innovation Evangelist Timo Elliott’s perspective on the Hadoop market. It should be no surprise what he sees as the main trend:

Companies want to take advantage of the cost advantages of Hadoop systems, but they realize that Hadoop doesn’t yet do everything they need (for example, Gartner surveys show a steady decline in the proportion of CIOs that believe that NoSQL will replace existing data warehousing rather than augmenting it – now just 3%). And companies see the performance advantages of in-memory processing, but aren’t sure how it can make a difference to their business.

Original title and link: Thoughts on The Future of Hadoop in Enterprise Environments (NoSQL database©myNoSQL)

via: http://timoelliott.com/blog/2014/03/thoughts-on-the-future-of-hadoop-in-enterprise-environments.html


Continuent Replication to Hadoop – Now in Stereo!

Hopefully by now you have already seen that we are working on Hadoop replication. I’m happy to say that it is going really well. I’ve managed to push a few terabytes of data and different data sets through into Hadoop on Cloudera, HortonWorks, and Amazon’s Elastic MapReduce (EMR). For those who have been following my long association with the IBM InfoSphere BigInsights Hadoop product, I’m pleased to say that it’s working there too.

Continuent is the company behind the Tungsten connector and replicator products, which, in their words:

Continuent Tungsten allows enterprises running business-critical MySQL applications to provide high-availability (HA) and globally redundant disaster recovery (DR) capabilities for cloud-based and private data center installations. Tungsten Replicator provides high performance open source data replication for MySQL and Oracle and is a key part of Continuent Tungsten.

Original title and link: Continuent Replication to Hadoop – Now in Stereo! (NoSQL database©myNoSQL)

via: http://mcslp.wordpress.com/2014/03/31/continuent-replication-to-hadoop-now-in-stereo/


Writing your own @Resource: Apache TomEE + NoSQL

Alex Soto:

The @Resource annotation appeared for the first time in the Java EE 5 specification. When you annotate an attribute with @Resource, the container will be responsible for injecting the requested resource. […] With Apache TomEE you can do it by providing your own @Resource provider class. In this post we are going to see how to write your own provider for MongoDB so you can use @Resource for MongoDB too.

The code seems pretty simple, but the post doesn’t go into details like lifecycle and such, which usually make these things much more interesting.
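To make the pattern concrete, here is a minimal sketch of the shape such a provider class takes. The names are my own, not from Alex Soto’s post: TomEE can instantiate a plain Java class declared in `resources.xml`, set its properties, and hand it out via @Resource. A real provider would build a `com.mongodb.MongoClient` from these settings; that part is described in a comment to keep the sketch dependency-free.

```java
// Hypothetical resource definition bean for TomEE. TomEE populates the
// setters from the <Resource> properties in resources.xml, then the
// configured instance is injectable with @Resource.
public class MongoResourceDefinition {
    private String host = "localhost";
    private int port = 27017;
    private String database;

    public void setHost(String host) { this.host = host; }
    public void setPort(int port) { this.port = port; }
    public void setDatabase(String database) { this.database = database; }

    public String getHost() { return host; }
    public int getPort() { return port; }
    public String getDatabase() { return database; }

    // A real provider would return new MongoClient(host, port) here;
    // this sketch just exposes the connection string it would use.
    public String connectionString() {
        return "mongodb://" + host + ":" + port + "/" + database;
    }
}
```

Usage would then look something like `@Resource(name = "mongo") MongoResourceDefinition mongo;` with a matching `<Resource id="mongo" class-name="MongoResourceDefinition">` entry in `resources.xml` — see the linked post for the full, working version.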

Original title and link: Writing your own @Resource: Apache TomEE + NoSQL (NoSQL database©myNoSQL)

via: http://www.lordofthejars.com/2014/03/apache-tomee-nosql-writing-your-own.html


April 3 Webinar: The BlueKai Playbook for Scaling to 10 Trillion Transactions a Month [sponsor]

myNoSQL’s supporter Aerospike is getting ready for a new case study webinar:


As the industry’s largest online data exchange, BlueKai knows a thing or two about pushing the limits of scale. Find out how they are processing up to 10 trillion transactions per month from Vice President of Data Delivery, Ted Wallace.

Register today for the webinar this Thursday, Apr. 3.

Original title and link: April 3 Webinar: The BlueKai Playbook for Scaling to 10 Trillion Transactions a Month [sponsor] (NoSQL database©myNoSQL)


A practical comparison of Map-Reduce in MongoDB and RavenDB

Ben Foster looks at MongoDB’s Map-Reduce and aggregation framework and then compares them with RavenDB’s Map-Reduce:

I thought it would be interesting to do a practical comparison of Map-Reduce in both MongoDB and RavenDB.

There are more differences than similarities — and I’m not referring to the API differences, but to fundamental differences in the way they operate.

✚ RavenDB’s author has a follow-up post in which he underlines another major difference: RavenDB’s Map-Reduce operates as an index, while MongoDB’s Map-Reduce is an online operation.
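For readers unfamiliar with the model both posts assume: MongoDB’s mapReduce has a map function emit (key, value) pairs and a reduce function fold all values for a key into one result. The same contract can be modeled in a few lines of dependency-free Java over an in-memory collection — a word count, purely to illustrate the shape, not either database’s actual execution engine:

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.function.BiConsumer;

// In-memory sketch of the map-reduce contract: map emits (key, value)
// pairs, reduce folds each key's values into a single result.
public class MapReduceSketch {
    public static Map<String, Integer> wordCount(List<String> docs) {
        // map phase: emit (word, 1) for every word in every document
        Map<String, List<Integer>> emitted = new HashMap<>();
        BiConsumer<String, Integer> emit =
            (k, v) -> emitted.computeIfAbsent(k, x -> new ArrayList<>()).add(v);
        for (String doc : docs) {
            for (String word : doc.toLowerCase().split("\\s+")) {
                if (!word.isEmpty()) emit.accept(word, 1);
            }
        }
        // reduce phase: fold the emitted values per key
        Map<String, Integer> result = new HashMap<>();
        for (Map.Entry<String, List<Integer>> e : emitted.entrySet()) {
            int sum = 0;
            for (int v : e.getValue()) sum += v;
            result.put(e.getKey(), sum);
        }
        return result;
    }
}
```

The difference the follow-up post highlights lives outside this sketch: RavenDB keeps the reduced results materialized as an index and updates them incrementally, while MongoDB recomputes on demand when you run the operation.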

Original title and link: A practical comparison of Map-Reduce in MongoDB and RavenDB (NoSQL database©myNoSQL)

via: http://benfoster.io/blog/map-reduce-in-mongodb-and-ravendb