
MapReduce: all content about MapReduce in NoSQL databases and polyglot persistence

SSDs and MapReduce performance

Conclusions from comparing SSDs and HDDs across different cluster scenarios, from the cost perspective of both performance and storage capacity:

  • For a new cluster, SSDs deliver up to 70 percent higher MapReduce performance compared to HDDs of equal aggregate IO bandwidth.
  • For an existing HDD cluster, adding SSDs leads to more gains if configured properly.
  • On average, SSDs show 2.5x higher cost-per-performance, a gap far narrower than the 50x difference in cost-per-capacity.

The post offers plenty of detail about the tests that were run and their results, but the 3 bullets above should be enough to drive your decision.
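To make the cost numbers concrete, here is a back-of-the-envelope sketch of the calculation behind those ratios. All prices and throughput figures below are made-up placeholders rather than numbers from the Cloudera benchmark; only the shape of the comparison matters.

```java
// Hypothetical cost-per-performance vs. cost-per-capacity comparison for a
// MapReduce cluster. Every number is an illustrative placeholder.
public class SsdHddCostComparison {

    public static void main(String[] args) {
        // Assumed device characteristics (placeholders, not benchmark data).
        double hddCost = 250;        // $ per drive
        double hddThroughput = 150;  // MB/s of sequential IO
        double hddCapacity = 4000;   // GB

        double ssdCost = 1200;       // $ per drive
        double ssdThroughput = 500;  // MB/s of sequential IO
        double ssdCapacity = 800;    // GB

        // Normalize cost by throughput and by capacity.
        double costPerPerfRatio = (ssdCost / ssdThroughput) / (hddCost / hddThroughput);
        double costPerGbRatio = (ssdCost / ssdCapacity) / (hddCost / hddCapacity);

        System.out.printf("SSD/HDD cost-per-performance ratio: %.1fx%n", costPerPerfRatio);
        System.out.printf("SSD/HDD cost-per-capacity ratio:    %.1fx%n", costPerGbRatio);
    }
}
```

With real market prices plugged in, the post argues that the first ratio comes out around 2.5x while the second is closer to 50x, which is why SSDs look far better through a performance lens than through a storage lens.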

Original title and link: SSDs and MapReduce performance (NoSQL database©myNoSQL)

via: http://blog.cloudera.com/blog/2014/03/the-truth-about-mapreduce-performance-on-ssds/


Cloudera Search Interface: Inside Cloudera's customer support Enterprise Data Hub

Great use of their own technologies to better serve the customer:

This application goes way beyond simple indexing and searching. We are using Cloudera Search, HBase, and MapReduce to process, store, and visualize stack traces that wouldn’t be possible with just a search index. How Monocle Stack Trace integrates with the larger CSI application goes way beyond that, though. It’s a great feeling when you are able to execute a search in Monocle Stack Trace that links directly to a point in time in a customer log file that an Impala query returned after churning through tens of GBs of data — done interactively from a Web UI on the order of a second or two.

I can easily see this becoming a real product used by software companies that offer direct customer support.
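Under the hood Cloudera Search is Solr, so the query side of something like Monocle Stack Trace can be pictured as an ordinary SolrJ lookup. The sketch below is not Cloudera’s code; the collection name and field names are assumptions made purely for illustration.

```java
import org.apache.solr.client.solrj.SolrQuery;
import org.apache.solr.client.solrj.impl.HttpSolrClient;
import org.apache.solr.client.solrj.response.QueryResponse;
import org.apache.solr.common.SolrDocument;

public class StackTraceSearch {

    public static void main(String[] args) throws Exception {
        // Hypothetical Solr endpoint and collection holding indexed stack traces.
        try (HttpSolrClient solr = new HttpSolrClient.Builder(
                "http://search-host:8983/solr/stacktraces").build()) {

            // Look up a particular exception for one customer; field names are illustrative.
            SolrQuery query = new SolrQuery("exception_class:NullPointerException");
            query.addFilterQuery("customer_id:1234");
            query.setRows(10);

            QueryResponse response = solr.query(query);
            for (SolrDocument doc : response.getResults()) {
                // Each hit carries a pointer back into the customer log file,
                // which is what makes the "link to a point in time" workflow possible.
                System.out.println(doc.getFieldValue("log_file") + " @ "
                        + doc.getFieldValue("log_offset"));
            }
        }
    }
}
```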

Original title and link: Cloudera Search Interface: Inside Cloudera’s customer support Enterprise Data Hub (NoSQL database©myNoSQL)

via: http://blog.cloudera.com/blog/2014/02/secrets-of-cloudera-support-inside-our-own-enterprise-data-hub/


The Forrester Wave for Hadoop market

Update: I’d like to thank the people who pointed out in the comment thread that I got quite a few things wrong in my comments about the report. I don’t believe in taking down posts that have been out for a while, so please be warned that this article can basically be ignored.

Thank you, and my apologies for the comments that misinterpreted the report.


This is the Q1 2014 Forrester Wave for Hadoop:

Forrester wave for Hadoop

A couple of thoughts:

  1. Cloudera, Hortonworks, MapR are positioned very (very) close.

    1. Hortonworks is positioned closer to the top right, meaning they report more customers/a larger install base.
    2. MapR is higher on the vertical axis, meaning that MapR’s strategy is slightly better.

      For me, MapR’s strategy can be briefly summarized as:

      1. address some of the limitations in the Hadoop ecosystem
      2. provide API-compatible products for major components of the Hadoop ecosystem
      3. use these trademarked Apache product names to advertise their products

      I think the 1st point above explains the better positioning of MapR’s current offering.

    3. Even though Cloudera was the first pure-play Hadoop distribution, it’s positioned behind both Hortonworks and MapR.

  2. IBM has the largest market presence. That’s a big surprise, as I very rarely hear clear messages from IBM.

  3. IBM and Pivotal Software are considered to have the strongest strategy. That’s another interesting point in Forrester’s report. Except for the fact that IBM has a ton of data products and that Pivotal Software offers more than Hadoop, I don’t know what exactly explains this position.

    The Forrester report’s Strategy positioning is based on quantifying the following categories: Licensing and pricing, Ability to execute, Product road map, and Customer support. IBM and Pivotal are ranked first in all these categories (with maximum marks for the last 3). As a comparison, Hortonworks has 3/5 for Ability to execute — this must be related only to budget; Cloudera has 3/5 for both Ability to execute and Customer support.

    Pivotal is 3rd from last in terms of current offering. I guess my hypothesis for why Pivotal is ranked 1st in terms of strategy is wrong.

  4. Microsoft, which through its collaboration with Hortonworks came up with HDInsight (basically enabling Hadoop for Excel and for its data warehouse offering), is positioned 2nd from last on all 3 axes.

    No one seems to love Microsoft anymore.

  5. While not a pure Hadoop player, DataStax has been offering the DataStax Enterprise platform, which includes support for analytics through Hadoop and search through Solr, for at least 2 years. That’s actually way before anyone else from the group of companies in Forrester’s report had anything similar1.

    This report, though, focuses only on “general-purpose Hadoop solutions based on a differentiated, commercial Hadoop distribution”.

You can download the report after registering on Hortonworks’ site.


  1. DataStax is my employer. But what I wrote is a pure fact. 

Original title and link: The Forrester Wave for Hadoop market (NoSQL database©myNoSQL)


Hortonworks raises $100M to grow engineering and company's ecosystem globally

Derrick Harris for GigaOm has the scoop:

Hadoop vendor Hortonworks has raised $100 million in a new round of venture capital led by BlackRock and Passport Capital. The company’s existing investors — Dragoneer, Tenaya Capital, Benchmark, Index Ventures and Yahoo — also participated in the latest round. Hortonworks CEO Rob Bearden said in an interview that the new funding will help Hortonworks scale its engineering efforts, grow the company’s ecosystem and scale its global operations.

Last week’s round E for Cloudera turned out to be $160 million instead of the $200 million rumored by Bloomberg.

These big rounds raised by the pure-play Hadoop vendors are a confirmation of the strength of the Hadoop market. But I also think they can be explained by the tough competition Cloudera and Hortonworks are facing from large corporations like IBM, Teradata, Oracle, and Microsoft. At least in terms of budget.

✚ While some of the above-mentioned companies are partnering with at least one pure-play Hadoop vendor — Cloudera, Hortonworks, MapR — that doesn’t mean they are not keeping an eye on the prize.

Original title and link: Hortonworks raises $100M to grow engineering and company’s ecosystem globally (NoSQL database©myNoSQL)

via: http://gigaom.com/2014/03/24/hortonworks-raises-100m-to-scale-its-hadoop-business/


The NoSQL Family Tree

The NoSQL family tree

Even if it includes just a handful of NoSQL databases, it’s still a nice visualization.

Original title and link: The NoSQL Family Tree (NoSQL database©myNoSQL)

via: https://cloudant.com/blog/the-nosql-family-tree/


Examples of analytics applications across industries

A great matrix of the different analytics use cases across industries in Hortonworks’ post “Enterprise Hadoop and the Journey to a Data Lake”:

Analytics use cases

The data type columns cover multiple dimensions of data, and the authors took a conservative approach to the structured and unstructured categories (in the sense that they marked very few of them as unstructured).

A couple of interesting exercises that can be done using this matrix as an input:

  1. figure out how adding data from different categories to a specific use case would benefit it. One obvious example: how would telecom companies benefit from adding social data to their infrastructure analysis?

    Building on the above, decide what tools exist to help with this extra scenario.

  2. can one use case from an industry be applied to a different industry to disrupt it?

    What would be the quickest road to accomplish it?

Original title and link: Examples of analytics applications across industries (NoSQL database©myNoSQL)


Pig cheat sheet

Cheat sheet? Check. Pig? Check. Where do I get it?

via: http://www.qubole.com/wp-content/uploads/2014/01/Pig-Cheat-Sheet.pdf


The Hadoop as ETL part in migrating from MongoDB to Cassandra at FullContact

While I found the whole post very educational — and very balanced considering the topic — the part I’m linking to is about integrating MongoDB with Hadoop. After reading the story of integrating MongoDB and Hadoop at Foursquare, there were quite a few questions bugging me. This post doesn’t answer any of them, but it brings in some more details about existing tools, a completely different solution, and what seems to be an overarching theme when using Hadoop and MongoDB in the same phrase:

We’re big users of Hadoop MapReduce and tend to lean on it whenever we need to make large scale migrations, especially ones with lots of transformation. That fact along with our existing conversion project from before, we used 10gen’s mongo-hadoop project which has input and output formats for Hadoop. We immediately realized that the InputFormat which connected to a MongoDB cluster was ill-suited to our usage. We had 3TB of partially-overlapping data across 2 clusters. After calculating input splits for a few hours, it began pulling documents at an uncomfortably slow pace. It was slow enough, in fact, that we developed an alternative plan.

You’ll have to read the post to learn how they accomplished their goal, but as a spoiler, it was once again more of an ETL process than an integration.
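For readers who haven’t used mongo-hadoop, the wiring the quote refers to looks roughly like the sketch below: the InputFormat connects to the live MongoDB cluster, calculates input splits over the collection, and hands documents to an ordinary map-only ETL job. This is not FullContact’s code; the URI, field names, and output path are illustrative assumptions.

```java
import com.mongodb.hadoop.MongoInputFormat;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
import org.apache.hadoop.mapreduce.lib.output.TextOutputFormat;
import org.bson.BSONObject;

import java.io.IOException;

public class MongoEtlJobSketch {

    // Each input record is one MongoDB document; extract a couple of fields and
    // emit them as text, i.e. a plain extract-and-transform step.
    public static class ExtractMapper extends Mapper<Object, BSONObject, Text, Text> {
        @Override
        protected void map(Object key, BSONObject doc, Context context)
                throws IOException, InterruptedException {
            Object id = doc.get("_id");
            Object email = doc.get("email"); // hypothetical field
            if (id != null && email != null) {
                context.write(new Text(id.toString()), new Text(email.toString()));
            }
        }
    }

    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // The InputFormat talks to the live cluster; split calculation over a
        // multi-TB collection is the step the post describes as painfully slow.
        conf.set("mongo.input.uri", "mongodb://mongo-host:27017/mydb.contacts");

        Job job = Job.getInstance(conf, "mongo-etl-sketch");
        job.setJarByClass(MongoEtlJobSketch.class);
        job.setInputFormatClass(MongoInputFormat.class);
        job.setMapperClass(ExtractMapper.class);
        job.setNumReduceTasks(0); // map-only extract
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(Text.class);
        job.setOutputFormatClass(TextOutputFormat.class);
        FileOutputFormat.setOutputPath(job, new Path("/etl/contacts"));

        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}
```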

✚ The corresponding HN thread; it’s focused mostly on the MongoDB-to-Cassandra migration parts.

Original title and link: The Hadoop as ETL part in migrating from MongoDB to Cassandra at FullContact (NoSQL database©myNoSQL)

via: http://www.fullcontact.com/blog/mongo-to-cassandra-migration/


Hadoop vs Redshift

This is how Yaniv Mor’s “Hadoop vs. Redshift” ends:

We have a tie! Huh!? Didn’t Hadoop win most of the rounds? Yes, it did, but Big Data’s superheroes are better off working together as a team rather than fighting. Turn on the Hadoop-Signal when you need relatively cheap data storage, batch processing of petabytes, or processing data in non-relational formats. Call out to red-caped Redshift for analytics, fast performance for terabytes, and an easier transition for your PostgreSQL team. As Airbnb concluded in their benchmark: “We don’t think Redshift is a replacement of the Hadoop family due to its limitations, but rather it is a very good complement to Hadoop for interactive analytics”. We Agree.

I’m wondering why anyone would waste 1337 words on an apples-to-oranges comparison.

Original title and link: Hadoop vs Redshift (NoSQL database©myNoSQL)

via: http://www.xplenty.com/uncategorized/2014/02/hadoop-vs-redshift/


A guide to write and run Giraph jobs on Hadoop

A good setup guide by Mirko Kämpf:

In this how-to, you will learn how to use Giraph 1.0.0 on top of CDH 4.x using a simple example dataset, and run example jobs that are already implemented in Giraph. You will also learn how to set up your own Giraph-based development environment. The end result will be a setup (not intended for production) for writing and testing Giraph jobs, or just for playing around with Giraph and small sample datasets.

Anatomy of the Giraph data flow
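Since the how-to focuses on setup and on running the bundled examples, here is roughly what the compute logic of such a job looks like: a sketch in the style of the single-source shortest-paths example from the Giraph codebase. The exact base class has shifted between Giraph releases, so treat the signatures as indicative rather than authoritative for the specific version you run.

```java
import org.apache.giraph.edge.Edge;
import org.apache.giraph.graph.BasicComputation;
import org.apache.giraph.graph.Vertex;
import org.apache.hadoop.io.DoubleWritable;
import org.apache.hadoop.io.FloatWritable;
import org.apache.hadoop.io.LongWritable;

/**
 * Single-source shortest paths, in the style of the example shipped with Giraph.
 * Vertex ids are longs, vertex values hold the current best distance, edge values
 * are edge weights, and messages carry candidate distances.
 */
public class ShortestPathsComputation
        extends BasicComputation<LongWritable, DoubleWritable, FloatWritable, DoubleWritable> {

    private static final long SOURCE_ID = 1; // placeholder source vertex

    @Override
    public void compute(Vertex<LongWritable, DoubleWritable, FloatWritable> vertex,
                        Iterable<DoubleWritable> messages) {
        if (getSuperstep() == 0) {
            vertex.setValue(new DoubleWritable(Double.MAX_VALUE));
        }

        // Best distance seen so far: the minimum over incoming messages,
        // or 0 if this vertex is the source.
        double minDist = vertex.getId().get() == SOURCE_ID ? 0d : Double.MAX_VALUE;
        for (DoubleWritable message : messages) {
            minDist = Math.min(minDist, message.get());
        }

        // Only propagate when a shorter path was found; this is what lets the
        // computation converge and vertices go back to sleep.
        if (minDist < vertex.getValue().get()) {
            vertex.setValue(new DoubleWritable(minDist));
            for (Edge<LongWritable, FloatWritable> edge : vertex.getEdges()) {
                sendMessage(edge.getTargetVertexId(),
                        new DoubleWritable(minDist + edge.getValue().get()));
            }
        }
        vertex.voteToHalt();
    }
}
```

Packaging a class like this and submitting it on the cluster is the part the linked setup guide prepares you for.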

Original title and link: A guide to write and run Giraph jobs on Hadoop (NoSQL database©myNoSQL)

via: http://blog.cloudera.com/blog/2014/02/how-to-write-and-run-giraph-jobs-on-hadoop/


Hadoop and Teradata’s business

Earlier today I posted about Teradata’s take on the evolution of databases. As expected, everything is safe and under control. Now this report from Larry Dignan for ZDNet about Teradata’s Q4 earnings call presents Teradata’s perspective on Hadoop:

Teradata’s fourth quarter earnings were solid, but analysts peppered management with questions about Hadoop as data warehouse revenue worries persist.

Teradata CEO Mike Koehler and CFO Steve Scheppmann talked Hadoop throughout the company’s conference call. Was Hadoop taking Teradata’s business away? What’s the revenue hit? Can Teradata co-exist?

Once again, everything is safe with a bright future. Until it isn’t anymore and Hadoop eats the enterprise data warehouse space. In Teradata’s defense, they were one of the first companies to look seriously at Hadoop and come up with a coherent positioning.

Original title and link: Hadoop and Teradata’s business (NoSQL database©myNoSQL)


5 achievements of Hadoop

John Santaferraro (Actian) posted a great list of 5 major changes that Hadoop brought to the data market:

  1. Enormous, affordable scale compared to old storage paradigms
  2. Capture and store all data without first determining its value
  3. New types of data present new opportunities for analytics
  4. Data discovery and data provisioning now common in most organizations
  5. Analytics applications emerge as the number one use for big data

Original title and link: 5 achievements of Hadoop (NoSQL database©myNoSQL)

via: http://www.actian.com/about-us/blog/big-data-2-0-happened-big-data-1-0/