Greenplum HD: All content tagged as Greenplum HD in NoSQL databases and polyglot persistence

Petabyte-Scale Hadoop Clusters

Curt Monash quoting Omer Trajman (Cloudera) in a post counting petabyte-scale Hadoop deployments:

The number of Petabyte+ Hadoop clusters has expanded dramatically over the past year, with our recent count reaching 22 in production (in addition to the well-known clusters at Yahoo! and Facebook). Whereas our poll back at Hadoop World 2010 showed the average cluster size at just over 60 nodes, today it tops 200. While mean is not the same as median (most clusters are under 30 nodes), there are some beefy ones pulling up that average. Outside of the well-known large clusters at Yahoo! and Facebook, we count today 16 organizations running PB+ CDH clusters across a diverse set of industries including online advertising, retail, government, financial services, online publishing, web analytics and academic research. We expect to see many more in the coming years, as Hadoop gets easier to use and more accessible to a wide variety of enterprise organizations.
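
The mean-versus-median point is worth a quick illustration: a handful of very large clusters can pull the average far above the typical deployment. A minimal sketch, using made-up node counts rather than Cloudera's actual survey data:

```python
import statistics

# Hypothetical node counts: many small clusters plus a few very large ones.
# These numbers are illustrative only, not Cloudera's survey data.
node_counts = [10, 15, 20, 25, 25, 30, 30, 40, 60, 800, 1200, 2000]

print("mean:  ", statistics.mean(node_counts))    # ~355, pulled up by the big clusters
print("median:", statistics.median(node_counts))  # 30, closer to the typical cluster
```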

The first questions that popped into my head after reading it:

  1. How many deployments does DataStax’s Brisk have? How many are close to or over a petabyte?
  2. How many clients run EMC Greenplum HD and how many are close to this scale?
  3. Same question about NetApp Hadoopler clients.
  4. Same question for MapR.

Answering these questions would give us a good overview of the Hadoop ecosystem.

Original title and link: Petabyte-Scale Hadoop Clusters (NoSQL databases © myNoSQL)

via: http://www.dbms2.com/2011/07/06/petabyte-hadoop-clusters/


EMC Partners with MapR for Greenplum HD Enterprise Edition

EMC plans to bring MapR’s proprietary replacement for the Hadoop Distributed File System to Greenplum HD, its enterprise-ready Apache Hadoop distribution:

Because MapR’s file system is more efficient than HDFS, users will achieve two to five times the performance over standard Hadoop nodes in a cluster, according to Schroeder. That translates into being able to use about half the number of nodes typically required in a cluster, he said.

“Hadoop nodes cost about $4,000 per node depending on configuration. If you add in power costs, HVAC, switching, and rackspace, you’ll probably double that,” Schroeder said. “Our product can immediately save you $4,000 and over 8 years it’ll save you $8,000 per node.”
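
Schroeder’s claim is easy to check with back-of-the-envelope arithmetic: if the file system lets you do the same work with roughly half the nodes, every node you don’t buy saves its hardware cost up front plus about the same again in power, HVAC, switching, and rack space over its lifetime. A rough sketch of that reasoning; the cluster size and the 2x operating-cost multiplier are assumptions taken straight from the quote:

```python
# Back-of-the-envelope cluster cost comparison using the figures quoted above.
# All inputs are assumptions from the article, not measured data.

HARDWARE_COST_PER_NODE = 4_000   # "$4,000 per node depending on configuration"
OPEX_MULTIPLIER = 2.0            # power, HVAC, switching, rack space "probably double that"

def cluster_cost(nodes: int) -> float:
    """Lifetime cost: hardware plus roughly equal operating overhead."""
    return nodes * HARDWARE_COST_PER_NODE * OPEX_MULTIPLIER

standard_nodes = 100                 # hypothetical standard Hadoop cluster
mapr_nodes = standard_nodes // 2     # "about half the number of nodes typically required"

savings = cluster_cost(standard_nodes) - cluster_cost(mapr_nodes)
print(f"standard: ${cluster_cost(standard_nodes):,.0f}")  # $800,000
print(f"MapR:     ${cluster_cost(mapr_nodes):,.0f}")      # $400,000
print(f"savings:  ${savings:,.0f}")                       # $400,000 for this hypothetical cluster
```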

As for what MapR brings to the table, the article mentions its improvements to Apache Hadoop:

  • multiple channels to data through the NFS protocol (see the sketch after this list)
  • a re-architected NameNode for high availability
  • eliminated single points of failure and automated jobs failover
  • data mirroring and snapshot capabilities
  • wide area replication
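
The NFS point in the first bullet is the most visible difference in practice: because the cluster can be mounted as a regular file system, ordinary tools and scripts read and write cluster data with plain file I/O, with no HDFS client API in the loop. A hedged sketch, assuming the cluster is already NFS-mounted at /mapr/my.cluster.com (a hypothetical mount point used here purely for illustration):

```python
import csv
from pathlib import Path

# With the cluster NFS-mounted, cluster paths behave like local ones.
# /mapr/my.cluster.com is a hypothetical mount point; adjust to your setup.
cluster_dir = Path("/mapr/my.cluster.com/user/analytics")

# Write results straight into the cluster with standard file I/O --
# no `hadoop fs -put`, no HDFS client library.
with open(cluster_dir / "report.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["metric", "value"])
    writer.writerow(["nodes", 200])
```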

Filing this under the announced section of the Hadoop-related solutions list.

Original title and link: EMC Partners with MapR for Greenplum HD Enterprise Edition (NoSQL databases © myNoSQL)

via: http://www.infoworld.com/d/storage/emc-joins-forces-hadoop-distributor-mapr-technologies-193?page=0,0