


Serengeti: All content tagged as Serengeti in NoSQL databases and polyglot persistence

Hadoop Virtualization

Roberto V. Zicari interviewing Joe Russell[1] about Hadoop virtualization with Serengeti:

A common misconception when virtualizing Hadoop clusters is that we decouple the data nodes from the physical infrastructure. This is not necessarily true. When users virtualize a Hadoop cluster using Project Serengeti, they separate data from compute while preserving data locality. By preserving data locality, we ensure that performance isn’t negatively impacted, or essentially making the infrastructure appear as static. Additionally, it creates true multi-tenancy within more layers of the Hadoop stack, not just the name node.

I’m not 100% sure I get this, but the only way I could make it make sense to myself is that HDFS lives directly on the physical hardware and only the compute layer is virtualized. Is that what he means?
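For what it’s worth, Serengeti’s data/compute separation was expressed in its JSON cluster spec by giving data and compute roles their own node groups. The sketch below is illustrative only: the field names (`nodeGroups`, `roles`, `instanceNum`, `storage`) follow my recollection of Serengeti’s spec format and may differ between versions; the sizes and counts are made up.

```json
{
  "nodeGroups": [
    {
      "name": "data",
      "roles": ["hadoop_datanode"],
      "instanceNum": 4,
      "storage": { "type": "LOCAL", "sizeGB": 100 }
    },
    {
      "name": "compute",
      "roles": ["hadoop_tasktracker"],
      "instanceNum": 8,
      "storage": { "type": "SHARED", "sizeGB": 20 }
    }
  ]
}
```

The point being: data nodes can be pinned to local disks (preserving locality) while the stateless compute nodes are the ones that get elastically provisioned and torn down.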

  [1] Joe Russell is Product Line Marketing Manager at VMware. 

Original title and link: Hadoop Virtualization (NoSQL database©myNoSQL)


Deploying Hadoop With Serengeti

Duncan Epping timing how long it takes to deploy Hadoop with Serengeti:

How long did that take me? Indeed ~10 minutes

So, Project Serengeti is a sort of Apache Whirr for VMware vSphere.
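To give a flavor of why it’s that quick: Serengeti ships a Spring Shell-based CLI, and a basic deployment is a couple of commands. The session below is a hedged sketch from memory; exact command names, flags, and the server endpoint vary by version, and the spec file path is hypothetical.

```
# Inside the Serengeti CLI shell (illustrative; verify against your version's docs)
connect --host serengeti-server:8080
cluster create --name demoCluster
# or, with a custom JSON spec describing node groups:
cluster create --name customCluster --specFile /home/serengeti/spec.json
cluster list
```

The `cluster create` call is what provisions the VMs, installs the Hadoop bits, and wires up HDFS and MapReduce in one step, which is where the ~10 minutes goes.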

Original title and link: Deploying Hadoop With Serengeti (NoSQL database©myNoSQL)


Why Virtualize Hadoop and How Project Serengeti Can Help

A very long post by Richard McDougall explaining why virtualizing Hadoop may make sense and how VMware’s Project Serengeti can help. Answering the question in the title, McDougall enumerates 6 reasons:

  1. Consolidation/sharing of a big-data platform
  2. Rapid provisioning
  3. Resource sharing
  4. High availability
  5. Security
  6. Versioned Hadoop environments

He also addresses two of the most common questions about Hadoop virtualization:

  1. Isn’t there a large performance overhead? (nb: benchmark results are available in a VMware whitepaper linked from the post)
  2. Doesn’t vSphere use shared SAN storage only? (nb: the short answer is that vSphere supports both local and shared storage)

Project Serengeti


VMware Project Serengeti: Virtualization-Friendly Hadoop

VMware Project Serengeti:

Serengeti is an open source project initiated by VMware to enable the rapid deployment of an Apache Hadoop cluster (HDFS, MapReduce, Pig, Hive, ..) on a virtual platform.

Serengeti 0.5 currently supports vSphere, with the ability to support other platforms. The project is at an early stage, and is endorsed by all major Hadoop distributions including Cloudera, Greenplum, Hortonworks and MapR.

The Hadoop wiki has a page dedicated to running Hadoop in a virtual environment. There’s also a recent post by Steve Loughran on the pros and cons of Hadoop in the cloud, and a paper authored by VMware about virtualizing Apache Hadoop (pdf).

Original title and link: VMWare Project Serengeti: Virtualization-Friendly Hadoop (NoSQL database©myNoSQL)