hdfs: All content tagged as hdfs in NoSQL databases and polyglot persistence

Defending Hadoop’s HDFS - Hortonworks Version

In reply to the attack on HDFS, Eric Baldeschwieler emphasizes the pros of HDFS:

  • Extreme low cost per byte
  • Very high bandwidth to support MapReduce workloads
  • Data reliability

but also characterizes the state of the HDFS competition:

  • not designed for Hadoop scale
  • not using commodity hardware or open source software
  • not meant for MapReduce
  • unproven technology

Original title and link: Defending Hadoop’s HDFS - Hortonworks Version (NoSQL database©myNoSQL)

via: http://hortonworks.com/blog/thinking-about-the-hdfs-vs-other-storage-technologies/


Attacking Hadoop’s HDFS: 8 Ways to Replace HDFS

Derick Harris for GigaOm:

Ironically, one of Hadoop’s biggest shortcomings now is also one of its biggest strengths going forward — the Hadoop Distributed File System.

But if the growing number of options for replacing HDFS signifies anything, it’s that HDFS isn’t quite where it needs to be.

No alternatives => vendor lock-in => bad

Multiple options => proof of weaknesses => bad

Confused? I am a bit.

Original title and link: Attacking Hadoop’s HDFS: 8 Ways to Replace HDFS (NoSQL database©myNoSQL)

via: http://gigaom.com/cloud/because-hadoop-isnt-perfect-8-ways-to-replace-hdfs/


Comparing File Formats and Compression Methods in HDFS and Hive

The post is a bit old, but the data it contains comparing different compression methods is still helpful:

HDFS and Hive comparing file formats and compression methods

Original title and link: Comparing File Formats and Compression Methods in HDFS and Hive (NoSQL database©myNoSQL)


Hadoop Namenode High Availability Merged to HDFS Trunk

As I’m slowly recovering after a severe poisoning that I initially ignored but finally put me to bed for almost a week, I’m going to post some of the most interesting articles I’ve read while resting.

The Hadoop Namenode’s single point of failure has always been mentioned as one of the weaknesses of Hadoop and also as a differentiator for other Hadoop-based commercial offerings. But now the Namenode HA branch has been merged into trunk, and while it will take a couple of cycles to complete the tests, this will soon become part of the Hadoop distribution.

Here’s Jitendra Pandey’s announcement on the Hortonworks blog:

Significant enhancements were completed to make HOT Failover work:

  • Configuration changes for HA
  • Notion of active and standby states were added to the Namenode
  • Client-side redirection
  • Standby processing journal from Active
  • Dual block reports to Active and Standby

As I wrote in a follow-up post to Gartner’s article Apache Hadoop 1.0 Doesn’t Clear Up Trunks and Branches Questions. Do Distributions?, the advantage of using custom distributions will slowly vanish and the open source version will be the one you’ll want to have in production.

Original title and link: Hadoop Namenode High Availability Merged to HDFS Trunk (NoSQL database©myNoSQL)


The components and their functions in the Hadoop ecosystem

Edd Dumbill enumerates the various components of the Hadoop ecosystem:

Hadoop ecosystem

My quick reference to the Hadoop ecosystem includes a couple of other tools that are not in this list, with the exception of Ambari and HCatalog, which were released later.

Original title and link: The components and their functions in the Hadoop ecosystem (NoSQL database©myNoSQL)


How Web giants store big data

A not very technical Ars Technica overview of the storage engines developed and used by Google (Google File System, BigTable), Amazon (Dynamo), and Microsoft (Azure DFS), plus the Hadoop Distributed File System (HDFS).

Original title and link: How Web giants store big data (NoSQL database©myNoSQL)

via: http://arstechnica.com/business/news/2012/01/the-big-disk-drive-in-the-sky-how-the-giants-of-the-web-store-big-data.ars/1


Hadoop Distributed File System HDFS: A Cartoon Is Worth A

A picture is worth a thousand words. A comic-like explanation of HDFS is worth some too:

Hadoop Distributed File System HDFS

See it in full size. Credit Maneesh Varshney

Original title and link: Hadoop Distributed File System HDFS: A Cartoon Is Worth A (NoSQL database©myNoSQL)


Database Sharding Using a Proxy

ScaleBase’s Liran Zelkha is making the case for database sharding using a proxy:

First and foremost, since the sharding logic is not embedded inside the application, third party applications can be used, be it MySQL Workbench, MySQL command line interface or any other third party product. This translates to a huge saving in the day-to-day costs of both developers and system administrators.

Compare ScaleBase’s proxy-based sharding:

ScaleBase Proxy Sharding

with MongoDB’s sharding:

MongoDB sharding

Another example would be the Hadoop HDFS NameNode, which provides somewhat similar functionality.
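
As an aside, here’s a toy sketch, in Python, of the routing decision a sharding proxy makes on behalf of the application: the application sends ordinary SQL, and the proxy picks a backend based on the sharding key. The backend addresses and the modulo policy are assumptions for illustration only; this is not ScaleBase’s implementation.

    # Toy illustration of proxy-side shard routing. The backends and the
    # modulo-hash policy are made up for the sketch; a real proxy would
    # also parse the SQL to extract the sharding key.
    SHARDS = [
        "mysql://shard0.example.com:3306/app",
        "mysql://shard1.example.com:3306/app",
        "mysql://shard2.example.com:3306/app",
    ]

    def route(user_id: int) -> str:
        """Map a sharding key to the backend that owns it."""
        return SHARDS[user_id % len(SHARDS)]

    # The application issues "SELECT * FROM orders WHERE user_id = 42"
    # against the proxy; the proxy extracts user_id, calls route(42),
    # and forwards the query unchanged to that backend.
    print(route(42))  # -> mysql://shard0.example.com:3306/app

Because the routing lives in the proxy, the application and its third-party tools keep speaking plain MySQL, which is exactly the saving Zelkha is pointing at.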

Original title and link: Database Sharding Using a Proxy (NoSQL database©myNoSQL)

via: http://www.scalebase.com/making-the-case-for-sharding-using-a-database-proxy/


HTTP REST Access to HDFS: WebHDFS

Nicholas Sze:

Apache Hadoop provides a high performance native protocol for accessing HDFS. While this is great for Hadoop applications running inside a Hadoop cluster, users often want to connect to HDFS from the outside. […] To address this we have developed an additional protocol to access HDFS using an industry standard RESTful mechanism, called WebHDFS. As part of this, WebHDFS takes advantage of the parallelism that a Hadoop cluster offers. Further, WebHDFS retains the security that the native Hadoop protocol offers. It also fits well into the overall strategy of providing web services access to all Hadoop components.

WebHDFS opens up opportunities for many new tools. For example, tools like FUSE or C/C++ client libraries using WebHDFS are fairly straightforward to write. It allows existing Unix/Linux utilities and non-Java applications to interact with HDFS. Besides, there is no Java binding in those tools, and a Hadoop installation is not required.

I think Andre Luckow’s webhdfs-py is the first library (in Python) to take advantage of WebHDFS.
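
If you want to see what the REST mechanism looks like without any library at all, here is a minimal sketch in Python using the third-party requests package; the namenode host, port, user, and paths are placeholders you would adjust for your own cluster:

    # Minimal WebHDFS sketch: list a directory and read a file over plain HTTP.
    # Host, port, user, and paths are assumptions, not real endpoints.
    import requests

    NAMENODE = "http://namenode.example.com:50070"  # namenode HTTP port where WebHDFS is exposed
    USER = "hdfs"

    # Directory listing: GET /webhdfs/v1/<path>?op=LISTSTATUS
    resp = requests.get(
        f"{NAMENODE}/webhdfs/v1/user/{USER}",
        params={"op": "LISTSTATUS", "user.name": USER},
    )
    resp.raise_for_status()
    for entry in resp.json()["FileStatuses"]["FileStatus"]:
        print(entry["pathSuffix"], entry["type"], entry["length"])

    # File read: GET ...?op=OPEN. The namenode redirects the client to a
    # datanode holding the blocks, which is how WebHDFS exposes the
    # cluster's parallelism to plain HTTP clients.
    data = requests.get(
        f"{NAMENODE}/webhdfs/v1/user/{USER}/example.txt",
        params={"op": "OPEN", "user.name": USER},
    ).content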

Original title and link: HTTP REST Access to HDFS: WebHDFS (NoSQL database©myNoSQL)

via: http://hortonworks.com/webhdfs-?-http-rest-access-to-hdfs/


MySQL Cluster Used to Implement a Highly Available and Scalable Hadoop NameNode

Given the following Hadoop NameNode problem:

the problem is, if the Namenode crashes, the entire file system becomes inoperable because clients and Datanodes still need the metadata to do anything useful. Furthermore, since the Namenode maintains all the metadata only in memory, the number of files you can store on the filesystem is directly proportional to the amount of RAM the Namenode has. As if that’s not enough, the Namenode will be completely saturated under write intensive workloads, and will be unable to respond to even simple client side queries like ls. Have a look at Shvachko’s paper which describes these problems at great length and depth, on which we’ve based our work.

Lalith Suresh has worked for the last couple of months on the following solution:

“Move all of the Namenode’s metadata storage into an in-memory, replicated, share-nothing distributed database.”

[…] We chose MySQL Cluster as our database because of its widespread use and stability. So for the filesystem to scale to a larger number of files, one needs to add more MySQL Cluster Datanodes, thus moving the bottleneck from the Namenode’s RAM to the DB’s storage capacity. For the filesystem to handle heavier workloads, one needs to add only more Namenode machines and divide the load amongst them. Another interesting aspect is that if a single Namenode machine has to reboot, it needn’t fetch any state into memory and will be ready for action within a few seconds (although it still has to sync with Datanodes). Another advantage of our design is that the modifications will not affect the clients or Datanodes in any way, except that we might need to find a way to divide the load among the Namenodes.

His post covers the how, but also the pros and cons of his solution. And the result is available on GitHub.
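
As a rough, hypothetical illustration of the idea (not Lalith’s actual code): the Namenode’s lookup path changes from a read of an in-memory map to a query against the replicated database, so any Namenode machine can answer it and a restarted one has no state to reload. SQLite stands in for MySQL Cluster purely to keep the sketch self-contained, and the schema is invented:

    # Hypothetical sketch: filesystem metadata lives in a database instead of
    # the Namenode's RAM. Table and column names are invented for illustration.
    import sqlite3

    db = sqlite3.connect(":memory:")  # stand-in for a MySQL Cluster connection
    db.execute("""CREATE TABLE inodes (
        path        TEXT PRIMARY KEY,
        replication INTEGER,
        block_ids   TEXT
    )""")
    db.execute("INSERT INTO inodes VALUES ('/user/alice/data.txt', 3, 'blk_1,blk_2')")

    def lookup(path):
        """Roughly what a metadata RPC such as getBlockLocations would do."""
        return db.execute(
            "SELECT replication, block_ids FROM inodes WHERE path = ?", (path,)
        ).fetchone()

    print(lookup("/user/alice/data.txt"))  # -> (3, 'blk_1,blk_2')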

Update: Hortonworks is already working on the next generation of Apache Hadoop MapReduce, which focuses on reliability, availability, scalability, and predictable latency. But this doesn’t make Lalith’s work any less interesting.

Original title and link: MySQL Cluster Used to Implement a Highly Available and Scalable Hadoop NameNode (NoSQL database©myNoSQL)

via: http://lalith.in/2011/12/15/towards-a-scalable-and-highly-available-namenode/


A Short Incursion Into Alternate Hadoop Filesystems

Steve Loughran starts with a critical look at the NetApp Open Solution for Hadoop paper:

Actually it is weirder than I first thought. This is still HDFS, just running on more expensive hardware. You get the (current) HDFS limitations: no native filesystem mounting, a namenode to care about, security on a par with NFS, without the cost savings of pure-SATA-no-licensing-fees. Instead you have to use RAID everywhere, which not only bumps up your cost of storage, but puts you at risk of RAID controller failure and errors in the OS drivers for those controllers (hence their strict rules about which Linux releases to trust).

If you do follow their recommendations and rely on hardware for data integrity, you’ve cut down the probability of node-local job execution, so all FUD about replication traffic is now moot as at least 1/3 more of your tasks will be running remote, possibly even with the Fair Scheduler, which waits for a bit to see if a local slot becomes free. What they are doing then is adding some HA hardware underneath a filesystem that is designed to give strong availability out of medium availability hardware. I have seen such a design before, and thought it sucked then too.

InformationWeek says this is a response to EMC, but it looks more like NetApp’s strategy to stay relevant, and Cloudera are partnering with them as NetApp offered them money, and if it sells into more “enterprise customers” then why not? With the extra hardware costs of NetApp the Cloudera licenses will look better value, and clearly both NetApp and their customers are in need of the hand-holding that Cloudera can offer.

Then, in a follow-up post, he looks at a couple of alternatives (Lustre, GPFS, IBRIX, etc.):

I’m not against running MapReduce—or the entire Hadoop stack—against alternate filesystems. There are some good cases where it makes sense. Other filesystems offer security, NFS mounting, the ability to be used by other applications and other features. HDFS is designed to scale well on “commodity” hardware, (where servers containing Xeon E5 series parts with 64GB RAM, 10GbE and 8-12 SFF HDDs are considered a subset of “commodity”).

Original title and link: A Short Incursion Into Alternate Hadoop Filesystems (NoSQL database©myNoSQL)


How Does Google MegaStore Compare Against HDFS/HBase?

Alex Feinberg answering the question in the title:

This is like saying “how does a General Motors bus compare against a Ford engine”. MegaStore is built on top of Google’s BigTable/GFS. HBase/HDFS are BigTable/GFS work-alikes.

BigTable and HBase give up availability (in the CAP Theorem sense) in favour of consistency: when a tablet master node (HRegionServer in HBase) goes down, the portion of the keyspace the failed node is responsible for becomes (briefly) unavailable until another node takes over that portion of the key space. This is efficient, as the data/write-ahead-log is stored in GFS (or HDFS): in a way, serializing writes to GFS/HDFS (a file system with relaxed consistency semantics) through a single node ensures serializable consistency.

Make sure you read it all.

Original title and link: How Does Google MegaStore Compare Against HDFS/HBase? (NoSQL database©myNoSQL)

via: http://www.quora.com/How-does-Google-MegaStore-compare-against-HDFS-HBase