


Bulk Importing Data Into HBase

One such method is a tool called Sqoop that automatically imports data between these formats, given the schemas of both source and destination. We did some in-depth research on this tool and concluded it could work well if you have relational data. Beyond this tool, Map/Reduce allows various other options, including a bulk import method. Bulk importing bypasses the HBase API and writes the contents, properly formatted as HBase data files (HFiles), directly to the file system. […]
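As a sketch of the Sqoop path mentioned above: Sqoop can read a relational table over JDBC and write it straight into an HBase table. The connection string, credentials, and table/column names below are placeholders, not from the original article:

```shell
# Hypothetical Sqoop import from a MySQL table into HBase.
# dbhost, sales, orders, cf, and order_id are placeholder names.
sqoop import \
  --connect jdbc:mysql://dbhost/sales \
  --username etl -P \
  --table orders \
  --hbase-table orders \
  --column-family cf \
  --hbase-row-key order_id \
  --hbase-create-table
```

Note that this goes through the regular HBase write path (Puts), which is exactly what the bulk import method described next avoids.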

There are two steps involved in a bulk import:

  • Preparing Data (HFiles) using Map/Reduce
  • Importing Prepared Data into HBase table
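The two steps above can be sketched with the stock tools that ship with HBase: the ImportTsv Map/Reduce job to prepare HFiles, and the incremental-load tool to hand them to the region servers. The input path, output path, and table/column names here are hypothetical:

```shell
# Step 1: run a Map/Reduce job that writes HFiles instead of issuing Puts.
# (Paths, table name "orders", and column "cf:value" are placeholders.)
hbase org.apache.hadoop.hbase.mapreduce.ImportTsv \
  -Dimporttsv.columns=HBASE_ROW_KEY,cf:value \
  -Dimporttsv.bulk.output=/tmp/orders-hfiles \
  orders /data/orders.tsv

# Step 2: move the prepared HFiles into the running table.
hbase org.apache.hadoop.hbase.mapreduce.LoadIncrementalHFiles \
  /tmp/orders-hfiles orders
```

Because step 2 only relocates already-formatted HFiles, the load completes without the write-ahead-log and MemStore overhead of the normal API path.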

Slowly but steadily, the Big Data issues are moving away from storage itself toward data migration and data integration. Even if it is just about relocating your servers, handling the move of petabytes of data is a challenge.

Data migration and integration (I call these Big Data pipelines) represent a huge business opportunity. There are some solutions out there, but there is still a lot of potential; just look at the projects Cloudera has been open sourcing, most of which are meant to facilitate the ingestion of data from various sources into Hadoop clusters.

Getting back to bulk imports into HBase, make sure to check Ryan Rawson’s HBase: Importing large data.

Original title and link: Bulk Importing Data Into HBase (NoSQL database©myNoSQL)