Apache Sqoop Announces First Release

Sqoop, originally created at Cloudera and now in the Apache Incubator, is a tool designed for efficiently transferring bulk data between Apache Hadoop and structured datastores. You can use Sqoop to import data from external structured datastores into HDFS or related systems like Hive and HBase. Conversely, Sqoop can be used to extract data from Hadoop and export it to external structured datastores such as relational databases and enterprise data warehouses.
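
To make the import direction concrete, here is a minimal sketch that drives Sqoop programmatically through its Java entry point (assuming the 1.4.x-style org.apache.sqoop.Sqoop.runTool method) instead of the sqoop shell script. The JDBC URL, credentials, table name, and HDFS target directory are hypothetical placeholders:

```java
import org.apache.sqoop.Sqoop;

public class OrdersImport {
    public static void main(String[] args) {
        // Equivalent to running `sqoop import ...` from the shell:
        // pull the `orders` table from a MySQL database into HDFS,
        // splitting the work across 4 parallel map tasks.
        String[] importArgs = {
            "import",
            "--connect", "jdbc:mysql://db.example.com/sales",  // hypothetical JDBC URL
            "--username", "etl_user",                          // placeholder credentials
            "--password", "secret",
            "--table", "orders",                               // source table
            "--target-dir", "/user/etl/orders",                // HDFS destination
            "--num-mappers", "4"
        };
        int exitCode = Sqoop.runTool(importArgs);
        System.exit(exitCode);
    }
}
```

Adding `--hive-import` to the same argument list would load the rows into a Hive table rather than a plain HDFS directory, which is the Hive integration mentioned above.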

Yesterday Apache Sqoop announced its first release under the Apache umbrella with a long list of changes.

To get a better idea of where Apache Sqoop fits, check this video from Hadoop World 2011 (requires registration). It describes the key scenarios driving Hadoop and RDBMS integration and reviews the Apache Sqoop project, which, besides supporting data movement between Hadoop and any JDBC database, also provides a framework that allows developers and vendors to create connectors optimized for specific targets such as Oracle and Netezza.
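
For the reverse direction, a similarly hedged sketch of an export back to a relational target over the generic JDBC path; the connection details, table, and HDFS path are again made up for illustration:

```java
import org.apache.sqoop.Sqoop;

public class OrdersExport {
    public static void main(String[] args) {
        // Equivalent to `sqoop export ...`: push the files under an HDFS
        // directory back into a relational table over plain JDBC.
        String[] exportArgs = {
            "export",
            "--connect", "jdbc:postgresql://warehouse.example.com/dw",  // hypothetical target
            "--username", "etl_user",
            "--password", "secret",
            "--table", "orders_summary",                 // destination table (must already exist)
            "--export-dir", "/user/etl/orders_summary",  // HDFS source directory
            "--num-mappers", "2"
        };
        System.exit(Sqoop.runTool(exportArgs));
    }
}
```

A connector tuned for a specific database would plug into the same invocation by supplying a different connection manager, leaving the import/export arguments unchanged.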

Original title and link: Apache Sqoop Announces First Release (NoSQL database©myNoSQL)