HBase with Trillions of Rows

Interesting question and answer on HBase mailing list:

[…] is it feasible to use an HBase table in “read-mostly” mode with trillions of rows, each containing a small structured record (~200 bytes, ~15 fields)? Does anybody know of a successful case where tables with this number of rows are used with HBase?
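For scale, here is a quick back-of-the-envelope sketch of the raw storage those numbers imply. This ignores HBase per-cell key overhead and compression, both of which matter substantially at this size, and assumes HDFS's default replication factor of 3:

```python
# Sizing sketch based on the numbers in the question:
# one trillion rows, ~200 bytes per row.
rows = 10**12          # one trillion rows
bytes_per_row = 200    # ~200-byte structured record (~15 fields)

raw_bytes = rows * bytes_per_row
raw_tb = raw_bytes / 10**12          # decimal terabytes
replicated_tb = raw_tb * 3           # default HDFS replication factor of 3

print(f"raw: {raw_tb:.0f} TB, with 3x replication: {replicated_tb:.0f} TB")
# → raw: 200 TB, with 3x replication: 600 TB
```

So each trillion rows is on the order of 200 TB raw, before replication and before HBase's own storage overhead, which is why the migration and growth-rate questions below are worth asking up front.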

My follow-up questions:

  • where is that data currently stored?
  • how will you migrate it?
  • if this is just what you estimate you’ll get, how soon will you reach these numbers?

Original title and link: HBase with Trillions Rows (NoSQL databases © myNoSQL)