BigData in Context

Mark Palmer:

[…] think of a bucket of sand. If every grain of sand in the bucket was 1 byte of data, then:

  • The entire work of Shakespeare fills just one bucket of sand (about 5MB)
  • A fast financial market data feed (OPRA) fills a beach of sand in 24 hours (about 5TB)
  • Google processes all the sand in the world every week (about 100PB)
  • We generate 60% more sand every year

In these terms, all of Twitter generates only a sand castle of quality data a day.
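The ratios behind the analogy are easy to check. A minimal sketch (using only the 5MB, 5TB, 100PB, and 60% figures quoted above; the variable names are illustrative):

```python
import math

MB = 10**6
TB = 10**12
PB = 10**15

shakespeare = 5 * MB          # one bucket of sand
opra_per_day = 5 * TB         # a beach of sand
google_per_week = 100 * PB    # "all the sand in the world" in the analogy

# A beach holds a million buckets: one day of OPRA = a million Shakespeares.
print(opra_per_day // shakespeare)      # 1000000

# Google's weekly volume is 20,000 beaches.
print(google_per_week // opra_per_day)  # 20000

# At 60% annual growth, the volume of "sand" doubles roughly every 1.5 years.
print(round(math.log(2) / math.log(1.6), 2))  # 1.47
```

The takeaway is the span of scales: nine orders of magnitude separate a complete Shakespeare from a week of Google's processing.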

This is only the sand generated directly by humans. On top of that there is the machine-generated data.

Original title and link: BigData in Context (NoSQL databases © myNoSQL)

via: http://streambase.typepad.com/streambase_stream_process/2011/05/on-big-data-real-time-computing-and-twitter.html