



Krati: A Persistent High-Performance Data Store

Krati is a simple persistent data store with very low latency and high throughput. It is designed for easy integration with read-write-intensive applications, requiring little effort to tune configuration, performance, or JVM garbage collection.

Sounds a bit like Bitcask. Can anyone point out at least one of the major differences?

From the project page:

  • supports variable-length data arrays
  • supports key-value data store access
  • performs append-only writes in batches
  • has write-ahead redo logs and periodic checkpointing
  • has automatic data compaction (i.e. garbage collection)
  • is memory-resident (or OS page cache resident) yet persistent
  • allows a single writer and multiple readers

Krati: A Persistent High-Performance Data Store originally posted on the NoSQL blog: myNoSQL


LinkedIn, Data Processing, and Pig

Probably one of the nicest taglines for Pig:

If Perl is the duct tape of the internet, and Hadoop is the kernel of the data center as computer, then Pig is the duct tape of Big Data.

And some advice on how to use Pig:

When I write Pig Latin code beyond a dozen lines, I check it in stages:

  • Write Pig Latin in TextMate (Saved in a git repo, otherwise I lose code)
  • Paste the code into the Grunt shell – Did it parse?
  • DESCRIBE the final output and each complex step – Did it still parse? Is the schema what I expected?
  • ILLUSTRATE the output – Does it still parse? Is the schema ok? Is the example data ok?
  • SAMPLE / LIMIT / DUMP the output – Does it still parse? Is the schema ok? Is the sampled/limited data sane?
  • STORE the final output and see if the job completes.
  • cat output_dir/part-00000 (followed by a quick ctrl-c to stop the flood) – Is the stored output on HDFS ok?
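The checklist above might look like this for a hypothetical log-counting script (the input path, field names, and output path are invented for illustration):

```pig
-- Load a hypothetical tab-separated access log (path and schema are made up)
logs = LOAD 'input/access_logs' AS (user:chararray, url:chararray, ts:long);

-- Build the result one step at a time
by_user = GROUP logs BY user;
counts  = FOREACH by_user GENERATE group AS user, COUNT(logs) AS hits;

DESCRIBE counts;      -- did it parse? is the schema what I expected?
ILLUSTRATE counts;    -- is the example data ok?

-- Sanity-check a small sample before committing to the full job
few = LIMIT counts 10;
DUMP few;

-- Only then store the final output
STORE counts INTO 'output/user_hits';
```

After the job completes, `cat` on the part files (or `hadoop fs -cat output/user_hits/part-*`) spot-checks the stored records, as in the last step above.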