NoSQL Case Studies: all content tagged as NoSQL case study

MongoDB at Traackr: Indexing and Ad-Hoc Querying

Traackr’s original model included three distinct data buckets: 1) Influencers, 2) Channels (sites where content is published), and 3) Posts (tweets, blog entries, etc.). Traackr creates influencer listings by mining these data sets for keywords. If the data was only loosely coupled, or if the wrong type of relationship persisted for too long, the result was inaccurate, inconsistent influencer rankings. To provide higher-quality lists to clients, Traackr needed to build stronger associations within their data model, and HBase lacked the indexing and ad-hoc querying capabilities to make that happen.

A few other MongoDB features are mentioned in the post, but the key is indexing and ad-hoc queries.
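
For a feel of what that unlocks, here is a minimal pymongo sketch; the collection and field names are my own invention for illustration, not Traackr's actual schema:

    # Hypothetical sketch: secondary indexes and ad-hoc queries in MongoDB.
    # Collection and field names are invented, not Traackr's real schema.
    from pymongo import MongoClient, ASCENDING

    db = MongoClient("mongodb://localhost:27017")["traackr_demo"]

    # A compound secondary index makes keyword lookups cheap, something
    # HBase (keyed only by row key) didn't offer out of the box.
    db.posts.create_index([("keywords", ASCENDING), ("influencer_id", ASCENDING)])

    # Ad-hoc query: all posts by a given influencer mentioning "nosql",
    # newest first, with no predefined access path required.
    query = {"influencer_id": 42, "keywords": "nosql"}
    for post in db.posts.find(query).sort("published", -1):
        print(post["channel"], post["title"])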

Original title and link: MongoDB at Traackr: Indexing and Ad-Hoc Querying (NoSQL databases © myNoSQL)

via: http://www.10gen.com/customers/traackr


NoSQL Case Study: Riak for the Danish Healthcare System

InfoQ just published Kresten Krab Thorup’s presentation from the GOTO conference, Riak on Drugs (and the Other Way Around), covering details of the Danish healthcare system built on top of Riak for high availability, scalability, and the ability to run across multiple data centers. Now we have both sides of the case study of building a nationwide healthcare system using Riak and GigaSpaces XAP.
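
As a rough illustration of the kind of key/value access the talk describes, here is a sketch against Riak's HTTP interface; the bucket, key, and payload are invented, and a default local node is assumed:

    # Hypothetical sketch of Riak's HTTP key/value API. The bucket, key,
    # and record below are invented, not the actual healthcare schema.
    import json
    import requests

    base = "http://localhost:8098/riak/prescriptions"   # assumed local node

    record = {"patient_id": "ab-123", "drug": "example", "dose_mg": 50}

    # Store: PUT /riak/<bucket>/<key>
    requests.put(base + "/ab-123", data=json.dumps(record),
                 headers={"Content-Type": "application/json"})

    # Fetch it back: GET /riak/<bucket>/<key>
    print(requests.get(base + "/ab-123").json())

Replication and failover happen behind this same simple interface, which is the high-availability story the presentation emphasizes.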

Original title and link: NoSQL Case Study: Riak for the Danish Healthcare System (NoSQL databases © myNoSQL)

via: http://www.infoq.com/presentations/Case-Study-Riak-on-Drugs


MongoDB at Shutterfly

We looked at just about everything we could think of. Lots of options like Cassandra made sense for a very narrow problem set, where MongoDB allowed for flexibility. We wanted that flexibility.

Video and slides, highlights, and an interview with Shutterfly’s architect.
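
The flexibility being contrasted with Cassandra’s narrower sweet spot is largely MongoDB’s document model: records in one collection need not share a schema. A tiny, hypothetical illustration (the collection and fields are mine, not Shutterfly’s):

    # Hypothetical sketch of MongoDB schema flexibility; the "photos"
    # collection and its fields are invented for illustration.
    from pymongo import MongoClient

    photos = MongoClient()["demo"]["photos"]

    # Two documents in the same collection with different shapes,
    # no migration or predefined column family needed.
    photos.insert_one({"user": "alice", "title": "Beach", "tags": ["summer"]})
    photos.insert_one({"user": "bob", "title": "Dog",
                       "exif": {"iso": 200, "camera": "X100"}})

    # Queries can still reach into whichever fields a document has.
    print(photos.find_one({"exif.iso": 200})["title"])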

Original title and link: MongoDB at Shutterfly (NoSQL databases © myNoSQL)


CouchDB Case Study: Web Based IRC

Another CouchDB case study, this time from Anologue:

Initial goal: enable any number of people to view a web page that would serve as a sort of chat-room. Generate a link, share it with whomever you’d like to participate in the dialogue, type your name and text to add to the conversation.

I’d speculate that CouchDB was chosen for how it simplified the web app’s architecture and for its document-based data model. Definitely not for some “fake” or just plain wrong reasons.
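
A rough sketch of how a chat room like this could map onto CouchDB over plain HTTP; the database name and document fields are guesses, not Anologue’s actual design:

    # Hypothetical sketch: a chat room as CouchDB documents over HTTP.
    # Database name and fields are invented, not Anologue's design.
    import requests

    couch = "http://localhost:5984"      # assumed local CouchDB
    db = couch + "/anologue_demo"

    requests.put(db)                     # create the database

    # Each chat line is one document: author, text, timestamp.
    requests.post(db, json={"author": "alice", "text": "hello", "ts": 1})
    requests.post(db, json={"author": "bob", "text": "hi there", "ts": 2})

    # Rendering the room: fetch every document (ordered by _id here;
    # a real app would sort by timestamp, e.g. via a view).
    rows = requests.get(db + "/_all_docs",
                        params={"include_docs": "true"}).json()["rows"]
    for row in rows:
        doc = row["doc"]
        print(doc["author"], ":", doc["text"])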

Adding it to the list of CouchDB case studies.

via: http://www.couch.io/case-study-anologue


NoSQL Case Study: Migrating to HBase/Hadoop to Handle Firefox Crash Reports at Mozilla

What would you do if you had to process 2.5 million crash reports a day, amounting to around 320GB of data, with an architecture like the one below?

This is exactly the scenario of handling Firefox crash reports at Mozilla. And according to their ☞ blog, all this will change pretty soon with the migration to HBase and Hadoop.

However, we are in the process of moving to Hadoop, and currently all our crashes are also being written to HBase. Soon this will become our main data storage, and we’ll be able to do a lot more interesting things with the data. We’ll also be able to process 100% of crashes.

Some of the steps involved in getting to the new, simplified architecture depicted below:

[…] we will no longer be relying on NFS in production. All crash report submissions are already stored in HBase, but with Socorro 1.7, we will retrieve the data from HBase for processing and store the processed result back into HBase.

[…]

[…] we will migrate the processors and minidump_stackwalk instances to run on our Hadoop nodes, further distributing our architecture. This will give us the ability to scale up to the amount of data we have as it grows over time.

Nice!
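
To make the retrieve-process-store-back loop concrete, here is a minimal sketch using the happybase Thrift client; the table and column names are guesses at a Socorro-like layout, not Mozilla’s actual schema:

    # Hypothetical sketch of the process-and-write-back loop against HBase,
    # via the happybase Thrift client. Table and column names are invented,
    # not Socorro's real schema.
    import happybase

    conn = happybase.Connection("hbase-thrift.example")   # assumed Thrift gateway
    crashes = conn.table("crash_reports")

    for row_key, data in crashes.scan(columns=[b"raw:dump"], limit=100):
        # minidump_stackwalk would run here; we fake a processed result.
        processed = b"signature: example_frame\n" + data[b"raw:dump"][:64]

        # Write the result back into the same row (assumes a "processed"
        # column family already exists in the table schema).
        crashes.put(row_key, {b"processed:report": processed})

Once the processors run on the Hadoop nodes themselves, as described above, this loop executes next to the data instead of shipping minidumps over NFS.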