document database: All content tagged as document database in NoSQL databases and polyglot persistence

NoSQL meets Bitcoin and brings down two exchanges

Most of Emin Gün Sirer’s posts end up linked here, as I usually enjoy the way he combines a real-life story with something technical, all of it ending with a pitch for HyperDex.

The problem here stemmed from the broken-by-design interface and semantics offered by MongoDB. And the situation would not have been any different if we had used Cassandra or Riak. All of these first-generation NoSQL datastores were early because they are easy to build. When the datastore does not provide any tangible guarantees besides “best effort,” building it is simple. Any masters student in a top school can build an eventually consistent datastore over a weekend, and students in our courses at Cornell routinely do. What they don’t do is go from door to door in the valley, peddling the resulting code as if it could or should be deployed.

Unfortunately, in this case the jump from the real problem, which was caused purely by incompetence, to declaring “first-generation NoSQL databases” bad and pitching HyperDex’s features is both too quick and incorrect¹.


  1. 1) ACID guarantees wouldn’t have solved the issue; 2) all three of the NoSQL databases mentioned actually offer a solution for this particular scenario.

Original title and link: NoSQL meets Bitcoin and brings down two exchanges (NoSQL database©myNoSQL)

via: http://hackingdistributed.com/2014/04/06/another-one-bites-the-dust-flexcoin/


When is MongoDB the Right Tool for the Job?

This puts me in a quandary, because my recent stint on the job market has shown that just about everybody is using MongoDB, and I’ve just never been in any situation that I have needed to use it.

I also can’t foresee any situation where there is a solid technical reason for choosing MongoDB over its competitors either, and the last thing I want to do is lead people astray or foist my preconceptions onto them.

[image: laughing hysterically]

Then there’s the top comment on reddit.

Original title and link: When is MongoDB the Right Tool for the Job? (NoSQL database©myNoSQL)

via: http://daemon.co.za/2014/04/when-is-mongodb-the-right-tool/


A practical comparison of Map-Reduce in MongoDB and RavenDB

Ben Foster looks at MongoDB’s Map-Reduce and aggregation framework and then compares them with RavenDB’s Map-Reduce:

I thought it would be interesting to do a practical comparison of Map-Reduce in both MongoDB and RavenDB.

There are more differences than similarities; I’m not referring to the API differences, but to fundamental differences in the way the two operate.

✚ RavenDB’s author has a follow-up post in which he underlines another major difference: RavenDB’s Map-Reduce operates as an index, while MongoDB’s Map-Reduce is an online operation.
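
✚ To make the “online operation” point concrete, here’s a minimal pymongo sketch of the aggregation framework (database, collection, and field names are made up): MongoDB recomputes the result each time the pipeline runs, whereas RavenDB serves Map-Reduce results from a continuously maintained index.

```python
# Hypothetical "orders" collection; the pipeline below is recomputed every
# time aggregate() is called, i.e. it runs as an online operation rather
# than being served from a precomputed index.
from pymongo import MongoClient

orders = MongoClient("mongodb://localhost:27017")["shop"]["orders"]

pipeline = [
    {"$group": {"_id": "$customer_id", "total": {"$sum": "$amount"}}},
    {"$sort": {"total": -1}},
]

for doc in orders.aggregate(pipeline):
    print(doc["_id"], doc["total"])
```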

Original title and link: A practical comparison of Map-Reduce in MongoDB and RavenDB (NoSQL database©myNoSQL)

via: http://benfoster.io/blog/map-reduce-in-mongodb-and-ravendb


CouchDB - a short review

Pretty good summary of what’s good and what you need to pay attention to when using CouchDB:

During one of our last projects we had a small 2-year adventure with Apache CouchDB NoSQL database. Here, I’m going to briefly present its strong points as well as drawbacks. […] CouchDB was chosen based on requirements and assumptions in the project. Especially, easy multi-master replication seemed to be attractive in the context of the project, which was supposed to be a distributed document database without any relations and rather unstructured data. Unfortunately, as we were going deeper into the project those assumptions turned out not to be 100% correct, and sometimes using this technology was a bit painful.
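
✚ For context, the “easy multi-master replication” that attracted them essentially comes down to CouchDB’s /_replicate endpoint, called once in each direction. A minimal sketch with the requests library, assuming hypothetical hosts, database names, and credentials:

```python
# Hypothetical hosts (node-a, node-b), database name ("docs") and admin
# credentials; /_replicate is CouchDB's standard replication endpoint.
import requests

def replicate(couch, source, target, continuous=True):
    # Ask the node at `couch` to replicate `source` into `target`.
    resp = requests.post(
        couch + "/_replicate",
        json={"source": source, "target": target, "continuous": continuous},
        auth=("admin", "secret"),
    )
    resp.raise_for_status()
    return resp.json()

# Run once in each direction for a master-master pair.
replicate("http://node-a:5984", "docs", "http://node-b:5984/docs")
replicate("http://node-b:5984", "docs", "http://node-a:5984/docs")
```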

✚ It’s been quite a while since I last read a post about CouchDB. I won’t conclude based on a single article that CouchDB is still doing well, but it was nice to see it mentioned again.

Original title and link: CouchDB - a short review (NoSQL database©myNoSQL)

via: http://www.future-processing.pl/blog/couchdb-short-review/


4 Reasons Perfect Market chose MongoDB

A team from Perfect Market about choosing MongoDB for their Digital Publishing Suite:

There are many NoSQL products out there, why did we bet on MongoDB? There are four major reasons: great performance, great features, ease of use and great support. Of course not every day with MongoDB is a sunshine day. Some tradeoffs we made are shared at the end of this post.

  1. I’m sure Perfect Market would get great support from almost every NoSQL database vendor — that’s what I’ve always heard in this market segment.
  2. By “great performance” I’ll assume Perfect Market means they got the numbers they needed. While it’s presented as the top reason for choosing MongoDB, I think the question was more in line with: “considering these other features, is MongoDB’s performance good enough for us?”

    MongoDB is not the fastest NoSQL database.

  3. Great features and ease of use. Nobody can deny that, at least at first glance, MongoDB’s feature set is very compelling. And they’ve absolutely nailed the user experience part.

    My hypothesis for MongoDB’s adoption rate has always been that it mostly comes down to MongoDB looking familiar to people with relational database experience while dropping most of the strict constraints of those systems. This is echoed in this post too (see the sketch after the quote):

    Although MongoDB is a NoSQL document DBMS, it bears resemblance to RDBMSs.
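
✚ A hedged illustration of that resemblance, with made-up table, collection, and field names: the same lookup expressed as SQL (in the comment) and as a MongoDB query via pymongo.

```python
# SQL (for comparison):
#   SELECT name, email FROM users WHERE status = 'active' ORDER BY name LIMIT 10;
from pymongo import MongoClient

users = MongoClient()["app"]["users"]           # hypothetical database/collection

cursor = (
    users.find({"status": "active"},            # WHERE status = 'active'
               {"name": 1, "email": 1})         # SELECT name, email (plus _id)
    .sort("name", 1)                            # ORDER BY name
    .limit(10)                                  # LIMIT 10
)
for user in cursor:
    print(user)
```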

Original title and link: 4 Reasons Perfect Market chose MongoDB (NoSQL database©myNoSQL)

via: http://perfectmarket.com/four-reasons-perfect-market-bets-on-mongodb/


A Couchbase stack for under $1000

In this article we are going to look at how you can build an awesome cloud based solution with a lot of headroom and power for Couchbase for under $1000!

Getting 8 servers (2 reverse proxies, 2 app servers, 4 database nodes) for this money sounds like a sweet deal.

Original title and link: A Couchbase stack for under $1000 (NoSQL database©myNoSQL)

via: http://scalabilitysolved.com/build-a-kick-ass-couchbase-stack-for-under-1000/


Integrating D3 with CouchDB

A 4-part series by Mike Bostock describing various integration paths for D3 and CouchDB:

  1. Part 1: saving a D3 app in CouchDB
  2. Part 2: storing the D3 library and the data in CouchDB
  3. Part 3: accessing CouchDB data from D3
  4. Part 4: data import

Original title and link: Integrating D3 with CouchDB (NoSQL database©myNoSQL)


From IBM to… IBM: The short, but complicated history of CouchDB, Cloudant, and a lot of other companies and projects

Damien Katz created CouchDB after working at IBM on Lotus Notes: CouchDB and Me. CouchDB went the Apache way. Then things got complicated…

On the West Coast, Damien Katz and a team of committers created Couchio, later renamed CouchOne, which then merged with Membase to become Couchbase, which finally dropped CouchDB. Damien Katz left Couchbase.

A confusing history with a very complicated genealogy of projects (don’t worry, this goes on) and companies. And this was only the West Coast.

On the East Coast, Cloudant took CouchDB and turned it into BigCouch. I thought Cloudant would be the CouchDB company, and in a way it was. Cloudant put BigCouch on the cloud as a service and on GitHub as open source. BigCouch is supposed to be merged back into Apache CouchDB, but many months later this hasn’t materialized yet.

To complete the circle, today IBM announced it has signed an agreement to acquire Cloudant (news coverage on GigaOm, BostInno, TechCrunch). The deal probably makes sense considering Cloudant’s relationship with SoftLayer and IBM’s $1 billion platform-as-a-service investment, but less so if you consider the IBM and 10gen/MongoDB collaboration.

Anyways, the future of Apache CouchDB is bright. Yep.

Original title and link: From IBM to… IBM: The short, but complicated history of CouchDB, Cloudant, and a lot of other companies and projects (NoSQL database©myNoSQL)


How SQL-on-JSON analytics bolstered a business

Alex Woodie (Datanami) reporting on BitYota, a SQL-based data warehouse on top of JSON:

BitYota says it designed its own hosted data warehouse from scratch, and that it’s differentiated by having a JSON access layer atop the data store. “We have some uniqueness where we operate SQL directly on JSON,” says BitYota CEO Dev Patel. “We don’t need to translate that data into a structured format like a CSV. We believe that if you transform the data, you will lose some of the data quality. And once that’s transformed, you won’t get it back.”
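
✚ As an aside, this is not BitYota’s engine, but the general idea of running SQL directly over JSON documents, with no up-front transformation into fixed columns, can be sketched with SQLite’s built-in JSON functions (available in reasonably recent SQLite builds):

```python
# Illustration only: store raw JSON documents and query them with SQL.
# Requires a SQLite build that includes the JSON functions.
import json
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE events (doc TEXT)")
conn.executemany(
    "INSERT INTO events (doc) VALUES (?)",
    [
        (json.dumps({"user": "a", "amount": 10}),),
        (json.dumps({"user": "b", "amount": 25}),),
        (json.dumps({"user": "a", "amount": 5}),),
    ],
)

rows = conn.execute(
    """
    SELECT json_extract(doc, '$.user')        AS usr,
           SUM(json_extract(doc, '$.amount')) AS total
    FROM events
    GROUP BY json_extract(doc, '$.user')
    ORDER BY total DESC
    """
).fetchall()
print(rows)  # e.g. [('b', 25), ('a', 15)]
```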

✚ BitYota’s tagline is Analytics for mongoDB, so I assume it’s safe to say the backend is MongoDB and they are building a SQL layer on top of it. What flavor of SQL they offer and how they handle its quirks would make for a very interesting story.

✚ This relates to my earlier Do all roads lead back to SQL?

Original title and link: How SQL-on-JSON analytics bolstered a business (NoSQL database©myNoSQL)

via: http://www.datanami.com/datanami/2014-02-12/how_sql-on-json_analytics_bolstered_a_business.html


The birth and road ahead of TokuMX, the alternative MongoDB engine

While not a MongoDB user (or expert), I find Tokutek’s work on TokuMX, their alternative engine for MongoDB, quite interesting from both a technical point of view (what is currently broken in MongoDB?) and a business point of view: is the InnoDB model possible in the NoSQL space? What are the possible outcomes of building alternative core technology for a free product as a business model? Would a new product that combines MongoDB’s missing features with MongoDB’s “friendliness” and product marketing still lead to a successful product?

Zardosht Kasheff’s post about the history of TokuMX, and how the decision was made to pursue this direction, sheds some light on both of these areas.

But really, the BIGGEST benefit to this approach was the following: we could innovate on more of the MongoDB core server stack in ways the other approaches would not allow. Prior to TokuMX 1.4, such innovations include (but are not limited to):

  • Document level locking
  • Multi-statement transactions (on non-sharded clusters)
  • MVCC snapshot query semantics
  • Clustering indexes (although, to be fair, this was possible in other approaches)
  • Dramatically reduced I/O utilization on secondaries (which we will elaborate on in a future post)
  • Fast bulk loading
  • Enterprise hot backup

For these reasons, we chose this option, and after some hard work, TokuMX was born.

Original title and link: The birth and road ahead of TokuMX, the alternative MongoDB engine (NoSQL database©myNoSQL)

via: http://www.tokutek.com/2014/02/how-tokumx-was-born/


The Hadoop as ETL part in migrating from MongoDB to Cassandra at FullContact

While I’ve found the whole post very educational (and very balanced, considering the topic), the part that I’m linking to is about integrating MongoDB with Hadoop. After reading the story of integrating MongoDB and Hadoop at Foursquare, I was left with quite a few questions. This post doesn’t answer any of them, but it brings in some more details about existing tools, a completely different solution, and what seems to be an overarching theme whenever Hadoop and MongoDB are used in the same phrase:

We’re big users of Hadoop MapReduce and tend to lean on it whenever we need to make large scale migrations, especially ones with lots of transformation. That fact along with our existing conversion project from before, we used 10gen’s mongo-hadoop project which has input and output formats for Hadoop. We immediately realized that the InputFormat which connected to a MongoDB cluster was ill-suited to our usage. We had 3TB of partially-overlapping data across 2 clusters. After calculating input splits for a few hours, it began pulling documents at an uncomfortably slow pace. It was slow enough, in fact, that we developed an alternative plan.

You’ll have to read the post to learn how they’ve accomplished their goal, but as a spoiler, it was once again more of an ETL process rather than an integration.
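
✚ Purely as a hypothetical sketch of the general pattern, not FullContact’s actual pipeline: instead of pointing a live MongoDB input format at the cluster, dump the collection to newline-delimited JSON files that Hadoop jobs can consume as plain input.

```python
# Hypothetical cluster URI, database and collection names; bson.json_util
# handles ObjectIds and dates that plain json.dumps would choke on.
import json
from bson import json_util
from pymongo import MongoClient

contacts = MongoClient("mongodb://mongo-host:27017")["crm"]["contacts"]

with open("contacts.jsonl", "w") as out:
    for doc in contacts.find():
        out.write(json.dumps(doc, default=json_util.default) + "\n")
# The resulting newline-delimited JSON can be pushed to HDFS and processed
# as plain text files instead of going through a live MongoDB connector.
```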

✚ The corresponding HN thread; it focuses mostly on the MongoDB-to-Cassandra migration part.

Original title and link: The Hadoop as ETL part in migrating from MongoDB to Cassandra at FullContact (NoSQL database©myNoSQL)

via: http://www.fullcontact.com/blog/mongo-to-cassandra-migration/


Storage technologies at HipChat - CouchDB, ElasticSearch, Redis, RDS

As per the list below, HipChat’s storage layer is built from a couple of different technologies:

  • Hosting: AWS EC2 East, with 75 instances, currently all on Ubuntu 12.04 LTS
  • Database: CouchDB, currently for chat history, transitioning to ElasticSearch; MySQL-RDS for everything else
  • Caching: Redis
  • Search: ElasticSearch

  1. This post made me wonder what led the HipChat team to use CouchDB in the first place. I’m tempted to say it was the master-master replication and the early integration with Lucene.
  2. This is only the second time in quite a while that I’m reading an article mentioning CouchDB, the other being the February “no-releases-but-we’re-still-merging-BigCouch” report to the ASF. And according to the story, CouchDB is on the way out.
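
✚ Nothing below is HipChat’s actual code; it’s just a minimal cache-aside sketch with redis-py to illustrate the “Caching: Redis” role in a stack like the one listed above. The key scheme and the loader function are hypothetical.

```python
# Hypothetical key scheme ("room:<id>") and loader function; the pattern is
# plain cache-aside: try Redis first, fall back to the primary store, then
# cache the result with a TTL.
import json
import redis

r = redis.Redis(host="localhost", port=6379)

def get_room(room_id, load_from_db, ttl=300):
    key = f"room:{room_id}"
    cached = r.get(key)
    if cached is not None:
        return json.loads(cached)
    room = load_from_db(room_id)            # e.g. a query against RDS/CouchDB
    r.setex(key, ttl, json.dumps(room))     # cache for `ttl` seconds
    return room
```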

Original title and link: Storage technologies at HipChat - CouchDB, ElasticSearch, Redis, RDS (NoSQL database©myNoSQL)

via: http://highscalability.com/blog/2014/1/6/how-hipchat-stores-and-indexes-billions-of-messages-using-el.html