
document database: All content tagged as document database in NoSQL databases and polyglot persistence

Storage technologies at HipChat - CouchDB, ElasticSearch, Redis, RDS

As the list below shows, HipChat’s storage is built on several different technologies:

  • Hosting: AWS EC2 East with 75 instances, currently all Ubuntu 12.04 LTS
  • Database: CouchDB currently for Chat History, transitioning to ElasticSearch. MySQL-RDS for everything else
  • Caching: Redis
  • Search: ElasticSearch
  1. This post made me wonder what led the HipChat team to use CouchDB in the first place. I’m tempted to say it was the master-master replication and the early integration with Lucene.
  2. This is only the second time in quite a while that I’ve read an article mentioning CouchDB, after the February “no-releases-but-we’re-still-merging-BigCouch” report for the ASF. And according to this story, CouchDB is on the way out.

Original title and link: Storage technologies at HipChat - CouchDB, ElasticSearch, Redis, RDS (NoSQL database©myNoSQL)

via: http://highscalability.com/blog/2014/1/6/how-hipchat-stores-and-indexes-billions-of-messages-using-el.html


Mapping relational databases terms and SQL to MongoDB

A tuts+ guide to MongoDB for people familiar with SQL and relational databases:

We will start with mapping the basic relational concepts like table, row, column, etc., and move on to discuss indexing and joins. We will then look over the SQL queries and discuss their corresponding MongoDB database queries.
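
To give a taste of the kind of mapping the guide covers, here’s a minimal sketch in the mongo shell (the `users` collection and its fields are made up for illustration):

```javascript
// SQL:  SELECT name, age FROM users WHERE age > 30 ORDER BY age DESC
db.users.find(
    { age: { $gt: 30 } },        // WHERE age > 30
    { name: 1, age: 1, _id: 0 }  // SELECT name, age
).sort({ age: -1 });             // ORDER BY age DESC

// SQL:  CREATE INDEX idx_age ON users (age)
db.users.ensureIndex({ age: 1 });
```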

By the end of it you probably still won’t be able to convert your app to MongoDB, but at the next meetup or hackathon you’ll have an idea of what those Mongo guys are talking about.

Original title and link: Mapping relational databases terms and SQL to MongoDB (NoSQL database©myNoSQL)

via: http://code.tutsplus.com/articles/mapping-relational-databases-and-sql-to-mongodb--net-35650


MongoDB data storage structure, dbStats, and managing disk space

Two great posts from MongoLab covering the structure of MongoDB’s data on disk, how this is reflected in the results returned by the dbStats API, and, finally, some approaches to reclaiming disk space:

  1. How big is your MongoDB?
  2. Managing disk space in MongoDB
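
For reference, the numbers both posts dissect come from the dbStats command. Here’s a quick way to pull up the main ones from the mongo shell (the field names are part of the actual dbStats output; the interpretations in the comments apply to the MMAP storage engine the posts discuss):

```javascript
// db.stats() is the shell helper around the dbStats command
var s = db.stats();
printjson({
    dataSize: s.dataSize,       // bytes taken by the documents themselves
    storageSize: s.storageSize, // dataSize plus pre-allocated/free space in extents
    indexSize: s.indexSize,     // total size of all indexes
    fileSize: s.fileSize        // size of the data files on disk, including unused space
});
```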

(Figure: MongoDB data files)

Original title and link: MongoDB data storage structure, dbStats, and managing disk space (NoSQL database©myNoSQL)


Top 5 syntactic weirdnesses to be aware of in MongoDB

Slava Kim, a developer using MongoDB on a daily basis:

This article is not one of those. While most of the posts focus on the operations side, benchmarks, and performance characteristics, I want to talk a little bit about MongoDB’s query interfaces. That’s right - programming interfaces, specifically the node.js native driver, though these are nearly identical across the different platform drivers and the Mongo shell.

You might consider some of these corner cases. Or worse, things you’d get used to over time.
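
To give a flavor of the category (this is my own classic example, not necessarily one of the article’s five): forgetting `$set` in an update silently replaces the whole document.

```javascript
// Start with a document like { _id: ..., name: "alice", email: "a@example.com", age: 30 }

// Intended: bump the age. Actual result: the document is REPLACED by { age: 31 }.
db.users.update({ name: "alice" }, { age: 31 });

// What was meant: update a single field and keep the rest of the document.
db.users.update({ name: "alice" }, { $set: { age: 31 } });
```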

There is inherent complexity in developing a database. Adding such “quirks”, or allowing them to slip into your product, just makes things worse. And in case you’re thinking of all those products that cut corners, or invoke the 80/20 principle, just to get to market sooner, I’ll let you answer whether a database is the right place for applying these principles.

Original title and link: Top 5 syntactic weirdnesses to be aware of in MongoDB (NoSQL database©myNoSQL)

via: http://devblog.me/wtf-mongo


Partitioning MongoDB Data on the Fly

I initially bookmarked this article because it mentioned the same strategy for migrating data with zero downtime:

So during the update period, there could be writes sent to the old service, which is writing to the old, single MongoDB cluster. After updating, there’s a period of time where both servers are writing before the second machine is updated.
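
In node.js terms, the double-write window described above comes down to something like this sketch (the connection strings, the app database, and the messages collection are all hypothetical):

```javascript
const { MongoClient } = require("mongodb");

async function main() {
  // Hypothetical handles for the old single cluster and the new partitioned one.
  const oldCluster = await MongoClient.connect("mongodb://old-cluster/app");
  const newCluster = await MongoClient.connect("mongodb://new-cluster/app");

  // During the transition window every write goes to both clusters,
  // so either one can be promoted to the single source of truth afterwards.
  async function saveMessage(doc) {
    await oldCluster.db("app").collection("messages").insertOne(doc);
    await newCluster.db("app").collection("messages").insertOne(doc);
  }

  await saveMessage({ from: "alice", text: "hello" });
}

main();
```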

But then I realized that this is all about MongoDB. Tony Tam of Reverb:

To partition your data with the standard MongoDB toolset, significant downtime is unavoidable. You’ll either need to write a bunch of application logic, or get creative with some third party tools. This is a problem that we’ve hit at Reverb more than once, and these are the exact same tools and technique that we used to migrate across datacenters (see From the Cloud and Back).

Isn’t MongoDB’s autosharding supposed to address exactly this scenario? What am I missing?

Original title and link: Partitioning MongoDB Data on the Fly (NoSQL database©myNoSQL)

via: http://developers-blog.helloreverb.com/partitioning-mongodb-data-on-the-fly/


Quick links for how to backup different NoSQL databases

After re-reading HyperDex’s comparison of Cassandra, MongoDB, and Riak backups, I realized it doesn’t link to the corresponding docs. So here they are:

Cassandra backups

Cassandra backs up data by taking a snapshot of all on-disk data files (SSTable files) stored in the data directory.

You can take a snapshot of all keyspaces, a single keyspace, or a single table while the system is online. Using a parallel ssh tool (such as pssh), you can snapshot an entire cluster. This provides an eventually consistent backup. Although no one node is guaranteed to be consistent with its replica nodes at the time a snapshot is taken, a restored snapshot resumes consistency using Cassandra’s built-in consistency mechanisms.

After a system-wide snapshot is performed, you can enable incremental backups on each node to backup data that has changed since the last snapshot: each time an SSTable is flushed, a hard link is copied into a /backups subdirectory of the data directory (provided JNA is enabled).
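
The parallel-ssh approach mentioned above can be scripted in many ways; here’s a hedged node.js sketch (the hostnames and snapshot tag are made up; `nodetool snapshot -t <tag>` is the standard invocation):

```javascript
// Take an "eventually consistent" cluster-wide snapshot by running
// nodetool on every node roughly in parallel.
const { execFile } = require("child_process");

const nodes = ["cass-1.example.com", "cass-2.example.com", "cass-3.example.com"];

for (const host of nodes) {
  execFile("ssh", [host, "nodetool", "snapshot", "-t", "nightly"],
    (err, stdout) => {
      if (err) console.error(`${host}: snapshot failed:`, err.message);
      else console.log(`${host}: ${stdout.trim()}`);
    });
}
```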

MongoDB backups

Basically there are three ways to back up MongoDB:

  1. Using MMS
  2. Copying underlying files (see the sketch below)
  3. Using mongodump
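
For option 2, the part that’s easy to get wrong is quiescing writes before copying the files. From the mongo shell, the standard dance looks like this (fsyncLock/fsyncUnlock are the stock shell helpers; the copy itself happens out-of-band):

```javascript
// Flush pending writes to disk and block new writes while the files are copied.
db.fsyncLock();

// ...copy the dbPath files, or take a filesystem/EBS snapshot, out-of-band...

// Resume normal writes.
db.fsyncUnlock();
```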

Riak backups

Riak’s backup operations are quite different for its two main storage backends, Bitcask and LevelDB:

Choosing your Riak backup strategy will largely depend on the backend configuration of your nodes. In many cases, Riak will conform to your already established backup methodologies. When backing up a node, it is important to backup both the ring and data directories that pertain to your configured backend.

Note: I’d be happy to update this entry with links to docs on the tools and solutions other NoSQL databases (HBase, Redis, Neo4j, CouchDB, Couchbase, RethinkDB) provide.

✚ Considering that creating backups is only as useful as being sure they will actually work when you try to restore, I wonder why there are no tools that can validate a backup without forcing a complete restore. The two mechanisms are not equivalent, but for large databases this might simplify the process a bit and increase users’ confidence.

Original title and link: Quick links for how to backup different NoSQL databases (NoSQL database©myNoSQL)


Comparing NoSQL backup solutions

In a post introducing HyperDex backups, Robert Escriva compares the different backup solutions available in Cassandra, MongoDB, and Riak:

Cassandra: Cassandra’s backups are inconsistent, as they are taken at each server independently without coordination. Further, “Restoring from snapshots and incremental backups temporarily causes intensive CPU and I/O activity on the node being restored.”

MongoDB: MongoDB provides two backup strategies. The first strategy copies the data on backup, and re-inserts it on restore. This approach introduces high overhead because it copies the entire data set without opportunity for incremental backup.

The second approach is to use filesystem-provided snapshots to quickly backup the data of a mongod instance. This approach requires operating system support and will produce larger backup sizes.

Riak: Riak backups are inconsistent, as they are taken at each server independently without coordination, and require care when migrating between IP addresses. Further, Riak requires that each server be shut down before backing up LevelDB-powered backends.

Here’s how HyperDex’s new backup is described:

The HyperDex backup/restore process is strongly consistent, doesn’t require shutting down servers, and enables incremental backup support. Further, the process is quite efficient; it completes quickly, and does not consume CPU or I/O for extended periods of time.

The caveat is that HyperDex puts the cluster in read-only mode while backing up. That’s a loss of availability. Considering that both Cassandra and Riak promise high availability, their choice was clear.

Update: This comment from Emin Gün Sirer makes me wonder if I missed something:

HyperDex quiesces the network, takes a snapshot, resumes. Whole operation takes sub-second latency.

The key point is that the system stays online and available while the data copying takes place.

Original title and link: Comparing NoSQL backup solutions (NoSQL database©myNoSQL)

via: http://hackingdistributed.com/2014/01/14/back-that-nosql-up/


MySQL is a great Open Source project. How about open source NoSQL databases?

In a post titled “Some myths on Open Source, the way I see it”, Anders Karlsson writes about MySQL:

As far as code, adoption, and reaching out to create an SQL-based RDBMS that anyone can afford, MySQL/MariaDB has been immensely successful. But as an Open Source project, something developed together with the community, where everyone works on their end with their skills to create a great combined piece of work, MySQL has failed. This is sad, but on the other hand I’m not so sure it would have had as much influence and as wide adoption if the project had been a “clean” Open Source project.

The article offers a very black-and-white perspective on open source versus commercial code. But that’s not why I’m linking to it.

The above paragraph made me think about how many of the most popular open source NoSQL databases would die without the companies (or people) that created them.

Here’s my list: MongoDB, Riak, Neo4j, Redis, Couchbase, etc. And I could continue for quite a while, considering how many are out there: RavenDB, RethinkDB, Voldemort, Tokyo, Titan.

Actually, if you reverse the question, the list gets extremely short: Cassandra, CouchDB (still struggling, though), HBase. All of these were at some point driven by the community. Probably the only special case is LevelDB.

✚ As a follow-up to Anders Karlsson’s post, Robert Hodges posted Why I Love Open Source on The Scale-Out Blog.

Original title and link: MySQL is a great Open Source project. How about open source NoSQL databases? (NoSQL database©myNoSQL)

via: http://karlssonondatabases.blogspot.com/2014/01/some-myths-on-open-source-way-i-see-it.html


Look how fast it is… actually it’s not, but who cares

This is how it goes:

  1. someone declares a solution fast. It’s usually a micro-benchmark presented with almost no context.
  2. then someone else shows better numbers from a competing product. It’s a similar micro-benchmark run on completely different hardware. An apples-to-oranges comparison.
  3. the first person revisits the topic and says that actually performance doesn’t matter.

What’s wrong with this?

  1. most readers will only ever see the first post. The attraction of numbers is irresistible.
  2. the very few people who see the second kind of post will already be entrenched in a camp and will dismiss the other side’s results.

The bottom line is that we end up with two posts of irrelevant numbers that each group can use to claim theirs is bigger than the other’s. And very few people actually learn what’s so (completely) wrong with them.

Original title and link: Look how fast it is… actually it’s not, but who cares (NoSQL database©myNoSQL)


From MySQL to MongoDB and back - The world’s biggest biometrics database

The article “Inside India’s Aadhar, The World’s Biggest Biometrics Database”, published on TechCrunch, is mainly about possible information leaks, privacy issues, and so on. But toward its end I found some interesting bits about the databases used:

Sudhir Narayana, assistant director general at Aadhar’s technology center, told me that MongoDB was among several database products, apart from MySQL, Hadoop and HBase, originally procured for running the database search. Unlike MySQL, which could only store demographic data, MongoDB was able to store pictures.

That’s the warning sign right there. You can already see what follows:

However, Aadhar has been slowly shifting most of its database related work to MySQL, after realizing that MongoDB was not being able to cope with massive chunks of data, millions of packets.

✚ You can see more details about Aadhaar’s complex database architecture in Big Data at Aadhaar With Hadoop, HBase, MongoDB, MySQL, and Solr.

Original title and link: From MySQL to MongoDB and back - The world’s biggest biometrics database (NoSQL database©myNoSQL)

via: http://techcrunch.com/2013/12/06/inside-indias-aadhar-the-worlds-biggest-biometrics-database/


Why relationships are cool… Relationship in RDBMS vs graph databases

I have to agree with Patrick Durusau on this:

I have been trying to avoid graph “intro” slides and presentations.

There are only so many times you can stand to hear “…all the world is a graph…” as though that’s news. To anyone.

This presentation by Luca is different from the usual introduction-to-graphs presentations.

Original title and link: Why relationships are cool… Relationship in RDBMS vs graph databases (NoSQL database©myNoSQL)


TokuMX transactions for MongoDB

In two posts, the Tokutek guys explain how transactions work in TokuMX, the replacement engine they’re proposing to MongoDB users (remember that Vadim Tkachenko of the MySQL Performance Blog called TokuMX the InnoDB for MongoDB?); a small shell sketch follows the list:

  1. the what: Introducing TokuMX transactions for MongoDB applications

    • For each statement that tries to modify a TokuMX collection, either the entire statement is applied, or none of the statement is applied. A statement is never partially applied.
    • Commands `beginTransaction`, `commitTransaction`, and `rollbackTransaction` have been added to allow users to perform multi-statement transactions.
    • TokuMX queries use multi-version concurrency control (MVCC). That is, queries operate on a snapshot of the system that does not change for the duration of the query. Concurrent inserts, updates, and deletes do not affect query results (note this does not include file operations like removing a collection).
  2. the benefits: Four benefits of TokuMX transactions for MongoDB applications:

    1. cursors represent a true snapshot of the system
    2. simpler to batch inserts together for performance
    3. simpler for applications to update multiple documents with a single statement
    4. no need to combine documents together for the purpose of atomicity
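
Based on the command names above, a multi-statement transaction from the TokuMX shell would look roughly like this (the accounts collection is made up; check Tokutek’s posts for the exact invocation):

```javascript
// Begin a multi-statement transaction (a TokuMX-specific command).
db.runCommand({ beginTransaction: 1 });

// Both updates are applied atomically: either both take effect, or neither does.
db.accounts.update({ _id: "alice" }, { $inc: { balance: -100 } });
db.accounts.update({ _id: "bob" },   { $inc: { balance:  100 } });

// Make the changes visible and durable...
db.runCommand({ commitTransaction: 1 });
// ...or discard them with: db.runCommand({ rollbackTransaction: 1 });
```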

✚ I’d find TokuMX’s transactions even more interesting if they worked by default at the shard level instead of the cluster level. Users would then need to manually enable cluster-wide transactions, thus remaining in control of performance and availability.

✚ I still have my doubts about TokuMX’s positioning, but that’s a business & marketing story.

Original title and link: TokuMX transactions for MongoDB (NoSQL database©myNoSQL)