
MongoDB: All content tagged as MongoDB in NoSQL databases and polyglot persistence

Memory-Mapped I/O in SQLite

Beginning with version 3.7.17, SQLite has the option of accessing disk content directly using memory-mapped I/O and the new xFetch() and xUnfetch() methods on sqlite3_io_methods.

As with the doc about atomic commits, this will be one of the best, most succinct, and clearest documents you’ll read about memory-mapped files, their pros and cons, and how SQLite uses them.

If you are a MongoDB user you should read this.
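Enabling it is a one-line pragma. A minimal sketch in Python (the file name is arbitrary; the effective mapping size may come back lower than requested, or 0 on builds with mmap compiled out):

```python
import os
import sqlite3
import tempfile

# Memory-mapped I/O only applies to on-disk databases, so use a temp file.
path = os.path.join(tempfile.mkdtemp(), "demo.db")
conn = sqlite3.connect(path)

# Ask SQLite to map up to 256 MB of the database file into memory.
conn.execute("PRAGMA mmap_size = 268435456;")
effective = conn.execute("PRAGMA mmap_size;").fetchone()[0]

# Reads below this point can be served straight from mapped pages.
conn.execute("CREATE TABLE t (x INTEGER)")
conn.executemany("INSERT INTO t VALUES (?)", [(i,) for i in range(1000)])
conn.commit()
total = conn.execute("SELECT count(*) FROM t").fetchone()[0]
conn.close()
```

Note that the pragma is a hint: SQLite caps it at the compile-time `SQLITE_MAX_MMAP_SIZE`, which is why the code reads the value back instead of assuming it.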

✚ Check out the HN thread to see how many people love SQLite.

Original title and link: Memory-Mapped I/O in SQLite (NoSQL database©myNoSQL)

via: http://www.sqlite.org/mmap.html


NoSQL and Full Text Indexing: Two Trends

On one side:

  1. DataStax with Solr
  2. MapR with LucidWorks Search (nb: Solr)

and on the other side:

  1. Riak Search: Solr-like, but a custom proprietary implementation
  2. MongoDB text search: a custom proprietary implementation

I’m not going to argue about the pros and cons of each of these approaches, but I’m sure you already know which of these approaches I’m in favor of.

Original title and link: NoSQL and Full Text Indexing: Two Trends (NoSQL database©myNoSQL)


How to use MongoDB Redis-style: pure in-memory database

Antoine Girbal:

One sweet design choice of MongoDB is that it uses memory-mapped files to handle access to data files on disk. This means that MongoDB does not know the difference between RAM and disk, it just accesses bytes at offsets in giant arrays representing files and the OS takes care of the rest! It is this design decision that allows MongoDB to run in RAM with no modification.
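The mechanism the quote describes — treating a file as one big byte array and letting the OS page it in and out — is what `mmap(2)` provides. A database-agnostic sketch in Python (file name and sizes are arbitrary):

```python
import mmap
import os
import tempfile

path = os.path.join(tempfile.mkdtemp(), "data.bin")

# Pre-size the file, the way MongoDB preallocates its data files.
with open(path, "wb") as f:
    f.truncate(4096)

with open(path, "r+b") as f:
    m = mmap.mmap(f.fileno(), 0)  # map the whole file
    m[128:133] = b"hello"         # write bytes at an offset; no read()/write() calls
    m.flush()                     # otherwise the OS decides when pages hit disk
    m.close()

# The same bytes are visible through the ordinary file API.
with open(path, "rb") as f:
    f.seek(128)
    data = f.read(5)
```

The code never distinguishes between "the page was in RAM" and "the page was on disk" — exactly the property the quote credits for letting MongoDB run fully in RAM with no modification.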

No pun intended, but until MongoDB added journaling (on by default since 2.0), I always looked at MongoDB as an in-memory database. And I have to confess that even after that, considering all the recommendations and stories I read about MongoDB, I still perceive it as a mostly in-memory database.

Original title and link: How to use MongoDB Redis-style: pure in-memory database (NoSQL database©myNoSQL)

via: http://edgystuff.tumblr.com/post/49304254688/how-to-use-mongodb-as-a-pure-in-memory-db-redis-style


The MEAN Stack: MongoDB, ExpressJS, AngularJS and Node.js

MongoDB, ExpressJS, AngularJS and Node.js: the MEAN stack or, as the first commenter on the post called it, “the hipster stack”:

A few weeks ago, a friend of mine asked me for help with PostgreSQL. As someone who’s been blissfully SQL-­free for a year, I was quite curious to find out why he wasn’t just using MongoDB instead.

It’s all roses on the way to MongoDB.

Original title and link: The MEAN Stack: MongoDB, ExpressJS, AngularJS and Node.js (NoSQL database©myNoSQL)

via: http://blog.mongodb.org/post/49262866911/the-mean-stack-mongodb-expressjs-angularjs-and


10 questions to ask when hosting your database on AWS

Dharshan Rangegowda, founder of Scalegrid, posted a list of 10 questions that should be answered before hosting MongoDB on AWS. But they are generic enough to apply to any database-on-AWS setup. They cover aspects like HA, backup and restore, monitoring, and basic security. If you haven’t done this before, save them as a quick checklist.

✚ Just because you set up HA and backups, it doesn’t mean they’ll actually work when you need them. Test them over and over again. Make it part of your regular procedures.

Original title and link: 10 questions to ask when hosting your database on AWS (NoSQL database©myNoSQL)

via: http://blog.mongodirector.com/10-questions-to-ask-and-answer-when-hosting-mongodb-on-aws/


MongoLab offers MongoDB on Google Cloud Platform

This was fast:

This week at Google I/O we are launching support for MongoLab‘s fifth cloud provider – Google Cloud Platform. You can now use MongoLab to provision and manage MongoDB deployments on Google Compute Engine (GCE)!

Good move for MongoLab and good win for MongoDB users. I’ve read a lot of good things about Google’s Cloud Platform.

Original title and link: MongoLab offers MongoDB on Google Cloud Platform (NoSQL database©myNoSQL)

via: http://blog.mongolab.com/2013/05/mongolab-now-supports-google-cloud-platform/


MetLife uses MongoDB

InformationWeek, in an article about MetLife migrating to MongoDB:

“We had 60 different teams working together as one group, and they were working nights and weekends not because they had to but because they were excited and wanted to,” says Gary Hoberman, MetLife’s senior VP and CIO of regional application development.

Just imagine how many nights, weekends, and holidays these guys would put in if they were allowed to use an IDE. Like vim or emacs.

Original title and link: MetLife uses MongoDB (NoSQL database©myNoSQL)

via: http://www.informationweek.com/software/information-management/metlife-taps-nosql-for-customer-service/240154741?nomobile=1


MongoDB's TTL Collections in OpenStack's Marconi queuing service

Flavio Percoco describes the workaround OpenStack’s queuing system, Marconi, needs when using MongoDB’s TTL collections:

Even though it is a great feature, it wasn’t enough to cover Marconi’s needs since the latter supports per-message TTL. In order to cover this, one of the ideas was to implement something similar to MongoDB’s thread and have it running server-side, but we didn’t want that for a couple of reasons: it needed a separate thread / process and it had a bigger impact in terms of performance.

This got me thinking that per-message expiry might be one of the (few) features missing from Redis.

✚ Redis supports timeouts for keys. Redis 2.6 brought the accuracy of expiring keys from 1 second to 1 millisecond.

✚ Redis has support for different data structures like lists, sets, and sorted sets. But it’s missing the combination of the two: per-entry expiry inside a data structure.
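To make that combination concrete: what Marconi needs is a queue whose entries each carry their own TTL, expired individually rather than as a whole key. A hypothetical pure-Python sketch of such a structure (not Redis code; Redis itself only expires whole keys):

```python
import heapq
import time

class ExpiringQueue:
    """A FIFO queue where each message has its own TTL, expired lazily on read."""

    def __init__(self):
        self._heap = []  # (deadline, seq, message), min-heap ordered by expiry
        self._seq = 0    # insertion counter, doubles as a tie-breaker

    def push(self, message, ttl_seconds):
        deadline = time.monotonic() + ttl_seconds
        heapq.heappush(self._heap, (deadline, self._seq, message))
        self._seq += 1

    def live_messages(self):
        """Drop expired entries, then return survivors in insertion order."""
        now = time.monotonic()
        while self._heap and self._heap[0][0] <= now:
            heapq.heappop(self._heap)
        return [msg for _, _, msg in sorted(self._heap, key=lambda e: e[1])]

q = ExpiringQueue()
q.push("short-lived", ttl_seconds=0.01)
q.push("long-lived", ttl_seconds=60)
time.sleep(0.05)  # let the first message expire
remaining = q.live_messages()
```

Lazy expiry on read keeps the write path cheap, at the cost of dead entries lingering in memory until the next read — the same trade-off the quoted post weighs against running a dedicated expiry thread.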

Original title and link: MongoDB’s TTL Collections in OpenStack’s Marconi queuing service (NoSQL database©myNoSQL)

via: http://blog.flaper87.org/post/517c3ea50f06d3497faffe5a/?buffer_share=d639c


MongoDB Pub/Sub With Capped Collections

Rick Copeland designs a MongoDB Pub/Sub system based on:

  • MongoDB’s capped collections,
  • tailable data-awaiting cursors,
  • sequences (using find_and_modify()),
  • a “poorly documented option” of capped collections: oplog_replay¹.

If you’ve been following this blog for any length of time, you know that my NoSQL database of choice is MongoDB. One thing that MongoDB isn’t known for, however, is building a publish / subscribe system. Redis, on the other hand, is known for having a high-bandwidth, low-latency pub/sub protocol. One thing I’ve always wondered is whether I can build a similar system atop MongoDB’s capped collections, and if so, what the performance would be. Read on to find out how it turned out…

The solution is definitely ingenious and it could probably work for systems with modest pub/sub requirements. It’s definitely a good exercise in combining some interesting features of MongoDB (I like the capped collections and the tailable data-awaiting cursors).

✚ I’m wondering whether the tailable data-awaiting cursors behave like non-blocking polls under the hood.


  1. I don’t really understand how this works. 
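For readers who haven’t used these features: a capped collection behaves like a fixed-size ring buffer, and a tailable, data-awaiting cursor is a read position that blocks until a newer document arrives. A minimal in-process analogy of the pattern (plain Python, no MongoDB required; names are illustrative, not MongoDB’s API):

```python
import collections
import threading

class CappedCollection:
    """Ring buffer plus a blocking 'tailable cursor', mimicking the pattern."""

    def __init__(self, max_docs):
        self._docs = collections.deque(maxlen=max_docs)  # oldest docs fall off
        self._cond = threading.Condition()
        self._inserted = 0  # total inserts ever, akin to an oplog position

    def insert(self, doc):
        with self._cond:
            self._docs.append((self._inserted, doc))
            self._inserted += 1
            self._cond.notify_all()  # wake any waiting cursors

    def tail(self, position, timeout=1.0):
        """Block until a doc at or past `position` exists, then return (pos, doc)."""
        with self._cond:
            self._cond.wait_for(
                lambda: self._docs and self._docs[-1][0] >= position,
                timeout=timeout,
            )
            for pos, doc in self._docs:
                if pos >= position:
                    return pos, doc
            return None  # timed out with nothing new

coll = CappedCollection(max_docs=4)
result = []

def subscriber():
    pos, doc = coll.tail(position=0, timeout=2.0)
    result.append(doc)

t = threading.Thread(target=subscriber)
t.start()
coll.insert({"channel": "demo", "msg": "hello"})
t.join()
```

The condition variable plays the role of the data-awaiting cursor: the subscriber parks instead of polling, which is why latency can stay low even though the underlying storage is just a bounded buffer.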

Original title and link: MongoDB Pub/Sub With Capped Collections (NoSQL database©myNoSQL)

via: http://blog.pythonisito.com/2013/04/mongodb-pubsub-with-capped-collections.html


Counting in MongoDB Just Got Much Faster

Antoine Girbal about counts in MongoDB:

Doing counts in MongoDB has always been a slow operation even on an indexed field… until now. To do the count, it would iterate through every single element in the index and try to match the key, giving a response time of several seconds for just a million documents. It would be especially slow on values with high cardinality, meaning that the count is high.

A bug fix and an optimization that takes advantage of the structure of MongoDB’s B-tree indexes.
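The gist of the optimization: an ordered index already knows where a key range begins and ends, so a count can be two descents into the B-tree instead of a walk over every matching entry. The same idea, sketched on a sorted Python list standing in for the index:

```python
import bisect

# A sorted "index" on some field, with many documents per value.
index = sorted([1] * 500_000 + [2] * 500_000)

# Old approach: iterate every entry and match the key -- O(n) in matches.
slow_count = sum(1 for key in index if key == 2)

# New approach: locate both ends of the key range -- O(log n).
fast_count = bisect.bisect_right(index, 2) - bisect.bisect_left(index, 2)
```

This is why the quote singles out values that match many documents: the iterate-and-match cost grows with the count itself, while the range-boundary approach does not.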

Original title and link: Counting in MongoDB Just Got Much Faster (NoSQL database©myNoSQL)

via: http://edgystuff.tumblr.com/post/47080433433/counting-in-mongodb-just-got-much-faster


Is MongoDB Still on Course?

Adrien Mogenet on how MongoDB’s evolution has diverged from his expectations:

I was a real fan of MongoDB in its early days, while all of the current solutions were just emerging. […] To be clear, I’m definitely not against MongoDB. I just wanted through this article to point out the fact that they roughly changed their direction and led to a project that I could not follow anymore.

Breadth vs depth.

Original title and link: Is MongoDB Still on Course? (NoSQL database©myNoSQL)

via: http://www.borntosegfault.com/2013/03/is-mongodb-still-on-course.html


MongoDB Transactions With TokuDB's Fractal Tree Indexes Engine

An interesting new direction for Tokutek: pushing their storage engine based on Fractal Tree Indexes into MongoDB:

Running MongoDB with Fractal Tree Indexes (used today in the MySQL storage engine TokuDB) is fully transactional. Each statement is transactional. If an update is to modify ten rows, then either all rows are modified, or none are. Queries use multi-versioning concurrency control (MVCC) to return results from a snapshot of the system, thereby not being affected by write operations that may happen concurrently.
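MVCC in one sentence: readers pin the version of the data that existed when they started, so concurrent writes create new versions instead of disturbing in-flight reads. A toy sketch of snapshot reads — hypothetical, not TokuDB’s actual implementation:

```python
class MVCCStore:
    """Each key maps to a list of (commit_ts, value) versions."""

    def __init__(self):
        self._versions = {}  # key -> [(ts, value), ...] in commit order
        self._clock = 0      # logical commit timestamp

    def write(self, key, value):
        self._clock += 1
        self._versions.setdefault(key, []).append((self._clock, value))

    def snapshot(self):
        """Return a read function pinned at the current commit timestamp."""
        ts = self._clock
        def read(key):
            # Newest version committed at or before the snapshot wins.
            for commit_ts, value in reversed(self._versions.get(key, [])):
                if commit_ts <= ts:
                    return value
            return None
        return read

store = MVCCStore()
store.write("balance", 100)
read_v1 = store.snapshot()       # snapshot taken before the update
store.write("balance", 50)       # a concurrent writer commits
old = read_v1("balance")         # the pinned snapshot still sees 100
new = store.snapshot()("balance")  # a fresh snapshot sees 50
```

The quote’s point follows directly: because readers consult a snapshot timestamp rather than the live data, writes that happen concurrently never affect query results.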

Original title and link: MongoDB Transactions With TokuDB’s Fractal Tree Indexes Engine (NoSQL database©myNoSQL)

via: http://www.tokutek.com/2013/04/mongodb-transactions-yes/#gsc.tab=0