
bigtable: All content tagged as bigtable in NoSQL databases and polyglot persistence

5 Steps to Benchmarking Managed NoSQL - DynamoDB Vs Cassandra

Ben Bromhead (instaclustr) for High Scalability:

To determine the suitability of a provider, your first port of call is to benchmark. Choosing a service provider is often done in a number of stages. First is to shortlist providers based on capabilities and claimed performance, ruling out those that do not meet your application requirements. Second is to look for benchmarks conducted by third parties, if any. The final stage is to benchmark the service yourself.
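The third stage, benchmarking the service yourself, boils down to timing a representative workload and looking at latency percentiles, not just averages. A minimal sketch of such a harness (illustrative Python; the `op` callable and the numbers are placeholders, not a real database client):

```python
import random
import statistics
import time

def run_benchmark(op, n_ops=10_000):
    """Run `op` n_ops times and report latency percentiles and throughput.

    `op` stands in for a single database call (e.g. a DynamoDB read or a
    Cassandra read) -- here it is just a placeholder callable.
    """
    latencies = []
    start = time.perf_counter()
    for _ in range(n_ops):
        t0 = time.perf_counter()
        op()
        latencies.append(time.perf_counter() - t0)
    elapsed = time.perf_counter() - start
    latencies.sort()
    return {
        "throughput_ops_s": n_ops / elapsed,
        "p50_ms": statistics.median(latencies) * 1000,
        "p99_ms": latencies[int(0.99 * n_ops)] * 1000,
    }

# Placeholder workload: sleep a random sub-millisecond interval.
stats = run_benchmark(lambda: time.sleep(random.uniform(0, 0.0005)), n_ops=200)
print(stats)
```

In practice you would use a standard tool like YCSB for this, but the principle is the same: many operations, percentile latencies, sustained throughput.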

Peter Bailis asks a very valid question: “if it’s the default YCSB and it’s a benchmark, where are the results?”

✚ instaclustr offers a fully managed hosting solution for Cassandra. (Disclaimer: they’ve sponsored myNoSQL in the past)

Original title and link: 5 Steps to Benchmarking Managed NoSQL - DynamoDB Vs Cassandra (NoSQL database©myNoSQL)

via: http://highscalability.com/blog/2013/4/3/5-steps-to-benchmarking-managed-nosql-dynamodb-vs-cassandra.html


Improving Secondary Index Write Performance in Cassandra 1.2

Sam Tunnicliffe describes the old and the new, optimized behavior of secondary index writes in Cassandra 1.2:

While secondary indexes can add a lot of flexibility to the way data is modelled and accessed, they do add complexity on the server side as the indexes need to be kept in sync with the primary data. Until recently, this has led to some significant trade offs in write throughput and IO utilisation as we always had to perform a read before the write in order to update any relevant secondary indexes. In Cassandra 1.2, this area has been substantially reworked to remove the need for read-before-write. New index entries are now written at the same time as the primary data is updated and old entries removed lazily at query time. Overall, this has lead to some decent performance improvements.
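The read-free write path described above can be sketched in a toy model (illustrative Python; names and structure are mine, not Cassandra’s actual storage layout): a write only *adds* an index entry, and stale entries left behind by overwrites are detected and purged lazily when the index is queried.

```python
class IndexedTable:
    """Toy model of Cassandra 1.2's read-free secondary index writes."""
    def __init__(self):
        self.rows = {}    # primary data: key -> value
        self.index = {}   # value -> set of keys (may hold stale entries)

    def write(self, key, value):
        self.rows[key] = value
        self.index.setdefault(value, set()).add(key)  # no read of old value

    def query_index(self, value):
        candidates = self.index.get(value, set())
        stale = {k for k in candidates if self.rows.get(k) != value}
        candidates -= stale          # lazy cleanup at query time
        return sorted(candidates)

t = IndexedTable()
t.write("user1", "london")
t.write("user1", "paris")            # overwrite: the "london" entry is now stale
print(t.query_index("london"))       # -> [] (stale entry purged lazily)
print(t.query_index("paris"))        # -> ['user1']
```

The trade-off is visible even in the toy: writes never read, so queries have to validate every candidate against the primary data.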

Original title and link: Improving Secondary Index Write Performance in Cassandra 1.2 (NoSQL database©myNoSQL)

via: http://www.datastax.com/dev/blog/improving-secondary-index-write-performance-in-1-2


Graph Based Recommendation Systems at eBay

Slidedeck from eBay explaining how they implemented a graph-based recommendation system based on, surprise, not a graph database but Cassandra.

Original title and link: Graph Based Recommendation Systems at eBay (NoSQL database©myNoSQL)


HBase Compactions Q&A

Ted Yu summarizes some of the most frequent questions related to compactions in HBase:

On user mailing list, questions about compaction are probably the most frequently asked.

Original title and link: HBase Compactions Q&A (NoSQL database©myNoSQL)

via: http://zhihongyu.blogspot.com/2013/03/compactions-q.html


RSS Reader With Cassandra and Netflix OSS Tools

This RSS reader app from Netflix is a very good excuse to use Cassandra and some of Netflix’s open source projects, and, why not, to create an alternative to Google Reader, which is declared defunct or alive every couple of months:

[Diagram: Recipes RSS architecture]

Projects you’ll use: Cassandra with Astyanax, Archaius, Blitz4j, Eureka, Governator, Hystrix, Karyon, Ribbon, Servo. As for myself, I’ve already checked out the code.

Original title and link: RSS Reader With Cassandra and Netflix OSS Tools (NoSQL database©myNoSQL)

via: http://techblog.netflix.com/2013/03/introducing-first-netflixoss-recipe-rss.html


Cassandra at Adobe: The Profile Cache Servers

The team I know at Adobe has invested a lot in HBase and offers its services globally. But according to this PDF, in true polyglot persistence manner, it looks like other parts of the Adobe business have opted for a different solution: Cassandra. The cluster mentioned in the whitepaper is pretty small, 16 nodes, but what is interesting is that these are beefy servers using solid state drives:

The PCS is comprised of large servers using solid state drives (SSDs) for storage […] The PCS is basically Cassandra with a set of custom APIs built on top of it.

Original title and link: Cassandra at Adobe: The Profile Cache Servers (NoSQL database©myNoSQL)


Introduction to Apache HBase Snapshots

Matteo Bertozzi introduces HBase snapshots:

Prior to CDH 4.2, the only way to back-up or clone a table was to use Copy/Export Table, or after disabling the table, copy all the hfiles in HDFS. Copy/Export Table is a set of tools that uses MapReduce to scan and copy the table but with a direct impact on Region Server performance. Disabling the table stops all reads and writes, which will almost always be unacceptable.

In contrast, HBase snapshots allow an admin to clone a table without data copies and with minimal impact on Region Servers. Exporting the snapshot to another cluster does not directly affect any of the Region Servers; export is just a distcp with an extra bit of logic.

The part that made me really curious and that didn’t make too much sense when first reading the post is “clone a table without data copies”. But the post clarifies what the snapshot is:

A snapshot is a set of metadata information that allows an admin to get back to a previous state of the table. A snapshot is not a copy of the table; it’s just a list of file names and doesn’t copy the data. A full snapshot restore means that you get back to the previous “table schema” and you get back your previous data losing any changes made since the snapshot was taken.
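The “list of file names, no data copy” idea can be sketched in a toy model (illustrative Python only; real HBase snapshots also track the table schema and use reference files so compactions can’t delete hfiles a snapshot still needs):

```python
class Table:
    """Toy model of HBase-style snapshots: a snapshot records only the
    *names* of the files backing a table, never the data itself."""
    def __init__(self, files):
        self.files = set(files)      # hfiles currently backing the table
        self.snapshots = {}

    def snapshot(self, name):
        self.snapshots[name] = frozenset(self.files)   # metadata only

    def flush(self, new_file):
        self.files.add(new_file)     # new writes land in new hfiles

    def restore(self, name):
        # Back to the previous state: changes since the snapshot are lost.
        self.files = set(self.snapshots[name])

t = Table(["hfile-001", "hfile-002"])
t.snapshot("before-import")
t.flush("hfile-003")
t.restore("before-import")
print(sorted(t.files))               # -> ['hfile-001', 'hfile-002']
```

Because HBase hfiles are immutable, remembering their names is enough to remember the data they held at snapshot time.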

What I still don’t understand is how snapshots are working after a major compaction (which drops deletes and expired cells).

Original title and link: Introduction to Apache HBase Snapshots (NoSQL database©myNoSQL)

via: http://blog.cloudera.com/blog/2013/03/introduction-to-apache-hbase-snapshots/


Adding Value Through Graph Analysis Using Titan and Faunus

Interesting slidedeck by Matthias Broecheler introducing 3 graph-related tools developed by Vadas Gintautas, Marko Rodriguez, Stephen Mallette and Daniel LaRocque:

  1. Titan: a massive scale property graph allowing real-time traversals and updates
  2. Faunus: for batch processing of large graphs using Hadoop
  3. Fulgora: for running global graph algorithms on large, compressed, in-memory graphs

The first couple of slides also show some possible use cases where these tools would prove their usefulness:

Original title and link: Adding Value Through Graph Analysis Using Titan and Faunus (NoSQL database©myNoSQL)


Simplifying HBase Schema Development With KijiSchema

Jon Natkins from WibiData:

When building an HBase application, you need to be aware of the intricacies and quirks of HBase. For example, your choice of names for column families, or columns themselves can have a drastic effect on the amount of disk space necessary to store your data. In this article, we’ll see how building HBase applications with KijiSchema can help you avoid inefficient disk utilization.
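The reason column names matter so much is that an HBase-style store repeats the row key, column family name and column qualifier on disk for every single cell. A rough back-of-the-envelope estimate (illustrative Python; the constant 9 approximates the 8-byte timestamp plus 1-byte key type, and the real KeyValue format also has length prefixes, omitted here):

```python
def cell_key_bytes(row_key, family, qualifier, n_cells):
    """Rough per-cell key overhead when every cell repeats the row key,
    column family and qualifier on disk."""
    per_cell = len(row_key) + len(family) + len(qualifier) + 9
    return per_cell * n_cells

# One billion cells: a verbose qualifier vs a one-letter one.
verbose = cell_key_bytes("user#0000123", "info", "last_login_timestamp", 10**9)
terse = cell_key_bytes("user#0000123", "i", "l", 10**9)
print((verbose - terse) / 10**9, "GB saved")  # -> 22.0 GB saved
```

Which is exactly the kind of accounting a schema layer like KijiSchema can hide from the application developer.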

The recommendation about the length of column names is one of those subtle signs of how young the NoSQL space is¹.


  1. This is not specific to HBase; it also applies to MongoDB, RethinkDB, etc.

Original title and link: Simplifying HBase Schema Development With KijiSchema (NoSQL database©myNoSQL)

via: http://www.kiji.org/2012/03/01/using-disk-space-efficiently-with-kiji-schema


Brief Intro to Cassandra in 27 Slides

If you’ve never looked into Apache Cassandra, Michaël Figuière’s slidedeck will give you a quick intro to Cassandra’s main concepts.

Apache Cassandra 1.2 introduces some new features such as a Binary Protocol and Collections datatype that together with the now finalized CQL3 query language provide a new interface to communicate with Cassandra that dramatically shrink its learning curve and simplify its daily use while still relying on its highly scalable architecture and storage engine. This presentation will iterate over all these new features including an overview of CQL3 query language, a look at the new client architecture, and an update on data modeling best practices. Then we’ll see how to implement an enterprise application using this new interface so that the audience can realize that a number of design principles are inspired from those commonly used with relational databases while some other entirely different, due to Cassandra partitioning approach.

Original title and link: Brief Intro to Cassandra in 27 Slides (NoSQL database©myNoSQL)


A Quick Tour of Internal Authentication and Authorization Security in DataStax Enterprise and Apache Cassandra

Robin Schumacher describes the new security features added to Apache Cassandra and DataStax Enterprise:

This article will concentrate on the new internal authentication and authorization (or permission management) features that are part of both open source Cassandra as well as DataStax Enterprise. Authentication deals with validating incoming user connections to a database cluster, whereas authorization concerns itself with what a logged in user can do inside a database.
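For a flavor of what the internal authentication and authorization described above look like in practice, here is a hedged CQL3 sketch (the keyspace and user names are made up; enabling this requires `PasswordAuthenticator` and `CassandraAuthorizer` to be configured in cassandra.yaml):

```sql
-- Create a non-superuser account and scope its permissions.
CREATE USER reporting WITH PASSWORD 'report_pw' NOSUPERUSER;
GRANT SELECT ON KEYSPACE metrics TO reporting;
REVOKE MODIFY ON KEYSPACE metrics FROM reporting;
```

Authentication is the `CREATE USER`/password part; authorization is the `GRANT`/`REVOKE` part, exactly the split the article draws.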

I’m happy to see NoSQL databases entering the security space, as this will ease their way into enterprises. But I slightly fear the moment when the marketing message changes from “it’s too early to provide security features” to “the first enterprise-grade NoSQL database”.

Original title and link: A Quick Tour of Internal Authentication and Authorization Security in DataStax Enterprise and Apache Cassandra (NoSQL database©myNoSQL)

via: http://www.planetcassandra.org/blog/post/a-quick-tour-of-internal-authentication-and-authorization-security-in-datastax-enterprise-and-apache-cassandra


Project Rhino: Enhanced Data Protection for the Apache Hadoop Ecosystem

Avik Dey (Intel) sent the announcement of the new open source project from Intel to the Hadoop mailing list:

As the Apache Hadoop ecosystem extends into new markets and sees new use cases with security and compliance challenges, the benefits of processing sensitive and legally protected data with Hadoop must be coupled with protection for private information that limits performance impact. Project Rhino is our open source effort to enhance the existing data protection capabilities of the Hadoop ecosystem to address these challenges, and contribute the code back to Apache.

Project Rhino targets security at all levels: from encryption and key management to cell-level ACLs and audit logging.

Original title and link: Project Rhino: Enhanced Data Protection for the Apache Hadoop Ecosystem (NoSQL database©myNoSQL)

via: http://mail-archives.apache.org/mod_mbox/hadoop-common-dev/201302.mbox/%3cCD5137E5.15610%25avik.dey@intel.com%3e