Amazon: All content tagged as Amazon in NoSQL databases and polyglot persistence

Basho Announces Riak-Based Multi-Tenant, Distributed, S3-Compatible Cloud Storage Platform

Coverage of the announcement of Basho's new product, Riak CS, a multi-tenant, distributed, S3-compatible cloud storage platform:

My notes about Riak CS will follow shortly.

Original title and link: Basho Announces Riak-Based Multi-Tenant, Distributed, S3-Compatible Cloud Storage Platform (NoSQL database©myNoSQL)


DynamoDB Tutorial for .NET: Using Amazon DynamoDB Object Persistence Framework

The usual getting started guide for .NET developers:

The object persistence functionality in the AWS SDK for .NET enables you to easily map .NET classes to Amazon DynamoDB items. By using your own classes to store and retrieve Amazon DynamoDB data, you can use Amazon DynamoDB without worrying about data conversion or developing middle-layer solutions that interface with the Amazon DynamoDB service.
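The mapping the object persistence layer automates can be illustrated in a language-neutral way. Below is a minimal Python sketch of the idea; all class and method names here are hypothetical, and the real SDK is .NET and does far more (type registration, lazy loading, batch operations):

```python
# Minimal sketch of the class-to-item mapping idea behind an object
# persistence layer: a class carries table/key metadata, and instances
# are converted to and from DynamoDB-style typed attribute dictionaries.
# All names here are hypothetical, not part of any AWS SDK.

class DynamoModel:
    table_name = None
    hash_key = None

    def to_item(self):
        # Serialize instance attributes into a flat item dict,
        # tagging each value with a DynamoDB-style type ("S" or "N").
        item = {}
        for name, value in vars(self).items():
            if isinstance(value, (int, float)):
                item[name] = {"N": str(value)}
            else:
                item[name] = {"S": str(value)}
        return item

    @classmethod
    def from_item(cls, item):
        # Rebuild an instance from a typed item dict.
        obj = cls.__new__(cls)
        for name, typed in item.items():
            (tag, raw), = typed.items()
            obj.__dict__[name] = int(raw) if tag == "N" else raw
        return obj


class Book(DynamoModel):
    table_name = "Library"
    hash_key = "title"

    def __init__(self, title, year):
        self.title = title
        self.year = year


item = Book("Dune", 1965).to_item()
print(item)  # {'title': {'S': 'Dune'}, 'year': {'N': '1965'}}
roundtrip = Book.from_item(item)
print(roundtrip.title, roundtrip.year)
```

The "without worrying about data conversion" claim in the quote maps to exactly this kind of serialization being hidden behind the SDK.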

Original title and link: DynamoDB Tutorial for .NET: Using Amazon DynamoDB Object Persistence Framework (NoSQL database©myNoSQL)

via: http://aws.amazon.com/articles/2790257258340776


Amazon RDS: The Good and Bad of Hosted MySQL

It's mostly good:

RDS is pretty awesome — it’s basically a highly available MySQL setup with backups and optional goodness like read-slaves. RDS is one of the best services as far as Amazon Web Services are concerned: 90% of what anyone would need from RDS, Amazon allows you to do with a couple of clicks.

but:

Aside from the monitoring and backup quirks, one of the real pain points of Amazon RDS is that a lot of the collective MySQL knowledge is not available to us. The knowledge manifested in books, blogs, various monitoring solutions, and outstanding tools like Percona’s backup tools is not available to people who run Amazon RDS setups.

Most of the time it is difficult to get access to your preferred tools from smaller service providers too. Amazon cannot afford to support every available MySQL tool in their operations, but I’m pretty sure they have a prioritized list of the most requested ones.

Original title and link: Amazon RDS: The Good and Bad of Hosted MySQL (NoSQL database©myNoSQL)

via: http://till.klampaeckel.de/blog/archives/179-Hosted-MySQL-Amazon-RDS-and-backups.html


Wordnik: Migrating From a Monolithic Platform to Micro Services

The story of how Wordnik changed a monolithic platform to one based on Micro Services and the implications at the data layer (MongoDB):

To address this, we made a significant architectural shift. We have split our application stack into something called Micro Services — a term that I first heard from the folks at Netflix. […] This translates to the data tier as well. We have low cost servers, and they work extremely well when they stay relatively small. Make them too big and things can go sour, quickly. So from the data tier, each service gets its own data cluster. This keeps services extremely focused, compact, and fast — there’s almost no fear that some other consumer of a shared data tier is going to perform some ridiculously slow operation which craters the runtime performance. Have you ever seen what happens when a BI tool is pointed at the runtime database? This is no different.

Original title and link: Wordnik: Migrating From a Monolithic Platform to Micro Services (NoSQL database©myNoSQL)

via: http://blog.wordnik.com/with-software-small-is-the-new-big


Quick Guide to Using Amazon DynamoDB and S3 With Rails

Daniel Lobato Garcia:

One of the hackathons I attended was about deploying a Ruby app that pushes to and retrieves records from DynamoDB, and uploading photos to S3. It involves several steps but thanks to Trevor Rowe (author of AWS SDK for Ruby) who helped me, I finally succeeded and created an implementation that works pretty nicely. […] I included some information on how you could replicate this functionality on your Rails app in this Github repository.

Just a very basic guide to using the AWS SDK for Ruby in a Rails app. Or as one Hacker News commenter said:

I’m not sure, but from reading the article this feels more like AWS-SDK’s seamless integration with Rails rather than Rails’ seamless integration with DynamoDB & S3. Please correct me if I misunderstood.

Original title and link: Quick Guide to Using Amazon DynamoDB and S3 With Rails (NoSQL database©myNoSQL)

via: http://blog.daniellobato.me/2012/03/rails-and-new-seamless-integration-with-amazons-dynamodb-and-s3/


Cross-Platform Global High-Score Using Amazon SimpleDB

Timo Fleisch provides a set of requirements for which using Amazon SimpleDB for storing mobile gaming data sounds like a good solution:

In this post I am going to describe my solution for a simple global high score that works with WP7, iOS and Android. […] The preconditions for me for the global high score were:

  • It should be easy and fast to implement and maintain.
  • It should use standard web technologies.
  • It should be scalable.
  • It should use the standard HTTP protocol.
  • It should be secure.
  • And it should be as cross-platform as possible.

SimpleDB definitely fits the bill for these requirements. But there might be some other details that could lead to using a different approach or making things a tad more complex:

  • the Amazon SimpleDB latency
  • the requirement of an always-on internet connection
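One SimpleDB-specific detail worth knowing for a high-score table: SimpleDB compares all attribute values lexicographically as strings, so numeric scores must be stored zero-padded for a sorting query to order them correctly. A small sketch of that encoding (plain Python, no AWS calls involved):

```python
# SimpleDB compares attribute values as strings, so a numeric high
# score must be stored zero-padded for an "ORDER BY score DESC"
# style query to sort correctly.

PAD = 10  # fixed width; must exceed the largest expected score's digit count

def encode_score(score):
    return str(score).zfill(PAD)

def decode_score(encoded):
    return int(encoded)

scores = [3, 250, 41]
encoded = sorted((encode_score(s) for s in scores), reverse=True)
print(encoded)                              # ['0000000250', '0000000041', '0000000003']
print([decode_score(e) for e in encoded])   # [250, 41, 3]

# Without padding, string ordering gets the ranking wrong:
print(sorted((str(s) for s in scores), reverse=True))  # ['41', '3', '250']
```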

Original title and link: Cross-Platform Global High-Score Using Amazon SimpleDB (NoSQL database©myNoSQL)

via: http://klutzgames.blogspot.com/2012/03/cross-platform-global-high-score-using.html


Asyncdynamo: Amazon DynamoDB Async Python Library by Bitly

Bitly’s new asynchronous Amazon DynamoDB Python client:

Asyncdynamo requires Boto and Tornado to be installed, and must be run with Python 2.7. It replaces Boto’s synchronous calls to Dynamo and to Amazon STS (to retrieve session tokens) with non-blocking Tornado calls. For the end user its interface seeks to mimic that of Boto Layer1, with each method now requiring an additional callback parameter.

Available on GitHub.
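The interface change described above, a blocking return value replaced by an additional callback parameter, can be simulated without Tornado or Boto. The sketch below is purely illustrative and runs against an in-memory table rather than DynamoDB:

```python
# Sketch of the interface shift asyncdynamo makes: a Boto-Layer1-style
# blocking get_item returns its result directly, while the async
# variant takes an extra callback and hands the result to it.
# This simulation uses a plain dict instead of DynamoDB.

TABLE = {"user#1": {"name": {"S": "alice"}}}

def get_item_sync(table, key):
    # Blocking style: the result is the return value.
    return TABLE.get(key)

def get_item_async(table, key, callback):
    # Async style: in a real event loop (Tornado) this would issue a
    # non-blocking HTTP request; here we invoke the callback directly
    # to show the control flow.
    callback(TABLE.get(key))

results = []
results.append(get_item_sync("users", "user#1"))
get_item_async("users", "user#1", results.append)
print(results)
```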

Original title and link: Asyncdynamo: Amazon DynamoDB Async Python Library by Bitly (NoSQL database©myNoSQL)

via: http://word.bitly.com/post/18861837158/introducing-asyncdynamo


Why DynamoDB Consistent Reads Cost Twice or What’s Wrong With Amazon’s DynamoDB Pricing?

Peter Bailis has posted an interesting article about the cost structure for Amazon DynamoDB reads, where consistent reads are double the price of eventually consistent reads:

  1. The cost of strong consistency to Amazon is low, if not zero. To you? 2x.
  2. If you were to run your own distributed database, you wouldn’t incur this cost (although you’d have to factor in hardware and ops costs).
  3. Offering a “consistent write” option instead would save you money and latency.
  4. If Amazon provided SLAs so users knew how well eventual consistency worked, users could make more informed decisions about their app requirements and DynamoDB. However, Amazon probably wouldn’t be able to charge so much for strong consistency.

It is not the first time I’ve heard this discussion, but it is the first time I’ve found it in a detailed form. I have no reasons to defend Amazon’s DynamoDB pricing strategy, but:

  1. Comparing the cost of operating a self-hosted, highly available distributed database with the cost of a managed one seems to me out of place and cannot lead to a real conclusion.
  2. While consistent writes could be a solution for always having consistent reads, it would require Amazon to reposition DynamoDB from a highly available database to something else. Considering Amazon has always explained their rationale for building highly available systems, I find it difficult to believe this would happen.
  3. Getting back to the consistent vs eventually consistent reads, what one needs to account for is a combination of:

    • costs for cross data center access
    • costs for maintaining the request capacity SLA
    • costs for maintaining the request latency promise
    • penalty costs for not meeting the service commitment

    I agree, though, that it’s almost impossible to estimate each of these and decide whether or not they justify the increased consistent-read price.
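For a concrete feel of the 2x factor, here is the read-capacity arithmetic in sketch form. The halving rule for eventually consistent reads follows DynamoDB's published capacity model, but treat the unit size as an illustrative assumption, since the figure has changed over time:

```python
import math

# Illustrative read-capacity arithmetic: a strongly consistent read
# consumes a full read capacity unit per size unit, an eventually
# consistent read consumes half. ITEM_UNIT_KB is an assumption, not
# a guaranteed current AWS figure.

ITEM_UNIT_KB = 4  # item size covered by one read capacity unit

def read_units(item_kb, consistent):
    units = math.ceil(item_kb / ITEM_UNIT_KB)
    return units if consistent else units / 2

print(read_units(3, consistent=True))    # 1
print(read_units(3, consistent=False))   # 0.5 -> half the provisioned cost
print(read_units(9, consistent=True))    # 3
print(read_units(9, consistent=False))   # 1.5
```

This is the asymmetry the article questions: the provisioned capacity (and therefore the bill) doubles for strong consistency, whatever the underlying cost to Amazon actually is.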

Original title and link: Why DynamoDB Consistent Reads Cost Twice or What’s Wrong With Amazon’s DynamoDB Pricing? (NoSQL database©myNoSQL)


A Tour of Amazon DynamoDB Features and API

Mathias Meyer’s walk through the DynamoDB features and API with commentary:

Sorted range keys, conditional updates, atomic counters, structured data and multi-valued data types, fetching and updating single attributes, strong consistency, and no explicit way to handle and resolve conflicts other than conditions. A lot of features DynamoDB has to offer remind me of everything that’s great about wide column stores like Cassandra, but even more so of HBase. This is great in my opinion, as Dynamo would probably not be well-suited for a customer-facing system. And indeed, Werner Vogels’ post on DynamoDB seems to suggest DynamoDB is a bastard child of Dynamo and SimpleDB, though with lots of sugar sprinkled on top.
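Of the features listed, conditional updates and atomic counters are the ones standing in for explicit conflict resolution. Their semantics can be simulated in a few lines; this is an illustrative sketch of the behavior, not the DynamoDB API:

```python
# Simulated semantics of two DynamoDB features mentioned above:
# a conditional update (write succeeds only if an expected value
# still holds) and an atomic counter (server-side increment, no
# client read-modify-write cycle).

class ConditionalCheckFailed(Exception):
    pass

def update_item(table, key, attr, value, expected=None):
    item = table.setdefault(key, {})
    if expected is not None and item.get(attr) != expected:
        raise ConditionalCheckFailed(f"{attr} != {expected!r}")
    item[attr] = value

def atomic_add(table, key, attr, delta):
    item = table.setdefault(key, {})
    item[attr] = item.get(attr, 0) + delta
    return item[attr]

table = {}
update_item(table, "page#1", "owner", "alice")
atomic_add(table, "page#1", "views", 1)
atomic_add(table, "page#1", "views", 1)
print(table["page#1"])  # {'owner': 'alice', 'views': 2}

# A conditional write against a stale expectation is rejected
# instead of silently overwriting:
try:
    update_item(table, "page#1", "owner", "bob", expected="carol")
except ConditionalCheckFailed as e:
    print("rejected:", e)
```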

Think of it as an extended, better-articulated, and closer-to-the-API version of my notes about Amazon DynamoDB.

Original title and link: A Tour of Amazon DynamoDB Features and API (NoSQL database©myNoSQL)

via: http://www.paperplanes.de/2012/1/30/a-tour-of-amazons-dynamodb.html


Amazon DynamoDB Is Not Production Ready

Timothy Cardenas reports on his experience with Amazon DynamoDB and the Ruby SDK:

Problems I have had include:

  • Write capacity hanging in create mode for over an hour
  • Inability to simply count my records
  • Inability to loop through records without huge read costs
  • No asynchronous support for writing
  • Can only double read/write capacity per update
  • The Ruby SDK is written like a labyrinth, with very little ability to extend it without knowing every little detail about the rest of the library. I couldn’t even understand how a request was created, it was so convoluted.

Basically, with the Ruby client you can put data in but can’t get it out efficiently without paying a ton for beefed-up read operations.

I think that only the lack of support for async writes and the complexity of the Ruby SDK are really Amazon DynamoDB-related issues, and I assume the first one was a temporary problem. Everything else is DynamoDB’s documented behavior, which one is expected to be aware of when designing an application.

As far as I know, Amazon DynamoDB was in private beta for a while with real production users. That doesn’t mean DynamoDB will be the right solution for everyone, but that’s not the same as saying DynamoDB is not production ready.

Original title and link: Amazon DynamoDB Is Not Production Ready (NoSQL database©myNoSQL)

via: http://timcardenas.com/amazons-dynamodb-is-not-production-ready


Step-by-Step Guide to Amazon DynamoDB for .NET Developers

This tutorial is meant to get .NET developers started with Amazon DynamoDB. I will show you how to create a table and perform CRUD operations on it. Amazon DynamoDB provides a low-level API and an Object Persistence API for .NET developers. In this tutorial, we will see how to use the Object Persistence API to talk to Amazon DynamoDB. We will model a component that represents a DVD library with capabilities to add, modify, query and delete individual DVDs.
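For reference, the four operations the tutorial demos reduce to very little once the persistence plumbing is out of the way. An illustrative in-memory sketch of the CRUD surface (no AWS SDK involved; all names hypothetical):

```python
# Compact illustration of the four CRUD operations the tutorial walks
# through, against an in-memory table keyed like a DynamoDB hash key.
# Purely illustrative; the real tutorial uses the .NET Object
# Persistence API against a live DynamoDB table.

library = {}

def create(title, year):            # PutItem
    library[title] = {"title": title, "year": year}

def read(title):                    # GetItem
    return library.get(title)

def update(title, **changes):       # UpdateItem
    library[title].update(changes)

def delete(title):                  # DeleteItem
    library.pop(title, None)

create("Alien", 1977)
update("Alien", year=1979)
print(read("Alien"))   # {'title': 'Alien', 'year': 1979}
delete("Alien")
print(read("Alien"))   # None
```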

Looks like a lot of code just to demo some CRUD operations.

Original title and link: Step-by-Step Guide to Amazon DynamoDB for .NET Developers (NoSQL database©myNoSQL)

via: http://cloudstory.in/2012/02/step-by-step-guide-to-amazon-dynamodb-for-net-developers/


How Web giants store big data

A not-very-technical Ars Technica overview of the storage engines developed and used by Google (Google File System, BigTable), Amazon (Dynamo), Microsoft (Azure DFS), plus the Hadoop Distributed File System (HDFS).

Original title and link: How Web giants store big data (NoSQL database©myNoSQL)

via: http://arstechnica.com/business/news/2012/01/the-big-disk-drive-in-the-sky-how-the-giants-of-the-web-store-big-data.ars/1