> In that sense, DynamoDB is something of a curveball. It lets AWS users leverage the performance of SSDs, only as the underpinning of a new service rather than as a new IaaS feature alone. Web developers use NoSQL databases more frequently than enterprise developers, and NoSQL requires solid-state performance.
I think Derrick got this mostly wrong this time. Developers do not care about SSDs per se. What good developers care about is performance. And great developers care about predictability of performance.
There are a couple of NoSQL databases that know this very well. To give just a couple of examples, take a look at this benchmark of Riak and see what it is focusing on. Or check Riak’s Bitcask backend (here’s also a great explanation of the Bitcask paper), which guarantees a single disk seek per read. I assume you guessed the keyword behind both of these: predictability.
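To make that single-seek guarantee concrete, here is a minimal sketch of the Bitcask idea, assuming only what the paper describes: writes append to a log file, and an in-memory "keydir" maps each key to the offset and size of its latest value, so a read costs at most one seek. The class and method names here are illustrative, not Riak's actual API.

```python
import os

class BitcaskSketch:
    """Append-only log plus in-memory keydir, per the Bitcask design."""

    def __init__(self, path):
        self.f = open(path, "a+b")
        self.keydir = {}  # key -> (offset, size) of the latest value

    def put(self, key: bytes, value: bytes) -> None:
        self.f.seek(0, os.SEEK_END)   # all writes append to the log tail
        offset = self.f.tell()
        self.f.write(value)
        self.f.flush()
        self.keydir[key] = (offset, len(value))  # point at the newest copy

    def get(self, key: bytes) -> bytes:
        offset, size = self.keydir[key]  # in-RAM lookup, no disk I/O
        self.f.seek(offset)              # the single disk seek
        return self.f.read(size)
```

Whatever the workload looks like, a `get` is one hash lookup and one seek, which is exactly the kind of bound that makes latency predictable rather than merely fast.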
Amazon DynamoDB uses SSDs because:
- it wants to offer predictable low latency
- it wants to offer predictable throughput
- it wants to offer single-digit millisecond average server-side response times
- and it wants to do all these at any scale of dataset sizes and request rates
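That predictable throughput is not something DynamoDB guesses at; you provision it explicitly, in capacity units. As a rough sketch of the published arithmetic (one read capacity unit is one strongly consistent read per second of up to 4 KB, eventually consistent reads cost half, and one write capacity unit is one write per second of up to 1 KB), where the function names are mine and not part of any AWS API:

```python
import math

# DynamoDB's published capacity-unit arithmetic; function names are
# illustrative, not part of the AWS API.
READ_UNIT_KB = 4   # 1 RCU = one strongly consistent read/sec of up to 4 KB
WRITE_UNIT_KB = 1  # 1 WCU = one write/sec of up to 1 KB

def read_capacity_units(item_kb: float, reads_per_sec: int,
                        eventually_consistent: bool = False) -> int:
    """Units needed to sustain `reads_per_sec` reads of `item_kb` items."""
    units = math.ceil(item_kb / READ_UNIT_KB) * reads_per_sec
    # Eventually consistent reads cost half as much.
    return math.ceil(units / 2) if eventually_consistent else units

def write_capacity_units(item_kb: float, writes_per_sec: int) -> int:
    """Units needed to sustain `writes_per_sec` writes of `item_kb` items."""
    return math.ceil(item_kb / WRITE_UNIT_KB) * writes_per_sec
```

For example, 500 strongly consistent reads per second of 3 KB items needs 500 read capacity units, while 100 writes per second of 1.5 KB items needs 200 write capacity units. The point is that you state the throughput you want up front and the service is built to deliver it, which is predictability by contract rather than by accident.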
Hardware is a means to an end. And SSDs or not, the points above are all that matter.
Original title and link: Amazon’s DynamoDB Shows Hardware as Means to an End… Actually It’s All About Predictability (©myNoSQL)