The recent announcement of the Microsoft SQL Server 2012 release emphasized the high availability features added to this version. Here is what I could find after some digging through the documentation:
AlwaysOn Failover Cluster Instances: As part of the SQL Server AlwaysOn offering, AlwaysOn Failover Cluster Instances leverage Windows Server Failover Clustering (WSFC) functionality to provide local high availability through redundancy at the server-instance level, in the form of a failover cluster instance (FCI). An FCI is a single instance of SQL Server that is installed across WSFC nodes and, possibly, across multiple subnets. On the network, an FCI appears to be an instance of SQL Server running on a single computer, but the FCI provides failover from one WSFC node to another if the current node becomes unavailable.
This is explained in more detail on AlwaysOn Failover Cluster Instances (SQL Server).
AlwaysOn Availability Groups: The AlwaysOn Availability Groups feature is a high-availability and disaster-recovery solution that provides an enterprise-level alternative to database mirroring. Introduced in SQL Server 2012, AlwaysOn Availability Groups maximizes the availability of a set of user databases for an enterprise. An availability group supports a failover environment for a discrete set of user databases, known as availability databases, that fail over together. An availability group supports a set of read-write primary databases and one to four sets of corresponding secondary databases. Optionally, secondary databases can be made available for read-only access and/or some backup operations.
More documentation about AlwaysOn Availability Groups can be found here.
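The part of availability groups most application developers will touch is the readable secondaries, so here is a minimal sketch of how a client could route reads to one. The listener name AgListener, the Sales database, and the dbo.Orders table are hypothetical; the ApplicationIntent keyword is what asks the listener for a readable secondary replica, if one is configured.

```python
# Minimal sketch: routing reads to a readable secondary in an
# AlwaysOn Availability Group through the group listener.
# "AgListener", "Sales", and "dbo.Orders" are hypothetical names.
import pyodbc

# ApplicationIntent=ReadOnly asks the listener to route this connection
# to a readable secondary; MultiSubnetFailover speeds up reconnects when
# the FCI or availability group spans subnets.
read_conn = pyodbc.connect(
    "Driver={SQL Server Native Client 11.0};"
    "Server=tcp:AgListener,1433;"
    "Database=Sales;"
    "ApplicationIntent=ReadOnly;"
    "MultiSubnetFailover=Yes;"
    "Trusted_Connection=Yes;"
)

# Reads issued on this connection can be served by a secondary,
# offloading report-style queries from the primary.
row = read_conn.execute("SELECT COUNT(*) FROM dbo.Orders").fetchone()
print(row[0])
```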
Database mirroring: This feature is deprecated and will be removed in a future version of Microsoft SQL Server; AlwaysOn Availability Groups is the recommended replacement.
Log shipping: SQL Server log shipping allows you to automatically send transaction log backups from a primary database on a primary server instance to one or more secondary databases on separate secondary server instances.
This is the well-known master-slave setup. More details can be found here.
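To make the mechanics concrete, here is a conceptual sketch of the copy and restore halves of the cycle. This only illustrates the data flow; SQL Server actually implements it as scheduled SQL Agent backup, copy, and restore jobs. The share paths, the Sales database, and the run_sql helper are all hypothetical.

```python
# Conceptual sketch of the log-shipping cycle: backup files (.trn)
# produced on the primary are copied to the secondary and restored
# there WITH NORECOVERY so further log restores can still be applied.
import shutil
from pathlib import Path

PRIMARY_BACKUP_DIR = Path(r"\\primary\logship")    # backup job writes here
SECONDARY_COPY_DIR = Path(r"\\secondary\logship")  # copy job writes here

def copy_job():
    """Copy any new transaction-log backups to the secondary."""
    for trn in sorted(PRIMARY_BACKUP_DIR.glob("*.trn")):
        target = SECONDARY_COPY_DIR / trn.name
        if not target.exists():
            shutil.copy2(trn, target)

def restore_job(run_sql):
    """Restore each copied backup in order; NORECOVERY keeps the
    secondary in a restoring state, ready for the next log."""
    for trn in sorted(SECONDARY_COPY_DIR.glob("*.trn")):
        run_sql(f"RESTORE LOG Sales FROM DISK = '{trn}' WITH NORECOVERY;")
```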
It's also worth checking the availability of these features across the SQL Server 2012 editions.
Original title and link: Microsoft SQL Server 2012 High Availability Solutions ( ©myNoSQL)
A reminder to those thinking that networks never fail and automation can solve everything. Christina Ilvento, on behalf of the App Engine team:
The root cause of the outage was a combination of two factors during a scheduled network maintenance in one of our datacenters. As part of the scheduled maintenance, network capacity to and from this datacenter was reduced. This alone was expected, and was not a problem. However, this maintenance exposed a previously existing misconfiguration in the system that manages network bandwidth capacity.
Ordinarily, the bandwidth management system helps isolate and prioritize traffic. When capacity is reduced because of maintenance, network failure, or an excess of normal traffic, the bandwidth management system keeps things running smoothly by throttling back the rate of low-priority traffic. However, as mentioned, the bandwidth management system had a latent misconfiguration which did not show up until capacity was reduced due to the scheduled maintenance. This misconfiguration under-reported the available network capacity to and from the datacenter, causing the network modeler to believe that there was less overall capacity than actually existed.
The configuration error in the bandwidth management system, when combined with an expected reduction in capacity due to the scheduled maintenance, led the system to conclude that there was insufficient bandwidth available for current traffic demand to and from this datacenter. (In reality, there was more than sufficient excess capacity, as otherwise the maintenance would not have been allowed to go forward.) Because of this combination of misconfiguration and scheduled maintenance, a number of services were automatically blocked from sending network traffic. […]
The outage occurred because two independent systems failed at the same time; this led to mistakes in our usual escalation procedures, which significantly extended the duration of the outage.
Original title and link: Networks Never Fail ( ©myNoSQL)
The list of NoSQL databases suggested as robust to net-splits:
- Dynamo (key-value)
- Voldemort (key-value)
- Tokyo Cabinet (key-value)
- KAI (key-value)
- Cassandra (column-oriented/tabular)
- CouchDB (document-oriented)
- SimpleDB (document-oriented)
- Riak (document-oriented)
A couple of clarifications to the list above:
- Dynamo has never been available to the public. On the other hand, DynamoDB is not exactly Dynamo.
- Tokyo Cabinet is not a distributed database, so it shouldn't be on this list.
- CouchDB isn't a distributed database either, but one could argue that with its peer-to-peer replication it sits right at the border. On the other hand, there's BigCouch.
Original title and link: Which NoSQL Databases Are Robust to Net-Splits? ( ©myNoSQL)
- The cost of strong consistency to Amazon is low, if not zero. To you? 2x.
- If you were to run your own distributed database, you wouldn’t incur this cost (although you’d have to factor in hardware and ops costs).
- Offering a “consistent write” option instead would save you money and latency.
- If Amazon provided SLAs so users knew how well eventual consistency worked, users could make more informed decisions about their app requirements and DynamoDB. However, Amazon probably wouldn’t be able to charge so much for strong consistency.
It is not the first time I've heard this discussion, but it is the first time I've seen it laid out in such detail. I have no reason to defend Amazon's DynamoDB pricing strategy, but:
- Comparing the costs of operating a self-hosted highly available distributed database with those of a managed one seems out of place to me and cannot lead to a real conclusion.
- While consistent writes could be a solution for always having consistent reads, it would require Amazon to reposition DynamoDB from a highly available database to something else. Considering Amazon has always explained its rationale for building highly available systems, I find it difficult to believe this would happen.
Getting back to consistent vs. eventually consistent reads, what one needs to account for is a combination of:
- costs for cross-data-center access
- costs for maintaining the request capacity SLA
- costs for maintaining the request latency promise
- penalty costs for not meeting the service commitment
I agree, though, that it's almost impossible to estimate each of these and decide whether or not they justify the increased price of consistent reads.
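For the read side of this trade-off, here is a minimal sketch using boto3; the table and key names are hypothetical. The 2x shows up in capacity accounting: a unit of provisioned read capacity serves twice as many eventually consistent reads per second as strongly consistent ones, which is where the doubled price comes from.

```python
# Minimal sketch of the consistent vs. eventually consistent read choice
# in DynamoDB. The "users" table and "user_id" key are hypothetical.
import boto3

table = boto3.resource("dynamodb").Table("users")

# Eventually consistent read (the default): may miss a very recent write,
# but consumes half the read capacity of a strongly consistent read.
cheap = table.get_item(Key={"user_id": "42"})

# Strongly consistent read: reflects all prior successful writes,
# at double the capacity consumption (one read capacity unit serves one
# strongly consistent read per second, or two eventually consistent ones).
fresh = table.get_item(Key={"user_id": "42"}, ConsistentRead=True)

print(cheap.get("Item"), fresh.get("Item"))
```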
Original title and link: Why DynamoDB Consistent Reads Cost Twice or What’s Wrong With Amazon’s DynamoDB Pricing? ( ©myNoSQL)
Eventual and Strong Consistency, Sloppy and Strict Quorums, and Other Lessons and Thoughts on Distributed Systems
Anything I'd write would just steal from the time you could spend reading and thinking about the email Joseph Blomstedt posted to the Riak list.
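As a companion to that thread, here is a minimal toy sketch of the strict-quorum condition it discusses: with N replicas, writes acknowledged by W nodes and reads collecting R replies are guaranteed to overlap on at least one up-to-date replica whenever R + W > N. Sloppy quorums, which may count fallback nodes toward W during a net-split, trade away exactly this guarantee for availability. All names here are illustrative.

```python
# Toy model of a strict quorum over N replicas: any read quorum of size R
# must intersect any write quorum of size W when R + W > N, so a read
# always sees the latest acknowledged write.
import random

N, R, W = 3, 2, 2          # R + W > N, so read and write quorums intersect
replicas = [None] * N      # each slot holds (version, value) or None

def write(version, value):
    # A strict-quorum write must land on W of the N "home" replicas.
    for i in random.sample(range(N), W):
        replicas[i] = (version, value)

def read():
    # Any R replicas must include at least one that saw the last write;
    # picking the highest version resolves stale copies.
    answers = [replicas[i] for i in random.sample(range(N), R)]
    return max((a for a in answers if a), default=None)

write(1, "hello")
write(2, "world")
print(read())   # always (2, 'world'): the quorums are forced to overlap
```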
Original title and link: Eventual and Strong Consistency, Sloppy and Strict Quorums, and Other Lessons and Thoughts on Distributed Systems ( ©myNoSQL)