An ACM article on consistency and replication:
Causal consistency ensures that operations appear in the order the user
intuitively expects. More precisely, it enforces a partial order over
operations that agrees with the notion of potential causality. If operation
A happens before operation B, then any data center that sees operation B
must see operation A first.
Three rules define potential causality:
- Thread of execution. If a and b are two operations in a single thread of execution, then a -> b if operation a happens before operation b.
- Reads-from. If a is a write operation and b is a read operation that returns the value written by a, then a -> b.
- Transitivity. For operations a, b, and c, if a -> b and b -> c, then a -> c. Thus, the causal relationship between operations is the transitive closure of the first two rules.
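The three rules are mechanical enough to compute directly. Here is a minimal sketch (my own illustration, not from the article) that builds the happens-before relation as the transitive closure of thread-order and reads-from edges; the operation names `w1`, `w2`, `r1`, `w3` are hypothetical:

```python
from itertools import product

# Rule 1: thread of execution (program order within each thread).
# Thread A issues w1 then w2; thread B issues r1 then w3.
thread_order = [("w1", "w2"), ("r1", "w3")]

# Rule 2: reads-from (the read r1 returns the value written by w2).
reads_from = [("w2", "r1")]

# Rule 3: transitivity -- take the transitive closure of rules 1 and 2.
def causal_order(edges):
    closure = set(edges)
    changed = True
    while changed:
        changed = False
        for (a, b), (c, d) in product(list(closure), repeat=2):
            if b == c and (a, d) not in closure:
                closure.add((a, d))
                changed = True
    return closure

happens_before = causal_order(thread_order + reads_from)

# w1 -> w3 holds even though the two writes ran on different threads:
# w1 -> w2 (thread order), w2 -> r1 (reads-from), r1 -> w3 (thread order).
print(("w1", "w3") in happens_before)  # True
```

The interesting output is the cross-thread pair: causality "leaks" from one client to another through the reads-from edge, which is exactly why the relation is a partial order over all operations rather than a per-thread total order.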
Causal consistency ensures that operations appear in an order that agrees
with these rules. This makes users happy because their operations are
applied everywhere in the order they intended. It makes programmers happy
because they no longer have to reason about out-of-order operations.
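One common way replicated stores enforce this (a sketch under my own assumptions, not the article's code) is causal delivery: each replicated write carries the ids of the writes it causally depends on, and a receiving replica buffers a write until all of its dependencies have been applied locally. The `Replica` class and the photo/comment example are hypothetical:

```python
class Replica:
    """A data-center replica that applies writes in causal order."""

    def __init__(self):
        self.applied = set()  # ids of writes already applied locally
        self.buffer = []      # (write_id, deps, key, value) awaiting deps
        self.store = {}

    def receive(self, write_id, deps, key, value):
        # Buffer the incoming write, then apply everything whose
        # causal dependencies are now satisfied.
        self.buffer.append((write_id, deps, key, value))
        self._drain()

    def _drain(self):
        progress = True
        while progress:
            progress = False
            for entry in list(self.buffer):
                write_id, deps, key, value = entry
                if all(d in self.applied for d in deps):
                    self.store[key] = value
                    self.applied.add(write_id)
                    self.buffer.remove(entry)
                    progress = True

# Write B (a comment) depends on write A (the photo it comments on).
# Even if B arrives at this replica first, A is applied first:
r = Replica()
r.receive("B", deps=["A"], key="comment", value="nice photo!")
assert "comment" not in r.store           # B waits for its dependency
r.receive("A", deps=[], key="photo", value="beach.jpg")
assert r.store["comment"] == "nice photo!"  # now both are applied
```

The point of the sketch is the guarantee stated above: no replica ever exposes operation B without having already applied operation A when A -> B.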
I read this article in the small hours and I’m still digesting it, but my first impression is that it describes a single-client ordering of operations:
- Is serializability of single-thread operations always necessary? I think it is useful only when the data sets the operations affect actually overlap (a non-empty intersection)
- The “Reads-from” rule seems to need more assumptions (e.g., that both operations originate from the same client)
- What’s the connection between these rules (which dictate a single-client ordering of operations) and replication?
Original title and link: Don’t Settle for Eventual Consistency - Causal consistency