TL;DR — PACELC refines the classic CAP theorem by adding a latency dimension: when a network partition isn’t present, systems still choose between consistency and latency. Understanding the theorem helps engineers make explicit trade‑offs that match application requirements.

Distributed databases are rarely “one‑size‑fits‑all.” They must balance three competing forces—consistency, availability, and latency—and the PACELC theorem gives a concise framework for reasoning about those forces. In this article we unpack the theorem, compare it to CAP, examine real‑world systems that embody its trade‑offs, and walk through practical design decisions you can apply today.

The Origins of PACELC

The CAP theorem, conjectured by Eric Brewer in 2000 and formally proved by Gilbert and Lynch in 2002, states that a distributed system cannot provide all three of the following guarantees at once; in practice, when a network partition occurs, it must give up either consistency or availability:

  • Consistency – all nodes see the same data at the same time.
  • Availability – every request receives a response (success or failure).
  • Partition tolerance – the system continues operating despite arbitrary message loss.

While CAP clarified the impossibility of achieving all three guarantees simultaneously, it says nothing about behavior when no partition exists. In the real world, partitions are rare compared with the everyday latency costs of coordinating replicas. Recognizing this gap, Daniel Abadi proposed PACELC in a 2010 blog post and formalized it in his 2012 IEEE Computer article, “Consistency Tradeoffs in Modern Distributed Database System Design.” The acronym expands to:

  • P – Partition tolerance (same as CAP).
  • A – Availability.
  • C – Consistency.
  • E – Else (i.e., when there is no partition).
  • L – Latency.
  • C – Consistency.

Formally, PACELC says:

If a partition occurs, a system must choose between Availability and Consistency (the classic CAP trade‑off). Otherwise, it must choose between Latency and Consistency (the “EL” trade‑off).

The theorem forces architects to think about two binary decisions instead of a single one, making the design space clearer.

Breaking Down the Theorem

The P, A, C components

Property | Meaning during a partition | Typical implementation
--- | --- | ---
P – Partition tolerance | The system continues to operate despite lost messages. | Replication across datacenters, quorum protocols.
A – Availability | Every request receives a response, even if it may be stale. | “Write‑anywhere, read‑anywhere” models, eventual consistency.
C – Consistency | All nodes return the same value for a given key. | Strong consistency, linearizable reads/writes.

When a partition occurs, you must pick A or C. For instance, Cassandra opts for AP (availability) by allowing writes on any node and reconciling later, while Google Spanner chooses CP (consistency) by halting operations that cannot be safely coordinated.

The E, L, C components

Property | Meaning when no partition exists | Typical implementation
--- | --- | ---
E – Else (no partition) | The system is fully connected; the trade‑off shifts. | Normal operation mode.
L – Latency | How quickly a request completes. | Synchronous quorum reads/writes, single‑leader round‑trips.
C – Consistency | Same as above, but the decision is now whether to sacrifice latency for stronger guarantees. | Read‑after‑write guarantees, lock‑step replication.

In the “EL” scenario, we decide whether to favor low latency (L) or strong consistency (C). A system that always offers the strongest consistency will typically incur higher latency because it must coordinate across replicas before responding. Conversely, a system that optimizes for latency may return a value that is not yet fully replicated, introducing temporary inconsistency.
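
To make the coordination cost concrete, the toy simulation below (not modeled on any particular database) queries five fake replicas with random network delays. Returning after the first response is the L choice; waiting for a majority is the C choice.

import asyncio
import random
import time

REPLICAS = 5  # hypothetical cluster size

async def replica_read(i: int) -> str:
    # Simulate one replica answering after a random network delay.
    await asyncio.sleep(random.uniform(0.005, 0.100))
    return f"value@replica{i}"

async def read(quorum: bool) -> str:
    # Return after one response (latency-first) or a majority (consistency-first).
    tasks = [asyncio.ensure_future(replica_read(i)) for i in range(REPLICAS)]
    needed = REPLICAS // 2 + 1 if quorum else 1
    results = []
    for fut in asyncio.as_completed(tasks):
        results.append(await fut)
        if len(results) >= needed:
            break
    for t in tasks:  # stragglers are no longer needed
        t.cancel()
    return results[0]

async def main():
    for quorum in (False, True):
        start = time.perf_counter()
        await read(quorum)
        ms = (time.perf_counter() - start) * 1000
        print(f"quorum={quorum}: responded in {ms:.1f} ms")

asyncio.run(main())

On a typical run the quorum read waits for the third‑fastest replica instead of the fastest, which is precisely the coordination latency the EL trade‑off describes.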

PACELC vs. CAP: A Visual Comparison

graph TD
    CAP[CAP] -->|Partition| AorC[Choose A or C]
    PACELC[PACELC] -->|Partition| AorC
    PACELC -->|No Partition| LorC[Choose L or C]

During a partition both theorems converge on the A vs. C decision.
When the network is healthy PACELC adds the L vs. C decision, which CAP silently assumes but never articulates.

Real‑World Systems and Their Choices

1. Amazon DynamoDB (AP‑EL)

  • Partition mode (P): DynamoDB is designed for high availability; it keeps accepting writes during disruptions, replicating them across availability zones and reconciling as needed.
  • Else mode (EL): It offers eventual consistency by default (low latency) but provides an optional strongly consistent read flag that adds a round‑trip to the leader, increasing latency.

DynamoDB’s design illustrates a classic AP‑EL system: it sacrifices consistency during partitions, and when the cluster is healthy it still leans toward low latency, only paying the latency cost when the client explicitly requests consistency.
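
With boto3, the AWS SDK for Python, that flag is a single parameter on GetItem; the table name and key below are hypothetical:

import boto3

dynamodb = boto3.client("dynamodb", region_name="us-east-1")

def get_item(item_id: str, strong: bool = False) -> dict:
    # ConsistentRead=False (the default) is the low-latency EL path;
    # True routes the read through the leader for stronger consistency.
    response = dynamodb.get_item(
        TableName="items",            # hypothetical table
        Key={"id": {"S": item_id}},
        ConsistentRead=strong,
    )
    return response.get("Item", {})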

2. Google Spanner (CP‑EC)

  • Partition mode (P): Spanner halts operations that cannot be safely coordinated across its synchronously replicated Paxos groups, effectively choosing Consistency over Availability.
  • Else mode (EC): Even without partitions, Spanner enforces external consistency using TrueTime, a clock API with tightly bounded uncertainty backed by GPS and atomic clocks. The resulting commit wait adds measurable latency (on the order of a few milliseconds per transaction) but guarantees externally consistent, serializable transactions.

Spanner embodies CP‑EC, favoring consistency at both levels. Its latency cost is justified for financial or inventory systems where stale data is unacceptable.
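
The mechanism behind that latency is commit wait: a transaction is held until its timestamp is guaranteed to be in the past on every node’s clock. Here is a toy sketch; EPSILON_MS is an assumed uncertainty bound, not a real Spanner figure:

import time

EPSILON_MS = 5  # assumed clock-uncertainty bound; real TrueTime values vary

def commit_wait(commit_ts_ms: float) -> None:
    # Hold the commit until commit_ts is definitely in the past on all
    # clocks, i.e. until now() - EPSILON_MS > commit_ts.
    target = commit_ts_ms + EPSILON_MS
    now_ms = time.time() * 1000
    if now_ms < target:
        time.sleep((target - now_ms) / 1000)

Keeping that wait short is exactly why Spanner backs TrueTime with GPS receivers and atomic clocks.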

3. Apache Cassandra (AP‑EL)

  • Partition mode (P): Writes succeed on any node; read repair resolves conflicts later.
  • Else mode (EL): By default reads are low‑latency and may return older versions; you can request a QUORUM read to achieve stronger consistency at the expense of latency.

Cassandra’s tunable consistency lets developers slide along the EL axis by choosing per‑request consistency levels such as ONE, QUORUM, or ALL on reads and writes.

4. CockroachDB (CP‑EC)

  • Partition mode (P): Uses Raft consensus; if a quorum cannot be reached, affected writes stall or abort rather than diverge, preserving consistency.
  • Else mode (EC): Transactions are serializable by default; the system still incurs extra latency for the Raft log replication.

CockroachDB mirrors Spanner’s CP‑EC stance but with an open‑source stack.
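
A practical consequence of the EC choice is that clients see serialization conflicts rather than stale data, and CockroachDB’s documentation recommends retrying SQLSTATE 40001 errors. A minimal sketch using psycopg2 (CockroachDB speaks the PostgreSQL wire protocol; the DSN and schema here are hypothetical):

import psycopg2
from psycopg2 import errors

# Hypothetical DSN and schema for the example.
conn = psycopg2.connect("postgresql://root@localhost:26257/bank")

def transfer(src: str, dst: str, amount: int, retries: int = 3) -> None:
    # Transactions are SERIALIZABLE by default; conflicting ones fail
    # with SQLSTATE 40001 and should simply be retried.
    for _ in range(retries):
        try:
            with conn, conn.cursor() as cur:
                cur.execute(
                    "UPDATE accounts SET balance = balance - %s WHERE id = %s",
                    (amount, src),
                )
                cur.execute(
                    "UPDATE accounts SET balance = balance + %s WHERE id = %s",
                    (amount, dst),
                )
            return  # the with-block committed successfully
        except errors.SerializationFailure:
            continue  # the with-block already rolled back; try again
    raise RuntimeError("transfer failed after retries")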

Designing for Desired Consistency

When you start a new project, ask yourself three concrete questions that map directly to the PACELC axes:

  1. What is the cost of a stale read?
    If a user sees an outdated price, does it cause revenue loss?
  2. What is the maximum tolerable latency?
    Is 100 ms acceptable for a mobile UI?
  3. How likely are network partitions in your deployment?
    Are you running across multiple cloud regions with high inter‑datacenter latency?

Based on the answers, you can position your system on the PACELC diagram.
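
As a deliberately oversimplified illustration, the three answers can be folded into a quadrant label; the thresholds and field names below are invented for the example:

from dataclasses import dataclass

@dataclass
class Requirements:
    stale_reads_costly: bool   # question 1: does a stale read hurt the business?
    max_latency_ms: int        # question 2: the latency budget
    partitions_likely: bool    # question 3: e.g., a multi-region deployment

def pacelc_position(req: Requirements) -> str:
    # During a partition: protect the data (CP) or stay responsive (AP)?
    partition_choice = "CP" if req.stale_reads_costly else "AP"
    # Otherwise: a generous latency budget makes strong consistency affordable.
    else_choice = "EC" if req.stale_reads_costly and req.max_latency_ms >= 100 else "EL"
    # Partition likelihood decides how often the first choice actually matters.
    note = "critical" if req.partitions_likely else "mostly theoretical"
    return f"{partition_choice}-{else_choice} (partition choice is {note})"

print(pacelc_position(Requirements(False, 20, False)))  # AP-EL: gaming leaderboard
print(pacelc_position(Requirements(True, 200, True)))   # CP-EC: payments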

Decision Matrix

Scenario | Partition tolerance needed? | Desired latency | Consistency requirement | Recommended trade‑off
--- | --- | --- | --- | ---
Real‑time gaming leaderboard | Low (players in same region) | < 20 ms | Slightly stale acceptable | AP‑EL (e.g., Redis Cluster with eventual consistency)
Financial transaction processing | High (regulatory) | ≤ 200 ms | Strong consistency | CP‑EC (e.g., Spanner, CockroachDB)
Social media feed | Medium (global) | < 100 ms | Eventual consistency fine | AP‑EL with optional strong reads
IoT sensor aggregation | High (edge devices) | Variable | Consistency optional | AP‑EL with lightweight edge caches

Tuning Consistency in Practice

Many databases expose consistency levels that let you slide between the EL extremes at runtime. Below is a short Python snippet using the cassandra-driver to demonstrate a dynamic choice:

from cassandra.cluster import Cluster
from cassandra.query import SimpleStatement
from cassandra import ConsistencyLevel

cluster = Cluster(['127.0.0.1'])
session = cluster.connect('mykeyspace')

def write_item(key, value, strong=False):
    """
    Write with either QUORUM (strong) or ANY (weak) consistency.
    The driver takes the consistency level on the statement itself,
    not as a keyword argument to session.execute().
    """
    level = ConsistencyLevel.QUORUM if strong else ConsistencyLevel.ANY
    statement = SimpleStatement(
        "INSERT INTO items (id, data) VALUES (%s, %s)",
        consistency_level=level,
    )
    session.execute(statement, (key, value))
    print(f'Wrote {"strongly" if strong else "weakly"} consistent data.')

def read_item(key, strong=False):
    # ANY is valid only for writes, so weak reads use ONE instead.
    level = ConsistencyLevel.QUORUM if strong else ConsistencyLevel.ONE
    statement = SimpleStatement(
        "SELECT data FROM items WHERE id=%s",
        consistency_level=level,
    )
    row = session.execute(statement, (key,)).one()
    return row.data if row else None

By toggling strong, the application explicitly decides whether to pay the latency cost of a quorum read/write (choosing C on the EL axis) or accept a faster, potentially stale response (choosing L).
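
A hypothetical usage pattern, with made‑up keys: browsing traffic takes the fast path while checkout pays for certainty.

# Browsing tolerates staleness; charging a card does not.
write_item("sku-123", "in-stock", strong=True)   # quorum write from the inventory service
cached = read_item("sku-123")                    # fast, possibly stale read for product pages
verified = read_item("sku-123", strong=True)     # quorum read just before checkout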

Key Takeaways

  • PACELC extends CAP by adding a latency‑consistency trade‑off for the “else” case when no network partition exists.
  • During a partition you must choose between Availability and Consistency (the classic CAP choice).
  • When the network is healthy you must choose between Low latency and Strong consistency; this decision often dominates real‑world performance.
  • Real systems illustrate distinct points on the PACELC diagram: DynamoDB (AP‑EL), Spanner (CP‑EC), Cassandra (AP‑EL with tunable consistency), CockroachDB (CP‑EC).
  • Design decisions should be driven by concrete business requirements: tolerance for stale data, latency SLAs, and the expected frequency of partitions.
  • Many databases expose configurable consistency levels, allowing you to shift along the EL axis at runtime, which is a practical way to apply PACELC principles without rebuilding your stack.

Further Reading