TL;DR — Tuning the write‑ahead log (WAL) can increase transaction throughput by 2‑3× without sacrificing durability. Focus on batch size, sync policy, checkpoint intervals, and hardware‑aware settings, then validate with realistic load tests.

Modern databases rely on a write‑ahead log to guarantee atomicity and durability. When the WAL becomes a bottleneck, even the most sophisticated query optimizer cannot keep up. This article walks through the internals of WAL, the knobs you can turn, and the testing methodology you need to achieve high‑throughput performance while keeping data safe.

Understanding Write‑Ahead Logging

What is WAL?

A write‑ahead log records every change to the database before the data pages are flushed to their final location. The process is typically:

  1. Transaction modifies data pages in memory.
  2. Changes are appended to the WAL buffer.
  3. The buffer is flushed to durable storage according to the sync policy.
  4. Later, a checkpoint copies dirty pages from the buffer cache to the data files.

Because the log is sequential, most storage subsystems can write it much faster than random page updates. The guarantee is that, after a crash, the system can replay the log to reconstruct a consistent state.
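
To make the sequence concrete, here is a toy sketch in Python (not any real engine's log format): each change is appended to the log and fsynced before the in-memory "table" is touched, and replay() rebuilds a consistent state after a crash.

import json
import os

class TinyWAL:
    """Toy write-ahead log: append and fsync a record, only then apply the change."""

    def __init__(self, path):
        self.path = path
        self.log = open(path, "a", encoding="utf-8")

    def commit(self, key, value, table):
        record = json.dumps({"key": key, "value": value})
        self.log.write(record + "\n")      # append the change to the log
        self.log.flush()
        os.fsync(self.log.fileno())        # force it to durable storage before acknowledging
        table[key] = value                 # apply the change to the in-memory "data page"

    def replay(self, table):
        """After a crash, re-apply every logged record to rebuild a consistent state."""
        with open(self.path, encoding="utf-8") as f:
            for line in f:
                record = json.loads(line)
                table[record["key"]] = record["value"]

Replay is idempotent here because each record simply overwrites its key, mirroring how physical redo records can safely be applied more than once.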

Why WAL matters for performance

While WAL protects against loss, it also introduces latency:

  • fsync calls force the OS to flush buffers to the physical medium, often costing several milliseconds on spinning disks.
  • Checkpointing creates I/O bursts as dirty pages are written back.
  • Log size determines how much data must be read during recovery; a massive log slows down startup.

Optimizing these aspects can turn a WAL‑limited workload into a CPU‑bound one, unlocking the full potential of modern NVMe storage.

Core Parameters That Influence Throughput

Sync Policy (fsync, fdatasync, O_DSYNC)

The sync policy determines how often the OS is asked to persist the log. Common options:

Policy               Guarantees                                   Typical Latency
fsync                Full metadata + data flush                   3–8 ms on HDD, 0.5–1 ms on SSD
fdatasync            Data flush only (metadata may be delayed)    Slightly lower than fsync
O_DSYNC (open flag)  Flush on every write() call                  Similar to fsync, but combines well with batching

Practical tip: Use fdatasync or O_DSYNC together with a group commit strategy (see later) to reduce the number of syscalls.

# Example: mounting a filesystem with data=ordered (default) and barrier=1
mount -o data=ordered,barrier=1 /dev/nvme0n1 /var/lib/postgresql
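
To quantify what these sync policies cost on your own hardware, a short micro-benchmark is often more convincing than the table above. The sketch below is a rough probe, not a database: the file path and iteration count are arbitrary choices, and os.fdatasync is only available on Unix-like systems.

import os
import time

def time_sync(sync_fn, path="/var/tmp/wal_probe.bin", iterations=200):
    """Append an 8 KB record and force it to disk, returning mean latency in ms."""
    record = b"x" * 8192
    fd = os.open(path, os.O_WRONLY | os.O_CREAT | os.O_APPEND, 0o600)
    try:
        start = time.perf_counter()
        for _ in range(iterations):
            os.write(fd, record)
            sync_fn(fd)                     # os.fsync or os.fdatasync
        return (time.perf_counter() - start) / iterations * 1000
    finally:
        os.close(fd)
        os.unlink(path)

print("fsync:     %.2f ms" % time_sync(os.fsync))
print("fdatasync: %.2f ms" % time_sync(os.fdatasync))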

Log Buffer Size and Batching

A larger in‑memory WAL buffer lets the database accumulate many transactions before issuing a flush. The trade‑off is memory consumption and potential data loss in a power failure. Most engines expose a setting for this: wal_buffers in PostgreSQL or innodb_log_buffer_size in MySQL/InnoDB.

# PostgreSQL example (postgresql.conf)
wal_buffers = 64MB              # default is -1 (auto-tuned, typically 16MB)
wal_writer_delay = 200ms        # how often the WAL writer process flushes

Guideline: Size the buffer to absorb the WAL generated between flushes, not just a single transaction. For a workload writing about 8 KB of WAL per transaction, a 64 MB buffer holds roughly 8,000 transactions before a flush is forced, dramatically reducing sync frequency.
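
The arithmetic behind that guideline is simple enough to script. The helper below is illustrative only; the 8 KB-per-transaction figure is just the example from the text.

def transactions_per_flush(buffer_bytes, avg_wal_bytes_per_tx):
    """Rough number of transactions the WAL buffer can absorb before a flush is forced."""
    return buffer_bytes // avg_wal_bytes_per_tx

# 64 MB buffer, ~8 KB of WAL per transaction -> roughly 8,000 transactions per flush
print(transactions_per_flush(64 * 1024**2, 8 * 1024))   # 8192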

Checkpoint Frequency

Checkpoints force dirty pages to be written to the data files, which can stall the WAL if the checkpoint thread competes for I/O. Two parameters are key:

  • checkpoint_timeout – maximum time between checkpoints.
  • checkpoint_completion_target – fraction of that interval over which the checkpoint's writes are spread.

# PostgreSQL checkpoint tuning
checkpoint_timeout = 5min
checkpoint_completion_target = 0.9   # spread the work over 90% of the interval
max_wal_size = 4GB
min_wal_size = 1GB

By increasing checkpoint_timeout and raising max_wal_size, you let the WAL grow larger, reducing the frequency of costly checkpoints. However, a very large WAL prolongs recovery time, so balance these settings against your recovery‑time SLA.
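
A back-of-the-envelope model makes that balance easier to reason about. The sketch below is an estimate, not a measurement: the dirty-page volume and the ~200 MB/s replay speed are assumptions you should replace with numbers observed on your own system.

def checkpoint_write_rate(dirty_bytes, timeout_s, completion_target):
    """I/O rate (bytes/s) needed to spread a checkpoint over its allotted window."""
    return dirty_bytes / (timeout_s * completion_target)

def worst_case_recovery(max_wal_bytes, replay_bytes_per_s):
    """Crash-recovery time if the entire WAL since the last checkpoint must be replayed."""
    return max_wal_bytes / replay_bytes_per_s

# 2 GB of dirty pages, checkpoint_timeout = 5min, completion_target = 0.9
print("checkpoint rate: %.0f MB/s" % (checkpoint_write_rate(2 * 1024**3, 300, 0.9) / 1024**2))
# max_wal_size = 4GB replayed at an assumed ~200 MB/s
print("worst-case recovery: %.0f s" % worst_case_recovery(4 * 1024**3, 200 * 1024**2))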

Hardware & OS Tuning

Disk Subsystem Choices (SSD, NVMe, RAID)

  • NVMe drives provide sub‑millisecond latency and high IOPS, making them ideal for WAL.
  • RAID 10 offers both redundancy and performance; RAID 5/6 can become a write bottleneck due to parity calculations.
  • A battery‑backed write cache (BBWC) on a RAID controller, or a power‑loss‑protected cache on enterprise SSDs, can absorb sync calls, but be sure the cache really is protected against power loss.

Rule of thumb: Aim for at least 2 GB/s sequential write bandwidth for the WAL volume when targeting >100k TPS (transactions per second).
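
That rule of thumb follows from simple arithmetic. The sketch below is only an estimate: the 8 KB of WAL per transaction and the 2.5× headroom factor (covering full-page writes after checkpoints and burst traffic) are assumptions to adjust for your workload.

def wal_bandwidth(tps, wal_bytes_per_tx, headroom=2.5):
    """Rough sequential-write bandwidth (bytes/s) the WAL volume should sustain."""
    return tps * wal_bytes_per_tx * headroom

# 100k TPS at ~8 KB of WAL each -> ~0.8 GB/s raw, ~2 GB/s with headroom
print("%.1f GB/s" % (wal_bandwidth(100_000, 8 * 1024) / 1e9))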

Filesystem Mount Options

Modern Linux filesystems have options that affect write ordering:

  • noatime – avoid updating access timestamps.
  • data=writeback – journals only metadata and does not order data writes against it, which can improve throughput but may break crash consistency unless the DB handles it.
  • barrier=0 – disables write barriers; only safe with hardware that guarantees write ordering (e.g., BBWC).

# Example mount for an NVMe device dedicated to WAL
mount -o rw,noatime,barrier=1 /dev/nvme1n1 /var/lib/postgresql/wal

Kernel Write‑back Settings

Adjust the Linux VM dirty ratios to control when the kernel flushes dirty pages:

# /etc/sysctl.conf additions
vm.dirty_background_ratio = 5   # start flushing at 5% dirty
vm.dirty_ratio = 15            # force writeback at 15%
vm.dirty_writeback_centisecs = 500   # 5 seconds between writeback cycles

Lower ratios keep the writeback queue short, preventing large spikes that could compete with WAL writes.

Application‑Level Techniques

Group Commit

Group commit batches multiple transactions into a single fsync. The database's WAL writer collects pending WAL records and issues one sync call every wal_writer_delay milliseconds (configurable). A simplified implementation looks like:

import os
import time

def group_commit(wal_buffer, delay_ms=200):
    """Flush buffered WAL entries to disk in periodic batches."""
    while True:
        time.sleep(delay_ms / 1000.0)
        if wal_buffer.tell() > 0:              # pending, not-yet-durable records
            wal_buffer.flush()                 # push the userspace buffer to the OS
            os.fsync(wal_buffer.fileno())      # force the OS to persist the batch
            wal_buffer.seek(0)
            wal_buffer.truncate(0)             # reset the staging buffer after the sync

By increasing wal_writer_delay from the default 200 ms to 500 ms, you cut the number of sync calls by more than half, at the cost of up to a few hundred extra milliseconds before recent commits are durable—acceptable for bulk ingestion workloads.

Asynchronous Replication

If your architecture includes streaming replication, consider decoupling the primary’s WAL sync from the replica’s acknowledgment. Setting synchronous_commit = local on a PostgreSQL primary lets it acknowledge a transaction as soon as the WAL is flushed locally, without waiting for any standby, while replicas pull the log asynchronously.

Caution: Giving up synchronous replication means the most recent transactions can be lost if the primary fails and you promote a replica that has not yet received them. Use it only when the business can tolerate that window.
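
If you do relax synchronous replication, it is worth watching how far the replicas actually trail the primary, since that distance is your exposure on failover. A minimal sketch, assuming the psycopg2 driver and an illustrative connection string; pg_stat_replication is a standard PostgreSQL view.

import psycopg2  # assumes the psycopg2 driver is installed

conn = psycopg2.connect("dbname=mydb host=primary.example.com user=monitor")  # illustrative DSN
with conn.cursor() as cur:
    cur.execute("""
        SELECT application_name,
               state,
               pg_wal_lsn_diff(pg_current_wal_lsn(), replay_lsn) AS replay_lag_bytes
        FROM pg_stat_replication
    """)
    for name, state, lag in cur.fetchall():
        # Bytes not yet replayed on the standby = data at risk if you fail over right now
        print(f"{name}: {state}, replay lag {lag or 0} bytes")
conn.close()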

Compression and Encryption Trade‑offs

Compressing the WAL reduces the disk bandwidth and space it consumes but adds CPU overhead. Modern CPUs with AES‑NI and SIMD can encrypt and compress at close to line rate, but you must benchmark:

# Compress full-page images in the WAL with LZ4 (PostgreSQL 15+)
wal_compression = lz4

Encrypting the WAL is advisable for compliance, but PostgreSQL has no built‑in WAL‑encryption parameter, so it is typically handled at the filesystem or block‑device layer (e.g., LUKS); measure that layer’s added latency as well.
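
Before enabling either feature, measure the ratio-versus-CPU trade-off on data that resembles your own pages. The sketch below uses zlib from the standard library purely as a stand-in (benchmark lz4 or zstd bindings the same way if they are available), and the synthetic sample is far more compressible than real WAL.

import time
import zlib  # stdlib stand-in; swap in lz4/zstd bindings for a realistic comparison

def compression_tradeoff(sample: bytes, level: int = 1):
    """Return (compression ratio, CPU seconds per MB) for a WAL-like byte sample."""
    start = time.perf_counter()
    compressed = zlib.compress(sample, level)
    elapsed = time.perf_counter() - start
    return len(sample) / len(compressed), elapsed / (len(sample) / 1024**2)

sample = (b"page-image " * 1024)[:8192] * 2048   # 16 MB of synthetic page-like data
ratio, sec_per_mb = compression_tradeoff(sample)
print(f"ratio {ratio:.1f}x, {sec_per_mb * 1000:.1f} ms of CPU per MB")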

Monitoring and Testing

Metrics to Track

Metric               Source              Desired Range
wal_write_latency    DB stats            < 2 ms
wal_sync_rate        OS iostat           > 10 GB/s on NVMe
checkpoint_duration  DB logs             < 1 s for a 1 GB checkpoint
recovery_time        Post-restart logs   < 30 s for 10 GB of WAL

Prometheus exporters for PostgreSQL and MySQL expose comparable metrics out of the box.
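
If you prefer to pull the numbers directly, PostgreSQL 14+ exposes cumulative WAL statistics in the pg_stat_wal view (the timing columns are populated only when track_wal_io_timing = on). The sketch below assumes the psycopg2 driver and an illustrative DSN.

import psycopg2  # assumes the psycopg2 driver is installed

conn = psycopg2.connect("dbname=mydb")   # illustrative DSN
with conn.cursor() as cur:
    cur.execute("""
        SELECT wal_bytes, wal_write, wal_sync, wal_write_time, wal_sync_time
        FROM pg_stat_wal
    """)
    wal_bytes, writes, syncs, write_ms, sync_ms = cur.fetchone()
    if syncs:
        print(f"avg sync latency: {sync_ms / syncs:.2f} ms over {syncs} syncs")
    print(f"WAL generated since stats reset: {wal_bytes / 1024**2:.0f} MB")
conn.close()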

Load‑Testing Tools

  • pgbench – native PostgreSQL benchmark, can be scripted to generate mixed read/write workloads.
  • sysbench – supports MySQL and can be tuned for transaction rate.
  • fio – synthetic I/O generator to stress the WAL device directly.

A typical test sequence:

# 1. Warm up the buffer cache
pgbench -c 10 -j 4 -T 60 mydb

# 2. Capture I/O stats in the background for the duration of the run
iostat -x 1 > iostat.log &

# 3. Run the measured write workload (-N skips teller/branch updates to reduce lock contention)
pgbench -c 100 -j 16 -T 300 -N mydb > bench.log

Collect the latency figures from bench.log (add pgbench’s -l option if you want per‑transaction logs for a full histogram), compare them against the baseline, and iterate on the configuration until you hit your target throughput.
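
A small parser keeps the comparison honest across runs. The sketch below assumes the usual pgbench summary lines ("tps = ..." and "latency average = ... ms") appear in bench.log; adapt the patterns if your version prints a different summary.

import re

def parse_pgbench(log_path):
    """Pull the headline throughput and average latency from a pgbench summary log."""
    text = open(log_path, encoding="utf-8").read()
    tps = re.search(r"tps = ([\d.]+)", text)
    lat = re.search(r"latency average = ([\d.]+) ms", text)
    return (float(tps.group(1)) if tps else None,
            float(lat.group(1)) if lat else None)

tps, latency = parse_pgbench("bench.log")
print(f"throughput: {tps} tps, average latency: {latency} ms")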

Key Takeaways

  • Batching matters: Increase WAL buffer size and use group commit to reduce sync frequency.
  • Tune sync policy: Prefer fdatasync or O_DSYNC combined with delayed background flushes.
  • Space out checkpoints: Use longer checkpoint_timeout and a high checkpoint_completion_target to avoid I/O spikes.
  • Match hardware to workload: NVMe with low‑latency write cache delivers the best WAL performance; avoid RAID levels that penalize writes.
  • Monitor continuously: Track WAL latency, sync rate, and checkpoint duration; adjust settings before they become bottlenecks.

Further Reading