The surface code is the leading candidate for practical quantum error correction, favored for its high error threshold (approximately 1% per physical gate), its requirement of only nearest-neighbor qubit connectivity (compatible with planar chip layouts), and its relatively simple syndrome extraction circuits. It arranges physical qubits on a 2D square lattice in a checkerboard pattern of "data" qubits (which store the logical information) and "measure" qubits (which detect errors through stabilizer measurements).
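
As a concrete picture of that layout, here is a minimal Python sketch that enumerates qubit coordinates for a small unrotated planar surface code; the (2d-1) x (2d-1) grid, the parity rule for placing data qubits, and the row-based split between X-type and Z-type checks are one common convention, assumed here purely for illustration.

```python
def surface_code_layout(d):
    """One common layout for a distance-d (unrotated) planar surface code:
    a (2d-1) x (2d-1) grid where data qubits sit at sites with even
    row+column, and measure qubits fill the remaining sites, split into
    X-type and Z-type checks by row parity (a convention, not the only one)."""
    data, meas_x, meas_z = [], [], []
    for r in range(2 * d - 1):
        for c in range(2 * d - 1):
            if (r + c) % 2 == 0:
                data.append((r, c))        # holds logical information
            elif r % 2 == 1:
                meas_x.append((r, c))      # checks X parity of its neighbors
            else:
                meas_z.append((r, c))      # checks Z parity of its neighbors
    return data, meas_x, meas_z

data, mx, mz = surface_code_layout(3)
print(len(data), len(mx), len(mz))         # 13 data qubits, 6 X checks, 6 Z checks
```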

In the surface code, X-type and Z-type stabilizers are interleaved across the lattice, and all of them are measured in every error-correction round. Each stabilizer checks the parity of a small group of neighboring data qubits: Z-type stabilizers detect bit-flip (X) errors and X-type stabilizers detect phase-flip (Z) errors, so an error on a data qubit changes the outcomes of the adjacent checks. A classical decoder processes this syndrome data in real time to identify the most likely error pattern and apply corrections. The code distance d (roughly the linear size of the lattice) determines how many errors can be corrected: a distance-d surface code can correct any error affecting up to ⌊(d-1)/2⌋ qubits.
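
To see how an error shows up in the syndrome, the sketch below (same grid convention as the layout above, with Z-type checks at even rows and odd columns) flips a single data qubit and reports which Z checks see odd parity; the function name and coordinate convention are illustrative assumptions.

```python
def z_syndrome(d, flipped_data_qubits):
    """Toy syndrome extraction on a (2d-1) x (2d-1) planar surface code grid.
    Given the set of data qubits hit by X (bit-flip) errors, return the
    Z-type checks (even row, odd column in this convention) whose
    neighboring data qubits have odd parity."""
    flipped = set(flipped_data_qubits)
    lit_checks = []
    for r in range(0, 2 * d - 1, 2):           # even rows hold Z-type checks
        for c in range(1, 2 * d - 1, 2):       # odd columns
            neighbors = [(r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)]
            if sum(n in flipped for n in neighbors) % 2 == 1:
                lit_checks.append((r, c))
    return lit_checks

# A single bit flip on the data qubit at (2, 2) lights up the two Z checks
# beside it; a decoder would pair such detection events to infer the error.
print(z_syndrome(3, {(2, 2)}))                 # [(2, 1), (2, 3)]
```

In practice the decoder (minimum-weight perfect matching is the standard choice) works on many rounds of such detection events at once, since the stabilizer measurements themselves can also fail.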

The surface code's ~1% error threshold means that if physical error rates are below that value, increasing the code distance suppresses the logical error rate exponentially in d. Current leading processors from Google, IBM, and Quantinuum report two-qubit error rates of roughly 0.1-0.5%, placing them below threshold and enabling functional error correction. The main drawbacks of the surface code are its high overhead (thousands of physical qubits per logical qubit at practical error rates) and its limited native gate set (only Clifford gates), which means universal computation requires magic state distillation to supply non-Clifford gates such as the T gate.
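
To make the overhead concrete, the following sketch applies a common rule-of-thumb scaling for the logical error rate, p_L ≈ A(p/p_th)^((d+1)/2); the prefactor A ≈ 0.1 and threshold p_th = 1% are rough illustrative assumptions, not measured values.

```python
def logical_error_rate(p_phys, d, p_th=0.01, prefactor=0.1):
    """Rule-of-thumb logical error rate per round for a distance-d surface
    code: p_L ~ prefactor * (p_phys / p_th) ** ((d + 1) / 2).
    Both the prefactor and the threshold are rough assumptions."""
    return prefactor * (p_phys / p_th) ** ((d + 1) / 2)

def distance_for_target(p_phys, target_p_logical):
    """Smallest odd code distance whose estimated logical error rate
    meets the target, under the scaling model above."""
    d = 3
    while logical_error_rate(p_phys, d) > target_p_logical:
        d += 2
    return d

# Estimate the distance needed for a ~1e-12 logical error rate at a few
# physical error rates straddling today's hardware (0.1-0.5%).
for p in (0.005, 0.002, 0.001):
    print(f"p_phys = {p:.3f}  ->  d = {distance_for_target(p, 1e-12)}")
```

Since a surface-code patch uses on the order of 2d^2 physical qubits (for the rotated variant), the distances this model suggests at today's error rates land squarely in the thousands-of-physical-qubits-per-logical-qubit regime described above.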