Quantum Volume (QV) is a single-number benchmark introduced by IBM in 2019 to capture the overall capability of a quantum processor by simultaneously testing qubit count, gate fidelity, connectivity, and compiler quality. The test runs random "model" circuits of width and depth both equal to m on an m-qubit subset of the processor and checks whether the device generates heavy outputs — bitstrings whose ideal probability exceeds the median of the ideal output distribution — more than two-thirds of the time. The quantum volume is 2^m for the largest m that passes the test with statistical confidence.
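The pass criterion can be sketched as follows. This is an illustrative simplification, not IBM's exact protocol: it assumes the per-circuit heavy-output fractions have already been measured, and checks that their mean clears the 2/3 threshold with roughly two-sigma confidence.

```python
import math
import statistics

def passes_qv_test(heavy_output_probs, threshold=2/3, z=2.0):
    """Sketch of the QV decision rule (hypothetical helper, not a library API).

    heavy_output_probs: per-circuit fraction of shots that landed on
    heavy outputs (bitstrings above the median ideal probability).
    Passes if the lower ~2-sigma confidence bound on the mean exceeds 2/3.
    """
    n = len(heavy_output_probs)
    mean = statistics.mean(heavy_output_probs)
    # Standard error of the mean across the ensemble of random circuits.
    stderr = statistics.stdev(heavy_output_probs) / math.sqrt(n)
    return mean - z * stderr > threshold
```

For example, a device that consistently produces heavy outputs ~75% of the time across many circuits passes, while one hovering near 60% does not.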
A processor with QV = 2^10 = 1,024 can reliably execute random circuits of width 10 and depth 10 — that is, it can sustain useful computation across 10 qubits before errors overwhelm the signal. Leading systems have achieved QV in the thousands to millions: Quantinuum's H2 demonstrated QV of 2^20 (over 1 million), reflecting its exceptional gate fidelity and all-to-all connectivity. IBM's processors typically achieve QV of 2^7 to 2^10, limited by connectivity constraints in their heavy-hex topology.
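The relationship between the passing widths and the reported number is simple exponentiation. A minimal sketch (the function name and the convention of returning 1 when nothing passes are illustrative assumptions):

```python
def quantum_volume(passed_widths):
    """Return 2**m for the largest circuit width m that passed the QV test.

    passed_widths: list of qubit counts m whose width-m, depth-m random
    circuits passed the heavy-output test with statistical confidence.
    Returns 1 (2**0) if no width passed.
    """
    return 2 ** max(passed_widths) if passed_widths else 1
```

So a machine passing at every width up to m = 10 reports QV = 1,024, and one passing up to m = 20 reports QV = 1,048,576.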
Quantum Volume has limitations as a benchmark. It does not capture performance on structured circuits (real algorithms), does not test error-correction capabilities, and becomes less meaningful as processors approach the fault-tolerant regime. Critics note that QV conflates multiple independent quality factors into a single number, making it hard to diagnose specific weaknesses. Alternative benchmarks such as CLOPS (circuit layer operations per second, a speed metric), algorithmic qubits (IonQ's metric), and application-specific benchmarks are increasingly used alongside QV to give a more complete picture of processor capability.