NISQ (Noisy Intermediate-Scale Quantum) is a term coined by John Preskill in 2018 to describe the current generation of quantum processors: large enough (roughly 50-100+ qubits) to be beyond classical simulation in some regimes, yet too noisy to run the deep circuits required by textbook quantum algorithms such as Shor's or Grover's. NISQ devices operate on physical qubits without full error correction, which limits practical circuit depth to roughly 100-1,000 gate layers before accumulated errors overwhelm the computation.
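The depth limit follows from a back-of-envelope error model: if each gate layer succeeds independently with probability (1 − p), circuit fidelity decays geometrically with depth. The sketch below illustrates this; the 0.1% per-layer error rate is an illustrative assumption, not a figure from any specific device.

```python
import math

def circuit_fidelity(p_layer_error: float, depth: int) -> float:
    """Probability the whole circuit runs error-free under an
    independent per-layer error model: (1 - p) ** depth."""
    return (1.0 - p_layer_error) ** depth

def max_useful_depth(p_layer_error: float, min_fidelity: float = 0.5) -> int:
    """Deepest circuit whose fidelity stays above min_fidelity."""
    return int(math.log(min_fidelity) / math.log(1.0 - p_layer_error))

# With an assumed 0.1% per-layer error rate, fidelity falls below 50%
# at a depth of a few hundred layers -- consistent with the
# ~100-1,000-layer range quoted above.
print(max_useful_depth(1e-3))
```

Halving the error rate roughly doubles the reachable depth, which is why incremental hardware improvements translate directly into longer useful circuits.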
The NISQ era has driven development of specialized algorithms designed to work within these constraints. Variational algorithms like VQE (Variational Quantum Eigensolver) and QAOA (Quantum Approximate Optimization Algorithm) use shallow quantum circuits as parameterized function evaluators within a classical optimization loop, aiming to extract value from limited circuit depth. Quantum machine learning, quantum simulation of small molecular systems, and random circuit sampling (used for quantum advantage demonstrations) are other NISQ-era applications.
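The hybrid quantum-classical loop can be sketched in a few lines. This is a toy illustration, not any library's API: a one-parameter "circuit" Ry(θ)|0⟩ is evaluated for the assumed Hamiltonian H = Z + X, whose expectation is cos θ + sin θ, and a classical gradient-descent loop tunes θ. A real NISQ workflow would estimate ⟨H⟩ from shot statistics on hardware; here the expectation is computed exactly for clarity.

```python
import math

def energy(theta: float) -> float:
    """<psi(theta)| (Z + X) |psi(theta)> for psi = Ry(theta)|0>,
    which works out to cos(theta) + sin(theta)."""
    return math.cos(theta) + math.sin(theta)

def parameter_shift_grad(theta: float) -> float:
    """Gradient via the parameter-shift rule, the same trick used to
    differentiate circuit parameters on real devices."""
    return 0.5 * (energy(theta + math.pi / 2) - energy(theta - math.pi / 2))

# Classical outer loop: plain gradient descent on the circuit parameter.
theta, lr = 0.1, 0.4
for _ in range(200):
    theta -= lr * parameter_shift_grad(theta)

print(round(energy(theta), 4))  # converges to the exact minimum -sqrt(2)
```

The quantum device only ever runs shallow circuits (one energy evaluation per shift), while the classical optimizer carries the iteration burden, which is exactly the division of labor that makes variational methods viable at NISQ depths.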
The practical utility of NISQ devices for commercially valuable problems remains debated. While Google demonstrated quantum computational advantage with random circuit sampling (2019 Sycamore, 2024 Willow) and IBM has shown evidence of quantum utility for certain physics simulations, no NISQ algorithm has yet solved a practical problem faster or better than the best classical alternative. The field is transitioning toward early fault-tolerant quantum computing, where partial error correction extends the useful circuit depth beyond NISQ limits without requiring the full overhead of complete fault tolerance.