The living QAOA reference

updated 2021-10-02

What is QAOA?

QAOA is short for “Quantum Approximate Optimization Algorithm”.

For deeper mathematical descriptions, see the paper or tutorials (here, here, here, here). Some important jargon: mixing terms, cost functions, circuit depth, and Trotterization.
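
To make the jargon concrete, here is a minimal sketch of \(p=1\) QAOA for MAX-CUT on a triangle, simulated with plain numpy statevectors. Everything here (the graph, the angles, the function names) is illustrative and not tied to any particular paper or library:

```python
import numpy as np
from itertools import product

# Illustrative instance: MAX-CUT on a 3-cycle (triangle).
edges = [(0, 1), (1, 2), (0, 2)]
n = 3

def cut_value(bits):
    # Cost function C(z): number of edges cut by bitstring z.
    return sum(bits[i] != bits[j] for i, j in edges)

# Diagonal of the cost Hamiltonian over all 2^n bitstrings.
costs = np.array([cut_value(bits) for bits in product([0, 1], repeat=n)])

def qaoa_state(gamma, beta):
    # Start in the uniform superposition |+>^n.
    state = np.full(2**n, 1 / np.sqrt(2**n), dtype=complex)
    # Cost layer: the diagonal phase e^{-i gamma C}.
    state = np.exp(-1j * gamma * costs) * state
    # Mixing layer: e^{-i beta X} applied to each qubit.
    rx = np.array([[np.cos(beta), -1j * np.sin(beta)],
                   [-1j * np.sin(beta), np.cos(beta)]])
    state = state.reshape([2] * n)
    for q in range(n):
        state = np.tensordot(rx, state, axes=([1], [q]))
        state = np.moveaxis(state, 0, q)
    return state.reshape(-1)

def expected_cut(gamma, beta):
    # Expected cost under the QAOA output distribution.
    probs = np.abs(qaoa_state(gamma, beta))**2
    return float(probs @ costs)
```

At \(\gamma = \beta = 0\) the expected cut equals that of a uniformly random assignment; scanning the two angles improves on it. Higher depth \(p\) repeats the cost and mixing layers with fresh angles.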

See a list of papers tagged ‘QAOA’ on the arXiv.wiki.

Performance analysis at low depth

The QAOA at low depth is easiest to analyze. Here is a table of known results:

| Problem | QAOA depth | Graph | Performance (satisfying fraction) | Best known algorithm? | Papers |
|---|---|---|---|---|---|
| MAX-CUT | 1 | triangle-free | \(1/2 + 0.30/\sqrt{D}\) | No | Wang+ 2018, Hastings 2019 |
| MAX-CUT | 1 | any | see formula | ? (KM project in progress) | Wang+ 2018, Hastings 2019 |
| MAX-CUT | 2 | girth above 5 | \(1/2 + 0.41/\sqrt{D}\) | No | Marwaha 2021 |
| MAX-3-XOR (E3LIN2) | 1 | no overlapping constraints, or random signs | \(1/2 + 0.33/\sqrt{D}\) | No | Farhi+ 2015, Barak+ 2015, Lin+ 2016, Marwaha+ 2021 |
| MAX-3-XOR (E3LIN2) | 1 | any | at least \(1/2 + \Theta(1/\sqrt{D \log D})\) | So far, no | Farhi+ 2015, Barak+ 2015 |
| MAX-k-XOR (EkLIN2) | 1 | no overlapping constraints, or random signs | see formula | So far, yes(!) | Marwaha+ 2021 |
| SK Model | 1-12 | infinite size | see formula | ? (No if there is no overlap gap) | Farhi+ 2019; see also El Alaoui+ 2020 |
| Max Independent Set | 1 | any | at least \(\Theta(n/d)\) | So far, no (KM project in progress) | Farhi+ 2020 |

There are some other results on low-depth QAOA (these could perhaps be added to the table above):

- The QAOA at low depth has some known performance limitations; for example, see Farhi+ 2020a, Farhi+ 2020b, Bravyi+ 2019, and Chou+ 2021.
- The graph structure may impact QAOA performance; see Herrman+ 2021 for a study on small graphs.

Choosing optimal parameters

At high depth, the algorithm’s success depends on choosing good parameters. There are different strategies for setting them, depending on your goal. (Note: optimally tuning your parameters is an NP-hard problem!) Here is a table of parameter-setting strategies:

| Strategy | Description | Drawbacks | References |
|---|---|---|---|
| brute force | Pre-compute the optimal parameters, then run the algorithm with them. | Although the parameters are found, this takes time exponential in the graph size. | Farhi+ 2014 |
| variational | Run the algorithm several times on a quantum computer, then use the results to choose new parameters. Repeat until convergence. | This requires using a quantum computer many times to find the right parameters. Noise in quantum computers may make it more challenging to choose improved parameters at each step. | Farhi+ 2014 |
| interpolative or Fourier methods | Solve for optimal betas and gammas at small \(p\), and use that to guess optimal betas and gammas at larger \(p\). | This does not guarantee that the parameters found are the best possible for QAOA. | Zhou+ 2018 |
| machine learning | Use a classical optimization method to compute nearly-optimal parameters. | The computed parameters may not be optimal, because of barren plateaus. | many; for example Khairy+ 2019 |
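
As a toy illustration of the variational strategy, here is a sketch of the classical outer loop with a stand-in noisy objective. On real hardware, `noisy_objective` would average measured costs over shots; the landscape and the optimizer here are made up purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

def noisy_objective(params, shots=500):
    # Stand-in for estimating the expected cost on hardware: a made-up
    # smooth landscape plus shot noise of scale 1/sqrt(shots).
    gamma, beta = params
    true_value = 1.5 + 0.45 * np.sin(4 * beta) * np.sin(2 * gamma)
    return true_value + rng.normal(0, 1 / np.sqrt(shots))

def optimize(steps=200, sigma=0.1):
    # Random local search: propose nearby parameters, keep them if the
    # (noisy) estimate improves. Simple, and tolerant of noisy evaluations.
    params = rng.uniform(0, np.pi, size=2)
    best_val = noisy_objective(params)
    for _ in range(steps):
        candidate = params + rng.normal(0, sigma, size=2)
        val = noisy_objective(candidate)
        if val > best_val:
            params, best_val = candidate, val
    return params, best_val
```

In practice, the choice of classical optimizer matters because each evaluation costs quantum-computer time and the estimates are noisy.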

Barren plateaus pose a challenge to numerically computing optimal parameters. Parameter landscapes have many local optima and large “flat” regions (“barren plateaus”), so local changes in parameters (e.g. gradient methods) may not improve the QAOA’s performance, making it very difficult to identify optimal parameters. See McClean+ 2018 and Wang+ 2020.

Some studies suggest that optimal QAOA parameters can transfer from graph instance to graph instance: see Lotshaw+ 2021 for a study on small graphs, as well as Galda+ 2021 and Wurtz+ 2021. This also holds on large graphs; see a theoretical result on instance independence in Brandao+ 2018.

For certain problems with instance independence, you can derive analytical expressions for the optimal QAOA parameters. This happens for local problems like MAX-CUT on degree-3 graphs and for the SK model. On these problems, one can classically find universal angles that apply to any (large) problem instance.
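
For the triangle-free case in the table above, for instance, the \(p=1\) universal angles reduce to a one-dimensional classical search. The per-edge expectation formula in this sketch is quoted from memory of Wang+ 2018 (it reproduces the \(1/2 + 0.30/\sqrt{D}\) entry above), so treat it as an assumption:

```python
import numpy as np

def p1_edge_expectation(gamma, beta, D):
    # Assumed closed form (Wang+ 2018) for the p=1 per-edge cut expectation
    # on triangle-free D-regular graphs.
    return 0.5 + 0.5 * np.sin(4 * beta) * np.sin(gamma) * np.cos(gamma) ** (D - 1)

def universal_angles(D, grid=100_000):
    # sin(4*beta) is maximized at beta = pi/8, leaving a 1-D search over gamma.
    gammas = np.linspace(0.0, np.pi / 2, grid)
    vals = p1_edge_expectation(gammas, np.pi / 8, D)
    i = int(np.argmax(vals))
    return gammas[i], np.pi / 8, float(vals[i])
```

The resulting angles apply to every (large) triangle-free \(D\)-regular instance; as \(D\) grows the optimal value approaches \(1/2 + 0.3033/\sqrt{D}\).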

The QAOA might (and, from simulations, appears to) perform well with good parameters that are not necessarily optimal. This suggests (but doesn’t prove!) that finding optimal parameters at large \(p\) is not only difficult, but also unnecessary.

There are also strategies that optimize parameters by treating the QAOA as a control problem; see Niu+ 2019, Dong+ 2019, and Wiersema+ 2020.

QAOA performance under noise and errors

The quality of solutions found by QAOA (without error correction or mitigation) quickly decays under noise. Noise also makes it harder to find optimal parameters.

The rapid decay of the quality of QAOA due to noise has also been observed experimentally in Harrigan+ 2021.
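
A crude way to see the decay: model the noise as a global depolarizing channel applied after each layer, which contracts the expectation toward the value of random guessing. This is a toy model only (real device noise is not global depolarizing, and the rate is not uniform per layer):

```python
# Toy noise model: a global depolarizing channel of strength lam per layer
# mixes the state with the maximally mixed state, so after p layers the
# expected cost contracts geometrically toward the random-guessing value.
def noisy_expectation(e_ideal, e_rand, lam, p):
    return (1 - lam) ** p * (e_ideal - e_rand) + e_rand
```

Under this model the advantage over random guessing shrinks exponentially in the circuit depth, which is one intuition for why deep QAOA circuits are hard to run without error correction.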

There are also strategies to deal with errors.

Extensions to QAOA

Several adjustments to QAOA have been proposed. Some of these are new protocols that use QAOA as a subroutine; others add penalty terms, use thermal state based objectives, or adjust the mixing Hamiltonian. Here is a list:

| Extension | How it works | References |
|---|---|---|
| ST-QAOA (Spanning Tree QAOA) | Changes the mixer to a sum of \(X_i X_j\) terms on a spanning tree | Wurtz+ 2021a |
| CD-QAOA (Counter-diabatic QAOA) | Adds terms to the mixer to mimic corrections to adiabaticity | Wurtz+ 2021b |
| Quantum Alternating Operator Ansatz | Generalization of QAOA (parameterized unitaries and mixers; incorporating constraints into the mixers) | Hadfield+ 2017, Hadfield+ 2021 |
| RQAOA (Recursive QAOA) | Recursively generates a more and more constrained problem, using QAOA (or some other quantum routine) to add constraints | Bravyi+ 2019 |
| GM-QAOA (Grover Mixer QAOA) and Th-QAOA (Threshold-based QAOA) | Changes the mixer to “flip phase” for a marked element (GM) or above a certain threshold (Th); imitates Grover search operators | Bärtschi+ 2020, Golden+ 2021 |
| Warm-start QAOA | Approximately solves the problem with a different method, then uses that output as the initial state for QAOA | Egger+ 2020, Tate+ 2020 |
| Multi-angle QAOA | Separates the cost Hamiltonian into individual terms, using a different angle per term at every step | Herrman+ 2021 |

Some extensions use alternative cost functions besides the cost Hamiltonian, including CVaR (Conditional Value-at-Risk) (Barkoutsos+ 2019), feedback-based approaches (Magann+ 2021), and Gibbs sampling. These aim to estimate higher moments of the distribution to improve the “best” (vs expected) performance of QAOA.
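
As an illustration, the CVaR objective replaces the mean over measured bitstrings with the mean over only the best \(\alpha\)-fraction of them. A minimal sketch (the function name and conventions are illustrative; here higher cost is better):

```python
import numpy as np

def cvar_objective(sampled_costs, alpha=0.1):
    # Conditional Value-at-Risk at level alpha: average only the best
    # alpha-fraction of the sampled costs (best = highest, since we maximize).
    costs = np.sort(np.asarray(sampled_costs, dtype=float))[::-1]
    k = max(1, int(np.ceil(alpha * len(costs))))
    return float(costs[:k].mean())
```

Setting `alpha=1.0` recovers the usual expectation, while small `alpha` focuses the optimizer on the best samples observed.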

Open problems

Figure from Barak+ 2021: a plot of conjectured performance of classical and quantum algorithms, asking whether NISQ advantage for optimization exists.

Myths about QAOA

| Myth | Explanation |
|---|---|
| MAX-CUT is clearly an interesting problem for quantum advantage | Goemans-Williamson (SDP) is potentially optimal (if the Unique Games Conjecture is true). There may be families of graphs where quantum computing can help; but other problems have much bigger gaps between the best classical algorithm and the limits of approximation. |
| VQE is the same as QAOA | (This section could be improved.) VQE is a routine for finding the minimum eigenvalue of a “quantum” Hamiltonian (e.g. the electronic structure problem). The variational method is required because we can’t evaluate the Hamiltonian classically. In QAOA, by contrast, the Hamiltonian can be represented classically, but the best solution is not known. Stuart Hadfield refers to this distinction as a non-diagonal vs diagonal eigenvalue problem. See more information about variational quantum algorithms here. However, one can use QAOA-like algorithms on quantum Hamiltonians; see Anshu+ 2020, Anshu+ 2021. |
| The expected performance is what matters | You really want the highest value the QAOA returns: when you run the experiment many times, you take the best result, not the average one. But we use the average because it is easier to calculate, and bound the chance of higher performance with tail bounds. (Caveat: at high enough \(n\) and \(p\), there may be concentration, where basically every bitstring you sample achieves exactly the expected performance.) The QAOA may also perform well even if the overlap with the optimal value is low. |
| QAOA is just adiabatic computation | Although there are many similarities, there are settings where QAOA improves upon adiabatic computation. See Zhou+ 2018 and also Brady+ 2021. Other comparisons bound the depth \(p\) for QAOA to approximate adiabatic computation, and compare the control strategies (Bapat+ 2018). |
| Using QAOA gives quantum advantage | Although sampling from a \(p=1\) QAOA distribution is not possible classically (Farhi+ 2016), simply using QAOA does not necessarily mean you are calculating things classical computers cannot. In fact, there are no examples (yet?) of provable quantum advantage using QAOA on an optimization problem. |
| QAOA is a limited model of computation | QAOA, with the right parameters, has been shown to be universal for quantum computation, even on a one-dimensional line of qubits (Lloyd 2018). How it’s done: QAOA can be used to implement “broadcast quantum cellular automaton” dynamics, which are universal for quantum computation. |
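
The “expected vs best” distinction above can be seen with a toy sample: the maximum over a batch of measured costs is typically well above their mean. The distribution here is made up purely for illustration and is not a real QAOA output distribution:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy stand-in for measured QAOA cost values: 10,000 draws with
# maximum possible cost 10. (NOT a real QAOA output distribution.)
values = rng.binomial(10, 0.6, size=10_000)

mean_cost = values.mean()         # what most analyses bound
best_of_100 = values[:100].max()  # what an experiment actually keeps
```

With many shots, the best observed cost dominates the mean, which is why tail bounds on the output distribution matter for practical performance.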

Have more to add? Edit this page (button on the left) or email Kunal Marwaha at marwahaha@berkeley.edu.