
Google’s Willow Quantum Computing Chip: A Game Changer?

Suppressing Logical Errors Exponentially! For the First Time

A Curious Cat Holding a Quantum Chip (Generated with DALLE)

Recently, Google unveiled its most advanced quantum chip, Willow, highlighting some ground-breaking results that could lead to relevant real-world applications of Quantum Computing, moving it beyond a tool of curiosity and speculation for scientists and engineers.

If you are looking for hype about the multiverse and breaking the encryption of cryptos with quantum computing, this post is not the place.

Here, I will offer a layman’s perspective on quantum coherence and quantum error correction, and on what makes Google’s Willow chip a potential ground-breaking discovery. A few of the topics from Google’s Nature paper¹ that I would like to discuss here are:

  • Coherence and Decoherence; T1 time
  • Pure and Mixed States; Density Matrix
  • Quantum Error Correction; Surface Code

Let’s begin!


Coherence in Qubits:

Coherence, in very simple terms, can be described as the ability of a qubit to maintain its quantum state without being disrupted by noise or interactions with its environment. Coherence is essential for quantum computation because quantum algorithms rely on the precise manipulation of qubits’ superposition and entanglement. So, how should we think about superposition and entanglement?

A qubit state |ψ⟩ that is a superposition of the basis states |0⟩ and |1⟩ can be written as:

|ψ⟩ = α|0⟩ + β|1⟩,  with complex amplitudes satisfying |α|² + |β|² = 1

Eq. 1: A qubit which is in a superposition of the 0 and 1 basis states

Trying to measure this qubit will collapse it to either ∣0⟩ or ∣1⟩ with probabilities |α|² and |β|², respectively.
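
If it helps to see this numerically, here is a minimal NumPy sketch (the amplitudes are arbitrary illustrative values, not from the paper) that samples measurement outcomes according to these probabilities (the Born rule):

```python
import numpy as np

# A minimal sketch: sample measurement outcomes for the qubit
# |psi> = alpha|0> + beta|1>. The amplitudes are illustrative choices.
alpha, beta = np.sqrt(0.7), np.sqrt(0.3)              # |alpha|^2 + |beta|^2 = 1
probs = np.array([abs(alpha) ** 2, abs(beta) ** 2])   # Born rule: P(0), P(1)

shots = 10_000
outcomes = np.random.choice([0, 1], size=shots, p=probs)
print("P(0) ~", np.mean(outcomes == 0), " P(1) ~", np.mean(outcomes == 1))
```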

On the other hand, entanglement refers to a quantum state involving two or more qubits such that their individual states cannot be described independently: the state of one qubit is correlated with the state of the other(s), no matter how far apart they are. I have discussed Bell States before in detail, but here let’s see an example of an entangled state:

|Φ⁺⟩ = (|00⟩ + |11⟩)/√2

An Entangled State (one of the 4 Bell States)
  • The state ∣00⟩ means both qubits are in ∣0⟩ states.
  • The state ∣11⟩ means both qubits are in ∣1⟩ states.
  • If we measure the first qubit and find it in ∣0⟩, the second qubit will immediately collapse to ∣0⟩, and the same holds for ∣1⟩ (see the short simulation sketch after this list).
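
Here is a minimal NumPy sketch of measuring both qubits of this Bell state many times; only the outcomes 00 and 11 appear, each about half of the time:

```python
import numpy as np

# A minimal sketch (NumPy only) of measuring both qubits of the Bell state
# (|00> + |11>)/sqrt(2) in the computational basis. The four basis states
# are ordered |00>, |01>, |10>, |11>.
bell = np.array([1, 0, 0, 1]) / np.sqrt(2)
probs = np.abs(bell) ** 2                    # Born rule over the 4 outcomes

shots = 10_000
samples = np.random.choice(4, size=shots, p=probs)
for i, label in enumerate(["00", "01", "10", "11"]):
    print(label, np.mean(samples == i))
# Only "00" and "11" occur (each with probability ~0.5): the two qubits
# are perfectly correlated, which is the signature of this entangled state.
```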

Quantum algorithms are built on these fundamental concepts of superposition and entanglement, and hopefully we can now appreciate why a qubit must maintain its quantum state, and not get corrupted by noise, before we can perform useful computations.

T1 or Relaxation Time: One of the ways to measure coherence (a qubit maintaining its quantum state) is via the T1 time or relaxation time (the other is the T2 time or dephasing time). We can think of the coherence time as an upper limit on how long one can reliably use a qubit for quantum operations. Google’s previous quantum chip, Sycamore, contained 53 qubits (Google claimed to achieve quantum supremacy with the Sycamore chip in 2019)² and had a T1 time of ~20 μs (20 microseconds); Willow, a 105-qubit processor, has an approximately 5 times higher T1 time (~98 μs).


Superconducting Qubits and T1 Time:

Both the Sycamore and Willow processors are based on superconducting qubits. Even though trapped-ion qubit systems usually have a higher T1 time, superconducting qubits are usually favored for a few reasons:

  • Superconducting qubits have faster gate operation times (in the range of nanoseconds (10–100 ns)), compared to trapped ion qubits (microseconds to milliseconds). Faster operation time implies more calculations over a shorter time.
  • Superconducting qubits are built on solid-state circuits using more established semiconductor fabrication techniques, whereas trapped-ion systems require complex optical setups (lasers, vacuum systems, etc.). This makes trapped-ion systems harder to scale to larger qubit arrays.

At least one company I know of that is actively working on trapped-ion qubit systems is Honeywell, in contrast to the superconducting qubit technologies used by IBM and Google.

If the initial state of a qubit is |1⟩ (think of this as an excited state compared to the ground state |0⟩) and the T1 time is 30 μs, then the probability of still being in state |1⟩ drops exponentially with time: P₁(t) = e^(−t/T1). After t = T1 = 30 μs, this probability has fallen to 1/e ≈ 0.37.
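
To get a feel for such numbers, here is a minimal sketch of this single-exponential decay model (the T1 values are just the approximate Sycamore and Willow figures quoted above; real devices have additional error channels):

```python
import numpy as np

# A minimal sketch of T1 (relaxation) decay: the probability of still
# finding the qubit in |1> after time t, assuming the simple exponential
# model P1(t) = exp(-t / T1).
def p_excited(t_us: float, t1_us: float) -> float:
    return float(np.exp(-t_us / t1_us))

for t1 in (20.0, 98.0):   # ~Sycamore-like vs ~Willow-like T1, in microseconds
    print(f"T1 = {t1:5.1f} us:",
          [round(p_excited(t, t1), 3) for t in (1, 10, 30, 100)])
```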

Thermal noise and electromagnetic fluctuations shorten this T1 lifetime of qubits. That’s why you will read reports or watch videos on why maintaining temperatures close to zero kelvin (currently, we can reach and maintain a few millikelvin) is necessary for quantum processors to operate. At these low temperatures, the thermal energy scale k_B·T (k_B is the Boltzmann constant and T is the temperature in kelvin) is much smaller than at room temperature, so these systems have very low thermal noise. Since qubits are very fragile and small perturbations like thermal noise can lead to decoherence (where qubits fail to maintain their intended quantum state), a high coherence time is an important factor in building a robust and reliable quantum system.
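
As a quick back-of-the-envelope check (a sketch, assuming an illustrative operating temperature of 20 mK), we can compare the thermal energy scale k_B·T at room temperature and inside a dilution refrigerator:

```python
# Back-of-the-envelope comparison of the thermal energy scale k_B*T at room
# temperature vs. the millikelvin range of a dilution refrigerator.
# (20 mK is an illustrative operating point, not a quoted Willow spec.)
k_B = 1.380649e-23   # Boltzmann constant, J/K

for label, T in (("room temperature", 300.0), ("dilution fridge", 0.020)):
    print(f"{label:>16}: T = {T:7.3f} K, k_B*T = {k_B * T:.2e} J")

print(f"thermal energy is ~{300.0 / 0.020:,.0f}x smaller at 20 mK than at 300 K")
```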


Density Matrix: Pure to Mixed States; Coherence to Decoherence

Another way to think about coherence and decoherence is in terms of the density matrix. This part is for the geeks; if you want to avoid the mathematics, please skip to the TLDR and then to the next section.

We have already seen a simple way to write a superposition state; we can also write it in another, more generic form:

|ψ⟩ = α|0⟩ + e^(iθ) β|1⟩,  with α, β real and α² + β² = 1

Eq. 2: A pure quantum state

Here θ is the phase difference between the complex coefficients. From this, we can write the well-known superposition states for different values of θ. For example, equal amplitudes for α, β and a 0° phase difference lead to:

|ψ⟩ = (|0⟩ + |1⟩)/√2 = |+⟩

Eq. 3: From Eq. 2, α = β and θ = 0

Similarly, equal amplitudes but θ = 180° lead to:

|ψ⟩ = (|0⟩ − |1⟩)/√2 = |−⟩

Eq. 4: From Eq. 2, α = β and θ = 180

For θ = 90°, we obtain:

|ψ⟩ = (|0⟩ + i|1⟩)/√2

Eq. 5: From Eq. 2, α = β and θ = 90

For all these cases, the probabilities of measuring |0⟩ and |1⟩ are the same, given by |α|² = |β|² = 0.5.

The relative phase is not directly measurable, because quantum mechanics only gives us measurement probabilities. But it does have an effect when we apply a quantum gate or measure in a different basis. For example, let’s apply a Hadamard gate (H) to a state |ψ⟩ of the form in Eq. 2, with equal amplitudes and a generic θ:

H|ψ⟩ = ½ [(1 + e^(iθ))|0⟩ + (1 − e^(iθ))|1⟩],  giving P(0) = cos²(θ/2) and P(1) = sin²(θ/2)

Eq. 6: Applying the Hadamard gate to the state of Eq. 2 with equal amplitudes and relative phase θ

(If you want to learn more about Hadamard gates, check my old post.) Now we see that the relative phase affects the measurement probabilities in the {∣0⟩, ∣1⟩} basis, demonstrating interference effects.
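
A minimal sketch of this interference effect, assuming equal amplitudes and sweeping the relative phase θ, could look like this:

```python
import numpy as np

# A minimal sketch of how the relative phase theta changes the measurement
# probabilities after a Hadamard gate, for |psi> = (|0> + e^{i*theta}|1>)/sqrt(2).
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)

for theta_deg in (0, 90, 180):
    theta = np.deg2rad(theta_deg)
    psi = np.array([1, np.exp(1j * theta)]) / np.sqrt(2)
    out = H @ psi
    p0, p1 = np.abs(out) ** 2
    print(f"theta = {theta_deg:3d} deg -> P(0) = {p0:.2f}, P(1) = {p1:.2f}")
# theta = 0 gives P(0) = 1, theta = 180 gives P(1) = 1, theta = 90 gives 0.5/0.5,
# matching cos^2(theta/2) and sin^2(theta/2).
```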

The quantum states in Eqs. 3, 4, and 5 are examples of pure states. The best way to think about pure states is in terms of the density matrix. We can write the density matrix for the state |ψ⟩ (Eq. 2) as ρ = |ψ⟩⟨ψ|, which is the 2×2 matrix below:

ρ = |ψ⟩⟨ψ| = [ α²           αβ e^(−iθ) ]
             [ αβ e^(iθ)    β²          ]

Eq. 7: Density matrix for the pure state given in Eq. 2.

The off-diagonal terms of the density matrix encode the coherence, and we can see that they carry the phase θ. A pure state satisfies Tr(ρ²) = 1, and we can think of it as a quantum state described by a single wave function, with no uncertainty about which state the system is in. This is in contrast with a maximally mixed state, where the off-diagonal terms vanish (= 0), representing no coherence and maximum uncertainty about the state.

We usually think of this transition from pure to mixed states as decoherence. So when a pure state, for example the one represented by |ψ⟩ in Eq. 2, transitions into a maximally mixed state, the density matrix becomes:

ρ = [ 1/2    0  ]  = I/2
    [  0    1/2 ]

Eq. 8: Density matrix for a maximally mixed state

For a mixed state, Tr(ρ²) < 1, so once we know the density matrix for a state, we can verify whether it is pure or mixed.
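
Here is a minimal sketch of that check, computing the purity Tr(ρ²) for a pure state of the form in Eq. 2 (with illustrative values of α, β, θ) and for the maximally mixed state of Eq. 8:

```python
import numpy as np

# A minimal sketch: compute the purity Tr(rho^2) for a pure state of the
# form in Eq. 2 and for the maximally mixed state of Eq. 8.
alpha, beta, theta = 1 / np.sqrt(2), 1 / np.sqrt(2), np.pi / 2   # illustrative
psi = np.array([alpha, beta * np.exp(1j * theta)])

rho_pure = np.outer(psi, psi.conj())     # |psi><psi|
rho_mixed = np.eye(2) / 2                # I/2

for name, rho in (("pure", rho_pure), ("maximally mixed", rho_mixed)):
    purity = np.trace(rho @ rho).real
    print(f"{name:>16}: Tr(rho^2) = {purity:.2f}")
# pure -> 1.00, maximally mixed -> 0.50 (the minimum for a single qubit)
```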

Suppose we prepare a quantum state with maximum certainty; over time, due to noise and interaction with the environment, we become increasingly uncertain about which state the system is currently in. This transition from a pure to a mixed state, i.e. the loss of coherence, is how decoherence shows up in the evolution of the quantum system as a function of time.

TLDR: To know whether a quantum state is pure or mixed, we can check the density matrix; the transition from a pure to a mixed state due to interaction with the environment and noise can be thought of as a transition from coherence to decoherence as a function of time.

If you want to know how Density Matrix and Bloch Sphere are related, you can read about them here.


Quantum Error Correction:

The main result from the paper published in Nature¹ on Google’s Willow processor is that the researchers showed that adding more qubits resulted in reduced errors. That is a very simplified but also difficult-to-comprehend sentence, so to break down what exactly it means, we first need to know the distinction between logical and physical qubits.

Logical Qubit:

  • A logical qubit is an error-protected qubit created by encoding one qubit of quantum information across multiple physical qubits.
  • This encoding allows the logical qubit to be resilient to errors, provided that the errors are detected and corrected before they accumulate too much.

To directly quote from the abstract of the Nature paper¹:

Quantum error correction provides a path to reach practical quantum computing by combining multiple physical qubits into a logical qubit, where the logical error rate is suppressed exponentially as more qubits are added.

For superconducting qubits (like those in Google’s Sycamore and Willow processors), the surface code is a common quantum error-correcting (QEC) code used to create logical qubits. So what is the surface code? Since discussing a QEC algorithm in full would be a bit too much for this post, let’s have a simple overview.

Surface Code: This represents a type of quantum error-correcting code that encodes a logical qubit into an array of physical qubits arranged on a 2D grid. It protects quantum information by detecting and correcting errors through local measurements of groups of qubits (called stabilizers) without directly measuring the qubits that hold the logical information. Some common features in surface code are:

  • Two main types of qubits are arranged in a 2D grid: data qubits hold the quantum information needed for the calculation, and ancilla qubits are used to measure error syndromes.
  • The surface code uses stabilizer operators to detect errors without disturbing the quantum information. We can think of stabilizers as checks performed on groups of qubits that detect bit flips ∣0⟩ ↔ ∣1⟩ (called X errors), phase flips ∣+⟩ ↔ ∣−⟩ (Z errors), or both (Y errors).
  • A logical qubit is encoded across the entire grid of physical qubits. Logical operations (like X, Z, Hadamard) are implemented as large-scale operations across the grid.
  • The surface code has one of the highest error thresholds (on physical errors, i.e. errors at the level of physical qubits) among QEC codes; as long as the physical error rate stays below this threshold, adding more qubits reduces the logical error rate exponentially.
  • Once errors are detected by repeatedly measuring the stabilizers, an error-correcting (decoding) algorithm is applied to restore the logical qubit to its intended quantum state (a toy illustration follows this list).
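
The full surface code is beyond the scope of this post, but the following toy sketch (a classically simulated 3-bit repetition code, a much simpler relative of the surface code) illustrates the core idea: parity-check "syndromes" reveal where an error occurred without reading out the encoded information itself.

```python
import numpy as np

# Toy (classical) simulation of a 3-bit repetition code protecting one
# logical bit against a single bit-flip. This is NOT the surface code,
# just the simplest illustration of syndrome-based error correction.
def encode(bit: int) -> np.ndarray:
    return np.array([bit, bit, bit])

def syndrome(data: np.ndarray) -> tuple:
    # Parity checks between neighbouring bits play the role of stabilizers:
    # they flag that an error happened without revealing the logical bit.
    return (int(data[0] ^ data[1]), int(data[1] ^ data[2]))

def correct(data: np.ndarray) -> np.ndarray:
    flip = {(1, 0): 0, (1, 1): 1, (0, 1): 2}.get(syndrome(data))
    if flip is not None:
        data = data.copy()
        data[flip] ^= 1
    return data

encoded = encode(1)
encoded[2] ^= 1                           # a single bit-flip error on bit 2
print("syndrome:", syndrome(encoded))     # (0, 1) -> error on bit 2
print("decoded :", correct(encoded))      # back to [1, 1, 1]
```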

The main point here is that adding more physical qubits increases protection against errors only if the physical error rate is sufficiently low; beyond a certain point, if the error density is too high, adding more qubits introduces more errors than the code can correct.

Logical Error Rate Suppression: Provided the condition we discussed above holds, adding more physical qubits exponentially reduces the logical error rate (the rate of errors that affect the encoded quantum information). This can be expressed mathematically as:

ε_d ∝ (p / p_thr)^((d+1)/2)

Eq. 9: Logical Error Rate Suppression

Here, p and ε_d are the physical and logical error rates, respectively, p_thr is the threshold error rate of the code, and d is the code distance, with 2d² − 1 physical qubits used per logical qubit.

With this relation, we can appreciate that:

  • If p < p_thr, i.e. the physical error rate is below the critical threshold, logical error suppression improves as d increases.
  • When p > p_thr, the logical error rate increases with d, as the system introduces more errors than the code can correct.

Increasing the code distance provides better error suppression, but the qubit requirement grows as ~2d², raising the hardware demands (see the sketch below).
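
As a rough numerical sketch of Eq. 9 and of the qubit-count scaling (the physical error rate and threshold below are illustrative values, not Willow’s measured numbers):

```python
# A minimal sketch of Eq. 9: logical error rate eps_d ~ (p / p_thr)^((d+1)/2),
# together with the ~2*d^2 - 1 physical qubits needed per logical qubit.
p, p_thr = 0.001, 0.01   # assumed physical error rate and threshold

for d in (3, 5, 7, 9, 11):
    eps_d = (p / p_thr) ** ((d + 1) / 2)
    n_phys = 2 * d ** 2 - 1
    print(f"d = {d:2d}: ~{n_phys:4d} physical qubits, logical error ~ {eps_d:.1e}")
# Each step d -> d+2 multiplies the logical error rate by p/p_thr = 0.1,
# i.e. suppresses it by the factor Lambda = 10 discussed next.
```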

In the Nature paper¹, the Google team also defined a quantity Λ that represents the reduction in error obtained by increasing the code distance by 2:

Λ = ε_d / ε_(d+2) ≈ p_thr / p

Eq. 10: Error suppression from increasing the code distance by 2.

Just to give a feeling: when the physical error rate p is much below the threshold, the error suppression factor Λ gets very large.

  • For example, if p = 0.1% and p_thr = 1%, then Λ = 1% / 0.1% = 10. This means that increasing the code distance by 2 reduces the logical error rate by a factor of 10.

The main point of the Willow chip is that, for the first time, the Google Quantum AI group showed that it is possible to build superconducting quantum processors where surface codes operate below this critical threshold, showing exponential logical error rate suppression as qubits are added. This below-threshold operation could be maintained even while decoding errors in real time.

If you have watched Google’s Willow video (link below), Dr. Kelly highlights this point around the 3-minute mark.

As I wrote before, the physical error rate being below the critical threshold in Willow paves the path to making larger and more complex quantum chips.

Testing against the most powerful classical supercomputer on the Random Circuit Sampling benchmark, Google showed that their quantum processor could perform in less than 5 minutes a calculation that would take the supercomputer 10²⁵ years!

This is exciting, and I can only appreciate the amazing hardware developments achieved by the researchers and scientists to reach this feat! The future of quantum computing could be very exciting, because this type of processor should be the backbone of large-scale error-corrected quantum computers. Google’s blog³ highlights that they want to push boundaries in medicines/drugs, batteries, and fusion power by solving difficult optimization problems that are unsolvable by classical computers! Let’s see what the future holds!!

What do you think?


References:

[1] "Quantum Error Correction Below the Surface Code Threshold"; Google Quantum AI and Collaborators, Nature 2024.

[2] "Quantum Supremacy Using a Programmable Superconducting Processor"; F. Arute et al., Nature 2019.

[3] "Meet Willow, our state-of-the-art quantum chip"; H. Neven.

