The Tanner graph of the Shor code. Physical qubits are represented by circles, X-checks by the squares at the bottom, and Z-checks by the squares at the top. We chose a linearly dependent set of stabilizer checks to establish a connection to a geometric interpretation of Shor's code; see Fig. 3.

Source publication
Article
Full-text available
Quantum error correction is an indispensable ingredient for scalable quantum computing. In this Perspective we discuss a particular class of quantum codes called “quantum low-density parity-check (LDPC) codes.” The codes we discuss are alternatives to the surface code, which is currently the leading candidate to implement quantum fault tolerance. W...

Contexts in source publication

Context 1
... a quantum CSS code can be described by a Tanner graph with three layers, representing X-checks, physical qubits, and Z-checks: see Fig. 2. A check acts on the qubits incident in the Tanner ...
Context 2
... have weight 6. This should be compared with Fig. 2, which represents the same code by a Tanner graph. From both representations, one can immediately extract the parity-check matrices H_X and H_Z as incidence ...
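As a concrete illustration of this point, the following Python sketch (our own construction; the row conventions are chosen for readability rather than taken from Fig. 2) writes down such a linearly dependent check set for the [[9,1,3]] Shor code and verifies the CSS commutativity condition H_X H_Z^T = 0 (mod 2).

```python
import numpy as np

# Z-checks of the [[9,1,3]] Shor code: weight-2 ZZ checks on neighboring
# qubits within each block of three (they detect X errors).
HZ = np.zeros((6, 9), dtype=int)
for b in range(3):              # block index
    for j in range(2):          # neighboring pair within the block
        HZ[2 * b + j, 3 * b + j] = 1
        HZ[2 * b + j, 3 * b + j + 1] = 1

# X-checks: weight-6 XXXXXX checks on pairs of neighboring blocks (they
# detect Z errors). Keeping all three rows gives the linearly dependent
# set mentioned in the figure caption.
HX = np.zeros((3, 9), dtype=int)
for b in range(3):
    HX[b, 3 * b:3 * b + 3] = 1
    c = (b + 1) % 3
    HX[b, 3 * c:3 * c + 3] = 1

# CSS condition: every X-check commutes with every Z-check.
assert np.all((HX @ HZ.T) % 2 == 0)
# The three X-checks are dependent over GF(2): their mod-2 sum vanishes.
assert np.all(HX.sum(axis=0) % 2 == 0)
```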
Context 3
... which could benefit hardware implementations, as well as its versatility, as it can in principle be applied to arbitrary quantum LDPC codes. Generally, BP does not work well when applied to Tanner graphs that contain small loops, a feature quantum codes necessarily have due to the commutativity constraint, which introduces loops of length 4 (see Fig. 2). Furthermore, when applied to quantum codes, BP tends to fail to converge, as there are many equivalent solutions up to the application of stabilizers. These problems were addressed in Refs. [106][107][108][109][110][111]. In particular, Duclos-Cianci et al. [112] combined BP with a renormalization decoder and Panteleev and Kalachev [69] ...
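The length-4 loops forced by commutativity are easy to exhibit explicitly: an X-check and a Z-check with overlapping support must share an even number of qubits, hence at least two, and two shared qubits close a 4-cycle (check, qubit, check, qubit) in the Tanner graph. Reusing HX and HZ from the sketch above:

```python
# Overlapping X- and Z-checks must share an even number of qubits to
# commute, so any nonzero overlap contains at least two qubits -- and two
# shared qubits close a length-4 loop in the Tanner graph.
overlaps = HX @ HZ.T
four_cycles = [(i, j)
               for i in range(HX.shape[0])
               for j in range(HZ.shape[0])
               if overlaps[i, j] >= 2]
print(len(four_cycles), "X/Z check pairs close a length-4 loop")  # 12 for the Shor code
```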

Similar publications

Article
Full-text available
Neutral-atom arrays have recently emerged as a promising platform for quantum information processing. One important remaining roadblock for the large-scale application of these systems is the ability to perform error-corrected quantum operations. To entangle the qubits in these systems, atoms are typically excited to Rydberg states, which could dec...

Citations

... Alternatively, the same number of Bell pairs can be used to implement a logical transversal CNOT across two modules, by teleporting CNOT gates between corresponding physical qubits in the code. The second approach can also be applied to [[n, k, d]] quantum low-density parity-check codes [76] with a transversal CNOT⊗k gate to generate k entangled logical qubits between two modules by consuming n physical Bell pairs, allowing a higher rate of logical Bell pair generation than the surface code. The implementation of such codes for neutral atom qubits has been discussed in Ref. [74]. ...
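As a side note on why the transversal CNOT⊗k works: CNOT propagates Paulis by copying X forward and Z backward. The short sketch below (our own illustration, using one Shor-code X-check as a stand-in for an arbitrary CSS check) confirms that each X-check of block A maps to the product of the corresponding checks on both blocks, so the joint stabilizer group is preserved.

```python
import numpy as np

def transversal_cnot(x, z, n):
    """Conjugate a Pauli, given as binary (x | z) vectors over 2n qubits
    (block A = qubits 0..n-1, block B = qubits n..2n-1), by CNOTs acting
    from each qubit of A onto the matching qubit of B."""
    x, z = x.copy(), z.copy()
    x[n:] = (x[n:] + x[:n]) % 2   # CNOT copies X from control to target
    z[:n] = (z[:n] + z[n:]) % 2   # ...and copies Z from target to control
    return x, z

# An X-check h on block A maps to the product of the same check on both
# blocks -- still a stabilizer of the two-block code.
n = 9
h = np.array([1, 1, 1, 1, 1, 1, 0, 0, 0])   # e.g. a Shor-code X-check
x, z = np.concatenate([h, 0 * h]), np.zeros(2 * n, dtype=int)
x2, z2 = transversal_cnot(x, z, n)
assert np.array_equal(x2, np.concatenate([h, h])) and not z2.any()
```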
Article
Full-text available
Quantum links between physically separated modules are important for scaling many quantum computing technologies. The key metrics are the generation rate and fidelity of remote Bell pairs. In this work, we propose an experimental protocol for generating remote entanglement between neutral ytterbium atom qubits using an optical cavity. By loading a large number of atoms into a single cavity, and controlling their coupling using only local light shifts, we amortize the cost of transporting and initializing atoms over many entanglement attempts, maximizing the entanglement generation rate. A twisted ring cavity geometry suppresses many sources of error, allowing high-fidelity entanglement generation. We estimate a spin-photon entanglement rate of 5 × 10⁵ s⁻¹, and a Bell pair rate approaching 10⁵ s⁻¹, with an average fidelity near 0.999. Furthermore, we show that the photon detection times provide a significant amount of soft information about the location of errors, which may be used to improve the logical qubit performance. This approach provides a practical path to scalable modular quantum computing using neutral ytterbium atoms. Published by the American Physical Society 2024
... We hope that fault-tolerance researchers will find it useful for the translation between paradigms beyond the previous examples. This may include different microscopic models (i.e., crystal structures [27,29]), features (boundary conditions, logical blocks [14], transversal gates, etc.), as well as different fault-tolerance protocols based on color codes [30,31], low-density parity check codes [32] or other Clifford encoders [33]. We found the ZX calculus to be a versatile toolbox that can be used at all levels of fault tolerance, from the physical level of different models of quantum computation, to the structure of checks used in decoding [34], and the methodical construction of logical operations [35]. ...
Article
Full-text available
There are several models of quantum computation which exhibit shared fundamental fault-tolerance properties. This article makes commonalities explicit by presenting these different models in a unifying framework based on the ZX calculus. We focus on models of topological fault tolerance – specifically surface codes – including circuit-based, measurement-based and fusion-based quantum computation, as well as the recently introduced model of Floquet codes. We find that all of these models can be viewed as different flavors of the same underlying stabilizer fault-tolerance structure, and sustain this through a set of local equivalence transformations which allow mapping between flavors. We anticipate that this unifying perspective will pave the way to transferring progress among the different views of stabilizer fault-tolerance and help researchers familiar with one model easily understand others.
... For a specific quantum error-correcting code and a noise model, it is then left to prove and find error thresholds, with early works being Refs. [16][17][18][19]; this continues to be an active area of research [20][21][22][23]. ...
Article
Full-text available
In recent years, research in quantum computing has largely focused on two approaches: near-term intermediate-scale quantum (NISQ) computing and future fault-tolerant quantum computing (FTQC). A growing body of research into early fault-tolerant quantum computing (EFTQC) is exploring how to utilize quantum computers during the transition between these two eras. However, without agreed-upon characterizations of this transition, it is unclear how best to utilize EFTQC architectures. We argue for the perspective that this transition period will be characterized by a law of diminishing returns in quantum error correction (QEC), where the ability of the architecture to maintain quality operations at scale determines the point of diminishing returns. Two challenges emerge from this picture: how to model this phenomenon of diminishing return of QEC as the performance of devices is continually improving and how to design algorithms to make the most use of these devices. To address these challenges, we present models for the performance of EFTQC architectures, capturing the diminishing returns of QEC. We then use these models to elucidate the regimes in which algorithms suited to such architectures are advantageous. As a concrete example, we show that for the canonical task of phase estimation, in a regime of moderate scalability and using just over one million physical qubits, the “reach” of the quantum computer can be extended (compared to the standard approach) from 90-qubit instances to over 130-qubit instances using a simple early fault-tolerant quantum algorithm, which reduces the number of operations per circuit by a factor of 100 and increases the number of circuit repetitions by a factor of 10 000. This clarifies the role that such algorithms might play in the era of limited-scalability quantum computing. Published by the American Physical Society 2024
... While our approach works for any sufficiently symmetric CSS code, we believe it is particularly suited to implement logical gates in low-density parity-check (LDPC) quantum codes. These codes have recently attracted a lot of attention, see [3] for a recent review. This is partially due to the fact that constant overhead fault-tolerant quantum computation can be realized with these codes [4,5]. ...
... We explicitly construct fold-transversal gates for a certain [[30, 8, 3]] hyperbolic surface code that we call Bring's code. We show that we can generate a subgroup of the Clifford group C_8 using transversal circuits. ...
... Using the notation from Section 2.3, for Bring's code we define Γ equal to the normal closure in R^+_{5,5} of the group generated by (abcb)^3. This single generator is enough to define the quotient of the hyperbolic disc D; see Figure 2iii. ...
Article
Full-text available
We generalize the concept of folding from surface codes to CSS codes by considering certain dualities within them. In particular, this gives a general method to implement logical operations in suitable LDPC quantum codes using transversal gates and qubit permutations only. To demonstrate our approach, we specifically consider a [[30, 8, 3]] hyperbolic quantum code called Bring's code. Further, we show that by restricting the logical subspace of Bring's code to four qubits, we can obtain the full Clifford group on that subspace.
... The flexible qubit connectivity enabled by the use of propagating light pulses could also provide an avenue towards realizing the nonlocal interactions required by certain quantum low-density parity-check (LDPC) codes [91][92][93][94][95]. Quantum LDPC codes have shown promise for reducing the number of physical qubits required to encode a logical qubit compared to leading candidates like the surface code. ...
Article
Full-text available
Long-range, multiqubit parity checks have applications in both quantum error correction and measurement-based entanglement generation. Such parity checks could be performed using qubit-state-dependent phase shifts on propagating pulses of light described by coherent states |α⟩ of the electromagnetic field. We consider "flying-cat" parity checks based on an entangling operation that is quantum nondemolition for Schrödinger's cat states |α⟩ ± |−α⟩. This operation encodes parity information in the phase of maximally distinguishable coherent states |±α⟩, which can be read out using a phase-sensitive measurement of the electromagnetic field. In contrast to many implementations, where single-qubit errors and measurement errors can be treated as independent, photon loss during flying-cat parity checks introduces errors on physical qubits at a rate that is anticorrelated with the probability for measurement errors. We analyze this trade-off for three-qubit parity checks, which are a requirement for universal fault-tolerant quantum computing with the subsystem surface code. We further show how a six-qubit entangled "tetrahedron" state can be prepared using these three-qubit parity checks. The tetrahedron state can be used as a resource for controlled quantum teleportation of a two-qubit state or as a source of shared randomness with potential applications in three-party quantum key distribution. Finally, we provide conditions for performing high-quality flying-cat parity checks in a state-of-the-art circuit QED architecture, accounting for qubit decoherence, internal cavity losses, and finite-duration pulses, in addition to transmission losses. Published by the American Physical Society 2024
... More generally, it is known that these overheads cannot be substantially improved in planar architectures (codes) with only nearest-neighbor connectivity [11]. To avoid this fundamental challenge, quantum low-density parity-check (LDPC) codes [12][13][14][15][16][17][18][19] have been developed to significantly reduce the resource overheads for error correction [20,21]. To avoid the constraints of Ref. [11], these codes will necessarily require long-range connectivity [22,23]. ...
Preprint
Full-text available
Quantum error correction protects logical quantum information against environmental decoherence by encoding logical qubits into entangled states of physical qubits. One of the most important near-term challenges in building a scalable quantum computer is to reach the break-even point, where logical quantum circuits on error-corrected qubits achieve higher fidelity than equivalent circuits on uncorrected physical qubits. Using Quantinuum's H2 trapped-ion quantum processor, we encode the GHZ state in four logical qubits with fidelity $ 99.5 \pm 0.15 \% \le F \le 99.7 \pm 0.1\% $ (after postselecting on over 98% of outcomes). Using the same quantum processor, we can prepare an uncorrected GHZ state on four physical qubits with fidelity $97.8 \pm 0.2 \% \le F\le 98.7\pm 0.2\%$. The logical qubits are encoded in a $[\![ 25,4,3 ]\!]$ Tanner-transformed long-range-enhanced surface code. Logical entangling gates are implemented using simple swap operations. Our results are a first step towards realizing fault-tolerant quantum computation with logical qubits encoded in geometrically nonlocal quantum low-density parity check codes.
... A fully connected qubit graph allows for the implementation of any error correction code with finite degree and weight. In particular, our optical reconfigurability allows for non-planar operation, opening the possibility of using recently developed codes such as the hyperproduct code 32 or other quantum low-density parity-check codes 33 with a higher encoding rate. For practical applications, we consider the surface code 3,34,35 (shown in Fig. 1a) for its high error threshold and performances in decoding and magic state distillation 36, as well as hashing-bound saturation in handling biased errors 37. ...
Article
Full-text available
Colour centres in diamond have emerged as a leading solid-state platform for advancing quantum technologies, satisfying the DiVincenzo criteria¹ and recently achieving quantum advantage in secret key distribution². Blueprint studies3–5 indicate that general-purpose quantum computing using local quantum communication networks will require millions of physical qubits to encode thousands of logical qubits, presenting an open scalability challenge. Here we introduce a modular quantum system-on-chip (QSoC) architecture that integrates thousands of individually addressable tin-vacancy spin qubits in two-dimensional arrays of quantum microchiplets into an application-specific integrated circuit designed for cryogenic control. We demonstrate crucial fabrication steps and architectural subcomponents, including QSoC transfer by means of a ‘lock-and-release’ method for large-scale heterogeneous integration, high-throughput spin-qubit calibration and spectral tuning, and efficient spin state preparation and measurement. This QSoC architecture supports full connectivity for quantum memory arrays by spectral tuning across spin–photon frequency channels. Design studies building on these measurements indicate further scaling potential by means of increased qubit density, larger QSoC active regions and optical networking across QSoC modules.
... We find that the network can produce good quantum codes [42], meaning code families where the distance and number of logical qudits are both proportional to the number of physical qudits. However, these codes are not low-density parity check (LDPC) codes [43,44] since some of the stabilizers are high weight. We hypothesize that by further fine-tuning the gates, our network architecture could also yield encoding circuits for the recently discovered classes of good quantum LDPC codes [45][46][47][48][49][50]. ...
Article
Full-text available
Motivated by the ground state structure of quantum models with all-to-all interactions such as mean-field quantum spin glass models and the Sachdev-Ye-Kitaev (SYK) model, we propose a tensor network architecture which can accommodate volume-law entanglement and a large ground state degeneracy. We call this architecture the non-local renormalization ansatz (NoRA) because it can be viewed as a generalization of MERA, DMERA, and branching MERA networks with the constraints of spatial locality removed. We argue that the architecture is potentially expressive enough to capture the entanglement and complexity of the ground space of the SYK model, thus making it a suitable variational ansatz, but we leave a detailed study of SYK to future work. We further explore the architecture in the special case in which the tensors are random Clifford gates. Here the architecture can be viewed as the encoding map of a random stabilizer code. We introduce a family of codes inspired by the SYK model which can be chosen to have constant rate and linear distance at the cost of some high-weight stabilizers. We also comment on potential similarities between this code family and the approximate code formed from the SYK ground space.
... The most well-known example of a qubit stabiliser code is the toric code, in which qubits are embedded on the surface of a torus, and properties of the logical space are determined by the topology of the surface [18,31]. This is a basic example of a qubit Calderbank-Shor-Steane (CSS) code; there are several equivalent ways of defining CSS codes, but for our purposes we shall describe them as codes which are all homological in a suitable sense [5,7]. ...
Article
Full-text available
We define code maps between Calderbank-Shor-Steane (CSS) codes using maps between chain complexes, and describe code surgery between such codes using a specific colimit in the category of chain complexes. As well as describing a surgery operation, this gives a general recipe for new codes. As an application we describe how to 'merge' and 'split' along a shared X̄ or Z̄ operator between arbitrary CSS codes in a fault-tolerant manner, so long as certain technical conditions concerning gauge fixing and code distance are satisfied. We prove that such merges and splits on LDPC codes yield codes which are themselves LDPC.
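To make the homological description concrete, the sketch below (with indexing conventions of our own choosing) builds the three-term chain complex C_2 → C_1 → C_0 of an L × L torus, with qubits on edges, and reads off the toric code's parity-check matrices as H_X = ∂_1 and H_Z = ∂_2^T; the CSS condition H_X H_Z^T = 0 (mod 2) is then exactly ∂_1 ∂_2 = 0.

```python
import numpy as np

def toric_code_checks(L):
    """Boundary maps of an L x L torus: faces -> edges -> vertices (mod 2).

    Edges are indexed 0..L*L-1 (horizontal, h(i, j) leaving vertex (i, j)
    to the right) and L*L..2*L*L-1 (vertical, v(i, j) leaving (i, j) down).
    """
    n = 2 * L * L
    h = lambda i, j: (i % L) * L + (j % L)            # horizontal edge index
    v = lambda i, j: L * L + (i % L) * L + (j % L)    # vertical edge index

    # d1: C_1 -> C_0, vertex-edge incidence. Rows are X-checks (stars).
    d1 = np.zeros((L * L, n), dtype=int)
    for i in range(L):
        for j in range(L):
            for e in (h(i, j), h(i, j - 1), v(i, j), v(i - 1, j)):
                d1[i * L + j, e] = 1

    # d2: C_2 -> C_1, face boundaries. Rows of d2^T are Z-checks (plaquettes).
    d2 = np.zeros((n, L * L), dtype=int)
    for i in range(L):
        for j in range(L):
            for e in (h(i, j), h(i + 1, j), v(i, j), v(i, j + 1)):
                d2[e, i * L + j] = 1

    HX, HZ = d1, d2.T
    assert np.all((HX @ HZ.T) % 2 == 0)  # d1 . d2 = 0: boundaries of boundaries vanish
    return HX, HZ

HX, HZ = toric_code_checks(4)
print(HX.shape, HZ.shape)  # (16, 32) (16, 32): 32 qubits, 16 + 16 checks
```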
... Quantum low-density parity-check (qLDPC) codes [1] have become one of the main candidates to implement the error-correction layer of a large-scale quantum computer architecture [2][3][4]. Compared to other families of quantum error correction codes, qLDPC codes may reduce the physical qubit overhead, while protecting a larger number of logical qubits, so higher code rates can be obtained with similar or better error-correction performance [5][6][7][8][9]. Yet, for qLDPC codes to work on a real system, a larger number of physical qubits than those available in today's noisy intermediate-scale quantum systems is required [3], [10]. ...
... Compared to other families of quantum error correction codes, qLDPC codes may reduce the physical qubit overhead while protecting a larger number of logical qubits, so higher code rates can be obtained with similar or better error-correction performance [5][6][7][8][9]. Yet, for qLDPC codes to work on a real system, a larger number of physical qubits than those available in today's noisy intermediate-scale quantum systems is required [3], [10]. Nonetheless, before large-scale quantum technology becomes available, two important problems need to be addressed from the qLDPC decoding perspective: i) devising new decoding algorithms that overcome or mitigate the effect of degeneracy [11], thus providing increased error-correction capabilities, and ii) developing hardware designs that meet the latency and power constraints imposed by the quantum system (e.g., latency values within the decoherence time of the qubits to be protected, or power limitations for qubit technologies requiring cryogenic cooling, when the decoder is implemented within the low-temperature layers), a topic that has only recently received attention [12][13][14]. ...
... Only |Q| extra multiplexors are required to choose between γ′_q = γ_q or γ′_q = 0, and |C| extra multiplexors are required to decide which syndromes belong to s|_{N_k}, depending on the check c_k. This approach of reusing hardware yields higher latency, but may be interesting for a quantum computer with time constraints close to microseconds, e.g., one based on trapped-ion technology [3]. ...
Article
Full-text available
The inherent degeneracy of quantum low-density parity-check codes poses a challenge to their decoding, as it significantly degrades the error-correction performance of classical message-passing decoders. To improve their performance, a post-processing algorithm is usually employed. To narrow the gap between algorithmic solutions and hardware limitations, we introduce a new post-processing algorithm with a hardware-friendly orientation, providing error-correction performance competitive with state-of-the-art techniques. The proposed post-processing, referred to as check-agnosia, is inspired by stabilizer-inactivation, while considerably reducing the required hardware resources and providing enough flexibility to allow different message-passing schedules and hardware architectures. We carry out a detailed analysis for a set of Pareto architectures with different trade-offs between latency and power consumption, derived from the results of implemented designs on an FPGA board. We show that latency values close to one microsecond can be obtained on the FPGA board, and provide evidence that much lower latency values can be obtained for ASIC implementations. In the process, we also demonstrate the practical implications of the recently introduced t-covering layers and random-order layered scheduling.
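For orientation, the following is a minimal, generic syndrome-based min-sum decoder in Python. It is emphatically not the check-agnosia post-processing or any hardware-oriented design from the paper above; it only illustrates the message-passing iteration that such schedules and architectures organize.

```python
import numpy as np

def min_sum_decode(H, syndrome, p=0.05, max_iter=30):
    """Syndrome-based min-sum decoding over a binary check matrix H.

    Assumes every check acts on at least two bits. Returns an estimated
    error vector e with (H @ e) % 2 == syndrome, or the last iterate if
    the decoder fails to converge within max_iter iterations.
    """
    m, n = H.shape
    llr0 = np.log((1 - p) / p)        # prior log-likelihood ratio per bit
    v2c = H * llr0                    # variable-to-check messages
    c2v = np.zeros((m, n))
    sgn_syn = 1 - 2 * syndrome.astype(float)   # +1 if s_c = 0, -1 if s_c = 1

    for _ in range(max_iter):
        # Check update: sign product and min magnitude, excluding v itself.
        for c in range(m):
            vs = np.flatnonzero(H[c])
            msgs = v2c[c, vs]
            sgn = np.sign(msgs)
            sgn[sgn == 0] = 1.0
            total_sgn = np.prod(sgn)
            mag = np.abs(msgs)
            order = np.argsort(mag)
            m1, m2 = mag[order[0]], mag[order[1]]  # two smallest magnitudes
            for idx, v in enumerate(vs):
                excl_min = m2 if idx == order[0] else m1
                c2v[c, v] = sgn_syn[c] * (total_sgn / sgn[idx]) * excl_min
        # Variable update and hard decision.
        post = llr0 + c2v.sum(axis=0)
        v2c = H * (post[None, :] - c2v)
        e = (post < 0).astype(int)
        if np.array_equal((H @ e) % 2, syndrome % 2):
            return e
    return e

# Toy usage: 5-bit repetition code, each check compares adjacent bits.
H = np.array([[1, 1, 0, 0, 0],
              [0, 1, 1, 0, 0],
              [0, 0, 1, 1, 0],
              [0, 0, 0, 1, 1]])
e_true = np.array([0, 0, 1, 0, 0])
assert np.array_equal(min_sum_decode(H, (H @ e_true) % 2), e_true)
```

On a quantum code's Tanner graph (with its unavoidable 4-cycles and degenerate solutions, as discussed in the source publication), this plain iteration often stalls, which is precisely the failure mode that post-processing schemes such as the one above are designed to resolve.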