Long-range coupling and scalable architecture for superconducting flux qubits

2008 ◽  
Vol 86 (4) ◽  
pp. 533-540
Author(s):  
A G Fowler ◽  
W F Thompson ◽  
Z Yan ◽  
A M Stephens ◽  
B L.T. Plourde ◽  
...  

Constructing a fault-tolerant quantum computer is a daunting task. Given any design, it is possible to determine the maximum error rate of each type of component that can be tolerated while still permitting arbitrarily large-scale quantum computation. It is an under-appreciated fact that including an appropriately designed mechanism enabling long-range qubit coupling or transport substantially increases the maximum tolerable error rates of all components. With this in mind, we take the superconducting flux qubit coupling mechanism described in Plourde et al. (Phys. Rev. B, 70, 140501(R) (2004)) and extend it to allow approximately 500 MHz coupling of square flux qubits, 50 µm on a side, at distances of up to several millimetres. This mechanism is then used as the basis of two scalable flux qubit architectures that take into account crosstalk and fault-tolerance considerations such as permitting a universal set of logical gates, parallelism, measurement and initialization, and data mobility.
PACS No.: 03.67.Lx

2003 ◽  
Vol 01 (01) ◽  
pp. 1-23 ◽  
Author(s):  
VLATKO VEDRAL

In the first part of this review we introduce the basic theory behind geometric phases and emphasize their importance in quantum theory. The subject is presented in a general way so as to illustrate its wide applicability, but we also introduce a number of examples that will help the reader understand the basic issues involved. In the second part we show how to perform a universal quantum computation using only geometric effects appearing in quantum phases. Finally, we discuss how this geometric way of performing quantum gates can lead to a stable, large-scale, intrinsically fault-tolerant quantum computer.
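As a purely illustrative numeric sketch (not taken from the review), the canonical example of a geometric phase is a spin-1/2 whose quantization axis is dragged around a cone of polar angle θ: the Berry phase equals minus half the enclosed solid angle, γ = −π(1 − cos θ), and the standard discrete overlap product reproduces it:

```python
import cmath
import math

def spinor(theta, phi):
    """Spin-1/2 eigenstate aligned with the field direction (theta, phi)."""
    return (math.cos(theta / 2), cmath.exp(1j * phi) * math.sin(theta / 2))

def berry_phase(theta, steps=1000):
    """Discrete Berry phase for one full loop of the field at fixed theta.

    Accumulates the phase of the overlaps <psi_k|psi_{k+1}> around the
    closed loop; as steps grows this converges to -pi*(1 - cos(theta)).
    """
    product = 1 + 0j
    for k in range(steps):
        a = spinor(theta, 2 * math.pi * k / steps)
        b = spinor(theta, 2 * math.pi * (k + 1) / steps)
        # overlap <a|b> = conj(a0)*b0 + conj(a1)*b1
        product *= a[0].conjugate() * b[0] + a[1].conjugate() * b[1]
    return -cmath.phase(product)
```

The geometric character shows up in the fact that the result depends only on the solid angle traced out, not on how fast the loop is traversed or how it is discretized.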


2021 ◽  
Author(s):  
Zi-Jun Quan ◽  
Si-Ang Li ◽  
Zhi-Xue Yang ◽  
Juan-Juan Zhao ◽  
Guo-Hua Li ◽  
...  

To achieve the enormous potential of gene-editing technology in clinical therapies, both the on-target and unintended editing consequences need to be thoroughly evaluated. However, there is a lack of a comprehensive, pipelined, large-scale and economical workflow for detecting genome editing outcomes, in particular insertion or deletion of a large fragment. Here, we describe an approach for efficient and accurate detection of multiple genetic changes after CRISPR-Cas9 editing by pooled nanopore sequencing of barcoded long-range PCR products. To overcome the high error rates and indels of nanopore sequencing, we developed a pipeline to capture the barcoded sequences by grepping reads of nanopore amplicon sequencing (GREPore-seq). GREPore-seq can detect NHEJ-mediated double-stranded oligodeoxynucleotide (dsODN) insertions with accuracy comparable to Illumina next-generation sequencing (NGS). GREPore-seq also identifies HDR-mediated large gene knock-in, which correlates excellently with FACS analysis data. Low-level plasmid backbone insertion after HDR editing was also detected. We have established a practical workflow to identify genetic changes, including quantifying dsODN insertions, knock-ins, plasmid backbone insertions, and large fragment deletions after CRISPR editing. This toolkit for nanopore sequencing of pooled long amplicons should have broad applications in assessing on-target HDR editing and inadvertent large indels of over 1 kb. GREPore-seq is freely available at GitHub (https://github.com/lisiang/GREPore-seq).
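The core "grep the barcodes" idea can be sketched in a few lines of plain Python. This is not the published GREPore-seq pipeline; the barcodes and the dsODN tag below are hypothetical, and a real workflow would also tolerate nanopore mismatches rather than require exact substring hits:

```python
from collections import defaultdict

def demultiplex(reads, barcodes, window=120):
    """Assign each read to the sample whose barcode occurs in its first
    `window` bases (a crude stand-in for grep-style barcode capture)."""
    bins = defaultdict(list)
    for read in reads:
        head = read[:window]
        for sample, bc in barcodes.items():
            if bc in head:
                bins[sample].append(read)
                break
    return bins

def dsodn_fraction(reads, dsodn):
    """Fraction of reads carrying the dsODN tag in either orientation."""
    rc = dsodn[::-1].translate(str.maketrans("ACGT", "TGCA"))
    hits = sum(1 for r in reads if dsodn in r or rc in r)
    return hits / len(reads) if reads else 0.0

# toy example with hypothetical barcodes and tag
barcodes = {"s1": "AACCGGTT", "s2": "TTGGCCAA"}
reads = ["AACCGGTT" + "ACGT" * 10 + "GTTTAATT",
         "TTGGCCAA" + "ACGT" * 10]
bins = demultiplex(reads, barcodes)
```

Per-sample dsODN insertion frequencies then fall out directly, e.g. `dsodn_fraction(bins["s1"], "GTTTAATT")`.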


2010 ◽  
Vol 08 (01n02) ◽  
pp. 295-323 ◽  
Author(s):  
RODNEY VAN METER ◽  
THADDEUS D. LADD ◽  
AUSTIN G. FOWLER ◽  
YOSHIHISA YAMAMOTO

In a large-scale quantum computer, the cost of communications will dominate the performance and resource requirements, place many severe demands on the technology, and constrain the architecture. Unfortunately, fault-tolerant computers based entirely on photons with probabilistic gates, though equipped with "built-in" communication, have very large resource overheads; likewise, computers with reliable probabilistic gates between photons or quantum memories may lack sufficient communication resources in the presence of realistic optical losses. Here, we consider a compromise architecture, in which semiconductor spin qubits are coupled by bright laser pulses through nanophotonic waveguides and cavities using a combination of frequent probabilistic and sparse deterministic entanglement mechanisms. The large photonic resource requirements incurred by the use of probabilistic gates for quantum communication are mitigated in part by the potential high-speed operation of the semiconductor nanophotonic hardware. The system employs topological cluster-state quantum error correction for achieving fault-tolerance. Our results suggest that such an architecture/technology combination has the potential to scale to a system capable of attacking classically intractable computational problems.


1975 ◽  
Vol 14 (01) ◽  
pp. 32-34
Author(s):  
Elisabeth Schach

Data reporting the experience with an optical mark page reader (IBM 1231 N1) are presented. Information from 52,000 persons was gathered in seven countries, coded decentrally and processed centrally. Reader performance rates (i.e. sheets read per hour, sheet rejection rates, reading error rates) and costs (coding, verification, reading, etc.) are given.


Nature ◽  
2021 ◽  
Vol 595 (7867) ◽  
pp. 383-387
Author(s):  
Zijun Chen ◽  
Kevin J. Satzinger ◽  
Juan Atalaya ◽  
Alexander N. Korotkov ◽  
...  

Realizing the potential of quantum computing requires sufficiently low logical error rates [1]. Many applications call for error rates as low as 10⁻¹⁵ (refs. 2–9), but state-of-the-art quantum platforms typically have physical error rates near 10⁻³ (refs. 10–14). Quantum error correction [15–17] promises to bridge this divide by distributing quantum logical information across many physical qubits in such a way that errors can be detected and corrected. Errors on the encoded logical qubit state can be exponentially suppressed as the number of physical qubits grows, provided that the physical error rates are below a certain threshold and stable over the course of a computation. Here we implement one-dimensional repetition codes embedded in a two-dimensional grid of superconducting qubits that demonstrate exponential suppression of bit-flip or phase-flip errors, reducing logical error per round more than 100-fold when increasing the number of qubits from 5 to 21. Crucially, this error suppression is stable over 50 rounds of error correction. We also introduce a method for analysing error correlations with high precision, allowing us to characterize error locality while performing quantum error correction. Finally, we perform error detection with a small logical qubit using the 2D surface code on the same device [18,19] and show that the results from both one- and two-dimensional codes agree with numerical simulations that use a simple depolarizing error model. These experimental demonstrations provide a foundation for building a scalable fault-tolerant quantum computer with superconducting qubits.
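The exponential-suppression mechanism can be illustrated with a deliberately simplified model (not the experiment's multi-round decoder): for a distance-d repetition code under independent bit-flips of probability p per qubit, a majority vote fails only when more than half the qubits flip, so the logical error rate shrinks rapidly with d whenever p is below threshold:

```python
from math import comb

def logical_error(p, d):
    """Logical error rate of a distance-d repetition code under a single
    round of independent bit-flips with probability p, decoded by
    majority vote: the vote fails iff more than d/2 qubits flip."""
    return sum(comb(d, k) * p**k * (1 - p)**(d - k)
               for k in range((d + 1) // 2, d + 1))

# well below threshold (p = 1%), each distance increase helps
rates = [logical_error(0.01, d) for d in (3, 5, 7, 9)]
```

Each step from d to d+2 multiplies the leading term by roughly (p / p_th) up to combinatorial factors, which is the exponential suppression the experiment demonstrates, there with repeated stabilizer measurement rounds rather than this one-shot toy model.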


2021 ◽  
Vol 9 (1) ◽  
Author(s):  
Moritz Mercker ◽  
Philipp Schwemmer ◽  
Verena Peschko ◽  
Leonie Enners ◽  
Stefan Garthe

Background: New wildlife telemetry and tracking technologies have become available in the last decade, leading to a large increase in the volume and resolution of animal tracking data. These technical developments have been accompanied by various statistical tools aimed at analysing the data obtained by these methods. Methods: We used simulated habitat and tracking data to compare some of the different statistical methods frequently used to infer local resource selection and large-scale attraction/avoidance from tracking data. Specifically, we compared spatial logistic regression models (SLRMs), spatio-temporal point process models (ST-PPMs), step selection models (SSMs), and integrated step selection models (iSSMs), and their interplay with habitat and animal movement properties, in terms of statistical hypothesis testing. Results: Only iSSMs and ST-PPMs showed nominal type I error rates in all studied cases, whereas SSMs slightly, and SLRMs frequently and strongly, exceeded these levels. iSSMs also appeared to have, on average, more robust and higher statistical power than ST-PPMs. Conclusions: Based on our results, we recommend the use of iSSMs to infer habitat selection or large-scale attraction/avoidance from animal tracking data. Further advantages over other approaches include short computation times, predictive capacity, and the possibility of deriving mechanistic movement models.
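A minimal sketch of the conditional-likelihood machinery underlying step selection models may help fix ideas. This is not the authors' pipeline (iSSMs additionally estimate movement-kernel parameters jointly with selection); it simulates an animal choosing among candidate steps with preference β for a single habitat covariate, then recovers β by maximizing the conditional logistic likelihood:

```python
import math
import random

def simulate_steps(beta_true, n_steps=400, n_alt=10, seed=1):
    """At each step the animal picks one of n_alt candidate locations
    with probability proportional to exp(beta_true * habitat_value)."""
    rng = random.Random(seed)
    strata = []
    for _ in range(n_steps):
        x = [rng.gauss(0, 1) for _ in range(n_alt)]
        w = [math.exp(beta_true * xi) for xi in x]
        used = rng.choices(range(n_alt), weights=w)[0]
        strata.append((x, used))
    return strata

def neg_log_lik(beta, strata):
    """Conditional logistic (SSM-style) negative log-likelihood:
    each stratum contributes -log of the used step's softmax weight."""
    nll = 0.0
    for x, used in strata:
        log_z = math.log(sum(math.exp(beta * xi) for xi in x))
        nll -= beta * x[used] - log_z
    return nll

def fit_beta(strata, lo=-5.0, hi=5.0, iters=60):
    """Golden-section minimisation; the 1-D likelihood is convex."""
    g = (math.sqrt(5) - 1) / 2
    a, b = lo, hi
    for _ in range(iters):
        c, d = b - g * (b - a), a + g * (b - a)
        if neg_log_lik(c, strata) < neg_log_lik(d, strata):
            b = d
        else:
            a = c
    return (a + b) / 2

beta_hat = fit_beta(simulate_steps(1.0))
```

Matching each used step only against its own stratum of available steps is what gives SSM-family methods their conditioning on the animal's current position, in contrast to the pooled design of an SLRM.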


Diagnostics ◽  
2021 ◽  
Vol 11 (8) ◽  
pp. 1384
Author(s):  
Yin Dai ◽  
Yifan Gao ◽  
Fayu Liu

Over the past decade, convolutional neural networks (CNN) have shown very competitive performance in medical image analysis tasks, such as disease classification, tumor segmentation, and lesion detection. CNN has great advantages in extracting local features of images. However, due to the locality of convolution operation, it cannot deal with long-range relationships well. Recently, transformers have been applied to computer vision and achieved remarkable success in large-scale datasets. Compared with natural images, multi-modal medical images have explicit and important long-range dependencies, and effective multi-modal fusion strategies can greatly improve the performance of deep models. This prompts us to study transformer-based structures and apply them to multi-modal medical images. Existing transformer-based network architectures require large-scale datasets to achieve better performance. However, medical imaging datasets are relatively small, which makes it difficult to apply pure transformers to medical image analysis. Therefore, we propose TransMed for multi-modal medical image classification. TransMed combines the advantages of CNN and transformer to efficiently extract low-level features of images and establish long-range dependencies between modalities. We evaluated our model on two datasets, parotid gland tumor classification and knee injury classification. Combining our contributions, we achieve an improvement of 10.1% and 1.9% in average accuracy, respectively, outperforming other state-of-the-art CNN-based models. The results of the proposed method are promising and have tremendous potential to be applied to a large number of medical image analysis tasks. To the best of our knowledge, this is the first work to apply transformers to multi-modal medical image classification.
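The long-range mechanism the abstract contrasts with convolutional locality is scaled dot-product attention: every position attends to every other position in one step, regardless of distance. The sketch below is a minimal dependency-free illustration of that operation, not TransMed's actual architecture (which stacks such layers on top of CNN feature extractors):

```python
import math

def softmax(xs):
    """Numerically stable softmax over a list of scores."""
    m = max(xs)
    es = [math.exp(x - m) for x in xs]
    s = sum(es)
    return [e / s for e in es]

def attention(queries, keys, values):
    """Scaled dot-product attention over lists of feature vectors.
    Each query mixes all value vectors, weighted by query-key similarity,
    so any two positions interact directly regardless of distance."""
    d = len(keys[0])
    out = []
    for q in queries:
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d)
                  for k in keys]
        w = softmax(scores)
        out.append([sum(wi * v[j] for wi, v in zip(w, values))
                    for j in range(len(values[0]))])
    return out
```

A convolution would need depth proportional to the separation between two patches before they influence each other; here one layer suffices, which is why attention is attractive for explicit cross-modality dependencies.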


2021 ◽  
pp. 115738
Author(s):  
KyoHoon Jin ◽  
JeongA Wi ◽  
EunJu Lee ◽  
ShinJin Kang ◽  
SooKyun Kim ◽  
...  
