Approximate low-rank factorization with structured factors

2010 ◽  
Vol 54 (12) ◽  
pp. 3411-3420 ◽  
Author(s):  
Ivan Markovsky ◽  
Mahesan Niranjan
Keyword(s):  
Low Rank ◽  
2021 ◽  
Vol 7 (1) ◽  
Author(s):  
Mario Motta ◽  
Erika Ye ◽  
Jarrod R. McClean ◽  
Zhendong Li ◽  
Austin J. Minnich ◽  
...  

Abstract
The quantum simulation of quantum chemistry is a promising application of quantum computers. However, for N molecular orbitals, the $\mathcal{O}(N^4)$ gate complexity of performing Hamiltonian and unitary Coupled Cluster Trotter steps makes simulation based on such primitives challenging. We substantially reduce the gate complexity of such primitives through a two-step low-rank factorization of the Hamiltonian and cluster operator, accompanied by truncation of small terms. Using truncations that incur errors below chemical accuracy, one can perform Trotter steps of the arbitrary-basis electronic structure Hamiltonian with $\mathcal{O}(N^3)$ gate complexity in small simulations, which reduces to $\mathcal{O}(N^2)$ gate complexity in the asymptotic regime, and unitary Coupled Cluster Trotter steps with $\mathcal{O}(N^3)$ gate complexity as a function of increasing basis size for a given molecule. In the case of the Hamiltonian Trotter step, these circuits have $\mathcal{O}(N^2)$ depth on a linearly connected array, an improvement over the $\mathcal{O}(N^3)$ scaling assuming no truncation. As a practical example, we show that a chemically accurate Hamiltonian Trotter step for a 50-qubit molecular simulation can be carried out in the molecular orbital basis with as few as 4000 layers of parallel nearest-neighbor two-qubit gates, consisting of fewer than $10^5$ non-Clifford rotations. We also apply our algorithm to iron–sulfur clusters relevant for elucidating the mode of action of metalloenzymes.
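The two-step ("double") low-rank factorization described in this abstract can be sketched in NumPy on a toy tensor. The sizes, the symmetric-generator construction, and the truncation thresholds below are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

# Hedged sketch: two-step low-rank factorization with truncation of small terms.
rng = np.random.default_rng(0)
N, K = 6, 4  # N: number of orbitals (toy size); K: true rank of the demo tensor

# Build a PSD (N^2 x N^2) matrix M from K random symmetric generators, mimicking
# the reshaped two-electron integrals V[(pq),(rs)] for real orbitals.
gens = [rng.standard_normal((N, N)) for _ in range(K)]
gens = [(S + S.T) / 2 for S in gens]
M = sum(np.outer(S.ravel(), S.ravel()) for S in gens)

# Step 1: eigendecompose M = sum_l w_l g_l g_l^T and drop small eigenvalues.
w, g = np.linalg.eigh(M)
keep = w > 1e-10 * w.max()
w, g = w[keep], g[:, keep]

# Step 2: reshape each kept g_l into an N x N symmetric matrix, eigendecompose
# again, and truncate small inner eigenvalues (the second low-rank step).
approx = np.zeros_like(M)
for wl, gl in zip(w, g.T):
    G = gl.reshape(N, N)
    G = (G + G.T) / 2
    lam, U = np.linalg.eigh(G)
    inner = np.abs(lam) > 1e-10 * np.abs(lam).max()
    Gt = (U[:, inner] * lam[inner]) @ U[:, inner].T
    approx += wl * np.outer(Gt.ravel(), Gt.ravel())

rel_err = np.linalg.norm(approx - M) / np.linalg.norm(M)
outer_rank = w.size  # rank retained by the first factorization
```

Because the demo tensor is exactly rank K, both truncations are lossless here; for real integrals the thresholds trade error against the gate counts quoted above.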


Quantum ◽  
2019 ◽  
Vol 3 ◽  
pp. 208 ◽  
Author(s):  
Dominic W. Berry ◽  
Craig Gidney ◽  
Mario Motta ◽  
Jarrod R. McClean ◽  
Ryan Babbush

Recent work has dramatically reduced the gate complexity required to quantum simulate chemistry by using linear-combination-of-unitaries based methods to exploit structure in the plane wave basis Coulomb operator. Here, we show that one can achieve similar scaling even for arbitrary basis sets (which can be hundreds of times more compact than plane waves) by using qubitized quantum walks in a fashion that takes advantage of structure in the Coulomb operator, either by directly exploiting sparseness, or via a low-rank tensor factorization. We provide circuits for several variants of our algorithm (which all improve over the scaling of prior methods) including one with $\tilde{O}(N^{3/2}\lambda)$ T complexity, where N is the number of orbitals and λ is the 1-norm of the chemistry Hamiltonian. We deploy our algorithms to simulate the FeMoco molecule (relevant to nitrogen fixation) and obtain circuits requiring about seven hundred times less surface code spacetime volume than prior quantum algorithms for this system, despite our use of a larger and more accurate active space.
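As a point of notation, the λ in the $\tilde{O}(N^{3/2}\lambda)$ T count is the 1-norm of the Hamiltonian's coefficients. A minimal sketch of that quantity on toy one- and two-body integrals follows; the names `h` and `V` and the 1/2 weighting are assumptions for illustration, not necessarily the paper's exact convention:

```python
import numpy as np

# Hedged sketch: lambda as the 1-norm of Hamiltonian coefficients.
# h[p,q]: one-body integrals; V[p,q,r,s]: two-body integrals (toy random data).
rng = np.random.default_rng(1)
N = 4
h = rng.standard_normal((N, N))
h = (h + h.T) / 2                      # Hermitian one-body part
V = rng.standard_normal((N, N, N, N))
V = (V + V.transpose(2, 3, 0, 1)) / 2  # (pq|rs) = (rs|pq) symmetry

lam = np.abs(h).sum() + 0.5 * np.abs(V).sum()  # 1-norm of all coefficients
```

Since λ multiplies the T count directly, factorizations and truncations that shrink the coefficient 1-norm translate straight into cheaper circuits.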


2019 ◽  
Vol 12 (1) ◽  
pp. 72 ◽
Author(s):  
Sergey Voronin

We describe a simple approach for improving noisy, blurred images. It combines a parallel, block-based low-rank factorization for projection-based reduction of matrix dimensions with a customized iteratively reweighted conjugate gradient (CG) solver, followed by a Fourier-domain Wiener filter. The regularization scheme with a transform basis offers a variable residual penalty and increased per-iteration performance. The approach is particularly aimed at high-blur, high-noise settings.
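The final stage of the pipeline above, Wiener filtering in the Fourier domain, can be sketched on its own. The test image, blur kernel, and noise-to-signal ratio `nsr` below are illustrative assumptions, not the paper's configuration:

```python
import numpy as np

# Hedged sketch: Fourier-domain Wiener deconvolution of a blurred, noisy image.
rng = np.random.default_rng(2)
n = 32
yy, xx = np.mgrid[0:n, 0:n]
img = np.sin(2 * np.pi * 2 * xx / n) * np.cos(2 * np.pi * 3 * yy / n)

# 7-tap Gaussian blur kernel, zero-padded and rolled so it is centered at
# pixel (0, 0), matching the circular convolution performed by the FFT.
t = np.exp(-0.5 * (np.arange(-3, 4) / 1.5) ** 2)
k2 = np.outer(t, t)
kernel = np.zeros_like(img)
kernel[:7, :7] = k2 / k2.sum()
kernel = np.roll(kernel, (-3, -3), axis=(0, 1))

H = np.fft.fft2(kernel)                 # blur transfer function
blurred = np.real(np.fft.ifft2(H * np.fft.fft2(img)))
noisy = blurred + 0.01 * rng.standard_normal(img.shape)

nsr = 1e-3  # assumed noise-to-signal ratio (the Wiener regularizer)
restored = np.real(np.fft.ifft2(np.conj(H) * np.fft.fft2(noisy)
                                / (np.abs(H) ** 2 + nsr)))

err_blurred = np.linalg.norm(noisy - img)
err_restored = np.linalg.norm(restored - img)
```

The `nsr` term keeps the filter from dividing by near-zero frequencies of `H`, which is what makes plain inverse filtering unusable at high noise.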


2018 ◽  
Vol 97 (1) ◽  
Author(s):  
Lucas Kohn ◽  
Ferdinand Tschirsich ◽  
Maximilian Keck ◽  
Martin B. Plenio ◽  
Dario Tamascelli ◽  
...  

2021 ◽  
Vol 68 (1) ◽  
Author(s):  
Dina Tantawy ◽  
Mohamed Zahran ◽  
Amr Wassal

Abstract
Since their invention, generative adversarial networks (GANs) have shown outstanding results in many applications. GANs are powerful yet resource-hungry deep learning models. The main differences between GANs and ordinary deep learning models are the nature of their output and their training instability: a GAN's output can be a whole image, whereas other models detect objects or classify images. Thus, the architecture and numeric precision of the network affect both the quality and the speed of the solution, and accelerating GANs is pivotal. Data transfer is considered the main source of energy consumption, which is why memory compression is a very efficient technique for accelerating and optimizing GANs. Two main types of memory compression exist: lossless and lossy. Lossless compression techniques are general across all models, so this paper focuses on lossy techniques. Lossy compression techniques are further classified into (a) pruning, (b) knowledge distillation, (c) low-rank factorization, (d) lowering numeric precision, and (e) encoding. In this paper, we survey lossy compression techniques for CNN-based GANs. Our findings show the superiority of knowledge distillation over pruning alone and identify gaps in the field that need to be explored, such as encoding and different combinations of compression techniques.
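Of the surveyed techniques, low-rank factorization is the easiest to illustrate in isolation: a dense weight matrix is replaced by two thin factors obtained from a truncated SVD. The sizes, rank, and synthetic near-low-rank weights below are assumptions for the demo, not drawn from the survey:

```python
import numpy as np

# Hedged sketch: compressing one dense layer's weights by truncated SVD.
# W (n_out x n_in) is replaced by thin factors A (n_out x r) and B (r x n_in),
# cutting storage and multiply-accumulates when r * (n_out + n_in) < n_out * n_in.
rng = np.random.default_rng(3)
n_out, n_in, r = 64, 128, 8

# Synthetic near-low-rank weight matrix (assumption for the demo).
W = rng.standard_normal((n_out, r)) @ rng.standard_normal((r, n_in))
W += 1e-3 * rng.standard_normal((n_out, n_in))  # small full-rank residual

U, s, Vt = np.linalg.svd(W, full_matrices=False)
A = U[:, :r] * s[:r]          # n_out x r factor (singular values folded in)
B = Vt[:r]                    # r x n_in factor

x = rng.standard_normal(n_in)
y_full = W @ x                # original layer
y_lr = A @ (B @ x)            # factored layer, applied as two thin matmuls

rel_err = np.linalg.norm(y_full - y_lr) / np.linalg.norm(y_full)
params_full = n_out * n_in
params_lr = r * (n_out + n_in)
```

In a real network the factored layer is usually fine-tuned afterward to recover accuracy, and the rank r is chosen per layer from the singular-value spectrum.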

