Deep neural networks for inverse problems in mesoscopic physics: Characterization of the disorder configuration from quantum transport properties

2021 · Vol 104 (7) · Author(s): Gaëtan J. Percebois, Dietmar Weinmann

2019 · Author(s): David Beniaguev, Idan Segev, Michael London

Abstract We introduce a novel approach to study neurons as sophisticated I/O information processing units by utilizing recent advances in the field of machine learning. We trained deep neural networks (DNNs) to mimic the I/O behavior of a detailed nonlinear model of a layer 5 cortical pyramidal cell, receiving rich spatio-temporal patterns of input synapse activations. A Temporally Convolutional DNN (TCN) with seven layers was required to accurately, and very efficiently, capture the I/O of this neuron at the millisecond resolution. This complexity primarily arises from local NMDA-based nonlinear dendritic conductances. The weight matrices of the DNN provide new insights into the I/O function of cortical pyramidal neurons, and the approach presented can provide a systematic characterization of the functional complexity of different neuron types. Our results demonstrate that cortical neurons can be conceptualized as multi-layered “deep” processing units, implying that the cortical networks they form have a non-classical architecture and are potentially more computationally powerful than previously assumed.
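
To make the architecture concrete, here is a minimal PyTorch sketch of a temporally convolutional network in the spirit of the one described above. The layer count of seven follows the abstract, but the channel width, kernel size, number of synapses, and input statistics are illustrative assumptions, not the authors' published configuration.

```python
# Minimal sketch of a temporally convolutional network (TCN) mapping
# spatio-temporal synaptic input patterns to a somatic output trace.
# Hyperparameters (128 channels, kernel size 35, 1278 synapses) are
# illustrative assumptions, not the published configuration.
import torch
import torch.nn as nn

class NeuronTCN(nn.Module):
    def __init__(self, n_synapses=1278, channels=128, kernel_size=35, n_layers=7):
        super().__init__()
        layers = []
        in_ch = n_synapses
        for _ in range(n_layers):
            # Causal padding keeps the output aligned with the input at
            # millisecond resolution: each output bin sees only past inputs.
            layers += [
                nn.ConstantPad1d((kernel_size - 1, 0), 0.0),
                nn.Conv1d(in_ch, channels, kernel_size),
                nn.ReLU(),
            ]
            in_ch = channels
        self.backbone = nn.Sequential(*layers)
        # One output channel: probability of a somatic spike per time bin.
        self.head = nn.Conv1d(channels, 1, 1)

    def forward(self, x):
        # x: (batch, n_synapses, time_bins) binary synaptic activations
        return torch.sigmoid(self.head(self.backbone(x)))

model = NeuronTCN()
spikes = (torch.rand(2, 1278, 500) < 0.01).float()  # toy input patterns
out = model(spikes)  # (2, 1, 500) spike probabilities per millisecond
```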


2021 · pp. 1-60 · Author(s): Khashayar Filom, Roozbeh Farhoodi, Konrad Paul Kording

Abstract Neural networks are versatile tools for computation, having the ability to approximate a broad range of functions. An important problem in the theory of deep neural networks is expressivity; that is, we want to understand the functions that are computable by a given network. We study real, infinitely differentiable (smooth) hierarchical functions implemented by feedforward neural networks via composing simpler functions in two cases: (1) each constituent function of the composition has fewer inputs than the resulting function and (2) constituent functions are in the more specific yet prevalent form of a nonlinear univariate function (e.g., tanh) applied to a linear multivariate function. We establish that in each of these regimes, there exist nontrivial algebraic partial differential equations (PDEs) that are satisfied by the computed functions. These PDEs are purely in terms of the partial derivatives and are dependent only on the topology of the network. Conversely, we conjecture that such PDE constraints, once accompanied by appropriate nonsingularity conditions and perhaps certain inequalities involving partial derivatives, guarantee that the smooth function under consideration can be represented by the network. The conjecture is verified in numerous examples, including the case of tree architectures, which are of neuroscientific interest. Our approach is a step toward formulating an algebraic description of functional spaces associated with specific neural networks, and may provide useful new tools for constructing neural networks.
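
A concrete instance helps fix ideas. The following worked example is a standard computation in the spirit of the tree-architecture case, not a quotation from the paper. If a smooth function $$F(x,y,z) = g(h(x,y), z)$$ is computed by a tree that first combines $$x$$ and $$y$$, then $$F_x = g_1 h_x$$ and $$F_y = g_1 h_y$$, so the ratio $$F_x/F_y = h_x/h_y$$ is independent of $$z$$ wherever $$F_y \neq 0$$. Differentiating in $$z$$ and clearing denominators yields $$\frac{\partial}{\partial z}\left(\frac{F_x}{F_y}\right) = 0 \quad\Longleftrightarrow\quad F_{xz}\,F_y - F_x\,F_{yz} = 0,$$ an algebraic PDE in the partial derivatives that depends only on the topology of the tree, not on the particular constituent functions $$g$$ and $$h$$.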


2019 · Vol 62 (3) · pp. 445-455 · Author(s): Johannes Schwab, Stephan Antholzer, Markus Haltmeier

Abstract Deep learning and (deep) neural networks are emerging tools to address inverse problems and image reconstruction tasks. Despite outstanding performance, the mathematical analysis for solving inverse problems by neural networks is mostly missing. In this paper, we introduce and rigorously analyze families of deep regularizing neural networks (RegNets) of the form $$\mathbf{B}_\alpha + \mathbf{N}_{\theta(\alpha)} \mathbf{B}_\alpha$$, where $$\mathbf{B}_\alpha$$ is a classical regularization and the network $$\mathbf{N}_{\theta(\alpha)} \mathbf{B}_\alpha$$ is trained to recover the missing part $$\mathrm{Id}_X - \mathbf{B}_\alpha$$ not found by the classical regularization. We show that these regularizing networks yield a convergent regularization method for solving inverse problems. Additionally, we derive convergence rates (quantitative error estimates) assuming a sufficient decay of the associated distance function. We demonstrate that our results recover existing convergence and convergence rates results for filter-based regularization methods as well as the recently introduced null space network as special cases. Numerical results are presented for a tomographic sparse data problem, which clearly demonstrate that the proposed RegNets improve classical regularization as well as the null space network.
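
The additive form of a RegNet translates directly into code. Below is a minimal NumPy sketch assuming a generic linear forward operator, Tikhonov regularization as the classical $$\mathbf{B}_\alpha$$, and a placeholder network; none of these specific choices is prescribed by the paper.

```python
# Minimal sketch of the RegNet structure B_alpha + N_theta(alpha) B_alpha.
# The forward operator, regularizer, and network are illustrative
# stand-ins, not the configuration analyzed in the paper.
import numpy as np

def tikhonov(A, y, alpha):
    """Classical regularization B_alpha: Tikhonov-regularized inverse of A."""
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + alpha * np.eye(n), A.T @ y)

def regnet_reconstruct(A, y, alpha, network):
    """RegNet reconstruction: classical estimate plus a learned correction.

    `network` is assumed to be trained so that network(B_alpha(y))
    approximates the part of the signal missed by the classical
    regularizer, in the spirit of Id_X - B_alpha.
    """
    x_reg = tikhonov(A, y, alpha)   # B_alpha applied to the data
    return x_reg + network(x_reg)   # B_alpha + N_theta(alpha) B_alpha

# Toy usage with a zero "network" (an untrained placeholder, so the
# result reduces to the classical Tikhonov reconstruction):
rng = np.random.default_rng(0)
A = rng.standard_normal((40, 60))   # underdetermined forward operator
x_true = rng.standard_normal(60)
y = A @ x_true
x_hat = regnet_reconstruct(A, y, 1e-2, network=lambda x: np.zeros_like(x))
```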


2021 · Vol 14 (2) · pp. 470-505 · Author(s): Tatiana A. Bubba, Mathilde Galinier, Matti Lassas, Marco Prato, Luca Ratti, ...

2018 · Vol 35 (1) · pp. 20-36 · Author(s): Alice Lucas, Michael Iliadis, Rafael Molina, Aggelos K. Katsaggelos
