Analytic continuation of noisy data using Adams-Bashforth residual neural network

2021 ◽  
Vol 0 (0) ◽  
pp. 0
Author(s):  
Xuping Xie ◽  
Feng Bao ◽  
Thomas Maier ◽  
Clayton Webster

We propose a data-driven learning framework for the analytic continuation problem in numerical quantum many-body physics. Designing an accurate and efficient framework for the analytic continuation of imaginary-time data is a grand challenge that has hindered meaningful links with experimental data. The standard Maximum Entropy (MaxEnt) method is limited by the quality of the computational data and the availability of prior information, and it cannot solve the inversion problem when the noise level in the data is high. Here we introduce a novel learning model for the analytic continuation problem using an Adams-Bashforth residual neural network (AB-ResNet). The advantage of this deep learning network is that it is model independent and therefore does not require prior information about the quantity of interest, the spectral function. More importantly, the ResNet-based model achieves higher accuracy than MaxEnt on data with higher noise levels. Finally, numerical examples show that the developed AB-ResNet recovers the spectral function with accuracy comparable to MaxEnt when the noise level is relatively small.
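The idea behind an Adams-Bashforth residual network is to replace the plain ResNet update x_{n+1} = x_n + f(x_n) with a two-step Adams-Bashforth time-stepping rule. A minimal NumPy sketch of the forward pass, where the residual block `f`, the step size `h`, and the layer dimensions are illustrative assumptions rather than the authors' architecture:

```python
import numpy as np

def ab2_resnet_forward(x0, weights, h=0.1):
    """Forward pass of a toy two-step Adams-Bashforth residual network.

    Plain ResNet:        x_{n+1} = x_n + f_n(x_n)
    AB2-ResNet:          x_{n+1} = x_n + h * (3/2 f_n(x_n) - 1/2 f_{n-1}(x_{n-1}))
    """
    def f(x, W):
        return np.tanh(W @ x)          # illustrative residual block

    # Bootstrap the first step with forward Euler (no previous residual yet).
    f_prev = f(x0, weights[0])
    x = x0 + h * f_prev
    for W in weights[1:]:
        f_curr = f(x, W)
        x = x + h * (1.5 * f_curr - 0.5 * f_prev)
        f_prev = f_curr
    return x

rng = np.random.default_rng(0)
weights = [rng.standard_normal((4, 4)) * 0.1 for _ in range(3)]
out = ab2_resnet_forward(np.ones(4), weights)
print(out.shape)  # (4,)
```

With h = 0 every update vanishes and the input passes through unchanged, which is a quick sanity check that the stepping rule is wired correctly.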

Mathematics ◽  
2021 ◽  
Vol 9 (18) ◽  
pp. 2255
Author(s):  
Xuemin Xue ◽  
Xiangtuan Xiong

In this paper, the numerical analytic continuation problem is addressed and a fractional Tikhonov regularization method is proposed. The fractional Tikhonov regularization not only eases the analysis of the ill-posedness of the continuation problem but also yields more accurate numerical results when the solution is discontinuous. This article mainly discusses a posteriori parameter-selection rules for the fractional Tikhonov regularization method, and an error estimate is given. Furthermore, numerical results show that the proposed method works effectively.
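In one common filter-factor formulation of fractional Tikhonov regularization (not necessarily the exact variant of this paper), the singular values are damped as s^(γ+1) / (s^(γ+1) + μ), which reduces to classical Tikhonov at γ = 1. A small NumPy sketch under that assumption, with an illustrative toy operator:

```python
import numpy as np

def fractional_tikhonov(A, b, mu, gamma=1.0):
    """Fractional Tikhonov regularization via SVD filter factors.

    Filter factors: f_i = s_i**(gamma+1) / (s_i**(gamma+1) + mu),
    so gamma = 1 recovers classical Tikhonov f_i = s_i**2 / (s_i**2 + mu).
    """
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    filt = s ** (gamma + 1) / (s ** (gamma + 1) + mu)
    return Vt.T @ (filt * (U.T @ b) / s)

# Mildly ill-conditioned toy problem (Vandermonde matrix).
A = np.vander(np.linspace(0, 1, 8), 5, increasing=True)
x_true = np.array([1.0, -2.0, 0.5, 0.0, 1.5])
b = A @ x_true
x_reg = fractional_tikhonov(A, b, mu=1e-8, gamma=0.5)
```

A useful check of the filter is that γ = 1 coincides with the closed-form Tikhonov solution (AᵀA + μI)⁻¹Aᵀb.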


Electronics ◽  
2019 ◽  
Vol 8 (12) ◽  
pp. 1544
Author(s):  
Yu Wang ◽  
Shuyang Ma ◽  
Xuanjing Shen

To reduce the computational cost of the training and testing phases of video face recognition methods based on global statistical methods or deep learning networks, this paper proposes a novel video face verification algorithm based on the three-patch local binary pattern (TPLBP) and a 3D Siamese convolutional neural network. The proposed method takes the TPLBP texture feature, which performs well in face analysis, as the input of the network. To capture inter-frame information from the video, the texture feature maps of multiple frames are stacked, and a shallow Siamese 3D convolutional neural network is then used for dimension reduction. The network computes the similarity of the high-level features of a video pair, which is mapped to the interval [0, 1] by a linear transformation; the classification result is obtained with a threshold of 0.5. In experiments on the YouTube Faces database, the proposed algorithm achieved higher accuracy with lower computational cost than the baseline and deep learning methods.
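The decision stage described above, mapping a pairwise similarity linearly into [0, 1] and thresholding at 0.5, can be sketched in a few lines. The cosine similarity and the stand-in feature vectors here are illustrative assumptions; in the paper the features come from the Siamese 3D CNN:

```python
import numpy as np

def verify_pair(feat_a, feat_b, threshold=0.5):
    """Compare two high-level feature vectors, map the similarity
    linearly from [-1, 1] to [0, 1], and threshold at 0.5."""
    sim = feat_a @ feat_b / (np.linalg.norm(feat_a) * np.linalg.norm(feat_b))
    score = (sim + 1.0) / 2.0          # linear map onto [0, 1]
    return score, bool(score > threshold)

a = np.array([1.0, 0.5, 0.2])
score_same, same = verify_pair(a, a)    # identical features: score near 1.0
score_opp, opp = verify_pair(a, -a)     # opposite features: score near 0.0
```

Identical features are accepted and opposite ones rejected, which exercises both sides of the 0.5 threshold.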


Author(s):  
D Tamil Priya ◽  
J Divya Udayan

Deep learning has become the most popular and fastest-growing branch of machine learning based on artificial neural networks. The convolutional neural network (CNN) is one of the deep learning architectures widely applied to image analysis and image classification. In this paper, we propose a novel emotion learning model built on a deep learning network. The aim of the model is to reduce the affective gap by semantically extracting the object and background features of an image at both high and low levels. Combining these extracted features with a few others makes the emotion prediction model, based on the visual concepts of the image, more effective and leads to better emotion recognition performance. For training and testing, experiments are conducted on the IAPS (International Affective Picture System) dataset, the Artistic Photos dataset, and the Emotion-Image dataset. Experimental results show that the proposed model, which combines visual-content and low-level features of the image, provides promising results for the affective emotion classification task.
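The fusion step, combining semantic (object, background) and low-level image descriptors into one vector before classification, can be sketched as follows. The feature dimensions, the random stand-in features, and the four-class linear softmax head are all hypothetical placeholders for the paper's CNN pipeline:

```python
import numpy as np

def fuse_features(object_feat, background_feat, low_level_feat):
    """Concatenate semantic and low-level descriptors into one vector
    for the downstream emotion classifier."""
    return np.concatenate([object_feat, background_feat, low_level_feat])

def softmax(z):
    e = np.exp(z - z.max())            # subtract max for numerical stability
    return e / e.sum()

rng = np.random.default_rng(1)
obj, bg, low = rng.random(8), rng.random(8), rng.random(16)   # stand-in features
fused = fuse_features(obj, bg, low)
W = rng.standard_normal((4, fused.size)) * 0.1   # 4 hypothetical emotion classes
probs = softmax(W @ fused)
print(fused.shape)  # (32,)
```

The fused vector simply concatenates the three descriptors, so its length is the sum of their lengths, and the softmax output is a valid probability distribution over the classes.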

