Critical Temperature Prediction of Superconductors Based on Atomic Vectors and Deep Learning

Symmetry ◽  
2020 ◽  
Vol 12 (2) ◽  
pp. 262 ◽  
Author(s):  
Shaobo Li ◽  
Yabo Dan ◽  
Xiang Li ◽  
Tiantian Hu ◽  
Rongzhi Dong ◽  
...  

In this paper, a hybrid neural network (HNN) that combines a convolutional neural network (CNN) and a long short-term memory (LSTM) network is proposed to extract high-level characteristics of materials for critical temperature (Tc) prediction of superconductors. First, 73,452 inorganic compounds were obtained from the Materials Project (MP) database to build an atomic environment matrix, and a vector representation (atom vector) of 87 atoms was obtained by singular value decomposition (SVD) of that matrix. The atom vectors were then used to encode each superconductor, following the order of the atoms in its chemical formula. The HNN model, trained on 12,413 superconductors, was compared with three benchmark neural network algorithms and with multiple machine learning algorithms using two commonly used material characterization methods. The experimental results show that the proposed HNN method effectively extracts the characteristic relationships between the atoms of superconductors and achieves high accuracy in predicting Tc.
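The SVD step described above can be sketched in a few lines. This is a toy illustration, not the paper's pipeline: the matrix dimensions, the rank k, and the formula indices below are stand-ins for the real atomic environment matrix built from the 73,452 MP compounds.

```python
import numpy as np

# Toy atomic environment matrix: rows = atoms, columns = environment features.
# Real dimensions (73,452 compounds, 87 atoms) and rank k are assumptions here.
rng = np.random.default_rng(0)
n_atoms, n_env, k = 5, 8, 3
env_matrix = rng.random((n_atoms, n_env))

# SVD of the environment matrix; scale the left singular vectors to get
# one k-dimensional vector per atom.
U, S, Vt = np.linalg.svd(env_matrix, full_matrices=False)
atom_vectors = U[:, :k] * S[:k]

# Encode a "compound" by stacking its atoms' vectors in chemical-formula order.
formula = [0, 2, 2, 4]              # hypothetical atom indices
encoding = atom_vectors[formula]
print(encoding.shape)               # (4, 3)
```

Stacking the per-atom rows in formula order mirrors the abstract's coded representation of each superconductor.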

2021 ◽  
Vol 11 (4) ◽  
pp. 1829
Author(s):  
Davide Grande ◽  
Catherine A. Harris ◽  
Giles Thomas ◽  
Enrico Anderlini

Recurrent Neural Networks (RNNs) are increasingly being used for model identification, forecasting and control. When identifying physical models whose mathematical structure is unknown, Nonlinear AutoRegressive models with eXogenous inputs (NARX) or Nonlinear AutoRegressive Moving-Average models with eXogenous inputs (NARMAX) are typically used. In the context of data-driven control, machine learning algorithms have been shown to perform comparably to advanced control techniques, but they lack the guarantees of traditional stability theory. This paper illustrates a method to prove a posteriori the stability of a generic neural network, showing its application to a state-of-the-art RNN architecture. The presented method relies on identifying the poles associated with the network designed starting from the input/output data. A framework that guarantees the stability of any neural network architecture, combined with generalisability and applicability to different fields, can significantly broaden the use of neural networks in dynamic systems modelling and control.
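As a rough illustration of a pole-based stability check, consider a discrete-time RNN x[t+1] = tanh(W x[t] + U u[t]) linearised around the origin: the Jacobian there is simply W, so the poles of the linearised system are the eigenvalues of W, and local stability requires all of them to lie strictly inside the unit circle. The matrices below are illustrative stand-ins, not networks from the paper.

```python
import numpy as np

# Local stability test for the linearisation of a discrete-time recurrent
# network: all poles (eigenvalues of the recurrent weight matrix) must lie
# strictly inside the unit circle.
def is_locally_stable(W: np.ndarray) -> bool:
    poles = np.linalg.eigvals(W)
    return bool(np.max(np.abs(poles)) < 1.0)

W_stable = np.array([[0.5, 0.1],
                     [0.0, 0.3]])    # eigenvalues 0.5 and 0.3
W_unstable = np.array([[1.2, 0.0],
                       [0.0, 0.4]])  # eigenvalue 1.2 lies outside the unit circle
print(is_locally_stable(W_stable), is_locally_stable(W_unstable))  # True False
```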


2021 ◽  
Author(s):  
Minseop Park ◽  
Hyeok Choi ◽  
Hee-Sung Ahn ◽  
Hee-Ju Kang ◽  
Saehoon Kim ◽  
...  

BACKGROUND A pressure ulcer (PU) is a localized cutaneous injury caused by pressure or shear, which usually occurs over a bony prominence. PUs are common in hospitalized patients and cause complications, including infection. OBJECTIVE This study aimed to build a recurrent neural network-based algorithm to predict PUs 24 hours before their occurrence. METHODS This study analyzed a freely accessible intensive care unit (ICU) dataset, MIMIC-III. Deep learning and machine learning algorithms, including long short-term memory (LSTM), multilayer perceptron (MLP), and XGBoost, were applied to 37 dynamic features (including the Braden scale, vital signs, laboratory results, and interventions to reduce the risk of PUs) and 35 static features (including the length of time spent in the ICU, demographics, and comorbidities). Their outcomes were compared in terms of the area under the receiver operating characteristic curve (AUROC) and the area under the precision-recall curve (AUPRC). RESULTS A total of 1,048 cases of PUs (10.0%) and 9,402 controls (90.0%) without PUs satisfied the inclusion criteria. The LSTM + MLP model (AUROC: 0.7929 ± 0.0095, AUPRC: 0.4819 ± 0.0109) outperformed the other models, namely the MLP model (AUROC: 0.7777 ± 0.0083, AUPRC: 0.4527 ± 0.0195) and XGBoost (AUROC: 0.7465 ± 0.0087, AUPRC: 0.4052 ± 0.0087). Various features, including the length of time spent in the ICU, the Glasgow coma scale, and the Braden scale, contributed to the prediction model. CONCLUSIONS This study suggests that recurrent neural network-based algorithms such as LSTM can be applied to evaluate the risk of PUs in ICU patients.
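The comparison above is reported in AUROC, which can be computed without any library helper as a rank statistic: the probability that a randomly chosen positive case is scored above a randomly chosen negative one. The toy labels and scores below are stand-ins, not MIMIC-III predictions, and ties are not handled.

```python
import numpy as np

def auroc(y_true: np.ndarray, scores: np.ndarray) -> float:
    # Rank-based AUROC (Mann-Whitney U divided by n_pos * n_neg); no tie handling.
    order = np.argsort(scores)
    ranks = np.empty(len(scores))
    ranks[order] = np.arange(1, len(scores) + 1)
    pos = y_true == 1
    n_pos, n_neg = pos.sum(), (~pos).sum()
    return float((ranks[pos].sum() - n_pos * (n_pos + 1) / 2) / (n_pos * n_neg))

# Perfectly separated toy scores give an AUROC of 1.0.
print(auroc(np.array([0, 0, 1, 1]), np.array([0.1, 0.2, 0.8, 0.9])))  # 1.0
```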


Water ◽  
2020 ◽  
Vol 12 (10) ◽  
pp. 2927
Author(s):  
Jiyeong Hong ◽  
Seoro Lee ◽  
Joo Hyun Bae ◽  
Jimin Lee ◽  
Woon Ji Park ◽  
...  

Predicting dam inflow is necessary for effective water management. This study created machine learning algorithms to predict the amount of inflow into the Soyang River Dam in South Korea, using 40 years of weather and dam inflow data. Six algorithms were used: decision tree (DT), multilayer perceptron (MLP), random forest (RF), gradient boosting (GB), recurrent neural network–long short-term memory (RNN–LSTM), and convolutional neural network–LSTM (CNN–LSTM). Among these, the multilayer perceptron showed the best results in predicting dam inflow, with a Nash–Sutcliffe efficiency (NSE) of 0.812, root mean squared error (RMSE) of 77.218 m3/s, mean absolute error (MAE) of 29.034 m3/s, correlation coefficient (R) of 0.924, and coefficient of determination (R2) of 0.817. However, when dam inflow is below 100 m3/s, the ensemble models (random forest and gradient boosting) outperformed the MLP. Therefore, two combined machine learning (CombML) models (RF_MLP and GB_MLP) were developed that apply the ensemble methods (RF and GB) at precipitation below 16 mm and the MLP at precipitation above 16 mm; 16 mm is the average daily precipitation corresponding to an inflow of 100 m3/s or more. Verification results were NSE 0.857, RMSE 68.417 m3/s, MAE 18.063 m3/s, R 0.927, and R2 0.859 for RF_MLP, and NSE 0.829, RMSE 73.918 m3/s, MAE 18.093 m3/s, R 0.912, and R2 0.831 for GB_MLP, indicating that the combined models predict dam inflow most accurately. The CombML results show that inflow can be predicted while accounting for flow characteristics, such as flow regimes, by combining several machine learning algorithms.
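The CombML routing rule and the NSE metric used above can be sketched as follows. Only the 16 mm precipitation threshold comes from the abstract; the stand-in model callables and data are assumptions for illustration.

```python
import numpy as np

def comb_ml_predict(precip_mm, ensemble_model, mlp_model, features):
    # Route each day to the ensemble model below 16 mm of daily precipitation
    # (low-flow regime) and to the MLP at or above it.
    low = precip_mm < 16.0
    return np.where(low, ensemble_model(features), mlp_model(features))

def nse(obs, sim):
    # Nash-Sutcliffe efficiency: 1.0 indicates a perfect fit.
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

# Stand-in "models" returning constant inflows, for illustration only.
ens = lambda f: np.full(len(f), 80.0)
mlp = lambda f: np.full(len(f), 300.0)
preds = comb_ml_predict(np.array([2.0, 30.0]), ens, mlp, np.zeros(2))
# Day 1 (2 mm) is routed to the ensemble, day 2 (30 mm) to the MLP.
```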


Kybernetes ◽  
2019 ◽  
Vol 49 (9) ◽  
pp. 2335-2348 ◽  
Author(s):  
Milad Yousefi ◽  
Moslem Yousefi ◽  
Masood Fathi ◽  
Flavio S. Fogliatto

Purpose This study aims to investigate the factors affecting daily demand in an emergency department (ED) and to provide a forecasting tool in a public hospital for horizons of up to seven days. Design/methodology/approach First, the important factors influencing demand in EDs were extracted from the literature, and the factors relevant to this study were selected. Then, a deep neural network was applied to construct a reliable predictor. Findings Although many statistical approaches have been proposed for tackling this issue, better forecasts are achievable by exploiting the abilities of machine learning algorithms. Results indicate that the proposed approach outperforms statistical alternatives available in the literature, such as multiple linear regression, autoregressive integrated moving average (ARIMA), support vector regression, generalized linear models, generalized estimating equations, seasonal ARIMA, and combined ARIMA and linear regression. Research limitations/implications The authors applied this study in a single ED to forecast patient visits. Applying the same method in different EDs may give a better understanding of the performance of the model. The same approach can be applied to any other demand forecasting task after some minor modifications. Originality/value To the best of the authors' knowledge, this is the first study to propose the use of long short-term memory for constructing a predictor of the number of patient visits in EDs.


2020 ◽  
Vol 2020 ◽  
pp. 1-17
Author(s):  
Fujun Ma ◽  
Fanghao Song ◽  
Yan Liu ◽  
Jiahui Niu

The fatigue energy consumption of independent gestures can be obtained by calculating the power spectrum of surface electromyography (sEMG) signals. Existing research focuses on the fatigue of independent gestures, while studies on integrated gestures are few. However, actual gesture operation usually integrates multiple independent gestures, so the fatigue degree of integrated gestures can be predicted by training neural networks on independent gestures. In this paper, three natural gestures, including browsing information, playing games, and typing, are divided into nine independent gestures, and the prediction model is established and trained by calculating the energy consumption of the independent gestures. Artificial neural networks (ANNs), including the backpropagation (BP) neural network, recurrent neural network (RNN), and long short-term memory (LSTM), are used to predict gesture fatigue, with a support vector machine (SVM) used to assist verification. Mean square error (MSE), root mean square error (RMSE), and mean absolute error (MAE) are used to select the optimal prediction model. Furthermore, different datasets of the processed sEMG signal and its wavelet decomposition coefficients are trained separately, and the changes in their error functions are compared. The experimental results show that the LSTM model is more suitable for gesture fatigue prediction. The processed sEMG signals are appropriate as training data for predicting the fatigue degree of one-handed gestures, whereas wavelet decomposition coefficients are the better dataset for predicting the high-dimensional sEMG signals of two-handed gestures. The results can be applied to predict the fatigue degree of complex human-machine interactive gestures, helping to avoid unreasonable gestures and improve the user's interactive experience.


Sensors ◽  
2020 ◽  
Vol 20 (9) ◽  
pp. 2458 ◽  
Author(s):  
Zhuozheng Wang ◽  
Yingjie Dong ◽  
Wei Liu ◽  
Zhuo Ma

The safety of an Internet Data Center (IDC) is directly determined by the reliability and stability of its chiller system. Combined with deep learning technology, an innovative hybrid fault diagnosis approach (1D-CNN_GRU) based on time-series sequences is proposed in this study for the chiller system, using a 1-Dimensional Convolutional Neural Network (1D-CNN) and a Gated Recurrent Unit (GRU). First, the 1D-CNN is applied to automatically extract local abstract features from the sensor sequence data. Second, the GRU, with its long- and short-term memory characteristics, is applied to capture the global features as well as the dynamic information of the sequence. Moreover, batch normalization and dropout are introduced to accelerate network training and address the overfitting issue. The effectiveness and reliability of the proposed hybrid algorithm are assessed on the RP-1043 dataset; based on the experimental results, 1D-CNN_GRU displays the best performance compared with the other state-of-the-art algorithms. Further, the experimental results reveal that 1D-CNN_GRU has a superior identification rate for minor faults.
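As background on the GRU component, a single gated update step can be written in plain NumPy: the update gate interpolates between the previous state and a candidate state, which is how the unit retains both long- and short-term information. The random weights below are stand-ins, not the trained chiller-diagnosis model.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def gru_step(x, h, Wz, Uz, Wr, Ur, Wh, Uh):
    z = sigmoid(Wz @ x + Uz @ h)          # update gate
    r = sigmoid(Wr @ x + Ur @ h)          # reset gate
    h_cand = np.tanh(Wh @ x + Uh @ (r * h))
    return (1 - z) * h + z * h_cand       # interpolate old and candidate state

# Random stand-in weights for a 3-input, 4-unit cell.
rng = np.random.default_rng(1)
Wz, Wr, Wh = (rng.standard_normal((4, 3)) for _ in range(3))
Uz, Ur, Uh = (rng.standard_normal((4, 4)) for _ in range(3))
h = gru_step(rng.standard_normal(3), np.zeros(4), Wz, Uz, Wr, Ur, Wh, Uh)
print(h.shape)                            # (4,)
```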


Images are the fastest-growing type of content and contribute significantly to the amount of data generated on the internet every day. Image classification is a challenging problem that social media companies work on vigorously to enhance the user's experience with their interfaces. Recent advances in machine learning and computer vision enable personalized suggestions and automatic tagging of images. The convolutional neural network (CNN) is a prominent research topic in machine learning. With the help of the immense amount of labelled data available on the internet, these networks can be trained to recognize the differentiating features among images under the same label. New neural network algorithms that outperform state-of-the-art machine learning algorithms are developed frequently; recent algorithms have achieved error rates as low as 3.1%. In this paper, the architectures of important CNN algorithms that have gained attention are discussed, analyzed, and compared, and the concept of transfer learning is used to classify different breeds of dogs.


2019 ◽  
Vol 9 (7) ◽  
pp. 1441 ◽  
Author(s):  
Wahyu Wiratama ◽  
Donggyu Sim

This paper proposes a fusion network for detecting changes between two high-resolution panchromatic images. The proposed fusion network consists of front- and back-end neural network architectures that generate dual outputs for change detection. Two networks were applied to handle image-level and high-level changes of information, respectively: the fusion network employs single-path and dual-path networks to accomplish low-level and high-level differential detection. Based on the two dual outputs, a two-stage decision algorithm is proposed to efficiently yield the final change detection results. The dual outputs are incorporated into the two-stage decision through logical operations, and the algorithm is designed to incorporate not only the dual network outputs but also neighboring information. A new fused loss function is presented to estimate the errors and optimize the proposed network during the learning stage. Based on our experimental evaluation, the proposed method yields better detection performance than conventional neural network algorithms, with an average area under the curve of 0.9709, percentage correct classification of 99%, and Kappa of 75 across many test datasets.
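One plausible reading of a two-stage decision on dual binary change maps can be sketched as follows: pixels where both outputs agree are accepted directly, and remaining candidate pixels are confirmed by a neighbourhood majority vote. The specific logical operations and window size here are assumptions, not necessarily those of the paper.

```python
import numpy as np

def two_stage_decision(low_map, high_map):
    confident = low_map & high_map        # stage 1: both networks agree
    candidate = low_map | high_map        # stage 2: either network fires
    out = confident.copy()
    h, w = low_map.shape
    for i in range(h):
        for j in range(w):
            if candidate[i, j] and not confident[i, j]:
                win = confident[max(i - 1, 0):i + 2, max(j - 1, 0):j + 2]
                out[i, j] = win.mean() > 0.5   # confirm by neighbour majority
    return out

low = np.ones((3, 3), dtype=bool)
high = np.ones((3, 3), dtype=bool)
high[1, 1] = False                        # only one network fires at the centre
print(two_stage_decision(low, high).all())  # True: neighbours confirm the change
```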


In recent years, text analysis with neural models has become more popular due to its versatile uses in different software applications. To improve the performance of text analytics, a large collection of methods has been identified and justified by researchers. Most of these techniques have been used efficiently for text categorization, text generation, text summarization, query formulation, query answering, sentiment analysis, etc. In this review paper, we consolidate the recent literature along with a technical survey of different neural models, such as the Neural Language Model (NLM), the sequence-to-sequence model (seq2seq), text generation, Bidirectional Encoder Representations from Transformers (BERT), the machine translation model (MT), the transformer model, and the attention model, from the perspective of applying deep machine learning algorithms to text analysis. Extensive experiments were conducted on deep learning models such as the Recurrent Neural Network (RNN), Long Short-Term Memory (LSTM), Convolutional Neural Network (CNN), and attentive transformer models to examine the efficacy of different neural models, with implementations using TensorFlow and Keras.


2020 ◽  
Vol 10 (6) ◽  
pp. 1265-1273
Author(s):  
Lili Chen ◽  
Huoyao Xu

Sleep apnea (SA) is a common sleep disorder affecting sleep quality, so automatic SA detection has far-reaching implications for patients and physicians. In this paper, a novel approach based on a deep neural network (DNN) is developed for the automatic diagnosis of SA. To this end, five features are extracted from electrocardiogram (ECG) signals through wavelet decomposition and sample entropy. The deep neural network is constructed from a two-layer stacked sparse autoencoder (SSAE) network and one softmax layer; the SSAE network extracts more effective high-level features from the raw features, and the softmax layer added at its top diagnoses SA. The experimental results reveal that the deep neural network accomplishes an accuracy of 96.66%, a sensitivity of 96.25%, and a specificity of 97%, outperforming comparison models including the support vector machine (SVM), random forest (RF), and extreme learning machine (ELM). Finally, the results show that the proposed method can be validly applied to automatic SA event detection.
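Sample entropy, one of the five ECG-derived features mentioned above, can be computed as follows; the template length m = 2 and tolerance r = 0.2·std are common defaults, not values taken from the paper.

```python
import numpy as np

def sample_entropy(x, m=2, r_factor=0.2):
    x = np.asarray(x, dtype=float)
    r = r_factor * x.std()                # tolerance relative to signal spread
    n_templates = len(x) - m

    def matches(mm):
        # Count pairs of length-mm templates within Chebyshev distance r.
        t = np.array([x[i:i + mm] for i in range(n_templates)])
        d = np.max(np.abs(t[:, None] - t[None, :]), axis=2)
        return (np.sum(d <= r) - n_templates) / 2   # exclude self-matches

    b, a = matches(m), matches(m + 1)
    return -np.log(a / b)

# A perfectly regular signal has sample entropy 0; irregular signals score higher.
print(sample_entropy(np.ones(50)) == 0.0)  # True
```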

