Ensemble Empirical Mode Decomposition with Adaptive Noise with Convolution Based Gated Recurrent Neural Network: A New Deep Learning Model for South Asian High Intensity Forecasting

Symmetry ◽  
2021 ◽  
Vol 13 (6) ◽  
pp. 931
Author(s):  
Kecheng Peng ◽  
Xiaoqun Cao ◽  
Bainian Liu ◽  
Yanan Guo ◽  
Wenlong Tian

The intensity variation of the South Asian high (SAH) plays an important role in the formation and extinction of many kinds of mesoscale systems, including tropical cyclones and southwest vortices in the Asian summer monsoon (ASM) region, and in the precipitation over the whole Asia-Europe region; the SAH has a symmetric vortex structure, and its dynamic field likewise exhibits symmetry. Few previous studies have focused on the variation of daily SAH intensity. The purpose of this study is to establish a day-to-day prediction model of SAH intensity that can accurately predict not only the interannual variation but also the day-to-day variation of the SAH. Focusing on the summer period, when the SAH is strongest, this paper uses NCEP geopotential height data from 1948 to 2020 to construct SAH intensity datasets. After comparing various efficient classical deep learning time series prediction models, we ultimately combine the Complete Ensemble Empirical Mode Decomposition with Adaptive Noise (CEEMDAN) method, which can handle nonlinear and non-stationary signals, with the Permutation Entropy (PE) method, which extracts SAH intensity features from the IMFs decomposed by CEEMDAN; the Convolution-based Gated Recurrent Neural Network (ConvGRU) model is then used to train, test, and predict the SAH intensity. The prediction results show that the combination of CEEMDAN and ConvGRU achieves higher accuracy and more stable predictions than traditional deep learning models. After removing redundant features from the time series, the prediction accuracy of SAH intensity is higher than that of the classical models, which demonstrates that the method is well suited to predicting nonlinear systems in the atmosphere.
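The abstract does not detail how PE screens the IMFs; as background, a minimal numpy sketch of normalized permutation entropy, the complexity measure named above (the CEEMDAN decomposition itself would come from a dedicated library), might look like this:

```python
import numpy as np
from math import factorial

def permutation_entropy(x, order=3, delay=1):
    """Normalized permutation entropy of a 1-D series (0 = perfectly regular,
    1 = fully random), computed from the frequencies of ordinal patterns."""
    x = np.asarray(x, dtype=float)
    n = len(x) - (order - 1) * delay
    counts = {}
    for i in range(n):
        window = x[i : i + order * delay : delay]
        pattern = tuple(np.argsort(window))      # ordinal pattern of the window
        counts[pattern] = counts.get(pattern, 0) + 1
    probs = np.array(list(counts.values()), dtype=float) / n
    entropy = -np.sum(probs * np.log2(probs))
    return entropy / np.log2(factorial(order))   # normalize to [0, 1]
```

A monotonic series produces a single ordinal pattern and hence entropy 0, while white noise approaches 1; in a pipeline like the one above, such a score could be used to group or discard IMFs, though the abstract does not state the exact selection rule.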

Author(s):  
Surenthiran Krishnan ◽  
Pritheega Magalingam ◽  
Roslina Ibrahim

This paper proposes a new hybrid deep learning model for heart disease prediction using a recurrent neural network (RNN) that combines multiple gated recurrent units (GRUs), long short-term memory (LSTM), and the Adam optimizer. The proposed model achieved an outstanding accuracy of 98.6876%, the highest among existing RNN models. The model was developed in Python 3.7 by integrating an RNN with multiple GRUs in Keras, with TensorFlow as the backend for the deep learning process, supported by various Python libraries. Recent existing RNN models have reached an accuracy of 98.23%, and a deep neural network (DNN) has reached 98.5%. The common drawbacks of the existing models are low accuracy due to the complex build-up of the neural network, a high number of redundant neurons in the network, and the imbalanced Cleveland dataset. Experiments were conducted with various customized models, and the results showed that the proposed model using an RNN and multiple GRUs with the synthetic minority oversampling technique (SMOTE) reached the best performance. This is the highest reported accuracy for an RNN on the Cleveland dataset, and it is highly promising for early heart disease prediction.
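The abstract credits SMOTE with handling the imbalanced Cleveland data; as an illustration (not the authors' code), the core interpolation step of SMOTE can be sketched in a few lines of numpy:

```python
import numpy as np

def smote(minority, n_new, k=5, seed=0):
    """Generate n_new synthetic minority-class samples by interpolating
    between a random sample and one of its k nearest neighbours."""
    rng = np.random.default_rng(seed)
    minority = np.asarray(minority, dtype=float)
    synthetic = []
    for _ in range(n_new):
        i = rng.integers(len(minority))
        # Euclidean distances from sample i to every minority sample
        d = np.linalg.norm(minority - minority[i], axis=1)
        neighbours = np.argsort(d)[1 : k + 1]    # skip the sample itself
        j = rng.choice(neighbours)
        gap = rng.random()                       # interpolation factor in [0, 1)
        synthetic.append(minority[i] + gap * (minority[j] - minority[i]))
    return np.array(synthetic)
```

Because each synthetic point is a convex combination of two real minority samples, the new points stay inside the minority class's region of feature space rather than simply duplicating records.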


Symmetry ◽  
2020 ◽  
Vol 12 (6) ◽  
pp. 893
Author(s):  
Yanan Guo ◽  
Xiaoqun Cao ◽  
Bainian Liu ◽  
Kecheng Peng

El Niño is an important quasi-cyclical climate phenomenon that can have a significant impact on ecosystems and societies. Due to the chaotic nature of the atmosphere and ocean systems, traditional methods (such as statistical methods) struggle to provide accurate El Niño index predictions. The latest research shows that Ensemble Empirical Mode Decomposition (EEMD) is suitable for analyzing non-linear and non-stationary signal sequences, that a Convolutional Neural Network (CNN) is good at local feature extraction, and that a Recurrent Neural Network (RNN) can capture the overall information of a sequence. As a special RNN, Long Short-Term Memory (LSTM) has significant advantages in processing and predicting long, complex time series. In this paper, to predict the El Niño index more accurately, we propose a new hybrid neural network model, EEMD-CNN-LSTM, which combines EEMD, CNN, and LSTM. In this hybrid model, the original El Niño index sequence is first decomposed into several Intrinsic Mode Functions (IMFs) using the EEMD method. Next, we filter the IMFs by setting a threshold and use the filtered IMFs to reconstruct new El Niño data. The reconstructed time series then serves as input data for the CNN and LSTM. This data preprocessing method, which first decomposes and then reconstructs the time series, uses the idea of symmetry: through this symmetric operation, we extract the valid information in the time series and then make predictions based on the reconstruction. To evaluate the performance of the EEMD-CNN-LSTM model, it is compared with four methods, including a traditional statistical model, a machine learning model, and other deep neural network models. The experimental results show that the predictions of EEMD-CNN-LSTM are not only more accurate but also more stable and reliable than those of general neural network models.
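As an illustration of the filter-and-reconstruct step described above, here is a sketch in numpy; the correlation-based threshold rule is an assumption (the abstract does not state the filtering criterion), and the IMFs themselves would come from an EEMD library:

```python
import numpy as np

def reconstruct(signal, imfs, threshold=0.2):
    """Keep only the IMFs whose correlation with the original signal exceeds
    the threshold, then sum them to rebuild a denoised series."""
    kept = [imf for imf in imfs
            if abs(np.corrcoef(signal, imf)[0, 1]) > threshold]
    return np.sum(kept, axis=0)
```

On a series composed of a trend, a seasonal cycle, and weak noise, this rule would drop the low-correlation noise component and rebuild the series from the informative ones.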


2021 ◽  
Author(s):  
Yuanjun Li ◽  
Satomi Suzuki ◽  
Roland Horne

Abstract Knowledge of well connectivity in a reservoir is crucial, especially for early-stage field development and water injection management. However, traditional interference tests can take several weeks or even longer, depending on the distance between wells and the hydraulic diffusivity of the reservoir. Therefore, instead of physically shutting in production wells, we can take advantage of deep learning methods to perform virtual interference tests. In this study, we first used historical field data to train the deep learning model, a modified Long- and Short-term Time-series network (LSTNet). This model combines a Convolutional Neural Network (CNN) to extract short-term local dependency patterns, a Recurrent Neural Network (RNN) to discover long-term patterns in time series trends, and a traditional autoregressive model to alleviate the scale-insensitivity problem. To address the time lag in signal propagation, we employed a skip-recurrent structure that extends the existing RNN structure by connecting the current state with the previous state at which the flow rate signal from an adjacent well starts to impact the observation well. In addition, we found that wells connected to the same manifold usually have similar liquid production patterns, which can lead to false attribution of subsurface pressure communication. We therefore enhanced the model by using external feature differences to remove the surface connection from the data, thereby reducing input similarity. This enhancement also amplifies weak signals and thus helps distinguish the input signals. To examine the deep learning model, we used datasets generated from the Norne Field with two different geological settings: sealing and nonsealing cases. The production wells are placed on two sides of the fault to test for false-negative predictions.
With these improvements and with parameter tuning, the modified LSTNet model successfully indicated well connectivity in the nonsealing cases and revealed the sealing structures in the sealing cases based on the historical data. The deep learning method employed in this work can predict well pressure without using hand-crafted features, which are usually formed based on flow patterns and geological settings. The method should therefore be applicable to general cases and is more intuitive. Furthermore, this virtual interference test with a deep learning framework avoids the production loss of a physical shut-in.
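One plausible reading of the "external feature differences" step is removing the shared, surface-driven component from the rate histories of wells on the same manifold; a hypothetical numpy sketch of that idea (an assumption, not the authors' implementation):

```python
import numpy as np

def difference_features(rates):
    """rates: (n_wells, n_steps) liquid-rate histories of wells on one manifold.
    Subtract the manifold-average pattern so that only well-specific
    (subsurface-driven) variation remains, amplifying otherwise-masked signals."""
    common = rates.mean(axis=0, keepdims=True)   # shared surface-driven component
    return rates - common
```

Two wells dominated by the same surface schedule are highly correlated before differencing; after removing the common component, only their individual deviations remain as model inputs.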


2021 ◽  
Vol 11 (12) ◽  
pp. 3199-3208
Author(s):  
K. Ganapriya ◽  
N. Uma Maheswari ◽  
R. Venkatesh

Prediction of the occurrence of a seizure would be of great help in taking the necessary precautions for patient care. A deep learning model, a recurrent neural network (RNN), is designed to predict upcoming EEG values. A deep data analysis is performed to find the parameter that best differentiates normal values from seizure values. Next, a recurrent neural network model is built to predict the values in advance. Four variants of recurrent neural networks are designed, varying the number of time stamps and the number of LSTM layers, and the best model is identified. The best-identified RNN model is used for prediction, and its performance is evaluated in terms of the explained variance score and the R2 score. The model was found to perform well only when the number of elements in the test dataset was minimal, so it can predict seizure values only a few seconds in advance.
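The two evaluation metrics named above have standard definitions; a small numpy sketch:

```python
import numpy as np

def explained_variance(y_true, y_pred):
    """1 - Var(residuals) / Var(y_true); a constant bias in the
    predictions does not lower this score."""
    return 1.0 - np.var(y_true - y_pred) / np.var(y_true)

def r2_score(y_true, y_pred):
    """1 - SSE / SST; penalizes bias as well as residual variance."""
    sse = np.sum((y_true - y_pred) ** 2)
    sst = np.sum((y_true - np.mean(y_true)) ** 2)
    return 1.0 - sse / sst
```

Reporting both is informative precisely because they disagree on biased predictors: a forecast that is offset by a constant still scores 1.0 on explained variance but below 1.0 on R2.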


Author(s):  
Hojun Lee ◽  
Donghwan Yun ◽  
Jayeon Yoo ◽  
Kiyoon Yoo ◽  
Yong Chul Kim ◽  
...  

Background and objectives: Intradialytic hypotension has high clinical significance. However, predicting it using conventional statistical models may be difficult because several factors have interactive and complex effects on the risk. Herein, we applied a deep learning model (recurrent neural network) to predict the risk of intradialytic hypotension using a timestamp-bearing dataset.
Design, setting, participants, & measurements: We obtained 261,647 hemodialysis sessions with 1,600,531 independent timestamps (i.e., time-varying vital signs) and randomly divided them into training (70%), validation (5%), calibration (5%), and testing (20%) sets. Intradialytic hypotension was defined as occurring within 1 hour when the nadir systolic BP was <90 mm Hg (intradialytic hypotension 1) or when a decrease in systolic BP ≥20 mm Hg and/or a decrease in mean arterial pressure ≥10 mm Hg occurred relative to the initial BPs (intradialytic hypotension 2) or the prediction-time BPs (intradialytic hypotension 3). The areas under the receiver operating characteristic curves, the areas under the precision-recall curves, and the F1 scores obtained using the recurrent neural network model were compared with those obtained using multilayer perceptron, Light Gradient Boosting Machine, and logistic regression models.
Results: The recurrent neural network model for predicting intradialytic hypotension 1 achieved an area under the receiver operating characteristic curve of 0.94 (95% confidence interval, 0.94 to 0.94), which was higher than those obtained using the other models (P<0.001). The recurrent neural network models for predicting intradialytic hypotension 2 and intradialytic hypotension 3 achieved areas under the receiver operating characteristic curves of 0.87 (interquartile range, 0.87–0.87) and 0.79 (interquartile range, 0.79–0.79), respectively, which were also higher than those obtained using the other models (P≤0.001). The area under the precision-recall curve and the F1 score were higher using the recurrent neural network model than using the other models, and the recurrent neural network models for intradialytic hypotension were well calibrated.
Conclusions: Our deep learning model can be used to predict the real-time risk of intradialytic hypotension.
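The three intradialytic hypotension definitions above reduce to simple threshold rules; a sketch with hypothetical variable names, labeling one prediction window of BP readings taken within the next hour:

```python
def label_idh(sbp_window, map_window, sbp_initial, map_initial,
              sbp_pred_time, map_pred_time):
    """Return (idh1, idh2, idh3) labels for one prediction window, following
    the three definitions given in the abstract. All pressures are in mm Hg."""
    nadir_sbp = min(sbp_window)
    nadir_map = min(map_window)
    idh1 = nadir_sbp < 90                               # nadir SBP < 90
    # SBP drop >= 20 and/or MAP drop >= 10 versus session-initial BPs
    idh2 = (sbp_initial - nadir_sbp >= 20) or (map_initial - nadir_map >= 10)
    # same drops, but versus the BPs at prediction time
    idh3 = (sbp_pred_time - nadir_sbp >= 20) or (map_pred_time - nadir_map >= 10)
    return idh1, idh2, idh3
```

The split into three outcomes matters for modeling: definition 3 is anchored to the moving prediction-time baseline, which is why it is the hardest to predict (lowest AUC above).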


2020 ◽  
Vol 12 (4) ◽  
pp. 146-159
Author(s):  
Murillo A. S. Torres ◽  
Mateus S. Marinho ◽  
Dany S. Dominguez ◽  
Dárcio R. Silva ◽  
Hélder Conceição Almeida

2021 ◽  
Vol 2021 ◽  
pp. 1-9
Author(s):  
Jinlai Zhang ◽  
Yanmei Meng ◽  
Jin Wei ◽  
Jie Chen ◽  
Johnny Qin

Sugar price forecasting has attracted extensive attention from policymakers due to its significant impact on people’s daily lives and markets. In this paper, we present a novel hybrid deep learning model that combines a time series decomposition technique, empirical mode decomposition (EMD), with a hyperparameter optimization algorithm, the Tree of Parzen Estimators (TPE), for sugar price forecasting. The effectiveness of the proposed model was demonstrated in a case study on the price of London Sugar Futures. Two experiments were conducted to verify the contributions of EMD and TPE, and their specific effects were analyzed using the DM test and the improvement percentage. The empirical results demonstrate that the proposed hybrid model outperforms the other models.
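The DM (Diebold-Mariano) test mentioned above compares the accuracy of two competing forecasts; a minimal sketch under squared-error loss with a normal approximation (one-step-ahead forecasts assumed, so no autocovariance correction is applied):

```python
import numpy as np
from math import erf, sqrt

def dm_test(e1, e2):
    """Diebold-Mariano test on two forecast-error series.
    Positive statistic: forecast 2 is more accurate. Returns (stat, p)."""
    d = np.asarray(e1) ** 2 - np.asarray(e2) ** 2   # loss differential
    n = len(d)
    stat = d.mean() / np.sqrt(d.var(ddof=1) / n)
    # two-sided p-value from the standard normal approximation
    p = 2.0 * (1.0 - 0.5 * (1.0 + erf(abs(stat) / sqrt(2.0))))
    return stat, p
```

A small p-value indicates the difference in accuracy between the two models is unlikely to be chance, which is how the abstract's "specific effects" of EMD and TPE can be attributed.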


2021 ◽  
Author(s):  
Wenchuan Wang ◽  
Yu-jin Du ◽  
Kwok-wing Chau ◽  
Chun-Tian Cheng ◽  
Dong-mei Xu ◽  
...  

Abstract The optimal planning and management of modern water resources depends highly on reliable and accurate runoff forecasting. Data preprocessing technology can offer new possibilities for improving the accuracy of runoff forecasting when the basic physical relationships cannot be captured by a single prediction model. Yet few studies have evaluated the performance of various data preprocessing techniques for predicting monthly runoff time series. To fill this research gap, this paper investigates the potential of five data preprocessing techniques, combined with a gated recurrent unit network (GRU) model, for monthly runoff prediction: variational mode decomposition (VMD), wavelet packet decomposition (WPD), complete ensemble empirical mode decomposition with adaptive noise (CEEMDAN), extreme-point symmetric mode decomposition (ESMD), and singular spectrum analysis (SSA). In this study, the original monthly runoff data are first decomposed into a set of subcomponents using the five data preprocessing methods; second, each component is predicted by an appropriately developed GRU model; finally, the forecasts of the different two-stage hybrid models are obtained by aggregating the forecasts of the corresponding subcomponents. Four performance metrics are employed as evaluation benchmarks. The experimental results from two hydropower stations in China show that all five data preprocessing techniques attain satisfactory prediction results, while the VMD and WPD methods yield better performance than CEEMDAN, ESMD, and SSA in both the training and testing periods on all four indexes. It is therefore important to carefully select an appropriate data preprocessing method according to the actual characteristics of the study area.
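Of the five techniques, singular spectrum analysis (SSA) is compact enough to sketch in plain numpy; a basic decomposition step (an illustration of the method, not the paper's implementation):

```python
import numpy as np

def ssa_decompose(x, window):
    """Basic SSA: embed the series into a trajectory (Hankel) matrix, take the
    SVD, and diagonally average each rank-1 term back into a subseries.
    The returned components sum exactly to the original series."""
    n = len(x)
    k = n - window + 1
    X = np.column_stack([x[i : i + window] for i in range(k)])  # trajectory matrix
    u, s, vt = np.linalg.svd(X, full_matrices=False)
    comps = []
    for i in range(len(s)):
        Xi = s[i] * np.outer(u[:, i], vt[i])
        # diagonal averaging (Hankelization) back to a 1-D series
        comp = np.array([np.mean(Xi[::-1].diagonal(j - window + 1))
                         for j in range(n)])
        comps.append(comp)
    return np.array(comps)
```

In the two-stage hybrid described above, each such subcomponent would then be forecast by its own GRU model and the component forecasts summed to give the runoff prediction.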


Sensors ◽  
2021 ◽  
Vol 21 (24) ◽  
pp. 8270
Author(s):  
Taehwan Kim ◽  
Jeongho Park ◽  
Juwon Lee ◽  
Jooyoung Park

The global adoption of smartphone technology affords many conveniences, and not surprisingly, healthcare applications using wearable sensors like smartphones have received much attention. Among the various potential applications and research related to healthcare, recent studies have addressed recognizing human activities and characterizing human motions, often with wearable sensors whose signals generally take the form of time series. In most studies, these sensor signals are used after preprocessing, e.g., by converting them into an image format rather than using the raw signals directly. Several methods have been used for converting time series data to image formats, such as spectrograms, raw plots, and recurrence plots. In this paper, we address the healthcare task of predicting human motion signals obtained from sensors attached to persons. We convert the motion signals into image format with the recurrence plot method and use them as input to a deep learning model. For predicting subsequent motion signals, we utilize a recently introduced deep learning model combining neural networks and the Fourier transform, the Fourier neural operator. The model can be viewed as a Fourier-transform-based extension of a convolutional neural network, and in our experiments we compare its results to those of a convolutional neural network (CNN) model. The results of the proposed method show better performance than those of the CNN model, and, furthermore, we confirm that it can be used to detect potential accidental falls more quickly via the predicted motion signals.
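A recurrence plot for a scalar series is straightforward to construct; a minimal sketch (the threshold eps is a free parameter, and real pipelines often work on embedded state vectors rather than raw samples):

```python
import numpy as np

def recurrence_plot(x, eps=0.1):
    """Binary recurrence plot of a 1-D series: R[i, j] = 1 when the values at
    times i and j are within eps of each other."""
    x = np.asarray(x, dtype=float)
    dist = np.abs(x[:, None] - x[None, :])       # pairwise distance matrix
    return (dist < eps).astype(np.uint8)
```

The resulting square binary image encodes when the signal revisits earlier states, which is what lets an image model such as a CNN or a Fourier neural operator consume the time series.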


2021 ◽  
Vol 2021 ◽  
pp. 1-14
Author(s):  
Yang Cao ◽  
Xiaokang Zhou ◽  
Ke Yan

Monitoring and prediction of ground settlement during tunnel construction are of great significance for ensuring the safe and reliable operation of urban tunnel systems. Data-driven techniques combining artificial intelligence (AI) and sensor networks are popular methods in the field, with several advantages including high prediction accuracy, efficiency, and low cost. Deep learning, as one of the advanced techniques in AI, is in demand for the tunnel settlement forecasting problem. However, deep neural networks often require a large amount of training data. Due to the nature of tunnel construction, the available training data samples are limited, and the data are univariate (i.e., containing only the settlement data). In response to these problems, this research proposes a deep learning model that requires only a limited amount of training data for short-period prediction of tunnel surface settlement. In the proposed complete ensemble empirical mode decomposition with adaptive noise long short-term memory (CEEMDAN-LSTM) model, the one-dimensional data are divided into multiple components by CEEMDAN. Each component is then predicted by an LSTM neural network, and the predictions are superimposed to obtain the final result. Experimental results show that, compared with existing machine learning techniques and algorithms, this deep learning method achieves higher prediction accuracy with acceptable computational efficiency. In the case of small samples, this method can significantly improve the accuracy of time series forecasting.
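Before each decomposed component can feed an LSTM, it has to be framed as supervised windows; a hypothetical sketch of that step (the lookback length is an assumption, as the abstract does not state it):

```python
import numpy as np

def make_windows(series, lookback):
    """Frame a 1-D component as supervised (X, y) pairs: each lookback-length
    window is the input, and the value that follows it is the target."""
    X = np.array([series[i : i + lookback]
                  for i in range(len(series) - lookback)])
    y = series[lookback:]
    return X[..., None], y   # trailing axis is the feature dimension an LSTM expects
```

In a CEEMDAN-LSTM pipeline like the one above, this framing is applied to each component separately, one LSTM is trained per component, and the per-component forecasts are summed.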

