Altimeter Observation-Based Eddy Nowcasting Using an Improved Conv-LSTM Network

2019 ◽  
Vol 11 (7) ◽  
pp. 783 ◽  
Author(s):  
Chunyong Ma ◽  
Siqing Li ◽  
Anni Wang ◽  
Jie Yang ◽  
Ge Chen

Eddies can be identified and tracked based on satellite altimeter data. However, few studies have focused on nowcasting the evolution of eddies using remote sensing data. In this paper, an improved Convolutional Long Short-Term Memory (Conv-LSTM) network named PredNet is used for eddy nowcasting. PredNet, a deep recurrent convolutional network with both bottom-up and top-down connections, has the ability to learn the temporal and spatial relationships in time series data, and can effectively simulate and reconstruct the spatiotemporal characteristics of future sea level anomaly (SLA) data. Based on the SLA data products provided by Archiving, Validation, and Interpretation of Satellite Oceanographic data (AVISO) from 1993 to 2018, combined with an SLA-based eddy detection algorithm, seven-day eddy nowcasting experiments are conducted on eddies in the South China Sea. The matching ratio is defined as the percentage of true eddies that are successfully predicted by the Conv-LSTM network. On the first day of the nowcast, the matching ratio for eddies with diameters greater than 100 km is 95%, and the average matching ratio over the seven-day nowcast is approximately 60%. To verify the performance of the nowcasting method, two experiments were set up. A typical anticyclonic eddy shedding from the Kuroshio in January 2017 was used to verify the algorithm’s performance on a single eddy, with a mean eddy-center error of 11.2 km. Moreover, compared with the eddies detected in the Hybrid Coordinate Ocean Model (HYCOM) data set, the eddies predicted with Conv-LSTM networks are closer to those detected in the AVISO SLA data set, indicating that the deep learning method can effectively nowcast eddies.
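As an illustration of the kind of architecture the abstract describes (not the authors' PredNet itself), the sketch below stacks Keras ConvLSTM2D layers to map a week of gridded SLA frames to the next frame; the 64×64 grid, layer widths, and random stand-in data are all assumptions.

```python
# Minimal sketch of frame-ahead SLA forecasting with a ConvLSTM stack.
# Grid size, depth, and data are illustrative, not the paper's setup.
import numpy as np
from tensorflow.keras import layers, models

model = models.Sequential([
    layers.Input(shape=(7, 64, 64, 1)),  # 7 daily SLA frames on a 64x64 grid
    layers.ConvLSTM2D(32, kernel_size=3, padding="same", return_sequences=True),
    layers.BatchNormalization(),
    layers.ConvLSTM2D(32, kernel_size=3, padding="same", return_sequences=False),
    layers.Conv2D(1, kernel_size=3, padding="same"),  # next-day SLA field
])
model.compile(optimizer="adam", loss="mse")

# x: past SLA frames, y: the following frame (random placeholders here)
x = np.random.rand(8, 7, 64, 64, 1).astype("float32")
y = np.random.rand(8, 64, 64, 1).astype("float32")
model.fit(x, y, epochs=1, verbose=0)
```

A multi-day nowcast of the kind evaluated in the paper would feed each predicted frame back in as input, applying the eddy detection algorithm to every predicted SLA field.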

Algorithms ◽  
2021 ◽  
Vol 14 (8) ◽  
pp. 243
Author(s):  
Shun-Chieh Hsieh

The need for accurate tourism demand forecasting is widely recognized, and the unreliability of traditional methods keeps the task challenging. Using deep learning approaches, this study adapts Long Short-Term Memory (LSTM), Bidirectional LSTM (Bi-LSTM), and Gated Recurrent Unit (GRU) networks, which are straightforward and efficient, to improve Taiwan’s tourism demand forecasting. These networks can capture the dependencies in visitor-arrival time series data. The Adam optimization algorithm with an adaptive learning rate is used to optimize the basic setup of the models. The results show that the proposed models outperform those of previous studies covering the Severe Acute Respiratory Syndrome (SARS) events of 2002–2003. This article also examines the effects of the current COVID-19 outbreak on tourist arrivals to Taiwan. The results show that the LSTM network and its variants perform satisfactorily for tourism demand forecasting.
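A minimal sketch of the three recurrent forecasters the study compares, trained one-step-ahead on a univariate series with the Adam optimizer; the 12-month window, layer width, and synthetic stand-in series are assumptions.

```python
# Sketch: LSTM vs. Bi-LSTM vs. GRU one-step forecasters on a univariate series.
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, models

def make_model(kind, window=12):
    cell = {"lstm": layers.LSTM(32),
            "bilstm": layers.Bidirectional(layers.LSTM(32)),
            "gru": layers.GRU(32)}[kind]
    m = models.Sequential([layers.Input(shape=(window, 1)), cell, layers.Dense(1)])
    m.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=1e-3), loss="mse")
    return m

def windows(series, window=12):
    # Slide a window over the series; each window predicts the next value.
    X = np.stack([series[i:i + window] for i in range(len(series) - window)])
    return X[..., None], series[window:]

series = np.sin(np.linspace(0, 20, 200)).astype("float32")  # stand-in for arrivals
X, y = windows(series)
for kind in ("lstm", "bilstm", "gru"):
    m = make_model(kind)
    m.fit(X, y, epochs=2, verbose=0)
    print(kind, "MSE:", float(m.evaluate(X, y, verbose=0)))
```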


2020 ◽  
Vol 12 (01) ◽  
pp. 2050001
Author(s):  
Yadigar N. Imamverdiyev ◽  
Fargana J. Abdullayeva

In this paper, a fault prediction method for oil well equipment based on the analysis of time series data obtained from multiple sensors is proposed. The proposed method is based on deep learning (DL). For this purpose, a comparative analysis of a single-layer long short-term memory (LSTM) network against convolutional neural network (CNN) and stacked LSTM methods is provided. To demonstrate the efficacy of the proposed method, experiments are conducted on a real data set obtained from eight sensors installed in oil wells. Compared to the single-layer LSTM model, the CNN and stacked LSTM predict the faulty time series with lower loss.
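To make the compared recurrent baselines concrete, here is a hedged sketch of a single-layer versus a two-layer (stacked) LSTM over eight sensor channels; window length, hidden size, and the random data are illustrative assumptions.

```python
# Single-layer vs. stacked LSTM for next-step prediction over 8 sensor channels.
import numpy as np
from tensorflow.keras import layers, models

def single_lstm(window=30, channels=8):
    return models.Sequential([
        layers.Input(shape=(window, channels)),
        layers.LSTM(64),
        layers.Dense(channels),  # predict the next reading of all 8 sensors
    ])

def stacked_lstm(window=30, channels=8):
    return models.Sequential([
        layers.Input(shape=(window, channels)),
        layers.LSTM(64, return_sequences=True),  # feed full sequence to next layer
        layers.LSTM(64),
        layers.Dense(channels),
    ])

x = np.random.rand(16, 30, 8).astype("float32")
y = np.random.rand(16, 8).astype("float32")
for build in (single_lstm, stacked_lstm):
    m = build()
    m.compile("adam", "mse")
    m.fit(x, y, epochs=1, verbose=0)
```

The only structural difference is `return_sequences=True`, which lets the second LSTM layer consume the full hidden-state sequence of the first.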


Author(s):  
Sawsan Morkos Gharghory

An enhanced recurrent neural network architecture based on Long Short-Term Memory (LSTM) is suggested in this paper for predicting the microclimate inside a greenhouse from its time series data. The microclimate inside the greenhouse is largely affected by external weather variations and has a great impact on the greenhouse crops and their production. It is therefore of great importance to predict the greenhouse microclimate as a preceding stage for the accurate design of a control system that can maintain a suitable environment for the plants and support crop management. The LSTM network is trained and tested on temperature and relative humidity data for the inside of the greenhouse, generated by a mathematical greenhouse model driven by outside weather data over 27 days. To evaluate the prediction accuracy of the suggested LSTM network, metrics such as the Root Mean Square Error (RMSE) and Mean Absolute Error (MAE) are calculated and compared with those of conventional networks reported in the literature. The LSTM network outperforms the traditional methods in forecasting the temperature and relative humidity inside the greenhouse: for temperature and humidity, respectively, the prediction errors are approximately 0.16 and 0.62 in terms of RMSE, and 0.11 and 0.4 in terms of MAE.
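A minimal sketch of such a two-variable LSTM predictor, including the RMSE and MAE computations used for evaluation; the 24-sample window, hidden size, and random placeholder data are assumptions, not the paper's configuration.

```python
# LSTM mapping a window of (temperature, humidity) readings to the next pair,
# evaluated with per-variable RMSE and MAE.
import numpy as np
from tensorflow.keras import layers, models

model = models.Sequential([
    layers.Input(shape=(24, 2)),  # 24 past (temperature, humidity) samples
    layers.LSTM(50),
    layers.Dense(2),              # next (temperature, humidity)
])
model.compile("adam", "mse")

x = np.random.rand(32, 24, 2).astype("float32")
y = np.random.rand(32, 2).astype("float32")
model.fit(x, y, epochs=1, verbose=0)

pred = model.predict(x, verbose=0)
rmse = np.sqrt(((pred - y) ** 2).mean(axis=0))  # per-variable RMSE
mae = np.abs(pred - y).mean(axis=0)             # per-variable MAE
print("RMSE (temp, RH):", rmse, "MAE (temp, RH):", mae)
```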


Author(s):  
Baoquan Wang ◽  
Tonghai Jiang ◽  
Xi Zhou ◽  
Bo Ma ◽  
Fan Zhao ◽  
...  

For anomaly detection in time series data, supervised methods require labeled data. In existing semi-supervised methods, the range of the outlier factor varies with the data, the model, and time, so a threshold for determining abnormality is difficult to obtain; in addition, the computational cost of deriving outlier factors from the other data points in the data set is very large. These issues make such methods difficult to apply in practice. This paper proposes a framework named LSTM-VE, which uses clustering combined with a visualization method to roughly label normal data and then uses the normal data to train a long short-term memory (LSTM) neural network for semi-supervised anomaly detection. The variance error (VE) of the classification probability sequence for the normal-data category is used as the outlier factor. The framework makes deep-learning-based anomaly detection practical to apply, and using VE avoids the shortcomings of existing outlier factors and achieves better performance. In addition, the framework is easy to extend because the LSTM neural network can be replaced with other classification models. Experiments on labeled and real unlabeled data sets show that the framework outperforms replicator neural networks with reconstruction error (RNN-RS) and has good scalability as well as practicability.
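A hedged sketch of the VE idea as described: an LSTM classifier is trained on roughly labeled normal windows, and each window is then scored by the variance of its predicted normal-class probability sequence. The window size, network size, and thresholding step are assumptions, not the paper's exact construction.

```python
# Score windows by the variance of the per-step normal-class probability (VE).
import numpy as np
from tensorflow.keras import layers, models

clf = models.Sequential([
    layers.Input(shape=(20, 1)),
    layers.LSTM(32, return_sequences=True),
    layers.TimeDistributed(layers.Dense(1, activation="sigmoid")),  # p(normal) per step
])
clf.compile("adam", "binary_crossentropy")

x_norm = np.random.rand(64, 20, 1).astype("float32")  # roughly labeled normal windows
clf.fit(x_norm, np.ones((64, 20, 1), "float32"), epochs=1, verbose=0)

def variance_error(window):
    p = clf.predict(window[None], verbose=0)[0, :, 0]  # probability sequence
    return float(np.var(p))                            # VE as the outlier factor

score = variance_error(np.random.rand(20, 1).astype("float32"))
print("VE outlier factor:", score)  # flag as anomalous above a chosen threshold
```

Because the score depends only on the classifier's output sequence, swapping the LSTM for another sequence classifier leaves the VE computation unchanged, which is the extensibility the abstract claims.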


Electronics ◽  
2019 ◽  
Vol 8 (8) ◽  
pp. 876 ◽  
Author(s):  
Renzhuo Wan ◽  
Shuping Mei ◽  
Jun Wang ◽  
Min Liu ◽  
Fan Yang

Multivariate time series prediction has been widely studied in power energy, aerology, meteorology, finance, transportation, and other fields. Traditional modeling methods involve complex patterns and are inefficient at capturing the long-term multivariate dependencies needed for the desired forecasting accuracy. To address such concerns, various deep learning models based on Recurrent Neural Network (RNN) and Convolutional Neural Network (CNN) methods have been proposed. To improve prediction accuracy for aperiodic multivariate time series data, in this article the Beijing PM2.5 and ISO-NE datasets are analyzed with a novel Multivariate Temporal Convolution Network (M-TCN) model. In this model, multivariate time series prediction is constructed as a sequence-to-sequence scenario for non-periodic datasets, and multichannel residual blocks in parallel with an asymmetric structure, based on a deep convolutional neural network, are proposed. The results are compared with those of strong competing algorithms, namely long short-term memory (LSTM), convolutional LSTM (ConvLSTM), Temporal Convolution Network (TCN), and Multivariate Attention LSTM-FCN (MALSTM-FCN), and indicate significant improvements in the prediction accuracy, robustness, and generalization of our model.
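The building block behind TCN-style models is the dilated causal convolution with a residual connection; the sketch below shows that block, while the M-TCN specifics (multichannel parallel branches, asymmetric structure) are not reproduced. Sequence length, channel counts, and data are illustrative.

```python
# A TCN-style residual block: dilated causal convolutions plus a skip path.
import numpy as np
from tensorflow.keras import layers, models

def tcn_block(x, filters, dilation):
    y = layers.Conv1D(filters, 3, padding="causal", dilation_rate=dilation,
                      activation="relu")(x)
    y = layers.Conv1D(filters, 3, padding="causal", dilation_rate=dilation)(y)
    skip = layers.Conv1D(filters, 1)(x)  # 1x1 conv matches channels for the sum
    return layers.Activation("relu")(layers.Add()([y, skip]))

inp = layers.Input(shape=(48, 8))  # 48 time steps of 8 variables
h = tcn_block(inp, 32, 1)
h = tcn_block(h, 32, 2)            # doubling the dilation widens the receptive field
h = tcn_block(h, 32, 4)
out = layers.Dense(1)(layers.GlobalAveragePooling1D()(h))
model = models.Model(inp, out)
model.compile("adam", "mse")
model.fit(np.random.rand(16, 48, 8).astype("float32"),
          np.random.rand(16, 1).astype("float32"), epochs=1, verbose=0)
```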


2020 ◽  
Vol 12 (3) ◽  
pp. 454 ◽  
Author(s):  
Zhengxin Zeng ◽  
Moeness G. Amin ◽  
Tao Shan

Hand and arm gesture recognition using the radio frequency (RF) sensing modality proves valuable in man–machine interfaces and smart environments. In this paper, we use time-series analysis methods to accurately measure the similarity of the micro-Doppler (MD) signatures between the training and test data, thus providing improved gesture classification. We characterize the MD signatures by the maximum instantaneous Doppler frequencies depicted in the spectrograms. In particular, we apply two machine learning (ML) techniques, namely the dynamic time warping (DTW) method and the long short-term memory (LSTM) network. Both methods take into account the values as well as the temporal evolution and characteristics of the time series data. It is shown that the DTW method achieves high gesture classification rates and is robust to time misalignment.
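DTW is the standard dynamic-programming alignment distance, sketched below together with a simple nearest-neighbor classifier over Doppler-frequency envelopes; the nearest-neighbor wrapper and the toy sinusoidal templates are assumptions for illustration.

```python
# Dynamic time warping distance plus a 1-nearest-neighbor gesture classifier.
import numpy as np

def dtw_distance(a, b):
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            # best of match, insertion, deletion
            D[i, j] = cost + min(D[i - 1, j - 1], D[i - 1, j], D[i, j - 1])
    return D[n, m]

def classify(test, templates):
    """templates: list of (label, envelope) pairs from training spectrograms."""
    return min(templates, key=lambda t: dtw_distance(test, t[1]))[0]

t = np.linspace(0, 1, 100)
templates = [("push", np.sin(2 * np.pi * t)), ("pull", -np.sin(2 * np.pi * t))]
print(classify(np.sin(2 * np.pi * t + 0.1), templates))  # -> "push"
```

Because the warping path can stretch or compress one axis against the other, the distance tolerates the time misalignment the abstract highlights.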


2021 ◽  
Vol 2021 ◽  
pp. 1-15
Author(s):  
Xinyi Hu ◽  
Chunxiang Gu ◽  
Fushan Wei

The development of the Internet has led to increasingly complex network encrypted traffic. Identifying the specific classes of network encrypted traffic is an important part of maintaining information security. Traditional machine-learning-based traffic classification largely relies on expert experience; as an end-to-end model, a deep neural network can minimize human intervention. This paper proposes the CLD-Net model, which can effectively distinguish classes of network encrypted traffic. By segmenting and recombining the packet payloads of the raw flow, it automatically extracts payload-related features, and by changing the representation of the packet interval, it integrates packet-interval information into the model. We use the ability of a Convolutional Neural Network (CNN) to distinguish image classes to learn and classify the grayscale images into which the raw flow is preprocessed, and then use the effectiveness of a Long Short-Term Memory (LSTM) network on time series data to further enhance the model’s classification ability. Finally, through feature reduction, the high-dimensional features learned by the neural network are reduced to 8 dimensions in order to distinguish 8 classes of network encrypted traffic. To verify the effectiveness of the CLD-Net model, we conduct experiments on the public ISCX dataset. The results show that our proposed model can distinguish whether unknown network traffic uses a Virtual Private Network (VPN) with an accuracy of 98% and can identify the specific traffic type (chat, audio, or file) of Facebook and Skype applications with an accuracy of 92.89%.
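A hedged sketch in the spirit of this CNN-then-LSTM pipeline: a small CNN encodes each per-packet grayscale image, an LSTM aggregates the packet sequence, and a softmax separates 8 traffic classes. Image size, packets per flow, and layer widths are assumptions, not the CLD-Net configuration.

```python
# Per-packet CNN features aggregated over a flow by an LSTM, 8-way softmax.
import numpy as np
from tensorflow.keras import layers, models

cnn = models.Sequential([
    layers.Input(shape=(28, 28, 1)),
    layers.Conv2D(16, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(64, activation="relu"),
])

inp = layers.Input(shape=(10, 28, 28, 1))      # 10 packets per flow as grayscale images
h = layers.TimeDistributed(cnn)(inp)           # CNN features for each packet
h = layers.LSTM(64)(h)                         # temporal aggregation across packets
out = layers.Dense(8, activation="softmax")(h) # 8 encrypted-traffic classes
model = models.Model(inp, out)
model.compile("adam", "sparse_categorical_crossentropy", metrics=["accuracy"])

x = np.random.rand(16, 10, 28, 28, 1).astype("float32")
y = np.random.randint(0, 8, size=(16,))
model.fit(x, y, epochs=1, verbose=0)
```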


2019 ◽  
Author(s):  
Frederik Kratzert ◽  
Daniel Klotz ◽  
Guy Shalev ◽  
Günter Klambauer ◽  
Sepp Hochreiter ◽  
...  

Regional rainfall-runoff modeling is an old but still largely outstanding problem in the hydrological sciences: traditional hydrological models degrade significantly in performance when calibrated for multiple basins together instead of for a single basin alone. In this paper, we propose a novel, data-driven approach using Long Short-Term Memory networks (LSTMs) and demonstrate that under a big data paradigm, this is not necessarily the case. By training a single LSTM model on 531 basins from the CAMELS data set using meteorological time series data and static catchment attributes, we were able to significantly improve performance compared to a set of several different hydrological benchmark models. Our proposed approach not only significantly outperforms hydrological models that were calibrated regionally but also achieves better performance than hydrological models that were calibrated for each basin individually. Furthermore, we propose an adaptation of the standard LSTM architecture, which we call an Entity-Aware LSTM (EA-LSTM), that allows catchment similarities to be learned and embedded as a feature layer in a deep learning model. We show that this learned catchment similarity corresponds well with what we would expect from prior hydrological understanding.
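A simplified sketch of the entity-aware idea: static catchment attributes produce a gate that modulates the dynamic meteorological inputs before a standard LSTM. The paper modifies the LSTM's input gate itself; this outside-the-cell gate is a coarser stand-in, and all dimensions here are illustrative.

```python
# Static attributes gate the dynamic forcings before a standard LSTM.
import numpy as np
from tensorflow.keras import layers, models

dyn = layers.Input(shape=(365, 5))   # a year of daily meteorological forcings
stat = layers.Input(shape=(27,))     # static catchment attributes

gate = layers.Dense(5, activation="sigmoid")(stat)         # per-feature gate in [0, 1]
gated = layers.Multiply()([dyn, layers.RepeatVector(365)(gate)])
h = layers.LSTM(64)(gated)
out = layers.Dense(1)(h)             # next-day discharge

model = models.Model([dyn, stat], out)
model.compile("adam", "mse")
model.fit([np.random.rand(8, 365, 5).astype("float32"),
           np.random.rand(8, 27).astype("float32")],
          np.random.rand(8, 1).astype("float32"), epochs=1, verbose=0)
```

Because the gate depends only on the static attributes, its activations form a fixed embedding per basin, which is what makes the learned catchment similarities inspectable.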


Author(s):  
A. Kala ◽  
S. Ganesh Vaidyanathan

Rainfall forecasting is a critical and challenging task because it depends on many climatic and weather parameters. Hence, robust and accurate rainfall forecasting models need to be created by applying various machine learning and deep learning approaches. Several automatic systems have been created to predict the weather, but their performance depends on the weather pattern, season, and location, which increases processing time. Therefore, in this work, an artificial algae long short-term memory (LSTM) deep learning network is introduced to forecast monthly rainfall. The Homogeneous Indian Monthly Rainfall Data Set (1871–2016) is used as the source of rainfall information. The gathered data are processed with an LSTM, which can handle time series data and effectively capture the dependencies within them. The most challenging phase of the LSTM training process is finding optimal network parameters such as weights and biases. To obtain them, a metaheuristic bio-inspired algorithm, the Artificial Algae Algorithm (AAA), is used. The forecasted rainfall for the testing dataset is compared with that of existing models. The results show the superiority of our model over state-of-the-art models for forecasting Indian monsoon rainfall; the LSTM combined with AAA accurately predicts the June–September monsoon.
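To illustrate the general shape of metaheuristic weight search for an LSTM, the sketch below uses a toy greedy perturbation loop as a stand-in for AAA (which is not reproduced here); the window length, network size, search budget, and data are all assumptions.

```python
# Toy population-free metaheuristic stand-in: perturb the LSTM weight vectors
# and keep the candidate with the lowest loss.
import numpy as np
from tensorflow.keras import layers, models

model = models.Sequential([layers.Input(shape=(12, 1)), layers.LSTM(8), layers.Dense(1)])
model.compile("adam", "mse")
x = np.random.rand(64, 12, 1).astype("float32")  # 12 past months of rainfall
y = np.random.rand(64, 1).astype("float32")

def loss(weights):
    model.set_weights(weights)
    return float(model.evaluate(x, y, verbose=0))

best = [w.copy() for w in model.get_weights()]
best_loss = loss(best)
rng = np.random.default_rng(0)
for _ in range(20):                               # toy search budget
    cand = [w + 0.05 * rng.standard_normal(w.shape) for w in best]
    cand_loss = loss(cand)
    if cand_loss < best_loss:                     # greedy selection step
        best, best_loss = cand, cand_loss
model.set_weights(best)
print("best loss:", best_loss)
```

AAA replaces the greedy perturbation with a population of "algal colonies" that move through the weight space; the evaluate-and-select skeleton stays the same.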


2020 ◽  
Vol 2020 ◽  
pp. 1-9
Author(s):  
Mu Qiao ◽  
Zixuan Cheng

Time series data are an extremely important type of data in the real world; they accumulate gradually over time, and due to this dynamic growth they tend to have high dimensionality and large scale. Traditional feature extraction methods fall short when performing cluster analysis on this type of data. To improve clustering performance on time series data, this study uses a recurrent neural network (RNN) to process the input data. First, an RNN variant, the long short-term memory (LSTM) network, is used to extract features from the time series data. Second, pooling is used to reduce the dimensionality of the output features of the last layer of the LSTM network. Because the series are long, the hidden layer of the LSTM network cannot retain information from all time steps, so it is difficult to obtain a compressed representation of the global information from the last layer alone. Information from the earlier hidden units must therefore be combined to cover all of the data: by stacking the outputs of all hidden units and applying a pooling operation, the dimensionality of the hidden-unit information is reduced and the memory loss caused by an excessively long sequence is compensated. Finally, considering that many time series data sets are imbalanced, the unbalanced K-means (UK-means) algorithm is used to cluster the features after dimensionality reduction. Experiments were conducted on multiple publicly available time series datasets. The results show that LSTM-based feature extraction, combined with pooling-based dimensionality reduction and clustering designed for imbalanced data, performs well on time series data.
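A sketch of this feature pipeline: an LSTM returns hidden states for every time step, mean-pooling over time compresses them into one vector per series, and a clustering step groups the vectors. Standard k-means stands in for the paper's UK-means variant, and the series length and hidden size are assumptions.

```python
# LSTM hidden states at every step, pooled over time, then clustered.
import numpy as np
from tensorflow.keras import layers, models
from sklearn.cluster import KMeans

extractor = models.Sequential([
    layers.Input(shape=(100, 1)),
    layers.LSTM(32, return_sequences=True),  # hidden state at every time step
    layers.GlobalAveragePooling1D(),         # pool over time -> one 32-d vector
])

series = np.random.rand(50, 100, 1).astype("float32")
features = extractor.predict(series, verbose=0)          # shape (50, 32)
labels = KMeans(n_clusters=3, n_init=10).fit_predict(features)
print(labels[:10])
```

Pooling over all hidden states, rather than taking only the last one, is precisely the compensation for long-sequence memory loss that the abstract describes.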

