Single Layer & Multi-layer Long Short-Term Memory (LSTM) Model with Intermediate Variables for Weather Forecasting

2018 ◽  
Vol 135 ◽  
pp. 89-98 ◽  
Author(s):  
Afan Galih Salman ◽  
Yaya Heryadi ◽  
Edi Abdurahman ◽  
Wayan Suparta

Over the decades, weather forecasting has attracted researchers from communities worldwide because of its significant effect on global human life, ranging from agriculture and air traffic control to public security. Although the formal study of weather forecasting began in the 19th century, research attention to weather forecasting tasks increased significantly once weather big data became widely available. This paper proposes a merged Long Short-Term Memory (LSTM) model for forecasting ground visibility at an airport, using a time series of a predictor variable combined with another variable acting as a moderating variable. The proposed models were tested on weather time-series data from Hang Nadim Airport, Batam. The experimental results showed that the best average accuracy for forecasting visibility, obtained with the merged LSTM model using temperature and dew point as moderating variables, was 88.6%, whereas the basic LSTM without a moderating variable reached only 83.8% (an increase of 4.8 percentage points).
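A minimal sketch (not the authors' code) of the data preparation such a merged model implies: the predictor series (visibility) is windowed together with the moderating series (temperature and dew point) so each training sample carries all three features per time step. The function name, window length, and sample values below are illustrative assumptions.

```python
# Build windowed samples that merge a predictor series (visibility)
# with moderating series (temperature, dew point). Each sample is a
# (window x 3) matrix of [visibility, temperature, dew_point] steps;
# the target is the visibility at the step after the window.

def make_merged_windows(visibility, temperature, dew_point, window=6):
    X, y = [], []
    for t in range(len(visibility) - window):
        steps = [
            [visibility[t + k], temperature[t + k], dew_point[t + k]]
            for k in range(window)
        ]
        X.append(steps)
        y.append(visibility[t + window])
    return X, y

# Synthetic hourly readings (8 steps).
vis  = [8.0, 7.5, 6.0, 5.5, 5.0, 4.8, 4.5, 4.0]
temp = [30.1, 29.8, 29.5, 29.0, 28.7, 28.5, 28.2, 28.0]
dew  = [24.0, 24.2, 24.5, 24.8, 25.0, 25.1, 25.3, 25.4]

X, y = make_merged_windows(vis, temp, dew, window=6)
print(len(X), len(X[0]), len(X[0][0]))  # 2 6 3
print(y)                                # [4.5, 4.0]
```

Each `X[i]` is then the shape an LSTM layer expects (time steps x features), with the moderating variables supplied at every step rather than as a separate input.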


Atmosphere ◽  
2021 ◽  
Vol 12 (11) ◽  
pp. 1479
Author(s):  
Xinyi Wu ◽  
Zhixin Liu ◽  
Lirong Yin ◽  
Wenfeng Zheng ◽  
Lihong Song ◽  
...  

Air pollution, being fluid, can affect a large area for a long time and harm both the ecological environment and human health. Haze, one form of air pollution, has been a critical problem since the industrial revolution. Although the actual causes of haze are varied and complicated, in this paper we found that the distributions of many gases, as well as wind power and temperature, are related to PM2.5/PM10 concentrations. Thus, based on the correlation between PM2.5/PM10 and other gaseous pollutants, and on the temporal continuity of PM2.5/PM10, we propose a multilayer long short-term memory haze prediction model. This model uses the concentrations of O3, CO, NO2, SO2, and PM2.5/PM10 over the last 24 h as inputs to predict future PM2.5/PM10 concentrations. Besides pre-processing the data, the primary approach to boosting prediction performance is adding layers on top of a single-layer long short-term memory model. We show that doing so lets the network make predictions more accurately and efficiently, and in our comparisons it yields a more accurate prediction overall.
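A minimal sketch (assumed, not the paper's code) of shaping the last 24 h of pollutant readings into one input sample for such a model: a 24 x 5 matrix of hourly concentrations, with the following hour's PM2.5 as the target. The field names and values are illustrative.

```python
# Turn hourly pollutant readings into one (24 hours x 5 pollutants)
# input sample, targeting the PM2.5 value of the hour after the window.

POLLUTANTS = ["O3", "CO", "NO2", "SO2", "PM25"]

def last_24h_sample(readings):
    """readings: list of per-hour dicts, oldest first; needs >= 25 hours."""
    assert len(readings) >= 25, "need 24 input hours plus 1 target hour"
    window = readings[-25:-1]                       # the 24 h before the target
    x = [[hour[p] for p in POLLUTANTS] for hour in window]
    y = readings[-1]["PM25"]                        # next-hour PM2.5 target
    return x, y

# 26 hours of synthetic data in which PM2.5 rises by 1 each hour.
hours = [
    {"O3": 40.0, "CO": 0.6, "NO2": 25.0, "SO2": 8.0, "PM25": 50.0 + h}
    for h in range(26)
]
x, y = last_24h_sample(hours)
print(len(x), len(x[0]))  # 24 5
print(y)                  # 75.0
```

Stacking then means each LSTM layer emits its full output sequence as the input sequence of the next layer, with only the top layer's final state feeding the regression output.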


2017 ◽  
Vol 2017 ◽  
pp. 1-7 ◽  
Author(s):  
YuKang Jia ◽  
Zhicheng Wu ◽  
Yanyan Xu ◽  
Dengfeng Ke ◽  
Kaile Su

Long Short-Term Memory (LSTM) is a kind of recurrent neural network (RNN) suited to time series, which has achieved good performance in speech recognition and image recognition. Long Short-Term Memory Projection (LSTMP) is a variant of LSTM that further optimizes the speed and performance of LSTM by adding a projection layer. As LSTM and LSTMP have performed well in pattern recognition, in this paper we combine them with Connectionist Temporal Classification (CTC) to study continuous piano note recognition for robotics. Based on the Beijing Forestry University music library, we conduct experiments comparing the recognition rates and numbers of iterations of single-layer LSTM, single-layer LSTMP, and Deep LSTM (DLSTM, LSTM with multiple layers). The single-layer LSTMP performs much better than the single-layer LSTM in both training time and recognition rate: LSTMP has fewer parameters and therefore a shorter training time, and, benefiting from the projection layer, it also achieves better accuracy. The best recognition rate of LSTMP is 99.8%. As for DLSTM, the recognition rate can reach 100% thanks to the effectiveness of the deep structure, but compared with the single-layer LSTMP, DLSTM needs more training time.
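A rough sketch of why the projection layer reduces the parameter count, following the standard LSTMP formulation (an assumption, not necessarily this paper's exact configuration): the recurrent input to the four gates shrinks from the cell size n to the projection size p, at the cost of one extra n x p projection matrix. The sizes below are illustrative.

```python
# Parameter counts (ignoring peephole connections) for a plain LSTM
# layer versus an LSTMP layer with input size i, cell size n, and
# projection size p.

def lstm_params(i, n):
    # 4 gates, each with an (i + n) x n weight matrix plus a bias of size n.
    return 4 * (n * (i + n) + n)

def lstmp_params(i, n, p):
    # Gates recur over the projected state (size p) instead of n,
    # plus an n x p projection matrix.
    return 4 * (n * (i + p) + n) + n * p

i, n, p = 128, 512, 128          # illustrative layer sizes
full = lstm_params(i, n)
proj = lstmp_params(i, n, p)
print(full, proj)                # 1312768 591872
print(proj < full)               # True
```

With p well below n, LSTMP cuts the parameter count by more than half here, which is consistent with the abstract's claim of shorter training time.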


2020 ◽  
Author(s):  
Abdolreza Nazemi ◽  
Johannes Jakubik ◽  
Andreas Geyer-Schulz ◽  
Frank J. Fabozzi
