Machine learning methods for modelling and analysis of time series signals in geoinformatics

2021
Author(s):  
Μαρία Κασελίμη

The analysis of experimental data observed at different points in time leads to new and unique problems in statistical modelling and inference. The correlation introduced by sampling adjacent points in time can severely restrict the applicability of many conventional statistical methods, which traditionally depend on the assumption that adjacent observations are independent and identically distributed. The systematic approach to answering the mathematical and statistical questions posed by these time correlations is commonly referred to as time series analysis (TSA).

Time series modelling (TSM) plays a key role in a wide range of real-life problems that have a temporal component. Modern time series problems often pose significant challenges for existing techniques in terms of their complexity, structure and size. While traditional methods have focused on parametric models informed by domain expertise, modern machine learning (ML) methods provide a means to learn temporal dynamics in a purely data-driven manner. With the increasing availability of data and computing power, machine learning has become a vital part of the next generation of time series models. Thus, there is both a great need and an exciting opportunity for the machine learning community to develop theory, models and algorithms specifically for processing and analysing time series data.

The impact of time series modelling and analysis on scientific applications can be partially documented by examining the diverse fields in which important time series problems arise. Modern time series problems are characterized by complexity. Moreover, since real-world systems often evolve under transient conditions, the signals tend to exhibit various forms of non-stationarity. Mathematical models can be categorized in many different ways.
They can be linear or non-linear, static or dynamic, continuous or discrete in time, deterministic or stochastic. The proper model selection to accurately describe a system depends on the system under study, on whether its operation is a priori known, and on the purpose of the implementation. This dissertation presents developments in non-linear and non-stationary time series models under a machine learning framework, comparing their performance in real-life application scenarios related to geoinformatics and environmental applications.

The dissertation provides a comparative analysis that evaluates the performance of several deep learning (DL) architectures on a large number of time series datasets of different nature and for different applications. Two fruitful research fields are discussed, strategically chosen to address current cross-disciplinary research priorities attracting the interest of geoinformatics communities. The first problem is ionospheric Total Electron Content (TEC) modelling, an important issue in many real-time Global Navigation Satellite System (GNSS) applications. Reliable and fast knowledge of ionospheric variations is becoming increasingly important: users of single-frequency GNSS receivers and satellite navigation systems need accurate corrections to remove signal degradation effects caused by the ionosphere. Ionospheric modelling using signal-processing techniques is the subject of this part of the contribution. The second problem is energy disaggregation, an important issue for energy efficiency and energy consumption awareness. Reliable and fast knowledge of residential energy consumption at the appliance level is increasingly important and is a key mitigation measure against energy wastage.
Energy disaggregation, or non-intrusive load monitoring (NILM), is a single-channel blind source separation problem in which the task is to estimate the consumption of each electrical appliance given only the total energy consumption. For both problems, various deep learning (DL) models are proposed that cover different aspects of the problem under study, and experimental results indicate the proposed methods' superiority compared to the current state of the art.
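The NILM setting described above can be illustrated with a toy sketch. All signal values here are made up for illustration, and the deliberately naive threshold disaggregator stands in for the dissertation's DL models: the only observed input is the aggregate mains reading, and the goal is to recover each appliance's share.

```python
import numpy as np

rng = np.random.default_rng(0)
T = 200  # number of time steps

# Simulated per-appliance power draws (watts): a fridge cycling on/off
# and a kettle with short high-power bursts. Values are illustrative.
fridge = 150.0 * (np.sin(np.arange(T) / 10.0) > 0)   # ~150 W when on
kettle = np.zeros(T)
kettle[50:60] = 2000.0                               # 2 kW burst
kettle[120:128] = 2000.0

# The only observed signal in NILM: the aggregate mains reading.
aggregate = fridge + kettle + rng.normal(0, 5.0, T)  # small sensor noise

# A deliberately naive disaggregator: attribute readings above a
# threshold to the kettle, the remainder to the fridge.
kettle_hat = np.where(aggregate > 1000.0, aggregate, 0.0)
fridge_hat = aggregate - kettle_hat

# Per-appliance mean absolute error, a common NILM evaluation metric.
mae_kettle = np.mean(np.abs(kettle_hat - kettle))
mae_fridge = np.mean(np.abs(fridge_hat - fridge))
print(mae_kettle, mae_fridge)
```

Real NILM is much harder than this sketch suggests, because appliance signatures overlap and states are not threshold-separable, which is what motivates the learned models discussed above.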

2020
Author(s):  
Pathikkumar Patel
Bhargav Lad
Jinan Fiaidhi

During the last few years, RNN models have been used extensively and have proven well suited to sequence and text data. RNNs have achieved state-of-the-art performance in several applications such as text classification, sequence-to-sequence modelling and time series forecasting. In this article we review different machine learning and deep learning approaches for text data and examine the results obtained with these methods. This work also explores the use of transfer learning in NLP and how it affects model performance on a specific application: sentiment analysis.


2021
Vol 13 (3)
pp. 67
Author(s):  
Eric Hitimana
Gaurav Bajpai
Richard Musabe
Louis Sibomana
Jayavel Kayalvizhi

Many countries worldwide face challenges in enforcing fire-prevention measures in buildings. The most critical issues are the localization, identification and detection of room occupants. The Internet of Things (IoT), combined with machine learning, has been shown to increase the smartness of buildings by providing real-time data acquisition through sensors and actuators for prediction mechanisms. This paper proposes the implementation of an IoT framework to capture indoor environmental parameters as multivariate occupancy time-series data. The Long Short-Term Memory (LSTM) deep learning algorithm is applied to infer the presence of human beings. An experiment was conducted in an office room using the multivariate time series as predictors in a regression forecasting problem. The results demonstrate that the developed system can acquire, process and store environmental information. The collected information was fed to the LSTM algorithm and compared with other machine learning algorithms: Support Vector Machine, Naïve Bayes Network and Multilayer Perceptron Feed-Forward Network. The outcomes of the parametric calibrations demonstrate that LSTM performs best in the context of the proposed application.
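A common way to feed multivariate sensor readings to an LSTM, as in the setup above, is to slice them into fixed-length windows shaped (samples, timesteps, features). A minimal sketch with made-up sensor values and an illustrative window length (not the paper's settings):

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical indoor readings sampled once a minute:
# columns = temperature (C), humidity (%), CO2 (ppm).
readings = np.column_stack([
    22 + rng.normal(0, 0.3, 300),
    45 + rng.normal(0, 1.0, 300),
    600 + rng.normal(0, 30.0, 300),
])
occupancy = rng.integers(0, 2, 300)  # illustrative 0/1 labels

def make_windows(x, y, timesteps):
    """Slice a (T, features) series into overlapping windows.

    Returns X with shape (samples, timesteps, features) -- the input
    layout expected by recurrent models such as an LSTM -- and the
    label aligned with the step just after each window.
    """
    X = np.stack([x[i:i + timesteps] for i in range(len(x) - timesteps)])
    return X, y[timesteps:]

X, y = make_windows(readings, occupancy, timesteps=30)
print(X.shape, y.shape)  # (270, 30, 3) (270,)
```

The windowed arrays could then be passed to any sequence model; the non-recurrent baselines mentioned above (SVM, Naïve Bayes, MLP) would instead see each window flattened to a single feature vector.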


2020
Vol 14
Author(s):  
Yaqing Zhang
Jinling Chen
Jen Hong Tan
Yuxuan Chen
Yunyi Chen
...  

Emotion is the human brain's reaction to objective things. In real life, human emotions are complex and changeable, so research into emotion recognition is of great significance for real-life applications. Recently, many deep learning and machine learning methods have been widely applied to emotion recognition based on EEG signals. However, traditional machine learning methods have a major disadvantage: the feature extraction process is usually cumbersome and relies heavily on human experts. End-to-end deep learning methods then emerged as an effective way to address this disadvantage, working from raw signal features and time-frequency spectra. Here, we investigated the application of several deep learning models to EEG-based emotion recognition: deep neural networks (DNN), convolutional neural networks (CNN), long short-term memory (LSTM), and a hybrid of CNN and LSTM (CNN-LSTM). The experiments were carried out on the well-known DEAP dataset. Experimental results show that the CNN and CNN-LSTM models achieved high classification performance in EEG-based emotion recognition, with accuracies on raw data of 90.12% and 94.17%, respectively. The DNN model was not as accurate as the other models, but its training was fast. The LSTM model was not as stable as the CNN and CNN-LSTM models; moreover, with the same number of parameters, the LSTM trained much more slowly and had difficulty converging. Additional comparison experiments on parameters, including epochs, learning rate and dropout probability, were also conducted. The comparisons show that the DNN model converged to its optimum with fewer epochs and a higher learning rate, whereas the CNN model needed more epochs to learn. As for dropout probability, reducing the parameter by ~50% each time was appropriate.
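The time-frequency spectra mentioned above are typically computed with a short-time Fourier transform. A minimal numpy sketch on a synthetic signal; the window length, hop and signal content are illustrative choices, not the paper's settings (DEAP's preprocessed EEG is sampled at 128 Hz):

```python
import numpy as np

fs = 128                     # sampling rate, Hz
t = np.arange(0, 4, 1 / fs)  # 4-second synthetic "EEG" segment
# Illustrative signal: a 10 Hz alpha-band sine plus noise.
sig = (np.sin(2 * np.pi * 10 * t)
       + 0.5 * np.random.default_rng(2).normal(size=t.size))

def spectrogram(x, win=128, hop=64):
    """Magnitude spectrogram via a short-time Fourier transform.

    Returns an array of shape (frames, win // 2 + 1): one row of
    frequency-bin magnitudes per Hann-windowed frame.
    """
    window = np.hanning(win)
    frames = [x[i:i + win] * window for i in range(0, len(x) - win + 1, hop)]
    return np.abs(np.fft.rfft(frames, axis=1))

S = spectrogram(sig)
# With win = 128 at 128 Hz, each bin spans 1 Hz, so the 10 Hz
# component should dominate bin index 10.
print(S.shape, int(np.argmax(S.mean(axis=0))))
```

In an end-to-end setup such images (or the raw windows themselves) become the CNN's input, replacing hand-crafted features designed by experts.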


2021
Author(s):  
Jan Wolff
Ansgar Klimke
Michael Marschollek
Tim Kacprowski

Introduction: The COVID-19 pandemic has strong effects on most health care systems and individual service providers. Forecasting admissions can help with the efficient organisation of hospital care. We aimed to forecast the number of admissions to psychiatric hospitals before and during the COVID-19 pandemic, comparing the performance of machine learning models and time series models. This would eventually support timely resource allocation for optimal treatment of patients.

Methods: We used admission data from 9 psychiatric hospitals in Germany between 2017 and 2020. We compared machine learning models with time series models in weekly, monthly and yearly forecasting, before and during the COVID-19 pandemic. Our models were trained and validated on data from the first two years and tested in prospectively sliding time windows over the last two years.

Results: A total of 90,686 admissions were analysed. The models explained up to 90% of the variance in hospital admissions in 2019 and 75% in 2020, under the effects of the COVID-19 pandemic. The best models substantially outperformed a one-step seasonal naive forecast (seasonal mean absolute scaled error (sMASE) 2019: 0.59, 2020: 0.76). The best model in 2019 was a machine learning model (elastic net, mean absolute error (MAE): 7.25). The best model in 2020 was a time series model (exponential smoothing state space model with Box-Cox transformation, ARMA errors, and trend and seasonal components; MAE: 10.44), which adjusted more quickly to the shock effects of the COVID-19 pandemic. Models forecasting admissions one week in advance did not perform better than monthly and yearly models in 2019, but they did in 2020. The most important features for the machine learning models were calendrical variables.

Conclusion: Model performance did not vary much between modelling approaches before the COVID-19 pandemic, and established forecasts were substantially better than one-step seasonal naive forecasts. However, weekly time series models adjusted more quickly to the COVID-19-related shock effects. In practice, different forecast horizons could be used simultaneously to allow both early planning and quick adjustments to external effects.
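The seasonal MASE reported above scales a model's absolute forecast error by the in-sample error of a one-step seasonal naive forecast, so values below 1 mean the model beat that baseline. A minimal sketch with made-up counts and an illustrative seasonal period (the study's data and period differ):

```python
import numpy as np

def seasonal_mase(y_true, y_pred, y_train, m):
    """Mean absolute scaled error against a seasonal naive baseline.

    The denominator is the in-sample MAE of forecasting each training
    value with the value observed m steps (one season) earlier.
    """
    y_train = np.asarray(y_train, dtype=float)
    naive_mae = np.mean(np.abs(y_train[m:] - y_train[:-m]))
    errors = np.abs(np.asarray(y_true, dtype=float) - np.asarray(y_pred, dtype=float))
    return np.mean(errors) / naive_mae

# Illustrative series with a seasonal period of m = 3.
y_train = [10, 20, 30, 12, 22, 32, 11, 21, 31]  # history used for scaling
y_true = [12, 23, 30]                           # held-out observations
y_pred = [11, 22, 31]                           # some model's forecasts
print(round(seasonal_mase(y_true, y_pred, y_train, m=3), 2))  # -> 0.67
```

Because the denominator is fixed by the training data, sMASE is comparable across hospitals and periods with very different admission volumes, which is why it suits a multi-site study like the one above.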

