Prediction of Building’s Thermal Performance Using LSTM and MLP Neural Networks

2020 ◽  
Vol 10 (21) ◽  
pp. 7439 ◽  
Author(s):  
Miguel Martínez Comesaña ◽  
Lara Febrero-Garrido ◽  
Francisco Troncoso-Pastoriza ◽  
Javier Martínez-Torres

Accurate prediction of building indoor temperatures and thermal demand is of great help in controlling and optimizing the energy performance of a building. However, building thermal inertia and lag give rise to complex nonlinear systems that are difficult to model. In this context, the application of artificial neural networks (ANNs) in buildings has grown considerably in recent years. The aim of this work is to study the thermal inertia of a building by developing an innovative methodology using multilayer perceptron (MLP) and long short-term memory (LSTM) neural networks. This approach was applied to a public library building located in the north of Spain. A comparison between the prediction errors according to the number of time lags introduced in the models has been carried out. Moreover, the accuracy of the models was measured using the CV(RMSE), as advised by ASHRAE. The main novelty of this work lies in the analysis of the building's inertia, through machine learning algorithms, by observing the information provided by the input time lags in the models. The results of the study prove that the best models are those that consider the thermal lag. Errors below 15% for thermal demand and below 2% for indoor temperatures were achieved with the proposed methodology.
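The CV(RMSE) accuracy metric mentioned above is the RMSE normalised by the mean of the measured series, expressed in percent; a minimal sketch under that definition (the function name and toy temperature data are illustrative, not the authors' code):

```python
import numpy as np

def cv_rmse(measured, predicted):
    """Coefficient of variation of the RMSE, in percent:
    RMSE divided by the mean of the measured values."""
    measured = np.asarray(measured, dtype=float)
    predicted = np.asarray(predicted, dtype=float)
    rmse = np.sqrt(np.mean((measured - predicted) ** 2))
    return 100.0 * rmse / measured.mean()

# Toy indoor-temperature series (degrees C); a perfect model gives 0%.
t_measured  = [20.0, 21.0, 22.0, 21.5]
t_predicted = [20.2, 20.8, 22.1, 21.4]
print(round(cv_rmse(t_measured, t_predicted), 2))  # → 0.75
```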

Energies ◽  
2021 ◽  
Vol 14 (16) ◽  
pp. 5188
Author(s):  
Martín Pensado-Mariño ◽  
Lara Febrero-Garrido ◽  
Estibaliz Pérez-Iribarren ◽  
Pablo Eguía Oller ◽  
Enrique Granada-Álvarez

Accurate forecasting of a building's thermal performance can help to optimize its energy consumption. In addition, obtaining the Heat Loss Coefficient (HLC) allows characterizing the thermal envelope of the building under conditions of use. The aim of this work is to study the thermal inertia of a building by developing a new methodology based on Long Short-Term Memory (LSTM) neural networks. This approach was applied to the Rectorate building of the University of the Basque Country (UPV/EHU), located in the north of Spain. A comparison of different time lags selected to capture the thermal inertia has been carried out using the CV(RMSE) and MBE errors, as advised by ASHRAE. The main contribution of this work lies in the analysis of thermal inertia detection and its influence on the thermal behavior of the building, obtaining a model capable of predicting the thermal demand with an error between 12 and 21%. Moreover, the viability of LSTM neural networks to estimate the HLC of an in-use building with an error below 4% was demonstrated.
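Capturing thermal inertia through time lags amounts to reshaping the measured series into overlapping lagged windows before feeding the LSTM; a minimal numpy sketch under that assumption (function name and toy data are illustrative, not the paper's pipeline):

```python
import numpy as np

def make_lagged_windows(series, n_lags):
    """Turn a 1-D series into (X, y) pairs where each sample holds
    the n_lags previous values and y is the value that follows."""
    series = np.asarray(series, dtype=float)
    X = np.stack([series[i:i + n_lags] for i in range(len(series) - n_lags)])
    y = series[n_lags:]
    return X, y

X, y = make_lagged_windows([1, 2, 3, 4, 5, 6], n_lags=3)
print(X.shape, y.shape)  # (3, 3) (3,)
print(X[0], y[0])        # [1. 2. 3.] 4.0
```

Each row of `X` would be one input sequence for the recurrent network, and varying `n_lags` reproduces the time-lag comparison described above.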


2020 ◽  
Vol 6 ◽  
pp. e279
Author(s):  
Nicola Uras ◽  
Lodovica Marchesi ◽  
Michele Marchesi ◽  
Roberto Tonelli

In this article we forecast the daily closing price series of the Bitcoin, Litecoin and Ethereum cryptocurrencies, using data on prices and volumes of prior days. Cryptocurrency price behaviour is still largely unexplored, presenting new opportunities for researchers and economists to highlight similarities and differences with standard financial prices. We compared our results with various benchmarks: one recent work on Bitcoin price forecasting that follows different approaches, a well-known paper that uses Intel, National Bank and Microsoft daily NASDAQ closing prices spanning a 3-year interval, and another, more recent paper that gives quantitative results on stock market index predictions. We followed different approaches in parallel, implementing both statistical techniques and machine learning algorithms: the Simple Linear Regression (SLR) model for univariate series forecasting using only closing prices, and the Multiple Linear Regression (MLR) model for multivariate series using both price and volume data. We also used two artificial neural networks: the Multilayer Perceptron (MLP) and Long Short-Term Memory (LSTM). While the entire time series proved to be indistinguishable from a random walk, partitioning the datasets into shorter sequences, representing different price “regimes”, allows precise forecasts to be obtained, as evaluated in terms of Mean Absolute Percentage Error (MAPE) and relative Root Mean Square Error (relative RMSE). In this case the best results are obtained using more than one previous price, thus confirming the existence of time regimes different from random walks. Our models also perform well in terms of time complexity, and provide overall results better than those obtained in the benchmark studies, improving the state of the art.
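The two evaluation metrics named above can be sketched as follows; the normalisation chosen for relative RMSE here (mean of the actual series) is one common form and may differ from the paper's exact definition, and the toy price data are illustrative:

```python
import numpy as np

def mape(actual, forecast):
    """Mean Absolute Percentage Error, in percent."""
    actual, forecast = np.asarray(actual, float), np.asarray(forecast, float)
    return 100.0 * np.mean(np.abs((actual - forecast) / actual))

def relative_rmse(actual, forecast):
    """RMSE normalised by the mean of the actual series, in percent."""
    actual, forecast = np.asarray(actual, float), np.asarray(forecast, float)
    return 100.0 * np.sqrt(np.mean((actual - forecast) ** 2)) / actual.mean()

prices    = [100.0, 102.0, 101.0, 105.0]
predicted = [ 99.0, 103.0, 102.0, 104.0]
print(round(mape(prices, predicted), 2))  # → 0.98
```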


2020 ◽  
Vol 10 (19) ◽  
pp. 6755
Author(s):  
Carlos Iturrino Garcia ◽  
Francesco Grasso ◽  
Antonio Luchetta ◽  
Maria Cristina Piccirilli ◽  
Libero Paolucci ◽  
...  

The use of electronic loads has improved many aspects of everyday life, permitting more efficient, precise and automated processes. As a drawback, the nonlinear behavior of these systems entails the injection of electrical disturbances into the power grid that can cause distortion of voltage and current. In order to adopt countermeasures, it is important to detect and classify these disturbances. To do this, several machine learning algorithms are currently exploited. Among them, for the present work, the Long Short-Term Memory (LSTM), the Convolutional Neural Network (CNN), the Convolutional Neural Network Long Short-Term Memory (CNN-LSTM) and the CNN-LSTM with adjusted hyperparameters are compared. As a preliminary stage of the research, the voltage and current time signals are simulated using MATLAB Simulink. From the simulation results, current and voltage datasets are acquired, with which the identification algorithms are trained, validated and tested. These datasets include simulations of several disturbances such as Sag, Swell, Harmonics, Transient, Notch and Interruption. Data augmentation techniques are used to increase the variability of the training and validation datasets and obtain a more generalized result. After that, the networks are fed with an experimental dataset of voltage and current field measurements containing the disturbances mentioned above. The networks have been compared, resulting in a 79.14% correct classification rate for the LSTM network versus 84.58% for the CNN, 84.76% for the CNN-LSTM and 83.66% for the CNN-LSTM with adjusted hyperparameters. All of these networks are tested using real measurements.
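For 1-D waveforms, data augmentation commonly means perturbing each signal with random amplitude scaling and additive noise; the sketch below illustrates that idea under those assumptions (the paper's exact transforms are not specified here, and all names and parameters are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)

def augment_signal(signal, noise_std=0.01, scale_range=(0.95, 1.05)):
    """Produce a perturbed copy of a waveform: random amplitude scaling
    plus additive Gaussian noise, two common 1-D augmentations."""
    signal = np.asarray(signal, dtype=float)
    scale = rng.uniform(*scale_range)
    noise = rng.normal(0.0, noise_std, size=signal.shape)
    return scale * signal + noise

t = np.linspace(0, 1, 500)
voltage = np.sin(2 * np.pi * 50 * t)   # ideal 50 Hz waveform
augmented = augment_signal(voltage)
print(augmented.shape)  # (500,)
```

Repeating the call with different random draws yields many distinct training variants of each simulated disturbance.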


Author(s):  
R Vinayakumar ◽  
K.P. Soman ◽  
Prabaharan Poornachandran

This article describes how sequential data modeling is a relevant task in cybersecurity. Sequences have temporal characteristics attributed to them either explicitly or implicitly. Recurrent neural networks (RNNs) are a subset of artificial neural networks (ANNs) that have emerged as a powerful, principled approach to learning dynamic temporal behaviors in large-scale sequence data of arbitrary length. Furthermore, stacked recurrent neural networks (S-RNNs) have the potential to learn complex temporal behaviors quickly, including sparse representations. To leverage this, the authors model network traffic as a time series, particularly transmission control protocol / internet protocol (TCP/IP) packets in a predefined time range, with a supervised learning method, using millions of known good and bad network connections. To find the best architecture, the authors conduct a comprehensive review of various RNN architectures, their network parameters and network structures. As a test bed, they use the existing benchmark Defense Advanced Research Projects Agency (DARPA) / Knowledge Discovery and Data Mining (KDD) Cup ’99 intrusion detection (ID) contest data set to show the efficacy of these various RNN architectures. All the deep learning experiments are run for up to 1000 epochs with a learning rate in the range [0.01–0.5] on GPU-enabled TensorFlow, and the experiments with traditional machine learning algorithms are done using Scikit-learn. The families of RNN architectures achieved a low false positive rate in comparison to the traditional machine learning classifiers. The primary reason is that RNN architectures are able to store information for long-term dependencies over time lags and to adjust to successive connection sequence information. In addition, the effectiveness of RNN architectures is also shown on the UNSW-NB15 data set.


Energies ◽  
2019 ◽  
Vol 12 (1) ◽  
pp. 149 ◽  
Author(s):  
Salah Bouktif ◽  
Ali Fiaz ◽  
Ali Ouni ◽  
Mohamed Adel Serhani

Time series analysis using long short-term memory (LSTM) deep learning is a very attractive strategy for achieving accurate electric load forecasting. Although it outperforms most machine learning approaches, the LSTM forecasting model still reveals a lack of validity because it neglects several characteristics of the electric load exhibited by time series. In this work, we propose a load-forecasting model based on an enhanced LSTM that explicitly considers the periodicity characteristic of the electric load by using multiple sequences of input time lags. An autoregressive model is developed together with an autocorrelation function (ACF) to regress consumption and identify the most relevant time lags to feed the multi-sequence LSTM. Two variations of deep neural networks, LSTM and gated recurrent unit (GRU), are developed for both single- and multi-sequence time-lagged features. These models are compared to each other and to a spectrum of data mining benchmark techniques including artificial neural networks (ANN), boosting, and bagging ensemble trees. Metropolitan France's electricity consumption data are used to train and validate our models. The obtained results show that the GRU- and LSTM-based deep learning models with multi-sequence time lags achieve higher performance than other alternatives, including the single-sequence LSTM. It is demonstrated that the new models can capture critical characteristics of complex time series (i.e., periodicity) by encompassing past information from multiple timescale sequences. These models subsequently achieve predictions that are more accurate.
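Using the ACF to identify the most relevant time lags, as described above, can be sketched as a sample autocorrelation plus a top-k selection; the function names and the toy daily-periodic load below are illustrative assumptions, not the paper's implementation:

```python
import numpy as np

def autocorrelation(series, max_lag):
    """Sample autocorrelation function (ACF) for lags 1..max_lag."""
    x = np.asarray(series, dtype=float)
    x = x - x.mean()
    var = np.sum(x * x)
    return np.array([np.sum(x[:len(x) - k] * x[k:]) / var
                     for k in range(1, max_lag + 1)])

def top_lags(series, max_lag, n):
    """Lags with the strongest absolute autocorrelation -- candidates
    for the multi-sequence time-lagged inputs."""
    acf = autocorrelation(series, max_lag)
    return 1 + np.argsort(-np.abs(acf))[:n]

# Toy daily-periodic hourly load: the ACF is strong at the 24 h period.
hours = np.arange(24 * 14)
load = 10 + 3 * np.sin(2 * np.pi * hours / 24)
best = top_lags(load, max_lag=48, n=3)
print(best)
```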


2020 ◽  
Vol 17 (3) ◽  
pp. 705-715
Author(s):  
Chuyue Zhang ◽  
Xiaofan Zhao ◽  
Manchun Cai ◽  
Dawei Wang ◽  
Luzhe Cao

In this paper, we propose a new model to predict the age and number of suspects through feature modeling of historical data. We discretize the case information into values of 20 dimensions. After feature selection, we use 9 machine learning algorithms and Deep Neural Networks to extract the numerical features. In addition, we use Convolutional Neural Networks and Long Short-Term Memory to extract the text features of the case description. These two types of features are fused and fed into a fully connected layer and a softmax layer. This work is an extension of our short conference proceedings paper. The experimental results show that the new model improved accuracy by 3% in predicting the age of suspects and by 12% in predicting the number of suspects. To the best of our knowledge, this is the first work to combine machine learning and deep learning in crime prediction.
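At its simplest, fusing the numerical features with the text features before the fully connected layer is a concatenation along the feature axis; the dimensions below are illustrative assumptions, not the paper's:

```python
import numpy as np

rng = np.random.default_rng(1)

# Numeric branch: batch of 32 cases, 20 discretized case dimensions.
numeric_features = rng.random((32, 20))
# Text branch: 128-dim case-description embedding per case.
text_features = rng.random((32, 128))

# Feature-level fusion: concatenate along the feature axis, then the
# result would be passed to the fully connected and softmax layers.
fused = np.concatenate([numeric_features, text_features], axis=1)
print(fused.shape)  # (32, 148)
```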


Author(s):  
Nikita Laptev ◽  
Vladislav Laptev ◽  
Olga Gerget ◽  
Dmitriy Kolpashchikov

The article describes a feasibility study to assess the use of neural networks and traditional machine learning algorithms to solve various problems, including image processing. A brief description of some traditional machine learning algorithms, as well as an automated service for choosing the best method for a specific task, is given. The authors also describe the features of artificial neural networks and the most popular areas for their application. An algorithm for solving the problem of detecting fire-hazardous objects and localizing a fire source in a forest using video sequence frames is presented. The article compares the characteristics of artificial neural network models according to the following criteria: underlying architecture, the number of analyzed frames, the size of the input image, and the transfer learning model used as a feature-vector composing network. A comparative analysis of traditional machine learning algorithms and neural networks with long short-term memory in the problem of classifying forest fire hazards is made. A solution to localizing the source of fire based on clustering is described. A hybrid algorithm for finding a fire source in a forest is developed and illustrated.


Author(s):  
Sangeetha Rajesh ◽  
N. J. Nalini

Singer identification is a challenging task in music information retrieval because the singing voice is combined with instrumental music. Previous approaches focus on identifying singers based on individual features extracted from the music clips. The objective of this work is to combine Mel Frequency Cepstral Coefficients (MFCC) and Chroma DCT-reduced Pitch (CRP) features for a singer identification (SID) system using machine learning techniques. The proposed system has two main phases. In the feature extraction phase, MFCC, ΔMFCC, ΔΔMFCC and CRP features are extracted from the music clips. In the identification phase, the extracted features are trained with Bidirectional Long Short-Term Memory (BLSTM)-based Recurrent Neural Networks (RNN) and Convolutional Neural Networks (CNN) and tested to identify different singer classes. The identification accuracy and Equal Error Rate (EER) are used as performance measures. Further, the experiments also demonstrate the effectiveness of score-level fusion of the MFCC and CRP features in the singer identification system. The experimental results are also compared with a baseline system using support vector machines (SVM).
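Score-level fusion of the MFCC- and CRP-based subsystems can be as simple as a weighted sum of their per-class scores; the equal weighting and toy scores below are illustrative assumptions, not the paper's tuned fusion rule:

```python
import numpy as np

def fuse_scores(scores_a, scores_b, weight=0.5):
    """Score-level fusion: weighted sum of per-class scores from
    two subsystems (e.g. an MFCC-based and a CRP-based model)."""
    s1 = np.asarray(scores_a, dtype=float)
    s2 = np.asarray(scores_b, dtype=float)
    return weight * s1 + (1.0 - weight) * s2

# Per-singer scores from two hypothetical subsystems:
mfcc_scores = [0.6, 0.3, 0.1]   # favours singer 0
crp_scores  = [0.2, 0.7, 0.1]   # favours singer 1
fused = fuse_scores(mfcc_scores, crp_scores)
print(int(np.argmax(fused)))    # → 1 (the fused decision)
```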


Atmosphere ◽  
2020 ◽  
Vol 11 (12) ◽  
pp. 1305
Author(s):  
Arthur H. Essenfelder ◽  
Francesca Larosa ◽  
Paolo Mazzoli ◽  
Stefano Bagli ◽  
Davide Broccoli ◽  
...  

This study proposes a climate service named Smart Climate Hydropower Tool (SCHT), designed as a hybrid forecast system for supporting decision-making in the context of hydropower production. SCHT is technically designed to make use of information from state-of-the-art seasonal forecasts provided by the Copernicus Climate Data Store (CDS), combined with a range of different machine learning algorithms, to perform the seasonal forecast of the accumulated inflow discharges to the reservoirs of hydropower plants. The machine learning algorithms considered include support vector regression, Gaussian processes, long short-term memory, non-linear autoregressive neural networks with exogenous inputs, and a deep-learning neural network model. Each machine learning model is trained over datasets of data recorded during past decades, and forecast performances are validated and evaluated using separate test sets, with reference to the historical average of discharge values and to simpler multiparametric regressions. Final results are presented to the users through a user-friendly web interface developed through a close connection with end-users in an effective co-design process. The methods are tested for forecasting the accumulated seasonal river discharges up to six months in advance for two catchments in Colombia, South America. Results indicate that the machine learning algorithms that make use of a complex and/or recurrent architecture can better simulate the temporal dynamic behaviour of the accumulated river discharge inflow to both case study reservoirs, thus rendering SCHT a useful tool for providing information to water resource managers for better planning the allocation of water resources among different users, and to hydropower plant managers when negotiating power purchase contracts in competitive energy markets.
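Evaluating forecasts "with reference to the historical average of discharge values" suggests a climatology-baseline skill score; a minimal sketch under that interpretation (the function name and toy discharge data are illustrative, not the study's exact evaluation):

```python
import numpy as np

def skill_vs_climatology(observed, forecast):
    """Forecast skill relative to the historical-average baseline:
    1 - MSE(forecast) / MSE(climatology). Positive values mean the
    model beats simply predicting the long-term mean."""
    obs = np.asarray(observed, dtype=float)
    fc = np.asarray(forecast, dtype=float)
    mse_model = np.mean((obs - fc) ** 2)
    mse_clim = np.mean((obs - obs.mean()) ** 2)
    return 1.0 - mse_model / mse_clim

# Toy seasonal accumulated discharges vs model forecasts:
obs = [100.0, 120.0, 80.0, 110.0]
fc  = [105.0, 115.0, 85.0, 105.0]
print(round(skill_vs_climatology(obs, fc), 3))  # → 0.886
```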


2020 ◽  
Vol 12 (11) ◽  
pp. 4471 ◽  
Author(s):  
Jack Ngarambe ◽  
Amina Irakoze ◽  
Geun Young Yun ◽  
Gon Kim

The performance of machine learning (ML) algorithms depends on the nature of the problem at hand. ML-based modeling, therefore, should employ suitable algorithms where optimum results are desired. The purpose of the current study was to explore the potential applications of ML algorithms in modeling daylight in indoor spaces and ultimately identify the optimum algorithm. We thus developed and compared the performance of four common ML algorithms: generalized linear models, deep neural networks, random forest, and gradient boosting models, in predicting the distribution of indoor daylight illuminances. We found that deep neural networks, which showed a coefficient of determination (R2) of 0.99, outperformed the other algorithms. Additionally, we explored the use of long short-term memory to forecast the distribution of daylight at a particular future time. Our results show that long short-term memory is accurate and reliable (R2 = 0.92). Our findings provide a basis for discussions on the use of ML algorithms in modeling daylight in indoor spaces, which may ultimately result in efficient tools for estimating daylight performance in the primary stages of building design and in daylight control schemes for energy efficiency.
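The coefficient of determination (R2) reported above is one minus the ratio of residual to total sum of squares; a minimal sketch (the toy illuminance values are illustrative):

```python
import numpy as np

def r_squared(observed, predicted):
    """Coefficient of determination (R^2): 1 minus the ratio of the
    residual sum of squares to the total sum of squares."""
    obs = np.asarray(observed, dtype=float)
    pred = np.asarray(predicted, dtype=float)
    ss_res = np.sum((obs - pred) ** 2)
    ss_tot = np.sum((obs - obs.mean()) ** 2)
    return 1.0 - ss_res / ss_tot

# Toy indoor illuminance values (lux) vs model predictions:
lux_measured  = [300.0, 450.0, 500.0, 350.0]
lux_predicted = [310.0, 440.0, 510.0, 340.0]
print(round(r_squared(lux_measured, lux_predicted), 3))  # → 0.984
```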

