Electric load monitoring of residential buildings using goodness of fit and multi-layer perceptron neural networks

Author(s):  
Aaron R. Rababaah ◽  
Eniye Tebekaemi
Energies ◽  
2020 ◽  
Vol 13 (3) ◽  
pp. 571 ◽  
Author(s):  
Azadeh Sadeghi ◽  
Roohollah Younes Sinaki ◽  
William A. Young ◽  
Gary R. Weckman

As the level of greenhouse gas emissions increases, so does the importance of the energy performance of buildings (EPB). Two of the main measures of EPB are a structure’s heating load (HL) and cooling load (CL). HLs and CLs depend on several variables, such as relative compactness, surface area, wall area, roof area, overall height, orientation, glazing area, and glazing area distribution. This research uses deep neural networks (DNNs) to forecast HLs and CLs for a variety of structures. The DNNs explored in this research include multi-layer perceptron (MLP) networks, and each model was developed through extensive testing across a range of layer counts, processing elements, and data preprocessing techniques. As a result, DNNs are shown to improve on traditional artificial neural network (ANN) models for modeling HLs and CLs. To extract knowledge from a trained model, a post-processing technique called sensitivity analysis (SA) was applied to the model that performed best with respect to the selected goodness-of-fit metric on an independent set of testing data. There are two forms of SA, local and global, both of which aim to determine the significance of the independent variables within a model. Local SA assumes inputs are independent of each other, while global SA does not. To further the contribution of this research, the results of a global SA, called state-based sensitivity analysis (SBSA), are compared to those obtained from a traditional local technique, called sensitivity analysis about the mean (SAAM). The results demonstrate an improvement over existing conclusions in the literature, which is of particular interest to decision-makers and designers of building structures.
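Sensitivity analysis about the mean (SAAM), the local technique the abstract refers to, can be sketched as follows: each input is swept over its observed range while the remaining inputs are held at their means, and the spread of the model output serves as that input's significance. The model, coefficients, and the two building features below are illustrative stand-ins, not the paper's trained DNN.

```python
import numpy as np

def saam(model, X, n_steps=20):
    """Sensitivity analysis about the mean (SAAM): vary one input across
    its observed range while holding the others at their means, and record
    the spread of the model's output for each input."""
    mean = X.mean(axis=0)
    sens = []
    for j in range(X.shape[1]):
        grid = np.linspace(X[:, j].min(), X[:, j].max(), n_steps)
        probe = np.tile(mean, (n_steps, 1))
        probe[:, j] = grid          # sweep only feature j
        y = model(probe)
        sens.append(y.max() - y.min())
    return np.array(sens)

# Stand-in for a trained HL model: heating load rises with surface area
# and falls with relative compactness (coefficients are illustrative).
def toy_hl_model(X):
    return 25.0 + 0.05 * X[:, 1] - 10.0 * X[:, 0]

rng = np.random.default_rng(0)
X = np.column_stack([rng.uniform(0.6, 1.0, 200),    # relative compactness
                     rng.uniform(500, 800, 200)])   # surface area (m^2)
print(saam(toy_hl_model, X))
```

Ranking the returned spreads gives the relative significance of each input; a global method such as SBSA would instead probe combinations of inputs rather than one at a time.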


2021 ◽  
Vol 13 (6) ◽  
pp. 3198
Author(s):  
Hossein Moayedi ◽  
Amir Mosavi

The need for accurate heating load (HL) approximation is the primary motivation of this research, which seeks the most efficient predictive model among several neural-metaheuristic hybrids. The proposed models are formulated by coupling a multi-layer perceptron network (MLP) with the ant lion optimization (ALO), biogeography-based optimization (BBO), dragonfly algorithm (DA), evolutionary strategy (ES), invasive weed optimization (IWO), and league champion optimization (LCA) algorithms. Each hybrid is tuned with respect to its operating population: the ALO-MLP, BBO-MLP, DA-MLP, ES-MLP, IWO-MLP, and LCA-MLP performed best with population sizes of 350, 400, 200, 500, 50, and 300, respectively. The comparison was carried out through a ranking system. Based on the overall scores (OSs) obtained, BBO (OS = 36) emerged as the most capable optimization technique, followed by ALO (OS = 27) and ES (OS = 20). Given the efficient performance of these algorithms, the corresponding MLPs are promising substitutes for traditional methods of HL analysis.
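One of the named hybrids, the ES-MLP, pairs an evolutionary strategy with an MLP by treating the flattened network weights as the individual being evolved, so no backpropagation is needed. The sketch below shows this coupling on a toy regression task; the network size, the simple (mu + lambda) scheme, and all parameter values are illustrative assumptions, not the paper's configuration.

```python
import numpy as np

rng = np.random.default_rng(1)

# Tiny one-hidden-layer MLP whose weights live in a flat vector, so a
# population-based optimizer can evolve them directly (no backprop).
def mlp(w, X, n_hidden=4):
    n_in = X.shape[1]
    W1 = w[:n_in * n_hidden].reshape(n_in, n_hidden)
    b1 = w[n_in * n_hidden:n_in * n_hidden + n_hidden]
    W2 = w[-n_hidden - 1:-1]
    b2 = w[-1]
    return np.tanh(X @ W1 + b1) @ W2 + b2

def mse(w, X, y):
    return np.mean((mlp(w, X) - y) ** 2)

# (mu + lambda) evolutionary strategy: keep the best mu parents and refill
# the population with Gaussian-mutated copies -- a simplified stand-in for
# the ES-MLP hybrid (operators and sizes here are illustrative only).
def es_train(X, y, n_weights, pop=30, mu=5, gens=60, sigma=0.3):
    P = rng.normal(0, 1, (pop, n_weights))
    for _ in range(gens):
        P = P[np.argsort([mse(w, X, y) for w in P])]   # elitist sort
        children = P[:mu][rng.integers(0, mu, pop - mu)] + \
                   rng.normal(0, sigma, (pop - mu, n_weights))
        P = np.vstack([P[:mu], children])
    return min(P, key=lambda w: mse(w, X, y))

X = rng.uniform(-1, 1, (80, 2))
y = 0.8 * X[:, 0] - 0.5 * X[:, 1]          # synthetic target
n_w = 2 * 4 + 4 + 4 + 1                    # W1 + b1 + W2 + b2
best = es_train(X, y, n_w)
print(round(mse(best, X, y), 4))
```

The other five hybrids differ only in how the population of weight vectors is updated between generations; the MLP evaluation stays the same.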


2009 ◽  
Vol 20 (2) ◽  
pp. 111-130 ◽  
Author(s):  
Ronaldo Dias ◽  
Nancy L. Garcia ◽  
Angelo Martarelli

Author(s):  
Sirous F. Yasseri ◽  
Jake Prager

This paper describes a recurrence law for explosions. The proposed law fits the historic data on explosions in residential buildings well, as well as the data on offshore installations in the North Sea. Quantified explosion risk assessment is generally performed for offshore installations, since historic data are believed not to correspond to any specific installation and may not be appropriate for use in performance-based explosion engineering, which requires a realistic load description of explosion recurrence. The goodness of fit of the model to explosion occurrence data obtained using the quantified risk assessment method is also discussed. The paper then introduces the concept of performance-based design, an attempt to design structures with predictable performance under explosion loading. Performance objectives such as life safety, collapse prevention, or immediate resumption of operation are used to define the state of an installation following a design explosion. The recurrence law is then used to associate a level of explosion load with each limit state, using a target probability of exceedance during the installation's lifetime.
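The last step, associating an explosion load with each limit state through a target probability of exceedance, can be illustrated with a hypothetical recurrence law. Assuming Poisson occurrences with an exponentially decaying annual exceedance rate (the functional form and the parameter values below are invented for illustration, not the paper's fitted law), the design load follows by inverting the lifetime exceedance probability.

```python
import math

# Hypothetical exponential recurrence law: the annual rate of explosions
# with overpressure exceeding x (bar) is LAM0 * exp(-x / X0).
LAM0, X0 = 0.1, 0.5   # events/year at x = 0, characteristic load (bar)

def annual_rate(x):
    return LAM0 * math.exp(-x / X0)

def lifetime_exceedance(x, T):
    """P(at least one exceedance of load x in T years), Poisson occurrences."""
    return 1.0 - math.exp(-annual_rate(x) * T)

def design_load(p, T):
    """Invert the recurrence law: the load whose lifetime exceedance
    probability equals the target p."""
    return -X0 * math.log(-math.log(1.0 - p) / (LAM0 * T))

T = 25.0                      # installation design life, years
x = design_load(0.1, T)       # e.g. collapse prevention: 10 % in 25 years
print(round(x, 3), round(lifetime_exceedance(x, T), 3))
```

Each performance objective (life safety, collapse prevention, immediate resumption of operation) would be assigned its own target probability p, yielding a ladder of design loads from the same recurrence law.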


2021 ◽  
Author(s):  
Juan F. Farfán-Durán ◽  
Luis Cea

<p>In recent years, the application of model ensembles has received increasing attention in the hydrological modelling community due to the interesting results reported in several studies carried out in different parts of the world. The main idea of these approaches is to combine the results of the same hydrological model or a number of different hydrological models in order to obtain more robust, better-fitting models, reducing at the same time the uncertainty in the predictions. The techniques for combining models range from simple approaches such as averaging different simulations, to more complex techniques such as least squares, genetic algorithms and more recently artificial intelligence techniques such as Artificial Neural Networks (ANN).</p><p>Despite the good results that model ensembles are able to provide, the models selected to build the ensemble have a direct influence on the results. Contrary to intuition, it has been reported that the best fitting single models do not necessarily produce the best ensemble. Instead, better results can be obtained with ensembles that incorporate models with moderate goodness of fit. This implies that the selection of the single models might have a random component in order to maximize the results that ensemble approaches can provide.</p><p>The present study is carried out using hydrological data on an hourly scale between 2008 and 2015 corresponding to the Mandeo basin, located in the Northwest of Spain. In order to obtain 1000 single models, a hydrological model was run using 1000 sets of parameters sampled randomly in their feasible space. Then, we have classified the models in 3 groups with the following characteristics: 1) The 25 single models with highest Nash-Sutcliffe coefficient, 2) The 25 single models with the highest Pearson coefficient, and 3) The complete group of 1000 single models.</p><p>The ensemble models are built with 5 models as the input of an ANN and the observed series as the output. 
Then, we applied the Random-Restart Hill-Climbing (RRHC) algorithm, choosing 5 random models in each iteration to re-train the ANN in order to identify a better ensemble. The algorithm is applied to build 50 ensembles in each group of models. Finally, the results are compared to those obtained by optimizing the model with a gradient-based method, using the following goodness-of-fit measures: the Nash-Sutcliffe coefficient (NSE), the Nash-Sutcliffe coefficient adapted for high flows (HF-NSE), the Nash-Sutcliffe coefficient adapted for low flows (LF-NSE), and the coefficient of determination (R2).</p><p>The results show that the RRHC algorithm can identify adequate ensembles. The ensembles built from the group of models selected on the NSE outperformed the gradient-optimized model in at least 3 of the 4 coefficients in 64 % of the cases, in both the calibration and validation stages, followed by the ensembles built from the group selected on the Pearson coefficient, at 56 %. In the third group, no ensembles were identified that outperformed the gradient-based method; however, most of the ensembles outperformed the 1000 individual models.</p><p><strong>Keywords: Multi-model ensemble; Single-model ensemble; Artificial Neural Networks; Hydrological Model; Random-restart Hill-climbing</strong></p>
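A minimal sketch of the RRHC search is given below, with two simplifying assumptions: the single-model runs are synthetic series rather than hydrological simulations, and the ensemble combiner is a plain average of the 5 selected members instead of the re-trained ANN.

```python
import numpy as np

rng = np.random.default_rng(2)

def nse(obs, sim):
    """Nash-Sutcliffe efficiency: 1 is a perfect fit; <= 0 means no better
    than predicting the observed mean."""
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

# Synthetic stand-in for the single-model runs: each "model" is the
# observed series plus its own error level (the study instead runs a
# hydrological model with 1000 random parameter sets).
obs = np.sin(np.linspace(0, 6, 200)) + 1.5
models = obs + rng.normal(0, rng.uniform(0.2, 1.0, 50)[:, None], (50, 200))

def rrhc(models, obs, k=5, restarts=20, steps=50):
    """Random-restart hill climbing over k-model subsets; the ensemble is
    the subset's mean series (a simple proxy for the re-trained ANN)."""
    best_idx, best_score = None, -np.inf
    n = len(models)
    for _ in range(restarts):
        idx = rng.choice(n, k, replace=False)
        score = nse(obs, models[idx].mean(axis=0))
        for _ in range(steps):
            cand = idx.copy()
            cand[rng.integers(k)] = rng.integers(n)   # swap one member
            s = nse(obs, models[cand].mean(axis=0))
            if s > score:
                idx, score = cand, s
        if score > best_score:
            best_idx, best_score = idx, score
    return best_idx, best_score

idx, score = rrhc(models, obs)
print(round(score, 3))
```

The swap move may occasionally duplicate a member within a subset; that is harmless for a sketch, while the real study re-trains the ANN combiner on each candidate subset before scoring it.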


2010 ◽  
Vol 149 (2) ◽  
pp. 249-254 ◽  
Author(s):  
A. FARIDI ◽  
M. MOTTAGHITALAB ◽  
H. DARMANI-KUHI ◽  
J. FRANCE ◽  
H. AHMADI

SUMMARY
The success of poultry meat production has been strongly related to improvements in growth and carcass yield, mainly by increasing breast proportion and reducing carcass fat. Conventional laboratory techniques for determining carcass composition are expensive, cumbersome and time consuming. These disadvantages have prompted a search for alternative methods. In this respect, the potential benefits from modelling growth are considerable. Neural networks (NNs) are a relatively new option for modelling growth in animal production systems. One self-organizing variant of the artificial NN is the group method of data handling-type NN (GMDH-type NN). The present study aimed to apply GMDH-type NNs to data from two studies with broilers in order to predict carcass energy (CEn, MJ/g) content and relative growth (g/g of body weight) of carcass components (carcass protein, breast muscle, leg and thigh muscles, carcass fat, abdominal fat, skin fat and visceral fat). The effective input variables involved in the prediction of CEn and carcass fat content using data from the first study were dietary metabolizable energy (ME, kJ/kg), crude protein (CP, g/kg of diet), fat (g/kg of diet) and crude fibre (CF, g/kg of diet). For data from the second study, the effective input variables involved in the prediction of carcass components were dietary ME (MJ/kg), CP (g/kg of diet), methionine (g/kg of diet), lysine (g/kg of diet) and body weight (kg). Quantitative examination of the goodness of fit, using R2 and error measurement indices, for the predictive models proposed by the GMDH-type NN revealed close agreement between observed and predicted values of CEn and carcass components.
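A GMDH-type NN builds itself layer by layer from pairwise quadratic (Ivakhnenko) polynomials fitted by least squares, keeping only the best-performing pairings. The sketch below implements a single such layer, scored by R2 as in the study; the synthetic data standing in for the dietary inputs and the data-generating function are assumptions for illustration.

```python
import numpy as np
from itertools import combinations

rng = np.random.default_rng(3)

# One self-organizing GMDH layer: for every pair of inputs, fit the
# quadratic Ivakhnenko polynomial by least squares and keep the pair with
# the highest R^2 (a minimal sketch; real GMDH-type NNs stack such layers
# and prune with an external selection criterion).
def poly_features(a, b):
    return np.column_stack([np.ones_like(a), a, b, a * b, a ** 2, b ** 2])

def gmdh_layer(X, y):
    best = None
    for i, j in combinations(range(X.shape[1]), 2):
        A = poly_features(X[:, i], X[:, j])
        coef, *_ = np.linalg.lstsq(A, y, rcond=None)
        pred = A @ coef
        r2 = 1 - np.sum((y - pred) ** 2) / np.sum((y - y.mean()) ** 2)
        if best is None or r2 > best[0]:
            best = (r2, (i, j), coef)
    return best

# Synthetic stand-in for the broiler data: CEn driven mainly by two of
# four dietary inputs (say ME and CP), with measurement noise.
X = rng.uniform(0, 1, (150, 4))
y = 2.0 + 1.5 * X[:, 0] * X[:, 1] + rng.normal(0, 0.05, 150)
r2, pair, _ = gmdh_layer(X, y)
print(pair, round(r2, 3))
```

Stacking further layers would feed the surviving polynomial outputs back in as new inputs, which is what makes the network "self-organizing".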

