A New Approach of Hybrid Bee Colony Optimized Neural Computing to Estimate the Soil Compression Coefficient for a Housing Construction Project

2019 ◽  
Vol 9 (22) ◽  
pp. 4912 ◽  
Author(s):  
Pijush Samui ◽  
Nhat-Duc Hoang ◽  
Viet-Ha Nhu ◽  
My-Linh Nguyen ◽  
Phuong Thao Thi Ngo ◽  
...  

In the design phase of housing projects, predicting the settlement of soil layers beneath the buildings requires the estimation of the coefficient of soil compression. This study proposes a low-cost, fast, and reliable alternative for estimating this soil parameter using a hybrid metaheuristic-optimized neural network (NN). An integrated method of the artificial bee colony (ABC) and Levenberg–Marquardt (LM) algorithms is put forward to train the NN inference model. The model delivers the response variable, the soil compression coefficient, from a set of physical properties of soil. A large-scale real-life urban project in Hai Phong city (Vietnam) was selected as a case study. Accordingly, a dataset of 441 samples with their corresponding tested values of the compression coefficient was collected and prepared during the construction phase. Experimental outcomes confirm that the proposed NN model with the hybrid ABC-LM training algorithm attains highly accurate estimates of the soil compression coefficient, with a root mean square error (RMSE) of 0.008, a mean absolute percentage error (MAPE) of 10.180%, and a coefficient of determination (R2) of 0.864. Thus, the proposed machine learning method can be a promising tool for geotechnical engineers in the design phase of housing projects.
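The abstract does not give implementation details, but the general ABC-LM idea can be sketched as a global metaheuristic search over the network weights followed by local Levenberg–Marquardt refinement. The sketch below is a minimal illustration of that two-stage scheme, assuming a single-hidden-layer network and synthetic data; the simplified bee-colony loop and all names (`abc_search`, `nn_residuals`) are hypothetical and not the authors' code.

```python
# Minimal sketch of hybrid ABC + Levenberg-Marquardt NN training (illustrative only).
import numpy as np
from scipy.optimize import least_squares

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(200, 5))                     # stand-in for soil physical properties
y = np.sin(X.sum(axis=1)) + 0.05 * rng.normal(size=200)  # stand-in for compression coefficient

H = 8                                 # hidden units
n_params = 5 * H + H + H + 1          # W1, b1, W2, b2 for a 5-H-1 network

def nn_residuals(w, X, y):
    """Residuals of a one-hidden-layer tanh network, parameters flattened in w."""
    W1 = w[:5 * H].reshape(5, H)
    b1 = w[5 * H:6 * H]
    W2 = w[6 * H:7 * H]
    b2 = w[-1]
    pred = np.tanh(X @ W1 + b1) @ W2 + b2
    return pred - y

def abc_search(n_bees=30, n_iter=100):
    """Very simplified artificial-bee-colony search over weight vectors."""
    food = rng.uniform(-1, 1, size=(n_bees, n_params))
    cost = np.array([np.sum(nn_residuals(f, X, y) ** 2) for f in food])
    for _ in range(n_iter):
        for i in range(n_bees):
            k = rng.integers(n_bees)              # random partner food source
            phi = rng.uniform(-1, 1, n_params)
            cand = food[i] + phi * (food[i] - food[k])
            c = np.sum(nn_residuals(cand, X, y) ** 2)
            if c < cost[i]:                       # greedy replacement
                food[i], cost[i] = cand, c
    return food[np.argmin(cost)]

w0 = abc_search()                                 # global exploration by ABC
fit = least_squares(nn_residuals, w0, args=(X, y), method="lm")  # local LM refinement
print("RMSE:", np.sqrt(np.mean(fit.fun ** 2)))
```

The division of labor mirrors the abstract: ABC supplies a good starting point in a multimodal weight space, and LM, which needs many more residuals than parameters, finishes the fit quickly from there.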

2011 ◽  
Vol 11 (4) ◽  
pp. 87-101 ◽  
Author(s):  
Ibrahim Mahamid

The objective of this study is to develop early cost-estimating models for road construction projects using multiple regression techniques, based on 131 sets of data collected in the West Bank in Palestine. As cost estimates are required at the early stages of a project, consideration was given to the fact that the input data for the regression models could be easily extracted from sketches or the scope definition of the project. Eleven regression models are developed to estimate the total cost of a road construction project in US dollars; five of them include bid quantities as input variables and six include road length and road width. The coefficient of determination (R2) of the developed models ranges from 0.92 to 0.98, which indicates that the values predicted by the models fit the real-life data well. The mean absolute percentage error (MAPE) of the developed regression models ranges from 13% to 31%; these results compare favorably with past research, which has shown that estimate accuracy in the early stages of a project is between ±25% and ±50%.
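As a rough illustration of this kind of early-stage model, the sketch below fits an ordinary least-squares regression of total cost on road length and width and reports the same two metrics used in the study; the synthetic data and coefficients are hypothetical stand-ins for the West Bank dataset.

```python
# Illustrative early-cost regression on road geometry (synthetic stand-in data).
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.metrics import r2_score, mean_absolute_percentage_error

rng = np.random.default_rng(1)
length_km = rng.uniform(0.5, 10, 131)   # road length, easy to read off a sketch
width_m = rng.uniform(4, 12, 131)       # road width
cost_usd = 40_000 * length_km * width_m + rng.normal(0, 30_000, 131)

X = np.column_stack([length_km, width_m])
model = LinearRegression().fit(X, cost_usd)
pred = model.predict(X)
print("R2:", r2_score(cost_usd, pred))
print("MAPE:", mean_absolute_percentage_error(cost_usd, pred))
```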


2021 ◽  
Author(s):  
Saeed Sharafi ◽  
Mehdi Mohammadi Ghaleni

Abstract The accurate estimation of reference evapotranspiration (ETref) is a crucial component in modeling hydrological and ecological cycles. The goal of this study was the calibration of 32 empirical equations used to determine ETref, falling into three classes: temperature-based, solar radiation-based, and mass transfer-based. The calibration was based on measurements taken between 1990 and 2019 at 41 synoptic stations located in the very dry, dry, semidry and humid climates of Iran. The performance of the original and calibrated empirical equations relative to the PM-FAO56 equation was evaluated using the coefficient of determination (R2), the root mean square error (RMSE), the average percentage error (APE), the mean bias error (MBE), the index of agreement (D) and the scatter index (SI). The results show that the calibrated Baier and Robertson equation among temperature-based models, the Makkink equation among solar radiation-based models and the Penman equation among mass transfer-based models performed better than the original empirical equations. The calibrated equations had, respectively, an average R2 = 0.73, 0.67 and 0.78; RMSE = 35.14, 35.02 and 30.20 mm year−1; and MBE = −5.6, −3.89 and 2.57 mm year−1. The original empirical equations had average values of R2 = 0.60, 0.37 and 0.65; RMSE = 68.34, 66.98 and 52.62 mm year−1; and MBE = −5.75, 4.26 and 8.99 mm year−1, respectively. For the very dry climate (e.g., the Zabol, Zahedan, Bam, Iranshahr and Chabahar stations), the calibrated empirical equations also significantly reduced the SI value from SI > 0.3 (poor class) to SI < 0.1 (excellent class). Therefore, the calibrated empirical equations are highly recommended for estimating ETref in different climates.
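Calibration of this kind typically means refitting an equation's empirical coefficients against PM-FAO56 estimates at each station. The sketch below shows the idea for the Makkink radiation equation, ETref = a·(Δ/(Δ+γ))·(Rs/λ) + b (classically a = 0.61, b = −0.12), using scipy's curve_fit; the synthetic inputs and the PM-FAO56 stand-in values are assumptions for illustration only.

```python
# Illustrative recalibration of the Makkink equation against PM-FAO56 values.
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(2)
Rs = rng.uniform(5, 30, 365)               # solar radiation, MJ m-2 day-1
delta_ratio = rng.uniform(0.4, 0.8, 365)   # Delta / (Delta + gamma), dimensionless
et_pm = 0.70 * delta_ratio * Rs / 2.45 - 0.1 + rng.normal(0, 0.2, 365)  # PM-FAO56 stand-in

def makkink(X, a, b):
    dr, rs = X
    return a * dr * rs / 2.45 + b          # lambda = 2.45 MJ kg-1

# Start from the classic coefficients and refit them to the local PM-FAO56 series.
(a_cal, b_cal), _ = curve_fit(makkink, (delta_ratio, Rs), et_pm, p0=[0.61, -0.12])
rmse = np.sqrt(np.mean((makkink((delta_ratio, Rs), a_cal, b_cal) - et_pm) ** 2))
print(f"calibrated a={a_cal:.3f}, b={b_cal:.3f}, RMSE={rmse:.3f} mm day-1")
```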


2020 ◽  
Vol 2020 ◽  
pp. 1-12
Author(s):  
Shengpu Li ◽  
Yize Sun

Ink transfer rate (ITR) is a reference index for measuring the quality of 3D additive printing. In this study, an ink transfer rate prediction model is proposed by applying the least squares support vector machine (LSSVM). In addition, enhanced garden balsam optimization (EGBO) is used for the selection and optimization of the hyperparameters embedded in the LSSVM model. A total of 102 sets of experimental sample data were collected from the production line to train and test the hybrid prediction model. Experimental results show that, for the ink transfer rate of 3D additive printing, the coefficient of determination (R2) of the introduced model is 0.8476, the root mean square error (RMSE) is 6.6 × 10−3, and the mean absolute percentage error (MAPE) is 1.6502 × 10−3.
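LSSVM regression replaces the SVM's inequality constraints with equality constraints, so training reduces to solving a single linear system in the dual variables. The sketch below implements that system with an RBF kernel and uses a plain random search as a stand-in for EGBO, which is a niche metaheuristic; the data, parameter ranges and function names are illustrative assumptions.

```python
# Minimal LSSVM regression with RBF kernel; random search stands in for EGBO.
import numpy as np

rng = np.random.default_rng(3)
X = rng.uniform(0, 1, size=(102, 4))                               # stand-in process parameters
y = np.exp(-((X[:, 0] - 0.5) ** 2)) + 0.01 * rng.normal(size=102)  # stand-in ITR

def rbf(A, B, sigma):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * sigma ** 2))

def lssvm_fit(X, y, gamma, sigma):
    """Solve the LSSVM dual system [[0, 1^T], [1, K + I/gamma]] [b; alpha] = [0; y]."""
    n = len(y)
    K = rbf(X, X, sigma)
    A = np.block([[np.zeros((1, 1)), np.ones((1, n))],
                  [np.ones((n, 1)), K + np.eye(n) / gamma]])
    sol = np.linalg.solve(A, np.concatenate([[0.0], y]))
    return sol[0], sol[1:]                          # bias b, dual weights alpha

def cv_rmse(gamma, sigma, k=5):
    idx = np.array_split(rng.permutation(len(y)), k)
    errs = []
    for te in idx:
        tr = np.setdiff1d(np.arange(len(y)), te)
        b, alpha = lssvm_fit(X[tr], y[tr], gamma, sigma)
        pred = rbf(X[te], X[tr], sigma) @ alpha + b
        errs.append(np.mean((pred - y[te]) ** 2))
    return np.sqrt(np.mean(errs))

# Hyperparameter search (random search as a crude stand-in for the EGBO metaheuristic).
best = min(((10 ** rng.uniform(-1, 3), 10 ** rng.uniform(-2, 1)) for _ in range(50)),
           key=lambda gs: cv_rmse(*gs))
print("best (gamma, sigma):", best)
```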


2021 ◽  
Vol 149 ◽  
Author(s):  
Junwen Tao ◽  
Yue Ma ◽  
Xuefei Zhuang ◽  
Qiang Lv ◽  
Yaqiong Liu ◽  
...  

Abstract This study proposed a novel ensemble analysis strategy to improve hand, foot and mouth disease (HFMD) prediction by integrating environmental data. The approach began by establishing a vector autoregressive (VAR) model. Then, a dynamic Bayesian network (DBN) model was used for variable selection among the environmental factors. Finally, a VAR model with constraints (CVAR) was established to predict the incidence of HFMD in Chengdu city from 2011 to 2017. The DBN showed that temperature was related to HFMD at lags 1 and 2, while humidity, wind speed, sunshine, PM10, SO2 and NO2 were related to HFMD at lag 2. Compared with the autoregressive integrated moving average model with external variables (ARIMAX), the CVAR model had a higher coefficient of determination (R2, average difference: +2.11%; t = 6.2051, P = 0.0003), a lower root mean square error (−24.88%; t = −5.2898, P = 0.0007) and a lower mean absolute percentage error (−16.69%; t = −4.3647, P = 0.0024). The accuracy of predicting the time-series shape was 88.16% for the CVAR model and 86.41% for ARIMAX. The CVAR model performed better in terms of variable selection, model interpretation and prediction. Therefore, it could be used by health authorities to identify potential HFMD outbreaks and develop disease control measures.
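A constrained VAR of this sort can be emulated equation by equation: regress HFMD incidence only on the lags that the DBN step retained, implicitly setting all other lag coefficients to zero. The sketch below builds such a restricted lag design matrix and fits it by ordinary least squares; the lag pattern (temperature at lags 1 and 2, other factors at lag 2) follows the abstract, while the series themselves are synthetic stand-ins.

```python
# Sketch: one equation of a constrained VAR, with only DBN-selected lags included.
import numpy as np

rng = np.random.default_rng(4)
T = 84                                        # monthly series, 2011-2017
hfmd = rng.poisson(50, T).astype(float)       # stand-in incidence
temp = rng.normal(18, 8, T)                   # temperature (selected at lags 1 and 2)
humid = rng.normal(70, 10, T)                 # humidity (selected at lag 2 only)

cols = [hfmd[1:-1], hfmd[:-2],                # hfmd(t-1), hfmd(t-2)
        temp[1:-1], temp[:-2],                # temp(t-1), temp(t-2)
        humid[:-2]]                           # humid(t-2)
X = np.column_stack([np.ones(T - 2)] + cols)  # intercept + restricted lag terms
y = hfmd[2:]

beta, *_ = np.linalg.lstsq(X, y, rcond=None)  # OLS fit of the constrained equation
pred = X @ beta
print("in-sample RMSE:", np.sqrt(np.mean((pred - y) ** 2)))
```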


Energies ◽  
2021 ◽  
Vol 14 (8) ◽  
pp. 2328
Author(s):  
Mohammed Alzubaidi ◽  
Kazi N. Hasan ◽  
Lasantha Meegahapola ◽  
Mir Toufikur Rahman

This paper presents a comparative analysis of six sampling techniques in order to identify an efficient and accurate technique for probabilistic voltage stability assessment in large-scale power systems. The techniques compared are Monte Carlo (MC), three versions of Quasi-Monte Carlo (QMC), i.e., Sobol, Halton, and Latin Hypercube, Markov Chain MC (MCMC), and importance sampling (IS), and their suitability for probabilistic voltage stability analysis in large-scale uncertain power systems is evaluated. The coefficient of determination (R2) and root mean square error (RMSE) are calculated to compare the accuracy and efficiency of the sampling techniques. All six sampling techniques provide more than 99% accuracy when a large number of wind speed random samples (8760 samples) is produced. In terms of efficiency, however, the three versions of QMC are the most efficient, providing more than 96% accuracy with only a small number of generated samples (150 samples) compared to the other techniques.
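The three QMC variants named here are available directly in scipy.stats.qmc. The sketch below draws a small low-discrepancy sample with each and maps it to wind speeds through a Weibull inverse CDF; the Weibull model and its parameters are a common wind-speed assumption of ours, not the paper's stated distribution.

```python
# Sketch: MC vs. Sobol/Halton/Latin-Hypercube sampling of Weibull wind speeds.
import numpy as np
from scipy.stats import qmc, weibull_min

n, k, scale = 150, 2.0, 8.0           # sample count; Weibull shape/scale (assumed)
samplers = {
    "MC": np.random.default_rng(5).uniform(size=(n, 1)),
    # Sobol prefers power-of-two sample counts; n=150 only costs a balance warning.
    "Sobol": qmc.Sobol(d=1, scramble=True, seed=5).random(n),
    "Halton": qmc.Halton(d=1, seed=5).random(n),
    "LHS": qmc.LatinHypercube(d=1, seed=5).random(n),
}
for name, u in samplers.items():
    wind = weibull_min.ppf(u.ravel(), c=k, scale=scale)   # inverse-CDF transform
    print(f"{name:7s} mean={wind.mean():.3f}  std={wind.std():.3f}")
```

With the same small budget, the low-discrepancy sequences cover the unit interval far more evenly than plain MC, which is exactly why the QMC variants reach high accuracy with only 150 samples.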


2021 ◽  
Vol 55 (1) ◽  
pp. 1-2
Author(s):  
Bhaskar Mitra

Neural networks with deep architectures have demonstrated significant performance improvements in computer vision, speech recognition, and natural language processing. The challenges in information retrieval (IR), however, are different from those in these other application areas. A common form of IR involves ranking documents, or short passages, in response to keyword-based queries. Effective IR systems must deal with the query-document vocabulary mismatch problem by modeling relationships between different query and document terms and how they indicate relevance. Models should also consider lexical matches when the query contains rare terms, such as a person's name or a product model number, not seen during training, and avoid retrieving semantically related but irrelevant results. In many real-life IR tasks, retrieval involves extremely large collections, such as the document index of a commercial Web search engine, containing billions of documents. Efficient IR methods should take advantage of specialized IR data structures, such as the inverted index, to retrieve efficiently from large collections. Given an information need, the IR system also mediates how much exposure an information artifact receives by deciding whether it should be displayed, and where it should be positioned, among other results. Exposure-aware IR systems may optimize for additional objectives besides relevance, such as parity of exposure for retrieved items and content publishers. In this thesis, we present novel neural architectures and methods motivated by the specific needs and challenges of IR tasks. We ground our contributions with a detailed survey of the growing body of neural IR literature [Mitra and Craswell, 2018]. Our key contribution towards improving the effectiveness of deep ranking models is the development of the Duet principle [Mitra et al., 2017], which emphasizes the importance of incorporating evidence based on both patterns of exact term matches and similarities between learned latent representations of query and document. To retrieve efficiently from large collections, we develop a framework that incorporates query term independence [Mitra et al., 2019] into any arbitrary deep model, enabling large-scale precomputation and the use of an inverted index for fast retrieval. In the context of stochastic ranking, we further develop optimization strategies for exposure-based objectives [Diaz et al., 2020]. Finally, this dissertation also summarizes our contributions towards benchmarking neural IR models in the presence of large training datasets [Craswell et al., 2019] and explores the application of neural methods to other IR tasks, such as query auto-completion.
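The Duet principle combines two ranking signals: evidence from exact term matches and similarity between learned latent representations. The sketch below is a deliberately tiny caricature of that combination, using a term-overlap score and a cosine similarity of stand-in embeddings mixed by a fixed weight; it is not the architecture of [Mitra et al., 2017], in which the two sub-networks and their combination are learned jointly.

```python
# Toy illustration of the Duet idea: mix exact-match and latent-match evidence.
import numpy as np

rng = np.random.default_rng(6)
vocab = {w: i for i, w in enumerate("cheap flights london hotels paris".split())}
embed = rng.normal(size=(len(vocab), 16))        # stand-in learned embeddings

def exact_score(q, d):
    """Fraction of query terms with an exact match in the document."""
    return len(set(q) & set(d)) / len(q)

def latent_score(q, d):
    """Cosine similarity of mean-pooled term embeddings."""
    vq = embed[[vocab[w] for w in q]].mean(0)
    vd = embed[[vocab[w] for w in d]].mean(0)
    return float(vq @ vd / (np.linalg.norm(vq) * np.linalg.norm(vd)))

def duet_score(q, d, mix=0.5):
    """Fixed linear mix standing in for the learned combination of both signals."""
    return mix * exact_score(q, d) + (1 - mix) * latent_score(q, d)

query = ["cheap", "flights"]
docs = [["cheap", "hotels", "paris"], ["flights", "london"]]
print([round(duet_score(query, d), 3) for d in docs])
```

The exact-match term is what keeps rare, unseen strings such as product model numbers rankable; the latent term is what bridges vocabulary mismatch.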


Author(s):  
Krzysztof Jurczuk ◽  
Marcin Czajkowski ◽  
Marek Kretowski

Abstract This paper concerns the evolutionary induction of decision trees (DT) for large-scale data. Such a global approach is one of the alternatives to top-down inducers. It searches for the tree structure and tests simultaneously, and thus in many situations improves the prediction and size of the resulting classifiers. However, this population-based and iterative approach can be too computationally demanding to apply directly to big data mining. The paper demonstrates that this barrier can be overcome by smart distributed/parallel processing. Moreover, we ask whether the global approach can truly compete with greedy systems on large-scale data. For this purpose, we propose a novel multi-GPU approach. It combines the knowledge of global DT induction and evolutionary algorithm parallelization with efficient utilization of memory and GPU computing resources. The searches for the tree structure and tests are performed simultaneously on a CPU, while the fitness calculations are delegated to GPUs. A data-parallel decomposition strategy and the CUDA framework are applied. Experimental validation is performed on both artificial and real-life datasets, and in both cases the obtained acceleration is very satisfactory. The solution is able to process even billions of instances in a few hours on a single workstation equipped with 4 GPUs. The impact of data characteristics (size and dimension) on the convergence and speedup of the evolutionary search is also shown. When the number of GPUs grows, nearly linear scalability is observed, which suggests that data size boundaries for evolutionary DT mining are fading.
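The key structural idea, evolutionary search on the CPU with fitness delegated to data-parallel devices, can be caricatured without CUDA: shard the dataset, score each candidate tree on every shard in parallel, and let the CPU sum the partial counts. The sketch below uses a process pool as a stand-in for the paper's multi-GPU layer; the tiny depth-1 threshold-tree encoding and all names are our assumptions.

```python
# Sketch: data-parallel fitness evaluation for evolutionary DT induction.
# A process pool stands in for the paper's multi-GPU CUDA layer.
import numpy as np
from concurrent.futures import ProcessPoolExecutor

rng = np.random.default_rng(7)
X = rng.normal(size=(100_000, 3))
y = (X[:, 0] > 0.2).astype(int)                   # stand-in labels
shards = np.array_split(np.arange(len(y)), 4)     # one shard per "device"

def shard_correct(args):
    """Partial fitness: correct predictions of a depth-1 threshold tree on one shard."""
    (feat, thr, left, right), idx = args
    pred = np.where(X[idx, feat] <= thr, left, right)
    return int((pred == y[idx]).sum())

def fitness(tree, pool):
    # The CPU aggregates per-shard partial results, as it would the GPUs' outputs.
    return sum(pool.map(shard_correct, [(tree, s) for s in shards])) / len(y)

if __name__ == "__main__":
    population = [(rng.integers(3), rng.normal(), rng.integers(2), rng.integers(2))
                  for _ in range(20)]             # random depth-1 "trees"
    with ProcessPoolExecutor(max_workers=4) as pool:
        scores = [fitness(t, pool) for t in population]
    print("best accuracy:", max(scores))
```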


Author(s):  
Gianluca Bardaro ◽  
Alessio Antonini ◽  
Enrico Motta

Abstract Over the last two decades, several deployments of robots for in-house assistance of older adults have been trialled. However, these solutions are mostly prototypes and remain unused in real-life scenarios. In this work, we review the historical and current landscape of the field to try to understand why robots have yet to succeed as personal assistants in daily life. Our analysis focuses on two complementary aspects: the capabilities of the physical platform and the logic of the deployment. The former shows regularities in hardware configurations and functionalities, leading to the definition of a set of six application-level capabilities (exploration, identification, remote control, communication, manipulation, and digital situatedness). The latter focuses on the impact of robots on the daily life of users and categorises the deployment of robots for healthcare interventions using three types of services: support, mitigation, and response. Our investigation reveals that the value of healthcare interventions is limited by a stagnation of functionalities and a disconnection between the robotic platform and the design of the intervention. To address this issue, we propose a novel co-design toolkit, which uses an ecological framework for robot interventions in the healthcare domain. Our approach connects robot capabilities with known geriatric factors to create a holistic view encompassing both the physical platform and the logic of the deployment. As a case-study-based validation, we discuss the use of the toolkit in the pre-design of the robotic platform for a pilot intervention, part of the large-scale pilot of the EU H2020 GATEKEEPER project.


Sensors ◽  
2021 ◽  
Vol 21 (14) ◽  
pp. 4655
Author(s):  
Dariusz Czerwinski ◽  
Jakub Gęca ◽  
Krzysztof Kolano

In this article, the authors propose two models for BLDC motor winding temperature estimation using machine learning methods. For the purposes of the research, measurements were made over more than 160 h of motor operation and then preprocessed. The algorithms of linear regression, ElasticNet, stochastic gradient descent regression, support vector machines, decision trees, and AdaBoost were used for predictive modeling. The ability of the models to generalize was achieved by hyperparameter tuning with the use of cross-validation. The conducted research led to promising winding temperature estimation accuracy. In the case of sensorless temperature prediction (model 1), the mean absolute percentage error (MAPE) was below 4.5% and the coefficient of determination (R2) was above 0.909. In addition, extending the model with a temperature measurement on the casing (model 2) reduced the error to about 1% and increased R2 to 0.990. The results obtained for the first proposed model show that overheating protection of the motor can be ensured without direct temperature measurement. In addition, the introduction of a simple casing temperature measurement system allows for estimation with an accuracy suitable for compensating temperature-related changes in the motor output torque.
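The model family and tuning strategy described here map directly onto a standard scikit-learn workflow: a scaled regressor whose hyperparameters are chosen by cross-validated grid search and scored with MAPE and R2. The sketch below applies that workflow to one of the listed algorithms (SVR) on synthetic stand-ins for the motor signals; the feature names and grids are assumptions.

```python
# Sketch: cross-validated hyperparameter tuning for winding-temperature regression.
import numpy as np
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR
from sklearn.metrics import mean_absolute_percentage_error, r2_score

rng = np.random.default_rng(8)
n = 2000
X = np.column_stack([rng.uniform(0, 10, n),       # stand-in: phase current
                     rng.uniform(0, 3000, n),     # stand-in: rotor speed
                     rng.uniform(20, 40, n)])     # stand-in: casing temperature
y = 25 + 4 * X[:, 0] + 0.002 * X[:, 1] + 0.8 * X[:, 2] + rng.normal(0, 1, n)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
search = GridSearchCV(
    make_pipeline(StandardScaler(), SVR()),
    param_grid={"svr__C": [1, 10, 100], "svr__epsilon": [0.01, 0.1, 1.0]},
    cv=5, scoring="neg_mean_absolute_percentage_error")
search.fit(X_tr, y_tr)

pred = search.predict(X_te)
print("MAPE:", mean_absolute_percentage_error(y_te, pred))
print("R2:", r2_score(y_te, pred))
```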


2021 ◽  
Vol 5 (1) ◽  
pp. 14
Author(s):  
Christos Makris ◽  
Georgios Pispirigos

Nowadays, due to the extensive use of information networks in a broad range of fields, e.g., bioinformatics, sociology, digital marketing, computer science, etc., graph theory applications have attracted significant scientific interest. Due to its inherent abstraction, community detection has become one of the most thoroughly studied graph partitioning problems. However, the existing algorithms principally propose iterative solutions of high polynomial order that repetitively require exhaustive analysis. These methods can undoubtedly be considered resource-wise overdemanding, unscalable, and inapplicable to big data graphs, such as today's social networks. In this article, a novel, near-linear, and highly scalable community prediction methodology is introduced. Specifically, using a distributed, stacking-based model, which is built on plain network topology characteristics of bootstrap-sampled subgraphs, the underlying community hierarchy of any given social network is efficiently extracted in spite of its size and density. The effectiveness of the proposed methodology has been diligently examined on numerous real-life social networks and proven superior to various similar approaches in terms of performance, stability, and accuracy.
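As a rough sketch of the stacking idea, one can compute cheap topological features for node pairs and let a stacked ensemble predict whether a pair belongs to the same community. The example below does this on the karate club toy graph with scikit-learn's StackingClassifier; it skips the bootstrap-sampling and distributed layers, and the feature set is an illustrative assumption, not the authors' pipeline.

```python
# Sketch: stacking-based same-community prediction from plain topology features.
import numpy as np
import networkx as nx
from sklearn.ensemble import StackingClassifier, RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

G = nx.karate_club_graph()                        # toy network with two known communities
club = [G.nodes[v]["club"] for v in G]

def features(u, v):
    """Plain topology characteristics of a node pair."""
    cn = len(list(nx.common_neighbors(G, u, v)))
    jac = next(nx.jaccard_coefficient(G, [(u, v)]))[2]
    return [cn, jac, G.degree[u], G.degree[v]]

pairs = [(u, v) for u in G for v in G if u < v]
X = np.array([features(u, v) for u, v in pairs])
y = np.array([club[u] == club[v] for u, v in pairs], dtype=int)

stack = StackingClassifier(
    estimators=[("rf", RandomForestClassifier(random_state=0)),
                ("lr", LogisticRegression(max_iter=1000))],
    final_estimator=LogisticRegression())
print("CV accuracy:", cross_val_score(stack, X, y, cv=5).mean())
```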

