Prediction of Static Modulus and Compressive Strength of Concrete from Dynamic Modulus Associated with Wave Velocity and Resonance Frequency Using Machine Learning Techniques

Materials ◽  
2020 ◽  
Vol 13 (13) ◽  
pp. 2886
Author(s):  
Jong Yil Park ◽  
Sung-Han Sim ◽  
Young Geun Yoon ◽  
Tae Keun Oh

The static elastic modulus (Ec) and compressive strength (fc) are critical properties of concrete. When determining Ec and fc, concrete cores are collected and subjected to destructive tests. However, destructive tests require certain test permissions and large sample sizes. Hence, it is preferable to predict Ec from the dynamic elastic modulus (Ed) through nondestructive evaluations. A resonance frequency test performed according to ASTM C215-14 and a pressure wave (P-wave) measurement conducted according to ASTM C597M-16 are typically used to determine Ed. Recently, developments in transducers have enabled the measurement of shear wave (S-wave) velocities in concrete. Although various equations have been proposed for estimating Ec and fc from Ed, their results deviate from experimental values. Thus, it is necessary to obtain a reliable Ed value for accurately predicting Ec and fc. In this study, Ed values were experimentally obtained from P-wave and S-wave velocities and from the longitudinal and transverse resonance modes; Ec and fc values were then predicted from these Ed values through four machine learning (ML) methods: support vector machine, artificial neural networks, ensembles, and linear regression. Using ML, the prediction accuracy of Ec and fc was improved by 2.5–5% and 7–9%, respectively, compared with the accuracy obtained using classical or normal-regression equations. By combining ML methods, the accuracy of the predicted Ec and fc was improved by a further 0.5% and 1.5%, respectively, compared with the best single-variable results.
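The P-wave route to Ed follows from standard isotropic elasticity. As a minimal sketch (with illustrative concrete values, not the paper's data), Ed can be recovered from a measured P-wave velocity, the density, and an assumed Poisson's ratio:

```python
import math

def dynamic_modulus_from_p_wave(v_p, density, poisson):
    """Dynamic elastic modulus Ed (Pa) from P-wave velocity (m/s),
    density (kg/m^3), and Poisson's ratio, via the standard isotropic
    relation Vp^2 = Ed * (1 - nu) / (rho * (1 + nu) * (1 - 2*nu))."""
    return density * v_p**2 * (1 + poisson) * (1 - 2 * poisson) / (1 - poisson)

# Typical concrete values (illustrative only):
ed = dynamic_modulus_from_p_wave(v_p=4000.0, density=2400.0, poisson=0.2)
print(f"Ed ≈ {ed / 1e9:.2f} GPa")  # Ed ≈ 34.56 GPa
```

The resonance-frequency route of ASTM C215-14 uses a different, geometry-dependent formula; only the wave-velocity relation is sketched here.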

2020 ◽  
Vol 10 (5) ◽  
pp. 1691 ◽  
Author(s):  
Deliang Sun ◽  
Mahshid Lonbani ◽  
Behnam Askarian ◽  
Danial Jahed Armaghani ◽  
Reza Tarinejad ◽  
...  

Despite the vast usage of machine learning techniques to solve engineering problems, only a very limited number of studies on the rock brittleness index (BI) have used these techniques to analyze issues in this field. The present study developed five well-known machine learning techniques and compared their performance in predicting the brittleness index of rock samples. The comparison of the models’ performance was conducted through a ranking system. These techniques included the Chi-square automatic interaction detector (CHAID), random forest (RF), support vector machine (SVM), K-nearest neighbors (KNN), and artificial neural network (ANN). This study used a dataset from a water transfer tunneling project in Malaysia. Results of simple rock index tests, i.e., Schmidt hammer, P-wave velocity, point load, and density, were considered as model inputs. The results of this study indicated that while the RF model had the best performance for training (ranking = 25), the ANN outperformed the other models for testing (ranking = 22). However, the KNN model achieved the highest cumulative ranking, which was 37, and showed desirable stability for both training and testing. Nevertheless, the results of the validation stage indicated that the RF model, with a coefficient of determination (R2) of 0.971, provides higher performance capacity for prediction of the rock BI than the KNN model (R2 = 0.807) and the ANN model (R2 = 0.860). The results of this study suggest a practical use of machine learning models in solving problems related to rock mechanics, especially the rock brittleness index.
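The abstract's ranking system is not spelled out in detail; a plausible minimal sketch, assuming each model is ranked within each phase (1 = worst, n = best) and the per-phase ranks are summed into a cumulative score, with illustrative R² values rather than the study's data:

```python
def rank_models(scores):
    """Assign ranks per phase (1 = worst, n = best) and sum them into a
    cumulative ranking. `scores` maps model -> {phase: metric}, where
    higher metric values are better."""
    phases = {p for s in scores.values() for p in s}
    total = {m: 0 for m in scores}
    for phase in phases:
        ordered = sorted(scores, key=lambda m: scores[m][phase])
        for rank, model in enumerate(ordered, start=1):
            total[model] += rank
    return total

# Illustrative R^2 values (not the paper's data):
r2 = {
    "CHAID": {"train": 0.70, "test": 0.65},
    "RF":    {"train": 0.97, "test": 0.75},
    "SVM":   {"train": 0.85, "test": 0.80},
    "KNN":   {"train": 0.90, "test": 0.82},
    "ANN":   {"train": 0.80, "test": 0.86},
}
print(rank_models(r2))
```

With these toy numbers, RF ranks first for training and ANN first for testing, yet KNN takes the highest cumulative score, mirroring the pattern the abstract reports.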


Author(s):  
K Sumanth Reddy ◽  
Gaddam Pranith ◽  
Karre Varun ◽  
Thipparthy Surya Sai Teja

The compressive strength of concrete plays an important role in determining the durability and performance of concrete. Due to the rapid growth in material engineering, finalizing an appropriate mix proportion to obtain the desired compressive strength has become a cumbersome and laborious task, and the problem becomes more complex still when seeking a rational relation between the concrete materials used and the strength obtained. Developments in computational methods make it possible to obtain such a relation using machine learning techniques, which reduce the influence of outliers and other unwanted variables on the determination of compressive strength. In this paper, basic machine learning techniques, namely the multilayer perceptron neural network (MLP), support vector machine (SVM), linear regression (LR), and classification and regression tree (CART), have been used to develop models for determining the compressive strength for two different sets of data (ingredients). Among the techniques used, the SVM provides better results than the others, but the SVM cannot be considered a universal model: much recent literature has shown that such models need more data, and the dynamicity of the attributes involved plays an important role in determining the efficacy of the model.
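Of the four techniques compared, linear regression is the simplest; a self-contained sketch of an ordinary least-squares fit relating a hypothetical mix ingredient (cement content) to compressive strength, with made-up values for illustration:

```python
def fit_linear(xs, ys):
    """Ordinary least-squares fit y = a*x + b, the simplest of the
    regression techniques compared for strength prediction."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    a = sxy / sxx
    return a, my - a * mx

# Hypothetical cement content (kg/m^3) vs. 28-day strength (MPa):
cement = [250, 300, 350, 400, 450]
strength = [24.0, 29.5, 34.0, 40.5, 45.0]
a, b = fit_linear(cement, strength)
print(f"strength ≈ {a:.3f} * cement + {b:.2f}")
```

MLP, SVM, and CART replace this single linear relation with nonlinear decision surfaces, which is what lets them capture interactions among ingredients.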


Geophysics ◽  
2021 ◽  
pp. 1-86
Author(s):  
Sagar Singh ◽  
Ilya Tsvankin ◽  
Ehsan Zabihi Naeini

Full-waveform inversion (FWI) of 3D wide-azimuth data for elastic orthorhombic media suffers from parameter trade-offs that cannot be overcome without constraining the model-updating procedure. We present an FWI methodology that incorporates geologic constraints to reduce the inversion nonlinearity and increase the resolution of parameter estimation for orthorhombic models. These constraints are obtained from well logs, which can provide rock-physics relationships for different geologic facies. Because the locations of the available well logs are usually sparse, a supervised machine-learning (ML) algorithm (Support Vector Machine) is employed to account for lateral heterogeneity in building the lithologic constraints. The advantages of the facies-based FWI are demonstrated on the modified SEG-EAGE 3D overthrust model, which is made orthorhombic with symmetry planes that coincide with the Cartesian coordinate planes. We employ a velocity-based parameterization, whose suitability for FWI is studied using the radiation-pattern analysis in a companion paper. Application of the facies-based constraints substantially increases the resolution of the P- and S-wave vertical velocities (VP0, VS0, and VS1) and, therefore, of the depth scale of the model. Improvements are also observed for the P-wave horizontal and normal-moveout velocities (VP1, VP2, Vnmo,1, and Vnmo,2) and the S-wave horizontal velocity VS2. However, the velocity Vnmo,3, which depends on Tsvankin’s parameter δ(3) defined in the horizontal plane, is not well recovered from the surface data. On the whole, the developed algorithm achieves a much higher spatial resolution compared to unconstrained FWI, even in the absence of recorded frequencies below 2 Hz.
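The facies classification from sparse well logs could be sketched with any supervised classifier; below is a deliberately lightweight nearest-centroid stand-in (not the paper's SVM), using hypothetical (Vp, density) log features to show the interpolation idea:

```python
def nearest_centroid_facies(train, labels, query):
    """Assign a facies label to `query` by nearest class centroid --
    a lightweight stand-in for the SVM facies classifier. Each sample
    is a tuple of log-derived features (e.g., Vp, density)."""
    groups = {}
    for x, lab in zip(train, labels):
        groups.setdefault(lab, []).append(x)

    def centroid(pts):
        return tuple(sum(c) / len(c) for c in zip(*pts))

    def dist2(p, q):
        return sum((a - b) ** 2 for a, b in zip(p, q))

    cents = {lab: centroid(pts) for lab, pts in groups.items()}
    return min(cents, key=lambda lab: dist2(cents[lab], query))

# Hypothetical (Vp in km/s, density in g/cc) samples for two facies:
train = [(3.0, 2.3), (3.2, 2.35), (4.5, 2.6), (4.7, 2.65)]
labels = ["shale", "shale", "sand", "sand"]
print(nearest_centroid_facies(train, labels, (4.4, 2.55)))  # sand
```

An SVM would instead fit a maximum-margin boundary between the facies clouds, which generalizes better when the classes overlap.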


2020 ◽  
Vol 12 (2) ◽  
pp. 84-99
Author(s):  
Li-Pang Chen

In this paper, we investigate the analysis and prediction of time-dependent data. We focus our attention on four different stocks selected from the Yahoo Finance historical database. To build models and predict future stock prices, we consider three different machine learning techniques: Long Short-Term Memory (LSTM), Convolutional Neural Networks (CNN), and Support Vector Regression (SVR). By treating close price, open price, daily low, daily high, adjusted close price, and volume of trades as predictors in the machine learning methods, we show that the prediction accuracy is improved.
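Before any of the three models can be trained, the predictor series must be arranged into lagged feature/target pairs; a minimal sketch of that windowing step (toy prices, not Yahoo Finance data):

```python
def make_lag_features(series, n_lags):
    """Turn a price series into (features, target) pairs, where each
    feature vector is the previous n_lags values -- the standard way
    predictors are arranged before feeding LSTM/CNN/SVR models."""
    X, y = [], []
    for i in range(n_lags, len(series)):
        X.append(series[i - n_lags:i])  # trailing window of past prices
        y.append(series[i])             # next value to predict
    return X, y

# Toy closing prices (illustrative only):
close = [10.0, 10.5, 10.2, 10.8, 11.0, 10.9]
X, y = make_lag_features(close, n_lags=3)
print(X[0], "->", y[0])  # [10.0, 10.5, 10.2] -> 10.8
```

The same windowing generalizes to multiple predictors (open, high, low, volume, ...) by stacking one such window per input series.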


Author(s):  
Anantvir Singh Romana

Accurate diagnostic detection of disease in a patient is critical, as it may alter the subsequent treatment and increase the chances of survival. Machine learning techniques have been instrumental in disease detection and are currently being used in various classification problems due to their accurate prediction performance. Different techniques may provide different accuracies, and it is therefore imperative to use the most suitable method, i.e., the one that provides the best results. This research provides a comparative analysis of Support Vector Machine, Naïve Bayes, J48 decision tree, and neural network classifiers on breast cancer and diabetes datasets.
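At its core, the comparative analysis reduces to scoring each classifier on the same held-out labels and ranking the results; a minimal harness with toy labels and hypothetical model outputs:

```python
def accuracy(y_true, y_pred):
    """Fraction of predictions matching the true labels."""
    return sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)

def compare_classifiers(y_true, predictions):
    """Rank classifiers by accuracy on the same test labels -- the
    kind of side-by-side comparison performed for SVM, Naive Bayes,
    J48, and the neural network. Returns (name, accuracy) pairs,
    best first."""
    scores = {name: accuracy(y_true, p) for name, p in predictions.items()}
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

# Toy test labels and hypothetical model outputs (not real results):
y = [1, 0, 1, 1, 0, 1]
preds = {
    "SVM":        [1, 0, 1, 1, 0, 0],
    "NaiveBayes": [1, 0, 0, 1, 0, 0],
    "J48":        [1, 0, 1, 1, 0, 1],
}
print(compare_classifiers(y, preds))
```

In practice, accuracy alone can mislead on imbalanced medical datasets, which is why metrics such as sensitivity and specificity are usually reported alongside it.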


2020 ◽  
Author(s):  
Azhagiya Singam Ettayapuram Ramaprasad ◽  
Phum Tachachartvanich ◽  
Denis Fourches ◽  
Anatoly Soshilov ◽  
Jennifer C.Y. Hsieh ◽  
...  

Perfluoroalkyl and Polyfluoroalkyl Substances (PFASs) pose a substantial threat as endocrine disruptors, and thus early identification of those that may interact with steroid hormone receptors, such as the androgen receptor (AR), is critical. In this study, we screened 5,206 PFASs from the CompTox database against the different binding sites on the AR using both molecular docking and machine learning techniques. We developed support vector machine models trained on Tox21 data to classify the active and inactive PFASs for the AR using different chemical fingerprints as features. The maximum accuracy and Matthews correlation coefficient (MCC) were 95.01% and 0.76, respectively, based on MACCS fingerprints (MACCSFP). The combination of docking-based screening and machine learning models identified 29 PFASs that have strong potential for activity against the AR and should be considered priority chemicals for biological toxicity testing.
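The reported MCC is computed directly from the confusion-matrix counts of the classifier; a small sketch (the counts below are illustrative, not the study's):

```python
import math

def matthews_cc(tp, tn, fp, fn):
    """Matthews correlation coefficient, the balanced binary-classification
    metric the study reports alongside accuracy. Ranges from -1 to +1;
    returns 0.0 when the denominator degenerates."""
    num = tp * tn - fp * fn
    den = math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    return num / den if den else 0.0

# Illustrative confusion-matrix counts (not the study's data):
print(round(matthews_cc(tp=90, tn=85, fp=10, fn=15), 3))  # 0.751
```

Unlike plain accuracy, MCC stays informative when the active and inactive classes are imbalanced, which is typical of Tox21-derived screening sets.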


2020 ◽  
Author(s):  
Nalika Ulapane ◽  
Karthick Thiyagarajan ◽  
Sarath Kodagoda

Classification has become a vital task in modern machine learning and Artificial Intelligence applications, including smart sensing. Numerous machine learning techniques are available to perform classification. Similarly, numerous practices, such as feature selection (i.e., selection of a subset of descriptor variables that optimally describe the output), are available to improve classifier performance. In this paper, we consider the case of a given supervised learning classification task that has to be performed making use of continuous-valued features. It is assumed that an optimal subset of features has already been selected. Therefore, no further feature reduction, or feature addition, is to be carried out. Then, we attempt to improve the classification performance by passing the given feature set through a transformation that produces a new feature set which we have named the “Binary Spectrum”. Via a case study example done on some Pulsed Eddy Current sensor data captured from an infrastructure monitoring task, we demonstrate how the classification accuracy of a Support Vector Machine (SVM) classifier increases through the use of this Binary Spectrum feature, indicating the feature transformation’s potential for broader usage.
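The abstract does not define the Binary Spectrum transformation itself; purely as a hypothetical illustration, one way to map a continuous-valued feature onto a vector of binary indicators is multi-threshold binarization:

```python
def binary_spectrum(value, lo, hi, n_bits):
    """Hypothetical illustration only: map one continuous-valued feature
    onto a vector of binary indicators using n_bits equally spaced
    thresholds over [lo, hi]. The abstract does not specify the actual
    transformation used in the paper."""
    step = (hi - lo) / n_bits
    return [1 if value >= lo + (k + 1) * step else 0 for k in range(n_bits)]

print(binary_spectrum(0.62, lo=0.0, hi=1.0, n_bits=4))  # [1, 1, 0, 0]
```

Such binary expansions can help kernel classifiers like the SVM by turning one smooth feature axis into several separable indicator dimensions.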


2020 ◽  
Vol 21 ◽  
Author(s):  
Sukanya Panja ◽  
Sarra Rahem ◽  
Cassandra J. Chu ◽  
Antonina Mitrofanova

Background: In recent years, the availability of high-throughput technologies, the establishment of large molecular patient data repositories, and advances in computing power and storage have allowed elucidation of complex mechanisms implicated in therapeutic response in cancer patients. The breadth and depth of such data, alongside experimental noise and missing values, require a sophisticated human-machine interaction that would allow effective learning from complex data and accurate forecasting of future outcomes, ideally embedded in the core of machine learning design. Objective: In this review, we discuss machine learning techniques utilized for modeling treatment response in cancer, including random forests, support vector machines, neural networks, and linear and logistic regression. We overview their mathematical foundations and discuss their limitations and alternative approaches, all in light of their application to therapeutic response modeling in cancer. Conclusion: We hypothesize that the increase in the number of patient profiles and the potential temporal monitoring of patient data will define even more complex techniques, such as deep learning and causal analysis, as central players in therapeutic response modeling.


Author(s):  
Amandeep Kaur ◽  
Sushma Jain ◽  
Shivani Goel ◽  
Gaurav Dhiman

Context: Code smells are symptoms that something may be wrong in software systems and can cause complications in maintaining software quality. In the literature, there exist many code smells, and their identification is far from trivial. Thus, several techniques have been proposed to automate code smell detection in order to improve software quality. Objective: This paper presents an up-to-date review of simple and hybrid machine-learning-based code smell detection techniques and tools. Methods: We collected all the relevant research published in this field till 2020. We extracted the data from those articles and classified them into two major categories. In addition, we compared the selected studies based on several aspects: code smells, machine learning techniques, datasets, programming languages used by the datasets, dataset size, evaluation approach, and statistical testing. Results: The majority of empirical studies have proposed machine-learning-based code smell detection tools. Support vector machine and decision tree algorithms are frequently used by researchers. Along with this, a major proportion of research is conducted on Open Source Software (OSS) such as Xerces, Gantt Project, and ArgoUML. Furthermore, researchers paid more attention to the Feature Envy and Long Method code smells. Conclusion: We identified several areas of open research, such as the need for code smell detection techniques using hybrid approaches and the need for validation employing industrial datasets.
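As a toy illustration of what such detectors target, the Long Method smell can be caricatured as a single hard-coded (hypothetical) threshold on method length; ML-based detectors instead learn such decision boundaries from labeled examples and combine many metrics:

```python
def long_method_smell(method_loc, threshold=30):
    """Minimal rule-of-thumb check for the Long Method smell (one of the
    two smells the surveyed studies target most). The threshold here is
    hypothetical; detection tools typically learn such boundaries from
    labeled training data rather than hard-coding them."""
    return method_loc > threshold

# Hypothetical method names and their lines of code:
methods = {"parse": 12, "render": 85, "save": 31}
smelly = [name for name, loc in methods.items() if long_method_smell(loc)]
print(smelly)  # ['render', 'save']
```

A learned decision tree would effectively discover thresholds like this one per metric (lines of code, cyclomatic complexity, parameter count) and combine them into a detection rule.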

