Bridging Finite Element and Machine Learning Modeling: Stress Prediction of Arterial Walls in Atherosclerosis

2019 ◽  
Vol 141 (8) ◽  
Author(s):  
Ali Madani ◽  
Ahmed Bakhaty ◽  
Jiwon Kim ◽  
Yara Mubarak ◽  
Mohammad R. K. Mofrad

Finite element and machine learning modeling are two predictive paradigms that have rarely been bridged. In this study, we develop a parametric model to generate arterial geometries and accumulate a database of 12,172 2D finite element simulations modeling the hyperelastic behavior and resulting stress distribution. The arterial wall composition mimics vessels in atherosclerosis, a complex cardiovascular disease and one of the leading causes of death globally. We formulate the training data to predict the maximum von Mises stress, which could indicate the risk of plaque rupture. Trained deep learning models are able to predict the maximum von Mises stress within 9.86% error on a held-out test set. The deep neural networks outperform alternative prediction models, and performance scales with the amount of training data. Lastly, we examine the importance of contributing features for stress value and location prediction to gain intuition about the underlying process. Deep neural networks can thus capture the functional mapping described by the finite element method, which has far-reaching implications for real-time and multiscale prediction tasks in biomechanics.
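The surrogate-modeling idea described above can be sketched in a few lines. This is a purely illustrative stand-in: the geometry features, the stress function, and the linear least-squares fit (in place of the paper's deep networks trained on 12,172 FEM simulations) are all synthetic assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical "geometry" parameters (e.g. lumen radius, wall thickness,
# fibrous cap thickness) -- stand-ins for the paper's parametric model.
X = rng.uniform(0.5, 2.0, size=(1000, 3))

# Synthetic "FEM" target: max von Mises stress as a smooth function of
# geometry (thinner cap -> higher stress), not the paper's actual physics.
y = 10.0 / X[:, 2] + 3.0 * X[:, 0] - X[:, 1]

# Held-out split and a linear least-squares surrogate.
X_train, X_test = X[:800], X[800:]
y_train, y_test = y[:800], y[800:]
A = np.c_[X_train, np.ones(len(X_train))]
coef, *_ = np.linalg.lstsq(A, y_train, rcond=None)

pred = np.c_[X_test, np.ones(len(X_test))] @ coef
rel_err = float(np.mean(np.abs(pred - y_test) / np.abs(y_test)))  # test-set relative error
```

The same pipeline shape (parametric geometry in, peak stress out, error on a held-out set) is what the deep networks in the paper learn, only with a far richer function class.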

Author(s):  
Gebreab K. Zewdie ◽  
David J. Lary ◽  
Estelle Levetin ◽  
Gemechu F. Garuma

Allergies to airborne pollen are a significant issue affecting millions of Americans. Consequently, accurately predicting the daily concentration of airborne pollen is of significant public benefit in providing timely alerts. This study presents a method for the robust estimation of the concentration of airborne Ambrosia pollen using a suite of machine learning approaches, including deep learning and ensemble learners. Each of these approaches utilizes data from the European Centre for Medium-Range Weather Forecasts (ECMWF) atmospheric weather and land surface reanalysis. The approaches used to develop the suite of empirical models are deep neural networks, extreme gradient boosting, random forests, and Bayesian ridge regression. The training data, comprising twenty-four years of daily pollen concentration measurements together with ECMWF weather and land surface reanalysis data from 1987 to 2011, are used to develop the machine learning predictive models. The last six years of the dataset, from 2012 to 2017, are used to independently test the performance of the models. The correlation coefficients between the estimated and actual pollen abundance on the independent validation dataset for the deep neural networks, random forests, extreme gradient boosting, and Bayesian ridge regression were 0.82, 0.81, 0.81, and 0.75 respectively, showing that machine learning can be used to effectively forecast the concentration of airborne pollen.
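The evaluation design above (chronological train/test split, correlation coefficient as the validation metric) can be sketched as follows. Everything here is synthetic: the "weather" features and "pollen" series are random stand-ins for the ECMWF reanalysis and Ambrosia measurements, and a linear fit stands in for the four learners.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic daily features and pollen counts (stand-ins for ECMWF
# reanalysis fields and measured Ambrosia concentrations).
n_days = 3000
X = rng.normal(size=(n_days, 4))
y = 2.0 * X[:, 0] - X[:, 1] + 0.5 * rng.normal(size=n_days)

# Chronological split, mirroring the paper: earlier "years" train,
# later "years" test -- no shuffling, so no leakage across time.
split = int(n_days * 0.8)
A = np.c_[X[:split], np.ones(split)]
coef, *_ = np.linalg.lstsq(A, y[:split], rcond=None)
pred = np.c_[X[split:], np.ones(n_days - split)] @ coef

# The paper's validation metric: correlation between estimated and actual.
r = float(np.corrcoef(pred, y[split:])[0, 1])
```

A model-specific `r` computed this way on the independent period is directly comparable across learners, which is how the 0.82/0.81/0.81/0.75 ranking in the abstract is obtained.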


2021 ◽  
Vol 11 (16) ◽  
pp. 7700
Author(s):  
Reventheran Ganasan ◽  
Chee Ghuan Tan ◽  
Zainah Ibrahim ◽  
Fadzli Mohamed Nazri ◽  
Muhammad M. Sherif ◽  
...  

In recent years, researchers have investigated the development of artificial neural networks (ANN) and finite element models (FEM) for predicting crack propagation in reinforced concrete (RC) members. However, most of the developed prediction models have been limited to individual, isolated RC members, without considering the interaction of members in a structure subjected to hazard loads such as earthquakes and wind. This research develops models to predict the evolution of cracks in the RC beam-column joint (BCJ) region under lateral cyclic loading. Four machine learning models are developed using RapidMiner to predict the crack width experienced by seven RC beam-column joints. The design parameters associated with RC beam-column joints and the lateral cyclic loading, expressed as a drift ratio, are used as inputs. Several prediction models are developed, and the highest-performing neural networks are selected, refined, and optimized by varying the data split ratios, number of inputs, and performance indices. The error in predicting the experimental crack width is used as the performance index.


2015 ◽  
Vol 1 (1) ◽  
Author(s):  
Mohammad Javad Shafiee ◽  
Parthipan Siva ◽  
Paul Fieguth ◽  
Alexander Wong

Transfer learning is a recent field of machine learning research that aims to resolve the challenge of insufficient training data in the domain of interest. This is a particular issue with traditional deep neural networks, where a large amount of training data is needed. Recently, StochasticNets was proposed to take advantage of sparse connectivity in order to decrease the number of parameters that need to be learned, which in turn may relax training data size requirements. In this paper, we study the efficacy of transfer learning on the StochasticNet framework. Experimental results show a 7% improvement in StochasticNet performance when transfer learning is applied in the training step.
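The core idea combined here, sparse random connectivity plus weight transfer, can be illustrated with a toy layer. This is a conceptual sketch only: the keep-probability, layer sizes, and masked-gradient update are illustrative assumptions, not the StochasticNets formulation.

```python
import numpy as np

rng = np.random.default_rng(3)

# StochasticNet-style sparse wiring: each potential connection between two
# layers is kept with probability p_keep, fixed at construction time, so
# only a fraction of the dense parameter count is ever learned.
n_in, n_out, p_keep = 64, 32, 0.25
mask = rng.random((n_in, n_out)) < p_keep

# "Pretrained" sparse weights (zero where no connection exists).
weights = rng.normal(size=(n_in, n_out)) * mask

# Transfer step: reuse the pretrained sparse weights as initialization on a
# new task, then fine-tune -- gradients are masked so absent connections
# stay absent.
grad = rng.normal(size=(n_in, n_out))
weights_finetuned = weights - 0.01 * (grad * mask)

param_fraction = float(mask.mean())  # fraction of a dense layer actually learned
```

The reduced parameter count is what makes the network trainable (and transferable) with less data, which is the interaction the paper studies.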


Symmetry ◽  
2019 ◽  
Vol 11 (7) ◽  
pp. 892 ◽  
Author(s):  
Hyun Kwon ◽  
Hyunsoo Yoon ◽  
Ki-Woong Park

Studies related to pattern recognition and visualization using computer technology have been introduced. In particular, deep neural networks (DNNs) provide good performance for image, speech, and pattern recognition. However, a poisoning attack is a serious threat to a DNN's security. A poisoning attack reduces the accuracy of a DNN by adding malicious training data during the training process. In some situations, it may be necessary to reduce the model's accuracy on one specifically chosen class. For example, an attacker who wants to prevent nuclear facilities from being selectively recognized may need to intentionally keep unmanned aerial vehicles from correctly recognizing nuclear-related facilities. In this paper, we propose a selective poisoning attack that reduces the accuracy of only the chosen class in the model. The proposed method achieves this by training with malicious data corresponding only to the chosen class while maintaining the accuracy of the remaining classes. For the experiments, we used TensorFlow as the machine learning library and MNIST, Fashion-MNIST, and CIFAR10 as the datasets. Experimental results show that the proposed method can reduce the accuracy of the chosen class by 43.2%, 41.7%, and 55.3% on MNIST, Fashion-MNIST, and CIFAR10, respectively, while maintaining the accuracy of the remaining classes.
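The selectivity of the attack, corrupting only training samples of the chosen class, can be sketched with a simple label-flipping stand-in. This is not the paper's actual poisoning procedure; the class index and relabeling rule are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(4)

# Synthetic training labels over 10 classes (e.g. MNIST-sized label space).
n, n_classes, chosen = 1000, 10, 3
labels = rng.integers(0, n_classes, size=n)

# Selective poisoning sketch: relabel only the chosen class's samples to
# random *other* classes; all remaining classes are left untouched.
poisoned = labels.copy()
idx = np.flatnonzero(labels == chosen)
poisoned[idx] = (labels[idx] + rng.integers(1, n_classes, size=idx.size)) % n_classes

# Sanity metrics: every chosen-class label changed, nothing else did.
flipped_in_class = float(np.mean(poisoned[idx] != chosen))
flipped_elsewhere = float(np.mean(poisoned[labels != chosen] != labels[labels != chosen]))
```

Training on `poisoned` instead of `labels` degrades accuracy only where `labels == chosen`, which mirrors the per-class accuracy drops reported in the abstract while the other classes are preserved.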


Author(s):  
Nurullah Türker ◽  
Hümeyra Tercanlı Alkış ◽  
Steven J Sadowsky ◽  
Ulviye Şebnem Büyükkaplan

An ideal occlusal scheme plays an important role in a good prognosis of All-on-Four applications, as it does for other implant therapies, due to the potential impact of occlusal loads on implant prosthetic components. The aim of the present three-dimensional (3D) finite element analysis (FEA) study was to investigate the stresses on abutments, screws and prostheses generated by occlusal loads via different occlusal schemes in the All-on-Four concept. Three-dimensional models of the maxilla, mandible, implants, implant substructures and prostheses were designed according to the All-on-Four concept. Forces were applied from the occlusal contact points formed in maximum intercuspation and eccentric movements in canine guidance occlusion (CGO), group function occlusion (GFO) and lingualized occlusion (LO). The von Mises stress values for abutments and screws and deformation values for prostheses were obtained, and the results were evaluated comparatively. It was observed that the stresses on screws and abutments were more evenly distributed in GFO. Maximum deformation values for the prostheses were observed in the CGO model for lateral movement in both the maxilla and mandible. Within the limits of the present study, GFO may be suggested to reduce stresses on screws, abutments and prostheses in the All-on-Four concept.


2020 ◽  
Vol 1 (1) ◽  
pp. 93-102
Author(s):  
Carsten Strzalka ◽  
Manfred Zehn

For the analysis of structural components, the finite element method (FEM) has become the most widely applied tool for numerical stress and subsequent durability analyses. In industrial applications, advanced FE models result in high numbers of degrees of freedom, making dynamic analyses time-consuming and expensive. As detailed finite element models are necessary for accurate stress results, the resulting data and associated numerical effort of dynamic stress analysis can be high. To reduce that effort, sophisticated methods have been developed to limit numerical calculations and data processing to only small fractions of the global model. Detailed knowledge of the position of a component's highly stressed areas is therefore of great advantage for any present or subsequent analysis steps. In this paper, an efficient method for the a priori detection of highly stressed areas of force-excited components is presented, based on modal stress superposition. As a component's dynamic response and corresponding stress are always a function of its excitation, special attention is paid to the influence of the loading position. Based on the frequency-domain solution of the modally decoupled equations of motion, a coefficient for the a priori weighted superposition of modal von Mises stress fields is developed and validated on a simply supported cantilever beam structure with variable loading positions. The proposed approach is then applied to a simplified industrial model of a twist-beam rear axle.
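The weighted superposition described above can be sketched numerically. The modal stress fields, natural frequencies, and participation coefficients below are random/hypothetical stand-ins for the paper's actual derivation; the coefficient form used here is the standard single-degree-of-freedom frequency response, assumed for illustration.

```python
import numpy as np

rng = np.random.default_rng(5)

# Per-mode von Mises stress fields over the component's nodes (stand-ins
# for modal stress results from an FE eigenvalue analysis).
n_nodes, n_modes = 200, 5
modal_stress = rng.normal(size=(n_modes, n_nodes))

# Hypothetical frequency-domain participation coefficients of the modally
# decoupled equations: q_i ~ phi_i(load) * F / (w_i^2 - W^2 + 2j*zeta*w_i*W),
# so the weighting depends on the loading position through phi_i(load).
omega = np.array([10.0, 25.0, 47.0, 80.0, 120.0])   # natural frequencies
Omega, zeta, F = 30.0, 0.02, 1.0                    # excitation freq., damping, force
phi_at_load = rng.normal(size=n_modes)              # mode shapes at the loading position
q = phi_at_load * F / (omega**2 - Omega**2 + 2j * zeta * omega * Omega)

# Weighted superposition of modal stress fields; the largest magnitudes
# flag the a priori candidates for highly stressed areas.
stress = np.abs((q[:, None] * modal_stress).sum(axis=0))
hot_nodes = np.argsort(stress)[-10:]
```

Because `q` is cheap to recompute for a new loading position, the hot-spot estimate can be updated without re-running a full dynamic stress analysis, which is the point of the a priori detection.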


Sensors ◽  
2021 ◽  
Vol 21 (5) ◽  
pp. 1654
Author(s):  
Poojitha Vurtur Badarinath ◽  
Maria Chierichetti ◽  
Fatemeh Davoudi Kakhki

Current maintenance intervals of mechanical systems are scheduled a priori based on the life of the system, resulting in expensive maintenance scheduling and often undermining the safety of passengers. Going forward, the actual usage of a vehicle will be used to predict stresses in its structure and, therefore, to define a specific maintenance schedule. Machine learning (ML) algorithms can be used to map a reduced set of data coming from real-time measurements of a structure into a detailed/high-fidelity finite element analysis (FEA) model of the same system. As a result, the FEA-based ML approach will directly estimate the stress distribution over the entire system during operation, thus improving the ability to define ad hoc, safe, and efficient maintenance procedures. The paper initially presents a review of the current state of the art of ML methods applied to finite elements. A surrogate finite element approach based on ML algorithms is then proposed to estimate the time-varying response of a one-dimensional beam. Several ML regression models, such as decision trees and artificial neural networks, have been developed, and their performance is compared for direct estimation of the stress distribution over a beam structure. The surrogate finite element models based on ML algorithms are able to estimate the response of the beam accurately, with artificial neural networks providing more accurate results.
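The measurement-to-field mapping described above can be sketched with a toy beam. Everything here is an illustrative assumption: the synthetic "FEA" field, the four sensor locations, and a multi-output linear fit standing in for the paper's decision trees and neural networks.

```python
import numpy as np

rng = np.random.default_rng(6)

# Synthetic "FEA" database: each sample is a full stress field over a beam,
# generated from two load parameters (stand-in for high-fidelity FEA runs).
n_samples, n_sensors, n_field = 400, 4, 50
loads = rng.uniform(-1, 1, size=(n_samples, 2))
x = np.linspace(0, 1, n_field)
field = loads[:, :1] * np.sin(np.pi * x) + loads[:, 1:] * x * (1 - x)

# A reduced set of "real-time measurements": readings at four sensor nodes.
sensor_idx = [10, 20, 30, 40]
sensors = field[:, sensor_idx]

# Multi-output surrogate: sparse sensor readings -> full stress field.
A = np.c_[sensors[:300], np.ones(300)]
coef, *_ = np.linalg.lstsq(A, field[:300], rcond=None)
pred = np.c_[sensors[300:], np.ones(100)] @ coef
rmse = float(np.sqrt(np.mean((pred - field[300:]) ** 2)))
```

Once trained offline on FEA data, the surrogate reconstructs the whole stress distribution from the few in-service measurements, which is what enables usage-based maintenance scheduling.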


Sensors ◽  
2021 ◽  
Vol 21 (1) ◽  
pp. 229
Author(s):  
Xianzhong Tian ◽  
Juan Zhu ◽  
Ting Xu ◽  
Yanjun Li

The latest results in Deep Neural Networks (DNNs) have greatly improved the accuracy and performance of a variety of intelligent applications. However, running such computation-intensive DNN-based applications on resource-constrained mobile devices leads to long latency and high energy consumption. The traditional approach is to run DNNs in the central cloud, but this requires significant amounts of data to be transferred over the wireless network and also results in long latency. To solve this problem, offloading part of the DNN computation to edge clouds has been proposed, realizing collaborative execution between mobile devices and edge clouds. In addition, the mobility of mobile devices can easily cause computation offloading to fail. In this paper, we develop a mobility-included DNN partition offloading algorithm (MDPO) that adapts to the user's mobility. The objective of MDPO is to minimize the total latency of completing a DNN job while the mobile user is moving. The MDPO algorithm is suitable for DNNs with both chain and graph topologies. We evaluate the performance of MDPO against local-only and edge-only execution; experiments show that MDPO significantly reduces total latency, improves DNN performance, and adjusts well to different network conditions.
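For a chain-topology DNN, the partitioning problem reduces to picking a cut point that minimizes device time plus transfer time plus edge time. The sketch below is not the MDPO algorithm itself (which also models mobility); all the per-layer timings, activation sizes, and bandwidth are hypothetical numbers.

```python
# Partitioning a chain-topology DNN between a mobile device and an edge
# cloud: run layers [0, cut) locally, ship the activation, finish on edge.
layer_local_ms = [5.0, 8.0, 12.0, 12.0, 6.0]   # per-layer time on device
layer_edge_ms = [1.0, 1.5, 2.0, 2.0, 1.0]      # per-layer time on edge
output_kb = [200, 150, 40, 10, 5]              # activation size after each layer
bandwidth_kb_per_ms = 2.0                      # uplink rate; in MDPO this varies with mobility
input_kb = 300                                 # raw input size (cut == 0 means edge-only)

def total_latency(cut):
    """Total job latency for a given cut point in the layer chain."""
    local = sum(layer_local_ms[:cut])
    sent = input_kb if cut == 0 else output_kb[cut - 1]
    edge = sum(layer_edge_ms[cut:])
    return local + sent / bandwidth_kb_per_ms + edge

# Exhaustive search over cut points (cut == len(layers) means local-only).
best_cut = min(range(len(layer_local_ms) + 1), key=total_latency)
best_ms = total_latency(best_cut)
```

Here the optimum sits inside the chain: early layers produce large activations that are expensive to send, so the device computes past them before offloading. Mobility enters by making `bandwidth_kb_per_ms` time-varying, which shifts the optimal cut.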


2021 ◽  
Vol 11 (15) ◽  
pp. 6704
Author(s):  
Jingyong Cai ◽  
Masashi Takemoto ◽  
Yuming Qiu ◽  
Hironori Nakajo

Despite being heavily used in the training of deep neural networks (DNNs), multipliers are resource-intensive and insufficient in many scenarios. Previous work has shown the advantage of calculating activation functions, such as the sigmoid, by shift-and-add operations, although these approaches fail to remove multiplications from training altogether. In this paper, we propose an approach that converts all multiplications in the forward and backward passes of DNNs into shift-and-add operations. Because the model parameters and backpropagated errors of a large DNN model are typically clustered around zero, these values can be approximated by their sine values. Multiplications between the weights and error signals are transferred to multiplications of their sine values, which can be replaced with simpler operations with the help of the product-to-sum formula. In addition, a rectified sine activation function is used to further convert layer inputs into sine values. In this way, the original multiplication-intensive operations can be computed through simple add-and-shift operations. This trigonometric approximation method provides an efficient training and inference alternative for devices with insufficient hardware multipliers. Experimental results demonstrate that the method achieves performance close to that of classical training algorithms. The proposed approach sheds new light on future hardware customization research for machine learning.
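The two mathematical facts the approach rests on can be checked directly: the product-to-sum identity sin(a)·sin(b) = ½[cos(a−b) − cos(a+b)], and the small-value approximation x ≈ sin(x). The sketch below only verifies the math; how the cosines themselves map onto shift-and-add hardware is the paper's contribution and is not modeled here, and the weight/error values are hypothetical.

```python
import math

def product_to_sum(a, b):
    """sin(a)*sin(b) rewritten with no product of a- and b-dependent terms."""
    return 0.5 * (math.cos(a - b) - math.cos(a + b))

# Hypothetical small weight and backpropagated error, clustered near zero
# as the paper assumes for large DNNs.
w, e = 0.05, -0.02

exact = w * e                   # the multiplication we want to avoid
approx = product_to_sum(w, e)   # equals sin(w)*sin(e); since w,e are small,
                                # sin(w) ~ w and sin(e) ~ e, so approx ~ w*e

identity_err = abs(math.sin(w) * math.sin(e) - product_to_sum(w, e))
approx_err = abs(exact - approx)
```

The identity is exact (error at machine precision), and the approximation error is cubic in the operand magnitudes, which is why clustering around zero matters: for values of order 0.05 the error is already below 10⁻⁶.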


2021 ◽  
Vol 11 (6) ◽  
pp. 2535
Author(s):  
Bruno E. Silva ◽  
Ramiro S. Barbosa

In this article, we designed and implemented neural controllers to control a nonlinear and unstable magnetic levitation system composed of an electromagnet and a magnetic disk. The objective was to evaluate the implementation and performance of neural control algorithms on low-cost hardware. In a first phase, we designed two classical controllers to provide the training data for the neural controllers. Afterwards, we identified several neural models of the levitation system using Nonlinear AutoRegressive eXogenous (NARX)-type neural networks, which were used to emulate the forward dynamics of the system. Finally, we designed and implemented three neural control structures for the levitation system: the inverse controller, the internal model controller, and the model reference controller. The neural controllers were tested on a low-cost Arduino control platform through MATLAB/Simulink. The experimental results demonstrated the good performance of the neural controllers.
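The NARX identification step, predicting the next output from lagged outputs and inputs, can be sketched on a toy plant. This is illustrative only: the second-order linear difference equation stands in for the (nonlinear, unstable) levitation dynamics, and a least-squares fit stands in for the NARX neural network.

```python
import numpy as np

rng = np.random.default_rng(7)

# Synthetic stable 2nd-order plant driven by a random input signal.
n = 500
u = rng.uniform(-1, 1, size=n)
y = np.zeros(n)
for k in range(2, n):
    y[k] = 1.5 * y[k - 1] - 0.7 * y[k - 2] + 0.5 * u[k - 1]

# NARX regressors: past outputs y[k-1], y[k-2] and past input u[k-1].
Phi = np.c_[y[1:-1], y[:-2], u[1:-1]]
target = y[2:]

# Identification (linear here; a neural net in the paper handles the
# nonlinearity of the real levitation system).
theta, *_ = np.linalg.lstsq(Phi, target, rcond=None)
one_step_pred = Phi @ theta
rmse = float(np.sqrt(np.mean((one_step_pred - target) ** 2)))
```

An identified forward model like this is what the internal-model and model-reference control structures in the paper are built around: the controller is trained against the emulated dynamics rather than the physical plant.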

