Wear Prediction of a Mechanism With Joint Clearance Involving Aleatory and Epistemic Uncertainty

2014 ◽  
Vol 136 (4) ◽  
Author(s):  
Dongyang Sun ◽  
Guoping Chen ◽  
Tiecheng Wang ◽  
Rujie Sun

In this paper, a dynamic wear model to predict wear volume in a mechanism involving aleatory and epistemic uncertainty is established. Harmonic drives are applied to alleviate the impact of clearance on the wear of the mechanism. The contact model of a mechanism with clearance subjected to a harmonic drive is established, with a nonlinear spring-damper model and the flexibility of the harmonic gear considered. In particular, a slider-crank mechanism with clearance is simulated. The results show that the cushioning of clearance-induced collisions achieved with a harmonic drive is superior to that achieved through the flexibility of the mechanical parts. A confidence region method (CRM) for the quantification of aleatory and epistemic uncertainty is then proposed to analyze the effect of parameter uncertainty on wear volume over the entire time domain, and a double-loop Monte Carlo sampling (MCS) approach is improved to propagate uncertainties over the entire time domain. Finally, based on a Kriging model, the CRM is used to analyze the effect of parameter uncertainty on wear volume. The results show that, when both aleatory and epistemic uncertainties are considered, the wear volume boundary is wider, and hence more conservative, than when only aleatory uncertainty is considered. These analyses help to improve the reliability design of the system and lay a theoretical foundation for the mechanical design and precision analysis of mechanical systems.
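As an illustration of the double-loop Monte Carlo scheme the abstract describes, here is a minimal Python sketch: the outer loop samples the epistemic quantity (here, an interval-valued mean clearance), the inner loop samples the aleatory variables for that fixed value, and the envelope of the resulting wear-volume CDFs gives the uncertainty bounds. The `wear_volume` function and all parameter values are hypothetical stand-ins, not the authors' model.

```python
import numpy as np

rng = np.random.default_rng(0)

def wear_volume(clearance, stiffness):
    # Hypothetical stand-in for the dynamic wear model (Archard-type scaling).
    return 1e-3 * clearance**2 / stiffness

n_outer, n_inner = 100, 1000
cdfs = []

# Outer loop: epistemic uncertainty, e.g. an interval-valued mean clearance.
for mu_c in rng.uniform(0.1, 0.3, n_outer):          # hypothetical interval
    # Inner loop: aleatory uncertainty sampled for that fixed epistemic value.
    clearance = rng.normal(mu_c, 0.02, n_inner)      # hypothetical scatter
    stiffness = rng.normal(5.0, 0.5, n_inner)        # hypothetical stiffness
    cdfs.append(np.sort(wear_volume(clearance, stiffness)))

# The envelope of the per-realization CDFs bounds the wear volume (a p-box).
lower, upper = np.min(cdfs, axis=0), np.max(cdfs, axis=0)
print(upper[-1] - lower[-1])   # width of the bound at the largest quantile
```

A surrogate such as a Kriging model would replace `wear_volume` in practice, since evaluating the full multibody dynamics inside both loops is usually too expensive.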

Author(s):  
Joshua Mullins ◽  
Sankaran Mahadevan

This paper proposes a comprehensive approach to prediction under uncertainty by application to the Sandia National Laboratories verification and validation challenge problem. In this problem, legacy data and experimental measurements of different levels of fidelity and complexity (e.g., coupon tests, material and fluid characterizations, and full system tests/measurements) compose a hierarchy of information where fewer observations are available at higher levels of system complexity. This paper applies a Bayesian methodology in order to incorporate information at different levels of the hierarchy and include the impact of sparse data in the prediction uncertainty for the system of interest. Since separation of aleatory and epistemic uncertainty sources is a pervasive issue in calibration and validation, maintaining this separation in order to perform these activities correctly is the primary focus of this paper. Toward this goal, a Johnson distribution family approach to calibration is proposed in order to enable epistemic and aleatory uncertainty to be separated in the posterior parameter distributions. The model reliability metric approach to validation is then applied, and a novel method of handling combined aleatory and epistemic uncertainty is introduced. The quality of the validation assessment is used to modify the parameter uncertainty and add conservatism to the prediction of interest. Finally, this prediction with its associated uncertainty is used to assess system-level reliability (a prediction goal for the challenge problem).
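The model reliability metric mentioned above admits a simple sampling estimate; the sketch below is a generic illustration with assumed Gaussian prediction and observation samples and a user-chosen tolerance `eps`, not the paper's specific formulation.

```python
import numpy as np

rng = np.random.default_rng(1)

def model_reliability(pred, obs, eps):
    """Estimate r = P(|y_pred - y_obs| < eps) from paired independent samples."""
    return np.mean(np.abs(pred - obs) < eps)

# Hypothetical samples: model predictions carrying both aleatory spread and
# epistemic (posterior) spread, versus replicated observations.
pred = rng.normal(10.0, 1.5, 10_000)
obs = rng.normal(10.4, 1.0, 10_000)
print("model reliability:", model_reliability(pred, obs, eps=1.0))
# A low value would be used to inflate the parameter uncertainty and add
# conservatism to the final prediction, as described in the abstract.
```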


2020 ◽  
Vol 0 (0) ◽  
Author(s):  
Tawfik Yahya ◽  
Nur Azah Hamzaid ◽  
Sadeeq Ali ◽  
Farahiyah Jasni ◽  
Hanie Nadia Shasmin

Abstract. A transfemoral prosthesis is required to assist amputees in performing activities of daily living (ADL). A passive prosthesis has drawbacks such as high metabolic energy consumption; in contrast, an active prosthesis consumes less metabolic energy and offers better performance. However, recent active prostheses use surface electromyography as their sensory system, which produces weak, microvolt-level signals and requires substantial computation for feature extraction. This paper focuses on recognizing different phases of sitting and standing of a transfemoral amputee using in-socket piezoelectric-based sensors. Fifteen piezoelectric film sensors were embedded in the inner socket wall adjacent to the most active regions of the agonist and antagonist knee extensor and flexor muscles, i.e., the regions with the highest level of muscle contraction of the quadriceps and hamstring. A male transfemoral amputee wore the instrumented socket and was instructed to perform several sitting and standing phases using an armless chair. Data were collected from the 15 embedded sensors and passed through signal conditioning circuits. The overlapping analysis window technique was used to segment the data using different window lengths. Fifteen time-domain and frequency-domain features were extracted, and new feature sets were obtained based on feature performance. Eight common pattern recognition multiclass classifiers were evaluated and compared. Regression analysis was used to investigate the impact of the number of features and the window lengths on the classifiers' accuracies, and analysis of variance (ANOVA) was used to test for significant differences in the classifiers' performances. The classification accuracy was calculated using the k-fold cross-validation method, and 20% of the data set was held out for testing the optimal classifier. The results showed that the feature set (FS-5) consisting of the root mean square (RMS) and the number of peaks (NP) achieved the highest classification accuracy in five classifiers. A support vector machine (SVM) with a cubic kernel proved to be the optimal classifier, achieving a classification accuracy of 98.33% on the test data set. Obtaining high classification accuracy using only two time-domain features would significantly reduce the processing time of controlling a prosthesis and eliminate substantial delay. The proposed in-socket sensors used to detect sit-to-stand and stand-to-sit movements could be further integrated with an active knee joint actuation system to produce powered assistance during energy-demanding activities such as sit-to-stand transitions and stair climbing. In the future, the system could also be used to accurately predict the intended movement based on the residual limb's muscle and mechanical behaviour as detected by the in-socket sensory system.
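A minimal sketch of the pipeline described above, covering overlapping-window segmentation, the FS-5 features (RMS and number of peaks), and a cubic-kernel SVM; the signal data, window sizes, and labels here are synthetic placeholders, not the study's recordings.

```python
import numpy as np
from scipy.signal import find_peaks
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

def extract_features(signals, win_len=200, step=50):
    """Overlapping-window RMS and number-of-peaks (FS-5) per sensor channel.

    signals: array of shape (n_samples, n_channels), e.g. 15 channels.
    Returns one feature row (RMS + NP for every channel) per window.
    """
    rows = []
    for start in range(0, signals.shape[0] - win_len + 1, step):
        win = signals[start:start + win_len]
        rms = np.sqrt(np.mean(win**2, axis=0))
        n_peaks = [len(find_peaks(win[:, ch])[0]) for ch in range(win.shape[1])]
        rows.append(np.concatenate([rms, n_peaks]))
    return np.asarray(rows)

# Synthetic data: 15-channel in-socket signals and per-window phase labels.
rng = np.random.default_rng(2)
X = extract_features(rng.standard_normal((5000, 15)))
y = rng.integers(0, 5, len(X))        # e.g. five sitting/standing phases

clf = SVC(kernel="poly", degree=3)    # cubic-kernel SVM
print(cross_val_score(clf, X, y, cv=5).mean())
```

With random labels the score is near chance; on real in-socket data the same two features per channel would feed the classifier that reached 98.33% in the study.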


2010 ◽  
Vol 132 (4) ◽  
Author(s):  
Marwan Hassan ◽  
Achraf Hossen

This paper presents simulations of a loosely supported cantilever tube subjected to turbulence and fluidelastic instability forces. Several time domain fluid force models are presented to simulate the damping-controlled fluidelastic instability mechanism in tube arrays. These models include a negative damping model based on the Connors equation, fluid force coefficient-based models (Chen, 1983, “Instability Mechanisms and Stability Criteria of a Group of Cylinders Subjected to Cross-Flow. Part 1: Theory,” Trans. ASME, J. Vib., Acoust., Stress, Reliab. Des., 105, pp. 51–58; Tanaka and Takahara, 1981, “Fluid Elastic Vibration of Tube Array in Cross Flow,” J. Sound Vib., 77, pp. 19–37), and two semi-analytical models (Price and Païdoussis, 1984, “An Improved Mathematical Model for the Stability of Cylinder Rows Subjected to Cross-Flow,” J. Sound Vib., 97(4), pp. 615–640; Lever and Weaver, 1982, “A Theoretical Model for the Fluidelastic Instability in Heat Exchanger Tube Bundles,” ASME J. Pressure Vessel Technol., 104, pp. 104–147). Time domain modeling and implementation challenges for each of these theories were discussed. For each model, the flow velocity and the support clearance were varied. Special attention was paid to the tube/support interaction parameters that affect wear, such as impact forces and normal work rate. As the prediction of the linear threshold varies depending on the model utilized, the nonlinear response also differs. The investigated models exhibit similar response characteristics for the lift response. The greatest differences were seen in the prediction of the drag response, the impact force level, and the normal work rate. Simulation results show that the Connors-based model consistently underestimates the response and the tube/support interaction parameters for the loose support case.
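To make the negative-damping idea concrete, here is a single-degree-of-freedom sketch in the spirit of the Connors-based model: the net damping becomes destabilizing above a critical flow velocity, and a spring-damper loose support with clearance limits the response. This is not any of the cited formulations, and all parameter values are hypothetical.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Hypothetical tube and support properties.
m, c, k = 1.0, 0.3, 4.0e3          # mass, structural damping, stiffness
k_sup, c_sup, gap = 5.0e4, 50.0, 1.0e-3   # loose-support contact parameters
U, U_crit, alpha = 6.0, 5.0, 1.0   # flow velocity, critical velocity, scale

def support_force(x, v):
    # Loose support: spring-damper contact force once the clearance is closed.
    if x > gap:
        return -k_sup * (x - gap) - c_sup * v
    if x < -gap:
        return -k_sup * (x + gap) - c_sup * v
    return 0.0

def rhs(t, z):
    x, v = z
    # Negative-damping fluidelastic model: net damping becomes destabilizing
    # once the flow velocity exceeds a Connors-type critical velocity.
    c_eff = c - alpha * (U - U_crit)
    return [v, (-c_eff * v - k * x + support_force(x, v)) / m]

sol = solve_ivp(rhs, (0.0, 20.0), [1e-4, 0.0], max_step=1e-3)
x, v = sol.y
f_imp = np.array([support_force(xi, vi) for xi, vi in zip(x, v)])
print("peak displacement:", np.abs(x).max())
print("peak impact force:", np.abs(f_imp).max())   # a wear-relevant quantity
```

The impact forces and work rates that drive wear come from exactly this kind of tube/support interaction, which is why the choice of fluid force model changes the wear prediction even when the linear stability threshold looks similar.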


2021 ◽  
Vol 37 (1_suppl) ◽  
pp. 1420-1439
Author(s):  
Albert R Kottke ◽  
Norman A Abrahamson ◽  
David M Boore ◽  
Yousef Bozorgnia ◽  
Christine A Goulet ◽  
...  

Traditional ground-motion models (GMMs) are used to compute the pseudo-spectral acceleration (PSA) expected from future earthquakes and are generally developed by regression of PSA using a physics-based functional form. PSA is a relatively simple metric that correlates well with the response of several engineering systems and is commonly used in engineering evaluations; however, characteristics of the PSA calculation make the application of scaling factors dependent on the frequency content of the input motion, complicating the development and adaptability of GMMs. By comparison, the Fourier amplitude spectrum (FAS) represents ground-motion amplitudes that are completely independent of the amplitudes at other frequencies, making it an attractive alternative for GMM development. Random vibration theory (RVT) predicts the peak response of a motion in the time domain based on the FAS and a duration, and thus can be used to relate FAS to PSA. Using RVT to compute the expected peak time-domain response for a given FAS therefore presents a significant advantage that is gaining traction in the GMM field. This article provides recommended RVT procedures relevant to GMM development, which were developed for the Next Generation Attenuation (NGA)-East project. In addition, an orientation-independent FAS metric, called the effective amplitude spectrum (EAS), is developed for use in conjunction with RVT to preserve the mean power of the corresponding two horizontal components considered in traditional PSA-based modeling (i.e., RotD50). The EAS uses a standardized smoothing approach to provide a practical representation of the FAS for ground-motion modeling, while minimizing the impact on the four RVT properties (the zeroth moment, bandwidth parameter, frequency of zero crossings, and frequency of extrema). Although the recommendations were originally developed for NGA-East, they and the underlying methodology are portable to other GMM and engineering problems requiring the computation of PSA from FAS.
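A compact sketch of the RVT step that links an FAS to an expected peak time-domain response, using the spectral moments named above and the asymptotic Cartwright-Longuet-Higgins peak factor; the FAS shape and duration are toy inputs, and the article's recommended procedures refine several of these choices.

```python
import numpy as np

def rvt_peak(freqs, fas, duration):
    """Expected peak time-domain response from a one-sided FAS via RVT."""
    m0 = 2.0 * np.trapz(fas**2, freqs)                            # zeroth moment
    m2 = 2.0 * np.trapz((2 * np.pi * freqs)**2 * fas**2, freqs)
    m4 = 2.0 * np.trapz((2 * np.pi * freqs)**4 * fas**2, freqs)
    f_zc = np.sqrt(m2 / m0) / (2 * np.pi)        # frequency of zero crossings
    f_ext = np.sqrt(m4 / m2) / (2 * np.pi)       # frequency of extrema
    n_ext = max(2.0, f_ext * duration)           # expected number of extrema
    ln2n = 2.0 * np.log(n_ext)
    peak_factor = np.sqrt(ln2n) + 0.5772 / np.sqrt(ln2n)  # CLH asymptote
    return peak_factor * np.sqrt(m0 / duration)  # rms scaled to expected peak

# Toy FAS shape and duration; a real application would use a GMM-derived FAS.
f = np.linspace(0.1, 50.0, 2000)
fas = f / (1.0 + (f / 5.0)**2)
print("expected peak:", rvt_peak(f, fas, duration=10.0))
```

To obtain PSA rather than peak ground motion, the FAS would first be multiplied by the transfer function of a single-degree-of-freedom oscillator at each period of interest before the moments are computed.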


Author(s):  
Alessandra Cuneo ◽  
Alberto Traverso ◽  
Shahrokh Shahpar

In engineering design, uncertainty is inevitable and can cause a significant deviation in the performance of a system. Uncertainty in input parameters can be categorized into two groups: aleatory and epistemic uncertainty. The work presented here is focused on aleatory uncertainty, which causes natural, unpredictable and uncontrollable variations in the performance of the system under study. Such uncertainty can be quantified using statistical methods, but the main obstacle is often the computational cost, because the representative model is typically highly non-linear and complex. Therefore, it is necessary to have a robust tool that can perform the uncertainty propagation with as few evaluations as possible. In the last few years, different methodologies for uncertainty propagation and quantification have been proposed. The focus of this study is to evaluate four different methods and demonstrate the strengths and weaknesses of each approach. The first method considered is Monte Carlo simulation, a sampling method that can give high accuracy but needs a relatively large computational effort. The second method is Polynomial Chaos, an approximation method in which the probabilistic parameters of the response function are modelled with orthogonal polynomials. The third method considered is the Mid-range Approximation Method; this approach is based on the assembly of multiple meta-models into one model to perform optimization under uncertainty. The fourth method applies the first two methods not to the model directly but to a response surface representing the simulation model, in order to decrease the computational cost. All these methods have been applied to a set of analytical test functions and engineering test cases. Relevant aspects of engineering design and analysis, such as a high number of stochastic variables and optimised design problems with and without stochastic design parameters, were assessed. Polynomial Chaos emerged as the most promising methodology and was then applied to a turbomachinery test case based on a thermal analysis of a high-pressure turbine disk.
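To contrast the first two approaches, here is a toy comparison of plain Monte Carlo with a one-dimensional polynomial chaos expansion in probabilists' Hermite polynomials (orthogonal under a standard normal input); the response function, polynomial degree, and sample sizes are illustrative only.

```python
import math
import numpy as np
from numpy.polynomial.hermite_e import hermevander

rng = np.random.default_rng(3)

def model(x):
    return np.exp(0.3 * x) + 0.1 * x**2   # toy nonlinear response

# Monte Carlo: robust but needs many model evaluations.
mc = model(rng.standard_normal(100_000))
print("MC  mean/std:", mc.mean(), mc.std())

# Polynomial Chaos (point collocation): fit He_0..He_5 to a handful of
# evaluations, then read the moments directly off the coefficients,
# using the orthogonality relation E[He_k(X)^2] = k! for X ~ N(0, 1).
x_train = rng.standard_normal(50)
V = hermevander(x_train, 5)
coef, *_ = np.linalg.lstsq(V, model(x_train), rcond=None)
var = sum(coef[k]**2 * math.factorial(k) for k in range(1, 6))
print("PCE mean/std:", coef[0], np.sqrt(var))
```

The point of the comparison: the PCE recovers the first two moments from 50 evaluations, whereas the Monte Carlo estimate above used 100,000, which is the computational-cost argument the abstract makes.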


Author(s):  
Michalis I. Vousdoukas ◽  
Dimitrios Bouziotas ◽  
Alessio Giardino ◽  
Laurens M. Bouwer ◽  
Evangelos Voukouvalas ◽  
...  

Abstract. An upscaling of flood risk assessment frameworks beyond regional and national scales has taken place during recent years, with a number of large-scale models emerging as tools for hotspot identification, support for international policy-making, and harmonization of climate change adaptation strategies. There is, however, limited insight into the scaling effects and structural limitations of flood risk models and, therefore, the underlying uncertainty. In light of this, we examine key sources of epistemic uncertainty in the Coastal Flood Risk (CFR) modelling chain: (i) the inclusion and interaction of different hydraulic components leading to extreme sea level (ESL); (ii) inundation modelling; (iii) the underlying uncertainty in the Digital Elevation Model (DEM); (iv) flood defence information; (v) the assumptions behind the use of depth-damage functions that express vulnerability; and (vi) different climate change projections. The impact of these uncertainties on the estimated Expected Annual Damage (EAD) for present and future climates is evaluated in a dual case study in Faro, Portugal, and in the Iberian Peninsula. The ranking of the uncertainty factors varies among the case studies, the baseline CFR estimates, and their absolute/relative changes. We find that the uncertainty from ESL contributions, and in particular the way waves are treated, can be higher than the uncertainty from the two greenhouse gas emission projections and six climate models that are used. Of comparable importance is the quality of information on coastal protection levels and DEM information. In the absence of large-extent datasets with sufficient resolution and accuracy, the latter two factors are the main bottlenecks in terms of large-scale CFR assessment quality.
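For reference, the EAD metric evaluated above is the integral of damage over annual exceedance probability; a minimal sketch with hypothetical damage/return-period pairs follows.

```python
import numpy as np

# Hypothetical flood damages (million EUR) for a set of return periods (years).
return_periods = np.array([2, 5, 10, 50, 100, 500, 1000])
damages = np.array([0.0, 1.2, 3.5, 14.0, 25.0, 60.0, 85.0])

# EAD is the integral of damage over annual exceedance probability,
# approximated here with the trapezoidal rule.
exceed_prob = 1.0 / return_periods
order = np.argsort(exceed_prob)
ead = np.trapz(damages[order], exceed_prob[order])
print(f"Expected Annual Damage: {ead:.2f} MEUR/yr")
```

In an uncertainty analysis like the one described, each modelling choice (ESL components, DEM, defences, damage functions, climate projection) perturbs the damage/frequency pairs, and the spread of the resulting EAD values is what gets ranked.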


2017 ◽  
Vol 15 (1-2) ◽  
Author(s):  
Santosh V. Bhaskar ◽  
Hari N. Kudal

Components of forming tool dies, such as draw rings and ejector pins, are manufactured from AISI 4140 steel. The integrity of the die cutting tools is essential to achieve adequate product quality. In the present study, the influence of plasma nitriding (PN) on the wear behaviour of AISI 4140 steel was investigated. A full factorial experimental design was used to study the main effects and the interaction effects between the operational parameters and the response variable. The control factors, coded A, B, and C and each investigated at two levels (-1 and +1), were applied load (4.905 N and 14.715 N), sliding speed (3.14 m/s and 5.23 m/s), and sliding distance (500 m and 1000 m). The selected response was wear volume loss (WVL). The effects of the individual variables and their interactions on WVL were determined, and the process of selecting significant factors using statistical tools is illustrated. Analysis of variance (ANOVA) was performed to assess the impact of the individual factors on WVL. Untreated and PN-treated AISI 4140 specimens were examined using a field emission scanning electron microscope (SEM) equipped with an energy dispersive X-ray (EDX) analyzer. Finally, diagnostic tools were used to check the adequacy of the model in terms of the assumptions of ANOVA. The 'Design Expert-7' and 'Minitab 17' software packages were used in the study. The statistical analysis indicates that the parameters with the strongest effect on WVL were load and sliding speed, and the interaction between load and sliding speed was the most influential interaction. Regression analysis yields a coefficient of determination (R2) above 90%, which suggests good predictability of the model; the 'Predicted R2' and 'Adjusted R2' values were found to be in good agreement with R2 for both materials under investigation. Moreover, the SEM results suggest that PN is an effective technique to reduce wear.
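A minimal sketch of the 2^3 full factorial analysis described above, using statsmodels for the ANOVA; the WVL responses below are synthetic, so the output only illustrates the workflow, not the paper's results.

```python
from itertools import product

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm

# Coded -1/+1 levels for load (A), sliding speed (B), sliding distance (C),
# replicated three times, with synthetic wear volume loss (WVL) responses.
runs = pd.DataFrame(list(product([-1, 1], repeat=3)) * 3, columns=list("ABC"))
rng = np.random.default_rng(4)
runs["WVL"] = (10 + 4 * runs.A + 2.5 * runs.B + 1.5 * runs.A * runs.B
               + rng.normal(0, 0.5, len(runs)))

# Full model with all two- and three-factor interactions.
fit = smf.ols("WVL ~ A * B * C", data=runs).fit()
print(anova_lm(fit))                                  # significance of each term
print("R2:", fit.rsquared, "Adj-R2:", fit.rsquared_adj)
```

The ANOVA table flags the significant main effects and interactions, and the R2 values play the role of the predictability check reported in the abstract.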


2021 ◽  
Author(s):  
Victoria J Brookes ◽  
Okta Wismandanu ◽  
Etih Sudarnika ◽  
Justin A Roby ◽  
Lynne Hayes ◽  
...  

Wet markets are important for food security in many regions worldwide but have come under scrutiny due to their potential role in the emergence of infectious diseases. The sale of live wildlife has been highlighted as a particular risk, and the World Health Organisation has called for a ban on the sale of live, wild-caught mammalian species in markets unless risk assessment and effective regulations are in place. Following PRISMA guidelines, we conducted a global scoping review of peer-reviewed information about the sale of live, terrestrial wildlife in markets that are likely to sell fresh food, and collated data about the characteristics of such markets, activities involving live wildlife, the species sold, their purpose, and the animal, human, and environmental health risks that were identified. Of the 59 peer-reviewed records within scope, only 25% (n = 14) focused on disease risks; the rest focused on the impact of wildlife sale on conservation. Although there were some global patterns (for example, in the types of markets and the purpose of wildlife sale), there was wide diversity and substantial epistemic uncertainty in all aspects of the sale of live, terrestrial wildlife in markets, such that the feasibility of accurately assessing the risk of emerging infectious disease associated with the live wildlife trade in markets is limited. Given the value of both wet markets and wildlife trade, and the need to support food affordability and accessibility, conservation, public health, and the social and economic aspects of the livelihoods of often vulnerable people, there are major information gaps that need to be addressed to develop evidence-based policy in this environment. This review identifies these gaps and provides a foundation from which information for risk assessments can be collected.


2017 ◽  
Vol 600 ◽  
pp. A60 ◽  
Author(s):  
Davide Poletti ◽  
Giulio Fabbian ◽  
Maude Le Jeune ◽  
Julien Peloton ◽  
Kam Arnold ◽  
...  

Analysis of cosmic microwave background (CMB) datasets typically requires some filtering of the raw time-ordered data. For instance, in the context of ground-based observations, filtering is frequently used to minimize the impact of low-frequency noise, atmospheric contributions, and/or scan-synchronous signals on the resulting maps. In this work we explicitly construct a general filtering operator, which can unambiguously remove any set of unwanted modes in the data, and then amend the map-making procedure in order to incorporate and correct for it. We show that such an approach is mathematically equivalent to the solution of a problem in which the sky signal and unwanted modes are estimated simultaneously and the latter are marginalized over. We investigate the conditions under which this amended map-making procedure can render an unbiased estimate of the sky signal in realistic circumstances. We then discuss the potential implications of these observations for the choice of map-making and power spectrum estimation approaches in the context of B-mode polarization studies. Specifically, we study the effects of time-domain filtering on the noise correlation structure in the map domain, as well as the impact it may have on the performance of the popular pseudo-spectrum estimators. We conclude that, although maps produced by the proposed estimators arguably provide the most faithful representation of the sky possible given the data, they may not straightforwardly lead to the best constraints on the power spectra of the underlying sky signal, and special care may need to be taken to ensure this is the case. By contrast, simplified map-makers which do not explicitly correct for time-domain filtering, but leave it to subsequent steps in the data analysis, may perform equally well and be easier and faster to implement. We focus on polarization-sensitive measurements targeting the B-mode component of the CMB signal and apply the proposed methods to realistic simulations based on the characteristics of an actual CMB polarization experiment, POLARBEAR. Our analysis and conclusions are, however, more generally applicable.
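A toy numpy illustration of the construction described above: the filtering operator F = I - T (T^T T)^{-1} T^T projects out the unwanted modes spanned by the columns of T, and the amended map-maker solves the filtered normal equations, which is equivalent to marginalizing over the template amplitudes. The pointing, templates, and noise here are hypothetical and far simpler than a real CMB pipeline.

```python
import numpy as np

rng = np.random.default_rng(5)
n_t, n_pix, n_tmpl = 2000, 50, 4

# Hypothetical pointing matrix P (which pixel each time sample hits) and
# templates T spanning the unwanted modes (e.g. low-frequency drifts).
P = np.zeros((n_t, n_pix))
P[np.arange(n_t), rng.integers(0, n_pix, n_t)] = 1.0
t = np.linspace(0.0, 1.0, n_t)
T = np.vander(t, n_tmpl)                 # polynomial drift templates

sky = rng.standard_normal(n_pix)
d = P @ sky + T @ rng.standard_normal(n_tmpl) + 0.1 * rng.standard_normal(n_t)

# Filtering operator: F d removes any component lying in the span of T.
F = np.eye(n_t) - T @ np.linalg.solve(T.T @ T, T.T)

# Amended map-maker: solve (P^T F P) m = P^T F d. The constant template is
# degenerate with an overall map offset, so solve in the least-squares sense
# and compare the recovered map to the input after removing the mean.
A = P.T @ F @ P
m_est, *_ = np.linalg.lstsq(A, P.T @ F @ d, rcond=None)
resid = (m_est - sky) - (m_est - sky).mean()
print("map rms error (offset removed):", resid.std())
```

The singular mode handled by `lstsq` is a one-line instance of the bias/degeneracy question the abstract raises: modes shared between the sky and the filtered templates cannot be recovered and must be treated explicitly downstream.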

