Analysis and Synthesis of Mechanical Error in Universal Joints

Author(s): J. L. Cagney, S. S. Rao

Abstract: The modeling of manufacturing errors in mechanisms is an important task in validating practical designs. Probability distributions for the errors can simulate manufacturing variations and real-world operation. This paper presents a mechanical error analysis of universal joint drivelines. Each error is simulated using a probability distribution; that is, a design of the mechanism is created by assigning random values to the errors. Each design is then evaluated by comparing its output error with a limiting value, and the reliability of the universal joint is estimated by treating a design as a failure whenever the output error exceeds the specified limit. In addition, the synthesis problem, which involves allocating tolerances (errors) for minimum manufacturing cost without violating a specified output accuracy requirement, is also considered. Three probability distributions (normal, Weibull, and beta) were used to simulate the random values of the errors. The similarity of the results given by the three distributions suggests that the normal distribution would be acceptable for modeling the tolerances in most cases.
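As a rough illustration of the analysis described above, the following Python sketch runs a Monte Carlo reliability estimate. The output-error function, tolerance parameters, and error limit are all hypothetical stand-ins, not the paper's actual driveline model.

```python
# Minimal Monte Carlo sketch of the reliability analysis described above.
# All numerical values and the output-error combination are assumptions.
import numpy as np

rng = np.random.default_rng(0)
N = 100_000                      # number of simulated designs
LIMIT = 0.05                     # assumed allowable output error (rad)

# Three manufacturing errors, each drawn from one of the three candidate
# distributions; parameters stand in for specified tolerances.
e1 = rng.normal(0.0, 0.01, N)            # normal
e2 = rng.weibull(2.0, N) * 0.01          # Weibull, scaled
e3 = 0.02 * rng.beta(2.0, 5.0, N)        # beta, scaled

# Hypothetical output error standing in for the joint's kinematic
# error-propagation function.
output_error = np.abs(e1 + 0.5 * e2 + 0.8 * e3)

reliability = np.mean(output_error <= LIMIT)
print(f"estimated reliability: {reliability:.4f}")
```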

Author(s): Mohammad Ahsanullah, Mohammad Shakil

A probability distribution can be characterized through various methods. Before a particular probability distribution model is applied to fit real-world data, it is necessary to confirm whether the given continuous probability distribution satisfies the underlying requirements of its characterization. In this paper, characterizations of some continuous probability distributions occurring in physics and allied sciences are established. We consider the normal, Laplace, Lorentz, logistic, Boltzmann, Rayleigh, log-normal, Maxwell, Fermi-Dirac, and Bose-Einstein distributions, and characterize them by applying a truncated moment method, that is, by expressing a truncated moment as the product of the reverse hazard rate and another function of the truncation point. It is hoped that the proposed characterizations will be useful for researchers in various fields of physics and allied sciences.
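For a concrete instance of a truncated-moment identity of this kind, the sketch below numerically checks the classical normal-distribution relation E[X | X <= t] = mu - sigma^2 * f(t)/F(t), where f(t)/F(t) is the reverse hazard rate; the parameter values are arbitrary.

```python
# Numerical check of a truncated-moment identity involving the reverse
# hazard rate, for X ~ N(mu, sigma^2). Values here are illustrative.
import numpy as np
from scipy.stats import norm

mu, sigma, t = 1.0, 2.0, 0.5
rng = np.random.default_rng(1)

x = rng.normal(mu, sigma, 2_000_000)
mc = x[x <= t].mean()                                  # Monte Carlo estimate

rh = norm.pdf(t, mu, sigma) / norm.cdf(t, mu, sigma)   # reverse hazard rate
closed_form = mu - sigma**2 * rh

print(f"Monte Carlo: {mc:.4f}   identity: {closed_form:.4f}")
```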


Author(s): Haoyi Xiong, Kafeng Wang, Jiang Bian, Zhanxing Zhu, Cheng-Zhong Xu, ...

Stochastic Gradient Hamiltonian Monte Carlo (SGHMC) methods have been widely used to sample from probability distributions, incorporating (kernel) density derivatives and/or given datasets. Instead of exploring new samples in kernel spaces, this work proposes a novel SGHMC sampler, Spectral Hamiltonian Monte Carlo (SpHMC), that produces high-dimensional sparse representations of given datasets through sparse sensing and SGHMC. Inspired by compressed sensing, we assume all given samples are low-dimensional measurements of certain high-dimensional sparse vectors, while a continuous probability distribution exists in that high-dimensional space. Specifically, given a dictionary for sparse coding, SpHMC first derives a novel likelihood evaluator of the probability distribution from the loss function of LASSO, then samples from the high-dimensional distribution using stochastic Langevin dynamics with derivatives of the log-likelihood and Metropolis-Hastings sampling. In addition, new samples in the low-dimensional measurement space can be regenerated from the sampled high-dimensional vectors and the dictionary. Extensive experiments were conducted to evaluate the proposed algorithm on real-world datasets; performance comparisons on three real-world applications demonstrate that SpHMC outperforms baseline methods.
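A minimal sketch of the core sampling idea, assuming a random toy dictionary and plain Langevin dynamics on the LASSO-induced log-posterior, without the stochastic gradients and Metropolis-Hastings correction the paper uses:

```python
# Langevin sampling from a LASSO-induced posterior over sparse codes.
# Dictionary, data, and step sizes are toy assumptions, not SpHMC itself.
import numpy as np

rng = np.random.default_rng(2)
n, d = 20, 100                             # measurement dim, sparse-code dim
D = rng.normal(size=(n, d)) / np.sqrt(n)   # assumed sparse-coding dictionary
x_true = np.zeros(d); x_true[:5] = 3.0
y = D @ x_true + 0.01 * rng.normal(size=n)

sigma2, lam, eps = 0.01**2, 1.0, 1e-5
x = np.zeros(d)
samples = []
for step in range(5000):
    # gradient of the log-posterior: Gaussian data term + L1 subgradient
    grad = -D.T @ (D @ x - y) / sigma2 - lam * np.sign(x)
    x = x + 0.5 * eps * grad + np.sqrt(eps) * rng.normal(size=d)
    if step > 2000:                        # discard burn-in
        samples.append(x.copy())

x_mean = np.mean(samples, axis=0)
print("posterior-mean support (|x| > 1):", np.nonzero(np.abs(x_mean) > 1)[0])
```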


2017, Vol. 11 (1), pp. 45-54
Author(s): V. M. Artyushenko, N. A. Vasilyev, E. M. Stavrovskiy, K. L. Samarov

This paper reviews and analyzes issues associated with the probability density distribution of the envelope of signals reflected from extended objects, which is used in the analysis and synthesis of short-range radar devices.


2021, Vol. 18 (2), pp. 1-24
Author(s): Nhut-Minh Ho, Himeshi De Silva, Weng-Fai Wong

This article presents GRAM (GPU-based Runtime Adaption for Mixed-precision), a framework for the effective use of mixed-precision arithmetic in CUDA programs. Our method provides a fine-grained tradeoff between output error and performance. It can create many variants that satisfy different accuracy requirements by adaptively assigning different groups of threads to different precision levels at runtime. To widen the range of applications that can benefit from its approximation, GRAM comes with an optional half-precision approximate math library. Using GRAM, we can trade precision for performance improvements of up to 540%, depending on the application and the accuracy requirement.
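GRAM itself assigns CUDA thread groups to precision levels at runtime; the NumPy sketch below only illustrates the underlying precision/error tradeoff on a stand-in kernel and is not GRAM's API.

```python
# Toy illustration of the mixed-precision tradeoff: evaluate the same
# kernel with one group of elements at half precision and the rest at
# single precision, then measure the output error against a reference.
import numpy as np

rng = np.random.default_rng(3)
x = rng.uniform(0.0, 1.0, 1_000_000).astype(np.float32)

def kernel(v):
    return np.sin(v) * np.exp(-v)             # stand-in numeric kernel

ref = kernel(x.astype(np.float64))            # high-precision reference

frac_half = 0.75                              # fraction of work at half precision
cut = int(frac_half * x.size)
out = np.empty_like(ref)
out[:cut] = kernel(x[:cut].astype(np.float16)).astype(np.float64)
out[cut:] = kernel(x[cut:]).astype(np.float64)

rel_err = np.max(np.abs(out - ref) / np.abs(ref).max())
print(f"{frac_half:.0%} half precision -> max relative error {rel_err:.2e}")
```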


2011, Vol. 09 (supp01), pp. 39-47
Author(s): Alessia Allevi, Maria Bondani, Alessandra Andreoni

We present the experimental reconstruction of the Wigner function of some optical states. The method is based on direct intensity measurements by non-ideal photodetectors operated in the linear regime. The signal state is mixed at a beam splitter with a set of coherent probes of known complex amplitudes, and the probability distribution of the detected photons is measured. The Wigner function is given by a suitable sum of these probability distributions measured for different values of the probe. For comparison, the same data are analyzed to obtain the photon-number distributions and the Wigner functions for photons.
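The reconstruction formula behind such schemes is the displaced-parity sum W(beta) = (2/pi) * sum_n (-1)^n p_n(beta). The sketch below checks it for a coherent signal state, where p_n is Poisson and the exact Wigner value is known in closed form; ideal unit-efficiency detection is assumed.

```python
# Check of the Wigner-reconstruction sum for a coherent state |alpha>:
# after displacement by the probe beta, p_n is Poisson with mean
# |alpha - beta|^2, and the exact Wigner value is (2/pi) exp(-2|alpha-beta|^2).
import numpy as np
from scipy.stats import poisson

alpha, beta = 1.2 + 0.5j, 0.8 + 0.3j
mu = abs(alpha - beta) ** 2                  # mean detected photon number

n = np.arange(0, 200)
p_n = poisson.pmf(n, mu)                     # photon-number distribution
W = (2 / np.pi) * np.sum((-1.0) ** n * p_n)  # alternating-parity sum

W_exact = (2 / np.pi) * np.exp(-2 * mu)
print(f"reconstructed: {W:.6f}   exact: {W_exact:.6f}")
```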


2021, Vol. 5 (1), pp. 1-11
Author(s): Vitthal Anwat, Pramodkumar Hire, Uttam Pawar, Rajendra Gunjal

The Flood Frequency Analysis (FFA) method was introduced by Fuller in 1914 to understand the magnitude and frequency of floods. The present study uses the two most widely accepted probability distributions for FFA, namely the Gumbel Extreme Value type I (GEVI) and the Log-Pearson type III (LP-III). The Kolmogorov-Smirnov (KS) and Anderson-Darling (AD) tests were used to select the most suitable probability distribution at sites in the Damanganga Basin. Moreover, discharges were estimated for various return periods using GEVI and LP-III. The recurrence interval of the largest peak flood on record (Qmax) is 107 years at Nanipalsan and 146 years at Ozarkhed as per LP-III. The Flood Frequency Curves (FFC) indicate that LP-III is the best-fitted probability distribution for FFA of the Damanganga Basin. Therefore, the discharges and return periods estimated by the LP-III probability distribution are more reliable and can be used for designing hydraulic structures.
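A minimal sketch of the GEVI branch of this workflow, on synthetic peak discharges (SciPy has no canonical log-Pearson III fitter, so that branch is omitted here):

```python
# Fit Gumbel (GEVI) to annual peak discharges, check the fit with KS,
# and estimate quantiles for chosen return periods T via P = 1 - 1/T.
# The discharge series below is synthetic.
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)
peaks = stats.gumbel_r.rvs(loc=800.0, scale=250.0, size=60, random_state=rng)

loc, scale = stats.gumbel_r.fit(peaks)
ks = stats.kstest(peaks, "gumbel_r", args=(loc, scale))  # goodness of fit

for T in (10, 50, 100):
    q = stats.gumbel_r.ppf(1.0 - 1.0 / T, loc, scale)
    print(f"T = {T:>3} yr -> Q = {q:8.1f} m^3/s")
print(f"KS statistic: {ks.statistic:.3f}")
```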


2021, Vol. 118 (40), e2025782118
Author(s): Wei-Chia Chen, Juannan Zhou, Jason M. Sheltzer, Justin B. Kinney, David M. McCandlish

Density estimation in sequence space is a fundamental problem in machine learning that is also of great importance in computational biology. Due to the discrete nature and large dimensionality of sequence space, how best to estimate such probability distributions from a sample of observed sequences remains unclear. One common strategy for addressing this problem is to estimate the probability distribution using maximum entropy (i.e., calculating point estimates for some set of correlations based on the observed sequences and predicting the probability distribution that is as uniform as possible while still matching these point estimates). Building on recent advances in Bayesian field-theoretic density estimation, we present a generalization of this maximum entropy approach that provides greater expressivity in regions of sequence space where data are plentiful while still maintaining a conservative maximum entropy character in regions of sequence space where data are sparse or absent. In particular, we define a family of priors for probability distributions over sequence space with a single hyperparameter that controls the expected magnitude of higher-order correlations. This family of priors then results in a corresponding one-dimensional family of maximum a posteriori estimates that interpolate smoothly between the maximum entropy estimate and the observed sample frequencies. To demonstrate the power of this method, we use it to explore the high-dimensional geometry of the distribution of 5′ splice sites found in the human genome and to understand patterns of chromosomal abnormalities across human cancers.
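As a toy illustration of the interpolation idea, the sketch below blends a first-order (independent-site) maximum entropy estimate with observed k-mer frequencies using a single mixing weight, a crude stand-in for the paper's field-theoretic hyperparameter:

```python
# Blend an independent-site maxent estimate with empirical k-mer
# frequencies over a tiny sequence space. Data and weights are toy values.
import itertools
import numpy as np

ALPHABET, L = "ACGT", 3
seqs = ["ACG", "ACG", "ACT", "GCG", "ACG", "TCT"]    # toy observed sample

kmers = ["".join(t) for t in itertools.product(ALPHABET, repeat=L)]
emp = np.array([sum(s == k for s in seqs) for k in kmers], float)
emp /= emp.sum()                                     # sample frequencies

# positional marginals -> independent-site (first-order) maxent estimate
marg = [{a: sum(s[i] == a for s in seqs) / len(seqs) for a in ALPHABET}
        for i in range(L)]
maxent = np.array([np.prod([marg[i][k[i]] for i in range(L)]) for k in kmers])

for w in (0.0, 0.5, 1.0):                            # 0 = pure maxent
    blend = (1 - w) * maxent + w * emp
    top = kmers[int(np.argmax(blend))]
    print(f"w = {w:.1f}: mode = {top}, P = {blend.max():.3f}")
```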


2016, Vol. 11 (1), pp. 432-440
Author(s): M. T. Amin, M. Rizwan, A. A. Alazba

Abstract: This study was designed to find the best-fit probability distribution of annual maximum rainfall, based on a twenty-four-hour sample, in the northern regions of Pakistan using four probability distributions: normal, log-normal, log-Pearson type III, and Gumbel max. Based on the scores of the goodness-of-fit tests, the normal distribution was found to be the best-fit probability distribution at the Mardan rainfall gauging station, and the log-Pearson type III distribution at the remaining rainfall gauging stations. The maximum values of expected rainfall were calculated using the best-fit probability distributions and can be used by design engineers in future research.
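A sketch of the selection step, ranking candidate distributions by their KS statistics on a synthetic rainfall series; log-Pearson type III is approximated here by fitting Pearson III to log-transformed depths:

```python
# Fit several candidate distributions to an annual-maximum rainfall
# series and rank them by the KS statistic. The data are synthetic.
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)
rain = stats.lognorm.rvs(0.4, scale=60.0, size=50, random_state=rng)  # mm

candidates = {
    "normal":          (stats.norm, rain),
    "log-normal":      (stats.lognorm, rain),
    "Gumbel max":      (stats.gumbel_r, rain),
    "log-Pearson III": (stats.pearson3, np.log(rain)),
}
for name, (dist, data) in candidates.items():
    params = dist.fit(data)
    ks = stats.kstest(data, dist.name, args=params)
    print(f"{name:16s} KS = {ks.statistic:.3f}")
```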


Author(s): W. Wu, S. S. Rao

The quality and performance of any mechanical system are greatly influenced by the GD&T (geometric dimensioning and tolerancing) used in its design. Proper consideration of the various types of tolerances associated with different components can not only satisfy the assembly requirements but also minimize the manufacturing cost. To satisfy the design and functional specifications, one has to know how various tolerance patterns affect the manufacturability and assemblability of the designed parts. A thorough understanding of how different forms of mechanical tolerances interact with each other is therefore essential for designers and manufacturers. In this article, the effects of form, orientation, and position tolerances on the kinematic features and dimensions of mechanical systems are analysed using a new approach based on fuzzy logic. In this approach, the α-cut method is used, with the mechanical tolerances concerned treated as intervals. The proposed approach represents a more natural and realistic way of dealing with uncertain properties such as geometric dimensions. A typical mechanical assembly involving form, orientation, and position tolerances is used as an illustrative example. As the fuzzy approach leads to systems of non-linear interval equations, a modified Newton-Raphson method is developed for their solution. The approach is found to be effective, simple, and accurate, and can be extended to the analysis and synthesis of any uncertain mechanical system in which the probability distribution functions of the uncertain parameters are unknown.
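A minimal sketch of the α-cut idea, assuming triangular fuzzy tolerances and a monotonic stand-in assembly function (the paper's interval Newton-Raphson solver is not reproduced):

```python
# Represent tolerances as triangular fuzzy numbers, cut them at several
# alpha levels, and propagate the resulting intervals through a simple
# monotonic clearance function via interval arithmetic. All dimensions
# are illustrative assumptions.
import numpy as np

def alpha_cut(lo, peak, hi, alpha):
    """Interval of a triangular fuzzy number at membership level alpha."""
    return lo + alpha * (peak - lo), hi - alpha * (hi - peak)

def gap(a, b):
    return a - b          # assumed clearance, monotonic in both tolerances

for alpha in (0.0, 0.5, 1.0):
    a_lo, a_hi = alpha_cut(9.95, 10.00, 10.05, alpha)   # shaft dim (mm)
    b_lo, b_hi = alpha_cut(9.80, 9.85, 9.90, alpha)     # bore dim (mm)
    g_lo, g_hi = gap(a_lo, b_hi), gap(a_hi, b_lo)       # interval bounds
    print(f"alpha = {alpha:.1f}: clearance in [{g_lo:.3f}, {g_hi:.3f}] mm")
```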


2017, Vol. 27 (1), pp. 169-180
Author(s): Marton Szemenyei, Ferenc Vajda

Abstract: Dimension reduction and feature selection are fundamental tools for machine learning and data mining. Most existing methods, however, assume that objects are represented by a single vectorial descriptor. In reality, some description methods assign unordered sets or graphs of vectors to a single object, where each vector is assumed to have the same number of dimensions but is drawn from a different probability distribution. Moreover, some applications (such as pose estimation) may require the recognition of individual vectors (nodes) of an object. In such cases it is essential that the nodes within a single object remain distinguishable after dimension reduction. In this paper we propose new discriminant analysis methods that satisfy two criteria simultaneously: separation between classes and separation between the nodes of an object instance. We analyze and evaluate our methods on several different synthetic and real-world datasets.
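A rough sketch of one way to combine the two criteria, adding class and node between-scatter matrices in a generalized eigenproblem; the weighting and data are illustrative assumptions, not the paper's exact formulation:

```python
# Discriminant directions that separate both classes and nodes: maximize
# (class + node) between-scatter against within-group scatter via a
# generalized eigenproblem. Synthetic data, gamma = 1 assumed.
import numpy as np
from scipy.linalg import eigh

rng = np.random.default_rng(6)
d = 10
X, cls, node = [], [], []
for i in range(40):                      # 40 objects, 2 nodes each, 2 classes
    c = i % 2
    base = rng.normal(3.0 * c, 1.0, d)   # class-dependent object center
    for j in range(2):
        X.append(base + rng.normal(2.0 * j, 0.5, d))   # node-dependent offset
        cls.append(c)
        node.append(j)
X, cls, node = np.array(X), np.array(cls), np.array(node)

def between_scatter(labels):
    """Between-group scatter matrix for a labeling of the rows of X."""
    mu = X.mean(axis=0)
    S = np.zeros((d, d))
    for g in np.unique(labels):
        m = X[labels == g].mean(axis=0)
        S += (labels == g).sum() * np.outer(m - mu, m - mu)
    return S

groups = 2 * cls + node                  # within-(class, node) scatter
Sw = np.zeros((d, d))
for g in np.unique(groups):
    Z = X[groups == g] - X[groups == g].mean(axis=0)
    Sw += Z.T @ Z

Sb = between_scatter(cls) + 1.0 * between_scatter(node)   # gamma = 1 assumed
vals, vecs = eigh(Sb, Sw + 1e-6 * np.eye(d))   # generalized eigenproblem
proj = X @ vecs[:, -2:]                        # top-2 discriminant directions
print("projected data shape:", proj.shape)
```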

