Maximum Entropy Approach for Modeling Hardness Uncertainties in Rabinowicz's Abrasive Wear Equation

2014 ◽  
Vol 136 (2) ◽  
Author(s):  
Fabio Antonio Dorini ◽  
Giuseppe Pintaude ◽  
Rubens Sampaio

A very useful model for predicting abrasive wear is the linear wear law based on Rabinowicz's equation. This equation assumes that the removed volume of the abraded material is inversely proportional to its hardness. This paper focuses on the stochastic modeling of the abrasive wear process, taking into account the experimental uncertainties in the identification process of the worn material hardness. The description of hardness is performed by means of the maximum entropy principle (MEP) using only the information available. Propagation of the uncertainties from the data to the volume of wear produced is analyzed. Moreover, comparisons and discussions with other probabilistic models for worn material hardness usually proposed in the literature are presented.
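The propagation described in this abstract can be sketched numerically. Rabinowicz's linear law states that the worn volume is proportional to load and sliding distance and inversely proportional to hardness. If the available information on the hardness is a known mean and a known mean of its logarithm on (0, ∞), the maximum entropy principle yields a gamma distribution; the numerical values below (load, distance, gamma parameters) are illustrative assumptions, not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

# Rabinowicz's linear wear law: V = k * W * L / H
# (V: worn volume, k: wear coefficient, W: normal load,
#  L: sliding distance, H: hardness of the abraded material).
k, W, L = 1e-3, 50.0, 100.0          # illustrative values

# With a known mean of H and a known mean of ln(H) on (0, inf),
# the maximum entropy density is proportional to H**a * exp(-b*H),
# i.e. a gamma distribution.  Shape/scale here are illustrative.
shape, scale = 25.0, 80.0            # mean hardness = 2000 (arbitrary units)
H = rng.gamma(shape, scale, size=100_000)

# Monte Carlo propagation of the hardness uncertainty to the wear volume.
V = k * W * L / H
print(V.mean(), V.std())
```

For a gamma-distributed H with shape a > 1 and scale s, E[1/H] = 1/(s(a-1)) in closed form, which the sample mean above approximates.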

2012 ◽  
Vol 79 (5) ◽  
Author(s):  
Fabio Antonio Dorini ◽  
Rubens Sampaio

The most used model for predicting wear is the linear wear law proposed by Archard. A common generalization of Archard’s wear law is based on the assumption that the wear rate at any point on the contact surface is proportional to the local contact pressure and the relative sliding velocity. This work focuses on a stochastic modeling of the wear process to take into account the experimental uncertainties in the identification process of the contact-state-dependent wear coefficient. The dispersion of the wear coefficient is described by a probability density function, constructed via the maximum entropy principle using only the information available. Closed-form results for the probability density function of the wear depth for several situations that commonly occur in practice are provided.
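A minimal sketch of the setting in this abstract, under assumed numerical values: with constant pressure and velocity, the generalized Archard law gives a wear depth h = k·p·v·t, so h simply inherits a rescaled version of the wear coefficient's distribution. If only the mean of the (positive) coefficient is known, the maximum entropy principle gives an exponential density (the paper derives closed-form PDFs; here a Monte Carlo check is used instead):

```python
import numpy as np

rng = np.random.default_rng(1)

# Generalized Archard law: dh/dt = k * p * v.  For constant contact
# pressure p and sliding velocity v, the wear depth after time t is
#     h = k * p * v * t.
p, v, t = 2.0e6, 0.1, 3600.0         # illustrative pressure, velocity, time

# Known mean of k on (0, inf) -> maximum entropy gives an exponential law.
k_mean = 1.0e-13                      # illustrative mean wear coefficient
k = rng.exponential(k_mean, size=200_000)

# h is a deterministic rescaling of k, so h is also exponential,
# with mean k_mean * p * v * t.
h = k * p * v * t
print(h.mean())
```

The closed-form mean here is k_mean·p·v·t; the sample average should match it to within Monte Carlo error.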


2020 ◽  
Author(s):  
Purushottam D. Dixit

In modern biological physics, there is great interest in building generative probabilistic models for ensembles of covarying binary variables. A popular approach is to use the maximum entropy principle. Here, one builds generative models that use as constraints lower-level statistics estimated from the data. While extremely popular, maximum entropy models have conceptual as well as practical issues; they rely on the modelers’ choice of constraints and are computationally expensive to infer when the number of variables is large (n > 100). Here, we address both these issues with the Superstatistical Generative Model for binary Data (SiGMoiD). SiGMoiD is a maximum entropy based framework where we imagine the data as arising from a superstatistical system; individual binary variables are coupled to the same bath whose intensive variables fluctuate from sample to sample. Moreover, instead of choosing the constraints, in SiGMoiD we choose only the number of constraints and let the algorithm infer them from the data itself. Notably, we show that SiGMoiD is orders of magnitude faster than current maximum entropy-based models and allows us to model collections of very large numbers of binary variables. We also discuss future directions.
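The generative picture described here can be illustrated with a simplified sketch (this is not the paper's inference algorithm; the couplings and bath distribution below are assumptions for illustration). Each sample draws its own bath variables, and every binary unit then switches on independently with a sigmoid probability of its coupling to that bath:

```python
import numpy as np

rng = np.random.default_rng(2)

# Superstatistical generation of binary data, in the spirit of SiGMoiD:
# each sample draws its own intensive bath variables theta, and binary
# unit i turns on independently with probability sigmoid(w_i . theta).
n_units, n_latent, n_samples = 50, 3, 1000

W = rng.normal(0.0, 1.0, size=(n_units, n_latent))        # couplings (assumed)
theta = rng.normal(0.0, 1.0, size=(n_samples, n_latent))  # fluctuating bath

logits = theta @ W.T                                 # one row per sample
prob = 1.0 / (1.0 + np.exp(-logits))                 # sigmoid activation
X = (rng.random(prob.shape) < prob).astype(int)      # binary data matrix
print(X.shape)
```

Because the bath fluctuates from sample to sample, the marginal distribution of X is a mixture over bath states, which is what gives the superstatistical model its extra flexibility over a single fixed-parameter maximum entropy model.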


1990 ◽  
Vol 27 (2) ◽  
pp. 303-313 ◽  
Author(s):  
Claudine Robert

The maximum entropy principle is used to model uncertainty by a maximum entropy distribution, subject to some appropriate linear constraints. We give an entropy concentration theorem (whose proof is based on large deviation techniques) which is a mathematical justification of this statistical modelling principle. Then we indicate how it can be used in artificial intelligence, and how relevant prior knowledge is provided by some classical descriptive statistical methods. It appears furthermore that the maximum entropy principle yields a natural binding between descriptive methods and some statistical structures.
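The constrained maximization this abstract refers to can be made concrete with the classic dice example (Jaynes): among all distributions on {1, …, 6} with a prescribed mean, the maximum entropy distribution has the exponential form p_i ∝ exp(λ·i), with λ fixed by the mean constraint. A self-contained sketch, solving for λ by bisection:

```python
import math

# Maximum entropy on {1,...,6} subject to a prescribed mean:
# the solution is p_i proportional to exp(l * i), with the Lagrange
# multiplier l chosen so the constraint holds.
target_mean = 4.5
faces = range(1, 7)

def mean_for(l):
    w = [math.exp(l * i) for i in faces]
    z = sum(w)
    return sum(i * wi for i, wi in zip(faces, w)) / z

# mean_for is increasing in l, so bisection converges.
lo, hi = -10.0, 10.0
for _ in range(100):
    mid = 0.5 * (lo + hi)
    if mean_for(mid) < target_mean:
        lo = mid
    else:
        hi = mid

l = 0.5 * (lo + hi)
w = [math.exp(l * i) for i in faces]
z = sum(w)
p = [wi / z for wi in w]
print([round(pi, 4) for pi in p])
```

The resulting probabilities increase monotonically toward the high faces, as the mean 4.5 > 3.5 demands; the concentration theorem cited in the abstract says that distributions consistent with the constraint cluster tightly around this one.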


Author(s):  
KAI YAO ◽  
JINWU GAO ◽  
WEI DAI

Entropy is a measure of the uncertainty associated with a variable whose value cannot be exactly predicted. In uncertainty theory, it has been quantified so far by logarithmic entropy. However, logarithmic entropy sometimes fails to measure the uncertainty. This paper will propose another type of entropy named sine entropy as a supplement, and explore its properties. After that, the maximum entropy principle will be introduced, and the arc-cosine distributed variables will be proved to have the maximum sine entropy with given expected value and variance.
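In uncertainty theory, logarithmic entropy is an integral of a concave function of the uncertainty distribution Φ; my understanding of the construction named here is that sine entropy replaces that integrand with sin(πΦ(x)). Taking that form as an assumption, a numerical sketch for a linear uncertain variable on [0, 1] (where the integral evaluates to 2/π in closed form):

```python
import math

# Sine entropy of an uncertain variable, assumed here to be
#     S[xi] = integral of sin(pi * Phi(x)) dx over the support,
# where Phi is the uncertainty distribution.
def sine_entropy(phi, a, b, n=100_000):
    """Trapezoid-rule integration of sin(pi * phi(x)) over [a, b]."""
    h = (b - a) / n
    total = 0.5 * (math.sin(math.pi * phi(a)) + math.sin(math.pi * phi(b)))
    total += sum(math.sin(math.pi * phi(a + i * h)) for i in range(1, n))
    return total * h

# Linear uncertain variable on [0, 1]: Phi(x) = x, so the sine entropy
# is the integral of sin(pi * x) over [0, 1], i.e. 2/pi.
val = sine_entropy(lambda x: x, 0.0, 1.0)
print(round(val, 4))
```

By contrast, the logarithmic entropy of the same linear variable uses the integrand −Φ ln Φ − (1−Φ) ln(1−Φ); the sine form is bounded and nonzero on the whole interior of the support, which is the kind of supplementary behavior the abstract alludes to.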

