initial probability distribution
Recently Published Documents

TOTAL DOCUMENTS: 8 (five years: 1)
H-INDEX: 2 (five years: 0)

Synthese ◽  
2021 ◽  
Author(s):  
Athamos Stradis

Abstract
Why do we have records of the past and not the future? Entropic explanations for this ‘record asymmetry’ have been popular ever since Boltzmann. Foremost amongst these is Albert and Loewer’s account, which explains the record asymmetry using a low-entropy initial macrostate (the ‘Past Hypothesis’) plus an initial probability distribution. However, the details of how this initial state underpins the record asymmetry are not fully specified. In this paper I attempt to plug this explanatory gap in two steps. First, I suggest the record asymmetry is more immediately explained by the ‘fork asymmetry’, which their picture omits. Second, by relating the fork asymmetry to an initial state that’s metaphysically similar to theirs, I clarify how this ultimately underpins the record asymmetry.



2018 ◽  
Vol 7 (3.20) ◽  
pp. 136
Author(s):  
Siti Sarah Januri ◽  
Zulkifli Mohd Nopiah ◽  
Ahmad Kamal Ariffin Mohd Ihsan ◽  
Nurulkamal Masseran ◽  
Shahrum Abdullah

Stochastic behaviour in the fatigue crack growth problem usually arises from uncertain factors such as material properties, environmental conditions and the geometry of the component. These random factors provide an appropriate framework for modelling and predicting the lifetime of the structure. In this paper, an approach to calculating the initial probability distribution is introduced, based on the statistical distribution of the initial crack length. The fatigue crack growth is modelled, and the probability distribution of the damage state predicted, by a Markov chain model associated with the classical deterministic Paris crack-growth law, which is used in calculating the transition probability matrix so as to preserve the physical meaning of the fatigue crack growth problem. The initial distribution was determined to be a lognormal distribution, with a 66% probability that the initial crack length falls in the first state. Data from experimental work under constant-amplitude loading were analysed using the Markov chain model. The results show that the transition probability matrix affects the resulting probability distribution, and that the main advantage of the Markov chain is that, once all the parameters are determined, the probability distribution can be generated at any time x.
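The Markov-chain machinery the abstract describes can be sketched numerically. Everything below is illustrative rather than taken from the paper: crack length is discretised into five assumed damage states, the chain either stays put or advances one state per cycle block, and the initial distribution is the mass a lognormal initial-crack-length model assigns to each state's length interval.

```python
import numpy as np
from scipy import stats

# Illustrative sketch, not the authors' model: crack length discretised
# into 5 damage states; the chain either stays or advances one state.
n_states = 5
p_stay = 0.8                      # assumed probability of remaining in a state

# Upper-bidiagonal transition matrix with an absorbing final (failure) state.
P = np.zeros((n_states, n_states))
for i in range(n_states - 1):
    P[i, i] = p_stay
    P[i, i + 1] = 1.0 - p_stay
P[-1, -1] = 1.0

# Initial probability distribution from a lognormal initial-crack-length
# model: the lognormal mass falling in each state's length interval.
edges = np.array([0.0, 0.1, 0.2, 0.3, 0.4, np.inf])      # mm, illustrative
cdf = stats.lognorm.cdf(edges, s=0.5, scale=0.08)        # assumed parameters
pi0 = np.diff(cdf)
pi0 /= pi0.sum()

def state_distribution(x: int) -> np.ndarray:
    """Damage-state distribution after x transitions: pi0 @ P^x."""
    return pi0 @ np.linalg.matrix_power(P, x)
```

With these assumed lognormal parameters roughly two-thirds of the initial mass sits in the first state, echoing the 66% reported in the abstract; and as the abstract notes, once P and pi0 are fixed, `state_distribution(x)` is available for any time x.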


2017 ◽  
Vol 27 (05) ◽  
pp. 909-951 ◽  
Author(s):  
Mattia Bongini ◽  
Massimo Fornasier ◽  
Markus Hansen ◽  
Mauro Maggioni

In this paper, we are concerned with the learnability of nonlocal interaction kernels for first-order systems modeling certain social interactions, from observations of realizations of their dynamics. This paper is the first of a series on learnability of nonlocal interaction kernels and presents a variational approach to the problem. In particular, we assume here that the kernel to be learned is bounded and locally Lipschitz continuous and that the initial conditions of the systems are drawn independently and identically at random according to a given initial probability distribution. Then the minimization over a rather arbitrary sequence of (finite-dimensional) subspaces of a least-squares functional measuring the discrepancy from observed trajectories produces uniform approximations to the kernel on compact sets. The convergence result is obtained by combining mean-field limits, transport methods, and a Γ-convergence argument. A crucial condition for the learnability is a certain coercivity property of the least-squares functional, defined by the majorization of an L²-norm discrepancy to the kernel with respect to a probability measure, depending on the given initial probability distribution by suitable push forwards and transport maps. We illustrate the convergence result by means of several numerical experiments.
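A toy sketch of this variational idea, under simplifying assumptions not from the paper (an illustrative exponential kernel, piecewise-constant approximation spaces, and velocities read off exactly from the Euler scheme that generated the data): simulate a first-order system with i.i.d. random initial conditions, then recover the kernel by minimising a least-squares discrepancy from the observed trajectories.

```python
import numpy as np

rng = np.random.default_rng(0)

def true_kernel(r):
    # Ground-truth interaction kernel (illustrative, not from the paper)
    return np.exp(-r)

def rhs(x, kernel):
    # First-order model: dx_i/dt = (1/N) * sum_j a(|x_j - x_i|)(x_j - x_i)
    diff = x[None, :] - x[:, None]          # diff[i, j] = x_j - x_i
    return (kernel(np.abs(diff)) * diff).mean(axis=1)

# M realizations with i.i.d. random initial conditions (Euler scheme).
N, M, steps, dt = 8, 40, 50, 0.02
trajs = []
for _ in range(M):
    x = rng.uniform(0, 5, N)   # draw from the initial probability distribution
    traj = [x.copy()]
    for _ in range(steps):
        x = x + dt * rhs(x, true_kernel)
        traj.append(x.copy())
    trajs.append(np.array(traj))

# Least squares over a finite-dimensional subspace: piecewise-constant
# kernels a(r) = sum_k c_k on the k-th interval of a grid over [0, 5].
K = 20
edges = np.linspace(0, 5, K + 1)
A_rows, b_rows = [], []
for traj in trajs:
    for t in range(steps):
        x = traj[t]
        v = (traj[t + 1] - traj[t]) / dt     # observed velocities
        diff = x[None, :] - x[:, None]
        bins = np.clip(np.digitize(np.abs(diff), edges) - 1, 0, K - 1)
        for i in range(N):
            row = np.zeros(K)
            for j in range(N):
                if j != i:
                    row[bins[i, j]] += diff[i, j] / N
            A_rows.append(row)
            b_rows.append(v[i])
c, *_ = np.linalg.lstsq(np.array(A_rows), np.array(b_rows), rcond=None)
# c[k] now approximates the kernel on the k-th interval, at least on the
# well-sampled part of its domain.
```

The coefficients are only trustworthy where trajectory data actually populate the corresponding distance interval, which is the discrete shadow of the coercivity condition the abstract emphasises.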


Author(s):  
MICHAEL J. MARKHAM ◽  
PAUL C. RHODES

The desire to use Causal Networks as Expert Systems even when the causal information is incomplete and/or when non-causal information is available has led researchers to look into the possibility of utilising Maximum Entropy. If this approach is taken, the known information is supplemented by maximising entropy to provide a unique initial probability distribution which would otherwise have been a consequence of the known information and the independence relationships implied by the network. Traditional maximising techniques can be used if the constraints are linear but the independence relationships give rise to non-linear constraints. This paper extends traditional maximising techniques to incorporate those types of non-linear constraints that arise from the independence relationships and presents an algorithm for implementing the extended method. Maximising entropy does not involve the concept of "causal" information. Consequently, the extended method will accept any mutually consistent set of conditional probabilities and expressions of independence. The paper provides a small example of how this property can be used to provide complete causal information, for use in a causal network, when the known information is incomplete and not in a causal form.
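A minimal sketch of entropy maximisation under linear constraints only (the paper's contribution is precisely the harder non-linear constraints arising from independence relationships, which this sketch does not implement): fix the marginals of two binary variables and maximise entropy over the joint distribution. The numbers are illustrative.

```python
import numpy as np
from scipy.optimize import minimize

# Maximum entropy over the four joint states of two binary variables A, B,
# subject to the linear constraints P(A=1) = 0.3 and P(B=1) = 0.6.
# States are ordered (a, b) = 00, 01, 10, 11.

def neg_entropy(p):
    p = np.clip(p, 1e-12, 1.0)           # guard log(0)
    return float(np.sum(p * np.log(p)))  # minimise -H(p)

constraints = [
    {"type": "eq", "fun": lambda p: p.sum() - 1.0},       # normalisation
    {"type": "eq", "fun": lambda p: p[2] + p[3] - 0.3},   # P(A=1)
    {"type": "eq", "fun": lambda p: p[1] + p[3] - 0.6},   # P(B=1)
]
res = minimize(neg_entropy, x0=np.full(4, 0.25), bounds=[(0.0, 1.0)] * 4,
               method="SLSQP", constraints=constraints)
p = res.x
# With only marginal (linear) constraints, maximum entropy renders A and B
# independent, so P(A=1, B=1) comes out as 0.3 * 0.6 = 0.18.
```

This illustrates the paper's starting point: maximising entropy supplies the unique distribution consistent with the known information, and any further independence relationships one wishes to impose enter as additional, non-linear constraints.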


1988 ◽  
Vol 43 (2) ◽  
pp. 93-104 ◽  
Author(s):  
K. J. G. Kruscha ◽  
B. Pompe

An information theoretical description is given of the action of 1D maps on probability measures (e.g. on ergodic invariant measures of chaotic maps). On the basis of a detailed analysis of the elements of information flow the problem of optimum measuring of initial states for state predictions is discussed. Moreover, we give an information theoretical description of the relaxation, under the action of a map, of an initial probability distribution to any, not necessarily steady, final distribution. In this connection we formulate an H-theorem for 1D maps.
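A small numerical illustration, under assumed choices not taken from the paper (the logistic map as the 1D map, a 50-bin coarse-graining, Shannon entropy of the binned ensemble): a sharply localised initial probability distribution relaxes under repeated action of the map, and the coarse-grained entropy rises toward that of the final distribution.

```python
import numpy as np

# Evolve an ensemble of initial conditions under the chaotic logistic map
# f(x) = 4x(1-x) and track the Shannon entropy of the coarse-grained
# (50-bin) distribution as it relaxes. All choices are illustrative.

rng = np.random.default_rng(1)
x = rng.uniform(0.4, 0.6, 200_000)   # sharply localised initial distribution

def shannon_entropy(samples, bins=50):
    hist, _ = np.histogram(samples, bins=bins, range=(0.0, 1.0))
    p = hist / hist.sum()
    p = p[p > 0]
    return -np.sum(p * np.log(p))

entropies = [shannon_entropy(x)]
for _ in range(10):
    x = 4.0 * x * (1.0 - x)          # one application of the map
    entropies.append(shannon_entropy(x))
# The final entropy exceeds the initial one: the localised distribution has
# spread out toward the invariant measure of the map.
```

The entropy need not rise monotonically step by step (the logistic map first folds the interval [0.4, 0.6] onto a narrower one), which is in the spirit of relaxation to a not necessarily steady final distribution.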


1965 ◽  
Vol 5 (2) ◽  
pp. 285-287 ◽  
Author(s):  
R. M. Phatarfod

Consider a positive regular Markov chain X0, X1, X2, … with s (s finite) states E1, E2, …, Es, a transition probability matrix P = (pij), where pij = Pr{Xn+1 = Ej | Xn = Ei}, and an initial probability distribution given by the vector p0. Let {Zr} be a sequence of random variables such that Zr = hij when Xr−1 = Ei and Xr = Ej, and consider the sum SN = Z1 + Z2 + … + ZN. It can easily be shown (cf. Bartlett [1], p. 37) that E(e^{tSN}) = p0′ [P(t)]^N 1 = Σi [λi(t)]^N (p0′ si(t)) (ti′(t) 1), where λ1(t), λ2(t), …, λs(t) are the latent roots of P(t) ≡ (pij e^{t hij}), and si(t) and ti′(t) are the column and row vectors corresponding to λi(t), so constructed as to give ti′(t) si(t) = 1 and si(0) = si, where ti′ and si are the corresponding row and column vectors of the matrix P(0) = P.
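The identity can be checked numerically for a small chain (all values below are illustrative): build P(t) = (pij e^{t hij}), evaluate p0′ [P(t)]^N 1, and compare with a Monte Carlo estimate of E(e^{t SN}).

```python
import numpy as np

P = np.array([[0.7, 0.3],
              [0.4, 0.6]])           # transition matrix (illustrative)
h = np.array([[1.0, -1.0],
              [2.0, 0.5]])           # scores h_ij attached to transitions
p0 = np.array([0.5, 0.5])            # initial probability distribution
t, N = 0.3, 6

# Moment generating function of S_N via the matrix P(t) = (p_ij e^{t h_ij}).
Pt = P * np.exp(t * h)
mgf = p0 @ np.linalg.matrix_power(Pt, N) @ np.ones(2)

# Monte Carlo check: simulate the chain, accumulating S_N = Z_1 + ... + Z_N
# with Z_r = h_ij for the transition E_i -> E_j at step r.
rng = np.random.default_rng(0)
M = 50_000
state = rng.choice(2, size=M, p=p0)
S = np.zeros(M)
for _ in range(N):
    u = rng.random(M)
    nxt = (u >= P[state, 0]).astype(int)   # next state: 0 w.p. P[state, 0]
    S += h[state, nxt]
    state = nxt
mc = np.exp(t * S).mean()
```

The latent roots λi(t) of P(t) give the same value through the spectral decomposition [P(t)]^N = Σi [λi(t)]^N si(t) ti′(t), which is the form quoted in the abstract.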

