probability range
Recently Published Documents

TOTAL DOCUMENTS: 9 (FIVE YEARS: 0)

H-INDEX: 5 (FIVE YEARS: 0)

2019 ◽  
Vol 197 ◽  
pp. 39-51 ◽  
Author(s):  
Chenmu Xing ◽  
Joanna Paul ◽  
Alexandra Zax ◽  
Sara Cordes ◽  
Hilary Barth ◽  
...  


In decision making under risk, adults tend to overestimate small and underestimate large probabilities (Tversky & Kahneman, 1992). This inverse S-shaped distortion pattern is similar to that observed in a wide variety of proportion judgment tasks (see Hollands & Dyre, 2000, for a review). In proportion judgment tasks, distortion patterns tend not to be fixed but rather to depend on the reference points to which the targets are compared. Here, we tested the novel hypothesis that probability distortion in decision making under risk might also be influenced by reference points, in this case references implied by the probability range. Adult participants were assigned to either a full-range (probabilities from 0-100%), upper-range (50-100%), or lower-range (0-50%) condition, where they indicated certainty equivalents for 176 hypothetical monetary gambles (e.g., “a 50% chance of $100, otherwise $0”). Using a modified cumulative prospect theory model, we found only minimal differences in probability distortion as a function of condition, suggesting no condition-related differences in the use of reference points and broadly demonstrating the robustness of the distortion pattern across contexts. However, we also observed deviations from the fitted curve across all conditions that warrant further research.
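The inverse S-shaped distortion described here is commonly captured by the one-parameter Tversky–Kahneman (1992) weighting function. The sketch below is illustrative only: the modified cumulative prospect theory model the authors actually fit is not specified in this abstract, and gamma = 0.61 is simply the median gains estimate reported in the 1992 paper.

```python
import math

def tk_weight(p: float, gamma: float = 0.61) -> float:
    """Tversky & Kahneman (1992) probability weighting function.

    Produces the inverse S-shape described in the abstract: small
    probabilities are overweighted, large ones underweighted.
    gamma = 0.61 is the median estimate for gains from TK (1992).
    """
    num = p ** gamma
    den = (p ** gamma + (1.0 - p) ** gamma) ** (1.0 / gamma)
    return num / den

# Small probabilities are overweighted, large ones underweighted:
print(tk_weight(0.05))  # greater than 0.05
print(tk_weight(0.95))  # less than 0.95
```

With gamma = 1 the function reduces to the identity, i.e. no distortion, which is one way the range conditions above could in principle have differed.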


Energies ◽  
2018 ◽  
Vol 11 (9) ◽  
pp. 2398 ◽  
Author(s):  
Lin Zhu ◽  
Lichun He ◽  
Peipei Shang ◽  
Yingchun Zhang ◽  
Xiaojun Ma

The power industry makes the most direct use of fossil fuels in China and is one of China’s main carbon-emitting industries. A comprehensive and accurate analysis of the drivers of the power industry’s carbon emissions can reveal its potential for emissions reductions toward China’s reduction targets. The main contribution of this paper is the first use of a Generalized Divisia Index Model to decompose the change in carbon emissions in China’s power industry from 2000 to 2015, giving full consideration to the influence of the economy, population, and energy consumption on carbon emissions. In addition, the Monte Carlo method is used for the first time to predict the carbon emissions of the power industry from 2017 to 2030 under three different scenarios. The results show that output scale was the most important factor driving the increase in carbon emissions in China’s power industry from 2000 to 2015, followed by energy consumption scale and population size. Energy intensity has consistently promoted carbon emissions reduction in the power industry, and the energy intensity and carbon intensity effects of energy consumption have great potential to mitigate carbon levels. By setting the main factors affecting future carbon emissions in three scenarios, this paper predicts the carbon emissions of China’s power industry from 2017 to 2030. Under the baseline scenario, the maximum probability range of the potential annual growth rate of carbon emissions by China’s power industry from 2017 to 2030 is 1.9–2.2%. Under the low-carbon and technological-breakthrough scenarios, carbon emissions in China’s power industry decline continuously from 2017 to 2030, with maximum probability ranges of the potential annual rate of decline measured at 1.6–2.1% and 1.9–2.4%, respectively. These results show that China’s power industry still has great potential to reduce carbon emissions.
In the future, the development of carbon emissions reduction in the power industry should focus on innovation in energy-saving and emissions-reduction technology, on the premise of further optimizing the energy structure and adhering to a low-carbon path.
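The scenario-prediction step described here can be sketched as a simple Monte Carlo exercise: sample the driving factors from assumed ranges and look at where most of the simulated annual growth rates fall. All ranges below are invented for illustration and are not the paper's GDIM calibration.

```python
import random

def project_growth(n_sims=100_000, seed=1):
    """Monte Carlo sketch of the scenario-projection step: sample the
    driving factors from assumed ranges, compute the implied annual
    growth rate of emissions, and report where most simulations fall.
    All ranges below are illustrative, not the paper's calibration.
    """
    rng = random.Random(seed)
    rates = []
    for _ in range(n_sims):
        output_growth = rng.uniform(0.05, 0.07)          # output scale (+)
        intensity_change = rng.uniform(-0.045, -0.035)   # energy intensity (-)
        population_growth = rng.uniform(0.003, 0.006)    # population size (+)
        rates.append(output_growth + intensity_change + population_growth)
    rates.sort()
    # central 50% of simulated annual growth rates
    return rates[n_sims // 4], rates[3 * n_sims // 4]

lo, hi = project_growth()
print(f"most likely annual growth rate: {lo:.1%} to {hi:.1%}")
```

Here the central 50% of simulations stands in for the paper's "maximum probability range"; the actual study derives its intervals from the fitted factor distributions under each scenario.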


2011 ◽  
Vol. 13 no. 2 (Graph and Algorithms) ◽  
Author(s):  
Damien Pitman

We describe a limiting distribution for the number of connected components in the subgraph of the discrete cube induced by the satisfying assignments to a random 2-SAT formula. We show that, for the probability range where formulas are likely to be satisfiable, the random number of components converges weakly (in the number of variables) to a distribution determined by a Poisson random variable. The number of satisfying assignments or solutions is known to grow exponentially in the number of variables. Thus, our result implies that exponentially many solutions are organized into a stochastically bounded number of components. We also describe an application to biological evolution; in particular, to a type of fitness landscape where satisfying assignments represent viable genotypes and connectivity of genotypes is limited by single site mutations. The biological result is that, with probability approaching 1, each viable genotype is connected by single site mutations to an exponential number of other viable genotypes while the number of viable clusters is finite.
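For intuition about the object being studied, a brute-force sketch (feasible only for very small instances) builds the solution subgraph of the cube and counts its components. Nothing here reproduces the paper's asymptotic analysis; it only illustrates the construction.

```python
import random
from itertools import product

def random_2sat_components(n_vars=8, n_clauses=6, seed=3):
    """Satisfying assignments of a random 2-SAT formula, viewed as
    vertices of the discrete cube, with edges between assignments
    differing in one variable (single-site mutations). Returns the
    number of solutions and the number of connected components.
    Brute force, so only feasible for small n_vars.
    """
    rng = random.Random(seed)
    clauses = []
    for _ in range(n_clauses):
        i, j = rng.sample(range(n_vars), 2)
        clauses.append(((i, rng.choice([True, False])),
                        (j, rng.choice([True, False]))))

    def satisfies(a):
        return all(a[i] == s or a[j] == t for (i, s), (j, t) in clauses)

    sols = {a for a in product([False, True], repeat=n_vars) if satisfies(a)}

    # flood fill over single-bit flips to count components
    components = 0
    unseen = set(sols)
    while unseen:
        components += 1
        stack = [unseen.pop()]
        while stack:
            a = stack.pop()
            for k in range(n_vars):
                b = a[:k] + (not a[k],) + a[k + 1:]
                if b in unseen:
                    unseen.remove(b)
                    stack.append(b)
    return len(sols), components
```

With no clauses at all the whole cube satisfies the formula and forms a single component, which is a handy sanity check on the flood fill.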


2010 ◽  
Vol 77 (1) ◽  
pp. 312-319 ◽  
Author(s):  
Micha Peleg ◽  
Mark D. Normand ◽  
Joseph Horowitz ◽  
Maria G. Corradini

ABSTRACT The expanded Fermi solution was originally developed for estimating the number of food-poisoning victims when information concerning the circumstances of exposure is scarce. The method has been modified for estimating the initial number of pathogenic or probiotic cells or spores so that enough of them will survive the food preparation and digestive tract's obstacles to reach or colonize the gut in sufficient numbers to have an effect. The method is based on identifying the relevant obstacles and assigning each a survival probability range. The assumed number of needed survivors is also specified as a range. The initial number is then estimated to be the ratio of the number of survivors to the product of the survival probabilities. Assuming that the values of the number of survivors and the survival probabilities are uniformly distributed over their respective ranges, the sought initial number is construed as a random variable with a probability distribution whose parameters are explicitly determined by the individual factors' ranges. The distribution of the initial number is often approximately lognormal, and its mode is taken to be the best estimate of the initial number. The distribution also provides a credible interval for this estimated initial number. The best estimate and credible interval are shown to be robust against small perturbations of the ranges and therefore can help assessors achieve consensus where hard knowledge is scant. The calculation procedure has been automated and made freely downloadable as a Wolfram Demonstration.
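The calculation described above, the initial number as the ratio of needed survivors to the product of survival probabilities with every factor uniform over its range, is easy to approximate by Monte Carlo. This is a sketch, not the authors' Wolfram Demonstration, and the obstacle ranges in the usage example are invented.

```python
import math
import random

def fermi_initial_number(survivor_range, prob_ranges, n_sims=200_000, seed=7):
    """Monte Carlo version of the expanded Fermi solution: draw the
    needed number of survivors and each obstacle's survival probability
    uniformly from their ranges, form N0 = survivors / product(probs),
    and return the mode of the (roughly lognormal) distribution of N0
    plus a central 95% credible interval.
    """
    rng = random.Random(seed)
    samples = []
    for _ in range(n_sims):
        survivors = rng.uniform(*survivor_range)
        p = 1.0
        for lo, hi in prob_ranges:
            p *= rng.uniform(lo, hi)
        samples.append(math.log10(survivors / p))
    samples.sort()
    # crude mode estimate from a histogram of log10(N0)
    n_bins = 100
    lo, hi = samples[0], samples[-1]
    width = (hi - lo) / n_bins or 1.0
    counts = [0] * n_bins
    for x in samples:
        counts[min(int((x - lo) / width), n_bins - 1)] += 1
    mode_log = lo + (counts.index(max(counts)) + 0.5) * width
    ci = (10 ** samples[int(0.025 * n_sims)], 10 ** samples[int(0.975 * n_sims)])
    return 10 ** mode_log, ci

# Hypothetical inputs: 3 obstacles, each with an assumed survival-probability
# range, and roughly 100-1000 survivors needed to have an effect.
best, (ci_lo, ci_hi) = fermi_initial_number(
    (100, 1000), [(0.01, 0.1), (0.1, 0.5), (0.2, 0.8)])
```

Working in log10 space is what makes the approximately lognormal shape (and hence the mode as best estimate) easy to see.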


2010 ◽  
Vol 31 (7) ◽  
pp. 748-754 ◽  
Author(s):  
Christopher Sikora ◽  
A. Uma Chandran ◽  
A. Mark Joffe ◽  
David Johnson ◽  
Marcia Johnson

Background. In 2008, the Medical Officer of Health at Alberta Health Services (Edmonton, Canada) was notified that, in some practice settings, a syringe was used to administer medication through the side port of an intravenous circuit and then the syringe, with residual drug, was used to administer medication to other patients in the same manner. This practice has been implicated in several outbreaks of bloodborne infection in hospital and clinic settings.

Methods. A risk assessment model was developed to predict the risk of a patient contracting a bloodborne viral infection from this practice. The risk of transmission was defined as the product of 5 factors: (1) the population prevalence of a specific bloodborne pathogen, (2) the probability of finding a viral bloodborne pathogen in an intravenous circuit, (3) the rate of syringe reuse, (4) the probability of causing disease given a bloodborne pathogen exposure, and (5) the susceptibility of the exposed person.

Results. The risk was modeled first with consistent use of the proximal port of the intravenous circuit. The risk of transmission of hepatitis B virus was approximately 12–53 transmission events per 1,000,000 exposure events for a range of practice probabilities (i.e., frequency of the risk practice) from 20% to 80%, respectively. The risk of transmission of hepatitis C virus was approximately 1.0–4.3 transmission events per 1,000,000 exposure events for the same practice probability range, and the risk of transmission of human immunodeficiency virus was approximately 0.03–0.15 transmission events per 1,000,000 exposure events. Use of the distal port was associated with a 10-fold decrease in risk.

Conclusions. Practitioners must use safe, aseptic injection techniques. The model presented here can be used to estimate the risk of disease transmission in situations where reuse has occurred and can serve as a framework for informing public health action.
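The five-factor product defined in the Methods is straightforward to express directly. The factor values below are hypothetical placeholders, not the paper's inputs; only the 20%-80% sweep of the reuse rate mirrors the structure of the Results.

```python
def transmission_risk(prevalence, p_pathogen_in_circuit, reuse_rate,
                      p_disease_given_exposure, susceptibility):
    """Per-exposure transmission risk as the product of the five factors
    defined in the Methods section."""
    return (prevalence * p_pathogen_in_circuit * reuse_rate *
            p_disease_given_exposure * susceptibility)

# Sweep the practice probability (factor 3) from 20% to 80%, as in the
# Results. The other four values are illustrative stand-ins.
for reuse in (0.2, 0.8):
    risk = transmission_risk(prevalence=0.005,
                             p_pathogen_in_circuit=0.05,
                             reuse_rate=reuse,
                             p_disease_given_exposure=0.3,
                             susceptibility=0.9)
    print(f"reuse={reuse:.0%}: {risk * 1_000_000:.1f} per 1,000,000 exposures")
```

Because the model is a simple product, the risk scales linearly in any one factor, which is why the reported ranges track the 20%-80% practice-probability sweep directly.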


Radiocarbon ◽  
2008 ◽  
Vol 50 (2) ◽  
pp. 159-180 ◽  
Author(s):  
Amihai Mazar ◽  
Christopher Bronk Ramsey

Boaretto et al. (2005) published 68 radiocarbon dates relating to 30 samples from 10 Iron Age sites in Israel as part of their Early Iron Age Dating Project. Though the main goal of their paper was an interlaboratory comparison, they also presented results of Bayesian models, calculating the transition from Iron Age I to Iron Age II in Israel to be about 900 BCE instead of the conventional date of about 1000 BCE. Since this date has great importance for all of Eastern Mediterranean archaeology, in this paper we examine the results in light of the dates published in the above-mentioned article. Our paper was revised in light of new data and interpretations published by Sharon et al. (2007). Following a survey of the contexts and specific results at each site, we present several Bayesian models. Model C2 suggests a date range of 961–942 BCE (68% probability) for the transition from Iron Age I to Iron Age II, while Model C3 indicates a somewhat later date of 948–919 BCE (compare the date of 992–961 BCE calculated at Tel Rehov for the same transition). In our Model D, we calculated this transition at Megiddo as taking place between 967 and 943 BCE. Finally, we calculated the ranges of dates of the major destruction levels marking the end of the Iron Age I, with the following results: Megiddo VIA: 1010–943 BCE; Yoqne'am XVII: 1045–997 BCE; Tell Qasile X: 1039–979 BCE; Tel Hadar: 1043–979 BCE (all in the 68.2% probability range). Figure 4 indicates that the transition between Iron I and II probably occurred between these above-mentioned destruction events and the dates obtained in our Models C2 and C3, namely during the first half of the 10th century BCE. This study emphasizes the sensitivity of Bayesian models to outliers and to the removal or addition of dates from the models. This sensitivity should be taken into account when using Bayesian models to interpret radiometric dates in relation to subtle chronological questions in historical periods.


CHANCE ◽  
2007 ◽  
Vol 20 (1) ◽  
pp. 11-16 ◽  
Author(s):  
Ian Ayres ◽  
Antonia R. Ayres-Brown ◽  
Henry J. Ayres-Brown

1989 ◽  
Vol 9 (4) ◽  
pp. 451-465 ◽  
Author(s):  
Ernest R. Alexander

ABSTRACT The Pressman–Wildavsky model of implementation finds a paradox in the success of any federal program, given the low joint probability of approval that the model implies. Bowen relaxed the model's independence assumption to improve implementation prospects. Here the model is reexamined: (a) its sensitivity is tested; (b) the empirical basis for the probability range is reviewed; and (c) its fit with implementation processes in general is checked. The conclusions are that (1) the model's ‘proof’ depends on its assumptions and computations, (2) there is no empirical basis for the probability estimates, and (3) the model fits only one special case of implementation processes. Better models can be developed, and successful implementation may require organizational interdependencies.
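The paradox referenced here rests on multiplying many independent clearance probabilities, so even high per-point approval rates compound into a low joint probability. A minimal sketch of that arithmetic follows; 70 clearance points is the figure usually cited for Pressman and Wildavsky's Oakland case study, and should be treated as an assumption here.

```python
def joint_approval(p_each: float, n_points: int) -> float:
    """Probability that a program clears every one of n_points
    independent decision points, each approved with probability
    p_each -- the multiplicative assumption whose sensitivity the
    article tests (and which Bowen's relaxation targets)."""
    return p_each ** n_points

# Even near-certain individual approvals compound into a low overall
# success probability across 70 clearance points:
for p in (0.80, 0.95, 0.99):
    print(f"p={p}: joint = {joint_approval(p, 70):.4f}")
```

The sensitivity noted in conclusion (1) is visible directly: nudging the per-point probability from 0.95 to 0.99 moves the joint probability by more than an order of magnitude.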

