Assessing categorization performance at the individual level: A comparison of Monte Carlo Simulation and Probability Estimate Model procedures

2011
Vol 34 (2)
pp. 321-326
Author(s):  
Martha E. Arterberry ◽  
Marc H. Bornstein ◽  
O. Maurice Haynes
Author(s):  
Martina Kuncova

The situation on the Czech electricity retail market is unclear for households because of the number of suppliers and their products. Although prices for household electricity consumption are available on the web, and each household can change its supplier with almost no extra effort or cost, households are often still unfamiliar with the individual price items of the products. This article analyses the Czech electricity market for the distribution rate D25d for the years 2017-2018, with annual household consumption simulated by a Monte Carlo simulation model. The aim of this paper is to select the supplier and product that minimize a household's total electricity costs for the selected distribution rate, and to compare the result with those of previous years.
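The cost-minimisation step described above can be sketched as a simple Monte Carlo comparison. The supplier names, tariff figures, and consumption distribution below are illustrative assumptions, not data from the article:

```python
import random

# Hypothetical two-part tariffs (CZK): fixed monthly fee plus price per kWh.
# Neither the supplier names nor the prices come from the article.
tariffs = {
    "SupplierA": {"fixed_monthly": 60.0, "per_kwh": 4.80},
    "SupplierB": {"fixed_monthly": 90.0, "per_kwh": 4.55},
    "SupplierC": {"fixed_monthly": 45.0, "per_kwh": 4.95},
}

def simulate_annual_costs(tariffs, n_trials=10_000, seed=1):
    """Monte Carlo estimate of each supplier's mean annual cost for one
    household, with consumption drawn from an assumed distribution."""
    rng = random.Random(seed)
    totals = {name: 0.0 for name in tariffs}
    for _ in range(n_trials):
        # Assumed consumption model: mean 3.5 MWh/year, sd 0.6 MWh.
        kwh = max(500.0, rng.gauss(3500.0, 600.0))
        for name, t in tariffs.items():
            totals[name] += 12 * t["fixed_monthly"] + kwh * t["per_kwh"]
    return {name: total / n_trials for name, total in totals.items()}

costs = simulate_annual_costs(tariffs)
cheapest = min(costs, key=costs.get)
```

Because every supplier's cost is evaluated on the same simulated consumption draws, the comparison between products is not distorted by sampling noise.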


2003
Vol 36 (5)
pp. 1262-1265
Author(s):  
A. N. Falcão ◽  
F. M. A. Margaça ◽  
F. G. Carvalho

A study of the contribution of the individual channels of a converging multichannel collimator (CMC) to the operation of the device is carried out by means of a Monte Carlo computer simulation. The simulation shows that the coupling between the divergence of the incident neutron beam and the inclination of the individual CMC channel axis relative to the beam direction modulates the channel performance in terms of the intensity and resolution of the transmitted neutrons. While this does not impair the usefulness of the device in any significant way, the results are helpful to the designer.
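The reported coupling between beam divergence and channel inclination can be illustrated with a deliberately simplified one-angle Monte Carlo sketch; the paper's actual simulation traces neutrons through the full device geometry, and all angles below are assumed values:

```python
import random

def channel_transmission(inclination_rad, divergence_rad, half_acceptance_rad,
                         n_neutrons=50_000, seed=2):
    """Fraction of incident neutrons a single channel transmits: a neutron
    passes if its direction, drawn from the beam's Gaussian divergence,
    falls within the channel's angular acceptance about the channel axis.
    A one-angle toy model, not the paper's full ray trace."""
    rng = random.Random(seed)
    hits = sum(abs(rng.gauss(0.0, divergence_rad) - inclination_rad)
               < half_acceptance_rad
               for _ in range(n_neutrons))
    return hits / n_neutrons

# Channels inclined further from the mean beam direction transmit less
# intensity: the divergence-inclination coupling modulates performance.
t_axial = channel_transmission(0.0, divergence_rad=0.01,
                               half_acceptance_rad=0.005)
t_inclined = channel_transmission(0.012, divergence_rad=0.01,
                                  half_acceptance_rad=0.005)
```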


Diachronica
2015
Vol 32 (3)
pp. 331-364
Author(s):  
Marwan Kilani

This paper presents an extension of Baxter & Manaster-Ramer’s (2000) approach to the problem of false cognates in the determination of relationships between languages. Their approach uses a Monte Carlo simulation to estimate how many lexical similarities between two lexical lists from different languages can be expected to be due to chance, and consequently how many are too many to all be false cognates. Although very efficient, their model has the shortcoming of being applicable only to simple lexical lists, such as the Swadesh list, with one-to-one semantic correspondences between the individual terms. Here I present a new model that can be applied to any kind of word list and can include comparisons between multiple terms sharing the same semantic field. After a theoretical description, a controlled test and a contra-test, I apply the method to a real test case, investigating the probability of a relationship between Pre-Greek, the non-Indo-European substrate of Classical Greek, and Proto-Basque, Proto-Uralic and ‘Proto-Altaic’.
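The underlying Monte Carlo logic, shuffling one word list to break the true semantic alignment and counting chance look-alikes, can be sketched as follows. The similarity test and word lists are toy assumptions, not the model's actual comparison machinery:

```python
import random

def chance_matches(list_a, list_b, similar, n_trials=10_000, seed=0):
    """Monte Carlo distribution of look-alike pairs expected by chance:
    repeatedly shuffle one list, breaking the true semantic alignment,
    and count the pairs the `similar` test flags."""
    rng = random.Random(seed)
    counts = []
    shuffled = list(list_b)
    for _ in range(n_trials):
        rng.shuffle(shuffled)
        counts.append(sum(similar(a, b) for a, b in zip(list_a, shuffled)))
    return counts

# Deliberately crude similarity test (same first letter) and toy word lists;
# a real study would compare sound classes over full comparanda.
similar = lambda a, b: a[0] == b[0]
list_a = ["water", "fire", "stone", "hand", "eye", "sun", "moon", "tree"]
list_b = ["wasser", "feuer", "stein", "hand", "auge", "sonne", "mond", "baum"]

counts = chance_matches(list_a, list_b, similar)
observed = sum(similar(a, b) for a, b in zip(list_a, list_b))
# Fraction of shuffled trials with at least as many matches as observed.
p_value = sum(c >= observed for c in counts) / len(counts)
```

A small p_value indicates that the aligned lists share far more look-alikes than chance pairings of the same vocabulary would produce.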


2020
Vol 640
pp. A83
Author(s):  
J. Klüter ◽  
U. Bastian ◽  
J. Wambsganss

Context. Astrometric gravitational microlensing can be used to determine the mass of a single star (the lens) with an accuracy of a few percent. To do so, precise measurements of the angular separation between lens and background star, with an accuracy below 1 milliarcsecond (mas) at different epochs, are needed. Therefore only the most accurate instruments can be used. However, since the timescale is on the order of months to years, the astrometric deflection might be detected by Gaia, even though each star is only observed at a low cadence. Aims. We want to show how accurately Gaia can determine the mass of the lensing star. Methods. Using conservative assumptions based on the results of the second Gaia data release (Gaia DR2), we simulated the individual Gaia measurements for 501 predicted astrometric microlensing events during the Gaia era (2014.5–2026.5). For this purpose we used the astrometric parameters of Gaia DR2, as well as an approximate mass based on the absolute G magnitude. By fitting the motion of the lens and source simultaneously, we then reconstructed the 11 parameters of the lensing event. For lenses passing by multiple background sources, we also fitted the motion of all background sources and the lens simultaneously. Using a Monte Carlo simulation, we determined the achievable precision of the mass determination. Results. We find that Gaia can detect the astrometric deflection for 114 events. Furthermore, for 13 events Gaia can determine the mass of the lens with a precision better than 15%, and for 13 + 21 = 34 events with a precision of 30% or better.
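The Monte Carlo step, perturbing simulated measurements and re-fitting to estimate the achievable mass precision, can be sketched with a drastically simplified one-parameter model; the paper fits 11 parameters, and the constant, separations, and noise level below are illustrative assumptions:

```python
import random
import statistics

# Drastically simplified astrometric model (an assumption, not the paper's
# 11-parameter fit): the lens shifts the background star's position by
# shift_i = K * M / sep_i, with K an illustrative constant absorbing the
# Einstein-radius scaling and the relative parallax.
K = 8.144  # illustrative, mas^2 per solar mass

def mass_precision(true_mass, separations_mas, noise_mas,
                   n_trials=2000, seed=7):
    """Monte Carlo estimate of the relative mass precision: perturb each
    simulated deflection with Gaussian measurement noise, re-fit the mass
    by one-parameter least squares, and return the scatter of the fits."""
    rng = random.Random(seed)
    design = [K / s for s in separations_mas]   # shift_i = design_i * M
    fitted = []
    for _ in range(n_trials):
        obs = [d * true_mass + rng.gauss(0.0, noise_mas) for d in design]
        m_hat = (sum(d * y for d, y in zip(design, obs))
                 / sum(d * d for d in design))
        fitted.append(m_hat)
    return statistics.stdev(fitted) / true_mass

# Assumed 0.5 solar-mass lens, five epochs, 0.05 mas per-epoch noise.
rel_err = mass_precision(0.5, separations_mas=[20, 30, 40, 60, 100],
                         noise_mas=0.05)
```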


2007
Vol 12 (2)
pp. 276-284
Author(s):  
Stephen R. Johnson ◽  
Ramesh Padmanabha ◽  
Wayne Vaccaro ◽  
Mark Hermsmeier ◽  
Angela Cacace ◽  
...  

Among the several goals of a high-throughput screening campaign is the identification of as many active chemotypes as possible for further evaluation. Often, however, the number of concentration-response curves (e.g., IC50s or Kis) that can be collected following a primary screen is limited by practical constraints such as protein supply, screening workload, and so forth. One possible approach to this dilemma is to cluster the hits from the primary screen and sample only a few compounds from each cluster. This raises the question of how many compounds must be selected from a cluster to ensure that an active compound is identified, if one exists at all. This article seeks to address this question using a Monte Carlo simulation in which the success of sampling is directly linked to screening data variability. Furthermore, the authors demonstrate that the use of replicated compounds in the screening collection can easily assess this variability and provide a priori guidance to the screener and chemist as to the extent of sampling required to maximize chemotype identification during the triage process. The individual steps of the Monte Carlo simulation provide insight into the correspondence between percentage inhibition and eventual IC50 curves.
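The core simulation, linking the chance of carrying a true active through triage to primary-screen variability, can be sketched as follows; the inhibition values, noise level, and cluster size are all illustrative assumptions:

```python
import random

def p_active_selected(n_inactive, n_sampled, active_inh=80.0,
                      inactive_inh=20.0, sigma=15.0,
                      n_trials=20_000, seed=3):
    """Monte Carlo chance that a cluster's one truly active compound is
    carried forward when the top `n_sampled` compounds by noisy primary
    % inhibition are selected. All numbers are illustrative assumptions."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(n_trials):
        active_score = rng.gauss(active_inh, sigma)
        inactive_scores = [rng.gauss(inactive_inh, sigma)
                           for _ in range(n_inactive)]
        # The active survives if fewer than n_sampled inactives outscore it.
        hits += sum(s > active_score for s in inactive_scores) < n_sampled
    return hits / n_trials

# Sampling more compounds per cluster raises the chance of keeping the active.
p1 = p_active_selected(n_inactive=19, n_sampled=1)
p3 = p_active_selected(n_inactive=19, n_sampled=3)
```

Raising `sigma` (i.e., noisier primary-screen data) lowers these probabilities, which is precisely the dependence the article quantifies with replicated compounds.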


2013
Vol 19 (1)
pp. 168-218
Author(s):  
Pietro Parodi

This paper argues that all reserving methods based on claims triangulations (the “triangle trick”), no matter how sophisticated the subsequent processing of the information contained in the triangle is, are inherently inadequate to accurately model the distribution of reserves, although they may be good enough to produce a point estimate of those reserves. The reason is that the triangle representation involves the compression (and ultimately the loss) of crucial information about the individual losses, which comes back to haunt us when we try to extract detailed information on the distribution of incurred but not reported (IBNR) and reported but not settled (RBNS) losses.

This paper then argues that, in order to avoid such loss of information, it is necessary to adopt an approach similar to that used in pricing, where separate frequency and severity models are developed and then combined by Monte Carlo simulation or other numerical techniques to produce the aggregate loss distribution.

A specific implementation of this approach is described, whose core feature is a method to produce a frequency model for the incurred but not reported claim count based on the empirical distribution of delays (delay = the time between loss date and reporting date), after adjustments to make up for the bias towards smaller delays. The method also produces a kernel severity model for the individual losses, from which the severity distribution of each year of occurrence can be derived. By combining the frequency and severity models in the usual way (e.g. through Monte Carlo simulation), an aggregate model for IBNR and UPR losses can be produced.

As for RBNS losses, we suggest using one of the many methods for analysing the distribution of IBNER (incurred but not enough reserved) factors to produce a possible distribution of outcomes.

A case study based on real-world liability claims is used to illustrate how the method works in practice. Also, as a first step towards validating the method for calculating IBNR and comparing it with existing methods, a series of experiments with artificial data sets was undertaken, which show a drastic reduction in the prediction error of both the IBNR claim count and the IBNR total amount with respect to the standard chain ladder method. Perhaps most promising, the experiments show that the distribution of IBNR reserves is much closer (in terms of the Kolmogorov-Smirnov distance) to the “true” one than that based on Mack's method as it is normally applied. The method therefore promises a more accurate assessment of the uncertainty around reserves.
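The frequency/severity combination step (a Poisson claim count compounded with simulated severities) can be sketched as follows; the Poisson rate, unreported fraction, and lognormal parameters are illustrative assumptions, not the paper's fitted values:

```python
import math
import random

def aggregate_ibnr(lam_annual, p_unreported, mu, sigma,
                   n_trials=20_000, seed=11):
    """Monte Carlo aggregate IBNR distribution from separate frequency and
    severity models: the IBNR claim count is Poisson with rate
    lam_annual * p_unreported (in the paper, the unreported fraction would
    come from the empirical delay distribution); severities are lognormal."""
    rng = random.Random(seed)
    threshold = math.exp(-(lam_annual * p_unreported))
    totals = []
    for _ in range(n_trials):
        # Poisson draw by inversion (Knuth's multiplication method).
        n, p = 0, 1.0
        while True:
            p *= rng.random()
            if p <= threshold:
                break
            n += 1
        totals.append(sum(rng.lognormvariate(mu, sigma) for _ in range(n)))
    totals.sort()
    return totals

totals = aggregate_ibnr(lam_annual=50, p_unreported=0.2, mu=8.0, sigma=1.0)
mean_ibnr = sum(totals) / len(totals)
p95_ibnr = totals[int(0.95 * len(totals))]
```

Unlike a triangle-based point estimate, the sorted `totals` give the whole simulated IBNR distribution, from which any quantile can be read off directly.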


2017
Vol 15 (1)
pp. 1-13
Author(s):  
Francois Joubert ◽  
Leon Pretorius

This paper combines various concepts related to (i) project risk management, (ii) Monte Carlo simulation, (iii) project contingency cost estimation, and (iv) the relationship between project and programme risks, to illustrate that the contingency requirement is lower when all the risks in the programme are simulated together than when the individual project contingencies are summed. A case study organisation provided 86 quantified risk registers related to port and rail capital projects. For each of these risk registers, the project contingency was estimated using a prescribed risk register template and Monte Carlo simulation software. The same 86 quantified risk registers were then used to simulate the programme contingency. The simulation results indicated that the programme contingency requirement was approximately eight percentage points lower than the sum of the individual project contingencies. The first implication of this result is that, should borrowed capital be used to fund the projects, the interest bill would be higher when contingency is calculated on a project-by-project basis. The second is that regularly appearing low-probability, high-impact risks should be identified and quantified not in the projects themselves, but in a centrally managed programme cost contingency fund.
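The diversification effect reported above, a programme-level contingency below the sum of project-level contingencies, can be reproduced in miniature with assumed risk registers; every probability and impact below is invented for illustration:

```python
import random

def simulate_cost(risks, rng):
    """One trial's cost outcome: each risk fires with its probability."""
    return sum(impact for prob, impact in risks if rng.random() < prob)

def contingency(costs, pct=0.8):
    """The pct-quantile of simulated costs (e.g. a P80 contingency)."""
    s = sorted(costs)
    return s[int(pct * len(s))]

rng = random.Random(5)
# Illustrative risk registers: (probability, cost impact) pairs per project;
# the study's real registers held many more quantified risks.
projects = [[(rng.uniform(0.05, 0.5), rng.uniform(10, 100))
             for _ in range(12)]
            for _ in range(3)]

n_trials = 20_000
project_costs = [[] for _ in projects]
programme_costs = []
for _ in range(n_trials):
    trial = [simulate_cost(risks, rng) for risks in projects]
    for i, c in enumerate(trial):
        project_costs[i].append(c)
    programme_costs.append(sum(trial))

sum_of_project_p80s = sum(contingency(c) for c in project_costs)
programme_p80 = contingency(programme_costs)
```

Because project outcomes do not all go badly in the same trial, the programme's P80 sits below the sum of the individual P80s, which is the pooling effect the paper quantifies at roughly eight percentage points for its 86 registers.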

