specific assumption
Recently Published Documents


TOTAL DOCUMENTS: 14 (FIVE YEARS: 2)

H-INDEX: 4 (FIVE YEARS: 0)

Author(s):  
Floriane Plard ◽  
Daniel Turek ◽  
Michael Schaub

Abstract While ecologists know that models require assumptions, the consequences of violating them become vague as model complexity increases. Integrated population models (IPMs) combine several datasets to inform a population model and to estimate survival and reproduction parameters jointly, with higher precision than is possible using independent models. However, accuracy depends on an adequate fit of the model to the datasets. We first investigated the bias of parameters obtained from integrated population models when specific assumptions are violated. For instance, a model may assume that all females reproduce although there are non-breeding females in the population. Our second goal was to identify which diagnostic tests are sensitive enough to detect violations of the assumptions of IPMs. We simulated data mimicking a short-lived and a long-lived species under five scenarios, in each of which a specific assumption is violated. For each simulated scenario, we fitted an IPM that violates the assumption (the simple IPM) and an IPM that does not. We estimated the bias and uncertainty of the parameters and performed seven diagnostic tests to assess the fit of the models to the data. Our results show that the simple IPM was quite robust to the violation of many assumptions, which resulted in only small bias in the parameter estimates. Yet the applied diagnostic tests were not sensitive enough to detect such small bias. The violation of some assumptions, such as the absence of immigrants, resulted in larger bias, to which the diagnostic tests were more sensitive. The parameters informed by the least data were the most biased in all scenarios. We provide guidelines for identifying misspecified models and diagnosing which assumption is violated. Simple models should often be sufficient to describe simple population dynamics; when data are abundant, complex models accounting for specific processes will be able to shed light on specific biological questions.
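To make the first scenario concrete, here is a minimal simulation sketch (our own illustration, not the authors' code; all numbers are invented) showing how assuming that all females reproduce biases a fecundity estimate when some females never breed:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical toy example: a fraction of females never breeds, but the
# analyst estimates fecundity under the assumption that every female breeds.
n_females = 1000
p_breeder = 0.7          # proportion of breeders, unknown to the analyst
true_fecundity = 2.0     # mean offspring per breeding female

breeds = rng.random(n_females) < p_breeder
offspring = np.where(breeds, rng.poisson(true_fecundity, n_females), 0)

naive = offspring.sum() / n_females   # treats every female as a breeder
print(f"naive estimate: {naive:.2f}, truth: {true_fecundity}")
print(f"expected bias: {true_fecundity * (p_breeder - 1):+.2f}")
```

The naive estimate converges to p_breeder times the true fecundity, i.e., it is biased low by exactly the share of non-breeders.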


2021 ◽  
Vol 16 (1) ◽  
pp. 14-19
Author(s):  
Andrea Basso ◽  
Fabien Pazuki

Abstract The main attack against static-key supersingular isogeny Diffie–Hellman (SIDH) is the Galbraith–Petit–Shani–Ti (GPST) attack, which also prevents the application of SIDH to other constructions such as non-interactive key exchange. In this paper, we identify and study a specific assumption on which the GPST attack relies and which does not necessarily hold in all circumstances. We show that in some circumstances the attack fails to recover part of the secret key. We also characterize the conditions necessary for the attack to fail and show that they rarely occur in real cases. Finally, we draw a link to collisions in the Charles–Goren–Lauter (CGL) hash function.
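For orientation, the sketch below abstracts the query structure of GPST-style active attacks: one oracle query per key bit, so when the assumption underlying the oracle's answer fails for some queries, the corresponding bits stay unrecovered. The oracle and all names are illustrative placeholders; no real isogeny arithmetic is shown.

```python
# Hypothetical sketch of bit-by-bit recovery of a static secret via an
# oracle abstracting the honest SIDH party (illustrative only).
def gpst_recover(oracle, n_bits):
    key = 0
    for i in range(n_bits):
        # oracle(partial_key, i) models: "do malformed public values built
        # from the bits recovered so far leave the victim's shared key
        # unchanged?" -- revealing the i-th bit whenever the attack's
        # assumption holds for this query.
        if oracle(key, i):
            key |= 1 << i
    return key

# Idealized oracle that always answers correctly:
secret = 0b101101
print(bin(gpst_recover(lambda partial, i: bool((secret >> i) & 1), 6)))
```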


2020 ◽  
Vol 30 (10) ◽  
pp. 1089-1113 ◽  
Author(s):  
Emmanuel Godard ◽  
Eloi Perdereau

Abstract We consider the well-known Coordinated Attack Problem, where two generals have to decide on a common attack while their messengers can be captured by the enemy. Informally, this problem represents the difficulty of agreeing in the presence of communication faults. We consider here only omission faults (loss of messages), but contrary to previous studies, we do not restrict the way messages can be lost; that is, we make no specific assumption and use no specific failure metric. In the large subclass of message adversaries where a double simultaneous omission can never happen, we characterize which adversaries are obstructions for the Coordinated Attack Problem. We give two proofs of this result. One is combinatorial and uses the classical bivalency technique for the necessary condition. The second is topological and uses simplicial complexes to prove the necessary condition. We also present two different Consensus algorithms that are combinatorial (resp. topological) in essence. Finally, we analyze the two proofs and illustrate the relationship between the combinatorial and the topological approach in the very general setting of message adversaries. We show that the topological characterization gives a clearer explanation of why some message adversaries are obstructions and others are not. This result is a convincing illustration of the power of topological tools for distributed computability.
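A small simulation (ours, purely illustrative) conveys the indistinguishability at the heart of the problem: whichever round the adversary cuts, the sender of the lost message has the same local view as in a run where that message was delivered and a later one was lost.

```python
# Two generals exchange acknowledgements; the adversary drops one message.
def run(drop_round, max_rounds=8):
    seen = {"A": 0, "B": 0}      # confirmations each general has received
    sender = "A"
    for r in range(1, max_rounds + 1):
        if r == drop_round:
            break                 # messenger captured; silence afterwards
        receiver = "B" if sender == "A" else "A"
        seen[receiver] = seen[sender] + 1
        sender = receiver
    return seen

for d in (1, 2, 3, 4):
    print(f"message lost in round {d}: {run(d)}")
# The sender of the lost message cannot distinguish this run from the one
# in which the adversary strikes one round later, so no deterministic
# protocol lets both generals safely decide to attack.
```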


As one of the foundational pixel-based illumination estimation algorithms, the White Patch algorithm computes the global illumination RGB value of an image based on the specific assumption that the maximum reflected light in the scene is achromatic, i.e., comes from a white patch. This assumption about scene illumination is restrictive, and many images fail to satisfy it. In this paper, we propose an improved White Patch illumination estimation method. First, image patches are extracted with a sliding window; the White Patch algorithm is then used to estimate the illumination color value of each patch; finally, kernel density estimation is applied to obtain the overall illumination color value of the image. The experimental results show that the improved White Patch illumination estimation method proposed in this paper performs better at illumination estimation for images of naturally illuminated scenes.
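A compact sketch of this pipeline is given below (our reconstruction from the description above; the window size, stride, and KDE grid are illustrative assumptions, not the paper's settings):

```python
import numpy as np
from scipy.stats import gaussian_kde

def estimate_illuminant(img, patch=32, stride=16):
    # img: HxWx3 float array in [0, 1]; returns an estimated RGB illuminant.
    h, w, _ = img.shape
    maxima = []
    # Slide a window over the image; the classic White Patch estimate of a
    # patch is its per-channel maximum response.
    for y in range(0, h - patch + 1, stride):
        for x in range(0, w - patch + 1, stride):
            maxima.append(img[y:y + patch, x:x + patch].reshape(-1, 3).max(axis=0))
    maxima = np.asarray(maxima)
    # Aggregate the per-patch estimates with a kernel density estimate and
    # take the per-channel mode as the global illumination color.
    grid = np.linspace(0.0, 1.0, 256)
    return np.array([grid[np.argmax(gaussian_kde(maxima[:, c])(grid))]
                     for c in range(3)])
```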


2018 ◽  
Vol 9 (2) ◽  
pp. 373-387 ◽  
Author(s):  
Hamid Reza Motamedian ◽  
Artem Kulachenko

Abstract. In this paper, we develop two alternative formulations for the rotational constraint between the tangents of connected beams undergoing large deformations in 3-D space. Such a formulation is useful for modeling bonded or welded connections between beams. The first formulation is derived by consistently linearizing the variation of the strain energy and by assuming linear shape functions for the beam elements. This formulation can be used with both the Lagrange multiplier method and the penalty stiffness method. The second, non-consistent formulation assumes that the contact normal is independent of the nodal displacements within each iteration, and is updated consistently between iterations. In other words, we ignore the contribution from the change of the contact normal in the linearization of the contact gap function. This assumption yields simpler equations and requires no specific assumption regarding the shape functions of the underlying beam elements; however, it is limited to the penalty method. We demonstrate the performance of the presented formulations in solving problems using implicit time integration. We also present a case showing the implications of ignoring this rotational constraint when modeling a network of beams.
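To illustrate the penalty variant, the following sketch (ours, not the paper's element code) evaluates the rotational gap between two beam tangents and the resulting penalty moment about the contact normal, which the non-consistent formulation holds fixed within each iteration:

```python
import numpy as np

def rotational_penalty(t1, t2, k_pen, g0=0.0):
    # t1, t2: tangent vectors of the connected beams; g0: initial relative
    # angle to be preserved by the bonded/welded connection.
    t1, t2 = t1 / np.linalg.norm(t1), t2 / np.linalg.norm(t2)
    g = np.arccos(np.clip(t1 @ t2, -1.0, 1.0)) - g0   # rotational gap
    n = np.cross(t1, t2)                              # contact normal
    if (n_norm := np.linalg.norm(n)) > 1e-12:
        n = n / n_norm
    return g, k_pen * g * n                           # gap, penalty moment

g, M = rotational_penalty(np.array([1.0, 0.0, 0.0]),
                          np.array([1.0, 0.1, 0.0]), k_pen=1e4)
print(g, M)
```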


2018 ◽  
Author(s):  
Deborah Marciano ◽  
Eden Krispin ◽  
Sacha Bourgeois-Gironde ◽  
Leon Deouell

When humans learn the outcome of an option they did not choose (the alternative outcome) before their own outcome is known, they form biased expectations about their future reward. Specifically, people see an illusory negative correlation between the two outcomes, a bias we coined the Alternative Omen Effect (ALOE). Why does this happen? Here, we tested several alternative explanations and conclude that the ALOE may derive from a pervasive belief that good luck is a limited resource. In Experiment 1, we show that the ALOE is due to people seeing a good alternative outcome as a bad sign regarding their own outcome, but not vice versa. Experiment 2 confirms that the ALOE is a highly ingrained bias that replicates across tasks and that it cannot be explained by preconceptions regarding the outcome distribution, including (1) the Limited Good Hypothesis (zero-sum bias), according to which people see the world as a zero-sum game and assume that more resources there mean fewer resources here, and/or (2) the more specific assumption that laboratory tasks are programmed as zero-sum games. To neutralize these potential beliefs, participants had to draw actual colored beads from two real, distinct bags. In spite of unequivocal situational evidence of the independence of the two resources, we found a strong ALOE. Finally, in Experiment 3, we tested the Limited Luck Hypothesis: by eliminating the value of the outcomes, we eliminated the ALOE. These results suggest that individuals perceive good luck itself, rather than material goods, as a limited resource. We discuss how the Limited Luck belief might explain a wide range of behaviors traditionally associated with the Limited Good belief.


2017 ◽  
Vol 29 (4) ◽  
pp. 1074-1095 ◽  
Author(s):  
Chin-wei Huang

Purpose In the past literature, employees have been extensively utilized as an input in most data envelopment analysis (DEA) studies, but different labor types are identically defined as the same input factor, without any specific assumption about their heterogeneity. The influence of manual and non-manual labor utilization on performance has also not been investigated in hotel efficiency analyses. The purpose of this study is to assess the inefficiency indices derived from manual and non-manual labor and to analyze the influence of labor utilization on hotels' operational efficiency. Design/methodology/approach Based on the different features of the two labor types, performance indicators are evaluated through a hybrid DEA model. Findings More than 32 per cent of the tourist hotels are evaluated as efficient, and more than half of the hotels have an efficiency score lower than the average. The inefficiency caused by the radial inputs has a greater influence on efficiency. This finding indicates that most hotels are efficient in their utilization of non-manual labor. The investigation of external factors shows that excessive utilization of non-manual labor has a slight influence on operational efficiency across many non-chain hotels. The author also found the efficiency of non-manual labor utilization to be lower at hotels located in resort areas. Originality/value This study used a hybrid DEA model, in which non-manual and manual labor are assumed to be non-radial and radial inputs, respectively, to evaluate efficiency. Establishing the significance of heterogeneous assumptions for manual and non-manual labor types is the main contribution to the theory of hotel efficiency measurement.
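For readers unfamiliar with radial DEA, here is a minimal input-oriented CCR sketch solved by linear programming. This is a plain radial model with invented toy data, deliberately simpler than the hybrid (radial/non-radial) model used in the study:

```python
import numpy as np
from scipy.optimize import linprog

def ccr_efficiency(X, Y, o):
    # Input-oriented radial efficiency of unit o.
    # X: (m, n) inputs and Y: (s, n) outputs for n decision-making units.
    m, n = X.shape
    s = Y.shape[0]
    c = np.r_[1.0, np.zeros(n)]                # minimize theta
    A_in = np.hstack([-X[:, [o]], X])          # sum_j l_j x_ij <= theta x_io
    A_out = np.hstack([np.zeros((s, 1)), -Y])  # sum_j l_j y_rj >= y_ro
    b = np.r_[np.zeros(m), -Y[:, o]]
    res = linprog(c, A_ub=np.vstack([A_in, A_out]), b_ub=b,
                  bounds=[(0, None)] * (n + 1))
    return res.fun                             # efficiency score in (0, 1]

# Toy data: manual labour, non-manual labour (inputs); revenue (output)
X = np.array([[20.0, 30.0, 25.0, 40.0], [5.0, 8.0, 6.0, 12.0]])
Y = np.array([[100.0, 120.0, 110.0, 130.0]])
print([round(ccr_efficiency(X, Y, j), 3) for j in range(4)])
```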


2016 ◽  
Vol 64 (6) ◽  
Author(s):  
Salman Zaidi ◽  
Andreas Kroll

Abstract A novel interval-data-based Takagi-Sugeno (TS) fuzzy system is proposed to identify uncertain nonlinear dynamic systems by endowing the classical TS fuzzy system with probability theory and symbolic data analysis. Such systems have variability in their outputs; that is, they produce varying responses each time the same stimulus is applied under the same conditions. Interval data are generated by repeating the identification experiment multiple times and applying probabilistic techniques to obtain soft bounds on the output. The interval data are then used directly in the TS fuzzy modelling, giving rise to interval antecedent and consequent parameters. This method does not require any specific assumption about the probability distribution of the random variable that models the uncertainty. The developed procedure is demonstrated on a pneumatic drive system.
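A minimal sketch of the interval-data construction step is shown below (our illustration; the signal, noise level, and percentile choices are assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)
n_repeats, n_samples = 30, 200

# Repeat the same identification experiment: identical stimulus, varying
# response due to the system's inherent variability.
y_nominal = np.sin(np.linspace(0, 2 * np.pi, n_samples))
runs = y_nominal + 0.05 * rng.standard_normal((n_repeats, n_samples))

# Soft probabilistic bounds, e.g. the 5th/95th percentiles across the
# repetitions; each sample becomes an interval [lo, hi] that then feeds
# the interval TS fuzzy model.
lo = np.percentile(runs, 5, axis=0)
hi = np.percentile(runs, 95, axis=0)
intervals = np.stack([lo, hi], axis=1)
print(intervals[:3])
```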


2015 ◽  
Vol 18 ◽  
Author(s):  
Debora de Chiusole ◽  
Luca Stefanutti ◽  
Pasquale Anselmi ◽  
Egidio Robusto

Abstract The basic local independence model (BLIM) is a probabilistic model for knowledge structures, characterized by the property that the lucky guess and careless error parameters of the items are independent of the knowledge states of the subjects. When fitting the BLIM to empirical data, a good fit can be obtained even when this invariance assumption is violated. Therefore, statistical tests are needed for detecting violations of this specific assumption. This work extends the theoretical results obtained by de Chiusole, Stefanutti, Anselmi, and Robusto (2013), showing that statistical tests based on partitioning the empirical data set into two (or more) groups are not adequate for testing the BLIM's invariance assumption. A simulation study confirms the theoretical results.
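For context, the BLIM assigns each response pattern a probability of the following form (a minimal sketch with invented numbers; beta is the careless-error and eta the lucky-guess probability of an item, assumed independent of the knowledge state, which is exactly the invariance assumption at issue):

```python
from itertools import product

def p_response_given_state(response, state, beta, eta):
    # Local independence: items are answered independently given the state.
    p = 1.0
    for q in range(len(response)):
        if q in state:                                # item is mastered
            p *= (1 - beta[q]) if response[q] else beta[q]
        else:                                         # item is not mastered
            p *= eta[q] if response[q] else (1 - eta[q])
    return p

# Toy knowledge structure on two items, with state probabilities pi:
states = {frozenset(): 0.3, frozenset({0}): 0.3, frozenset({0, 1}): 0.4}
beta, eta = [0.1, 0.1], [0.2, 0.2]
for r in product([0, 1], repeat=2):
    p = sum(pi * p_response_given_state(r, K, beta, eta)
            for K, pi in states.items())
    print(r, round(p, 4))     # the four pattern probabilities sum to 1
```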


2014 ◽  
Vol 61 (1-2) ◽  
pp. 3-15
Author(s):  
Andrzej Sawicki

Abstract The problem of dilation is discussed in the context of the classical Cam-Clay model, which was developed on the basis of a specific assumption regarding the plastic work. This assumption leads to a special form of the dilation function, from which the shape of the yield function is derived. The above-mentioned assumption is compared with the results of triaxial tests performed on the model "Skarpa" sand. It is shown that the Cam-Clay approach is not realistic, as it is based on an assumption that is inconsistent with the experimental data. Some general considerations and a discussion of this problem are also presented.
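For reference, the plastic-work assumption of the classical Cam-Clay model, and the dilatancy and yield functions it implies, can be written as follows (standard notation reconstructed from the literature, not copied from the paper):

```latex
% Plastic work assumption of original Cam-Clay:
\[
  \mathrm{d}W^{p} \;=\; p'\,\mathrm{d}\varepsilon_{v}^{p}
                  + q\,\mathrm{d}\varepsilon_{s}^{p}
  \;=\; M p'\,\mathrm{d}\varepsilon_{s}^{p},
\]
% which fixes the dilation function:
\[
  d \;\equiv\; \frac{\mathrm{d}\varepsilon_{v}^{p}}{\mathrm{d}\varepsilon_{s}^{p}}
    \;=\; M - \eta, \qquad \eta = \frac{q}{p'},
\]
% and, together with normality, yields the classical Cam-Clay locus:
\[
  q \;=\; M p' \ln\frac{p'_{0}}{p'} .
\]
```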

