prior odds
Recently Published Documents


TOTAL DOCUMENTS: 12 (five years: 0)

H-INDEX: 7 (five years: 0)

2020 ◽  
Vol 4 (1) ◽  
pp. p1
Author(s):  
Steven C. Gabaeff

Probabilistic language (PL) is language used to convey mathematical probabilities in narrative form, including terms such as “highly likely”, “concerning for”, “suspicious of”, and many others. PL can be used in conformance with standards elucidated in forensic epidemiology, or misused with intentional imprecision, when not justified, to promote a misdiagnosis of abuse, with dire consequences. The application of actual probability analysis using tested mathematical models, such as Bayes' theorem, is essential to assessing the actual probability of abuse in a specific case and avoiding false accusations of abuse. Consideration of the prior odds of abuse, combined with calculations of the reliability of nonspecific and/or unreliable criteria or “indicators”, is being disregarded by child abuse pediatricians, who instead justify diagnoses of abuse with statements of false certainty that depend on the misuse of probabilistic language. These suppositious statements of false certainty are the sine qua non of accusatory expert opinion. Currently, and unfortunately, false certainty is detected only by scientists and physicians with the requisite advanced knowledge of these issues. When probabilities and evidence-based science are studied and applied, deep flaws in the fund of knowledge of child abuse pediatrics are exposed. On balance, there is an emerging reality that the collective suffering of falsely accused families may dwarf the horrific impacts associated with real abuse. It also exposes iatrogenic abuse as possibly the most common form of prosecuted child abuse in the legal system. A false accusation of child abuse is child abuse. The misuse of probabilistic language to convey false certainty, and its ramifications for innocent caregivers, is discussed herein and must be prevented.
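The abstract's core argument rests on the odds form of Bayes' theorem: posterior odds equal prior odds multiplied by the likelihood ratio of an indicator. A minimal sketch of that calculation is below; the prior odds and likelihood ratio values are purely illustrative assumptions, not clinical data from the paper.

```python
# Odds form of Bayes' theorem: posterior odds = prior odds x likelihood ratio.
# All numeric values here are illustrative assumptions, not clinical estimates.

def posterior_odds(prior_odds: float, likelihood_ratio: float) -> float:
    """Update the prior odds of a hypothesis by an indicator's likelihood ratio."""
    return prior_odds * likelihood_ratio

def odds_to_probability(odds: float) -> float:
    """Convert odds o to the probability o / (1 + o)."""
    return odds / (1.0 + odds)

# Suppose the prior odds of abuse are low (1:999), and a nonspecific
# "indicator" is only twice as likely under abuse as under accident (LR = 2).
prior = 1 / 999
post = posterior_odds(prior, 2.0)
print(round(odds_to_probability(post), 4))  # ≈ 0.002 — far from certainty
```

The point the sketch illustrates is the one the abstract makes: a nonspecific indicator with a modest likelihood ratio cannot convert low prior odds into anything resembling certainty.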


2018 ◽  
Author(s):  
Ivan Chistyakov ◽  
Olga Soboleva

Questionnaires are common tools in psychological studies, and they include questions about frequencies (e.g., the State-Trait Anxiety Inventory asks how often you feel nervous, with response options ‘never’, ‘sometimes’, ‘often’ and ‘very often’), but the meaning of the responses is not clear. B. F. Skinner proposed experimental analysis as a way to find the meaning of verbal behavior. The term ‘often’ was defined functionally as behavior with positive sensitivity to the relative frequency of an event and sensitivity to the question and social consequences. The matching law was used to describe context-behavior relations quantitatively. We conducted four experiments on ten Russian native speakers to determine the meaning of the term ‘often’. During each experiment, inducers (alternating events ‘1’ and ‘0’ with a predetermined probability of occurrence, and the question about the relative frequency of one of the events, ‘Do you often see ‘1’s?’) and response options (‘Yes’ and ‘No’) were constantly presented. We documented free operant responses over sequences of events with different lengths (from 4 to 12 events) and prior odds of ‘1’s to ‘0’s (from 1:5 to 5:1). The collected data suggest that ‘often’ means ‘at least three times in a row’.
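The quantitative model the abstract invokes, the matching law, relates response ratios to event-frequency ratios. A minimal sketch of its generalized power-function form follows; the sensitivity and bias parameter values are assumptions for illustration, not fitted values from these experiments.

```python
# Generalized matching law (power-function form): B1/B2 = b * (f1/f2)**s,
# where s is sensitivity to the relative frequency of events and b is
# response bias. Parameter values below are illustrative assumptions.

def predicted_response_ratio(f1: float, f2: float,
                             s: float = 0.8, b: float = 1.0) -> float:
    """Predicted ratio of 'Yes' to 'No' responses for event frequencies f1:f2."""
    return b * (f1 / f2) ** s

# Prior odds of '1's to '0's spanning 1:5 to 5:1, as in the experiments:
for f1, f2 in [(1, 5), (1, 1), (5, 1)]:
    print((f1, f2), round(predicted_response_ratio(f1, f2), 3))
```

With positive sensitivity (s > 0), the predicted 'Yes':'No' ratio rises monotonically with the relative frequency of the target event, which is the functional property the authors use to define ‘often’.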


2018 ◽  
Vol 1 (2) ◽  
pp. 186-197 ◽  
Author(s):  
Brent M. Wilson ◽  
John T. Wixted

Efforts to increase replication rates in psychology generally consist of recommended improvements to methodology, such as increasing sample sizes to increase power or using a lower alpha level. However, little attention has been paid to how the prior odds (R) that a tested effect is true can affect the probability that a significant result will be replicable. The lower R is, the less likely a published result will be replicable even if power is high. It follows that if R is lower in one set of studies than in another, then all else being equal, published results will be less replicable in the set with lower R. We illustrate this point by presenting an analysis of data from the social-psychology and cognitive-psychology studies that were included in the Open Science Collaboration’s (2015) replication project. We found that R was lower for the social-psychology studies than for the cognitive-psychology studies, which might explain why the rate of successful replications differed between these two sets of studies. This difference in replication rates may reflect the degree to which scientists in the two fields value risky but potentially groundbreaking (i.e., low-R) research. Critically, high-R research is not inherently better or worse than low-R research for advancing knowledge. However, if they wish to achieve replication rates comparable to those of high-R fields (a judgment call), researchers in low-R fields would need to use an especially low alpha level, conduct experiments that have especially high power, or both.
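The mechanism described above can be made concrete with the standard odds-form relation: the posterior odds that a significant result reflects a true effect are R × power / alpha. The sketch below computes the corresponding probability for several prior-odds values; the specific R values are illustrative assumptions, not estimates from the replication project.

```python
# Probability that a statistically significant result reflects a true effect,
# given prior odds R that tested effects are real (posterior odds = R * power / alpha).
# The R values below are illustrative assumptions.

def prob_true_given_significant(R: float, power: float = 0.8,
                                alpha: float = 0.05) -> float:
    """Convert prior odds R into P(effect is true | significant result)."""
    post_odds = R * power / alpha
    return post_odds / (1.0 + post_odds)

# Same power and alpha, different prior odds:
for R in (0.1, 0.5, 1.0):
    print(R, round(prob_true_given_significant(R), 3))
```

Holding power and alpha fixed, lower R yields a lower probability that a significant result is true, which is why, as the abstract argues, low-R fields would need a lower alpha or higher power to match the replicability of high-R fields.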


2012 ◽  
Vol 3 (1) ◽  
pp. 3 ◽  
Author(s):  
Bruce Budowle ◽  
Jianye Ge ◽  
Ranajit Chakraborty ◽  
Harrell Gill-King

2012 ◽  
Vol 3 (1) ◽  
pp. 2 ◽  
Author(s):  
Alex Biedermann ◽  
Franco Taroni ◽  
Pierre Margot

2011 ◽  
Vol 2 (1) ◽  
pp. 15 ◽  
Author(s):  
Bruce Budowle ◽  
Jianye Ge ◽  
Ranajit Chakraborty ◽  
Harrell Gill-King
