Automated Sleep Scoring with Human Supervision Adds Value Compared with Human Scoring Alone: A Reply to Zammit G. K., Insufficient evidence for the use of automated and semi-automated scoring of polysomnographic recordings. SLEEP 2008;31:449–50

SLEEP
2008
Vol 31 (4)
pp. 451-451
Author(s):  
Vladimir Svetnik ◽  
Junshui Ma ◽  
Keith A. Soper ◽  
Scott Doran ◽  
John J. Renger ◽  
...  
2019
Author(s):  
C. Berthomier ◽  
V. Muto ◽  
C. Schmidt ◽  
G. Vandewalle ◽  
M. Jaspar ◽  
...  

Abstract
Study Objectives: New challenges in sleep science require describing fine-grained phenomena or dealing with large datasets. Besides the human-resource challenge of scoring huge datasets, inter- and intra-expert variability may also reduce the sensitivity of such studies. Searching for a way to disentangle the variability induced by the scoring method from the actual variability in the data, visual and automatic sleep scorings of healthy individuals were examined.
Methods: A first dataset (DS1, 4 recordings) scored by 6 experts plus an autoscoring algorithm was used to characterize inter-scoring variability. A second dataset (DS2, 88 recordings) scored a few weeks later was used to investigate intra-expert variability. Percentage agreements and Conger's kappa were derived from epoch-by-epoch comparisons on pairwise, consensus, and majority scorings.
Results: On DS1, the number of epochs of agreement decreased as the number of experts increased, in both majority and consensus scoring, where agreement ranged from 86% (pairwise) to 69% (all experts). Adding autoscoring to the visual scorings changed the kappa value from 0.81 to 0.79. Agreement between the expert consensus and autoscoring was 93%. On DS2, intra-expert variability was evidenced by the systematic decrease in kappa between autoscoring and each single expert across datasets (0.75 to 0.70).
Conclusions: Visual scoring induces inter- and intra-expert variability, which is difficult to address, especially in big-data studies. When proven to be reliable and perfectly reproducible, autoscoring methods can cope with intra-scorer variability, making them a sensible option when dealing with large datasets.
Statement of Significance: We confirmed and extended previous findings highlighting the intra- and inter-expert variability in visual sleep scoring. On large datasets, these variability issues cannot be completely addressed by practical or statistical solutions such as group training, majority scoring, or consensus scoring. When an automated scoring method can be shown to be as reasonably imperfect as visual scoring but perfectly reproducible, it can serve as a reliable scoring reference for sleep studies.
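The epoch-by-epoch agreement statistic described above can be illustrated with pairwise Cohen's kappa. (The paper reports Conger's kappa, a multi-rater generalization; for two scorers the two coincide.) A minimal sketch with hypothetical 30-second-epoch hypnograms, not the study's actual data:

```python
from collections import Counter

def cohen_kappa(scorer_a, scorer_b):
    """Chance-corrected epoch-by-epoch agreement between two hypnograms
    (one sleep-stage label per 30-s epoch)."""
    assert len(scorer_a) == len(scorer_b)
    n = len(scorer_a)
    # Observed agreement: fraction of epochs with identical labels.
    observed = sum(a == b for a, b in zip(scorer_a, scorer_b)) / n
    # Chance agreement from each scorer's marginal stage frequencies.
    freq_a = Counter(scorer_a)
    freq_b = Counter(scorer_b)
    expected = sum(freq_a[s] * freq_b.get(s, 0) for s in freq_a) / (n * n)
    return (observed - expected) / (1 - expected)

# Hypothetical 10-epoch hypnograms (stages W, N1, N2, N3, REM)
a = ["W", "W", "N1", "N2", "N2", "N3", "N3", "N2", "REM", "REM"]
b = ["W", "N1", "N1", "N2", "N2", "N3", "N2", "N2", "REM", "REM"]
print(round(cohen_kappa(a, b), 3))  # → 0.744 (observed 0.80, chance 0.22)
```

The study's pairwise/consensus comparisons amount to running this kind of computation over every scorer pair, or between each scorer and a consensus hypnogram.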


2012
Vol 15 (1)
pp. 387-399
Author(s):  
Jonathan Adler

James’ The Will to Believe is the most influential work in the ethics of belief. In it, James defends the right to believe, and the rationality of believing, on non-evidential grounds. James’ argument is directed against Clifford’s “Evidentialism,” presented in The Ethics of Belief, in which Clifford concludes that “[i]t is wrong always, everywhere, and for anyone, to believe anything upon insufficient evidence.” After an overview of the James-Clifford exchange and James’ argument, I reconstruct his argument in detail. I then examine four steps in James’ argument and try to show that these amount to fallacies: enticing to reason, but not cogent.


Science
2021
Vol 372 (6539)
pp. eabe9230
Author(s):  
Elan Ness-Cohn ◽  
Ravi Allada ◽  
Rosemary Braun

Ray et al. (Reports, 14 February 2020, p. 800) report apparent transcriptional circadian rhythms in mouse tissues lacking the core clock component BMAL1. To better understand these surprising results, we reanalyzed the associated data. We were unable to reproduce the original findings, nor could we identify reliably cycling genes. We conclude that there is insufficient evidence to support circadian transcriptional rhythms in the absence of Bmal1.


Author(s):  
Trey Roady ◽  
Kyle Wilson ◽  
Jonny Kuo ◽  
Michael G. Lenné

Objective: Research shows frequent mobile phone use in vehicles but says little about how drivers hold their phones. This knowledge would inform countermeasures and help law enforcement detect phone use. Methods: 934 participants were surveyed on phone-use prevalence, handedness, traffic direction, and where they held their device. Results: A majority (66%) reported using their phone while driving. Younger drivers were more likely to use their device. Of device users, 67% preferred their passenger-side hand, 25% their driver-side hand, and 8% both. Height-wise, 22% held the phone in their lap, 52% level with the wheel, and 22% at the top of the wheel. Older drivers were more likely to hold the phone in the highest position. The three most popular combinations were passenger-middle (35%), passenger-low (19%), and passenger-high (13.9%). There was insufficient evidence of differences based on handedness, prevalence, or traffic direction. Conclusion: Driver-preferred attention regions often require substantial neck flexion and eye movement, which facilitates distraction detection. However, behavior may change in response to future interventions.

