Multi-Timescale Drowsiness Characterization Based on a Video of a Driver’s Face

Sensors
2018
Vol 18 (9)
pp. 2801
Author(s):  
Quentin Massoz ◽  
Jacques Verly ◽  
Marc Van Droogenbroeck

Drowsiness is a major cause of fatal accidents, in particular in transportation. It is therefore crucial to develop automatic, real-time drowsiness characterization systems designed to issue accurate and timely warnings of drowsiness to the driver. In practice, the least intrusive, physiology-based approach is to remotely monitor, via cameras, facial expressions indicative of drowsiness such as slow and long eye closures. Since the system’s decisions are based upon facial expressions in a given time window, there exists a trade-off between accuracy (best achieved with long windows, i.e., at long timescales) and responsiveness (best achieved with short windows, i.e., at short timescales). To deal with this trade-off, we develop a multi-timescale drowsiness characterization system composed of four binary drowsiness classifiers operating at four distinct timescales (5 s, 15 s, 30 s, and 60 s) and trained jointly. We introduce a multi-timescale ground truth of drowsiness, based on the reaction times (RTs) measured during standard Psychomotor Vigilance Tasks (PVTs), that strategically enables our system to characterize drowsiness with diverse trade-offs between accuracy and responsiveness. We evaluated our system on 29 subjects via leave-one-subject-out cross-validation and obtained strong results, i.e., global accuracies of 70%, 85%, 89%, and 94% for the four classifiers operating at increasing timescales, respectively.
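As a rough illustration of how a multi-timescale ground truth of this kind can be derived from PVT data, the sketch below labels consecutive time windows as drowsy or alert from reaction times. The 500 ms lapse threshold and the 30% lapse-rate criterion are illustrative assumptions, not the paper's actual labeling rule.

```python
# Hypothetical sketch: derive binary drowsiness labels at a given timescale
# from PVT reaction times. A window counts as "drowsy" (1) when the lapse
# rate (fraction of RTs above `lapse_ms`) exceeds `drowsy_frac`.
def window_labels(rts_ms, timestamps_s, window_s, lapse_ms=500, drowsy_frac=0.3):
    """Label consecutive windows of `window_s` seconds as drowsy (1) or alert (0)."""
    end = max(timestamps_s)
    labels = []
    t = 0.0
    while t < end:
        # Reaction times whose stimulus fell inside the current window.
        in_win = [rt for rt, ts in zip(rts_ms, timestamps_s) if t <= ts < t + window_s]
        if in_win:
            lapse_rate = sum(rt > lapse_ms for rt in in_win) / len(in_win)
            labels.append(1 if lapse_rate > drowsy_frac else 0)
        t += window_s
    return labels
```

Running the same RT stream through several window lengths (e.g., 5 s and 60 s) yields one label sequence per timescale, which is the shape of supervision the four jointly trained classifiers would need.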

2003
Vol 17 (3)
pp. 113-123
Author(s):  
Jukka M. Leppänen ◽  
Mirja Tenhunen ◽  
Jari K. Hietanen

Abstract Several studies have shown faster choice-reaction times to positive than to negative facial expressions. The present study examined whether this effect is exclusively due to faster cognitive processing of positive stimuli (i.e., processes leading up to, and including, response selection), or whether it also involves faster motor execution of the selected response. In two experiments, response selection (onset of the lateralized readiness potential, LRP) and response execution (LRP onset-response onset) times for positive (happy) and negative (disgusted/angry) faces were examined. Shorter response selection times for positive than for negative faces were found in both experiments but there was no difference in response execution times. Together, these results suggest that the happy-face advantage occurs primarily at premotoric processing stages. Implications that the happy-face advantage may reflect an interaction between emotional and cognitive factors are discussed.


2012
Vol 11 (3)
pp. 118-126
Author(s):  
Olive Emil Wetter ◽  
Jürgen Wegge ◽  
Klaus Jonas ◽  
Klaus-Helmut Schmidt

In most work contexts, several performance goals coexist, and conflicts and trade-offs between them can occur. Our paper is the first to contrast a dual goal for speed and accuracy with a single goal for speed on the same task. The Sternberg paradigm (Experiment 1, n = 57) and the d2 test (Experiment 2, n = 19) were used as performance tasks. Speed measures and errors revealed in both experiments that dual as well as single goals increase performance by enhancing memory scanning. However, the single speed goal triggered a speed-accuracy trade-off, favoring speed over accuracy, whereas this was not the case with the dual goal. In difficult trials, dual goals slowed down scanning processes again so that errors could be prevented. This new finding is particularly relevant for security domains, where both aspects have to be managed simultaneously.


2020
Vol 2020 (14)
pp. 378-1-378-7
Author(s):  
Tyler Nuanes ◽  
Matt Elsey ◽  
Radek Grzeszczuk ◽  
John Paul Shen

We present a high-quality sky segmentation model for depth refinement and investigate residual architecture performance to inform optimal shrinking of the network. We describe a model that runs in near real-time on a mobile device, present a new, high-quality dataset, and detail a unique weighting scheme to trade off false positives and false negatives in binary classifiers. We show how the optimizations improve bokeh rendering by correcting stereo depth mispredictions in sky regions. We detail techniques used to preserve edges, reject false positives, and ensure generalization to the diversity of sky scenes. Finally, we present a compact model and compare the performance of four popular residual architectures (ShuffleNet, MobileNetV2, Resnet-101, and Resnet-34-like) at constant computational cost.
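One common way to realize the kind of false-positive/false-negative trade-off described above is a class-weighted binary cross-entropy; the sketch below is a generic illustration of that idea, not the paper's actual loss, and the weights are arbitrary placeholders.

```python
import math

# Illustrative weighted binary cross-entropy for a pixelwise sky classifier.
# `w_fp` penalizes false positives (sky predicted on non-sky pixels, which
# would wrongly blur foreground in a bokeh render); `w_fn` penalizes false
# negatives (missed sky). Raising w_fp relative to w_fn biases the trained
# model toward fewer false positives at the cost of recall.
def weighted_bce(y_true, y_pred, w_fp=2.0, w_fn=1.0, eps=1e-7):
    total = 0.0
    for t, p in zip(y_true, y_pred):
        p = min(max(p, eps), 1.0 - eps)  # clamp for numerical stability
        # t == 1: sky pixel (miss -> false negative); t == 0: non-sky pixel
        # (sky prediction -> false positive).
        total += -(w_fn * t * math.log(p) + w_fp * (1 - t) * math.log(1.0 - p))
    return total / len(y_true)
```

With `w_fp > w_fn`, confident sky predictions on non-sky pixels dominate the loss, which is one simple mechanism for the asymmetry the abstract alludes to.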


2019
Author(s):  
Anna Katharina Spälti ◽  
Mark John Brandt ◽  
Marcel Zeelenberg

People often have to make trade-offs. We study three types of trade-offs: 1) "secular trade-offs" where no moral or sacred values are at stake, 2) "taboo trade-offs" where sacred values are pitted against financial gain, and 3) "tragic trade-offs" where sacred values are pitted against other sacred values. Previous research (Critcher et al., 2011; Tetlock et al., 2000) demonstrated that tragic and taboo trade-offs are not only evaluated by their outcomes, but are also evaluated based on the time it took to make the choice. We investigate two outstanding questions: 1) whether the effect of decision time differs for evaluations of decisions compared to decision makers and 2) whether moral contexts are unique in their ability to influence character evaluations through decision process information. In two experiments (total N = 1434) we find that decision time affects character evaluations, but not evaluations of the decision itself. There were no significant differences between tragic trade-offs and secular trade-offs, suggesting that the decision structure may be more important in evaluations than moral context. Additionally, the magnitude of the effect of decision time shows us that decision time may be of less practical use than expected. We thus urge a closer examination of the processes underlying decision time and its perception.


2019
Author(s):  
Kasper Van Mens ◽  
Joran Lokkerbol ◽  
Richard Janssen ◽  
Robert de Lange ◽  
Bea Tiemens

BACKGROUND It remains a challenge to predict which treatment will work for which patient in mental healthcare. OBJECTIVE In this study we compare machine learning algorithms that predict, during treatment, which patients will not benefit from brief mental health treatment, and present trade-offs that must be considered before an algorithm can be used in clinical practice. METHODS Using an anonymized dataset containing routine outcome monitoring data from a mental healthcare organization in the Netherlands (n = 2,655), we applied three machine learning algorithms to predict treatment outcome. The algorithms were internally validated with cross-validation on a training sample (n = 1,860) and externally validated on an unseen test sample (n = 795). RESULTS The performance of the three algorithms did not significantly differ on the test set. With a default classification cut-off at 0.5 predicted probability, the extreme gradient boosting algorithm showed the highest positive predictive value (PPV) of 0.71 (0.61–0.77), with a sensitivity of 0.35 (0.29–0.41) and an area under the curve of 0.78. A trade-off can be made between PPV and sensitivity by choosing different cut-off probabilities. With a cut-off at 0.63, the PPV increased to 0.87 and the sensitivity dropped to 0.17. With a cut-off at 0.38, the PPV decreased to 0.61 and the sensitivity increased to 0.57. CONCLUSIONS Machine learning can be used to predict treatment outcomes based on routine monitoring data. This allows practitioners to choose their own trade-off between being selective and more certain versus inclusive and less certain.
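The PPV/sensitivity trade-off described in the results is purely a function of where the probability cut-off is placed. The sketch below computes both metrics at a given cut-off for toy data (the data and cut-offs are illustrative, not the study's):

```python
# Compute PPV (precision) and sensitivity (recall) of a classifier that
# flags a case as positive when its predicted probability >= cutoff.
def ppv_sensitivity(y_true, scores, cutoff):
    """Return (PPV, sensitivity) for binary labels and predicted probabilities."""
    tp = sum(1 for y, s in zip(y_true, scores) if y == 1 and s >= cutoff)
    fp = sum(1 for y, s in zip(y_true, scores) if y == 0 and s >= cutoff)
    fn = sum(1 for y, s in zip(y_true, scores) if y == 1 and s < cutoff)
    ppv = tp / (tp + fp) if tp + fp else float("nan")
    sens = tp / (tp + fn) if tp + fn else float("nan")
    return ppv, sens
```

Sweeping the cut-off upward makes the flagged group smaller but purer (PPV up, sensitivity down), which is exactly the selective-versus-inclusive choice the conclusions leave to the practitioner.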


Author(s):  
Steven Bernstein

This commentary discusses three challenges for the promising and ambitious research agenda outlined in the volume. First, it interrogates the volume’s attempts to differentiate political communities of legitimation, which may vary widely in composition, power, and relevance across institutions and geographies, with important implications not only for who matters, but also for what gets legitimated, and with what consequences. Second, it examines avenues to overcome possible trade-offs from gains in empirical tractability achieved through the volume’s focus on actor beliefs and strategies. One such trade-off is less attention to evolving norms and cultural factors that may underpin actors’ expectations about what legitimacy requires. Third, it addresses the challenge of theory building that can link legitimacy sources, (de)legitimation practices, audiences, and consequences of legitimacy across different types of institutions.


Author(s):  
Lisa Best ◽  
Kimberley Fung-Loy ◽  
Nafiesa Ilahibaks ◽  
Sara O. I. Ramirez-Gomez ◽  
Erika N. Speelman

Abstract Nowadays, tropical forest landscapes are commonly characterized by a multitude of interacting institutions and actors with competing land-use interests. In these settings, indigenous and tribal communities are often marginalized in landscape-level decision making. Inclusive landscape governance inherently integrates diverse knowledge systems, including those of indigenous and tribal communities. Increasingly, geo-information tools are recognized as appropriate tools to integrate diverse interests and legitimize the voices, values, and knowledge of indigenous and tribal communities in landscape governance. In this paper, we present the contribution of the integrated application of three participatory geo-information tools to inclusive landscape governance in the Upper Suriname River Basin in Suriname: (i) Participatory 3-Dimensional Modelling, (ii) the Trade-off! game, and (iii) participatory scenario planning. The participatory 3-dimensional modelling enabled easy participation of community members, documentation of traditional, tacit knowledge and social learning. The Trade-off! game stimulated capacity building and understanding of land-use trade-offs. The participatory scenario planning exercise helped landscape actors to reflect on their own and others’ desired futures while building consensus. Our results emphasize the importance of systematically considering tool attributes and key factors, such as facilitation, for participatory geo-information tools to be optimally used and fit with local contexts. The results also show how combining the tools helped to build momentum and led to diverse yet complementary insights, thereby demonstrating the benefits of integrating multiple tools to address inclusive landscape governance issues.


2021
Vol 11 (1)
Author(s):  
Yoav Kolumbus ◽  
Noam Nisan

Abstract We study the effectiveness of tracking and testing policies for suppressing epidemic outbreaks. We evaluate the performance of tracking-based intervention methods on a network SEIR model, which we augment with an additional parameter to model pre-symptomatic and asymptomatic individuals, and study the effectiveness of these methods in combination with, or as an alternative to, quarantine and global lockdown policies. Our focus is on the basic trade-off between human lives lost and economic costs, and on how this trade-off changes under different quarantine, lockdown, tracking, and testing policies. Our main findings are as follows: (1) Tests combined with patient quarantines reduce both economic costs and mortality; however, an extensive-scale testing capacity is required to achieve a significant improvement. (2) Tracking significantly reduces both economic costs and mortality. (3) Tracking combined with a moderate testing capacity can achieve containment without lockdowns. (4) In the presence of a flow of new incoming infections, dynamic “On–Off” lockdowns are more efficient than fixed lockdowns. In this setting as well, tracking strictly improves efficiency. The results show the extreme usefulness of policies that combine tracking and testing for reducing mortality and economic costs, and their potential to contain outbreaks without imposing any social distancing restrictions. This highlights the difficult social question of trading off these gains against patient privacy, which is inevitably infringed by tracking.
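To make the SEIR setting concrete, the sketch below is a minimal mean-field discrete-time SEIR step, not the paper's network model; the `quarantine` knob and all rate parameters are illustrative stand-ins for the tracking/testing/quarantine policies the paper studies.

```python
# Minimal discrete-time SEIR sketch (mean-field approximation, one step per
# day). `quarantine` in [0, 1] scales down effective contacts, standing in
# for the combined effect of tracking, testing, and quarantine policies.
def seir_step(s, e, i, r, beta=0.3, sigma=0.2, gamma=0.1, quarantine=0.0):
    """Advance susceptible/exposed/infectious/recovered fractions by one day."""
    n = s + e + i + r
    new_exposed = beta * (1.0 - quarantine) * s * i / n  # S -> E
    new_infectious = sigma * e                            # E -> I
    new_recovered = gamma * i                             # I -> R
    return (s - new_exposed,
            e + new_exposed - new_infectious,
            i + new_infectious - new_recovered,
            r + new_recovered)

def peak_infectious(days=300, quarantine=0.0):
    """Peak infectious fraction over a simulated outbreak from a 1% seed."""
    s, e, i, r = 0.99, 0.0, 0.01, 0.0
    peak = i
    for _ in range(days):
        s, e, i, r = seir_step(s, e, i, r, quarantine=quarantine)
        peak = max(peak, i)
    return peak
```

Even this toy version reproduces the qualitative effect the paper quantifies: stronger contact reduction lowers the epidemic peak, at the (unmodeled here) economic and privacy costs the abstract discusses.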


Author(s):  
Gaurav Chaurasia ◽  
Arthur Nieuwoudt ◽  
Alexandru-Eugen Ichim ◽  
Richard Szeliski ◽  
Alexander Sorkine-Hornung

We present an end-to-end system for real-time environment capture, 3D reconstruction, and stereoscopic view synthesis on a mobile VR headset. Our solution allows the user to use the cameras on their VR headset as their eyes to see and interact with the real world while still wearing their headset, a feature often referred to as Passthrough. The central challenge when building such a system is the choice and implementation of algorithms under the strict compute, power, and performance constraints imposed by the target user experience and mobile platform. A key contribution of this paper is a complete description of a corresponding system that performs temporally stable passthrough rendering at 72 Hz with only 200 mW power consumption on a mobile Snapdragon 835 platform. Our algorithmic contributions for enabling this performance include the computation of a coarse 3D scene proxy on the embedded video encoding hardware, followed by a depth densification and filtering step, and finally stereoscopic texturing and spatio-temporal up-sampling. We provide a detailed discussion and evaluation of the challenges we encountered, as well as algorithm and performance trade-offs in terms of compute and resulting passthrough quality.

The described system is available to users as the Passthrough+ feature on Oculus Quest. We believe that by publishing the underlying system and methods, we provide valuable insights to the community on how to design and implement real-time environment sensing and rendering on heavily resource-constrained hardware.

