Changes in Gaze Behavior during the Learning of the Epidural Technique with a Simulator in Anesthesia Novices (Preprint)

2020 ◽  
Author(s):  
Emanuele Capogna ◽  
Francesco Salvi ◽  
Lorena Delvino ◽  
Andrea Di Giacinto ◽  
Angelica Del Vecchio ◽  
...  

BACKGROUND Current literature demonstrates the ability of eye tracking to provide reliable quantitative data as an objective assessment tool, with potential applications to medical and surgical training to improve performance. OBJECTIVE The aim of this study was to evaluate the changes in gaze behavior in novice anesthesia trainees performing a simulated epidural technique before and after hands-on training on an epidural simulator. METHODS We enrolled 48 novice trainees who had never previously performed an epidural block. After a standardized learning module, each trainee practiced the epidural procedure on the epidural simulator while wearing a pair of eye tracking glasses (Tobii Pro Glasses 50 Hz wearable wireless eye tracker). After this baseline recording, each trainee spent two hours practicing with the epidural simulator and then once again performed the eye-tracked epidural procedure. Eye tracking metrics and epidural learning outcomes (duration of the procedure and number of attempts) before and after the simulated practice were recorded. RESULTS The duration of the epidural procedure was shorter and the number of epidural attempts was reduced after the tutorial. Before the tutorial, during needle insertion, the eye tracking metrics showed more visits of shorter duration; after the tutorial, there were fewer visits (P=.05) of longer duration (P=.03). A significant correlation was observed between the number of epidural needle insertions (additional attempts) and both the number (OR=2.02 (0.23-1.27; P=.008)) and duration (OR=0.65 (-0.93-0.02; P=.05)) of visits. CONCLUSIONS In novice anesthesia trainees who had never previously performed an epidural block, we observed significant changes in gaze behavior associated with improved performance during the initial phase of learning the epidural technique with a simulator. These results may serve as a prototype for future studies on eye tracking as a teaching and evaluation tool in simulation. 
CLINICALTRIAL Not necessary
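The visit metrics reported above (visit count and visit duration per area of interest) can be derived from a chronological fixation stream. A minimal sketch in Python, assuming a hypothetical input format of (AOI label, fixation duration in ms) tuples — not the authors' actual pipeline:

```python
from itertools import groupby

def visit_metrics(fixations, aoi):
    """Visit count and mean visit duration (ms) for one AOI.

    `fixations` is a chronological list of (aoi_label, duration_ms)
    tuples; consecutive fixations on the same AOI form a single visit,
    as in common eye-tracking analysis software.
    """
    visits = []
    for label, group in groupby(fixations, key=lambda f: f[0]):
        if label == aoi:
            # Sum the durations of consecutive fixations in this visit.
            visits.append(sum(d for _, d in group))
    count = len(visits)
    mean_duration = sum(visits) / count if count else 0.0
    return count, mean_duration

# Illustrative data: two visits to the "needle" AOI (300 ms each).
fix = [("needle", 200), ("needle", 100), ("hand", 150), ("needle", 300)]
print(visit_metrics(fix, "needle"))  # (2, 300.0)
```

A pre/post comparison like the one in the study would then contrast these counts and mean durations across the two recordings.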

2019 ◽  
Vol 3 (3) ◽  
pp. 47 ◽  
Author(s):  
Thibault Sénac ◽  
Arnaud Lelevé ◽  
Richard Moreau ◽  
Cyril Novales ◽  
Laurence Nouaille ◽  
...  

Simulators have traditionally been used for centuries in medical gesture training. Nowadays, mechatronic technologies have opened the way to more evolved solutions enabling objective assessment and dedicated pedagogic scenarios. Trainees can now practice in virtual environments representing various kinds of patients and body parts, including physio-pathological conditions. The gestures to be mastered vary with each medical specialty (e.g., ultrasound probe orientation, or forceps placement during assisted delivery). Hence, medical students need kinesthetic feedback in order to significantly improve their learning. Gesture simulators require haptic devices with variable stiffness actuators, and existing solutions do not always fit the requirements because of their significant size. Compared with electric actuators, pneumatic technology is low-cost, available off-the-shelf, and offers a better mass–power ratio. However, it presents two main drawbacks: nonlinear dynamics and the need for a compressed air supply. During the last decade, we have developed several haptic solutions based on pneumatic actuation (e.g., a birth simulator and an epidural needle insertion simulator) and, recently, in a joint venture with the Prisme laboratory, a pneumatic probe master device for remote ultrasonography. This paper reviews the scientific approaches to pneumatic actuation developed in the medical context, illustrated with the aforementioned applications to highlight their benefits.


2017 ◽  
Vol 39 (5) ◽  
pp. 283-294 ◽  
Author(s):  
Guo-Chung Dong ◽  
Li-Chen Chiu ◽  
Chien-Kun Ting ◽  
Jia-Ruei Hsu ◽  
Chih-Chung Huang ◽  
...  

Ultrasound guidance for epidural block has mitigated the problems of the blind clinical technique, but the design of present ultrasonic probes makes ultrasound-guided catheterization difficult to operate, increasing the failure rate. The purpose of this study was to develop a novel ultrasonic probe that avoids needle contact with vertebral bone during epidural catheterization. The probe has a central circular passage for needle insertion, with two focused annular transducers deployed around the passage for on-axis guidance. A 17-gauge insulated Tuohy needle containing a self-developed fiber-optic-modified stylet was inserted into the back of an anesthetized pig, in the lumbar region, under the guidance of our ultrasonic probe. The inner transducer of the probe detected shallow echo signals with a peak-to-peak amplitude of 2.8 V over L3 at a depth of 2.4 cm, and the amplitude decreased to 0.8 V directly over the L3-L4 interspace. The outer transducer could detect echoes from deeper bone at a depth of 4.5 cm, which did not appear for the inner transducer. The operator tilted the probe slightly in the left-right and cranial-caudal directions until the echoes at the 4.5 cm depth disappeared, and the epidural needle was then inserted through the central passage of the probe. The needle was advanced and stopped when the epidural space was identified by the optical technique. The needle passed without bone contact. The designs of the hollow probe for needle passage and of dual transducers with different focal lengths for detecting shallow and deep vertebrae may benefit operation, bone/nonbone identification, and cost.
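The alignment logic described above — strong shallow echo means bone on axis, a persistent deep echo means the axis still clips deeper bone, and the absence of both means a clear path — can be sketched as a simple amplitude classifier. The threshold values below are illustrative assumptions, not figures from the study:

```python
def probe_alignment(inner_vpp, deep_echo_vpp,
                    inner_threshold=1.5, deep_threshold=0.5):
    """Classify probe position from the two transducers' echo amplitudes.

    inner_vpp: peak-to-peak amplitude (V) of the shallow echo from the
        inner transducer; ~2.8 V over the L3 spinous process versus
        ~0.8 V over the L3-L4 interspace in the reported experiment.
    deep_echo_vpp: amplitude of the ~4.5 cm deep echo seen only by the
        outer transducer; it disappears when the probe axis clears bone.
    Thresholds are hypothetical values chosen for illustration.
    """
    if inner_vpp >= inner_threshold:
        return "over spinous process - move cranial/caudal"
    if deep_echo_vpp >= deep_threshold:
        return "deep bone on axis - tilt probe"
    return "clear path - insert needle"

print(probe_alignment(2.8, 0.0))  # over spinous process - move cranial/caudal
print(probe_alignment(0.8, 0.9))  # deep bone on axis - tilt probe
print(probe_alignment(0.8, 0.0))  # clear path - insert needle
```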


Author(s):  
Nahumi Nugrahaningsih ◽  
Marco Porta ◽  
Aleksandra Klasnja-Milicevic

Adapting the presentation of learning material to a specific student's characteristics is useful for improving the overall learning experience, and learning styles can play an important role to this purpose. In this paper, we investigate the possibility of distinguishing between Visual and Verbal learning styles from gaze data. In an experiment involving first-year students of an engineering faculty, content on the basics of programming was presented in both text and graphic form, and participants' gaze data were recorded by means of an eye tracker. Three metrics were selected to characterize the user's gaze behavior, namely, percentage of fixation duration, percentage of fixations, and average fixation duration. Percentages were calculated over ten intervals into which each participant's interaction time was subdivided, which allowed us to perform time-based assessments. The results showed a significant relation between gaze data and Visual/Verbal learning styles for an information arrangement in which the same concept is presented in graphical format on the left and in text format on the right. We think that this study can provide a useful contribution to learning styles research carried out using eye tracking technology, as it is characterized by unique traits that cannot be found in similar investigations.
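The three per-interval metrics named above can be computed by binning each participant's fixation stream into equal time slices. A minimal sketch, assuming a hypothetical input format of (onset ms, duration ms, AOI label) tuples rather than the authors' actual data layout:

```python
def interval_metrics(fixations, aoi, n_intervals=10):
    """Per-interval gaze metrics for one AOI (e.g., the graphic region).

    `fixations`: chronological list of (onset_ms, duration_ms, aoi_label).
    Each fixation is assigned to the interval containing its onset.
    Returns, per interval: (% of fixation duration on the AOI,
    % of fixations on the AOI, average fixation duration on the AOI).
    """
    start = fixations[0][0]
    total_time = fixations[-1][0] + fixations[-1][1] - start
    width = total_time / n_intervals
    bins = [[] for _ in range(n_intervals)]
    for onset, dur, label in fixations:
        idx = min(int((onset - start) / width), n_intervals - 1)
        bins[idx].append((dur, label))
    metrics = []
    for b in bins:
        aoi_durs = [d for d, lbl in b if lbl == aoi]
        all_dur = sum(d for d, _ in b)
        metrics.append((
            100 * sum(aoi_durs) / all_dur if all_dur else 0.0,
            100 * len(aoi_durs) / len(b) if b else 0.0,
            sum(aoi_durs) / len(aoi_durs) if aoi_durs else 0.0,
        ))
    return metrics

# Illustrative run with two intervals instead of ten:
fix = [(0, 100, "graphic"), (100, 100, "text"),
       (200, 100, "graphic"), (300, 100, "graphic")]
print(interval_metrics(fix, "graphic", 2))
```

With ten intervals per participant, the resulting time series of percentages is what enables the time-based comparison between Visual and Verbal learners.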


2012 ◽  
Vol 2012 ◽  
pp. 1-4 ◽  
Author(s):  
Sami Mansour ◽  
Nizar Din ◽  
Kumaran Ratnasingham ◽  
Shashidhar Irukulla ◽  
George Vasilikostas ◽  
...  

Objective. The demand for laparoscopic surgery has led to the core laparoscopic skills course (CLSC) becoming mandatory for trainees in the UK. Virtual reality (VR) simulation has great potential as a training and assessment tool for laparoscopic skills. The aim of this study was to determine the role of the CLSC in developing laparoscopic skills using VR. Design. Prospective study. Doctors were taught how to perform the peg transfer and clipping skills using the VR simulator. They carried out these skills before and after the course. During the course, they were trained using the box trainer (BT). Certain parameters were assessed. Setting. Doctors attending the CLSC at St George's Hospital between 2008 and 2010. Participants. All doctors with minimal laparoscopic experience attending the CLSC. Results. Forty-eight doctors were included. The time taken for the peg transfer skill improved by 52%, and the total left-hand and right-hand path lengths by 41% and 48%, respectively. The total time for the clipping skill improved by 57%. The number of clips applied in the marked area improved by 38%, and maximum vessel stretch by 45%. Conclusions. This study demonstrated that the CLSC improved some aspects of laparoscopic surgical skills. It addresses practice-based learning and patient care.
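The improvement percentages above follow the usual pre/post convention for metrics where lower is better (task time, path length). A one-line sketch with illustrative numbers (the 125 s / 60 s pair is hypothetical, chosen only to reproduce a 52% figure):

```python
def percent_improvement(pre, post):
    """Percentage improvement from pre- to post-course score, for
    metrics where lower values (e.g., task time in seconds) are better."""
    return round(100 * (pre - post) / pre)

print(percent_improvement(125, 60))  # 52
```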


2021 ◽  
Vol 6 ◽  
Author(s):  
Eva Minarikova ◽  
Zuzana Smidekova ◽  
Miroslav Janik ◽  
Kenneth Holmqvist

To date, most of our knowledge of professional vision has relied on verbal data or questionnaires that used classroom videos as prompts; these have been used to tell us about a teacher's professional vision. Recently, however, new studies have explored professional vision during the act of teaching through the use of mobile eye tracking. This novel approach poses the question: how do these two "professional visions" differ? Visual attention, represented by gaze, was used as a proxy for studying professional vision (specifically, its noticing component), with eye tracking as the data collection method. We worked with three teachers and employed eye-tracking glasses to record teacher eye movements during teaching (4 lessons per teacher; labelled the IN mode). After each lesson, we selected short clips from the lesson recorded by a static camera aimed at the pupils and showed them to the same teacher (i.e., providing a setting similar to that of traditional studies on professional vision) while recording eye movements and gaze behavior through a screen-based eye tracker (labelled the ON mode). The two modes differ, and these differences make direct comparison difficult; however, by overlaying them and describing them in detail, we highlight the exact variance observed. The IN and ON conditions were compared in terms of dwell time on the same students using both quantitative (correlation) and qualitative (timeline comparison) methods. The findings suggest that the greatest differences in attention given to individual pupils occur when a pupil who was interacted with during the situation is missing from the view in the video recording. Even though individual differences are present in the gaze patterns of the IN and ON modes, the teachers in our sample consistently monitored more pupils more often in the ON mode than in the IN mode. 
On the other hand, the IN mode was mostly characterized by a focused gaze on the pupil the teacher was interacting with at the moment, with few side glances. The results aim to open a discussion about our understanding of professional vision in different contexts and about how current research may need to expand its outlook.
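The quantitative IN-vs-ON comparison amounts to correlating, per teacher, the dwell times on the same pupils across the two modes. A minimal Pearson-correlation sketch, assuming two equal-length lists of per-pupil dwell times (a hypothetical data layout, not the authors' pipeline):

```python
def pearson_r(xs, ys):
    """Pearson correlation between per-pupil dwell times in two modes.

    xs[i] and ys[i] are the dwell times on pupil i in the IN and ON
    modes, respectively.
    """
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)
```

A high r would indicate that a teacher distributes attention over pupils similarly in both modes; the qualitative timeline comparison then explains where the two modes diverge.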


2020 ◽  
Vol 52 (3) ◽  
pp. 1140-1160 ◽  
Author(s):  
Diederick C. Niehorster ◽  
Thiago Santini ◽  
Roy S. Hessels ◽  
Ignace T. C. Hooge ◽  
Enkelejda Kasneci ◽  
...  

Abstract Mobile head-worn eye trackers allow researchers to record eye-movement data as participants freely move around and interact with their surroundings. However, participant behavior may cause the eye tracker to slip on the participant's head, potentially strongly affecting data quality. To investigate how this eye-tracker slippage affects data quality, we designed experiments in which participants mimic behaviors that can cause a mobile eye tracker to move. Specifically, we investigated data quality when participants speak, make facial expressions, and move the eye tracker. Four head-worn eye-tracking setups were used: (i) Tobii Pro Glasses 2 in 50 Hz mode, (ii) SMI Eye Tracking Glasses 2.0 60 Hz, (iii) Pupil-Labs' Pupil in 3D mode, and (iv) Pupil-Labs' Pupil with the Grip gaze estimation algorithm as implemented in the EyeRecToo software. Our results show that whereas the gaze estimates of the Tobii and Grip remained stable when the eye tracker moved, the other systems exhibited significant errors (0.8–3.1° increase in gaze deviation over baseline) even for the small amounts of glasses movement that occurred during the speech and facial-expression tasks. We conclude that some of the tested eye-tracking setups may not be suitable for investigating gaze behavior when high accuracy is required, such as during face-to-face interaction scenarios. We recommend that users of mobile head-worn eye trackers perform similar tests with their setups to become aware of their characteristics. This will enable researchers to design experiments that are robust to the limitations of their particular eye-tracking setup.
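The reported gaze deviation is an angular error: the angle between the estimated gaze direction and the direction to a known fixation target, with the slippage effect expressed as the increase over a baseline recording. A minimal sketch of the angle computation on 3D direction vectors (a generic formulation, not the study's exact processing):

```python
import math

def angular_deviation_deg(gaze, target):
    """Angle in degrees between a gaze direction vector and the
    direction to a known fixation target (both 3D, any magnitude)."""
    dot = sum(g * t for g, t in zip(gaze, target))
    ng = math.sqrt(sum(g * g for g in gaze))
    nt = math.sqrt(sum(t * t for t in target))
    # Clamp to avoid domain errors from floating-point rounding.
    cos_a = max(-1.0, min(1.0, dot / (ng * nt)))
    return math.degrees(math.acos(cos_a))

# A gaze estimate 45 degrees off the target direction:
print(angular_deviation_deg((1, 0, 0), (1, 1, 0)))  # ~45.0
```

Averaging this quantity per task and subtracting the baseline value yields the "increase in gaze deviation over baseline" figures quoted in the abstract.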


2018 ◽  
Author(s):  
M Guerra Veloz ◽  
M Jose González-Mariscal ◽  
M Belvis Jimenez ◽  
J Loscertales ◽  
H Galera-Ruiz ◽  
...  

2021 ◽  
pp. 019459982198960
Author(s):  
Tiffany V. Wang ◽  
Nat Adamian ◽  
Phillip C. Song ◽  
Ramon A. Franco ◽  
Molly N. Huston ◽  
...  

Objectives (1) Demonstrate true vocal fold (TVF) tracking software (AGATI [Automated Glottic Action Tracking by artificial Intelligence]) as a quantitative assessment of unilateral vocal fold paralysis (UVFP) in a large patient cohort. (2) Correlate patient-reported metrics with AGATI measurements of TVF anterior glottic angles, before and after procedural intervention. Study Design Retrospective cohort study. Setting Academic medical center. Methods AGATI was used to analyze videolaryngoscopy from healthy adults (n = 72) and patients with UVFP (n = 70). Minimum, 3rd percentile, 97th percentile, and maximum anterior glottic angles (AGAs) were computed for each patient. In patients with UVFP, patient-reported outcomes (Voice Handicap Index 10, Dyspnea Index, and Eating Assessment Tool 10) were assessed, before and after procedural intervention (injection or medialization laryngoplasty). A receiver operating characteristic curve for the logistic fit of paralysis vs control group was used to determine AGA cutoff values for defining UVFP. Results Mean (SD) 3rd percentile AGA (in degrees) was 2.67 (3.21) in control and 5.64 (5.42) in patients with UVFP (P < .001); mean (SD) 97th percentile AGA was 57.08 (11.14) in control and 42.59 (12.37) in patients with UVFP (P < .001). For patients with UVFP who underwent procedural intervention, the mean 97th percentile AGA decreased by 5 degrees from pre- to postprocedure (P = .026). The difference between the 97th and 3rd percentile AGA predicted UVFP with 77% sensitivity and 92% specificity (P < .0001). There was no correlation between AGA measurements and patient-reported outcome scores. Conclusions AGATI demonstrated a difference in AGA measurements between paralysis and control patients. AGATI can predict UVFP with 77% sensitivity and 92% specificity.
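The classification step amounts to flagging UVFP when the AGA range (97th minus 3rd percentile angle) falls below a cutoff, then reading sensitivity and specificity off the two groups. A minimal sketch with hypothetical example values (the data and cutoff below are illustrative, not from the study):

```python
def sens_spec(paralysis_ranges, control_ranges, cutoff):
    """Sensitivity/specificity for flagging UVFP when the AGA range
    (97th minus 3rd percentile anterior glottic angle, in degrees)
    falls below `cutoff`; a paralyzed fold reduces the achievable
    range of glottic opening.
    """
    tp = sum(r < cutoff for r in paralysis_ranges)   # correctly flagged
    tn = sum(r >= cutoff for r in control_ranges)    # correctly cleared
    return tp / len(paralysis_ranges), tn / len(control_ranges)

# Hypothetical AGA ranges (degrees) for three patients per group:
print(sens_spec([30, 35, 50], [55, 60, 40], cutoff=45))
```

Sweeping the cutoff over its range and plotting sensitivity against (1 − specificity) produces the ROC curve used in the study to choose the operating point.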


2021 ◽  
pp. 1-16
Author(s):  
Leigha A. MacNeill ◽  
Xiaoxue Fu ◽  
Kristin A. Buss ◽  
Koraly Pérez-Edgar

Abstract Temperamental behavioral inhibition (BI) is a robust endophenotype for anxiety characterized by increased sensitivity to novelty. Controlling parenting can reinforce children's wariness by rewarding signs of distress. Fine-grained, dynamic measures are needed to better understand both how children perceive their parents' behaviors and the mechanisms supporting the evident relations between parenting and socioemotional functioning. The current study examined dyadic attractor patterns (average mean durations) with state space grids, using children's attention patterns (captured via mobile eye tracking) and parental behavior (positive reinforcement, teaching, directives, intrusion), as functions of child BI and parent anxiety. Forty 5- to 7-year-old children and their primary caregivers completed a set of challenging puzzles, during which the child wore a head-mounted eye tracker. Child BI was positively correlated with the proportion of the parent's time spent teaching. Child age was negatively related, and parent anxiety level positively related, to parent-focused/controlling-parenting attractor strength. There was a significant interaction between parent anxiety level and child age predicting parent-focused/controlling-parenting attractor strength. This study is a first step toward examining the co-occurrence of parenting behavior and child attention in the context of child BI and parental anxiety levels.
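In a state space grid, each moment of the interaction occupies one cell defined jointly by the child's attention target and the parent's behavior, and attractor strength is commonly proxied by the average duration of visits to a cell. A minimal sketch, assuming a hypothetical per-sample encoding of the dyadic states (not the authors' coding scheme):

```python
from itertools import groupby

def mean_cell_duration(dyadic_states, cell):
    """Mean duration (in samples) of visits to one state-space-grid cell.

    `dyadic_states`: chronological list of (child_state, parent_state)
    pairs sampled at a fixed rate; a visit is a maximal run of
    consecutive samples falling in `cell`. Longer average visits
    indicate a stronger attractor.
    """
    runs = [sum(1 for _ in g)
            for key, g in groupby(dyadic_states)
            if key == cell]
    return sum(runs) / len(runs) if runs else 0.0

# Illustrative stream: two visits to the (parent, directive) cell.
seq = ([("parent", "directive")] * 3
       + [("puzzle", "teaching")] * 2
       + [("parent", "directive")])
print(mean_cell_duration(seq, ("parent", "directive")))  # 2.0
```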

