The integration of visual and auditory cues for express saccade generation

2010 ◽  
Vol 10 (7) ◽  
pp. 505-505
Author(s):  
P. Schiller ◽  
M. Kwak

2004 ◽  
Vol 21 (2) ◽  
pp. 119-127 ◽  
Author(s):  
PETER H. SCHILLER ◽  
JOHANNES HAUSHOFER ◽  
GEOFFREY KENDALL

We examined the frequency with which express saccades are generated in rhesus monkeys under a variety of conditions. Increasing the gap time between fixation spot termination and target onset increased express saccade frequency, but this manipulation became progressively less effective as the number of target positions in the sample was increased. Express saccades were rarely produced when two targets were presented simultaneously and the choice of either was rewarded; a temporal asynchrony of only 17 ms between the targets reinstated express saccade generation. Express saccades continued to be generated when either the vergence or the pursuit system was coactivated with the saccadic system.


2020 ◽  
Vol 123 (5) ◽  
pp. 1907-1919 ◽  
Author(s):  
Suryadeep Dash ◽  
Tyler R. Peel ◽  
Stephen G. Lomber ◽  
Brian D. Corneil

Express saccades are the shortest-latency saccades. The frontal eye fields (FEF) are thought to promote express saccades by presetting the superior colliculus. Here, by reversibly inactivating the FEF either unilaterally or bilaterally via cortical cooling, we support this view, showing that the FEF plays a facilitative but not critical role in express saccade generation. We also found that FEF inactivation lowered express saccade peak velocity, emphasizing a contribution of the FEF to express saccade kinematics.


1993 ◽  
Vol 97 (2) ◽  
Author(s):  
Jon Currie ◽  
Sarah Joyce ◽  
Paul Maruff ◽  
Ben Ramsden ◽  
Cheryl McArthur-Jackson ◽  
...  

2018 ◽  
Vol 44 (7) ◽  
pp. 1012-1021 ◽  
Author(s):  
Dominika Radziun ◽  
H. Henrik Ehrsson

1998 ◽  
Author(s):  
W. T. Nelson ◽  
Robert S. Bolia ◽  
Richard L. McKinley ◽  
Tamara L. Chelette ◽  
Lloyd D. Tripp
Keyword(s):  

Author(s):  
Adam F. Werner ◽  
Jamie C. Gorman

Objective: This study examines visual, auditory, and combined (bimodal) coupling modes in the performance of a two-person perceptual-motor task, in which one person provides the perceptual inputs and the other the motor inputs.

Background: Parking a plane or landing a helicopter on a mountaintop requires one person to provide motor inputs while another provides perceptual inputs. Perceptual inputs are communicated visually, auditorily, or through both cues.

Methods: One participant drove a remote-controlled car around an obstacle and through a target while another participant provided auditory, visual, or bimodal cues for steering and acceleration. Difficulty was manipulated using target size. Performance (trial time, path variability), cue rate, and spatial ability were measured.

Results: Visual coupling outperformed auditory coupling. Bimodal performance was best in the most difficult task condition but also high in the easiest condition. Cue rate predicted performance in all coupling modes. Drivers with lower spatial ability required a faster auditory cue rate, whereas drivers with higher spatial ability performed best with a lower rate.

Conclusion: Visual cues yield better performance when only one coupling mode is available. As predicted by multiple resource theory, when both cues are available, performance depends more on auditory cueing. In particular, drivers must be able to transform auditory cues into spatial actions.

Application: Spotters should be trained to provide a cue rate matched to the spatial ability of the driver or pilot. Auditory cues can enhance visual communication when the interpersonal task is visual with spatial outputs.

