Is saccade averaging determined by visual processing or movement planning?

2012 ◽  
Vol 108 (12) ◽  
pp. 3161-3171 ◽  
Author(s):  
Neha Bhutani ◽  
Supriya Ray ◽  
Aditya Murthy

Saccadic averaging, in which gaze lands between the locations of two targets presented simultaneously or in rapid sequence, has often been used as a probe of the computations that transform sensory representations into an oculomotor plan. Since saccadic movements involve at least two processing stages (a visual stage that selects a target and a movement stage that prepares the response), saccade averaging can arise from interference in either visual processing or movement planning. We tested these two alternative hypotheses by having human subjects perform two versions of a saccadic double-step task in which the stimuli remained the same but the instructions differed: REDIRECT gaze to the later-appearing target versus FOLLOW the targets in their order of appearance. If saccade averaging were due to visual processing alone, the pattern of averaging should remain the same across task conditions. Instead, whereas subjects produced averaged saccades between the two targets in the FOLLOW condition, they produced hypometric saccades in the direction of the initial target in the REDIRECT condition, suggesting that saccade averaging arises from the interaction between competing movement plans.

1997 ◽  
Vol 8 (2) ◽  
pp. 95-100 ◽  
Author(s):  
Kimron Shapiro ◽  
Jon Driver ◽  
Robert Ward ◽  
Robyn E. Sorensen

When people must detect several targets in a very rapid stream of successive visual events at the same location, detection of an initial target induces misses for subsequent targets within a brief period. This attentional blink may serve to prevent interruption of ongoing target processing by temporarily suppressing vision for subsequent stimuli. We examined the level at which the attentional blink operates: specifically, whether it prevents early visual processing or instead prevents quite substantial processing from reaching awareness. Our data support the latter view. We observed priming from missed letter targets, benefiting detection of a subsequent target with the same identity but a different case. In a second study, we observed semantic priming from word targets that were missed during the blink. These results demonstrate that attentional gating within the blink operates only after substantial stimulus processing has already taken place. The results are discussed in terms of two forms of visual representation, namely, types and tokens.


2001 ◽  
Vol 13 (3) ◽  
pp. 319-331 ◽  
Author(s):  
Daeyeol Lee ◽  
Nicholas L. Port ◽  
Wolfgang Kruse ◽  
Apostolos P. Georgopoulos

Two rhesus monkeys were trained to intercept a moving target at a fixed location with a feedback cursor controlled by a 2-D manipulandum. The direction from which the target appeared, the time from the target onset to its arrival at the interception point, and the target acceleration were randomized for each trial, thus requiring the animal to adjust its movement according to the visual input on a trial-by-trial basis. The two animals adopted different strategies, similar to those identified previously in human subjects. Single-cell activity was recorded from the arm area of the primary motor cortex in these two animals, and the neurons were classified based on the temporal patterns in their activity, using a nonhierarchical cluster analysis. Results of this analysis revealed differences in the complexity and diversity of motor cortical activity between the two animals that paralleled those of behavioral strategies. Most clusters displayed activity closely related to the kinematics of hand movements. In addition, some clusters displayed patterns of activation that conveyed additional information necessary for successful performance of the task, such as the initial target velocity and the interval between successive submovements, suggesting that such information is represented in selective subpopulations of neurons in the primary motor cortex. These results also suggest that conversion of information about target motion into movement-related signals takes place in a broad network of cortical areas including the primary motor cortex.
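The nonhierarchical cluster analysis used to group neurons by the temporal pattern of their activity can be illustrated with a minimal k-means sketch. The synthetic "firing-rate time courses", the farthest-point initialization, and the choice of two clusters below are illustrative assumptions, not the authors' procedure:

```python
import numpy as np

def kmeans(X, k, n_iter=50):
    """Minimal k-means (a nonhierarchical cluster analysis) on the rows of X."""
    # Farthest-point initialization: start from the first row, then
    # repeatedly add the row farthest from all centers chosen so far.
    centers = [X[0]]
    for _ in range(k - 1):
        d = np.min([((X - c) ** 2).sum(axis=1) for c in centers], axis=0)
        centers.append(X[np.argmax(d)])
    centers = np.stack(centers)
    for _ in range(n_iter):
        # Assign each row to its nearest center, then recompute centers.
        labels = np.argmin(((X[:, None] - centers) ** 2).sum(axis=-1), axis=1)
        centers = np.stack([X[labels == j].mean(axis=0) for j in range(k)])
    return labels, centers

# Two synthetic temporal patterns: early-peaking vs. late-peaking neurons.
t = np.linspace(0.0, 1.0, 20)
early = np.exp(-((t - 0.3) ** 2) / 0.01)
late = np.exp(-((t - 0.7) ** 2) / 0.01)
rng = np.random.default_rng(1)
X = np.vstack([early + 0.05 * rng.standard_normal((10, 20)),
               late + 0.05 * rng.standard_normal((10, 20))])
labels, _ = kmeans(X, 2)  # recovers the two temporal-pattern groups
```

Each row of `X` stands in for one neuron's trial-averaged activity profile; with well-separated patterns, the two clusters recover the early- versus late-peaking groups.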


2004 ◽  
Vol 91 (3) ◽  
pp. 1158-1170 ◽  
Author(s):  
Jonathan B. Dingwell ◽  
Christopher D. Mah ◽  
Ferdinando A. Mussa-Ivaldi

Determining the principles used to plan and execute movements is a fundamental question in neuroscience research. When humans reach to a target with their hand, they exhibit stereotypical movements that closely follow an optimally smooth trajectory. Even when faced with various perceptual or mechanical perturbations, subjects readily adapt their motor output to preserve this stereotypical trajectory. When humans manipulate non-rigid objects, however, they must control the movements of the object as well as the hand. Such tasks pose a fundamentally different control problem from that of moving the arm alone. Here, we developed a mathematical model for transporting a mass-on-a-spring to a target in an optimally smooth way. We demonstrate that the well-known "minimum-jerk" model for smooth reaching movements cannot accomplish this task. Our model extends the concept of smoothness to allow for the control of non-rigid objects. Although our model makes some predictions that are similar to those of minimum jerk, it predicts distinctly different optimal trajectories in several specific cases. In particular, when the relative speed of the movement becomes fast enough or when the object stiffness becomes small enough, the model predicts that subjects will transition from a uni-phasic hand motion to a bi-phasic hand motion. We directly tested these predictions in human subjects. Our subjects adopted trajectories that were well predicted by our model, including all of the predicted transitions between uni- and bi-phasic hand motions. These findings suggest that smoothness of motion is a general principle of movement planning that extends beyond the control of hand trajectories.
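The "minimum-jerk" model mentioned above has, for rigid point-to-point reaches, a well-known closed-form solution: a fifth-order polynomial in normalized time. A minimal sketch of that single-phase baseline profile, against which the mass-on-a-spring extension predicts departures (the function name and parameters are illustrative):

```python
import numpy as np

def minimum_jerk(x0, xf, T, n=101):
    """Classic single-phase minimum-jerk profile for a rigid point-to-point reach.

    x(t) = x0 + (xf - x0) * (10*tau**3 - 15*tau**4 + 6*tau**5), tau = t/T.
    Velocity and acceleration are zero at both endpoints. The paper's
    mass-on-a-spring model extends the smoothness cost to non-rigid
    objects and predicts bi-phasic departures from this profile.
    """
    t = np.linspace(0.0, T, n)
    tau = t / T
    x = x0 + (xf - x0) * (10 * tau**3 - 15 * tau**4 + 6 * tau**5)
    return t, x

t, x = minimum_jerk(0.0, 0.25, 1.0)  # a 25 cm reach over 1 s
```

The profile is symmetric about mid-movement, so the hand covers exactly half the distance at t = T/2.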


2016 ◽  
Vol 283 (1833) ◽  
pp. 20160263 ◽  
Author(s):  
Elisa Zamboni ◽  
Timothy Ledgeway ◽  
Paul V. McGraw ◽  
Denis Schluppeck

Visual perception is strongly influenced by contextual information. A good example is reference repulsion, where subjective reports about the direction of motion of a stimulus are significantly biased by the presence of an explicit reference. These perceptual biases could arise early, during sensory encoding, or alternatively, they may reflect decision-related processes occurring relatively late in the task sequence. To separate these two competing possibilities, we asked (human) subjects to perform a fine motion-discrimination task and then estimate the direction of motion in the presence or absence of an oriented reference line. When subjects performed the discrimination task with the reference, but subsequently estimated motion direction in its absence, direction estimates were unbiased. However, when subjects viewed the same stimuli but performed the estimation task only, with the orientation of the reference line jittered on every trial, the directions estimated by subjects were biased and yoked to the orientation of the shifted reference line. These results show that judgements made relative to a reference are subject to late, decision-related biases. A model in which information about motion is integrated with that of an explicit reference cue, resulting in a late, decision-related re-weighting of the sensory representation, can account for these results.


2018 ◽  
Vol 120 (6) ◽  
pp. 3042-3062 ◽  
Author(s):  
Devin H. Kehoe ◽  
Selvi Aybulut ◽  
Mazyar Fallah

Previous behavioral and physiological research has demonstrated that as the behavioral relevance of potential saccade goals increases, they elicit more competition during target selection processing, as evidenced by increased saccade curvature and neural activity. However, these effects have only been demonstrated for lower order feature singletons, and it remains unclear whether more complicated featural differences between higher order objects also elicit vector modulation. Therefore, we measured the curvature of human saccades elicited by distractors bilaterally flanking a target during a visual search saccade task and systematically varied subsets of features shared between the two distractors and the target, referred to as objective similarity (OS). Our results demonstrate that saccades deviated away from the distractor with the highest OS to the target and that there was a linear relationship between the magnitude of saccade deviation and the number of feature differences between the most similar distractor and the target. Furthermore, an analysis of curvature over the time course of the saccade demonstrated that curvature only occurred in the first 20–30 ms of the movement. Given the multifeatural complexity of the novel stimuli, these results suggest that saccadic target selection processing involves dynamically reweighting vector representations for movement planning to several possible targets based on their behavioral relevance. NEW & NOTEWORTHY We demonstrate that small featural differences between unfamiliar, higher order object representations modulate vector weights during saccadic target selection processing. Such effects have previously only been demonstrated for familiar, simple feature singletons (e.g., color) in which features characterize entire objects.
The complexity and novelty of our stimuli suggest that the oculomotor system dynamically receives visual/cognitive information processed in the higher order representational networks of the cortical visual processing hierarchy and integrates this information for saccadic movement planning.


2021 ◽  
Vol 397 ◽  
pp. 112930
Author(s):  
Manuel Vázquez-Marrufo ◽  
Alberto del Barco-Gavala ◽  
Alejandro Galvao-Carmona ◽  
Rubén Martín-Clemente

1992 ◽  
Vol 67 (6) ◽  
pp. 1417-1427 ◽  
Author(s):  
G. L. Gottlieb ◽  
M. L. Latash ◽  
D. M. Corcos ◽  
T. J. Liubinskas ◽  
G. C. Agarwal

1. Normal human subjects made discrete elbow flexions in the horizontal plane under different task conditions of initial or final position, inertial loading, or instruction about speed. We measured joint angle, acceleration, and electromyographic signals (EMGs) from two agonist and two antagonist muscles. 2. For many of the experimental tasks, the latency of the antagonist EMG burst was strongly correlated with parameters of the first agonist EMG burst defined by a single equation, expressed in terms of the agonist's hypothetical excitation pulse. Latency is proportional to the ratio of pulse duration to pulse intensity, making it proportional to movement distance and inertial load and inversely proportional to planned movement speed. However, these rules are not sufficient to define the timing of every possible single-joint movement. 3. For movements described by the speed-insensitive strategy, the quantity of both antagonist and agonist muscle activity can be uniformly associated with selected kinetic measures that incorporate muscle force-velocity relations. 4. For movements collectively described by the speed-sensitive strategy (i.e., those with direct or indirect constraints on speed), no single rule can describe all the combinations of agonist-antagonist coordination that are used to perform these diverse tasks. 5. Estimates of joint viscosity were made by calculating the amount of velocity-dependent torque used to terminate movements on target. These estimates are similar to those that have previously been made of limb viscosity during postural maintenance. They imply that a significant component of muscle activity must be used to overcome these forces. 6. These and previous results are all consistent with a dual-strategy hypothesis for those single-joint movements that are sufficiently fast to require pulse-like muscle activation patterns.
The major features of such patterns (pulse intensities, durations, and latencies) are determined by central commands programmed in advance of movement initiation. The selection between speed-insensitive or speed-sensitive rules of motoneuron pool excitation is implicitly specified by the nature of speed constraints of the movement task.
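The latency rule in point 2 states a pair of proportionalities. A toy numerical sketch, in which the linear forms, units, and the constant are all hypothetical and serve only to make the stated relations concrete:

```python
def antagonist_latency(distance, load, speed, k=1.0):
    """Toy form of the rule: latency ~ pulse duration / pulse intensity.

    Pulse duration is taken to scale with movement distance and inertial
    load, and pulse intensity with planned movement speed. The constant k
    and the linear dependencies are hypothetical, illustrative only.
    """
    pulse_duration = distance * load
    pulse_intensity = speed
    return k * pulse_duration / pulse_intensity

# Doubling distance or load doubles the latency; doubling speed halves it.
```

This captures only the qualitative structure of the rule: latency grows with distance and load, and shrinks with planned speed.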


1999 ◽  
Vol 263 (2-3) ◽  
pp. 133-136 ◽  
Author(s):  
Thomas Kammer ◽  
Lucia Lehr ◽  
Kuno Kirschfeld

2011 ◽  
Vol 23 (1) ◽  
pp. 238-246 ◽  
Author(s):  
Søren K. Andersen ◽  
Sandra Fuchs ◽  
Matthias M. Müller

We investigated mechanisms of concurrent attentional selection of location and color using electrophysiological measures in human subjects. Two completely overlapping random dot kinematograms (RDKs) of two different colors were presented on either side of a central fixation cross. On each trial, participants attended one of these four RDKs, defined by its specific combination of color and location, in order to detect coherent motion targets. Sustained attentional selection while monitoring for targets was measured by means of steady-state visual evoked potentials (SSVEPs) elicited by the frequency-tagged RDKs. Attentional selection of transient targets and distractors was assessed by behavioral responses and by recording event-related potentials to these stimuli. Spatial attention and attention to color had independent and largely additive effects on the amplitudes of SSVEPs elicited in early visual areas. In contrast, behavioral false alarms and feature-selective modulation of P3 amplitudes to targets and distractors were limited to the attended location. These results suggest that feature-selective attention produces an early, global facilitation of stimuli having the attended feature throughout the visual field, whereas the discrimination of target events takes place at a later stage of processing that is only applied to stimuli at the attended position.
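The frequency-tagging logic (each RDK flickering at its own rate, so that attentional modulation can be read out as the EEG amplitude at that rate) can be illustrated with a minimal synthetic example. The sampling rate, tag frequencies, and amplitudes below are arbitrary choices, not those of the study:

```python
import numpy as np

def ssvep_amplitude(signal, fs, f_tag):
    """One-sided amplitude of the spectral component at a tagged frequency.

    Plain FFT over the full epoch; assumes f_tag falls exactly on an FFT
    bin (i.e., the epoch spans an integer number of flicker cycles).
    """
    n = len(signal)
    spectrum = np.fft.rfft(signal) / n
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)
    idx = np.argmin(np.abs(freqs - f_tag))
    return 2.0 * np.abs(spectrum[idx])

# Synthetic epoch: two stimuli "tagged" at 10 Hz and 15 Hz, with the
# attended one driving a larger steady-state response.
fs, T = 500, 2.0
t = np.arange(0.0, T, 1.0 / fs)
eeg = 1.0 * np.sin(2 * np.pi * 10 * t) + 0.4 * np.sin(2 * np.pi * 15 * t)
```

Because the two tags occupy separate frequency bins, each stimulus's response can be measured independently even though the stimuli fully overlap in space.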


eLife ◽  
2021 ◽  
Vol 10 ◽  
Author(s):  
Nick Taubert ◽  
Michael Stettler ◽  
Ramona Siebert ◽  
Silvia Spadacenta ◽  
Louisa Sting ◽  
...  

Dynamic facial expressions are crucial for communication in primates. Because it is difficult to control the shape and dynamics of facial expressions across species, it is unknown how species-specific facial expressions are perceptually encoded and how they interact with the representation of facial shape. While popular neural network models predict a joint encoding of facial shape and dynamics, the neuromuscular control of faces evolved more slowly than facial shape, suggesting a separate encoding. To investigate these alternative hypotheses, we developed photo-realistic human and monkey heads that were animated with motion-capture data from monkeys and humans. Exact control of expression dynamics was accomplished by a Bayesian machine-learning technique. Consistent with our hypothesis, we found that human observers learned cross-species expressions very quickly, with facial dynamics represented largely independently of facial shape. This result supports the co-evolution of the visual processing and motor control of facial expressions, while challenging appearance-based neural network theories of dynamic expression recognition.

