Evidence of common and separate eye and hand accumulators underlying flexible eye-hand coordination

2017 ◽  
Vol 117 (1) ◽  
pp. 348-364 ◽  
Author(s):  
Sumitash Jana ◽  
Atul Gopal ◽  
Aditya Murthy

Eye and hand movements are initiated by anatomically separate regions in the brain, and yet these movements can be flexibly coupled and decoupled, depending on the need. The computational architecture that enables this flexible coupling of independent effectors is not understood. Here, we studied the computational architecture that enables flexible eye-hand coordination using a drift-diffusion framework, which predicts that the variability of the reaction time (RT) distribution scales with its mean. We show that a common stochastic accumulator to threshold, followed by a noisy effector-dependent delay, explains eye-hand RT distributions and their correlation in a visual search task that required decision-making, while an interactive eye and hand accumulator model did not. In contrast, in an eye-hand dual task, an interactive model better predicted the observed correlations and RT distributions than a common accumulator model. Notably, these two models could only be distinguished on the basis of the variability and not the means of the predicted RT distributions. Additionally, signatures of separate initiation signals were also observed in a small fraction of trials in the visual search task, implying that these distinct computational architectures were not a manifestation of the task design per se. Taken together, our results suggest two unique computational architectures for eye-hand coordination, with task context biasing the brain toward instantiating one of the two architectures.

NEW & NOTEWORTHY Previous studies on eye-hand coordination have considered mainly the means of eye and hand reaction time (RT) distributions. Here, we leverage the approximately linear relationship between the mean and standard deviation of RT distributions, as predicted by the drift-diffusion model, to propose the existence of two distinct computational architectures underlying coordinated eye-hand movements. These architectures, for the first time, provide a computational basis for the flexible coupling between eye and hand movements.
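The distinction between the two architectures can be illustrated with a small simulation. The sketch below uses hypothetical drift, threshold, and motor-delay parameters (none are taken from the paper): a single shared accumulator followed by independent noisy motor delays yields strongly correlated eye and hand RTs, whereas two independent accumulators do not.

```python
import numpy as np

rng = np.random.default_rng(0)

def decision_time(drift=0.2, noise=1.0, threshold=30.0, dt=1.0):
    """Time for one noisy accumulator to reach threshold (Euler steps).
    All parameter values are hypothetical, chosen only for illustration."""
    x, t = 0.0, 0.0
    while x < threshold:
        x += drift * dt + noise * np.sqrt(dt) * rng.standard_normal()
        t += dt
    return t

n = 2000
# Common-accumulator architecture: one decision time per trial, plus an
# independent noisy motor delay for each effector.
common = np.array([decision_time() for _ in range(n)])
eye_c = common + rng.normal(50, 5, n)    # eye delay: mean 50, SD 5 (assumed)
hand_c = common + rng.normal(90, 15, n)  # hand delay: mean 90, SD 15 (assumed)

# Independent-accumulator architecture: each effector races its own accumulator.
eye_i = np.array([decision_time() for _ in range(n)]) + rng.normal(50, 5, n)
hand_i = np.array([decision_time() for _ in range(n)]) + rng.normal(90, 15, n)

r_common = np.corrcoef(eye_c, hand_c)[0, 1]
r_indep = np.corrcoef(eye_i, hand_i)[0, 1]
```

Because the shared decision time contributes most of the RT variance, `r_common` comes out close to 1 while `r_indep` hovers near 0; the model comparison described in the abstract exploits exactly this difference in predicted eye-hand RT correlations.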

2015 ◽  
Vol 113 (7) ◽  
pp. 2033-2048 ◽  
Author(s):  
Atul Gopal ◽  
Pooja Viswanathan ◽  
Aditya Murthy

The computational architecture that enables the flexible coupling between otherwise independent eye and hand effector systems is not understood. Using a drift-diffusion framework, in which the variability of the reaction time (RT) distribution scales with mean RT, we tested the ability of a common stochastic accumulator to explain eye-hand coordination. With a combination of behavior, computational modeling, and electromyography, we show how a single stochastic accumulator to threshold, followed by noisy effector-dependent delays, explains eye-hand RT distributions and their correlation, while an alternative model with independent, interacting eye and hand accumulators does not. Interestingly, the common accumulator model did not explain the RT distributions of the same subjects when they made eye and hand movements in isolation. Taken together, these data suggest that a dedicated circuit underlies coordinated eye-hand planning.
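Both abstracts above lean on the drift-diffusion prediction that RT variability scales with mean RT. A minimal sketch of that scaling (with hypothetical threshold, noise, and drift values) samples first-passage times directly from the Wald (inverse-Gaussian) distribution, which is the exact first-passage law for a single-bound diffusion, and checks that the standard deviation of the RT distribution rises with its mean across drift conditions:

```python
import numpy as np

rng = np.random.default_rng(1)
a, sigma = 30.0, 1.0  # threshold and diffusion noise (hypothetical values)
drifts = np.array([0.15, 0.20, 0.25, 0.30])

means, sds = [], []
for mu in drifts:
    # First-passage times of a single-bound diffusion follow a Wald
    # (inverse-Gaussian) distribution with mean a/mu and shape a^2/sigma^2.
    rts = rng.wald(a / mu, a**2 / sigma**2, size=20000)
    means.append(rts.mean())
    sds.append(rts.std())

# Lower drift -> longer mean RT and wider RT distribution; across conditions
# the mean and SD of the RT distribution are strongly correlated.
r = np.corrcoef(means, sds)[0, 1]
```

The exact mean-SD relation for this process is a power law rather than a straight line, but over a realistic range of drift rates it is well approximated as linear, which is the regularity these studies exploit.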


Author(s):  
P. Manivannan ◽  
Sara Czaja ◽  
Colin Drury ◽  
Chi Ming Ip

Visual search is an important component of many real-world tasks such as industrial inspection and driving. Several studies have shown that age affects visual search performance: in general, older people perform more poorly on such tasks than younger people. However, there is controversy regarding the source of this age-performance effect. The objective of this study was to examine the relationship between component abilities and visual search performance in order to identify the locus of age-related performance differences. Six abilities, including reaction time, working memory, selective attention, and spatial localization, were identified as important components of visual search performance. Thirty-two subjects ranging in age from 18 to 84 years, categorized into three age groups (young, middle, and older), participated in the study. Their component abilities were measured, and they performed a visual search task that varied in complexity in terms of the type of targets to be detected. Significant relationships were found between some of the component skills and search performance, and significant age effects were also observed. A model was developed using hierarchical multiple linear regression to explain the variance in search performance. Results indicated that reaction time, selective attention, and age were important predictors of search performance, with reaction time and selective attention accounting for most of the variance.
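A hierarchical regression of this kind can be sketched with ordinary least squares: enter the ability measures first, then test whether age adds explanatory variance. The data below are synthetic and purely illustrative; the variable names and effect sizes are assumptions, not the study's measurements.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 32  # subjects, matching the study's sample size

# Synthetic standardized predictors (illustrative relationships only).
age = rng.normal(size=n)
reaction_time = 0.5 * age + rng.normal(size=n)        # RT slows with age
selective_attention = -0.4 * age + rng.normal(size=n)  # attention declines
search_perf = (-0.6 * reaction_time + 0.5 * selective_attention
               - 0.2 * age + rng.normal(scale=0.5, size=n))

def r_squared(X, y):
    """R^2 of an OLS fit with intercept."""
    X = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return 1 - resid.var() / y.var()

# Hierarchical steps: ability block first, then age on top of it.
r2_abilities = r_squared(
    np.column_stack([reaction_time, selective_attention]), search_perf)
r2_full = r_squared(
    np.column_stack([reaction_time, selective_attention, age]), search_perf)
```

The increment `r2_full - r2_abilities` is the variance uniquely attributable to age once the ability measures are controlled for, which is the quantity a hierarchical analysis of this design interprets.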


2015 ◽  
Vol 15 (12) ◽  
pp. 1370
Author(s):  
Carissa Romero ◽  
Kandace Markovich ◽  
Yvonne Johnson ◽  
Eriko Self

2009 ◽  
Vol 102 (5) ◽  
pp. 2681-2692 ◽  
Author(s):  
Joo-Hyun Song ◽  
Robert M. McPeek

We examined the coordination of saccades and reaches in a visual search task in which monkeys were rewarded for reaching to an odd-colored target among distractors. Eye movements were unconstrained, and monkeys typically made one or more saccades before initiating a reach. Target selection for reaching and saccades was highly correlated with the hand and eyes landing near the same final stimulus both for correct reaches to the target and for incorrect reaches to a distractor. Incorrect reaches showed a bias in target selection: they were directed to the distractor in the same hemifield as the target more often than to other distractors. A similar bias was seen in target selection for the initial saccade in correct reaching trials with multiple saccades. We also examined the temporal coupling of saccades and reaches. In trials with a single saccade, a reaching movement was made after a fairly stereotyped delay. In multiple-saccade trials, a reach to the target could be initiated near or even before the onset of the final target-directed saccade. In these trials, the initial trajectory of the reach was often directed toward the fixated distractor before veering toward the target around the time of the final saccade. In virtually all cases, the eyes arrived at the target before the hand, and remained fixated until reach completion. Overall, these results are consistent with flexible temporal coupling of saccade and reach initiation, but fairly tight coupling of target selection for the two types of action.


2019 ◽  
Author(s):  
Cherie Zhou ◽  
Monicque M. Lorist ◽  
Sebastiaan Mathôt

Abstract
During visual search, task-relevant representations in visual working memory (VWM), known as attentional templates, are assumed to guide attention. A current debate concerns whether only one (Single-Item-Template hypothesis, or SIT) or multiple (Multiple-Item-Template hypothesis, or MIT) items can serve as attentional templates simultaneously. The current study was designed to test these two hypotheses. Participants memorized two colors, prior to a visual-search task in which the target and the distractor could match or not match the colors held in VWM. Robust attentional guidance was observed when one of the memory colors was presented as the target (reduced response times [RTs] on target-match trials) or the distractor (increased RTs on distractor-match trials). We constructed two drift-diffusion models that implemented the MIT and SIT hypotheses, which are similar in their predictions about overall RTs, but differ in their predictions about RTs on individual trials. Critically, simulated RT distributions and error rates revealed a better match of the MIT hypothesis to the observed data than the SIT hypothesis. Taken together, our findings provide behavioral and computational evidence for the concurrent guidance of attention by multiple items in VWM.

Significance statement
Theories differ in how many items within visual working memory can guide attention at the same time. This question is difficult to address, because multiple- and single-item-template theories make very similar predictions about average response times. Here we use drift-diffusion modeling in addition to behavioral data, to model response times at an individual level. Crucially, we find that our model of the multiple-item-template theory predicts human behavior much better than our model of the single-item-template theory; that is, modeling of behavioral data provides compelling evidence for multiple attentional templates that are simultaneously active.
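The drift-diffusion logic behind the target-match and distractor-match effects can be sketched by letting a memory-matching target raise, and a memory-matching distractor lower, the drift rate, then comparing the simulated RT distributions. All parameter values here are hypothetical, not those fitted in the study.

```python
import numpy as np

rng = np.random.default_rng(3)
a, sigma = 25.0, 1.0  # threshold and diffusion noise (hypothetical values)

def sim_rts(drift, n=20000):
    # Single-bound diffusion first-passage times follow a Wald distribution
    # with mean a/drift and shape a^2/sigma^2.
    return rng.wald(a / drift, a**2 / sigma**2, size=n)

# Under the MIT hypothesis, both memorized colors modulate drift at once:
# a memory-matching target speeds evidence accumulation, a memory-matching
# distractor slows it (drift values are illustrative assumptions).
rt_neutral = sim_rts(0.20)
rt_target_match = sim_rts(0.26)
rt_distractor_match = sim_rts(0.16)
```

This reproduces the qualitative pattern in the abstract (faster RTs on target-match trials, slower on distractor-match trials); the models in the study are distinguished not by these means but by trial-level RT distributions and error rates.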

