A Significant Bilateral Field Advantage for Shapes Defined by Static and Motion Cues

Perception ◽  
10.1068/p6129 ◽  
2009 ◽  
Vol 38 (8) ◽  
pp. 1132-1143 ◽  
Author(s):  
Charles A Collin ◽  
Patricia A McMullen ◽  
Julie-Anne Séguin

Matching performance is better when pairs of visual stimuli are presented in bilateral conditions—in which one stimulus is presented to each side of the visual field—than in unilateral presentations—when both stimuli are presented to one side of the field. This is called the bilateral field advantage (BFA). The processing of visual motion has also been found to be more strongly integrated across the cerebral hemispheres than is the processing of static cues. However, these studies have not examined higher-order motion tasks, such as processing motion-defined form. To determine if the BFA generalises to such tasks, we measured the magnitude of the effect using a shape-matching task in which the stimuli were random polygons that were in motion, motion-defined, or static. The polygon pairs were presented either: (i) bilaterally, one to either side of the vertical meridian; (ii) unilaterally, both to one side of the vertical meridian (left or right visual fields); or (iii) centrally, vertically separated across the horizontal meridian (a control condition). An equal advantage of bilateral conditions over unilateral ones was found for all three types of polygon shape cues, showing that the BFA generalises to conditions where shapes are in motion and where shape is defined by motion. These findings are compatible with the notion that motion processing is strongly integrated across the cerebral hemispheres, and with the idea that this integration manifests itself with simple motion information, rather than with higher-order motion processing such as matching shapes defined by motion.

2019 ◽  
Vol 32 (1) ◽  
pp. 45-65 ◽  
Author(s):  
G. M. Hanada ◽  
J. Ahveninen ◽  
F. J. Calabro ◽  
A. Yengo-Kahn ◽  
L. M. Vaina

The everyday environment brings to our sensory systems competing inputs from different modalities. The ability to filter these multisensory inputs in order to identify and efficiently utilize useful spatial cues is necessary to detect and process the relevant information. In the present study, we investigate how feature-based attention affects the detection of motion across sensory modalities. We sought to determine how subjects use intramodal, cross-modal auditory, and combined audiovisual motion cues to attend to specific visual motion signals. The results showed that in most cases, both the visual and the auditory cues enhance feature-based orienting to a transparent visual motion pattern presented among distractor motion patterns. Whereas previous studies have shown cross-modal effects of spatial attention, our results demonstrate a spread of cross-modal feature-based attention cues, which have been matched for the detection threshold of the visual target. These effects were very robust in comparisons of the effects of valid vs. invalid cues, as well as in comparisons between cued and uncued valid trials. The effect of intramodal visual, cross-modal auditory, and bimodal cues also increased as a function of motion-cue salience. Our results suggest that orienting to visual motion patterns among distractors can be facilitated not only by intramodal priors, but also by feature-based cross-modal information from the auditory system.


2008 ◽  
Vol 99 (5) ◽  
pp. 2329-2346 ◽  
Author(s):  
Ryusuke Hayashi ◽  
Kenichiro Miura ◽  
Hiromitsu Tabata ◽  
Kenji Kawano

Brief movements of a large-field visual stimulus elicit short-latency tracking eye movements termed “ocular following responses” (OFRs). To address the question of whether OFRs can be elicited by purely binocular motion signals in the absence of monocular motion cues, we measured OFRs from monkeys using dichoptic motion stimuli, the monocular inputs of which were flickering gratings in spatiotemporal quadrature, and compared them with OFRs to standard motion stimuli including monocular motion cues. Dichoptic motion did elicit OFRs, although with longer latencies and smaller amplitudes. In contrast to these findings, we observed that other types of motion stimuli categorized as non-first-order motion, which is undetectable by detectors for standard luminance-defined (first-order) motion, did not elicit OFRs, although they did evoke the sensation of motion. These results indicate that OFRs can be driven solely by cortical visual motion processing after binocular integration, which is distinct from the process incorporating non-first-order motion for elaborated motion perception. To explore the nature of dichoptic motion processing in terms of interaction with monocular motion processing, we further recorded OFRs from both humans and monkeys using our novel motion stimuli, the monocular and dichoptic motion signals of which move in opposite directions with a variable motion intensity ratio. We found that monocular and dichoptic motion signals are processed in parallel to elicit OFRs, rather than suppressing each other in a winner-take-all fashion, and the results were consistent across the species.


Cephalalgia ◽  
2004 ◽  
Vol 24 (5) ◽  
pp. 363-372 ◽  
Author(s):  
AM McKendrick ◽  
DR Badcock

This study was designed to determine whether cortical motion processing abnormalities are present in individuals with migraine. Performance was measured using a visual motion coherence task (motion coherence perimetry, MCP) thought to depend on the operation of cortical area V5. Motion coherence thresholds were measured using stimuli composed of moving dots at 17 locations in the central ± 20° of visual field. Pre-cortical visual function was also measured using frequency doubling perimetry (FDP) at the same 17 locations. Several migraine subjects demonstrated significant pre-cortical visual functional abnormalities; however, most subjects had normal visual fields when measured with FDP. Abnormal MCP performance was measured in 15 of 19 migraine-with-aura subjects, and 11 of 17 migraine-without-aura subjects. A decreased ability to detect coherent motion may be explained by an increase in baseline neuronal noise, which would be consistent with the concept of cortical hyperexcitability in migraine.


2019 ◽  
Author(s):  
Sha Sun ◽  
Zhentao Zuo ◽  
Michelle Manxiu Ma ◽  
Chencan Qian ◽  
Lin Chen ◽  
...  

Visual stabilization is an inevitable requirement for animals during active motion interaction with the environment. Visual motion cues of the surroundings, or cues induced by self-generated behaviors, are perceived and then trigger proper motor responses mediated by neural representations conceptualized as the internal model: one part of it predicts the consequences of sensory dynamics as a forward model, while another part generates proper motor control as a reverse model. However, the neural circuits between the two models remain mostly unknown. Here, we demonstrate that an internal component, the efference copy, coordinates the two models in a push-pull manner by generating extra reset saccades during active motion processing in larval zebrafish. Calcium imaging indicated that the saccade preparation circuit is enhanced while the velocity integration circuit is inhibited during the interaction, balancing the internal representations from both directions. This is the first model of efference copy on visual stabilization beyond the sensorimotor stage.


Perception ◽  
2017 ◽  
Vol 47 (1) ◽  
pp. 30-43 ◽  
Author(s):  
Nathan H. Heller ◽  
Nicolas Davidenko

Motion processing is thought of as a hierarchical system composed of higher and lower order components. Past research has shown that these components can be dissociated using motion priming paradigms in which the lower order system produces negative priming while the higher order system produces positive priming. By manipulating various stimulus parameters, researchers have probed these two systems using bistable test stimuli that permit only two motion interpretations. Here we employ maximally ambiguous test stimuli composed of randomly refreshing pixels in a task that allows observers to report more than just two types of motion percepts. We show that even with such stimuli, motion priming can constrain the unstructured random pixel patterns into coherent percepts of positive or negative apparent motion. Moreover, we find that the higher order system is uniquely susceptible to cognitive influences, as evidenced by a significant suppression of positive priming in the presence of alternative response options.


Author(s):  
Edita Poljac ◽  
Ab de Haan ◽  
Gerard P. van Galen

Two experiments investigated the way that advance preparation influences general task execution in reaction-time matching tasks. Response times (RTs) and error rates were measured for switching and nonswitching conditions in a color- and shape-matching task. The task blocks could repeat (task repetition) or alternate (task switch), and the preparation interval (PI) was manipulated within-subjects (Experiment 1) and between-subjects (Experiment 2). The study showed comparable general task performance after a long PI in both experiments, for both the within- and between-subjects PI manipulations. After a short PI, however, general task performance improved significantly under the between-subjects manipulation of the PI. Furthermore, both experiments demonstrated an analogous preparation effect for both task switches and task repetitions. In addition, a consistent switch cost throughout the whole run of trials and a within-run slowing effect were observed in both experiments. Altogether, the present study implies that the effects of advance preparation go beyond the first trials and confirms different points of the activation approach (Altmann, 2002) to task switching.

