Bayesian aggregation of independent successive visual inputs.

1971 ◽  
Vol 90 (2) ◽  
pp. 300-305 ◽  
Author(s):  
Stuart M. Keeley ◽  
Michael E. Doherty
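The titular idea, Bayesian aggregation of independent successive inputs, can be sketched with a minimal binary-hypothesis update, in which each independent observation multiplies the posterior odds by its likelihood ratio. This is only an illustration of the concept; the priors and likelihoods below are invented for the example and are not from the 1971 study:

```python
def bayes_aggregate(prior, likelihoods):
    """Sequentially update P(H) with independent observations.

    Each observation is a pair (P(obs | H), P(obs | not H));
    independence lets the updates be applied one after another.
    """
    posterior = prior
    for p_obs_given_h, p_obs_given_not_h in likelihoods:
        num = posterior * p_obs_given_h
        den = num + (1 - posterior) * p_obs_given_not_h
        posterior = num / den
    return posterior

# Three independent inputs, each mildly favouring H (0.6 vs 0.4),
# starting from an uninformative prior of 0.5:
print(bayes_aggregate(0.5, [(0.6, 0.4)] * 3))  # 0.7714285714285715
```

Because the observations are independent, the same result is obtained by multiplying all three likelihood ratios into the prior odds at once: (0.6/0.4)³ = 3.375, giving a posterior of 3.375 / 4.375 = 27/35.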
2009 ◽  
Vol 106 (37) ◽  
pp. 15996-16001 ◽  
Author(s):  
Christopher L. Striemer ◽  
Craig S. Chapman ◽  
Melvyn A. Goodale

When we reach toward objects, we easily avoid potential obstacles located in the workspace. Previous studies suggest that obstacle avoidance relies on mechanisms in the dorsal visual stream in the posterior parietal cortex. One fundamental question that remains unanswered is where the visual inputs to these dorsal-stream mechanisms are coming from. Here, we provide compelling evidence that these mechanisms can operate in “real-time” without direct input from primary visual cortex (V1). In our first experiment, we used a reaching task to demonstrate that an individual with a dense left visual field hemianopia after damage to V1 remained strikingly sensitive to the position of unseen static obstacles placed in his blind field. Importantly, in a second experiment, we showed that his sensitivity to the same obstacles in his blind field was abolished when a short 2-s delay (without vision) was introduced before reach onset. These findings have far-reaching implications, not only for our understanding of the time constraints under which different visual pathways operate, but also in relation to how these seemingly “primitive” subcortical visual pathways can control complex everyday behavior without recourse to conscious vision.


2011 ◽  
Vol 105 (4) ◽  
pp. 1558-1573 ◽  
Author(s):  
Yu-Ting Mao ◽  
Tian-Miao Hua ◽  
Sarah L. Pallas

Sensory neocortex is capable of considerable plasticity after sensory deprivation or damage to input pathways, especially early in development. Although plasticity can often be restorative, sometimes novel, ectopic inputs invade the affected cortical area. Invading inputs from other sensory modalities may compromise the original function or even take over, imposing a new function and preventing recovery. Using ferrets whose retinal axons were rerouted into auditory thalamus at birth, we were able to examine the effect of varying the degree of ectopic, cross-modal input on reorganization of developing auditory cortex. In particular, we assayed whether the invading visual inputs and the existing auditory inputs competed for or shared postsynaptic targets and whether the convergence of input modalities would induce multisensory processing. We demonstrate that although the cross-modal inputs create new visual neurons in auditory cortex, some auditory processing remains. The degree of damage to auditory input to the medial geniculate nucleus was directly related to the proportion of visual neurons in auditory cortex, suggesting that the visual and residual auditory inputs compete for cortical territory. Visual neurons were not segregated from auditory neurons but shared target space even on individual target cells, substantially increasing the proportion of multisensory neurons. Thus spatial convergence of visual and auditory input modalities may be sufficient to expand multisensory representations. Together these findings argue that early, patterned visual activity does not drive segregation of visual and auditory afferents and suggest that auditory function might be compromised by converging visual inputs. These results indicate possible ways in which multisensory cortical areas may form during development and evolution. They also suggest that rehabilitative strategies designed to promote recovery of function after sensory deprivation or damage need to take into account that sensory cortex may become substantially more multisensory after alteration of its input during development.


2021 ◽  
Vol 11 (1) ◽  
Author(s):  
Jordan Navarro ◽  
Otto Lappi ◽  
François Osiurak ◽  
Emma Hernout ◽  
Catherine Gabaude ◽  
...  

Active visual scanning of the scene is a key task element in all forms of human locomotion. In the field of driving, models of steering (lateral control) and speed adjustment (longitudinal control) are largely based on drivers’ visual inputs. Despite the knowledge gained on gaze behaviour behind the wheel, our understanding of the sequential aspects of the gaze strategies that actively sample that input remains restricted. Here, we apply scan path analysis to investigate sequences of visual scanning in manual and highly automated simulated driving. Five stereotypical visual sequences were identified under manual driving: forward polling (i.e. far-road exploration), guidance, backwards polling (i.e. near-road exploration), scenery, and speed-monitoring scan paths. Previously undocumented backwards polling scan paths were the most frequent. Under highly automated driving, the relative frequency of backwards polling scan paths decreased, the relative frequency of guidance scan paths increased, and scan paths specific to automation supervision appeared. The results shed new light on the gaze patterns engaged while driving. Methodological and empirical questions for future studies are discussed.
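The reported shift in relative frequencies of scan-path categories between manual and automated driving amounts to tallying classified scan paths and normalising by their total. A minimal sketch, using hypothetical category labels and counts rather than the study's actual gaze data:

```python
from collections import Counter


def relative_frequencies(scanpath_labels):
    """Fraction of classified scan paths falling in each category."""
    counts = Counter(scanpath_labels)
    total = sum(counts.values())
    return {category: n / total for category, n in counts.items()}


# Hypothetical classified scan paths from one manual-driving session:
manual = (["backwards_polling"] * 5 + ["guidance"] * 3 +
          ["forward_polling"] * 2)
freqs = relative_frequencies(manual)
print(freqs["backwards_polling"])  # 0.5
```

Comparing such frequency dictionaries across driving conditions (manual vs. highly automated) is one simple way to express the kind of shift the abstract describes.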


2003 ◽  
Vol 3 (4) ◽  
pp. 251-259
Author(s):  
Laurel Kincl ◽  
Amit Bhattacharya ◽  
Paul Succop ◽  
Angshuman Bagchee

Maintenance of upright balance involves an interplay between sensory (somatosensory, vestibular, and visual) inputs and neuro-motor outputs. Visual spatial perception (VSP) of vertical and horizontal orientation plays a significant role in the maintenance of upright balance. For this experiment, a custom-designed computer program randomly generated 42 images of horizontal and vertical lines at various angles for 60 industrial workers (39 ± 9.8 years). Half of the workers had more than three years of experience working on inclined and/or elevated surfaces. The main effects investigated included the within-subject factor of standing-surface inclination (0°, 14°, and 26°), job experience (number of months), and postural workload (0%, 50%, or 100%). The VSP outcome measure was the count of correct responses to the angles presented. Inclination did not have a significant effect on VSP, although the parameter estimates indicated fewer correct responses on the inclined surfaces. Postural workload significantly affected VSP: with increased workload, fewer correct responses were given. Finally, job experience was found to improve VSP response scores. In summary, these results indicate that job experience increases VSP accuracy, while workload and inclined work surfaces decrease it.


2021 ◽  
Vol 25 (1) ◽  
pp. 39-42
Author(s):  
Shuochao Yao ◽  
Jinyang Li ◽  
Dongxin Liu ◽  
Tianshi Wang ◽  
Shengzhong Liu ◽  
...  

Future mobile and embedded systems will be smarter and more user-friendly. They will perceive the physical environment, understand human context, and interact with end-users in a human-like fashion. Daily objects will be capable of leveraging sensor data to perform complex estimation and recognition tasks, such as recognizing visual inputs, understanding voice commands, tracking objects, and interpreting human actions. This raises important research questions on how to endow low-end embedded and mobile devices with the appearance of intelligence despite their resource limitations.


1995 ◽  
Vol 66 (2) ◽  
pp. 313-351 ◽  
Author(s):  
Philippe Mongin

Author(s):  
Matteo Venanzi ◽  
John Guiver ◽  
Gabriella Kazai ◽  
Pushmeet Kohli ◽  
Milad Shokouhi

2015 ◽  
Vol 22 (6) ◽  
pp. 1814-1819 ◽  
Author(s):  
Carola Salvi ◽  
Emanuela Bricolo ◽  
Steven L. Franconeri ◽  
John Kounios ◽  
Mark Beeman
