Perceiving Possibilities for Action: On the Necessity of Calibration and Perceptual Learning for the Visual Guidance of Action

Perception ◽  
10.1068/p5405 ◽  
2005 ◽  
Vol 34 (6) ◽  
pp. 717-740 ◽  
Author(s):  
Brett R Fajen

Tasks such as steering, braking, and intercepting moving objects constitute a class of behaviors, known as visually guided actions, which are typically carried out under continuous control on the basis of visual information. Several decades of research on visually guided action have resulted in an inventory of control laws that describe for each task how information about the sufficiency of one's current state is used to make ongoing adjustments. Although a considerable amount of important research has been generated within this framework, it fails to capture several aspects of these tasks that are essential for successful performance. The purpose of this paper is to provide an overview of the existing framework, discuss its limitations, and introduce a new framework that emphasizes the necessity of calibration and perceptual learning. Within the proposed framework, successful human performance on these tasks is a matter of learning to detect and calibrate optical information about the boundaries that separate possible from impossible actions. This resolves a long-standing incompatibility between theories of visually guided action and the concept of an affordance. The implications of adopting this framework for the design of experiments and models of visually guided action are discussed.

2008 ◽  
Vol 31 (2) ◽  
pp. 220-221 ◽  
Author(s):  
David Whitney

Accurate perception of moving objects would be useful; accurate visually guided action is crucial. Visual motion across the scene influences both the perceived location of objects and the trajectory of reaching movements toward them. In this commentary, I propose that the visual system assigns the position of any object based on the predominant motion present in the scene, and that this assignment is used to guide reaching movements, compensating for delays in visuomotor processing.


Author(s):  
I Caprara ◽  
P Janssen

Efficient object grasping requires the continuous control of arm and hand movements based on visual information. Previous studies have identified a network of parietal and frontal areas that is crucial for the visual control of prehension movements. Electrical microstimulation of 3D-shape-selective clusters in AIP during fMRI activates areas F5a and 45B, suggesting that these frontal areas may represent important downstream areas for object processing during grasping, but the role of areas F5a and 45B in grasping is unknown. To assess their causal role in the frontal grasping network, we reversibly inactivated 45B, F5a, and F5p during visually guided grasping in macaque monkeys. First, we recorded single-neuron activity in 45B, F5a, and F5p to identify sites with object responses during grasping. Then, we injected muscimol or saline to measure the grasping deficit induced by the temporary disruption of each of these three nodes in the grasping network. Inactivation of all three areas resulted in a significant increase in grasping time in both animals, with the strongest effect observed in area F5p. These results not only confirm a clear involvement of F5p, but also indicate causal contributions of areas F5a and 45B to visually guided object grasping.


2017 ◽  
Vol 372 (1717) ◽  
pp. 20160077 ◽  
Author(s):  
Anna Honkanen ◽  
Esa-Ville Immonen ◽  
Iikka Salmela ◽  
Kyösti Heimonen ◽  
Matti Weckström

Night vision is ultimately about extracting information from a noisy visual input. Several species of nocturnal insects exhibit complex visually guided behaviour in conditions where most animals are practically blind. The compound eyes of nocturnal insects produce strong responses to single photons and process them into meaningful neural signals, which are amplified by specialized neuroanatomical structures. While a lot is known about the light responses and the anatomical structures that promote pooling of responses to increase sensitivity, there is still a dearth of knowledge on the physiology of night vision. Retinal photoreceptors form the first bottleneck for the transfer of visual information. In this review, we cover the basics of what is known about physiological adaptations of insect photoreceptors for low-light vision. We will also discuss major enigmas of some of the functional properties of nocturnal photoreceptors, and describe recent advances in methodologies that may help to solve them and broaden the field of insect vision research to new model animals. This article is part of the themed issue ‘Vision in dim light’.


2005 ◽  
Vol 43 (2) ◽  
pp. 216-226 ◽  
Author(s):  
Jonathan S. Cant ◽  
David A. Westwood ◽  
Kenneth F. Valyear ◽  
Melvyn A. Goodale

2020 ◽  
pp. 095679762095485
Author(s):  
Mathieu Landry ◽  
Jason Da Silva Castanheira ◽  
Jérôme Sackur ◽  
Amir Raz

Suggestions can cause some individuals to miss or disregard existing visual stimuli, but can they infuse sensory input with nonexistent information? Although several prominent theories of hypnotic suggestion propose that mental imagery can change our perceptual experience, data to support this stance remain sparse. The present study addressed this lacuna, showing how suggesting the presence of physically absent, yet critical, visual information transforms an otherwise difficult task into an easy one. Here, we show how adult participants who are highly susceptible to hypnotic suggestion successfully hallucinated visual occluders on top of moving objects. Our findings support the idea that, at least in some people, suggestions can add perceptual information to sensory input. This observation adds meaningful weight to theoretical, clinical, and applied aspects of the brain and psychological sciences.


1997 ◽  
Vol 8 (3) ◽  
pp. 224-230 ◽  
Author(s):  
Rick O. Gilmore ◽  
Mark H. Johnson

The extent to which infants combine visual (i.e., retinal position) and nonvisual (eye or head position) spatial information in planning saccades relates to the issue of what spatial frame or frames of reference influence early visually guided action. We explored this question by testing infants from 4 to 6 months of age on the double-step saccade paradigm, which has shown that adults combine visual and eye position information into an egocentric (head- or trunk-centered) representation of saccade target locations. In contrast, our results imply that infants depend on a simple retinocentric representation at age 4 months, but by 6 months use egocentric representations more often to control saccade planning. Shifts in the representation of visual space for this simple sensorimotor behavior may index maturation in cortical circuitry devoted to visual spatial processing in general.


Perception ◽  
1997 ◽  
Vol 26 (1_suppl) ◽  
pp. 59-59
Author(s):  
J M Zanker ◽  
M P Davey

Visual information processing in primate cortex is based on a highly ordered representation of the surrounding world. In addition to the retinotopic mapping of the visual field, systematic variations of the orientation tuning of neurons are described electrophysiologically for the first stages of the visual stream. On the way to understanding the relation of position and orientation representation, in order to give an adequate account of cortical architecture, it will be an essential step to define the minimum spatial requirements for detection of orientation. We addressed the basic question of spatial limits for detecting orientation by comparing computer simulations of simple orientation filters with psychophysical experiments in which the orientation of small lines had to be detected at various positions in the visual field. At sufficiently high contrast levels, the minimum physical length of a line whose orientation can just be resolved is not constant when presented at various eccentricities, but covaries inversely with the cortical magnification factor. A line needs to span less than 0.2 mm on the cortical surface in order to be recognised as oriented, independently of the actual eccentricity at which the stimulus is presented. This seems to indicate that human performance for this task approaches the physical limits, requiring hardly more than approximately three input elements to be activated, in order to detect the orientation of a highly visible line segment. Combined with the estimates for receptive field sizes of orientation-selective filters derived from computer simulations, this experimental result may nourish speculations of how the rather local elementary process underlying orientation detection in the human visual system can be assembled to form much larger receptive fields of the orientation-sensitive neurons known to exist in the primate visual system.
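The scaling claim in this abstract (a fixed ~0.2 mm cortical span, with minimum line length covarying inversely with cortical magnification) can be illustrated with a short numerical sketch. This is not from the article: it assumes a standard inverse-linear model of human cortical magnification, M(E) = M0 / (1 + E/E2), with illustrative textbook-style constants M0 and E2 rather than values measured in the study.

```python
# Illustrative sketch (not from the article): if a line must span a fixed
# cortical extent (~0.2 mm) to be perceived as oriented, its minimum physical
# length grows with eccentricity as cortical magnification falls off.
# M0 and E2 below are assumed illustrative constants, not values from the study.

M0 = 17.3               # foveal cortical magnification, mm of cortex per degree (assumed)
E2 = 0.75               # eccentricity at which magnification halves, degrees (assumed)
CORTICAL_SPAN_MM = 0.2  # criterion cited in the abstract

def magnification(ecc_deg: float) -> float:
    """Inverse-linear model of cortical magnification (mm of cortex per degree)."""
    return M0 / (1.0 + ecc_deg / E2)

def min_line_length(ecc_deg: float) -> float:
    """Smallest line (degrees of visual angle) spanning CORTICAL_SPAN_MM of cortex."""
    return CORTICAL_SPAN_MM / magnification(ecc_deg)

for ecc in (0, 2, 5, 10):
    print(f"{ecc:>2} deg eccentricity -> minimum length {min_line_length(ecc):.3f} deg")
```

Under this model the threshold length grows linearly with eccentricity (doubling at E = E2, tripling at E = 2·E2), which is the inverse covariation the abstract describes.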

