Revision: Is Visual Perception a Requisite for Visual Imagery?

Perception ◽  
10.1068/p3360 ◽  
2002 ◽  
Vol 31 (6) ◽  
pp. 717-731 ◽  
Author(s):  
Diego Kaski

Vision is the most highly developed sense in humans and represents the doorway through which most of our knowledge of the external world arises. Visual imagery can be defined as the representation of perceptual information in the absence of visual input. Visual imagery has been shown to complement vision in this acquisition of knowledge: it is used in memory retrieval, problem solving, and the recognition of object properties. The processes underlying visual imagery have been assimilated to those of the visual system and are believed to share a neural substrate. However, results from studies of congenitally and cortically blind subjects have challenged this hypothesis. Here I review the currently available evidence.

2021 ◽  
Author(s):  
Yingying Huang ◽  
Frank Pollick ◽  
Ming Liu ◽  
Delong Zhang

Abstract Visual mental imagery and visual perception have been shown to share a hierarchical topological visual structure of neural representation. Meanwhile, many studies have reported a dissociation between the neural substrates of mental imagery and perception in both function and structure. However, we have limited knowledge of how the visual hierarchical cortex is involved in internally generated mental imagery compared with perception driven by visual input. Here we used a dataset from previous fMRI research (Horikawa & Kamitani, 2017), which included a visual perception and an imagery experiment with human participants. We trained two types of voxel-wise encoding models, based on Gabor features and on activity patterns of higher visual areas, to predict activity in the early visual cortex (EVC, i.e., V1, V2, V3) during perception, and then evaluated the performance of these models during mental imagery. Our results showed that during both perception and imagery, activity in the EVC could be independently predicted by the Gabor features and by the activity of higher visual areas via the encoding models, suggesting that perception and imagery may share neural representation in the EVC. We further found a Gabor-specific and a non-Gabor-specific neural response pattern to stimuli in the EVC, both shared by perception and imagery. These findings provide insight into how visual perception and imagery share representation in the EVC.


1973 ◽  
Vol 37 (3) ◽  
pp. 683-693
Author(s):  
Mark H. Healy ◽  
David Symmes ◽  
Ayub K. Ommaya

Contrary to previous reports, adaptation to laterally displaced visual input does require visual perception of the visuomotor mismatch. Using 4 rhesus monkeys as Ss, it was found that reaching errors induced by wearing 20-diopter wedge prisms remained at optically predicted magnitudes for 24 hr., provided that no visual misreaching cues were available. Unrestricted head movement did not provide such cues. However, terminal viewing of the prism-induced reaching errors produced dramatic, rapid adaptation. Tactile and proprioceptive discordance cues alone, without visual feedback, were not corrective.


NeuroImage ◽  
2014 ◽  
Vol 100 ◽  
pp. 237-243 ◽  
Author(s):  
Daniela Dentico ◽  
Bing Leung Cheung ◽  
Jui-Yang Chang ◽  
Jeffrey Guokas ◽  
Melanie Boly ◽  
...  

2012 ◽  
Vol 5 (1) ◽  
pp. 1-10
Author(s):  
Mateusz Woźniak

The brain system responsible for visual perception has been studied extensively. The visual system analyses a wide variety of stimuli to let us create an adaptive representation of the surrounding world. Among the vast amounts of processed information are visual cues describing our own bodies. These cues constitute our so-called body image. We tend to perceive it as a relatively stable structure, but recent research, especially within the domain of virtual reality, casts doubt on this assumption. New problems arise concerning how we perceive others' and our own bodies in virtual space, and how this influences our experience of ourselves and of actual reality. Recent studies show that how we see our avatars influences how we behave in artificial worlds. This introduces a brand-new way of thinking about human embodiment. Virtual reality allows us to transcend the ordinary visual-sensory-motor integration and create new ways to experience embodiment, temporarily replacing the permanent body image with almost any imaginable digital one.


1993 ◽  
Vol 114 (1) ◽  
pp. 25-35 ◽  
Author(s):  
H.L. Lagrèze ◽  
A. Hartmann ◽  
G. Anzinger ◽  
A. Schaub ◽  
A. Deister

Perception ◽  
1997 ◽  
Vol 26 (1_suppl) ◽  
pp. 295-295
Author(s):  
A Oliva ◽  
P G Schyns

When people categorise complex stimuli such as faces, they might flexibly use the perceptual information available from the visual input. Three experiments were run to test this hypothesis with two different categorisations (gender and expression) of identical face stimuli. Stimuli were hybrids (Schyns and Oliva, 1994 Psychological Science 5 195–200): they combined either a man or a woman with a particular expression at a coarse spatial scale with a face of the opposite gender and a different expression at the fine spatial scale. In experiment 1 we tested whether a gender vs an expression categorisation task tapped preferentially into a different spatial scale of the hybrids. Results showed that expression was biased to the fine scale, but that gender was not biased. In experiment 2 the same task was replicated, following learning of the identity of the faces. It was then found that gender also became biased to the fine scale. In experiment 3 the expression task was changed to an identification of each expression, to establish whether this could reverse the scale biases observed in experiments 1 and 2. Results suggest that different categorisations of identical faces use different perceptual cues, and thus that the nature of a task changes the representation of a stimulus.
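Hybrid stimuli of the kind used above are typically built by combining the low-spatial-frequency content of one image with the high-spatial-frequency content of another. A minimal sketch, using Gaussian low-pass filtering on toy arrays rather than actual face images (the images, sizes, and sigma here are illustrative assumptions):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def make_hybrid(img_a, img_b, sigma=4.0):
    """Combine the coarse spatial scale of img_a with the fine scale of img_b.

    Low-pass img_a with a Gaussian; take img_b's high-pass residual
    (img_b minus its own low-pass version); sum the two components.
    """
    coarse = gaussian_filter(img_a.astype(float), sigma)
    fine = img_b.astype(float) - gaussian_filter(img_b.astype(float), sigma)
    return coarse + fine

# Toy 64x64 stand-ins for two faces: a smooth gradient (coarse content)
# and a fine grating (high-frequency content).
face_a = np.linspace(0.0, 1.0, 64)[None, :] * np.ones((64, 1))
face_b = np.sin(np.arange(64) * 2.0)[None, :] * np.ones((64, 1))
hybrid = make_hybrid(face_a, face_b)
```

Viewed from afar (or blurred), such a hybrid resembles img_a; up close, the fine detail of img_b dominates, which is what lets the two categorisation tasks tap different scales of the same stimulus.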


Perception ◽  
1997 ◽  
Vol 26 (1_suppl) ◽  
pp. 132-132
Author(s):  
S Edelman ◽  
S Duvdevani-Bar

To recognise a previously seen object, the visual system must overcome the variability in the object's appearance caused by factors such as illumination and pose. It is possible to counter the influence of these factors by learning to interpolate between stored views of the target object, taken under representative combinations of viewing conditions. Routine visual tasks, however, typically require not so much recognition as categorisation, that is, making sense of objects not seen before. Despite persistent practical difficulties, theorists in computer vision and visual perception traditionally favour the structural route to categorisation, according to which forming a description of a novel shape in terms of its parts and their spatial relationships is a prerequisite to the ability to categorise it. In contrast, we demonstrate that knowledge of instances of each of several representative categories can provide the necessary computational substrate for the categorisation of their new instances, as well as for the representation and processing of radically novel shapes not belonging to any of the familiar categories. The representational scheme underlying this approach, according to which objects are encoded by their similarities to entire reference shapes (S Edelman, 1997 Behavioral and Brain Sciences, in press), is computationally viable, and is readily mapped onto the mechanisms of biological vision revealed by recent psychophysical and physiological studies.
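The encode-by-similarity scheme described above can be illustrated in a few lines: represent a shape by its similarities to a fixed set of stored reference shapes, then categorise a novel instance by its most similar reference. The feature space, reference shapes, and labels below are illustrative assumptions, not the authors' actual stimuli.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical shape descriptors: each object is a point in a 10-D feature
# space; reference shapes are stored exemplars of known categories.
reference_shapes = rng.standard_normal((5, 10))
reference_labels = ["car", "dog", "cup", "chair", "tree"]

def similarity_vector(shape, references):
    """Encode a shape by its similarities (negative distances) to references."""
    return -np.linalg.norm(references - shape, axis=1)

def categorise(shape, references, labels):
    """Assign the category of the most similar reference shape."""
    sims = similarity_vector(shape, references)
    return labels[int(np.argmax(sims))]

# A novel instance near the "dog" reference is categorised as a dog.
novel = reference_shapes[1] + 0.05 * rng.standard_normal(10)
print(categorise(novel, reference_shapes, reference_labels))  # → dog
```

The point of the scheme is that the similarity vector itself is the representation: radically novel shapes still get a well-defined encoding (their similarities to all references) even when no stored label applies.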

