Perceptual Invariance
Recently Published Documents

Total documents: 28 (five years: 3)
H-index: 9 (five years: 0)

2021
Author(s): Katie H. Long, Justin D. Lieber, Sliman J. Bensmaia

Abstract: We are exquisitely sensitive to the microstructure and material properties of surfaces. In the peripheral nerves, two separate mechanisms convey texture information: coarse textural features are encoded in spatial patterns of activation that reflect their spatial layout, and fine features are encoded in highly repeatable, texture-specific temporal spiking patterns evoked as the skin moves across the surface. In the present study, we examined whether this temporal code is preserved in the responses of neurons in somatosensory cortex. To this end, we scanned a diverse set of everyday textures across the fingertip of awake macaques while recording the responses evoked in individual cortical neurons. We found that temporal spiking patterns are highly repeatable across multiple presentations of the same texture, with millisecond precision. As a result, texture identity can be reliably decoded from the temporal patterns themselves, even after the information carried in the spike rates is eliminated. However, a neural code that combines rate and timing is more informative than either code in isolation. The temporal precision of the texture response is heterogeneous across cortical neurons and depends on the submodality composition of their input and on their location along the somatosensory neuraxis. Furthermore, temporal spiking patterns in cortex dilate and contract with decreases and increases in scanning speed, and this systematic relationship between speed and patterning may contribute to the observed perceptual invariance to speed. Finally, we find that the quality of a texture percept can be better predicted when these temporal patterns are taken into consideration. We conclude that high-precision spike timing complements rate-based signals to encode texture in somatosensory cortex.
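To make the decoding analysis described above concrete, the following is a minimal sketch (not the authors' code) of how texture identity might be read out from millisecond-scale temporal spiking patterns: spike trains are binned at fine resolution, the mean rate is removed so that only timing carries information, and a nearest-template classifier assigns each trial to a texture. The binning, smoothing, and speed-rescaling choices are illustrative assumptions.

```python
import numpy as np

def temporal_pattern(spike_times_s, duration_s=1.0, bin_ms=2.0, sigma_ms=4.0):
    """Bin a spike train at millisecond resolution and smooth it lightly,
    yielding a temporal-pattern vector for template matching."""
    edges = np.arange(0.0, duration_s + 1e-9, bin_ms / 1000.0)
    counts, _ = np.histogram(spike_times_s, bins=edges)
    half = int(3 * sigma_ms / bin_ms)
    t = np.arange(-half, half + 1) * bin_ms
    kernel = np.exp(-0.5 * (t / sigma_ms) ** 2)
    return np.convolve(counts, kernel / kernel.sum(), mode="same")

def decode_texture(spike_times_s, templates):
    """Nearest-template classifier on mean-subtracted patterns, so that the
    overall spike rate carries no information (timing only)."""
    v = temporal_pattern(spike_times_s)
    v = v - v.mean()
    def score(tmpl):
        u = tmpl - tmpl.mean()
        return float(v @ u) / (np.linalg.norm(v) * np.linalg.norm(u) + 1e-12)
    return max(templates, key=lambda name: score(templates[name]))

def rescale_spikes_for_speed(spike_times_s, speed_ratio):
    """Temporal dilation/contraction: spike times contract when the surface
    is scanned faster (speed_ratio > 1) and dilate when scanned slower."""
    return np.asarray(spike_times_s) / speed_ratio
```

In this sketch, `templates` would hold the average temporal pattern of each texture computed from training trials with `temporal_pattern`, and `rescale_spikes_for_speed` illustrates the speed-dependent dilation and contraction noted in the abstract.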


2021, Vol. 19, pp. 148-155
Author(s): Madhur Mangalam, Cristian Cuadra, Tarkeshwar Singh

2020
Author(s): Jonathan Melchor, Isaac Morán, José Vergara, Tonatiuh Figueroa, Javier Perez-Orive, ...

Abstract: The supplementary motor area (SMA) of the brain is critical for integrating memory and sensory signals into perceptual decisions. For example, in macaques, SMA activity correlates with decisions based on the comparison of sounds [1]. In humans, functional MRI shows SMA activation during the invariant recognition of words pronounced by different speakers [2]. Nevertheless, the neuronal correlates of perceptual invariance are unknown. Here we show that the SMA of macaques associates novel sounds with behaviors triggered by similar learned categories when recognizing sounds such as words. Notably, the neuronal activity at the single-neuron and population levels correlates with the monkeys' behaviors (e.g., hits and false alarms). Our results demonstrate that invariant recognition of complex sounds involves premotor computations in areas other than the temporal and parietal speech areas. Therefore, we propose that perceptual invariance depends on motor predictions and not only on sensory representations. We anticipate that studies on speech will observe sensory-motor transformations of acoustic information into motor skills.


2019
Author(s): Jonathan Melchor, Isaac Morán, Tonatiuh Figueroa, Luis Lemus

Abstract: The ability to invariably identify spoken words and other naturalistic sounds across different temporal modulations and timbres requires perceptual tolerance to numerous acoustic variations. However, the mechanisms by which auditory information is perceived as invariant are poorly understood, and no study has explicitly tested the perceptual constancy skills of nonhuman primates. We investigated the ability of two trained rhesus monkeys to learn and then recognize multiple sounds, including multisyllabic words. Importantly, we tested their ability to group previously unheard sounds into the corresponding categories. We found that the monkeys adequately categorized sounds whose formants lay at a close Euclidean distance to those of the learned sounds. Our results indicate that macaques can attend to and memorize complex sounds such as words. This ability had not been studied or reported before and can be used to investigate the neuronal mechanisms underlying auditory perception.
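The categorization-by-formant-similarity result lends itself to a simple worked example. The sketch below is an illustration, not the study's analysis pipeline: a novel sound is assigned to the category of its nearest learned exemplar by Euclidean distance in formant space, and the formant values and category names are hypothetical.

```python
import numpy as np

# Hypothetical formant frequencies (F1, F2 in Hz) of learned sounds, by category.
learned = {
    "word_A": np.array([[700.0, 1200.0], [680.0, 1250.0]]),
    "word_B": np.array([[350.0, 2100.0], [370.0, 2050.0]]),
}

def categorize(novel_formants):
    """Assign a novel sound to the category of its nearest learned exemplar
    in formant space (Euclidean distance)."""
    novel = np.asarray(novel_formants, dtype=float)
    best, best_d = None, np.inf
    for category, exemplars in learned.items():
        d = np.linalg.norm(exemplars - novel, axis=1).min()
        if d < best_d:
            best, best_d = category, d
    return best, best_d

print(categorize([690.0, 1230.0]))  # -> ('word_A', ~31.6)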


2018, Vol. 115 (30), pp. 7807-7812
Author(s): Erin Koch, Famya Baig, Qasim Zaidi

Pose estimation of objects in real scenes is critically important for biological and machine visual systems, but little is known about how humans infer 3D poses from 2D retinal images. We show unexpectedly strong agreement in the 3D poses that different observers estimate from pictures. We further show that all observers apply the same inferential rule from all viewpoints, utilizing the geometrically derived back-transform from retinal images to actual 3D scenes. Pose estimates are altered by a fronto-parallel bias and by image distortions that appear to tilt the ground plane. We used pictures of single sticks or pairs of joined sticks taken from different camera angles. Observers viewed these from five directions and matched the perceived pose of each stick by rotating an arrow on a horizontal touchscreen. The projection of each 3D stick onto the 2D picture, and then onto the retina, is described by an invertible trigonometric expression. The inverted expression yields the back-projection for each object pose, camera elevation, and observer viewpoint. We show that a model that uses the back-projection, modulated by just two free parameters, explains 560 pose estimates per observer. By considering changes in retinal image orientations due to the position and elevation of limbs, the model also explains perceived limb poses in a complex scene of two bodies lying on the ground. These inferential rules explain both perceptual invariance and dramatic distortions in poses of real and pictured objects, and they show the benefits of incorporating the projective geometry of light into mental inferences about 3D scenes.
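The back-transform can be illustrated with a minimal reconstruction under simplifying assumptions: a stick lying on the ground plane, orthographic foreshortening of the depth axis by the sine of the camera elevation, and pose angles measured from the depth axis. The paper's exact parametrization, its two free parameters (the fronto-parallel bias and the ground-plane tilt induced by picture distortions), and oblique observer viewpoints are not modeled here.

```python
import numpy as np

def project_pose(omega_deg, camera_elevation_deg):
    """Image orientation of a ground-plane stick at 3D pose omega,
    assuming orthographic foreshortening of the depth axis by sin(elevation):
    tan(theta) = tan(omega) / sin(elevation)."""
    omega = np.radians(omega_deg)
    phi = np.radians(camera_elevation_deg)
    return np.degrees(np.arctan2(np.sin(omega), np.cos(omega) * np.sin(phi)))

def back_project(theta_deg, camera_elevation_deg):
    """Invert the projection: recover the 3D pose from the image orientation,
    tan(omega) = tan(theta) * sin(elevation)."""
    theta = np.radians(theta_deg)
    phi = np.radians(camera_elevation_deg)
    return np.degrees(np.arctan2(np.sin(theta) * np.sin(phi), np.cos(theta)))

phi = 30.0                        # camera elevation above the ground plane
theta = project_pose(45.0, phi)   # the 45-degree stick projects to a steeper image angle
print(theta, back_project(theta, phi))  # ~63.4, 45.0
```

The round trip shows the sense of the distortion the paper describes: oblique poses project to more extreme image orientations, and the back-projection undoes that expansion given the camera elevation.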


Author(s): Casper J. Erkelens

A picture is a powerful and convenient medium for inducing the illusion that one perceives a real three-dimensional scene. The relative invariance of picture perception across viewing positions has aroused the interest of painters, photographers, and visual scientists. Many studies have been devoted to perceptual invariance when pictures are viewed from oblique directions; invariance across viewing distances has received less attention. This study presents a computational analysis of pictures of perspective scenes taken from different distances between the camera and the physical objects. Distances and directions of pictorial objects were computed as a function of viewing distance to the picture and compared with distances and directions of the physical objects as a function of camera position. The computations show that pictorial distance and direction are determined by the angular size of the depicted objects. Pictorial distance and direction are independent of camera position, focal length of the lens, and picture size. Ratios of pictorial distances, directions, and sizes are constant as a function of viewing distance. These constant ratios are proposed as the reason for the invariance of picture perception over a range of viewing distances. Reanalysis of distance judgments obtained from the literature shows that perspective space, previously proposed as a model for visual space, is also a good model for pictorial space. The geometry of pictorial space contradicts some conceptions about picture perception.
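A minimal numerical illustration of the constant-ratio claim, assuming the standard size-distance relation d = S / (2 tan(alpha/2)) for an object of assumed physical size S subtending angular size alpha (this particular rule is an assumption of the sketch, not necessarily the paper's computation): each pictorial distance scales linearly with viewing distance, so the ratios between them stay fixed.

```python
import math

def angular_size(pic_size_m, viewing_distance_m):
    """Angular size (radians) of an object depicted with linear size
    pic_size_m on the picture surface, seen from viewing_distance_m."""
    return 2.0 * math.atan(pic_size_m / (2.0 * viewing_distance_m))

def pictorial_distance(assumed_size_m, pic_size_m, viewing_distance_m):
    """Distance implied by the size-distance relation d = S / (2 tan(alpha/2))."""
    alpha = angular_size(pic_size_m, viewing_distance_m)
    return assumed_size_m / (2.0 * math.tan(alpha / 2.0))

# Two depicted objects of the same assumed size; each pictorial distance grows
# with viewing distance, but their ratio is the same at every viewing distance.
for v in (0.5, 1.0, 2.0):
    d1 = pictorial_distance(1.8, 0.09, v)   # e.g. a person drawn 9 cm tall
    d2 = pictorial_distance(1.8, 0.03, v)   # the same person drawn 3 cm tall
    print(f"view {v} m: d1 = {d1:.1f} m, d2 = {d2:.1f} m, ratio = {d2 / d1:.2f}")
```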


Author(s): Anitha Pasupathy, Yasmine El-Shamayleh, Dina V. Popovkina

Humans and other primates rely on vision. Our visual system endows us with the ability to perceive, recognize, and manipulate objects, to avoid obstacles and dangers, to choose foods appropriate for consumption, to read text, and to interpret facial expressions in social interactions. To support these visual functions, the primate brain captures a high-resolution image of the world in the retina and, through a series of intricate operations in the cerebral cortex, transforms this representation into a percept that reflects the physical characteristics of objects and surfaces in the environment. To construct a reliable and informative percept, the visual system discounts the influence of extraneous factors such as illumination, occlusions, and viewing conditions. This perceptual “invariance” can be thought of as the brain’s solution to an inverse inference problem, in which the physical factors that gave rise to the retinal image are estimated. Although perception and recognition seem fast and effortless, they pose a challenging computational problem that engages a substantial proportion of the primate brain.


2006, Vol. 18 (11), pp. 1899-1912
Author(s): Axel Lindner, Thomas Haarmeier, Michael Erb, Wolfgang Grodd, Peter Thier

Despite smooth pursuit eye movements, we are unaware of the resultant retinal image motion. This example of perceptual invariance is achieved by comparing retinal image slip with an internal reference signal that predicts the sensory consequences of the eye movement. This prediction can be manipulated experimentally, allowing one to vary the amount of self-induced image motion for which the reference signal compensates and, accordingly, the resulting percept of motion. Here we were able to map regions in Crus I within the lateral cerebellar hemispheres that exhibited a significant correlation between functional magnetic resonance imaging signal amplitudes and the amount of motion predicted by the reference signal. The fact that these cerebellar regions were found to be functionally coupled with the left parieto-insular cortex and the supplementary eye fields points to these cortical areas as the sites of interaction between predicted and experienced sensory events, ultimately giving rise to the perception of a stable world despite self-induced retinal motion.
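The comparator logic described above can be captured in a few lines. This is a minimal, hypothetical sketch of the standard reference-signal account (not the authors' model): perceived motion is retinal image slip plus an internal reference signal that predicts the visual consequences of pursuit, and a reference gain below 1 leaves residual illusory motion of a stationary background.

```python
def perceived_motion(world_motion_dps, eye_velocity_dps, reference_gain=1.0):
    """Minimal comparator: retinal slip plus the internal reference signal.
    reference_gain = 1 cancels self-induced motion perfectly; values < 1
    leave residual perceived motion even when the world is stationary."""
    retinal_slip = world_motion_dps - eye_velocity_dps   # image motion on the retina
    reference = reference_gain * eye_velocity_dps        # predicted consequence of pursuit
    return retinal_slip + reference

# Stationary world, 10 deg/s pursuit: a perfect reference yields no perceived motion;
# an imperfect one (gain 0.8) yields 2 deg/s of illusory motion opposite to the pursuit.
print(perceived_motion(0.0, 10.0, 1.0), perceived_motion(0.0, 10.0, 0.8))
```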

