Explaining Visual Cortex Phenomena using Recursive Cortical Network

2018 ◽  
Author(s):  
Alexander Lavin ◽  
J. Swaroop Guntupalli ◽  
Miguel Lázaro-Gredilla ◽  
Wolfgang Lehrach ◽  
Dileep George

Abstract
The connectivity and information pathways of visual cortex are well studied, as are observed physiological phenomena, yet a cohesive model for explaining visual cortex processes remains an open problem. For a comprehensive understanding, we need to build models of the visual cortex that are capable of robust real-world performance, while also being able to explain psychophysical and physiological observations. To this end, we demonstrate how the Recursive Cortical Network (George et al., 2017) can be used as a computational model to reproduce and explain subjective contours, neon color spreading, occlusion vs. deletion, and the border-ownership competition phenomena observed in the visual cortex.

2003 ◽  
Vol 15 (10) ◽  
pp. 2399-2418 ◽  
Author(s):  
Zhao Songnian ◽  
Xiong Xiaoyun ◽  
Yao Guozheng ◽  
Fu Zhi

Based on the synchronized responses of neuronal populations in the visual cortex to external stimuli, we propose a computational model consisting primarily of a neuronal phase-locked loop (NPLL) and a multiscale operator. The former reveals the function of synchronous oscillations in the visual cortex: regardless of whether the spike-train pattern is an average firing-rate code, a spike-timing code, or a rate-time code, the NPLL can decode the original visual information from neuronal spike trains modulated by patterns of external stimuli, because the voltage-controlled oscillator (VCO) within the NPLL can precisely track neuronal spike trains and their instantaneous variations; that is, the VCO can make a copy of the external stimulus pattern. The latter describes the multiscale properties of visual information processing, not merely edge and contour detection. By combining the NPLL with the multiscale operator and maximum likelihood estimation, we prove that the model, as a neurodecoder, implements an optimal algorithm for decoding visual information from neuronal spike trains at the system level. The model also receives growing support from a series of neurobiological experiments on stimulus-specific neuronal oscillations and synchronized responses of neuronal populations in the visual cortex. In addition, we discuss how visual acuity and the multiresolution property of vision can be described by the wavelet transform. The results indicate that the model provides a deeper understanding of the role of synchronized responses in decoding visual information.
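The tracking behavior attributed to the VCO can be illustrated with a minimal discrete-time phase-locked loop. This is an illustrative sketch only: the paper's NPLL operates on neuronal spike trains, whereas this toy loop locks onto a clean sinusoidal "stimulus," and the loop-filter gains below are arbitrary choices, not values from the paper.

```python
import math

def pll_track(signal, fs, f0, kp=1.0, ki=0.002):
    """Minimal discrete-time phase-locked loop (illustrative sketch).

    `signal` holds the input samples, `fs` is the sample rate in Hz, and
    `f0` is the VCO's initial frequency guess in Hz. Returns per-sample
    frequency estimates; once locked, the VCO has "made a copy" of the
    input rhythm.
    """
    phase = 0.0   # VCO phase (rad)
    integ = 0.0   # integrator state of the PI loop filter (Hz)
    estimates = []
    for x in signal:
        # Phase detector: mix the input with the VCO's quadrature output.
        # The low-frequency part of `err` is ~0.5 * sin(phase error).
        err = -x * math.sin(phase)
        integ += ki * err                # integral path tracks the frequency offset
        freq = f0 + integ + kp * err     # proportional path adds damping
        phase += 2.0 * math.pi * freq / fs
        estimates.append(freq)
    return estimates

# A 10.3 Hz "stimulus" tracked from a 10 Hz initial guess:
fs = 1000.0
sig = [math.cos(2.0 * math.pi * 10.3 * n / fs) for n in range(6000)]
est = pll_track(sig, fs, f0=10.0)
# After the loop locks, the average frequency estimate sits near the true 10.3 Hz.
```

The instantaneous estimates carry ripple at twice the input frequency (a standard artifact of this simple phase detector), so the locked frequency is best read off as an average over the final samples.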


Author(s):  
Stephen Grossberg

The distinction between seeing and knowing, and why our brains even bother to see, are discussed using vivid perceptual examples, including image features without visible qualia that can nonetheless be consciously recognized. The work of Helmholtz and Kanizsa exemplifies these issues, including examples of the paradoxical facts that “all boundaries are invisible” and that brighter objects look closer. Why we do not see the big holes in, and occluders of, our retinas that block light from reaching our photoreceptors is explained, leading to the realization that essentially all percepts are visual illusions. Why they often look real is also explained. The computationally complementary properties of boundary completion and surface filling-in are introduced and their unifying explanatory power is illustrated, including the claim that “all conscious qualia are surface percepts”. Neon color spreading provides a vivid example, as do self-luminous, glary, and glossy percepts. How brains embody general-purpose self-organizing architectures for solving modal problems, more general than AI algorithms but less general than digital computers, is described. New concepts and mechanisms of such architectures are explained, including hierarchical resolution of uncertainty. Examples from the visual arts and technology are described to illustrate them, including paintings of Baer, Banksy, Bleckner, da Vinci, Gene Davis, Hawthorne, Hensche, Matisse, Monet, Olitski, Seurat, and Stella. Paintings by different artists and artistic schools instinctively emphasize some brain processes over others. These choices exemplify their artistic styles. The roles of perspective, T-junctions, and end gaps are used to explain how 2D pictures can induce percepts of 3D scenes.


2004 ◽  
Vol 92 (5) ◽  
pp. 2947-2959 ◽  
Author(s):  
Miguel Á. Carreira-Perpiñán ◽  
Geoffrey J. Goodhill

Maps of ocular dominance and orientation in primary visual cortex have a highly characteristic structure. The factors that determine this structure are still largely unknown. In particular, it is unclear how short-range excitatory and inhibitory connections between nearby neurons influence structure both within and between maps. Using a generalized version of a well-known computational model of visual cortical map development, we show that the number of excitatory and inhibitory oscillations in this interaction function critically influences map structure. Specifically, we demonstrate that functions that oscillate more than once do not produce maps closely resembling those seen biologically. This strongly suggests that local lateral connections in visual cortex oscillate only once and have the form of a Mexican hat.
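The shape constraint at issue can be sketched with a standard difference-of-Gaussians stand-in for the lateral interaction function. The parameter values below are illustrative, not the paper's fitted ones; the point is only that a narrow excitatory center minus a broader, weaker inhibitory surround changes sign exactly once along the radial profile, i.e., it oscillates once.

```python
import numpy as np

def mexican_hat(r, sigma_e=1.0, sigma_i=2.0, a_e=1.0, a_i=0.5):
    """Difference-of-Gaussians lateral interaction: a narrow excitatory
    center minus a broader, weaker inhibitory surround. Parameter values
    are illustrative, not fitted."""
    return (a_e * np.exp(-r**2 / (2 * sigma_e**2))
            - a_i * np.exp(-r**2 / (2 * sigma_i**2)))

def count_sign_changes(values):
    """Number of excitatory/inhibitory alternations along a radial profile.
    A Mexican hat crosses zero exactly once; interaction functions that
    oscillate more than once cross it repeatedly."""
    s = np.sign(values)
    s = s[s != 0]
    return int(np.sum(s[:-1] != s[1:]))

r = np.linspace(0.0, 10.0, 1000)
profile = mexican_hat(r)  # positive center, one zero crossing, negative surround
```

By contrast, a damped-cosine profile such as `np.cos(2 * r) * np.exp(-(r / 4)**2)` alternates sign many times over the same range, which is the class of interaction functions the paper argues against.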


Perception ◽  
1997 ◽  
Vol 26 (11) ◽  
pp. 1353-1366 ◽  
Author(s):  
Paola Bressan ◽  
Ennio Mingolla ◽  
Lothar Spillmann ◽  
Takeo Watanabe

2009 ◽  
Vol 51 (3) ◽  
pp. 132-145 ◽  
Author(s):  
YASUO NAGASAKA ◽  
RYUZABURO NAKATA ◽  
YOSHIHISA OSADA

Author(s):  
Harvey S. Smallman ◽  
Mark St. John ◽  
Michael B. Cowen

Despite the increasing prevalence of three-dimensional (3-D) perspective views of scenes, there remain a number of concerns about their utility, particularly for precise relative-position tasks. Here, we empirically measure and then mathematically model the perceptual biases found in participants' perceptual reconstruction of perspective views. Participants reconstructed the length of 10 test posts scattered across a 3-D scene to match the physical length of a reference post. The test posts were all oriented in the X, Y or Z cardinal directions of 3-D space. Four viewing angles from 90 degrees (“2-D”) down to 22.5 degrees (“3-D”) were used. Matches systematically underestimated the compression of distances into the scene (Y) and systematically overestimated the compression of height (Z). A simple computational model is developed to account for the results; it posits that linear perspective (which operates only in X) is inappropriately used to scale matching lengths in all three dimensions of space. The model suggests a novel account of the systematic underestimation of egocentric distances in the real world.
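The direction of these axis-specific compressions can be illustrated with toy projection geometry. This is not the authors' fitted perspective model; it only shows, under a simple orthographic-elevation assumption, why depth (Y) and height (Z) compress oppositely as the viewing angle changes.

```python
import math

def foreshortening(elevation_deg):
    """Orthographic foreshortening of the three cardinal axes when a ground
    scene is viewed from `elevation_deg` above the ground plane.
    (Toy geometry, not the paper's model; it only illustrates why Y and Z
    compress in opposite directions as viewing angle varies.)"""
    t = math.radians(elevation_deg)
    return {
        "X": 1.0,           # left-right extent: unaffected by elevation
        "Y": math.sin(t),   # depth into the scene: compressed at low angles
        "Z": math.cos(t),   # vertical height: compressed at high angles
    }

# At the 90-degree ("2-D", top-down) view, depth survives fully (Y -> 1)
# while height vanishes (Z -> 0); at 22.5 degrees the situation reverses,
# with depth strongly compressed and height nearly intact.
```

An observer who rescales matches by a single factor in all three dimensions would therefore under-correct the axis that is actually compressed most at a given viewing angle, which is the flavor of bias the abstract reports.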

