Oscillatory network model of the brain visual cortex with controlled synchronization

Author(s):  
M. Kuzmina ◽  
E. Manykin ◽  
I. Surina


2015 ◽  
Vol 113 (9) ◽  
pp. 3159-3171 ◽  
Author(s):  
Caroline D. B. Luft ◽  
Alan Meeson ◽  
Andrew E. Welchman ◽  
Zoe Kourtzi

Learning the structure of the environment is critical for interpreting the current scene and predicting upcoming events. However, the brain mechanisms that support our ability to translate knowledge about scene statistics to sensory predictions remain largely unknown. Here we provide evidence that learning of temporal regularities shapes representations in early visual cortex that relate to our ability to predict sensory events. We tested the participants' ability to predict the orientation of a test stimulus after exposure to sequences of leftward- or rightward-oriented gratings. Using fMRI decoding, we identified brain patterns related to the observers' visual predictions rather than stimulus-driven activity. Decoding of predicted orientations following structured sequences was enhanced after training, while decoding of cued orientations following exposure to random sequences did not change. These predictive representations appear to be driven by the same large-scale neural populations that encode actual stimulus orientation and to be specific to the learned sequence structure. Thus our findings provide evidence that learning temporal structures supports our ability to predict future events by reactivating selective sensory representations as early as in primary visual cortex.


2017 ◽  
Vol 372 (1715) ◽  
pp. 20160504 ◽  
Author(s):  
Megumi Kaneko ◽  
Michael P. Stryker

Mechanisms thought of as homeostatic must exist to maintain neuronal activity in the brain within the dynamic range in which neurons can signal. Several distinct mechanisms have been demonstrated experimentally. Three mechanisms that act to restore levels of activity in the primary visual cortex of mice after occlusion and restoration of vision in one eye, which give rise to the phenomenon of ocular dominance plasticity, are discussed. The existence of different mechanisms raises the issue of how these mechanisms operate together to converge on the same set points of activity. This article is part of the themed issue ‘Integrating Hebbian and homeostatic plasticity’.


2020 ◽  
Author(s):  
Yaelan Jung ◽  
Dirk B. Walther

Abstract: Natural scenes deliver rich sensory information about the world. Decades of research have shown that the scene-selective network in the visual cortex represents various aspects of scenes. It is, however, unknown how such complex scene information is processed beyond the visual cortex, such as in the prefrontal cortex. It is also unknown how task context impacts the process of scene perception, modulating which scene content is represented in the brain. In this study, we investigate these questions using scene images from four natural scene categories, which also depict two types of global scene properties: temperature (warm or cold) and sound level (noisy or quiet). A group of healthy human subjects of both sexes participated in the present fMRI study. Participants viewed scene images under two different task conditions: temperature judgment and sound-level judgment. We analyzed how different scene attributes (scene categories, temperature, and sound-level information) are represented across the brain under these task conditions. Our findings show that global scene properties are represented in the brain, especially in the prefrontal cortex, only when they are task-relevant. However, scene categories are represented in the brain, in both the parahippocampal place area and the prefrontal cortex, regardless of task context. These findings suggest that the prefrontal cortex selectively represents scene content according to task demands, but this task selectivity depends on the type of scene content; task modulates neural representations of global scene properties but not of scene categories.


1989 ◽  
Vol 1 (4) ◽  
pp. 317-326 ◽  
Author(s):  
Sabrina J. Goodman ◽  
Richard A. Andersen

Microstimulation of many saccadic centers in the brain produces eye movements that are not consistent with either a strictly retinal or strictly head-centered coordinate coding of eye movements. Rather, stimulation produces some features of both types of coordinate coding. Recently we demonstrated a neural network model that was trained to localize the position of visual stimuli in head-centered coordinates at the output using inputs of eye and retinal position similar to those converging on area 7a of the posterior parietal cortex of monkeys (Zipser & Andersen 1988; Andersen & Zipser 1988). Here we show that microstimulation of this trained network, achieved by fully activating single units in the middle layer, produces “saccades” that are very much like the saccades produced by stimulating the brain. The activity of the middle-layer units can be considered to code the desired location of the eyes in head-centered coordinates; however, stimulation of these units does not produce the saccades predicted by a classical head-centered coordinate coding because the location in space appears to be coded in a distributed fashion among a population of units rather than explicitly at the level of single cells.


2022 ◽  
Author(s):  
Andrea Kóbor ◽  
Karolina Janacsek ◽  
Petra Hermann ◽  
Zsofia Zavecz ◽  
Vera Varga ◽  
...  

Previous research has established that humans can extract statistical regularities of the environment to automatically predict upcoming events. However, it has remained unexplored how the brain encodes the distribution of statistical regularities when it continuously changes. To investigate this question, we devised an fMRI paradigm in which participants (N = 32) completed a visual four-choice reaction time (RT) task containing statistical regularities. Two types of blocks involving the same perceptual elements alternated with one another throughout the task: while the distribution of statistical regularities was predictable in one block type, it was unpredictable in the other. Participants were unaware of the presence of statistical regularities and of their changing distribution across task blocks. Based on the RT results, although statistical regularities were processed similarly in both the predictable and unpredictable blocks, participants acquired less statistical knowledge in the unpredictable than in the predictable blocks. Whole-brain random-effects analyses showed increased activity in the early visual cortex and decreased activity in the precuneus for the predictable as compared with the unpredictable blocks. Therefore, the actual predictability of statistical regularities is likely to be represented already at the early stages of visual cortical processing. However, decreased precuneus activity suggests that these representations are imperfectly updated to track the multiple shifts in predictability throughout the task. The results also highlight that the processing of statistical regularities in a changing environment may be habitual.


Author(s):  
A. Jayanthiladevi ◽  
S. Murugan ◽  
K. Manivel

Today, images and image sequences (videos) make up about 80% of all corporate and public unstructured big data. As the growth of unstructured data accelerates, analytical systems must assimilate and interpret images and videos as well as they interpret structured data such as text and numbers. An image is a set of signals sensed by the human eye and processed by the visual cortex in the brain, creating a vivid experience of a scene that is instantly associated with concepts and objects previously perceived and recorded in one's memory. To a computer, an image is either a raster image or a vector image. Simply put, raster images are a grid of pixels with discrete numerical values for color; vector images are a set of color-annotated polygons. To perform analytics on images or videos, the geometric encoding must be transformed into constructs depicting the physical features, objects, and movement represented by the image or video. This chapter explores text, image, and video analytics in fog computing.
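The two encodings described above can be contrasted with a minimal sketch (the types and the `rasterize` helper here are illustrative, not from the chapter): a raster image is a grid of discrete color values, while a vector image is a list of color-annotated polygons that can be converted to a raster by sampling each pixel center with a point-in-polygon test.

```python
from dataclasses import dataclass

# Raster encoding: a 2-D grid of pixels, each a discrete RGB triple.
Raster = list[list[tuple[int, int, int]]]

@dataclass
class Polygon:
    """One element of a vector image: a color-annotated polygon."""
    vertices: list[tuple[float, float]]  # (x, y) corner points
    color: tuple[int, int, int]          # RGB fill color

def _contains(verts, px, py):
    """Even-odd rule: count edge crossings of a horizontal ray from (px, py)."""
    inside = False
    n = len(verts)
    for i in range(n):
        (x1, y1), (x2, y2) = verts[i], verts[(i + 1) % n]
        if (y1 > py) != (y2 > py):  # edge straddles the ray's height
            x_cross = x1 + (py - y1) * (x2 - x1) / (y2 - y1)
            if px < x_cross:
                inside = not inside
    return inside

def rasterize(polygons: list[Polygon], width: int, height: int) -> Raster:
    """Convert a vector image to a raster by sampling pixel centers.

    Later polygons paint over earlier ones; the background is black.
    """
    image: Raster = [[(0, 0, 0)] * width for _ in range(height)]
    for poly in polygons:
        for y in range(height):
            for x in range(width):
                if _contains(poly.vertices, x + 0.5, y + 0.5):
                    image[y][x] = poly.color
    return image

# Usage: a 4x4 raster with one red rectangle covering the left half.
rect = Polygon(vertices=[(0, 0), (2, 0), (2, 4), (0, 4)], color=(255, 0, 0))
img = rasterize([rect], width=4, height=4)
```

This transformation (rasterization) is what makes vector content amenable to pixel-based analytics; going the other direction, from pixels to geometric constructs, is the harder problem the chapter refers to.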


CNS Spectrums ◽  
1999 ◽  
Vol 4 (8) ◽  
pp. 17-29 ◽  
Author(s):  
Georg Winterer ◽  
Werner M. Herrmann ◽  
Richard Coppola

Abstract: A growing number of anatomic and physiologic studies have shown that parallel sensory and motor information processing occurs in multiple cortical areas. These findings challenge the traditional model of brain processing, which states that the brain is a collection of physically discrete processing modules that pass information to each other by neuronal impulses in a stepwise manner. Newer concepts based on neural network models suggest that the brain is a dynamically shifting collection of interpenetrating, distributed, and transient neural networks. These models are not necessarily mutually exclusive; each gives a different perspective on the brain, and the two might be complementary. Each model has its own research methodology, with functional magnetic resonance imaging supporting notions of modular processing, and electrophysiology (eg, electroencephalography) emphasizing the network model. These two technologies might be combined fruitfully in the near future to provide us with a better understanding of the brain. However, this common enterprise can succeed only when the inherent limitations and advantages of both models and technologies are known. After a general introduction to electrophysiology as a research tool and its relation to the network model, several practical examples are given on the generation of pathophysiologic models and disease classification, intermediate phenotyping for genetic investigations, and pharmacodynamic modeling. Finally, proposals are made about how to integrate electrophysiology and neuroimaging methods.


2016 ◽  
Vol 23 (5) ◽  
pp. 529-541 ◽  
Author(s):  
Sara Ajina ◽  
Holly Bridge

Damage to the primary visual cortex removes the major input from the eyes to the brain, causing significant visual loss as patients are unable to perceive the side of the world contralateral to the damage. Some patients, however, retain the ability to detect visual information within this blind region; this is known as blindsight. By studying the visual pathways that underlie this residual vision in patients, we can uncover additional aspects of the human visual system that likely contribute to normal visual function but cannot be revealed under physiological conditions. In this review, we discuss the residual abilities and neural activity that have been described in blindsight and the implications of these findings for understanding the intact system.


Author(s):  
Norman Yujen Teng

Tye argues that visual mental images have their contents encoded in topographically organized regions of the visual cortex, which support depictive representations; therefore, visual mental images rely at least in part on depictive representations. This argument, I contend, does not support its conclusion. I propose that we divide the problem of the depictive nature of mental imagery into two parts: one concerns the format of image representation, the other the conditions by virtue of which a representation becomes depictive. Regarding the first part, I argue that the existence of a topographic format in the brain does not imply the existence of a depictive format of image representation. My answer to the second part is that one needs a content analysis of a certain sort of topographic representation in order to make sense of depictive mental representations, and that a topographic representation becomes a depictive representation by virtue of its content rather than its form.

