Emotion Schemas are Embedded in the Human Visual System

2018
Author(s): Philip A. Kragel, Marianne Reddan, Kevin S. LaBar, Tor D. Wager

Abstract: Theorists have suggested that emotions are canonical responses to situations ancestrally linked to survival. If so, then emotions may be afforded by features of the sensory environment. However, few computationally explicit models describe how combinations of stimulus features evoke different emotions. Here we develop a convolutional neural network that accurately decodes images into 11 distinct emotion categories. We validate the model using over 25,000 images and movies and show that image content is sufficient to predict the category and valence of human emotion ratings. In two fMRI studies, we demonstrate that patterns of human visual cortex activity encode emotion category-related model output and can decode multiple categories of emotional experience. These results suggest that rich, category-specific emotion representations are embedded within the human visual system.

2019 ◽ Vol 5 (7) ◽ pp. eaaw4358
Author(s): Philip A. Kragel, Marianne C. Reddan, Kevin S. LaBar, Tor D. Wager

Theorists have suggested that emotions are canonical responses to situations ancestrally linked to survival. If so, then emotions may be afforded by features of the sensory environment. However, few computational models describe how combinations of stimulus features evoke different emotions. Here, we develop a convolutional neural network that accurately decodes images into 11 distinct emotion categories. We validate the model using more than 25,000 images and movies and show that image content is sufficient to predict the category and valence of human emotion ratings. In two functional magnetic resonance imaging studies, we demonstrate that patterns of human visual cortex activity encode emotion category–related model output and can decode multiple categories of emotional experience. These results suggest that rich, category-specific visual features can be reliably mapped to distinct emotions, and they are coded in distributed representations within the human visual system.
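
As a rough illustration of the kind of image-to-emotion decoder described above, the sketch below replaces the classification head of a standard pretrained CNN (torchvision's AlexNet, used here purely as a stand-in) with an 11-way emotion readout. The category names, backbone choice, and training details are illustrative assumptions rather than the authors' published configuration, and the new head would still need to be trained on emotion-annotated images.

```python
import torch
import torch.nn as nn
from torchvision import models

# Illustrative 11-category label set (placeholder names; the paper's exact
# label set is not reproduced here).
EMOTION_CATEGORIES = [
    "amusement", "awe", "contentment", "excitement", "fear", "horror",
    "interest", "sadness", "romance", "surprise", "disgust",
]

# Stand-in decoder: a pretrained CNN backbone whose final layer is replaced
# by an 11-way emotion readout (untrained here; train it on labeled images).
backbone = models.alexnet(weights=models.AlexNet_Weights.DEFAULT)
backbone.classifier[-1] = nn.Linear(
    backbone.classifier[-1].in_features, len(EMOTION_CATEGORIES)
)

def decode_emotion(image_batch: torch.Tensor) -> list:
    """Map preprocessed images (N, 3, 224, 224) to predicted category labels."""
    backbone.eval()
    with torch.no_grad():
        logits = backbone(image_batch)
    return [EMOTION_CATEGORIES[i] for i in logits.argmax(dim=1).tolist()]
```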


2016 ◽ Vol 23 (5) ◽ pp. 529-541
Author(s): Sara Ajina, Holly Bridge

Damage to the primary visual cortex removes the major input from the eyes to the brain, causing significant visual loss as patients are unable to perceive the side of the world contralateral to the damage. Some patients, however, retain the ability to detect visual information within this blind region; this is known as blindsight. By studying the visual pathways that underlie this residual vision in patients, we can uncover additional aspects of the human visual system that likely contribute to normal visual function but cannot be revealed under physiological conditions. In this review, we discuss the residual abilities and neural activity that have been described in blindsight and the implications of these findings for understanding the intact system.


2021
Author(s): Peter J. Kohler, Alasdair D. F. Clarke

Abstract: Symmetries are present at many scales in images of natural scenes. A large body of literature has demonstrated contributions of symmetry to numerous domains of visual perception. The four fundamental symmetries, reflection, rotation, translation and glide reflection, can be combined in exactly 17 distinct ways. These wallpaper groups represent the complete set of symmetries in 2D images and have recently found use in the vision science community as an ideal stimulus set for studying the perception of symmetries in textures. The goal of the current study is to provide a more comprehensive description of responses to symmetry in the human visual system, by collecting both brain imaging (Steady-State Visual Evoked Potentials measured using high-density EEG) and behavioral (symmetry detection thresholds) data using the entire set of wallpaper groups. This allows us to probe the hierarchy of complexity among wallpaper groups, in which simpler groups are subgroups of more complex ones. We find that this hierarchy is preserved almost perfectly in both behavior and brain activity: a multi-level Bayesian GLM indicates that for most of the 63 subgroup relationships, subgroups produce lower-amplitude responses in visual cortex (posterior probability > 0.95 for 56 of 63) and require longer presentation durations to be reliably detected (posterior probability > 0.95 for 49 of 63). This systematic pattern is seen only in visual cortex and only in components of the brain response known to be symmetry-specific. Our results show that representations of symmetries in the human brain are precise and rich in detail, and that this precision is reflected in behavior. These findings expand our understanding of symmetry perception and open up new avenues for research on how fine-grained representations of regular textures contribute to natural vision.
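
The 17 wallpaper groups referenced above are built by combining reflections, rotations, translations, and glide reflections of a fundamental region. The sketch below, a minimal NumPy illustration rather than the study's actual stimulus-generation code, constructs exemplar textures for two of the simpler groups by tiling a unit cell produced from a random fundamental region.

```python
import numpy as np

def wallpaper_cell(fundamental: np.ndarray, group: str) -> np.ndarray:
    """Build one repeating unit cell from a fundamental region.

    Only two illustrative groups are handled: P2 (180-degree rotation) and
    PMM (two perpendicular mirror reflections). The remaining 15 groups need
    additional rotation and glide-reflection constructions.
    """
    f = fundamental
    if group == "P2":
        return np.hstack([f, np.rot90(f, 2)])        # region plus its half-turn
    if group == "PMM":
        top = np.hstack([f, np.fliplr(f)])           # mirror across vertical axis
        return np.vstack([top, np.flipud(top)])      # then across horizontal axis
    raise ValueError(f"group {group!r} not implemented in this sketch")

def wallpaper_texture(group: str, tile_px: int = 32, reps: int = 4) -> np.ndarray:
    """Tile the unit cell by translation to produce an exemplar texture."""
    rng = np.random.default_rng(0)
    fundamental = rng.random((tile_px, tile_px))
    return np.tile(wallpaper_cell(fundamental, group), (reps, reps))

texture = wallpaper_texture("PMM")   # 256 x 256 exemplar of the PMM group
```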


Author(s): Joel Dapello, Tiago Marques, Martin Schrimpf, Franziska Geiger, David D. Cox, ...

Abstract: Current state-of-the-art object recognition models are largely based on convolutional neural network (CNN) architectures, which are loosely inspired by the primate visual system. However, these CNNs can be fooled by imperceptibly small, explicitly crafted perturbations, and they struggle to recognize objects in corrupted images that are easily recognized by humans. Here, by making comparisons with primate neural data, we first observed that CNN models with a hidden layer that better matches primate primary visual cortex (V1) are also more robust to adversarial attacks. Inspired by this observation, we developed VOneNets, a new class of hybrid CNN vision models. Each VOneNet contains a fixed-weight neural network front end that simulates primate V1, called the VOneBlock, followed by a neural network back end adapted from current CNN vision models. The VOneBlock is based on a classical neuroscientific model of V1, the linear-nonlinear-Poisson model, consisting of a biologically constrained Gabor filter bank, simple and complex cell nonlinearities, and a V1 neuronal stochasticity generator. After training, VOneNets retain high ImageNet performance, but each is substantially more robust, outperforming the base CNNs and state-of-the-art methods by 18% and 3%, respectively, on a conglomerate benchmark of perturbations comprising white-box adversarial attacks and common image corruptions. Finally, we show that all components of the VOneBlock work in synergy to improve robustness. While current CNN architectures are arguably brain-inspired, the results presented here demonstrate that more precisely mimicking just one stage of the primate visual system leads to new gains in ImageNet-level computer vision applications.
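
The VOneBlock described above combines a fixed Gabor filter bank, simple- and complex-cell nonlinearities, and neuronal stochasticity. The PyTorch module below is a minimal sketch of that three-stage structure, not the authors' released implementation: the filter parameters, channel counts, and noise model are simplified assumptions, and the input is taken to be grayscale.

```python
import math
import torch
import torch.nn as nn
import torch.nn.functional as F

def gabor_kernel(size, theta, sigma=2.0, wavelength=6.0, phase=0.0):
    """One 2D Gabor filter (biologically sampled parameters omitted)."""
    half = size // 2
    y, x = torch.meshgrid(
        torch.arange(-half, half + 1, dtype=torch.float32),
        torch.arange(-half, half + 1, dtype=torch.float32),
        indexing="ij",
    )
    xr = x * math.cos(theta) + y * math.sin(theta)
    envelope = torch.exp(-(x ** 2 + y ** 2) / (2 * sigma ** 2))
    return envelope * torch.cos(2 * math.pi * xr / wavelength + phase)

class ToyVOneBlock(nn.Module):
    """Fixed Gabor bank -> simple/complex nonlinearities -> Poisson-like noise.
    Loosely patterned on the VOneBlock described in the abstract; expects
    grayscale input of shape (N, 1, H, W)."""

    def __init__(self, n_orientations=8, kernel_size=15):
        super().__init__()
        thetas = [i * math.pi / n_orientations for i in range(n_orientations)]
        # Quadrature pairs: even (cosine) and odd (sine) phase per orientation.
        even = torch.stack([gabor_kernel(kernel_size, t) for t in thetas])
        odd = torch.stack([gabor_kernel(kernel_size, t, phase=math.pi / 2) for t in thetas])
        weight = torch.cat([even, odd]).unsqueeze(1)   # (2 * n_orient, 1, k, k)
        self.conv = nn.Conv2d(1, 2 * n_orientations, kernel_size,
                              padding=kernel_size // 2, bias=False)
        self.conv.weight = nn.Parameter(weight, requires_grad=False)  # fixed weights
        self.n = n_orientations

    def forward(self, x):
        r = self.conv(x)
        even, odd = r[:, :self.n], r[:, self.n:]
        simple = F.relu(even)                                 # simple cells: rectification
        complex_ = torch.sqrt(even ** 2 + odd ** 2 + 1e-6)    # complex cells: energy
        rate = torch.cat([simple, complex_], dim=1)
        return torch.poisson(rate) if self.training else rate  # V1-like stochasticity
```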


Author(s): Shin Kobayashi, Shigeru Okabayashi, Isao Horiba, Noboru Sugie, Hiroaki Kudo, ...

Author(s): X. J. Li, H. W. Yan, S. W. Yang, L. Kang, X. M. Lu

The paper proposes a novel pansharpening method based on pulse-coupled neural network (PCNN) segmentation. In the proposed method, uniform injection gains are estimated for each region produced by the PCNN segmentation rather than for a simple square window. Because PCNN segmentation is consistent with properties of the human visual system, the proposed method achieves better spectral consistency. Experiments carried out on both suburban and urban datasets demonstrate that the proposed method outperforms other methods in multispectral pansharpening.
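
The core idea above, estimating one injection gain per segmented region instead of per square window, can be sketched independently of the PCNN itself. The function below assumes a generic detail-injection pansharpening scheme and treats the segmentation map as given; the low-resolution panchromatic approximation and the gain formula are placeholders, not the paper's exact definitions.

```python
import numpy as np

def regionwise_injection_pansharpen(ms_up, pan, labels):
    """Detail-injection pansharpening with a uniform gain per segmented region.

    ms_up:  upsampled multispectral image, shape (H, W, B)
    pan:    panchromatic image, shape (H, W)
    labels: segmentation map, shape (H, W); produced by a PCNN in the paper,
            treated as a given input here.
    """
    pan_low = ms_up.mean(axis=2)      # placeholder low-resolution pan proxy
    detail = pan - pan_low            # spatial detail to inject
    out = np.empty_like(ms_up, dtype=np.float64)
    for region in np.unique(labels):
        mask = labels == region
        for b in range(ms_up.shape[2]):
            band = ms_up[..., b]
            # Uniform injection gain for this region and band: ratio of mean
            # band intensity to mean low-resolution pan intensity.
            gain = band[mask].mean() / (pan_low[mask].mean() + 1e-9)
            out[..., b][mask] = band[mask] + gain * detail[mask]
    return out
```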


Author(s): Andreas J Keller, Morgane M Roth, Massimo Scanziani

We sense our environment through pathways linking sensory organs to the brain. In the visual system, these feedforward pathways define the classical feedforward receptive field (ffRF), the area in space where visual stimuli excite a neuron [1]. The visual system also uses visual context, the visual scene surrounding a stimulus, to predict the content of the stimulus [2], and accordingly, neurons have been found that are excited by stimuli outside their ffRF [3–8]. The mechanisms generating excitation to stimuli outside the ffRF are, however, unclear. Here we show that feedback projections onto excitatory neurons in mouse primary visual cortex (V1) generate a second receptive field driven by stimuli outside the ffRF. Stimulating this feedback receptive field (fbRF) elicits slow and delayed responses compared with ffRF stimulation. These responses are preferentially reduced by anesthesia and, importantly, by silencing higher visual areas (HVAs). Feedback inputs from HVAs have scattered receptive fields relative to their putative V1 targets, enabling the generation of the fbRF. Neurons with fbRFs are located in cortical layers receiving strong feedback projections and are absent in the main input layer, consistent with a laminar processing hierarchy. The fbRF and the ffRF are mutually antagonistic, since large, uniform stimuli covering both suppress responses. While somatostatin-expressing inhibitory neurons are driven by these large stimuli, parvalbumin- and vasoactive-intestinal-peptide-expressing inhibitory neurons have antagonistic fbRFs and ffRFs, similar to excitatory neurons. Therefore, feedback projections may enable neurons to use context to predict information missing from the ffRF and to report differences in stimulus features across visual space, regardless of whether excitation occurs inside or outside the ffRF. We have identified an fbRF which, by complementing the ffRF, may contribute to predictive processing.


Author(s): Yaghoub Pourasad

Identifying objects by modeling the human visual system, as an effective approach to intelligent recognition, has attracted the attention of many researchers. Although machines have high computational speed, they remain far weaker than humans at recognition. Experience has shown that in many areas of image processing, algorithms with a biological basis are simpler and perform better. The human visual system first selects the main parts of the image, which is captured by visual attention models, and then performs object recognition as a hierarchical operation; the HMAX model was proposed on this basis. HMAX is a feedforward hierarchical object recognition model whose structure and parameters are chosen according to the biological characteristics of the visual cortex. It is a four-layer hierarchical neural network composed of alternating simple and complex layers. Because of the high complexity of the human visual system, it is practically impossible to replicate it fully. Separate models have been proposed for each of these operations, but the human visual system performs them seamlessly; combining the principles of these models is therefore expected to come closer to the human visual system and yield a higher recognition rate. In this paper, we introduce an architecture for image classification that combines previous work grounded in the basic operation of the visual cortex. According to the results, the proposed model achieves a much higher recognition rate than the baseline HMAX model. Simulations were performed on the Caltech101 database.
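
The HMAX structure sketched above alternates simple (S) layers that apply oriented filters with complex (C) layers that pool locally for invariance. The snippet below illustrates only the first S1/C1 pair, using scikit-image Gabor filtering and SciPy max pooling; the multi-scale bands, the S2/C2 template-matching stages, and the paper's combined architecture are omitted, and the parameter values are arbitrary.

```python
import numpy as np
from scipy.ndimage import maximum_filter
from skimage.filters import gabor

def hmax_s1_c1(image, n_orientations=4, frequency=0.25, pool_size=8):
    """First two HMAX-style stages on a grayscale image.

    S1: oriented Gabor energy (simple-cell-like filtering).
    C1: local max pooling over space (complex-cell-like invariance).
    Returns C1 maps of shape (n_orientations, H, W).
    """
    c1_maps = []
    for i in range(n_orientations):
        theta = i * np.pi / n_orientations
        real, imag = gabor(image, frequency=frequency, theta=theta)
        s1 = np.sqrt(real ** 2 + imag ** 2)      # S1: Gabor energy
        c1 = maximum_filter(s1, size=pool_size)  # C1: local max pooling
        c1_maps.append(c1)
    return np.stack(c1_maps)
```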

