Limits on visual awareness of object targets in the context of other object category masks: Investigating bottlenecks in the continuous flash suppression paradigm with hand and tool stimuli

2018 ◽  
Author(s):  
Regine Zopf ◽  
Stefan R. Schweinberger ◽  
Anina N. Rich

Abstract
Our capacity to become aware of visual stimuli is limited. Investigating these limits, Cohen et al. (2015, Journal of Cognitive Neuroscience) found that certain object categories (e.g., faces) were more effective at blocking awareness of other categories (e.g., buildings) than other combinations (e.g., cars/chairs) in the continuous flash suppression (CFS) task. They also found that greater category-pair representational similarity in higher visual cortex was related to longer category-pair breakthrough times, suggesting that the high-level representational architecture forms a bottleneck for visual awareness. As the cortical representations of hands and tools overlap, these categories are ideal for testing this further. We conducted CFS experiments and predicted longer breakthrough times for hand/tool pairs compared to other pairs. Contrary to these predictions, participants were generally faster at detecting targets masked by hands or tools than by other mask categories, whether giving manual (Experiment 1) or vocal responses (Experiment 2). Furthermore, we found the same inefficient-mask effect for hands in the context of the categories used by Cohen et al. (2015), together with a behavioural pattern similar to the original paper (Experiment 3). Exploring potential low-level explanations, we found that the category average for edges (e.g., hands have less edge detail than cars) was the best predictor of the data. However, these category-specific image characteristics could not completely account for the Cohen et al. (2015) category pattern or for the hand/tool effects. Thus, several low- and high-level object-category-specific limits on visual awareness are plausible, and more investigations are needed to further tease these apart.
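
The edge-based analysis described here reduces to a simple pipeline: compute an edge measure per image, average it within each mask category, and correlate the category means with breakthrough times. Below is a minimal Python sketch of that logic; the Canny thresholds, the stimuli/<category>/*.png folder layout, and the breakthrough_ms values are illustrative assumptions, not the authors' materials.

```python
# Hypothetical sketch: per-category edge density as a predictor of mean CFS
# breakthrough times. File layout, thresholds, and timing values are
# placeholders for illustration only.
from pathlib import Path

import cv2
import numpy as np
from scipy.stats import pearsonr

def edge_density(image_path, lo=100, hi=200):
    """Fraction of pixels marked as edges by a Canny detector."""
    img = cv2.imread(str(image_path), cv2.IMREAD_GRAYSCALE)
    edges = cv2.Canny(img, lo, hi)
    return np.mean(edges > 0)

# One folder of mask images per category (assumed layout).
categories = ["hands", "tools", "cars", "chairs", "faces", "buildings"]
mean_density = {
    c: np.mean([edge_density(p) for p in Path("stimuli", c).glob("*.png")])
    for c in categories
}

# Illustrative mean breakthrough times (ms) per mask category.
breakthrough_ms = {"hands": 950, "tools": 990, "cars": 1180,
                   "chairs": 1150, "faces": 1320, "buildings": 1240}

x = [mean_density[c] for c in categories]
y = [breakthrough_ms[c] for c in categories]
r, p = pearsonr(x, y)
print(f"edge density vs. breakthrough time: r = {r:.2f}, p = {p:.3f}")
```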

Emotion ◽  
2017 ◽  
Vol 17 (8) ◽  
pp. 1199-1207 ◽  
Author(s):  
Timo Stein ◽  
Caitlyn Grubb ◽  
Maria Bertrand ◽  
Seh Min Suh ◽  
Sara C. Verosky

2017 ◽  
Vol 117 (1) ◽  
pp. 388-402 ◽  
Author(s):  
Michael A. Cohen ◽  
George A. Alvarez ◽  
Ken Nakayama ◽  
Talia Konkle

Visual search is a ubiquitous visual behavior, and efficient search is essential for survival. Different cognitive models have explained the speed and accuracy of search based either on the dynamics of attention or on the similarity of item representations. Here, we examined the extent to which performance on a visual search task can be predicted from the stable representational architecture of the visual system, independent of attentional dynamics. Participants performed a visual search task with 28 conditions reflecting different pairs of categories (e.g., searching for a face among cars, a body among hammers, etc.). The time it took participants to find the target item varied as a function of category combination. In a separate group of participants, we measured the neural responses to these object categories when items were presented in isolation. Using representational similarity analysis, we then examined whether the similarity of neural responses across different subdivisions of the visual system had the requisite structure needed to predict visual search performance. Overall, we found strong brain/behavior correlations across most of the higher-level visual system, including both the ventral and dorsal pathways, when considering both macroscale sectors and smaller mesoscale regions. These results suggest that visual search for real-world object categories is well predicted by the stable, task-independent architecture of the visual system.

NEW & NOTEWORTHY
Here, we ask which neural regions have response patterns that correlate with behavioral performance in a visual processing task. We found that the representational structure across all of high-level visual cortex has the requisite structure to predict behavior. Furthermore, when directly comparing different neural regions, we found that they all had highly similar category-level representational structures. These results point to a ubiquitous and uniform representational structure in high-level visual cortex underlying visual object processing.
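
The core brain/behavior test described here is a representational similarity analysis: vectorize a neural representational dissimilarity matrix (RDM) and a behavioral search-time matrix over the same category pairs, then rank-correlate them. Below is a minimal sketch of that logic with placeholder data; the random arrays stand in for measured neural patterns and reaction times.

```python
# A minimal RSA sketch: correlate the neural dissimilarity structure of
# object categories with pairwise visual search times. All data here are
# random placeholders, not the study's measurements.
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

rng = np.random.default_rng(0)
n_categories, n_voxels = 8, 500

# Mean response pattern per category in some visual region (placeholder).
patterns = rng.normal(size=(n_categories, n_voxels))

# Neural RDM: 1 - Pearson correlation between category patterns,
# vectorized over the upper triangle (28 pairs for 8 categories).
neural_rdm = pdist(patterns, metric="correlation")

# Behavioral matrix: mean time to find a target of category i among
# distractors of category j, symmetrized and vectorized the same way.
search_rt = rng.normal(loc=1.0, scale=0.2, size=(n_categories, n_categories))
search_rt = (search_rt + search_rt.T) / 2
behavior = search_rt[np.triu_indices(n_categories, k=1)]

# If similar categories are harder to search among, neural dissimilarity
# should correlate with search time.
rho, p = spearmanr(neural_rdm, behavior)
print(f"brain/behavior correlation: rho = {rho:.2f}, p = {p:.3f}")
```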


2020 ◽  
Vol 10 (15) ◽  
pp. 5333 ◽
Author(s):  
Anam Manzoor ◽  
Waqar Ahmad ◽  
Muhammad Ehatisham-ul-Haq ◽  
Abdul Hannan ◽  
Muhammad Asif Khan ◽  
...  

Emotions are a fundamental part of human behavior and can be stimulated in numerous ways. In everyday life we encounter many types of objects, such as cakes, crabs, televisions, and trees, that may excite certain emotions. Likewise, the object images that we see and share on different platforms are also capable of expressing or inducing human emotions. Inferring emotion tags from these object images has great significance, as it can play a vital role in recommendation systems, image retrieval, human behavior analysis, and advertisement applications. Existing schemes for emotion tag perception are based on visual features, such as the color and texture of an image, which are adversely affected by lighting conditions. The main objective of our study is to address this problem by introducing a novel idea: inferring emotion tags from images based on object-related features. To this end, we first created an emotion-tagged dataset from the publicly available object detection dataset (i.e., "Caltech-256") using subjective evaluations from 212 users. Next, we used a convolutional neural network-based model to automatically extract high-level features from object images for recognizing nine emotion categories: amusement, awe, anger, boredom, contentment, disgust, excitement, fear, and sadness. Experimental results on our emotion-tagged dataset endorse the success of the proposed idea in terms of accuracy, precision, recall, specificity, and F1-score. Overall, the proposed scheme achieved accuracy rates of approximately 85% and 79% for top-level and bottom-level emotion tagging, respectively. We also performed a gender-based analysis of the inferred emotion tags and observed that male and female subjects differ in how they perceive emotions for different object categories.
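
As a rough illustration of the CNN-based tagging step, the sketch below fine-tunes a pretrained backbone to output the nine emotion categories. The ResNet-18 backbone, optimizer settings, and random smoke-test batch are assumptions for illustration, not the authors' exact model.

```python
# Hedged sketch: fine-tune a pretrained CNN to predict nine emotion
# categories for object images. Backbone and hyperparameters are assumed.
import torch
import torch.nn as nn
from torchvision import models

EMOTIONS = ["amusement", "awe", "anger", "boredom", "contentment",
            "disgust", "excitement", "fear", "sadness"]

# Pretrained ImageNet features; replace the classifier head with 9 outputs.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, len(EMOTIONS))

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

def train_step(images, labels):
    """One supervised step on a batch of (image, emotion-index) pairs."""
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()
    return loss.item()

# Smoke test with a random batch shaped like 224x224 RGB images.
images = torch.randn(4, 3, 224, 224)
labels = torch.randint(len(EMOTIONS), (4,))
print(f"loss on random batch: {train_step(images, labels):.3f}")
```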


2013 ◽  
Vol 24 (11) ◽  
pp. 2859-2872 ◽  
Author(s):  
Joseph E. Dunsmoor ◽  
Philip A. Kragel ◽  
Alex Martin ◽  
Kevin S. LaBar

Author(s):  
Xiayu Chen ◽  
Ming Zhou ◽  
Zhengxin Gong ◽  
Wei Xu ◽  
Xingyu Liu ◽  
...  

Deep neural networks (DNNs) have attained human-level performance on dozens of challenging tasks via an end-to-end deep learning strategy. Deep learning allows data representations with multiple levels of abstraction; however, it does not explicitly provide any insight into the internal operations of DNNs. Deep learning's success appeals to neuroscientists both as a way to apply DNNs to model biological neural systems and as an opportunity to adopt concepts and methods from cognitive neuroscience to understand the internal representations of DNNs. Although general deep learning frameworks such as PyTorch and TensorFlow could support such cross-disciplinary investigations, using them typically requires considerable programming expertise and mathematical knowledge. A toolbox specifically designed for cognitive neuroscientists to map between DNNs and brains is urgently needed. Here, we present DNNBrain, a Python-based toolbox designed for exploring the internal representations of DNNs as well as brains. By integrating DNN software packages with well-established brain imaging tools, DNNBrain provides application programming and command line interfaces for a variety of research scenarios. These include extracting DNN activation, probing and visualizing DNN representations, and mapping DNN representations onto the brain. We expect that our toolbox will accelerate scientific research both by applying DNNs to model biological neural systems and by utilizing paradigms of cognitive neuroscience to unveil the black box of DNNs.
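
To make the "extracting DNN activation" scenario concrete, here is a generic PyTorch sketch using forward hooks. It illustrates the kind of workflow the toolbox wraps, not DNNBrain's own API; the choice of AlexNet and of the layer to record is arbitrary.

```python
# Generic activation extraction with plain PyTorch forward hooks
# (not DNNBrain's API, which this sketch does not attempt to reproduce).
import torch
from torchvision import models

model = models.alexnet(weights=models.AlexNet_Weights.DEFAULT).eval()
activations = {}

def save_activation(name):
    def hook(module, inputs, output):
        activations[name] = output.detach()
    return hook

# Register a hook on one convolutional layer of interest.
model.features[8].register_forward_hook(save_activation("conv4"))

# A random batch stands in for real stimulus images here.
stimuli = torch.randn(10, 3, 224, 224)
with torch.no_grad():
    model(stimuli)

# Per-stimulus activation patterns, ready for e.g. mapping onto fMRI data.
print(activations["conv4"].shape)  # torch.Size([10, 256, 13, 13])
```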


2010 ◽  
Vol 22 (6) ◽  
pp. 1235-1243 ◽  
Author(s):  
Marieke L. Schölvinck ◽  
Geraint Rees

Motion-induced blindness (MIB) is a visual phenomenon in which highly salient visual targets spontaneously disappear from visual awareness (and subsequently reappear) when superimposed on a moving background of distracters. Such fluctuations in awareness of targets that remain physically present provide an ideal paradigm for studying the neural correlates of visual awareness. Existing behavioral data on MIB are consistent both with a role for structures early in visual processing and with the involvement of high-level visual processes. To further investigate this issue, we used high-field functional MRI to measure signals in human low-level visual cortex and motion-sensitive area V5/MT while participants reported the disappearance and reappearance of an MIB target. Surprisingly, perceptual invisibility of the target was coupled to an increase in activity in low-level visual cortex and area V5/MT compared with when the target was visible. This increase was largest in retinotopic regions representing the target location. One possibility is that our findings reflect an active process of completion of the field of distracters that acts locally in the visual cortex, coupled to a more global process that facilitates invisibility across visual cortex in general. Our findings show that the earliest anatomical stages of human visual cortical processing are implicated in MIB, as in other forms of bistable perception.

