Spatial Attention Enhances the Neural Representation of Invisible Signals Embedded in Noise

2018
Vol 30 (8)
pp. 1119-1129
Author(s):
Cooper A. Smout
Jason B. Mattingley

Recent evidence suggests that voluntary spatial attention can affect neural processing of visual stimuli that do not enter conscious awareness (i.e., invisible stimuli), supporting the notion that attention and awareness are dissociable processes [Wyart, V., Dehaene, S., & Tallon-Baudry, C. Early dissociation between neural signatures of endogenous spatial attention and perceptual awareness during visual masking. Frontiers in Human Neuroscience, 6, 1–14, 2012; Watanabe, M., Cheng, K., Murayama, Y., Ueno, K., Asamizuya, T., Tanaka, K., et al. Attention but not awareness modulates the BOLD signal in the human V1 during binocular suppression. Science, 334, 829–831, 2011]. To date, however, no study has demonstrated that these effects reflect enhancement of the neural representation of invisible stimuli per se, as opposed to other neural processes not specifically tied to the stimulus in question. In addition, it remains unclear whether spatial attention can modulate neural representations of invisible stimuli in direct competition with highly salient and visible stimuli. Here we developed a novel EEG frequency-tagging paradigm to obtain a continuous readout of human brain activity associated with visible and invisible signals embedded in dynamic noise. Participants (n = 23) detected occasional contrast changes in one of two flickering image streams on either side of fixation. Each image stream contained a visible or invisible signal embedded in every second noise image, the visibility of which was titrated and checked using a two-interval forced-choice detection task. Steady-state visual-evoked potentials (SSVEPs) were computed from EEG data at the signal and noise frequencies of interest. Cluster-based permutation analyses revealed significant neural responses to both visible and invisible signals across posterior scalp electrodes. Control analyses revealed that these responses did not reflect a subharmonic response to noise stimuli. In line with previous findings, spatial attention increased the neural representation of visible signals. Crucially, spatial attention also increased the neural representation of invisible signals. As such, the present results replicate and extend previous studies by demonstrating that attention can modulate the neural representation of invisible signals that are in direct competition with highly salient masking stimuli.
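
For readers unfamiliar with frequency tagging, the sketch below shows the core computation in minimal form: estimate SSVEP amplitude at a tagged frequency from epoched EEG via the Fourier spectrum. The sampling rate, epoch length, and the 7.5 Hz tag are illustrative stand-ins, not the study's actual parameters; note that because the signal appears in every second noise image, its tag frequency would be half the noise frequency.

```python
# Minimal sketch (assumed parameters): SSVEP amplitude at a tagged frequency.
import numpy as np

def ssvep_amplitude(eeg, fs, freq):
    """Mean amplitude at `freq` across epochs, per channel.

    eeg  : array of shape (n_epochs, n_channels, n_samples)
    fs   : sampling rate in Hz
    freq : tagging frequency of interest in Hz
    """
    n = eeg.shape[-1]
    spectrum = np.abs(np.fft.rfft(eeg, axis=-1)) / n      # amplitude spectrum
    bins = np.fft.rfftfreq(n, d=1.0 / fs)                 # bin frequencies
    idx = np.argmin(np.abs(bins - freq))                  # nearest bin to tag
    return spectrum[..., idx].mean(axis=0)                # average over epochs

rng = np.random.default_rng(0)
fs = 256
t = np.arange(0, 2, 1 / fs)                               # 2-s epochs
# Synthetic EEG: a weak 7.5 Hz "signal" response buried in noise, 64 channels
eeg = rng.standard_normal((40, 64, t.size)) + 0.3 * np.sin(2 * np.pi * 7.5 * t)
amp_signal = ssvep_amplitude(eeg, fs, freq=7.5)
amp_control = ssvep_amplitude(eeg, fs, freq=11.0)         # untagged control
print(amp_signal.mean(), amp_control.mean())              # signal > control
```

Attention effects would then correspond to comparing such amplitudes for attended versus unattended streams.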


2021
Vol 15
Author(s):
Chi Zhang
Xiao-Han Duan
Lin-Yuan Wang
Yong-Li Li
Bin Yan
...

Despite the remarkable similarities between convolutional neural networks (CNNs) and the human brain, CNNs still fall behind humans in many visual tasks, indicating that considerable differences remain between the two systems. Here, we leverage adversarial noise (AN) and adversarial interference (AI) images to quantify the consistency between neural representations and perceptual outcomes in the two systems. Humans successfully recognize AI images as belonging to the same categories as their corresponding regular images but perceive AN images as meaningless noise. In contrast, CNNs recognize AN images as belonging to the same categories as their corresponding regular images but classify AI images into the wrong categories with surprisingly high confidence. We use functional magnetic resonance imaging to measure brain activity evoked by regular and adversarial images in the human brain, and compare it to the activity of artificial neurons in a prototypical CNN, AlexNet. In the human brain, we find that the representational similarity between regular and adversarial images largely echoes their perceptual similarity in all early visual areas. In AlexNet, however, the neural representations of adversarial images are inconsistent with network outputs in all intermediate processing layers, providing no neural foundations for the similarities at the perceptual level. Furthermore, we show that voxel-encoding models trained on regular images can successfully generalize to the neural responses to AI images but not to AN images. These remarkable differences between the human brain and AlexNet in representation–perception association suggest that future CNNs should emulate both the behavior and the internal neural representations of the human brain.
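
As a concrete illustration of the representational-similarity logic used in comparisons like this, the sketch below builds a representational dissimilarity matrix (RDM) for each system from synthetic response patterns and correlates the two. All shapes and data are invented for illustration, not taken from the study.

```python
# Hedged sketch: second-order (RDM) similarity between two systems.
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

def rdm(patterns):
    """Correlation-distance RDM from an (n_items, n_features) matrix."""
    return pdist(patterns, metric="correlation")

rng = np.random.default_rng(1)
brain = rng.standard_normal((20, 500))            # 20 images x 500 voxels
# Loosely related "CNN" responses: a noisy random projection of brain patterns
cnn = brain @ rng.standard_normal((500, 300)) + rng.standard_normal((20, 300))

rho, p = spearmanr(rdm(brain), rdm(cnn))          # second-order similarity
print(f"brain-CNN RDM correlation: rho={rho:.2f}, p={p:.3g}")
```

In the study's terms, a brain RDM over regular and adversarial images that tracks perceptual similarity, while the CNN RDM does not, is the dissociation of interest.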


2021
Author(s):
Rohan Saha
Jennifer Campbell
Janet F. Werker
Alona Fyshe

Infants develop rudimentary language skills and can understand simple words well before their first birthday. Evidence for this development has come primarily from event-related potential (ERP) studies of word comprehension in the infant brain. While these works validate the presence of semantic representations of words (word meaning) in infants, they do not tell us about the mental processes involved in the manifestation of these semantic representations or about the content of the representations. To this end, we use a decoding approach, employing machine learning techniques on electroencephalography (EEG) data to predict the semantic representations of words found in the brain activity of infants. We perform multiple analyses to explore word semantic representations in two groups of infants (9-month-olds and 12-month-olds). Our analyses show significantly above-chance decodability of overall word semantics, word animacy, and word phonetics. We observe that participants in both age groups show signs of word comprehension immediately after word onset, marked by our model's significantly above-chance word prediction accuracy. We also observe strong neural representations of word phonetics in the brain data for both age groups, some likely correlated with word decoding accuracy and others not. Lastly, we find that the neural representations of word semantics are similar in both infant age groups. Our results on the decodability of word semantics, phonetics, and animacy give us insights into the development of neural representations of word meaning in infants.
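
A minimal sketch of this style of decoding analysis appears below: ridge regression maps EEG features to word-embedding vectors, scored with a pairwise ("2 vs. 2") test commonly used in this literature. The data are synthetic and the dimensions arbitrary; the real pipeline would use preprocessed infant EEG and embeddings from a corpus model.

```python
# Hedged sketch: ridge decoding of word vectors from EEG, 2 vs. 2 evaluation.
import numpy as np
from itertools import combinations
from sklearn.linear_model import Ridge

rng = np.random.default_rng(2)
n_words, n_eeg_feats, n_sem_dims = 16, 200, 50
semantics = rng.standard_normal((n_words, n_sem_dims))         # word vectors
mixing = rng.standard_normal((n_sem_dims, n_eeg_feats))
eeg = semantics @ mixing + 0.5 * rng.standard_normal((n_words, n_eeg_feats))

def dist(a, b):
    return np.linalg.norm(a - b)

correct = 0
pairs = list(combinations(range(n_words), 2))
for i, j in pairs:                                             # leave-2-out CV
    train = [k for k in range(n_words) if k not in (i, j)]
    model = Ridge(alpha=1.0).fit(eeg[train], semantics[train])
    pi, pj = model.predict(eeg[[i, j]])
    # Correct if the matched pairing beats the swapped pairing
    correct += (dist(pi, semantics[i]) + dist(pj, semantics[j])
                < dist(pi, semantics[j]) + dist(pj, semantics[i]))
print(f"2 vs. 2 accuracy: {correct / len(pairs):.2f} (chance = 0.50)")
```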


2021
Author(s):
Sheena Waters
Elise Kanber
Nadine Lavan
Michel Belyk
Daniel Carey
...

Humans have a remarkable capacity to finely control the muscles of the larynx, via distinct patterns of cortical topography and innervation that may underpin our sophisticated vocal capabilities compared with non-human primates. Here, we investigated the behavioural and neural correlates of laryngeal control, and their relationship to vocal expertise, using an imitation task that required adjustments of larynx musculature during speech. Highly trained human singers and non-singer control participants modulated voice pitch and vocal tract length (VTL) to mimic auditory speech targets, while undergoing real-time anatomical scans of the vocal tract and functional scans of brain activity. Multivariate analyses of speech acoustics, larynx movements and brain activation data were used to quantify vocal modulation behaviour, and to search for neural representations of the two modulated vocal parameters during the preparation and execution of speech. We found that singers showed more accurate task-relevant modulations of speech pitch and VTL (i.e. larynx height, as measured with vocal tract MRI) during speech imitation; this was accompanied by stronger representation of VTL within a region of right dorsal somatosensory cortex. Our findings suggest a common neural basis for enhanced vocal control in speech and song.
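
One way to test for such a parameter representation, sketched below with invented data, is cross-validated regression from trialwise voxel patterns to the continuous parameter (here VTL): above-chance predictive accuracy suggests the region carries a linear code for the parameter. This is an illustrative stand-in, not the authors' exact multivariate pipeline.

```python
# Hedged sketch: does a voxel pattern linearly encode a continuous parameter?
import numpy as np
from sklearn.linear_model import RidgeCV
from sklearn.model_selection import KFold, cross_val_score

rng = np.random.default_rng(3)
n_trials, n_voxels = 120, 300
vtl = rng.uniform(12, 17, n_trials)                   # trialwise VTL (cm)
weights = rng.standard_normal(n_voxels)               # hypothetical voxel code
bold = (np.outer(vtl - vtl.mean(), weights)
        + 5 * rng.standard_normal((n_trials, n_voxels)))

model = RidgeCV(alphas=np.logspace(-2, 4, 13))
scores = cross_val_score(model, bold, vtl,
                         cv=KFold(5, shuffle=True, random_state=0),
                         scoring="r2")
print(f"cross-validated R^2: {scores.mean():.2f}")    # > 0 suggests a VTL code
```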


2021
Author(s):
Ze Fu
Xiaosha Wang
Xiaoying Wang
Huichao Yang
Jiahuan Wang
...

A critical way for humans to acquire, represent and communicate information is through language, yet the underlying computational mechanisms through which language contributes to our representations of word meaning are poorly understood. We compared three major types of word-relation measures derived from a large language corpus (simple co-occurrence, graph-space relations, and neural-network vector-embedding relations) in terms of their association with words' brain activity patterns, measured in two functional magnetic resonance imaging (fMRI) experiments. Word relations derived from the graph-space representation, and not from vector embeddings, had unique explanatory power for the neural activity patterns in brain regions that have been shown to be particularly sensitive to language processes, including the anterior temporal lobe (capturing graph-common-neighbors), inferior frontal gyrus, and posterior middle/inferior temporal gyrus (capturing graph-shortest-path). These results were robust across different window sizes and graph sizes and were relatively specific to language inputs. These findings highlight the role of cumulative language inputs in organizing the neural representations of word meaning and provide a mathematical model to explain how different brain regions capture different types of language-derived information.
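
The toy sketch below illustrates the three relation types on an invented three-sentence "corpus": windowed co-occurrence counts, common neighbors in the co-occurrence graph, and shortest graph path. The corpus and window size are made up; note how the graph measures can relate words (here "dog" and "cat") that never co-occur directly within the window.

```python
# Toy sketch: three corpus-derived word-relation measures.
from collections import Counter
import networkx as nx

corpus = [["dog", "barked", "at", "the", "cat"],
          ["the", "cat", "chased", "the", "dog"],
          ["the", "dog", "ate", "food"]]

window = 2
cooc = Counter()
for sent in corpus:
    for i, w in enumerate(sent):
        for v in sent[i + 1:i + 1 + window]:          # words within the window
            cooc[frozenset((w, v))] += 1

# Co-occurrence graph: nodes are words, edges are windowed co-occurrences
G = nx.Graph(tuple(pair) for pair in cooc if len(pair) == 2)
u, v = "dog", "cat"
print("direct co-occurrence:", cooc[frozenset((u, v))])        # 0 here
print("graph common neighbors:", sorted(nx.common_neighbors(G, u, v)))
print("graph shortest path:", nx.shortest_path_length(G, u, v))
```

In the study's framing, each measure yields a word-by-word relation matrix that can be compared against neural activity patterns region by region.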


2020
Author(s):
Tomoyasu Horikawa
Yukiyasu Kamitani

Summary: Visual image reconstruction from brain activity produces images whose features are consistent with the neural representations in the visual cortex given arbitrary visual instances [1–3], presumably reflecting the person's visual experience. Previous reconstruction studies have been concerned either with how faithfully stimulus images are reconstructed or with whether mentally imagined contents can be reconstructed in the absence of external stimuli. However, many lines of vision research have demonstrated that even stimulus perception is shaped both by stimulus-induced processes and by top-down processes. In particular, attention (or the lack of it) is known to profoundly affect visual experience [4–8] and brain activity [9–21]. Here, to investigate how top-down attention impacts the neural representation of visual images and their reconstructions, we use a state-of-the-art method (deep image reconstruction [3]) to reconstruct visual images from fMRI activity measured while subjects attend to one of two images superimposed with equally weighted contrasts. Deep image reconstruction exploits the hierarchical correspondence between the brain and a deep neural network (DNN) to translate (decode) brain activity into DNN features of multiple layers, and then creates images that are consistent with the decoded DNN features [3, 22, 23]. Using the deep image reconstruction model trained on fMRI responses to single natural images, we decode brain activity during the attention trials. Behavioral evaluations show that the reconstructions resemble the attended rather than the unattended images. The reconstructions can be modeled by superimposed images with contrasts biased toward the attended one, which are comparable to the appearance of the stimuli under attention measured in a separate session. Attentional modulations are found in a broad range of hierarchical visual representations and mirror the brain–DNN correspondence. Our results demonstrate that top-down attention counters stimulus-induced responses and modulates neural representations to render reconstructions in accordance with subjective appearance. The reconstructions appear to reflect the content of visual experience and volitional control, opening a new possibility of brain-based communication and creation.
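
The image-creation step can be boiled down to feature matching, as in the hedged sketch below: optimize pixel values so that a DNN's activations approach target features. In the real pipeline the targets are features decoded from fMRI across multiple layers; here they are a random stand-in, and an untrained AlexNet keeps the example self-contained.

```python
# Bare-bones sketch of feature matching in deep image reconstruction.
import torch
import torchvision.models as models

features = models.alexnet(weights=None).features.eval()  # untrained stand-in;
for p in features.parameters():                          # the method uses a
    p.requires_grad_(False)                              # trained network

target = torch.randn(1, 256, 6, 6)        # stand-in for fMRI-decoded features
image = torch.randn(1, 3, 224, 224, requires_grad=True)  # image being created
opt = torch.optim.Adam([image], lr=0.05)

for step in range(200):                   # gradient descent on the pixels
    opt.zero_grad()
    loss = torch.nn.functional.mse_loss(features(image), target)
    loss.backward()
    opt.step()
print(f"final feature loss: {loss.item():.3f}")
```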


Author(s):
Giuseppe Ugazio
Marcus Grueschow
Rafael Polania
Claus Lamm
Philippe Tobler
...

Abstract: Moral preferences pervade many aspects of our lives, dictating how we ought to behave, whom we can marry and even what we eat. Despite their relevance, one fundamental question remains unanswered: where do individual moral preferences come from? It is often thought that all types of preferences reflect properties of domain-general neural decision mechanisms that employ a common ‘neural currency’ to value choice options in many different contexts. This view, however, appears at odds with the observation that many humans consider it intuitively wrong to employ the same scale to compare moral value (e.g. of a human life) with material value (e.g. of money). In this paper, we directly test whether moral subjective values are represented by the same neural processes as financial subjective values. In a study combining functional magnetic resonance imaging with a novel behavioral paradigm, we identify neural representations of the subjective values of human lives or financial payoffs by means of structurally identical computational models. Correlating isomorphic model variables from both domains with brain activity reveals specific patterns of neural activity that selectively represent values in the moral (right temporo-parietal junction) or financial (ventromedial prefrontal cortex) domain. Intriguingly, our findings show that human lives and money are valued in (at least partially) distinct neural currencies, supporting theoretical proposals that human moral behavior is guided by processes that are distinct from those underlying behavior driven by personal material benefit.
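
The sketch below illustrates the "structurally identical model" idea in minimal form: the same softmax choice rule is fit to simulated binary choices to recover a value-sensitivity parameter, and the fitted trialwise values would then enter the fMRI analysis as parametric regressors, separately per domain. All quantities are simulated; the inverse-temperature parameterization is one common convention, not necessarily the authors' exact model.

```python
# Hedged sketch: fitting a softmax choice model to recover subjective values.
import numpy as np
from scipy.optimize import minimize
from scipy.special import expit

rng = np.random.default_rng(4)
n = 300
v_left, v_right = rng.uniform(0, 1, (2, n))            # option values per trial
true_temp = 4.0                                         # inverse temperature
choose_left = rng.random(n) < expit(true_temp * (v_left - v_right))

def neg_log_lik(params):
    temp = params[0]
    p_left = expit(temp * (v_left - v_right))           # softmax, two options
    p = np.where(choose_left, p_left, 1 - p_left)
    return -np.sum(np.log(np.clip(p, 1e-9, 1)))

fit = minimize(neg_log_lik, x0=[1.0], bounds=[(1e-3, 50)])
print(f"recovered inverse temperature: {fit.x[0]:.2f} (true = {true_temp})")
# Trialwise values under the fitted model would serve as parametric fMRI
# regressors, one set per domain (moral vs. financial), in the actual analysis.
```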


2022
Vol 5 (1)
Author(s):
Tomoyasu Horikawa
Yukiyasu Kamitani

Abstract: Stimulus images can be reconstructed from visual cortical activity. However, our perception of stimuli is shaped by both stimulus-induced and top-down processes, and it is unclear whether and how reconstructions reflect top-down aspects of perception. Here, we investigate the effect of attention on reconstructions using fMRI activity measured while subjects attend to one of two superimposed images. A state-of-the-art method is used for image reconstruction, in which brain activity is translated (decoded) into deep neural network (DNN) features of hierarchical layers and then into an image. Reconstructions resemble the attended rather than the unattended images. They can be modeled by superimposed images with biased contrasts, comparable to the appearance during attention. Attentional modulations are found in a broad range of hierarchical visual representations and mirror the brain–DNN correspondence. Our results demonstrate that top-down attention counters stimulus-induced responses, modulating neural representations to render reconstructions in accordance with subjective appearance.


2017
Author(s):
Mari Herigstad
Olivia Faull
Anja Hayen
Eleanor Evans
Maxine F. Hardinge
...

Abstract
Background: Breathlessness in chronic obstructive pulmonary disease (COPD) is often discordant with airway pathophysiology ("over-perception"). Pulmonary rehabilitation has profound effects upon breathlessness without influencing lung function. Learned associations can influence brain mechanisms of sensory perception. We therefore hypothesised that improvements in breathlessness with pulmonary rehabilitation may be explained by changing neural representations of learned associations, reducing "over-perception".
Methods: In 31 patients with COPD, we tested how pulmonary rehabilitation altered the relationship between brain activity during a word-cue task probing learned associations (measured with functional magnetic resonance imaging) and clinical and psychological measures of breathlessness.
Results: Improvements in breathlessness and breathlessness-anxiety correlated with reductions in word-cue-related activity in the insula and anterior cingulate cortex (ACC) (breathlessness), and with increased activations in attention-regulation and motor networks (breathlessness-anxiety). Greater baseline (pre-rehabilitation) activity in the insula, ACC and prefrontal cortex correlated with the magnitude of improvement in breathlessness and breathlessness-anxiety.
Conclusions: Pulmonary rehabilitation reduces the influence of learned associations upon the neural processes that generate breathlessness. Patients with stronger word-cue-related activity at baseline benefited more from pulmonary rehabilitation. These findings highlight the importance of targeting learned associations within treatments for COPD, demonstrating how neuroimaging may contribute to patient stratification and more successful personalised therapy.
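
As a minimal illustration of the brain-behaviour correlation reported here, the sketch below correlates simulated baseline region-of-interest activity with simulated improvement scores; in the actual study these would be word-cue-related fMRI estimates and changes in breathlessness measures across rehabilitation.

```python
# Toy sketch: baseline ROI activity vs. clinical improvement (simulated data).
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(5)
n_patients = 31
baseline_roi = rng.standard_normal(n_patients)          # e.g. insula estimate
improvement = 0.6 * baseline_roi + rng.standard_normal(n_patients)

r, p = pearsonr(baseline_roi, improvement)
print(f"r = {r:.2f}, p = {p:.3g}")   # positive r: stronger baseline activity,
                                     # larger rehabilitation benefit
```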


2020
Author(s):
Isabelle Rosenthal
Shridhar Singh
Katherine Hermann
Dimitrios Pantazis
Bevil R. Conway

The geometry that describes the relationship among colors is unsettled despite centuries of study. Here we present a new approach, using multivariate analyses of direct measurements of brain activity obtained with magnetoencephalography to reverse-engineer the geometry of the neural representation of color space. The analyses depend upon determining similarity relationships among the neural responses to different colors and assessing how these relationships change over time. To evaluate the approach, we relate patterns of neural activity to universal patterns in color naming. Control experiments showed that decoders trained on responses to color words could not decode activity elicited by color stimuli. The results suggest that three patterns of color naming can be accounted for by decoding the similarity relationships in the neural representation of color: the association of warm colors such as reds and oranges with “light” and cool colors such as blues and greens with “dark”; the greater precision among all languages in naming warm colors compared to cool colors; and the preeminence of red.
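
A compact sketch of the reverse-engineering step appears below: given a neural dissimilarity matrix over colors (here simulated from hue distances plus noise, standing in for pairwise differences among MEG response patterns), multidimensional scaling recovers a low-dimensional geometry that can then be compared with color-naming patterns. The eight-color setup is invented for illustration.

```python
# Hedged sketch: embedding a neural dissimilarity matrix with MDS.
import numpy as np
from sklearn.manifold import MDS

rng = np.random.default_rng(6)
n_colors = 8
hues = np.linspace(0, 2 * np.pi, n_colors, endpoint=False)
# Simulated dissimilarity: larger for hues farther apart on the color circle
rdm = (np.abs(np.sin((hues[:, None] - hues[None, :]) / 2))
       + 0.05 * rng.random((n_colors, n_colors)))
rdm = (rdm + rdm.T) / 2                                 # enforce symmetry
np.fill_diagonal(rdm, 0)

embedding = MDS(n_components=2, dissimilarity="precomputed", random_state=0)
coords = embedding.fit_transform(rdm)                   # recovered geometry
print(np.round(coords, 2))                              # roughly circular layout
```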

