Nonverbal category knowledge limits the amount of information encoded in object representations: EEG evidence from 12-month-old infants

2021 · Vol 8 (3)
Author(s): Barbara Pomiechowska, Teodora Gliga

To what extent does language shape how we think about the world? Studies suggest that linguistic symbols expressing conceptual categories (‘apple’, ‘squirrel’) make us focus on categorical information (e.g. that you saw a squirrel) and disregard individual information (e.g. whether that squirrel had a long or short tail). Across two experiments with preverbal infants, we demonstrated that it is not language but nonverbal category knowledge that determines what information is packed into object representations. Twelve-month-olds (N = 48) participated in an electroencephalography (EEG) change-detection task involving objects undergoing a brief occlusion. When viewing objects from unfamiliar categories, infants detected both across- and within-category changes, as evidenced by their negative central wave (Nc) event-related potential. Conversely, when viewing objects from familiar categories, they did not respond to within-category changes, which indicates that nonverbal category knowledge interfered with the representation of individual surface features necessary to detect such changes. Furthermore, distinct patterns of γ and α oscillations between familiar and unfamiliar categories were evident before and during occlusion, suggesting that categorization had an influence on the format of recruited object representations. Thus, we show that nonverbal category knowledge has rapid and enduring effects on object representation and discuss their functional significance for generic knowledge acquisition in the absence of language.

2015 · Vol 114 (5) · pp. 2637-2648
Author(s): Fabrice Arcizet, Koorosh Mirpour, Daniel J. Foster, Caroline J. Charpentier, James W. Bisley

When looking around at the world, we can only attend to a limited number of locations. The lateral intraparietal area (LIP) is thought to play a role in guiding both covert attention and eye movements. In this study, we tested the involvement of LIP in both mechanisms with a change-detection task. In the task, animals had to indicate whether an element changed during a blank in the trial by making a saccade to it. If no element changed, they had to maintain fixation. We examined how the animal's behavior was biased by LIP activity prior to the presentation of the stimulus the animal had to respond to. When the activity was high, the animal was more likely to make an eye movement toward the stimulus, even if there was no change; when the activity was low, the animal either had a slower reaction time or maintained fixation, even if a change occurred. We conclude that LIP activity is involved in both covert and overt attention, but when decisions about eye movements are to be made, this role takes precedence over guiding covert attention.


2015 · Vol 282 (1819) · pp. 20151683
Author(s): Dora Kampis, Eugenio Parise, Gergely Csibra, Ágnes Melinda Kovács

A major feat of social beings is to encode what their conspecifics see, know or believe. While various non-human animals show precursors of these abilities, humans perform uniquely sophisticated inferences about other people's mental states. However, it is still unclear how these possibly human-specific capacities develop and whether preverbal infants, similarly to adults, form representations of other agents' mental states, specifically metarepresentations. We explored the neurocognitive bases of eight-month-olds' ability to encode the world from another person's perspective, using gamma-band electroencephalographic activity over the temporal lobes, an established neural signature for sustained object representation after occlusion. We observed such gamma-band activity when an object was occluded from the infants' perspective, as well as when it was occluded only from the other person (study 1), and also when subsequently the object disappeared, but the person falsely believed the object to be present (study 2). These findings suggest that the cognitive systems involved in representing the world from infants' own perspective are also recruited for encoding others' beliefs. Such results point to an early-developing, powerful apparatus suitable to deal with multiple concurrent representations, and suggest that infants can have a metarepresentational understanding of other minds even before the onset of language.


2018
Author(s): William Xiang Quan Ngiam, Kimberley L. C. Khaw, Alex O. Holcombe, Patrick T. Goodbourn

Visual working memory (VWM) is limited both in the capacity of information it can retain and in the rate at which it encodes that information. We examined the influence of stimulus complexity on these two limitations of VWM. Observers performed a change-detection task with English letters of various fonts, or letters from unfamiliar alphabets. Average perimetric complexity (κ)—an objective correlate of the number of features comprising each letter—differed among the fonts and alphabets. Varying the time between the memory array and mask, we used change-detection performance to estimate the number of items held in VWM (K) as a function of encoding time. For all alphabets, K increased over 270 ms (indicating the rate of encoding) before reaching an asymptote (indicating capacity). We found that rate and capacity for each alphabet were unrelated to complexity: performance was best modelled by assuming that both were limited by the number of items (K), rather than by the number of features (K × κ). We also found a higher encoding rate and capacity for familiar alphabets (~45 items/sec; ~4 items) than for unfamiliar alphabets (~12 items/sec; ~1.5 items). We then compared the familiar English alphabet to an unfamiliar artificial character set matched in complexity. Again, rate and capacity were higher for the familiar than for the unfamiliar stimuli. We conclude that the rate and capacity for encoding into visual working memory are determined by the number of familiar feature-integrated object representations.
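The abstract does not state the estimator or curve-fitting model the authors used. As an illustrative sketch only: capacity estimates (K) in change-detection tasks are commonly derived with Cowan's (2001) formula, K = N × (hit rate − false-alarm rate), and the rise of K with encoding time to an asymptote can be modelled as a saturating exponential whose initial slope is the encoding rate. The function names and parameter values below are hypothetical examples, not the paper's methods.

```python
import math

def cowan_k(n_items, hit_rate, false_alarm_rate):
    """Cowan's (2001) capacity estimate from change-detection data:
    K = N * (H - FA), where N is the memory-array set size."""
    return n_items * (hit_rate - false_alarm_rate)

def k_at_time(k_max, rate, t):
    """Saturating-exponential encoding model (one common choice):
    K(t) = K_max * (1 - exp(-rate * t / K_max)).
    'rate' is the initial encoding rate (items/sec), so the curve
    starts rising at 'rate' items/sec and asymptotes at K_max."""
    return k_max * (1.0 - math.exp(-rate * t / k_max))

# Hypothetical example: a 6-item array with 80% hits, 20% false alarms
k_est = cowan_k(6, 0.80, 0.20)  # ≈ 3.6 items

# With the familiar-alphabet values from the abstract (~4 items, ~45 items/sec),
# the model is already near asymptote by 270 ms, matching the reported time course.
k_270ms = k_at_time(k_max=4.0, rate=45.0, t=0.27)
```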


PeerJ · 2018 · Vol 6 · pp. e5601
Author(s): Chao Gu, Zhong-Xu Liu, Rosemary Tannock, Steven Woltering

Individuals with Attention-Deficit Hyperactivity Disorder (ADHD) are often characterized by deficits in working memory (WM), which manifest in academic, professional, and mental health difficulties. To better understand the underlying mechanisms of these presumed WM deficits, we compared adults with ADHD to their peers on behavioral and neural indices of WM. We used a visuospatial change-detection task with distractors, designed to assess the brain's ability to effectively filter distractors out of WM, in addition to testing for effects of WM load. Twenty-seven unmedicated adults with ADHD were compared to 27 matched peers on an event-related potential (ERP) measure of WM, the contralateral delay activity (CDA). Despite severe impairments in everyday functioning, adults with ADHD showed no deficits relative to their peers on behavioral tests of working memory. Interestingly, neural activity did differ between groups: the CDA of individuals with ADHD did not distinguish between the high-load, distractor, and low-load memory conditions. These data suggest, in the face of comparable behavioral performance, a difference in neural processing efficiency, wherein the brains of individuals with ADHD may not be as selective in allocating neural resources to perform a WM task.


2006 · Vol 27 (4) · pp. 218-228
Author(s): Paul Rodway, Karen Gillies, Astrid Schepman

This study examined whether individual differences in the vividness of visual imagery influenced performance on a novel long-term change detection task. Participants were presented with a sequence of pictures, with each picture and its title displayed for 17 s, and then presented with changed or unchanged versions of those pictures and asked to detect whether the picture had been changed. Cuing the retrieval of the picture's image, by presenting the picture's title before the arrival of the changed picture, facilitated change detection accuracy. This suggests that retrieval of the picture's representation immunizes it against overwriting by the arrival of the changed picture. The high- and low-vividness participants did not differ in overall levels of change detection accuracy. However, in replication of Gur and Hilgard (1975), high-vividness participants were significantly more accurate than low-vividness participants at detecting salient changes to pictures. The results suggest that vivid images are not characterised by a high level of detail, and that vivid imagery enhances memory for the salient aspects of a scene but not for all of its details. Possible causes of this difference, and how they may lead to an understanding of individual differences in change detection, are considered.


Author(s): Mitchell R. P. LaPointe, Rachael Cullen, Bianca Baltaretu, Melissa Campos, Natalie Michalski, ...

Author(s): Elise L. Radtke, Ulla Martens, Thomas Gruber

We applied high-density EEG to examine steady-state visual evoked potentials (SSVEPs) during a perceptual/semantic stimulus-repetition design. SSVEPs are oscillatory cortical responses evoked at the same frequency at which a visual stimulus is flickered. In repetition designs, stimuli are presented twice, with the repetition being task-irrelevant. Cortical processing of the second stimulus is commonly characterized by decreased neuronal activity (repetition suppression). The behavioral consequences of stimulus repetition were examined in a companion reaction-time pre-study using the same experimental design as the EEG study. During the first presentation of a stimulus, we confronted participants with drawings of familiar object images or with object words, respectively. The second stimulus was either a repetition of the same object image (perceptual repetition; PR) or an image depicting the word presented during the first presentation (semantic repetition; SR)—all flickered at 15 Hz to elicit SSVEPs. The behavioral study revealed priming effects in both experimental conditions (PR and SR). In the EEG, PR was associated with repetition suppression of SSVEP amplitudes at left occipital electrodes and repetition enhancement at left temporal electrodes. In contrast, SR was associated with SSVEP suppression at left occipital and central electrodes, originating in the bilateral postcentral and occipital gyri, the right middle frontal gyrus, and the right temporal gyrus. The conclusion of the present study is twofold. First, SSVEP amplitudes index not only perceptual aspects of incoming sensory information but also semantic aspects of cortical object representation. Second, our electrophysiological findings can be interpreted as neuronal underpinnings of perceptual and semantic priming.

