Metamemory and memory discrepancies in directed forgetting of emotional information

2021, Vol 17 (1), pp. 44-52
Author(s): Dicle Çapan, Simay Ikier

Directed Forgetting (DF) studies show that it is possible to exert cognitive control to intentionally forget information. The aim of the present study was to investigate how aware individuals are of the control they have over what they remember and forget when the information is emotional. Participants were presented with positive, negative and neutral photographs, and each photograph was followed by either a Remember or a Forget instruction. Then, for each photograph, participants provided Judgments of Learning (JOLs) by indicating their likelihood of recognizing that item on a subsequent test. In the recognition phase, participants were asked to indicate all old items, irrespective of instruction. Remember items had higher JOLs than Forget items for all item types, indicating that participants believe they can intentionally forget even emotional information, which the actual recognition results did not bear out. The DF effect, calculated by subtracting recognition of Forget items from that of Remember items, was significant only for neutral items. Emotional information disrupted cognitive control, eliminating the DF effect. Response times for JOLs showed that evaluating emotional information, especially negative information, takes longer and is thus more difficult. For both Remember and Forget items, JOLs were sensitive to the emotionality of the items, with emotional items receiving higher JOLs than neutral ones. Actual recognition was better only for negative items, not for positive ones. JOLs also underestimated actual recognition performance. Discrepancies in metacognitive judgments due to emotional valence, as well as the reasons for the underestimation, are discussed.

2019, Vol 72 (12), pp. 2833-2847
Author(s): Jasmine Virhia, Sonja A Kotz, Patti Adank

Observing someone speak automatically triggers the cognitive and neural mechanisms required to produce speech, a phenomenon known as automatic imitation. Automatic imitation of speech can be measured with the Stimulus Response Compatibility (SRC) paradigm, which shows facilitated response times (RTs) when responding to a prompt (e.g., say aa) in the presence of a congruent distracter (a video of someone saying aa) compared with an incongruent distracter (a video of someone saying oo). Current models of the relation between emotion and cognitive control suggest that automatic imitation can be modulated by varying stimulus-driven task aspects, that is, the distracter's emotional valence. It is unclear how the emotional state of the observer affects automatic imitation. The current study explored independent effects of the distracter's emotional valence (Stimulus-driven Dependence) and the observer's emotional state (State Dependence) on automatic imitation of speech. Participants completed an SRC paradigm for visual speech stimuli, producing a prompt superimposed over a neutral or emotional (happy or angry) distracter video. State Dependence was manipulated by asking participants to speak the prompt in a neutral or emotional (happy or angry) voice. Automatic imitation was facilitated for emotional prompts but not for emotional distracters, implying a facilitating effect of State Dependence. The results are interpreted in the context of theories of automatic imitation and cognitive control, and we suggest that models of automatic imitation should be modified to accommodate state-dependent and stimulus-driven effects.


2018, Vol 10 (1), pp. 19
Author(s): Yasuhiro Takeshima

Previous studies have not yet sufficiently investigated the relationship between visual processing and the intensity of negative emotional valence, that is, the degree of a dimensional component of emotional information. In Experiment 1, participants performed a visual search task with three valence levels: neutral (control) stimuli and high- and low-intensity negative emotional valence stimuli (both angry faces). Response times were shorter for high-intensity negative emotional valence stimuli than for low-intensity ones. In Experiment 2, participants were asked to detect a target face among successively presented faces, using the same facial stimuli as in Experiment 1. Accuracy was higher for angry faces than for neutral faces; however, performance did not differ as a function of negative emotional valence intensity. Overall, performance differences between negative emotional valence intensities were observed in visual search but not in the attentional blink task. Therefore, negative emotional valence intensity likely contributes to efficient encoding of visual information.


2021, pp. 136700692110165
Author(s): Sijia Hao, Lijuan Liang, Jue Wang, Huanhuan Liu, Baoguo Chen

Objectives: An experiment was conducted to explore how the emotional valence of contexts and the exposure frequency of novel words affect second language (L2) contextual word learning. Methodology: Chinese native speakers who had learned English in a formal classroom setting read English paragraphs of different emotional valence (positive, negative or neutral) across five different days. These paragraphs were embedded with pseudowords. During the learning process, a form recognition test and a meaning recall test were carried out for the pseudowords. Data and analysis: Data were analyzed using mixed-model ANOVA. Accuracy on each task was compared among the three kinds of emotional contexts. Findings/Conclusions: In the form recognition test, accuracy in the negative context was higher than in the positive and neutral contexts, and the pseudowords were acquired much earlier. In the meaning recall test, accuracy in the positive and negative contexts was higher than in the neutral context. Accuracy increased gradually with the exposure frequency of the pseudowords. More importantly, we found that fewer exposures were needed in emotional contexts than in the neutral context for contextual word learning. Originality: This may be the first study to explore the influence of emotional valence and exposure frequency on L2 contextual word learning. Significance/Implications: This study underlines the importance of emotional information in L2 contextual word learning and contributes to the understanding of how emotional information and exposure frequency function in this learning process.


2019, Vol 33 (5), pp. 943-951
Author(s): Stephen A. Dewhurst, Rachel J. Anderson, David Howe, Peter J. Clough

2006, Vol 95 (2), pp. 995-1007
Author(s): Rory Sayres, Kalanit Grill-Spector

Object-selective cortical regions exhibit a decreased response when an object stimulus is repeated [repetition suppression (RS)]. RS is often associated with priming: reduced response times and increased accuracy for repeated stimuli. It is unknown whether RS reflects stimulus-specific repetition, the associated changes in response time, or the combination of the two. To address this question, we performed a rapid event-related functional MRI (fMRI) study in which we measured BOLD signal in object-selective cortex, as well as object recognition performance, while we manipulated stimulus repetition. Our design allowed us to examine separately the roles of response time and repetition in explaining RS. We found that repetition played a robust role in explaining RS: repeated trials produced weaker BOLD responses than nonrepeated trials, even when comparing trials with matched response times. In contrast, response time played a weak role in explaining RS when repetition was controlled for: it explained BOLD responses only for one region of interest (ROI) and one experimental condition. Thus repetition suppression seems to be mostly driven by repetition rather than performance changes. We further examined whether RS reflects processes occurring at the same time as recognition or after recognition by manipulating stimulus presentation duration. In one experiment, durations were longer than required for recognition (2 s), whereas in a second experiment, durations were close to the minimum time required for recognition (85–101 ms). We found significant RS for brief presentations (albeit with a reduced magnitude), which again persisted when controlling for performance. This suggests a substantial amount of RS occurs during recognition.


2015, Vol 112 (10), pp. 3116-3121
Author(s): Tomáš Sieger, Tereza Serranová, Filip Růžička, Pavel Vostatek, Jiří Wild, ...

Both animal studies and studies using deep brain stimulation in humans have demonstrated the involvement of the subthalamic nucleus (STN) in motivational and emotional processes; however, participation of this nucleus in processing human emotion has not been investigated directly at the single-neuron level. We analyzed the relationship between the neuronal firing from intraoperative microrecordings from the STN during affective picture presentation in patients with Parkinson’s disease (PD) and the affective ratings of emotional valence and arousal performed subsequently. We observed that 17% of neurons responded to emotional valence and arousal of visual stimuli according to individual ratings. The activity of some neurons was related to emotional valence, whereas different neurons responded to arousal. In addition, 14% of neurons responded to visual stimuli. Our results suggest the existence of neurons involved in processing or transmission of visual and emotional information in the human STN, and provide evidence of separate processing of the affective dimensions of valence and arousal at the level of single neurons as well.


2020
Author(s): Gabriella Vigliocco, Marta Ponari, Courtenay Norbury

A recent study by Ponari et al. (2017) showed that emotional valence (i.e., whether a word evokes positive, negative or no affect) predicts age-of-acquisition ratings, and that up to the age of 8-9, children know abstract emotional words better than neutral ones. On the basis of these findings, emotional valence has been argued to provide a bootstrapping mechanism for the acquisition of abstract concepts. However, no previous work has directly assessed whether a word's valence, or the valence of the context in which the word is used, facilitates the learning of unknown abstract words. Here, we investigate whether valence supports the acquisition of novel abstract concepts. Children aged 7 to 10 years were taught novel abstract words and concepts (words typically learnt at an older age and that the children did not know); the words were either valenced (positive or negative) or neutral. We also manipulated the context in which the words were presented: for one group of children, the teaching strategy emphasised emotional information; for the other, it emphasised encyclopaedic, non-emotional information. Abstract words with emotional valence were learnt better than neutral abstract words by children up to the age of 8-9, replicating previous findings; no effect of teaching strategy was found. These results indicate that emotional valence supports the acquisition of abstract concepts, and further suggest that it is the valence information intrinsic to the word's meaning that plays a role, rather than the valence of the context in which the word is learnt.


2014, Vol 28 (1), pp. 11-21
Author(s): Paul Roux, Damien Vistoli, Anne Christophe, Christine Passerieux, Eric Brunet-Gouet

The present study investigated the ERP correlates of the integration of emotional prosody to the emotional meaning of a spoken word. Thirty-four nonclinical participants listened to negative and positive words that were spoken with an angry or happy prosody and classified the emotional valence of the word meaning while ignoring emotional prosody. Social anhedonia was also self-rated by the subjects. Compared to congruent trials, incongruent ones elicited slower and less accurate behavioral responses, and a smaller P300 component at the brain response level. The present data suggest that vocal emotional information is salient enough to be integrated early in verbal processing. The P300 amplitude modulation by the prosody-meaning congruency positively correlated with the social anhedonia score, suggesting that the sensitivity of the electrical brain response to emotional prosody increased with social anhedonia. Interpretations of this result in terms of emotional processing in social anhedonia are discussed.


2021, Vol 12
Author(s): Dolores Villalobos, Javier Pacios, Carmelo Vázquez

Research traditions on cognition and depression focus on relatively unconnected aspects of cognitive functioning. On one hand, the neuropsychological perspective has concentrated on cognitive control difficulties as a prominent feature of this condition. On the other hand, the clinical psychology perspective has focused on cognitive biases and repetitive negative patterns of thinking (i.e., rumination) for emotional information. A review of the literature from both fields reveals that difficulties are more evident for mood-congruent materials, suggesting that cognitive control difficulties interact with cognitive biases to hinder cognitive switching, working memory updating, and inhibition of irrelevant information. Connecting research from these two traditions, we propose a novel integrative cognitive model of depression in which the interplay between mood-congruent cognitive control difficulties, cognitive biases, and rumination may ultimately lead to ineffective emotion-regulation strategies to downregulate negative mood and upregulate positive mood.


2021
Author(s): Paola Escudero, Eline Adrianne Smit, Anthony Angwin

In recent years, cross-situational word learning (CSWL) paradigms have shown that novel words can be learned through implicit statistical learning. So far, CSWL studies with adult populations have focused on the presentation of spoken words (auditory information); however, words can also be learned through their written form (orthographic information). This study uses the CSWL paradigm to compare auditory and orthographic presentation of novel words with different degrees of phonological overlap. Additionally, we present both a lab-based and an online approach to behavioural testing: because of the COVID-19 pandemic, lab testing was prematurely terminated and continued online using a newly created online testing protocol. Analyses first compared accuracy and response times across modalities, showing better and faster recognition performance in CSWL when novel words were presented in their written form (orthographic condition) than in their spoken form (auditory condition). In addition, Bayesian modelling found that accuracy in the auditory condition was higher online than in the lab-based experiment, whereas performance in the orthographic condition was high in both experiments and generally exceeded the auditory condition. We discuss the implications of our findings for modality of presentation, as well as the benefits of our online testing protocol and its implementation in future research.

