Cognitive Emotional Regulation Model in Human-Robot Interaction

2015, Vol 2015, pp. 1-8
Author(s): Xin Liu, Lun Xie, Anqi Liu, Dan Li

This paper integrates the Gross cognitive process into an HMM (hidden Markov model) emotional regulation method and implements human-robot emotional interaction through facial expressions and behaviors. Energy serves as the psychological driving force of emotional transition in the cognitive emotional model. The input facial expression is translated into external energy by an expression-emotion mapping. The robot's next emotional state is determined by the cognitive energy (the stimulus after cognition) together with the magnitude and source position of its own current emotional energy. Two random quantities in the emotional transition process, the emotional family and the specific emotional state in the AVS (arousal-valence-stance) 3D space, are used to simulate human emotion selection. The model was verified on an emotional robot with 10 degrees of freedom and more than 100 facial expressions. Experimental results show that the emotional regulation model does not simply classify and jump between a set of emotional labels; it operates in a 3D emotional space, enabling a wide range of intermediary emotional states. The robot equipped with the cognitive emotional regulation model is therefore more intelligent and lifelike, and it can fully exploit its emotional diversity in interaction.
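A minimal sketch of the energy-driven transition step, assuming invented emotional families, an illustrative expression-to-energy mapping, and an arbitrary transition matrix (none of these values come from the paper):

```python
import numpy as np

# Hypothetical emotional families and an assumed expression -> external-energy mapping.
FAMILIES = ["happy", "sad", "angry", "calm"]
EXPRESSION_ENERGY = {"smile": +0.6, "frown": -0.4, "neutral": 0.0}

# Assumed HMM-style transition probabilities between emotional families.
TRANSITIONS = np.array([
    [0.70, 0.05, 0.05, 0.20],   # from happy
    [0.10, 0.60, 0.10, 0.20],   # from sad
    [0.10, 0.10, 0.60, 0.20],   # from angry
    [0.25, 0.15, 0.10, 0.50],   # from calm
])

def next_emotion(current_idx, current_energy, expression, rng):
    """Pick the next emotional family and a point in AVS space.

    The input expression is mapped to external energy; the cognitive energy
    (stimulus after cognition) is combined with the robot's own emotional
    energy to bias the transition probabilities.
    """
    external = EXPRESSION_ENERGY[expression]
    cognitive = 0.5 * external + 0.5 * current_energy   # assumed appraisal step
    bias = np.ones(len(FAMILIES))
    bias[0] += max(cognitive, 0)        # positive energy favours "happy"
    bias[1] += max(-cognitive, 0)       # negative energy favours "sad"
    probs = TRANSITIONS[current_idx] * bias
    probs /= probs.sum()
    family = rng.choice(len(FAMILIES), p=probs)          # 1st random quantity
    avs = rng.normal(loc=cognitive, scale=0.1, size=3)   # 2nd: point in AVS space
    return family, np.clip(avs, -1, 1)

rng = np.random.default_rng(0)
family, avs = next_emotion(current_idx=3, current_energy=0.1,
                           expression="smile", rng=rng)
print(FAMILIES[family], avs)
```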

2018, Vol 7 (3), pp. 84-99
Author(s): L.Y. Demidova, N.V. Dvoryanchikov

This article addresses emotional perception in pedophilia (ICD-10) / pedophilic disorder (ICD-11). Here, emotional perception is understood as the ability to recognize and identify a wide range of mental states, such as emotions, affects, moods, and feelings. The assumption that alexithymia and disturbances in emotion recognition, perspective taking, and empathy are related to pedophilia and to the regulatory mechanisms of activity was verified empirically. Two groups of persons accused of sexual offenses were compared: 44 people with pedophilia and 32 people without the disorder; a control group of 95 persons who had not been accused was also examined, and an intra-group comparison of pedophilic persons with egosyntonic and egodystonic attitudes toward their sexual drive was made. The results resolve contradictions in earlier studies: in pedophilia, the ability to understand emotional states appears intact at first sight (in comparison with the deficits found in the accused without pedophilia). However, the group with pedophilia shows an extremely high level of alexithymia, which supports the conclusion that emotional regulation is disturbed in the egosyntonic form of the disorder.


Mathematics, 2021, Vol 9 (6), pp. 593
Author(s): Yinsheng Li, Wei Zheng

Music can regulate and improve the emotional state of the brain. Traditional emotional regulation approaches often use complete pieces of music, which vary in pitch, volume, and other dynamics; an individual's emotions may likewise pass through multiple states, and music preference differs from person to person. Traditional music-based regulation methods therefore suffer from long duration, variable emotional states, and poor adaptability. To address these problems, we use different music processing methods and stacked sparse auto-encoder neural networks to identify and regulate the emotional state of the brain. We construct a multi-channel EEG sensor network, segment the brainwave signals and the corresponding music, and build a personalized reconfigurable music-EEG library. Seventeen features extracted from the EEG signal serve as joint features, and a stacked sparse auto-encoder neural network classifies the emotions in order to establish a music emotion evaluation index. According to the goal of emotional regulation, music fragments are selected from the personalized reconfigurable music-EEG library, then reconstructed and combined for emotional adjustment. The results show that, compared with complete music, the reconfigurable combined music required 76.29% less time for emotional regulation and reduced the number of irrelevant emotional states by 69.92%. In terms of adaptability across participants, the reconfigurable music improved the recognition rate of emotional states by 31.32%.
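A minimal PyTorch sketch of the classification stage: greedy layer-wise pretraining of stacked sparse auto-encoders on 17-dimensional joint feature vectors, followed by a softmax head (layer sizes, the L1 sparsity penalty, and the four-class label set are assumptions, not the paper's configuration):

```python
import torch
import torch.nn as nn

class SparseAutoencoder(nn.Module):
    """One auto-encoder layer; an L1 penalty on the code enforces sparsity."""
    def __init__(self, in_dim, code_dim):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(in_dim, code_dim), nn.Sigmoid())
        self.decoder = nn.Linear(code_dim, in_dim)

    def forward(self, x):
        code = self.encoder(x)
        return code, self.decoder(code)

def pretrain(layer, data, epochs=50, l1_weight=1e-3, lr=1e-3):
    """Greedy layer-wise pretraining: reconstruction loss plus sparsity penalty."""
    opt = torch.optim.Adam(layer.parameters(), lr=lr)
    for _ in range(epochs):
        code, recon = layer(data)
        loss = nn.functional.mse_loss(recon, data) + l1_weight * code.abs().mean()
        opt.zero_grad()
        loss.backward()
        opt.step()
    with torch.no_grad():
        return layer(data)[0]   # codes feed the next layer

# 17 joint EEG features -> two stacked sparse layers -> emotion classes (assumed: 4).
x = torch.randn(256, 17)        # placeholder feature vectors, not real EEG data
ae1, ae2 = SparseAutoencoder(17, 12), SparseAutoencoder(12, 8)
h1 = pretrain(ae1, x)
h2 = pretrain(ae2, h1)

# Fine-tuning head: stack the trained encoders with a softmax classifier.
model = nn.Sequential(ae1.encoder, ae2.encoder, nn.Linear(8, 4))
logits = model(x)               # train end-to-end with cross-entropy as usual
```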


2020, Vol 13 (4), pp. 4-24
Author(s): V.A. Barabanschikov, E.V. Suvorova

The article reports the results of validating the Geneva Emotion Recognition Test (GERT), a Swiss method for assessing dynamic emotional states, on a Russian sample. Identification accuracy and the structure of the categorical fields of emotional expressions of a "living" face are analysed. Similarities and differences in the perception of affective groups of dynamic emotions in the Russian and Swiss samples are considered. Several patterns in the recognition of multimodal expressions under changes in the valence and arousal of emotions are described, and differences between the perception of dynamic and static emotional expressions are revealed. The GERT method confirmed its high potential for solving a wide range of academic and applied problems.
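Identification accuracy and the categorical-field structure can both be read off a confusion matrix over emotion labels. A minimal sketch with simulated placeholder data (the labels and the 60% accuracy level are invented, not GERT results):

```python
import numpy as np

rng = np.random.default_rng(0)

# Placeholder emotion labels and simulated responses.
labels = ["joy", "fear", "anger", "sadness"]
presented = rng.integers(0, 4, size=500)          # true emotion per trial
guesses = rng.integers(0, 4, size=500)
responded = np.where(rng.random(500) < 0.6, presented, guesses)  # ~60% correct

confusion = np.zeros((4, 4), dtype=int)
for true, resp in zip(presented, responded):
    confusion[true, resp] += 1

accuracy = confusion.diagonal() / confusion.sum(axis=1)
for name, acc in zip(labels, accuracy):
    print(f"{name}: {acc:.2f}")   # per-category identification accuracy
# The off-diagonal cells map the categorical field: which emotions absorb
# the misidentifications of which others.
```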


2011, pp. 175-200
Author(s): Kostas Karpouzis, Amaryllis Raouzaiou, Athanasios Drosopoulos, Spiros Ioannou, Themis Balomenos, ...

This chapter presents a holistic approach to emotion modeling and analysis and their applications in man-machine interaction. Starting from a symbolic representation of human emotions in this context, based on their expression via facial expressions and hand gestures, we show that quantitative feature information from video sequences can be transformed into an estimation of a user's emotional state. While these features can serve simple representation purposes, in our approach they provide feedback on the user's emotional state, with the aim of enabling next-generation interfaces that recognize the emotional states of their users.
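A minimal sketch of the feature-to-emotion step, under the assumption of a simple linear mapping from per-frame features to a valence-arousal estimate (the feature set and weights are invented for illustration; the chapter derives its features from facial expressions and hand gestures in video):

```python
import numpy as np

# Assumed quantitative features extracted per video frame, e.g. normalized
# landmark distances and a hand-motion energy measure.
features = np.array([0.8,   # mouth-corner displacement (smile intensity)
                     0.2,   # eyebrow raise
                     0.5])  # hand-gesture activity

# Illustrative linear mapping from features to (valence, arousal).
W = np.array([[0.9, -0.1, 0.0],    # valence weights
              [0.2,  0.5, 0.8]])   # arousal weights

valence, arousal = W @ features
print(f"estimated state: valence={valence:.2f}, arousal={arousal:.2f}")
# An adaptive interface could use this estimate as feedback on the user's
# emotional state.
```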


2018
Author(s): Argus J Athanas, Jamison M McCorrison, Susan Smalley, Jamie Price, Jim Grady, ...

BACKGROUND: The use of smartphone apps to monitor and deliver health care guidance and interventions has received considerable attention recently, particularly with regard to behavioral disorders, stress relief, negative emotional state, and poor mood in general. Unfortunately, there is little research investigating the long-term and repeated effects of apps meant to impact mood and emotional state.

OBJECTIVE: We aimed to investigate the effects of both immediate point-of-intervention and long-term use (ie, at least 10 engagements) of a guided meditation and mindfulness smartphone app on users' emotional states. Data were collected from users of a mobile phone app developed by the company Stop, Breathe & Think (SBT) for achieving emotional wellness. To explore the long-term effects, we assessed changes in the users' basal emotional state before they completed an activity (eg, a guided meditation). We also assessed the immediate effects of the app on users' emotional states from preactivity to postactivity.

METHODS: The SBT app collects information on the emotional state of the user before and after engagement in one or several meditation and mindfulness activities. These activities are recommended and provided by the app based on user input. We considered data on over 120,000 users of the app who collectively engaged in over 5.5 million sessions with the app during an approximate 2-year period. We focused our analysis on users who had at least 10 engagements with the app over an average of 6 months. We explored the changes in the emotional well-being of individuals with different emotional states at the time of their initial engagement with the app using mixed-effects models. In the process, we compared 2 different methods of classifying emotional states: (1) an expert-defined a priori mood classification and (2) an empirically driven cluster-based classification.

RESULTS: We found that among long-term users of the app, there was an association between the length of use and a positive change in basal emotional state (a 4% increase in positive mood on a 2-point scale every 10 sessions). We also found that individuals who were anxious or depressed tended to have a favorable long-term emotional transition (eg, from a sad emotional state to a happier one) after using the app for an extended period (compared with users with fewer sessions, the odds ratio for achieving a positive emotional state was 3.2 for anxious individuals and 6.2 for depressed individuals).

CONCLUSIONS: Our analyses provide evidence for an association between both immediate and long-term use of an app providing guided meditations and improvements in emotional state.
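A minimal sketch of the longitudinal analysis idea, fitting a linear mixed-effects model with per-user random intercepts to synthetic session data (column names, scales, and effect sizes are invented; the paper's actual models are richer):

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Synthetic long-format data: repeated pre-activity mood ratings per user.
rng = np.random.default_rng(1)
n_users, n_sessions = 200, 15
df = pd.DataFrame({
    "user_id": np.repeat(np.arange(n_users), n_sessions),
    "session": np.tile(np.arange(n_sessions), n_users),
})
user_offset = rng.normal(0, 0.3, n_users)[df["user_id"].to_numpy()]
df["pre_mood"] = (0.5 + user_offset + 0.01 * df["session"]
                  + rng.normal(0, 0.2, len(df)))   # slow upward drift

# Basal mood as a function of session count, random intercept per user.
model = smf.mixedlm("pre_mood ~ session", df, groups=df["user_id"])
result = model.fit()
print(result.summary())   # the 'session' coefficient estimates long-term change
```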


2015, Vol 11 (2), pp. 57-67
Author(s): S.V. Malanov

The paper focuses on the development of the higher mental functions responsible for emotional regulation and describes a study revealing the stages in which young children acquire the language means that help them identify emotional states and understand emotional relationships. The sample consisted of 94 children aged from 1.8 to 7.5 years. The outcomes suggest that at the age of 1.8 years children begin to acquire the skills necessary for recognizing emotions based on what adults tell them. The general tendency revealed in the development of the object reference of words is as follows: at first the children are able to identify emotional states and relationships from the context of intersubjective interactions; later, from facial expressions; and finally, from pictograms of facial expressions. At the age of 2.4 years the children begin to employ language tools on their own to identify and name emotions in others. Building on their experience of analyzing emotions in others, the children then gradually develop the skills for identifying and partly recognizing their own emotions. Such skills may be found in some children at the age of 3.5 years, but usually it is not until the age of 7.5 years that they can be observed in most preschoolers.


2019
Author(s): Gunnar Schmidtmann, Ben J. Jennings, Dasha A. Sandra, Jordan Pollock, Ian Gold

Current databases of facial expressions of mental states typically represent only a small subset of expressions, usually covering the basic emotions (fear, disgust, surprise, happiness, sadness, and anger). To overcome these limitations, we introduce a new database of pictures of facial expressions reflecting the richness of mental states. Ninety-three expressions of mental states were interpreted by two professional actors, and high-quality pictures were taken under controlled conditions in front and side view. The database was validated with two different experiments (N=65). First, a four-alternative forced-choice paradigm was employed to test the ability of participants to correctly select a term associated with each expression. The second experiment employed a paradigm that did not rely on any semantic information: the task was to locate each face within a two-dimensional space of valence and arousal (mental-state space) using a point-and-click paradigm. Results from both experiments demonstrate that subjects can reliably recognize a great diversity of emotional states from facial expressions. Interestingly, while subjects' performance was better for front-view images, the advantage over the side view was not dramatic. To our knowledge, this is the first demonstration of the high degree of accuracy human viewers exhibit when identifying complex mental states from only partially visible facial features. The McGill Face Database provides a wide range of facial expressions that can be linked to mental state terms and can be accurately characterized in terms of arousal and valence.
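A sketch of the two validation analyses on simulated placeholder data: four-alternative forced-choice accuracy against its 25% chance level, and mean point-and-click placement error in the valence-arousal plane (trial counts and accuracy values are invented for illustration, not results from the database):

```python
import numpy as np

rng = np.random.default_rng(2)

# Experiment 1: four-alternative forced choice -> accuracy vs. 25% chance.
n_trials = 93 * 10                          # 93 expressions, 10 trials each (assumed)
correct = rng.random(n_trials) < 0.55       # simulated 55% accuracy
print(f"4AFC accuracy: {correct.mean():.2f} (chance = 0.25)")

# Experiment 2: point-and-click placement in valence-arousal space.
target = rng.uniform(-1, 1, size=(93, 2))              # (valence, arousal) per face
clicks = target + rng.normal(0, 0.15, size=(93, 2))    # simulated responses
error = np.linalg.norm(clicks - target, axis=1).mean()
print(f"mean placement error: {error:.2f} (each axis spans [-1, 1])")
```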


2000, Vol 29 (544)
Author(s): Dolores Canamero, Jakob Fredslund

We report work on a LEGO robot capable of displaying several emotional expressions in response to physical contact. Our motivation has been to explore believable emotional exchanges that achieve plausible interaction with a simple robot. We have worked toward this goal in two ways.

First, acknowledging the importance of physical manipulation in children's interactions, interaction with the robot is through tactile stimulation; the various kinds of stimulation that can elicit the robot's emotions are grounded in a model of emotion activation based on different stimulation patterns.

Second, emotional states need to be clearly conveyed. We have drawn inspiration from theories of human basic emotions with associated universal facial expressions, which we have implemented in a caricaturized face. We have conducted experiments with both children and adults to assess the recognizability of these expressions.
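A minimal sketch of the general idea, mapping tactile stimulation patterns to emotion activations and displaying the winning expression; the specific patterns, emotions, and thresholds below are illustrative assumptions, not the robot's actual activation model:

```python
# Assumed stimulation features derived from touch-sensor press timestamps
# collected over a short window (seconds).
def classify_pattern(presses):
    """Map a timing pattern of touches to emotion activations (illustrative)."""
    if not presses:
        return {"sadness": 1.0}                  # prolonged absence of stimulation
    intervals = [b - a for a, b in zip(presses, presses[1:])]
    if len(presses) >= 5:                        # rapid, insistent poking
        return {"anger": 0.8, "fear": 0.4}
    if intervals and max(intervals) < 1.0:       # steady, gentle patting
        return {"happiness": 0.9}
    return {"surprise": 0.6}                     # a single unexpected touch

def display(activations, threshold=0.5):
    """Winner-take-all: show the strongest emotion if it crosses a threshold."""
    emotion, level = max(activations.items(), key=lambda kv: kv[1])
    face = emotion if level >= threshold else "neutral"
    print(f"face <- {face} (activation {level:.1f})")

display(classify_pattern([0.0, 0.2, 0.4, 0.6, 0.9]))   # -> anger
display(classify_pattern([0.0, 0.8]))                   # -> happiness
```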


Perception, 2020, Vol 49 (3), pp. 310-329
Author(s): Gunnar Schmidtmann, Ben J. Jennings, Dasha A. Sandra, Jordan Pollock, Ian Gold

Current databases of facial expressions represent only a small subset of expressions, usually the basic emotions (fear, disgust, surprise, happiness, sadness, and anger). To overcome these limitations, we introduce a database of pictures of facial expressions reflecting the richness of mental states. A total of 93 expressions of mental states were interpreted by two professional actors, and high-quality pictures were taken under controlled conditions in front and side view. The database was validated in two experiments. First, a four-alternative forced-choice paradigm was employed to test the ability to select a term associated with each expression. Second, the task was to locate each face within a 2-D space of valence and arousal. Results from both experiments demonstrate that subjects can reliably recognize a great diversity of emotional states from facial expressions. While subjects’ performance was better for front view images, the advantage over the side view was not dramatic. This is the first demonstration of the high degree of accuracy human viewers exhibit when identifying complex mental states from only partially visible facial features. The McGill Face Database provides a wide range of facial expressions that can be linked to mental state terms and can be accurately characterized in terms of arousal and valence.



