An Affective and Cognitive Toy to Support Mood Disorders

Informatics ◽  
2020 ◽  
Vol 7 (4) ◽  
pp. 48
Author(s):  
Esperanza Johnson ◽  
Iván González ◽  
Tania Mondéjar ◽  
Luis Cabañero-Gómez ◽  
Jesús Fontecha ◽  
...  

Affective computing is a branch of artificial intelligence that aims at processing and interpreting emotions. In this study, we implemented sensors and actuators in a stuffed toy mammoth, giving the toy an affective and cognitive basis for its communication. The goal is for therapists to use the toy as a tool during therapy sessions with patients with mood disorders. The toy detects emotion and provides dialogue to guide a session focused on emotional regulation and perception. These capabilities are made possible by IBM Watson’s services running on a Raspberry Pi Zero. In this paper, we describe its evaluation with neurotypical adolescents, a panel of experts, and other professionals, with the aim of validating both the technology and its applicability in therapy sessions. The evaluation results are generally positive, with 87% accuracy for emotion recognition and average usability scores of 77.5 for experts (n = 5) and 64.35 for professionals (n = 23). We also report the issues encountered, their effect on applicability, and future work.
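
A minimal sketch of how such a toy might query an IBM Watson service for emotion scores is shown below. The abstract names IBM Watson and the Raspberry Pi Zero but not a specific service; the use of Natural Language Understanding, the placeholder credentials, and the `detect_emotion` helper are assumptions for illustration, not the authors' implementation.

```python
# Hypothetical sketch: querying IBM Watson Natural Language Understanding for
# emotion scores from a transcribed utterance (credentials and endpoint are
# placeholders; the paper only states that Watson services run on a Pi Zero).
from ibm_watson import NaturalLanguageUnderstandingV1
from ibm_watson.natural_language_understanding_v1 import Features, EmotionOptions
from ibm_cloud_sdk_core.authenticators import IAMAuthenticator

authenticator = IAMAuthenticator("YOUR_API_KEY")          # placeholder credential
nlu = NaturalLanguageUnderstandingV1(version="2021-08-01",
                                     authenticator=authenticator)
nlu.set_service_url("YOUR_SERVICE_URL")                   # placeholder endpoint

def detect_emotion(utterance: str) -> dict:
    """Return Watson's emotion scores (joy, sadness, anger, fear, disgust)."""
    result = nlu.analyze(text=utterance,
                         features=Features(emotion=EmotionOptions())).get_result()
    return result["emotion"]["document"]["emotion"]

if __name__ == "__main__":
    print(detect_emotion("I had a really hard day and I feel alone."))
```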

Sensors ◽  
2021 ◽  
Vol 21 (15) ◽  
pp. 5015
Author(s):  
Muhammad Anas Hasnul ◽  
Nor Azlina Ab. Aziz ◽  
Salem Alelyani ◽  
Mohamed Mohana ◽  
Azlan Abd. Aziz

Affective computing is a field of study that integrates human affects and emotions with artificial intelligence in systems or devices. A system or device with affective computing is beneficial for the mental health and wellbeing of individuals who are stressed, anguished, or depressed. Emotion recognition systems are an important technology that enables affective computing. Many techniques and algorithms are currently available for building such systems. This review paper focuses on emotion recognition research that adopted electrocardiograms (ECGs), either as a unimodal input or as part of a multimodal approach to emotion recognition. Critical observations of data collection, pre-processing, feature extraction, feature selection and dimensionality reduction, classification, and validation are presented. This paper also highlights the architectures with accuracies above 90%. The available ECG-inclusive affective databases are reviewed, and a popularity analysis is presented. Additionally, the benefits of emotion recognition systems for healthcare are reviewed. Based on the literature, a thorough discussion of the subject matter and of future work is provided. The findings presented here give prospective researchers a summary of previous work on ECG-based emotion recognition systems, help identify gaps in the area, and support the development and design of future emotion recognition applications, especially for improving healthcare.
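
The stages enumerated above (pre-processing, feature extraction, dimensionality reduction, classification, validation) can be sketched roughly as follows. This is an illustrative pipeline under assumed parameters (sampling rate, HRV features, PCA + SVM), not a reconstruction of any specific architecture covered in the review.

```python
# Illustrative ECG emotion-recognition pipeline: filter, extract HRV features,
# reduce dimensionality, classify, and cross-validate. All parameters are assumptions.
import numpy as np
from scipy.signal import butter, filtfilt, find_peaks
from sklearn.decomposition import PCA
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

FS = 256  # assumed sampling rate in Hz

def preprocess(ecg: np.ndarray) -> np.ndarray:
    """Band-pass filter to suppress baseline wander and high-frequency noise."""
    b, a = butter(4, [0.5 / (FS / 2), 40 / (FS / 2)], btype="band")
    return filtfilt(b, a, ecg)

def hrv_features(ecg: np.ndarray) -> np.ndarray:
    """Time-domain HRV features from R-R intervals (mean RR, SDNN, RMSSD)."""
    peaks, _ = find_peaks(ecg, distance=int(0.4 * FS), prominence=np.std(ecg))
    rr = np.diff(peaks) / FS
    return np.array([rr.mean(), rr.std(), np.sqrt(np.mean(np.diff(rr) ** 2))])

def evaluate(segments: list, labels: np.ndarray) -> float:
    """PCA + SVM with 5-fold cross-validation; returns mean accuracy."""
    X = np.vstack([hrv_features(preprocess(s)) for s in segments])
    model = make_pipeline(StandardScaler(), PCA(n_components=2), SVC(kernel="rbf"))
    return cross_val_score(model, X, labels, cv=5).mean()
```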


2021 ◽  
Vol 10 (15) ◽  
pp. e392101522844
Author(s):  
Maíra Araújo de Santana ◽  
Clarisse Lins de Lima ◽  
Arianne Sarmento Torcate ◽  
Flávio Secco Fonseca ◽  
Wellington Pinheiro dos Santos

Music therapy is an effective tool to slow down the progression of dementia, since interaction with music may evoke emotions that stimulate brain areas responsible for memory. This therapy is most successful when therapists provide adequate, personalized stimuli for each patient, but such personalization is often hard; Artificial Intelligence (AI) methods may help with this task. This paper presents a systematic review of the literature on affective computing in the context of music therapy. We particularly aim to assess AI methods for automatic emotion recognition applied to Human-Machine Musical Interfaces (HMMI). To perform the review, we conducted an automatic search in five of the main scientific databases in the fields of intelligent computing, engineering, and medicine. We searched for all papers published between 2016 and 2020 whose metadata, title, or abstract contained the terms defined in the search string. The systematic review protocol resulted in the inclusion of 144 works from the 290 publications returned by the search. Through this review of the state of the art, we list the current challenges in automatic emotion recognition, highlight the potential of automatic emotion recognition for building non-invasive assistive solutions based on human-machine musical interfaces, and survey the artificial intelligence techniques used for emotion recognition from multimodal data. Thus, machine learning for emotion recognition from different data sources can be an important approach to optimizing the clinical goals to be achieved through music therapy.
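
As a rough illustration of multimodal emotion recognition of the kind surveyed, the sketch below performs feature-level (early) fusion by concatenating features from two modalities before a single classifier. The modalities, feature dimensions, and classifier choice are assumptions, and the arrays are synthetic placeholders rather than data from any reviewed study.

```python
# Illustrative early-fusion baseline for multimodal emotion recognition:
# concatenate per-modality feature vectors and train one classifier.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n = 200
audio_feats = rng.normal(size=(n, 20))        # e.g. MFCC statistics per musical excerpt
physio_feats = rng.normal(size=(n, 8))        # e.g. HRV / EDA descriptors
labels = rng.integers(0, 4, size=n)           # four assumed emotion classes

X = np.hstack([audio_feats, physio_feats])    # early fusion: concatenate modalities
clf = RandomForestClassifier(n_estimators=200, random_state=0)
print("CV accuracy:", cross_val_score(clf, X, labels, cv=5).mean())
```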


2019 ◽  
Vol 19 (1) ◽  
pp. 10-14
Author(s):  
Ryan Scott ◽  
Malcolm Le Lievre

Purpose: The purpose of this paper is to explore how behavioral insights methodology and technology can create a mind-set change in the way people work, especially in the age of artificial intelligence (AI).
Design/methodology/approach: The approach is to examine how AI is driving workplace change, introduce the idea that most organizations have untapped analytics, outline what we know future work will look like, and consider how greater, data-driven human behavioral insights will help prepare people for future human-to-human work and for working with and alongside AI.
Findings: Human (behavioral) intelligence will be an increasingly crucial part of behaviorally smart organizations, from hiring to placement to adaptation to team building, compliance and more. These human capability insights will, among other things, better prepare people and organizations for changing work roles, including working with and alongside AI and similar technological innovation.
Research limitations/implications: No doubt researchers across the private, public and nonprofit sectors will want to further study the nexus of human capability, behavioral insights technology and AI, but it is clear that such work is already underway and can prove even more valuable if adopted on a broader, deeper level.
Practical implications: Much "people data" inside organizations is currently not being harvested. Validated, scalable processes exist to mine that data and leverage it to help organizations of all types and sizes be ready for the future, particularly in regard to the marriage of human capability and AI.
Social implications: In terms of human capability and AI, individuals, teams, organizations, customers and other stakeholders will all benefit. The investment of time and other resources is minimal but must include C-suite buy-in.
Originality/value: Much exists on the softer aspects of the marriage of human capability and AI and other workplace advancements. What has been lacking, until now, is a (1) practical, (2) validated and (3) scalable behavioral insights technology that quantifiably informs how people and AI will work in the future, especially side by side.


i-com ◽  
2020 ◽  
Vol 19 (2) ◽  
pp. 139-151
Author(s):  
Thomas Schmidt ◽  
Miriam Schlindwein ◽  
Katharina Lichtner ◽  
Christian Wolff

Due to progress in affective computing, various forms of general-purpose sentiment/emotion recognition software have become available. However, such tools are rarely applied in usability engineering (UE) to measure the emotional state of participants. We investigate whether sentiment/emotion recognition software is beneficial for gathering objective and intuitive data that can predict usability similarly to traditional usability metrics. We present the results of a UE project examining this question for the three modalities text, speech and face. We perform a large-scale usability test (N = 125) with a counterbalanced within-subject design and two websites of varying usability. We identified a weak but significant correlation between sentiment scores computed on the text acquired via thinking aloud and SUS scores, as well as a weak positive correlation between the proportion of neutrality in users’ voices and SUS scores. However, for the majority of the output of the emotion recognition software, we could not find any significant results. Emotion metrics could not be used to successfully differentiate between the two websites of varying usability. Regression models, whether unimodal or multimodal, could not predict usability metrics. We discuss reasons for these results and how to continue this research with more sophisticated methods.
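
The text-modality analysis described above, scoring thinking-aloud transcripts with an off-the-shelf sentiment analyser and correlating the scores with SUS values, could look roughly like the following. The paper does not specify the tool; VADER, the placeholder transcripts, and the SUS values below are assumptions for illustration.

```python
# Hypothetical sketch: sentiment scores from thinking-aloud transcripts
# correlated with per-participant SUS scores.
from scipy.stats import pearsonr
from vaderSentiment.vaderSentiment import SentimentIntensityAnalyzer

transcripts = [
    "The navigation is confusing and I could not find the settings.",
    "Checkout was quick and really easy to use.",
    "The search results were irrelevant and frustrating.",
    "I liked the layout, everything was where I expected.",
    "The form kept rejecting my input for no clear reason.",
]                                                   # placeholder thinking-aloud data
sus_scores = [42.5, 85.0, 37.5, 90.0, 50.0]         # placeholder SUS values

analyzer = SentimentIntensityAnalyzer()
sentiment = [analyzer.polarity_scores(t)["compound"] for t in transcripts]

r, p = pearsonr(sentiment, sus_scores)
print(f"Pearson r = {r:.2f}, p = {p:.3f}")
```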


Sensors ◽  
2021 ◽  
Vol 21 (15) ◽  
pp. 5135
Author(s):  
Ngoc-Dau Mai ◽  
Boon-Giin Lee ◽  
Wan-Young Chung

In this research, we develop an affective computing method based on machine learning for emotion recognition using a wireless protocol and a custom-designed wearable electroencephalography (EEG) device. The system collects EEG signals using an eight-electrode placement on the scalp; two of these electrodes are placed over the frontal lobe and the other six over the temporal lobe. We performed experiments on eight subjects while they watched emotive videos. Six entropy measures were employed to extract suitable features from the EEG signals. Next, we evaluated our proposed models using three popular classifiers for emotion classification: a support vector machine (SVM), a multi-layer perceptron (MLP), and a one-dimensional convolutional neural network (1D-CNN); both subject-dependent and subject-independent strategies were used. Our experimental results showed that the highest average accuracies achieved in the subject-dependent and subject-independent cases were 85.81% and 78.52%, respectively; these accuracies were achieved using a combination of the sample entropy measure and the 1D-CNN. Moreover, through electrode selection, our study identifies the T8 position (above the right ear, over the temporal lobe) as the most critical channel among the proposed measurement positions for emotion classification. Our results demonstrate the feasibility and efficiency of the proposed EEG-based affective computing method for emotion recognition in real-world applications.
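
A rough sketch of the sample-entropy feature path with an SVM classifier (one of the three classifiers evaluated) is given below. The embedding dimension, tolerance factor, and cross-validation setup are assumptions; the paper's exact parameters and its 1D-CNN architecture are not reproduced here.

```python
# Illustrative sketch: per-channel sample entropy as EEG features, SVM classifier.
import numpy as np
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import cross_val_score

def sample_entropy(x: np.ndarray, m: int = 2, r_factor: float = 0.2) -> float:
    """Sample entropy of a 1-D signal (embedding dimension m, tolerance r)."""
    x = np.asarray(x, dtype=float)
    r = r_factor * x.std()

    def count_matches(dim: int) -> int:
        # N - m template vectors of length `dim`, Chebyshev distance, no self-matches
        templates = np.array([x[i:i + dim] for i in range(len(x) - m)])
        count = 0
        for i in range(len(templates)):
            dist = np.max(np.abs(templates[i + 1:] - templates[i]), axis=1)
            count += np.sum(dist <= r)
        return count

    b, a = count_matches(m), count_matches(m + 1)
    return -np.log(a / b) if a > 0 and b > 0 else np.inf

def features(epoch: np.ndarray) -> np.ndarray:
    """One sample-entropy value per EEG channel (epoch: channels x samples)."""
    return np.array([sample_entropy(ch) for ch in epoch])

def evaluate(epochs: np.ndarray, labels: np.ndarray) -> float:
    """epochs: (n_trials, n_channels, n_samples); returns mean 5-fold CV accuracy."""
    X = np.array([features(e) for e in epochs])
    clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
    return cross_val_score(clf, X, labels, cv=5).mean()
```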


2020 ◽  
Vol 110 (03) ◽  
pp. 108-112
Author(s):  
Simon Schumacher ◽  
Bastian Pokorni

Das Future Work Lab ist ein Innovationslabor für Arbeit, Mensch und Technik am Standort Stuttgart mit Fokus auf Künstlicher Intelligenz (KI) und vernetzter Arbeitsorganisation. Ein zentraler Bestandteil ist das Framework kognitive Produktionsarbeit 4.0, das als Referenzmodell für das Themenfeld Produktionsarbeit 4.0 dienen soll. Ein entsprechendes Konzept wurde in einem interdisziplinären Projektteam entwickelt. In diesem Beitrag wird das Grobmodell vorgestellt und die weitere Forschungsagenda präsentiert.   The Future Work Lab is an innovation lab for work, people and technology in Stuttgart, Germany with a focus on artificial intelligence and interconnected work organisation. A key component consists of the framework for cognitive production work 4.0, which will serve as a reference model for the research topics. A corresponding concept was developed in an interdisciplinary project team. In this article the raw model is introduced and the further research agenda is presented.


2021 ◽  
Vol 11 (22) ◽  
pp. 10540
Author(s):  
Navjot Rathour ◽  
Zeba Khanam ◽  
Anita Gehlot ◽  
Rajesh Singh ◽  
Mamoon Rashid ◽  
...  

There is significant interest in facial emotion recognition in the fields of human–computer interaction and the social sciences. With the advancements in artificial intelligence (AI), the field of human behavioral prediction and analysis, especially human emotion, has evolved significantly. The most common emotion recognition methods currently rely on models deployed on remote servers. We believe that reducing the distance between the input device and the model serving it can lead to better efficiency and effectiveness in real-life applications. For this purpose, computational methodologies such as edge computing can be beneficial; edge computing also enables time-critical applications in sensitive fields. In this study, we propose a Raspberry Pi-based standalone edge device that can detect facial emotions in real time. Although this edge device can be used in a variety of applications where human facial emotions play an important role, this article mainly works with a dataset of employees working in organizations. The device is implemented using the Mini-Xception deep network because of its computational efficiency and shorter inference time compared to other networks. It achieved 100% accuracy for detecting faces in real time and 68% accuracy for emotion recognition, i.e., higher than the accuracy reported in the state of the art on the FER-2013 dataset. Future work will implement a deep network on the Raspberry Pi with an Intel Movidius Neural Compute Stick to reduce processing time and achieve a fast, real-time implementation of the facial emotion recognition system.
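
A minimal sketch of the kind of on-device loop such an edge device runs, OpenCV face detection followed by Mini-Xception emotion classification on a 64x64 grayscale crop, is shown below. The model file path, the input size, and the FER-2013 label order are assumptions, since the article does not publish its implementation.

```python
# Hypothetical edge-device loop: Haar-cascade face detection plus a pre-trained
# Mini-Xception emotion classifier (model path is a placeholder assumption).
import cv2
import numpy as np
from tensorflow.keras.models import load_model

EMOTIONS = ["angry", "disgust", "fear", "happy", "sad", "surprise", "neutral"]
face_detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
model = load_model("mini_xception_fer2013.hdf5", compile=False)   # placeholder path

cap = cv2.VideoCapture(0)                      # Pi camera or USB webcam
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    for (x, y, w, h) in face_detector.detectMultiScale(gray, 1.3, 5):
        roi = cv2.resize(gray[y:y + h, x:x + w], (64, 64)).astype("float32") / 255.0
        probs = model.predict(roi[np.newaxis, :, :, np.newaxis], verbose=0)[0]
        label = EMOTIONS[int(np.argmax(probs))]
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
        cv2.putText(frame, label, (x, y - 10),
                    cv2.FONT_HERSHEY_SIMPLEX, 0.8, (0, 255, 0), 2)
    cv2.imshow("emotion", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
cap.release()
cv2.destroyAllWindows()
```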

