Development and Progress in Sensors and Technologies for Human Emotion Recognition

Sensors ◽  
2021 ◽  
Vol 21 (16) ◽  
pp. 5554 ◽  
Author(s):  
Shantanu Pal ◽  
Subhas Mukhopadhyay ◽  
Nagender Suryadevara

With the advancement of human-computer interaction, robotics, and especially humanoid robots, there is an increasing trend toward human-to-human communication over online platforms (e.g., Zoom). This has become more significant in recent years due to the COVID-19 pandemic. The increased use of online platforms for communication signifies the need to build efficient and more interactive human emotion recognition systems. In a human emotion recognition system, the physiological signals of human beings are collected, analyzed, and processed with the help of dedicated learning techniques and algorithms. With the proliferation of emerging technologies, e.g., the Internet of Things (IoT), the future Internet, and artificial intelligence, there is a high demand for building scalable, robust, efficient, and trustworthy human emotion recognition systems. In this paper, we present the development and progress in sensors and technologies to detect human emotions. We review the state-of-the-art sensors used for human emotion recognition and different types of activity monitoring. We present the design challenges and provide practical references for such human emotion recognition systems in the real world. Finally, we discuss current trends in applications and explore future research directions to address issues such as scalability, security, trust, privacy, transparency, and decentralization.

2020 ◽  
Vol 2020 ◽  
pp. 1-19
Author(s):  
Nazmi Sofian Suhaimi ◽  
James Mountstephens ◽  
Jason Teo

Emotions are fundamental for human beings and play an important role in human cognition. Emotion is commonly associated with logical decision making, perception, human interaction, and, to a certain extent, human intelligence itself. With the research community's growing interest in establishing meaningful "emotional" interactions between humans and computers, reliable and deployable solutions for identifying human emotional states are required. Recent developments in using electroencephalography (EEG) for emotion recognition have garnered strong interest from the research community, as the latest consumer-grade wearable EEG solutions can provide a cheap, portable, and simple means of identifying emotions. Since the last comprehensive review covered the years 2009 to 2016, this paper updates the progress of emotion recognition using EEG signals from 2016 to 2019. This state-of-the-art review focuses on emotion stimulus type and presentation approach, study size, EEG hardware, machine learning classifiers, and classification approach. From this review, we suggest several future research opportunities, including a different approach to presenting stimuli: virtual reality (VR). To this end, an additional section devoted specifically to reviewing VR studies within this research domain is presented as motivation for this proposed new approach using VR as the stimulus presentation device. This review paper is intended to be useful for the research community working on emotion recognition using EEG signals, as well as for those venturing into this field of research.


Inventions ◽  
2021 ◽  
Vol 6 (4) ◽  
pp. 65
Author(s):  
Smita Khade ◽  
Swati Ahirrao ◽  
Shraddha Phansalkar ◽  
Ketan Kotecha ◽  
Shilpa Gite ◽  
...  

Biometrics is progressively becoming vital because vulnerabilities in traditional security systems lead to frequent security breaches. Biometrics is the automated recognition of human beings based on their unique physiological and behavioral features. Iris-based authentication offers strong, unique, and contactless identification of the user. Iris liveness detection (ILD) confronts challenges such as spoofing attacks with contact lenses, replayed video, and print attacks. Many researchers focus on ILD to guard the biometric system from attack. Hence, it is vital to study the prevailing research explicitly associated with ILD to address how developing technologies can offer resolutions to lessen the evolving threats. An exhaustive survey of papers on biometric ILD was performed by searching the most applicable digital libraries. Papers were filtered based on predefined inclusion and exclusion criteria. Thematic analysis was performed to scrutinize the data extracted from the selected papers. The review outlines the different feature extraction techniques, classifiers, and datasets, and presents their critical evaluation. Importantly, the study also discusses projects and research works for detecting iris spoofing attacks. The work then identifies the research gaps and challenges in the field of ILD. Many works were restricted to handcrafted methods of feature extraction, which are confronted with larger feature sizes. The study discloses that deep-learning-based automated ILD techniques show higher potential than machine learning techniques. Acquiring an ILD dataset that addresses all the common iris spoofing attacks is also a pressing need. The survey thus opens up practical challenges in the field of ILD, from data collection to liveness detection, and encourages future research.


2015 ◽  
Vol 11 (5) ◽  
pp. 1-27 ◽  
Author(s):  
Sarbani Ghosh ◽  
Samir Bandyopadhyay

Sensors ◽  
2021 ◽  
Vol 21 (15) ◽  
pp. 5015
Author(s):  
Muhammad Anas Hasnul ◽  
Nor Azlina Ab. Aziz ◽  
Salem Alelyani ◽  
Mohamed Mohana ◽  
Azlan Abd. Aziz

Affective computing is a field of study that integrates human affects and emotions with artificial intelligence into systems or devices. A system or device with affective computing is beneficial for the mental health and wellbeing of individuals who are stressed, anguished, or depressed. Emotion recognition systems are an important technology that enables affective computing. Currently, there are many ways to build an emotion recognition system using various techniques and algorithms. This review paper focuses on emotion recognition research that adopted electrocardiograms (ECGs) as a unimodal approach, as well as part of a multimodal approach, for emotion recognition systems. Critical observations of data collection, pre-processing, feature extraction, feature selection and dimensionality reduction, classification, and validation are conducted. The paper also highlights architectures with accuracy above 90%. The available ECG-inclusive affective databases are reviewed, and a popularity analysis is presented. Additionally, the benefit of emotion recognition systems to healthcare is reviewed. Based on the literature reviewed, a thorough discussion of the subject matter and future work concludes the paper. The findings presented here help prospective researchers survey previous work in the field of ECG-based emotion recognition, identify gaps in the area, and develop and design future applications of emotion recognition systems, especially for improving healthcare.
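The ECG pipeline surveyed above includes a feature-extraction stage. As an illustration of that stage only, here is a minimal pure-Python sketch of three standard time-domain heart-rate-variability features (mean heart rate, SDNN, RMSSD) computed from R-R intervals; the function name and the particular choice of features are illustrative assumptions using textbook HRV definitions, not details taken from the paper.

```python
import math

def hrv_features(rr_intervals_ms):
    """Standard time-domain HRV features from a list of R-R intervals (ms)."""
    n = len(rr_intervals_ms)
    mean_rr = sum(rr_intervals_ms) / n
    # SDNN: standard deviation of all R-R intervals
    sdnn = math.sqrt(sum((rr - mean_rr) ** 2 for rr in rr_intervals_ms) / n)
    # RMSSD: root mean square of successive differences
    diffs = [b - a for a, b in zip(rr_intervals_ms, rr_intervals_ms[1:])]
    rmssd = math.sqrt(sum(d * d for d in diffs) / len(diffs))
    mean_hr = 60_000.0 / mean_rr  # beats per minute
    return {"mean_hr": mean_hr, "sdnn": sdnn, "rmssd": rmssd}
```

In a full system, a feature vector like this would feed the classification stage after feature selection and dimensionality reduction.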


Electronics ◽  
2021 ◽  
Vol 10 (11) ◽  
pp. 1289
Author(s):  
Navjot Rathour ◽  
Sultan S. Alshamrani ◽  
Rajesh Singh ◽  
Anita Gehlot ◽  
Mamoon Rashid ◽  
...  

Facial emotion recognition (FER) is the procedure of identifying human emotions from facial expressions. It is often difficult to identify the stress and anxiety levels of an individual through visuals captured by computer vision. However, technology enhancements in the Internet of Medical Things (IoMT) have yielded impressive results in gathering various forms of emotional and physical health-related data. Novel deep learning (DL) algorithms can now run in resource-constrained edge environments, allowing data from IoMT devices to be processed locally at the edge. This article presents an IoMT-based facial emotion detection and recognition system implemented in real time on a small, powerful, and resource-constrained device known as the Raspberry Pi, with the assistance of deep convolutional neural networks. For this purpose, we conducted an empirical study on the facial emotions of human beings, along with their emotional state as measured by physiological sensors. We then propose a model for real-time emotion detection on a resource-constrained device, i.e., a Raspberry Pi, along with a co-processor, i.e., the Intel Movidius NCS2. Facial emotion detection test accuracy ranged from 56% to 73% across the various models; the best model achieved 73% on the FER-2013 dataset, compared to a reported state-of-the-art maximum of 64%. A t-test is performed to extract the significant differences in systolic blood pressure, diastolic blood pressure, and heart rate of an individual watching three different stimuli (angry, happy, and neutral).
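The study above closes with a t-test comparing physiological readings across stimuli. As a hedged illustration of that statistical step (the abstract does not specify the t-test variant; in practice a library routine such as scipy.stats.ttest_ind would typically be used), here is a pure-Python sketch of Welch's two-sample t statistic; the function name and inputs are assumptions for illustration.

```python
import math

def welch_t(sample_a, sample_b):
    """Welch's t statistic for two independent samples, e.g. heart rates
    recorded while watching an 'angry' vs. a 'neutral' stimulus."""
    def mean_var(x):
        m = sum(x) / len(x)
        v = sum((xi - m) ** 2 for xi in x) / (len(x) - 1)  # sample variance
        return m, v
    ma, va = mean_var(sample_a)
    mb, vb = mean_var(sample_b)
    # Standard error uses each sample's own variance (no pooling)
    return (ma - mb) / math.sqrt(va / len(sample_a) + vb / len(sample_b))
```

A large absolute t value (compared against the t distribution with Welch-Satterthwaite degrees of freedom) would indicate a significant difference between the two viewing conditions.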


2021 ◽  
Author(s):  
Fumiya Yonemitsu ◽  
Kyoshiro Sasaki ◽  
Akihiko Gobara ◽  
Yuki Yamada

Technological advances in robotics have already produced robots that are indistinguishable from human beings. This technology is overcoming the uncanny valley, which refers to the unpleasant feelings that arise from humanoid robots that are, to some extent, similar in appearance to real humans. If humanoid robots with the same appearance are mass-produced and become commonplace, we may in the future encounter circumstances in which people or human-like products have faces with the exact same appearance. This leads to the following question: what impressions do clones elicit? To answer this question, we examined what impressions images of people with the same face (clone images) induce. Across the six studies we conducted, we consistently found that clone images elicited higher eeriness than images of individuals with different faces; we named this new phenomenon the clone devaluation effect. We found that the clone devaluation effect reflected the perceived improbability of facial duplication. Moreover, this phenomenon was related to the distinguishability of each face, the duplication of identity, the background scene in which clone faces were observed, and avoidance reactions based on disgust sensitivity. These findings suggest that the clone devaluation effect is a product of multiple processes related to memory, emotion, and face recognition systems.


Author(s):  
Pavitra Patel ◽  
A. A. Chaudhari ◽  
M. A. Pund ◽  
D. H. Deshmukh

Speech emotion recognition is an important issue that affects human-machine interaction. Automatic recognition of human emotion in speech aims at recognizing the underlying emotional state of a speaker from the speech signal. Gaussian mixture models (GMMs) and the minimum error rate classifier (i.e., the Bayesian optimal classifier) are popular and effective tools for speech emotion recognition. Typically, GMMs are used to model the class-conditional distributions of acoustic features, and their parameters are estimated by the expectation-maximization (EM) algorithm on a training data set. In this paper, we introduce a boosting algorithm for reliably and accurately estimating the class-conditional GMMs. The resulting algorithm is named the Boosted-GMM algorithm. Our speech emotion recognition experiments show that emotion recognition rates are effectively and significantly boosted by the Boosted-GMM algorithm as compared to the EM-GMM algorithm.

During this interaction, human beings have feelings that they want to convey to their communication partner, who may be a human or a machine. This work concerns the recognition of human emotions from the speech signal.

Emotion recognition from a speaker's speech is very difficult for the following reasons. Acoustic variability is introduced by the existence of different sentences, speakers, speaking styles, and speaking rates; the same utterance may express different emotions, making these portions of an utterance very difficult to differentiate. Another problem is that the expression of emotion depends on the speaker and his or her culture and environment; as the culture and environment change, the speaking style also changes, which is another challenge for a speech emotion recognition system.
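The class-conditional Gaussian modeling plus minimum-error-rate decision rule described above can be sketched in its simplest form: one diagonal Gaussian per emotion class (the degenerate single-component case, so no EM loop is needed) with a MAP decision over log prior plus log likelihood. This is a pure-Python illustration of the baseline decision rule, not the paper's Boosted-GMM algorithm, and all names and data are illustrative.

```python
import math

def fit_gaussian(rows):
    """ML estimate of a diagonal Gaussian over feature vectors of one class."""
    n, d = len(rows), len(rows[0])
    mu = [sum(r[j] for r in rows) / n for j in range(d)]
    var = [sum((r[j] - mu[j]) ** 2 for r in rows) / n + 1e-6 for j in range(d)]
    return mu, var

def log_likelihood(x, mu, var):
    """Log density of feature vector x under a diagonal Gaussian."""
    return sum(
        -0.5 * (math.log(2 * math.pi * v) + (xi - m) ** 2 / v)
        for xi, m, v in zip(x, mu, var)
    )

def train(features_by_class):
    """features_by_class: {emotion label: list of acoustic feature vectors}."""
    n_total = sum(len(rows) for rows in features_by_class.values())
    return {
        label: (fit_gaussian(rows), math.log(len(rows) / n_total))
        for label, rows in features_by_class.items()
    }

def predict(model, x):
    """Minimum-error-rate (MAP) rule: argmax of log prior + log likelihood."""
    return max(
        model,
        key=lambda label: model[label][1] + log_likelihood(x, *model[label][0]),
    )
```

A full GMM would replace each single Gaussian with a weighted mixture fitted by EM; the decision rule itself stays the same.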


Author(s):  
Lucky Tater ◽  
Sunidhi Pranjale ◽  
Siddhant Lade ◽  
Aaryaneil Nimbalkar ◽  
P. N. Mahalle

2020 ◽  
Vol 16 ◽  

Different currencies are processed daily in money exchange shops and banks around the globe, where money exchange and transfer take place. Identifying different currencies is a difficult task, and mistakes can lead to financial loss. There are approximately 180 currencies in use around the world, and each of them differs in color, size, and texture. Thus, to correctly identify different currencies, a currency recognition system needs to be designed. In this paper, we propose the design of an AlexNet-based currency recognition system to recognize different international currency notes. We use 10-fold cross validation to obtain the cross-validation results of the AlexNet model. Features for the AlexNet model are extracted from images of the back and front of each currency note. We also explore and implement other deep learning models to compare their performance against the AlexNet model.
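The 10-fold cross-validation procedure mentioned above can be sketched as follows. This pure-Python version shows only the fold-splitting and score-averaging plumbing; the model training itself (AlexNet in the paper) is passed in as a callable, and all function names are illustrative assumptions.

```python
import random

def k_fold_indices(n_samples, k=10, seed=42):
    """Shuffle indices once, then slice them into k nearly equal folds."""
    idx = list(range(n_samples))
    random.Random(seed).shuffle(idx)
    fold_size, rem = divmod(n_samples, k)
    folds, start = [], 0
    for i in range(k):
        stop = start + fold_size + (1 if i < rem else 0)
        folds.append(idx[start:stop])
        start = stop
    return folds

def cross_validate(train_fn, score_fn, samples, labels, k=10):
    """Average score over k train/test splits; each fold is the test set once."""
    folds = k_fold_indices(len(samples), k)
    scores = []
    for i, test_idx in enumerate(folds):
        train_idx = [j for fold in folds[:i] + folds[i + 1:] for j in fold]
        model = train_fn([samples[j] for j in train_idx],
                         [labels[j] for j in train_idx])
        scores.append(score_fn(model,
                               [samples[j] for j in test_idx],
                               [labels[j] for j in test_idx]))
    return sum(scores) / k
```

In practice a library utility (e.g. scikit-learn's KFold) would usually replace the hand-rolled splitter, but the averaging logic is the same.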

