Cognitive Robotics on 5G Networks

2021 ◽  
Vol 21 (4) ◽  
pp. 1-18
Author(s):  
Zhihan Lv ◽  
Liang Qiao ◽  
Qingjun Wang

Emotional cognitive ability is a key technical indicator of how friendly human-robot interaction feels, so this research explores robots with human-like emotional cognition. After discussing the prospects of 5G technology and cognitive robots, the study takes cognitive robots as its main direction. Because the human-like analytical logic of emotional cognitive robots is difficult to imitate, the information processing of robots is divided into three levels in this study, by comparison with human information processing levels: cognitive algorithm, feature extraction, and information collection. In addition, a multi-scale rectangular direction gradient histogram and a robust principal component analysis algorithm are used for facial expression recognition. For pictures in which humans intuitively perceive a smile within a sad emotion, the proportions of emotions obtained by the method in this study are as follows: calmness 0%, sadness 15.78%, fear 0%, happiness 76.53%, disgust 7.69%, anger 0%, and astonishment 0%. For micro-expressions in which humans intuitively perceive negative emotions such as surprise and fear, the proportions obtained are as follows: calmness 32.34%, sadness 34.07%, fear 6.79%, happiness 0%, disgust 0%, anger 13.91%, and astonishment 15.89%. The algorithm explored in this study can therefore recognize emotions accurately. These results show that the method can intuitively reflect the proportions of human expressions, and that the recognition methods based on facial expressions and micro-expressions achieve good recognition performance, in line with human intuitive experience.
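The multi-scale rectangular gradient histogram mentioned above can be illustrated with a minimal numpy sketch: orientation histograms are computed over rectangular cells at several cell sizes and concatenated into one descriptor. This is a simplified, single-channel illustration under our own assumptions (function names, cell sizes and bin count are ours, not the authors'), not the paper's implementation:

```python
import numpy as np

def hog_cells(img, cell=8, bins=8):
    """Orientation histogram per rectangular cell (simplified, unsigned gradients)."""
    gx = np.zeros_like(img, dtype=float)
    gy = np.zeros_like(img, dtype=float)
    gx[:, 1:-1] = img[:, 2:] - img[:, :-2]   # central differences, x
    gy[1:-1, :] = img[2:, :] - img[:-2, :]   # central differences, y
    mag = np.hypot(gx, gy)
    ang = np.mod(np.arctan2(gy, gx), np.pi)  # unsigned orientation in [0, pi)
    h, w = img.shape
    feats = []
    for r in range(0, h - cell + 1, cell):
        for c in range(0, w - cell + 1, cell):
            m = mag[r:r + cell, c:c + cell].ravel()
            a = ang[r:r + cell, c:c + cell].ravel()
            hist, _ = np.histogram(a, bins=bins, range=(0, np.pi), weights=m)
            feats.append(hist / (np.linalg.norm(hist) + 1e-9))  # per-cell normalisation
    return np.concatenate(feats)

def multiscale_hog(img, cells=(8, 16)):
    """Concatenate rectangular-cell histograms computed at several scales."""
    return np.concatenate([hog_cells(img, cell=c) for c in cells])
```

On a 48x48 face crop with cells of 8 and 16 pixels this yields a (36 + 9) x 8 = 360-dimensional descriptor, which could then be passed to a downstream recognizer.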

JOUTICA ◽  
2021 ◽  
Vol 6 (2) ◽  
pp. 484
Author(s):  
Resty Wulanningrum ◽  
Anggi Nur Fadzila ◽  
Danar Putra Pamungkas

Humans naturally use facial expressions to communicate and to show their emotions in social interaction. Facial expressions are a form of non-verbal communication that conveys a person's emotional state to an observer. This study uses the Principal Component Analysis (PCA) method for feature extraction from expression images and the Convolutional Neural Network (CNN) method for emotion classification; training and testing are carried out on the Facial Expression Recognition-2013 (FER-2013) dataset to measure the accuracy of facial emotion recognition. The final tests yield an accuracy of 59.375% for the PCA method and 59.386% for the CNN method.
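The PCA feature-extraction step used in this study can be sketched in numpy as follows. This is a minimal illustration only: the CNN classifier and the FER-2013 loading code are omitted, and the function names are ours:

```python
import numpy as np

def pca_fit(X, k):
    """Fit PCA on row-vector samples X of shape (n, d); return the mean and
    the top-k principal directions (rows of Vt from an SVD of centred data)."""
    mu = X.mean(axis=0)
    _, _, Vt = np.linalg.svd(X - mu, full_matrices=False)
    return mu, Vt[:k]

def pca_project(X, mu, comps):
    """Project samples onto the learned k-dimensional principal subspace."""
    return (X - mu) @ comps.T
```

For FER-2013, each 48x48 image would be flattened to a 2304-dimensional row of X, and the k-dimensional projections would then be fed to the classifier.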


Author(s):  
Gopal Krishan Prajapat ◽  
Rakesh Kumar

Facial feature extraction and recognition play a prominent role in human non-verbal interaction; facial expression is one of the crucial factors, alongside pose, speech, behaviour and actions, used to convey information about a person's intentions and emotions. In this article, an extended local binary pattern (ELBP) is used for the feature extraction process and principal component analysis (PCA) is used for dimensionality reduction. The projections of the sample and model images are calculated and compared using the Euclidean distance. The combination of extended local binary pattern and PCA (ELBP+PCA) improves the recognition rate and also diminishes the evaluation complexity. The evaluation of the proposed facial expression recognition approach focuses on the performance of the recognition rate. A series of tests is performed to validate the algorithms and to compare the accuracy of the methods on the JAFFE and Extended Cohn-Kanade image databases.
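The LBP-plus-Euclidean-matching pipeline described above can be sketched as follows. Note this uses the classic 8-neighbour LBP as a simplified stand-in for the paper's extended variant, and the PCA step is omitted; all names are illustrative:

```python
import numpy as np

def lbp_codes(img):
    """Classic 8-neighbour LBP codes for interior pixels (a simplified
    stand-in for the extended LBP used in the article)."""
    c = img[1:-1, 1:-1]
    shifts = [(-1, -1), (-1, 0), (-1, 1), (0, 1), (1, 1), (1, 0), (1, -1), (0, -1)]
    code = np.zeros_like(c, dtype=np.uint8)
    for bit, (dr, dc) in enumerate(shifts):
        nb = img[1 + dr:img.shape[0] - 1 + dr, 1 + dc:img.shape[1] - 1 + dc]
        code |= (nb >= c).astype(np.uint8) << bit   # set bit when neighbour >= centre
    return code

def lbp_histogram(img):
    """Normalised 256-bin histogram of LBP codes, used as the face descriptor."""
    hist, _ = np.histogram(lbp_codes(img), bins=256, range=(0, 256))
    return hist / hist.sum()

def nearest_expression(query_hist, model_hists, labels):
    """Euclidean-distance matching of a query descriptor against model images."""
    d = np.linalg.norm(model_hists - query_hist, axis=1)
    return labels[int(np.argmin(d))]
```

In the article's full pipeline, the histograms would additionally be projected through PCA before the Euclidean comparison.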


2017 ◽  
Vol 120 (3) ◽  
pp. 391-407 ◽  
Author(s):  
Ying Ge ◽  
Xiaofang Zhong ◽  
Wenbo Luo

Internet addiction affects individuals' facial expression recognition. However, evidence on facial expression recognition across different types of addicts is insufficient. The present study addressed this question by adopting an eye-movement analysis method and focusing on the difference in facial expression recognition between internet-addicted and non-internet-addicted urban left-behind children in China. Sixty 14-year-old Chinese participants performed tasks requiring absolute recognition judgments and relative recognition judgments. The results show that the information processing mode of the internet-addicted children involved earlier gaze acceleration, longer fixation durations, lower fixation counts, and uniform extraction of pictorial information, whereas the non-addicted children showed the opposite pattern. Moreover, recognizing and processing negative emotion pictures was relatively complex, and it was especially difficult for urban internet-addicted left-behind children to process negative emotion pictures in the fine judgment and processing stage of difference recognition, as demonstrated by longer fixation durations and inadequate fixation counts.


2019 ◽  
Vol 9 (21) ◽  
pp. 4678 ◽  
Author(s):  
Daniel Canedo ◽  
António J. R. Neves

Emotion recognition has attracted major attention in numerous fields because of its relevant applications in the contemporary world: marketing, psychology, surveillance, and entertainment are some examples. An emotion can be recognized in several ways; this paper focuses on facial expressions and presents a systematic review of the matter. A total of 112 papers on this topic, published in ACM, IEEE, BASE and Springer between January 2006 and April 2019, were extensively reviewed. The most frequently used methods and algorithms are first introduced and summarized for better understanding, such as face detection, smoothing, Principal Component Analysis (PCA), Local Binary Patterns (LBP), Optical Flow (OF), and Gabor filters, among others. This review identified a clear difficulty in translating the high facial expression recognition (FER) accuracy achieved in controlled environments to uncontrolled and pose-variant environments. Future efforts in the FER field should be put into multimodal systems that are robust enough to face the adversities of real-world scenarios. A thorough analysis of the research done on FER in Computer Vision, based on the selected papers, is presented. This review aims not only to become a reference for future research on emotion recognition, but also to provide an overview of the work done on this topic for potential readers.


2018 ◽  
Vol 27 (08) ◽  
pp. 1850121 ◽  
Author(s):  
Zhe Sun ◽  
Zheng-Ping Hu ◽  
Raymond Chiong ◽  
Meng Wang ◽  
Wei He

Recent research has demonstrated the effectiveness of deep subspace learning networks, including the principal component analysis network (PCANet) and linear discriminant analysis network (LDANet), since they can extract high-level features and better represent the abstract semantics of given data. However, their representations do not consider the nonlinear relationships in data, which limits the use of features with nonlinear metrics. In this paper, we propose a novel architecture combining kernel collaborative representation with deep subspace learning based on the PCANet and LDANet for facial expression recognition. First, the PCANet and LDANet are employed to learn abstract features. These features are then mapped to the kernel space to effectively capture their nonlinear similarities. Finally, we develop a simple yet effective classification method with squared ℓ2-regularization, which improves recognition accuracy and reduces time complexity. Comprehensive experimental results based on the JAFFE, CK+, KDEF and CMU Multi-PIE datasets confirm that our proposed approach has superior performance not just in terms of accuracy, but also in robustness against block occlusion and varying parameter configurations.
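The classification step with squared ℓ2-regularization can be sketched as a linear collaborative representation classifier: the ridge-regularized coding vector has a closed-form solution, and the query is assigned to the class whose atoms reconstruct it with the smallest residual. This sketch omits the kernel mapping and the PCANet/LDANet feature learning, and all names are ours:

```python
import numpy as np

def crc_classify(D, labels, y, lam=0.01):
    """Collaborative representation with squared l2-regularisation:
    solve min_a ||y - D a||^2 + lam ||a||^2 in closed form, then assign
    the class whose atoms best reconstruct the query y.
    D: dictionary of training samples as columns; labels: one per column."""
    # ridge solution: a = (D^T D + lam I)^{-1} D^T y
    a = np.linalg.solve(D.T @ D + lam * np.eye(D.shape[1]), D.T @ y)
    classes = sorted(set(labels))
    residuals = []
    for c in classes:
        mask = np.array([l == c for l in labels])
        residuals.append(np.linalg.norm(y - D[:, mask] @ a[mask]))
    return classes[int(np.argmin(residuals))]
```

The closed-form solve is what keeps the time complexity low compared with iterative sparse coding; in the kernel variant, D^T D and D^T y would be replaced by Gram-matrix entries.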


2014 ◽  
Vol 511-512 ◽  
pp. 437-440
Author(s):  
Xiao Xiao Xia ◽  
Zi Lu Ying ◽  
Wen Jin Chu

A new method based on Monogenic Binary Coding (MBC) is proposed for facial expression feature extraction and representation. First, monogenic signal analysis is used to extract multi-scale magnitude, orientation and phase components. Second, MBC is used to encode the monogenic local variation and intensity in local regions of each extracted component at each scale, and local histograms are built. Then, Blocked Fisher Linear Discrimination (BFLD) is used to reduce the dimensionality of the histogram features and to enhance discrimination. Finally, the three complementary components are fused for more effective facial expression recognition (FER). Experimental results on the Japanese Female Facial Expression (JAFFE) database show that the fusion method performs even better than state-of-the-art local-feature-based FER methods such as Local Binary Patterns (LBP) + Sparse Representation Classification (SRC) and Local Phase Quantization (LPQ) + SRC.
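The first step, extracting magnitude, orientation and phase from the monogenic signal, can be sketched with a frequency-domain Riesz transform. This is a simplified single-scale illustration under our own assumptions (a real pipeline would first apply band-pass filtering, e.g. log-Gabor, at each scale; function names are ours):

```python
import numpy as np

def monogenic_components(img):
    """Magnitude, orientation and phase of the monogenic signal, computed
    with a frequency-domain Riesz transform (single-scale sketch)."""
    h, w = img.shape
    u = np.fft.fftfreq(h).reshape(-1, 1)
    v = np.fft.fftfreq(w).reshape(1, -1)
    r = np.sqrt(u**2 + v**2)
    r[0, 0] = 1.0                                  # avoid division by zero at DC
    F = np.fft.fft2(img)
    r1 = np.real(np.fft.ifft2(-1j * u / r * F))    # Riesz x-component
    r2 = np.real(np.fft.ifft2(-1j * v / r * F))    # Riesz y-component
    mag = np.sqrt(img**2 + r1**2 + r2**2)          # local magnitude
    ori = np.arctan2(r2, r1)                       # local orientation
    phase = np.arctan2(np.sqrt(r1**2 + r2**2), img)  # local phase
    return mag, ori, phase
```

Each of the three returned maps would then be binary-coded LBP-style and summarized as blocked local histograms before the BFLD and fusion stages.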

