Recognition of Emotions Conveyed by Touch Through Force-Sensitive Screens: Observational Study of Humans and Machine Learning Techniques (Preprint)

2018 ◽  
Author(s):  
Alicia Heraz ◽  
Manfred Clynes

BACKGROUND Emotions affect our mental health: they influence our perception, alter our physical strength, and interfere with our reason. Emotions modulate our face, voice, and movements. When emotions are expressed through the voice or face, they are difficult to measure because cameras and microphones are rarely used in real life under the same laboratory conditions in which emotion detection algorithms perform well. Given the increasing use of smartphones, the fact that we touch our phones on average thousands of times a day, and the fact that emotions modulate our movements, we have an opportunity to explore emotional patterns in passive expressive touches, detect emotions, and thereby empower smartphone apps with emotional intelligence. OBJECTIVE In this study, we asked 2 questions. (1) As emotions modulate our finger movements, can humans recognize emotions by only looking at passive expressive touches? (2) Can we teach machines to accurately recognize emotions from passive expressive touches? METHODS We were interested in 8 emotions: anger, awe, desire, fear, hate, grief, laughter, and love (plus no emotion). We conducted 2 experiments with 2 groups of participants: good imagers and emotionally aware participants formed group A, with the remainder forming group B. In the first experiment, we video recorded, for a few seconds, the expressive touches of group A, and we asked group B to guess the emotion of every expressive touch. In the second experiment, we trained group A to express every emotion on a force-sensitive smartphone. We then collected hundreds of thousands of their touches and applied feature selection and machine learning techniques to detect emotions from the coordinates of participants' finger touches, amount of force, and skin area, all as functions of time. RESULTS We recruited 117 volunteers: 15 were good imagers and emotionally aware (group A); the other 102 participants formed group B. 
In the first experiment, group B successfully recognized all emotions (and no emotion) with a high 83.8% (769/918) accuracy: 49.0% (50/102) of them were 100% (450/450) correct and 25.5% (26/102) were 77.8% (182/234) correct. In the second experiment, we achieved a high 91.11% (2110/2316) classification accuracy in detecting all emotions (and no emotion) from 9 spatiotemporal features of group A touches. CONCLUSIONS Emotions modulate our touches on force-sensitive screens, and humans have a natural ability to recognize other people's emotions by watching prerecorded videos of their expressive touches. Machines can learn the same emotion recognition ability and do better than humans if they are allowed to continue learning on new data. It is possible to enable force-sensitive screens to recognize users' emotions and share this emotional insight with users, increasing users' emotional awareness and allowing researchers to design better technologies for well-being.
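The second experiment's approach can be sketched as a standard supervised pipeline: per-touch spatiotemporal features feeding a multi-class classifier over 8 emotions plus "no emotion". This is a minimal illustration on synthetic data; the 9 features here (summaries of coordinates, force, and contact area over time) and the random forest model are assumptions, not the paper's exact feature set or algorithm.

```python
# Hypothetical sketch: classify 9 classes (8 emotions + "no emotion") from
# 9 spatiotemporal touch features. Data is synthetic; class means are shifted
# so the toy problem is separable.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_touches, n_features = 600, 9           # e.g. stats of x, y, force, area, duration
X = rng.normal(size=(n_touches, n_features))
y = rng.integers(0, 9, size=n_touches)   # 8 emotions + "no emotion"
X += y[:, None] * 0.8                    # shift class means apart (synthetic signal)

clf = RandomForestClassifier(n_estimators=100, random_state=0)
mean_acc = cross_val_score(clf, X, y, cv=5).mean()
```

In practice the features would be computed per touch from the raw (x, y, force, area) time series before classification.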

10.2196/10104 ◽  
2018 ◽  
Vol 5 (3) ◽  
pp. e10104 ◽  


2021 ◽  
Vol 14 (3) ◽  
pp. 1-21
Author(s):  
Roy Abitbol ◽  
Ilan Shimshoni ◽  
Jonathan Ben-Dov

The task of assembling fragments in a puzzle-like manner into a composite picture plays a significant role in the field of archaeology, as it supports researchers in their attempts to reconstruct historic artifacts. In this article, we propose a method for matching and assembling pairs of ancient papyrus fragments containing mostly unknown scriptures. Papyrus paper is manufactured from papyrus plants and therefore displays typical thread patterns resulting from the plant's stems. The proposed algorithm is founded on the hypothesis that these thread patterns contain unique local attributes, such that nearby fragments show similar patterns reflecting the continuations of the threads. We posit that these patterns can be exploited using image processing and machine learning techniques to identify matching fragments. The algorithm and system we present support the quick and automated classification of matching pairs of papyrus fragments, as well as the geometric alignment of the pairs against each other. The algorithm consists of a series of steps and is based on deep-learning and machine learning methods. The first step is to deconstruct the problem of matching fragments into a smaller problem of finding thread continuation matches in local edge areas (squares) between pairs of fragments. This phase is solved using a convolutional neural network ingesting raw images of the edge areas and producing local matching scores. The result of this stage yields very high recall but low precision. We therefore use these scores to decide the matching of entire fragment pairs through an elaborate voting mechanism, enhanced with geometric alignment techniques from which we extract additional spatial information. Finally, we feed all the data collected from these steps into a Random Forest classifier to produce a higher-order classifier capable of predicting whether a pair of fragments is a match.
Our algorithm was trained on a batch of fragments excavated from the Dead Sea caves and dated to circa the 1st century BCE. The algorithm shows excellent results on a validation set of similar origin and condition. We then ran the algorithm against a real-life set of fragments for which we had no prior knowledge or labeling of matches. This test batch is considered extremely challenging due to its poor condition and the small size of its fragments. Indeed, numerous researchers have sought matches within this batch with very little success. Our algorithm's performance on this batch was suboptimal, returning a relatively large ratio of false positives. However, the algorithm still proved quite useful, eliminating 98% of the possible matches and thus reducing the amount of work needed for manual inspection. Notably, experts who reviewed the results identified some matches as potentially true and referred them for further investigation.
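The final stage described above, aggregating many local CNN scores into a per-pair decision, can be sketched as follows. This is a hypothetical illustration: the local scores are random stand-ins for CNN outputs, and the three summary features (peak score, mean score, vote count) are assumptions, not the article's exact voting features.

```python
# Sketch: turn local edge-square match scores into per-pair features for a
# Random Forest deciding whether two fragments match. All data is synthetic.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(1)

def pair_features(local_scores):
    """Summarize the CNN scores of all edge squares of one fragment pair."""
    s = np.asarray(local_scores)
    return [s.max(), s.mean(), (s > 0.9).sum()]  # peak, average, vote count

# Matching pairs tend to produce several very high local scores; non-matches don't.
matches = [pair_features(rng.uniform(0.7, 1.0, size=20)) for _ in range(100)]
non_matches = [pair_features(rng.uniform(0.0, 0.6, size=20)) for _ in range(100)]
X = np.array(matches + non_matches)
y = np.array([1] * 100 + [0] * 100)

clf = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)
train_acc = clf.score(X, y)
```

The design point is that the voting step converts a high-recall, low-precision local detector into a precise pair-level classifier.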


2021 ◽  
Vol 3 ◽  
Author(s):  
Syem Ishaque ◽  
Naimul Khan ◽  
Sri Krishnan

Heart rate variability (HRV) is the variation in the time intervals between successive heartbeats. It is used to analyze the autonomic nervous system (ANS), the control system that modulates the body's unconscious actions, such as cardiac function, respiration, digestion, blood pressure, urination, and dilation/constriction of the pupil. This review article presents a summary and analysis of research that has linked HRV to morbidity, pain, drowsiness, stress, and exercise through signal processing and machine learning methods. The points of emphasis in HRV research, as well as the gaps in processes that could be improved to enhance research quality, are discussed in detail. Restricting the physiological signals to electrocardiogram (ECG), electrodermal activity (EDA), photoplethysmography (PPG), and respiration (RESP) analysis yielded 25 articles that examined the causes and effects of increased or reduced HRV. Reduced HRV was generally associated with increased morbidity and stress. High HRV normally indicated good health, and in some instances it could signify clinical events of interest such as drowsiness. Effective analysis of HRV during ambulatory and motion situations such as exercise, video gaming, and driving could have a significant impact on social well-being. Detection of HRV in motion is far from perfect: studies involving exercise or driving reported accuracies as high as 85% and as low as 59%. HRV detection in motion can be improved further by harnessing advances in machine learning techniques.
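Two standard time-domain HRV measures of the kind the reviewed studies feed into machine learning models can be computed directly from the RR intervals (milliseconds between successive heartbeats). The sketch below uses a synthetic RR series; SDNN and RMSSD are standard definitions, not specific to any one reviewed study.

```python
# Sketch: two common time-domain HRV features from a synthetic RR series.
import numpy as np

def sdnn(rr_ms):
    """Standard deviation of all RR intervals (a global variability measure)."""
    return float(np.std(rr_ms, ddof=1))

def rmssd(rr_ms):
    """Root mean square of successive RR differences (beat-to-beat variability)."""
    diffs = np.diff(rr_ms)
    return float(np.sqrt(np.mean(diffs ** 2)))

rng = np.random.default_rng(0)
rr = 800 + rng.normal(0, 50, size=300)  # ~75 bpm with added variability
features = [sdnn(rr), rmssd(rr)]        # candidate inputs to an ML model
```

Reduced values of such features correspond to the "reduced HRV" the review associates with increased morbidity and stress.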


Author(s):  
Hesham M. Al-Ammal

Detection of anomalies in a given data set is a vital step in several cybersecurity applications, including intrusion detection, fraud detection, and social network analysis. Many of these techniques detect anomalies by examining graph-based data. Analyzing graphs makes it possible to capture relationships and communities as well as anomalies. The advantage of using graphs is that many real-life situations can be easily modeled by a graph that captures their structure and interdependencies. Although anomaly detection in graphs dates back to the 1990s, recent research has applied machine learning methods to anomaly detection over graphs. This chapter concentrates on static graphs (both labeled and unlabeled) and summarizes recent studies in machine learning for anomaly detection in graphs, including methods such as support vector machines, neural networks, generative neural networks, and deep learning. The chapter reflects on the successes and challenges of using these methods in the context of graph-based anomaly detection.
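A minimal feature-based instance of unlabeled static-graph anomaly detection can be sketched as follows: describe each node by simple structural features and flag outliers with an Isolation Forest. The adjacency matrix, the two features (degree and triangle count), and the planted "hub" anomaly are all illustrative assumptions; real systems use far richer structural features or embeddings.

```python
# Sketch: node anomaly detection on a synthetic static graph via structural
# features + Isolation Forest. Node 0 is planted as an anomalous hub.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
n = 60
A = (rng.random((n, n)) < 0.05).astype(int)
A = np.triu(A, 1)
A = A + A.T                                # undirected adjacency, no self-loops
A[0, 1:40] = 1
A[1:40, 0] = 1                             # node 0 becomes an anomalous hub

degree = A.sum(axis=1)
triangles = np.diag(A @ A @ A) / 2         # number of triangles through each node
X = np.column_stack([degree, triangles])

iso = IsolationForest(random_state=0).fit(X)
labels = iso.predict(X)                    # -1 marks anomalies
scores = iso.score_samples(X)              # lower score = more anomalous
```

The same pattern (structural features, then an outlier model) underlies many of the classical approaches the chapter surveys, with neural methods replacing the hand-built features.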


Processes ◽  
2020 ◽  
Vol 8 (4) ◽  
pp. 448
Author(s):  
Alessandro Tonacci ◽  
Alessandro Dellabate ◽  
Andrea Dieni ◽  
Lorenzo Bachi ◽  
Francesco Sansone ◽  
...  

Nowadays, psychological stress is a burdensome condition affecting an increasing number of people, prompting several coping strategies, including relaxation protocols, which are often administered in non-structured environments such as workplaces and constrained to short times. Here, we performed a quick relaxation protocol based on a short audio and video, and analyzed physiological signals related to autonomic nervous system (ANS) activity, including the electrocardiogram (ECG) and galvanic skin response (GSR). Based on the features extracted, machine learning was applied to discriminate between subjects who benefited from the protocol and those with negative or no effects. Twenty-four healthy volunteers were enrolled and equally and randomly divided into Group A, performing an audio-video + video-only relaxation, and Group B, performing an audio-video + audio-only protocol. From the ANS point of view, Group A subjects displayed a significant difference across the test phases in the heart rate variability-related parameter SDNN, whereas both groups displayed a different GSR response, albeit at different levels, with Group A displaying greater differences across phases than Group B. Overall, the majority of the volunteers self-reported an improvement in their well-being, according to structured questionnaires. Neural networks discriminated subjects with a positive effect of the relaxation protocol from those with a negative or neutral effect, based on basal autonomic features, with 79.2% accuracy. The results demonstrate significant heterogeneity in the autonomic effects of relaxation, highlighting the importance of maintaining a structured, well-defined protocol to produce significant benefits at the ANS level. 
Machine learning approaches can help predict the outcome of such protocols, providing subjects less prone to positive responses with personalized advice that could improve the protocols' effect on self-perceived relaxation.
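Predicting the relaxation outcome from basal autonomic features, as the neural network above does, can be sketched with a small classifier. The two input features (baseline SDNN and baseline GSR level) and the synthetic "responder" rule are hypothetical stand-ins; the study's exact feature set and network are not reproduced here.

```python
# Sketch: a small neural network predicting positive vs. negative/neutral
# relaxation outcome from two hypothetical basal autonomic features.
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
n = 120
X = rng.normal(size=(n, 2))                      # [baseline SDNN, baseline GSR]
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)    # synthetic "responder" label

model = make_pipeline(
    StandardScaler(),
    MLPClassifier(hidden_layer_sizes=(8,), max_iter=2000, random_state=0),
)
model.fit(X, y)
acc = model.score(X, y)
```

Scaling before the network matters here because autonomic features (milliseconds, microsiemens) live on very different scales.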


Complexity ◽  
2019 ◽  
Vol 2019 ◽  
pp. 1-10 ◽  
Author(s):  
Rafael Vega Vega ◽  
Héctor Quintián ◽  
Carlos Cambra ◽  
Nuño Basurto ◽  
Álvaro Herrero ◽  
...  

The present research proposes the application of unsupervised and supervised machine-learning techniques to characterize Android malware families. More precisely, a novel unsupervised neural-projection method for dimensionality reduction, namely Beta Hebbian Learning (BHL), is applied to visually analyze such malware. Additionally, well-known supervised Decision Trees (DTs) are applied for the first time in order to improve the characterization of these families and to identify the most important of the original features. The proposed techniques are validated on real-life Android malware data by means of the well-known and publicly available Malgenome dataset. The results obtained support the proposed approach, confirming the validity of BHL and DTs for gaining deep knowledge of Android malware.
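The supervised half of the approach, using a Decision Tree to surface the most important original features, can be sketched as below. The binary permission-style features and family labels are synthetic stand-ins for the Malgenome data, and BHL (a specialized projection method) is not reproduced here.

```python
# Sketch: rank which (synthetic) binary features best characterize malware
# families via Decision Tree feature importances.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
n_samples, n_feats = 300, 10
X = rng.integers(0, 2, size=(n_samples, n_feats))   # permission-style features
y = X[:, 3]                                         # family fully determined by feature 3 here

tree = DecisionTreeClassifier(random_state=0).fit(X, y)
top_feature = int(np.argmax(tree.feature_importances_))
```

On real data the importance ranking would spread across several features, but reading it off the fitted tree is the mechanism the paper leverages for characterization.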


2021 ◽  
Vol 21 (1) ◽  
Author(s):  
Koen I. Neijenhuijs ◽  
Carel F. W. Peeters ◽  
Henk van Weert ◽  
Pim Cuijpers ◽  
Irma Verdonck-de Leeuw

Abstract Purpose Knowledge of symptom clusters may inform targeted interventions. The current study investigated symptom clusters among cancer survivors, using machine learning techniques on a large data set. Methods Data consisted of self-reports of cancer survivors who used 'Oncokompas', a fully automated online application that supports their self-management by (1) monitoring their symptoms through patient-reported outcome measures (PROMs) and (2) providing a personalized overview of supportive care options tailored to their scores, aiming to reduce symptom burden and improve health-related quality of life. In the present study, data on 26 generic symptoms (physical and psychosocial) were used. The PROM result for each symptom is presented to the user as a score indicating no, moderate, or high risk to well-being. Data of 1032 cancer survivors were analyzed using Hierarchical Density-Based Spatial Clustering of Applications with Noise (HDBSCAN), applied to high risk scores and to moderate-to-high risk scores separately. Results When analyzing the high risk scores, seven clusters were extracted: one main cluster containing the most frequently occurring physical and psychosocial symptoms, and six subclusters with different combinations of these symptoms. When analyzing the moderate-to-high risk scores, three clusters were extracted: two main clusters, which separated physical symptoms (and their consequences) from psychosocial symptoms, and one subcluster containing only body weight issues. Conclusion There appears to be an inherent difference in the co-occurrence of symptoms depending on symptom severity. Among survivors with high risk scores, the clustering showed more connections between physical and psychosocial symptoms within separate subclusters. Among survivors with moderate-to-high risk scores, we observed fewer connections between physical and psychosocial symptoms in the clustering.


Symmetry ◽  
2020 ◽  
Vol 12 (4) ◽  
pp. 499 ◽  
Author(s):  
Iqbal H. Sarker ◽  
Yoosef B. Abushark ◽  
Asif Irshad Khan

This paper formulates the problem of predicting context-aware smartphone app usage based on machine learning techniques. In the real world, people use various kinds of smartphone apps differently in different contexts, including both user-centric and device-centric contexts. In artificial intelligence and machine learning, the decision tree model is one of the most popular approaches for predicting context-aware smartphone usage. However, real-life smartphone app usage data may contain many context dimensions, which can cause several issues, such as increased model complexity and over-fitting, and consequently decreased prediction accuracy of the context-aware model. To address these issues, this paper presents an effective principal component analysis (PCA) based context-aware smartphone app prediction model, “ContextPCA”, using the decision tree machine learning classification technique. PCA is an unsupervised machine learning technique that can separate symmetric and asymmetric components; it is adopted in the “ContextPCA” model to reduce the context dimensions of the original data set. Experimental results on smartphone app usage datasets show that the “ContextPCA” model effectively predicts context-aware smartphone app usage in terms of precision, recall, F-score, and ROC values in various test cases.
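The ContextPCA idea, PCA to reduce the context dimensions followed by a decision tree to predict the app used, can be sketched as a two-stage pipeline. The 20 synthetic context features and the four "app" labels below are assumptions for illustration only.

```python
# Sketch of the ContextPCA pipeline: PCA dimensionality reduction, then a
# decision tree classifier, on synthetic context/app-usage data.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.tree import DecisionTreeClassifier
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n, n_context = 500, 20
X = rng.normal(size=(n, n_context))
X[:, :2] *= 3.0                                     # two high-variance "real" contexts
y = (X[:, 0] > 0).astype(int) + 2 * (X[:, 1] > 0)   # 4 "apps" driven by those contexts

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = make_pipeline(PCA(n_components=5),
                      DecisionTreeClassifier(random_state=0)).fit(X_tr, y_tr)
test_acc = model.score(X_te, y_te)
```

Fitting PCA and the tree in one pipeline keeps the dimensionality reduction from leaking test-set information, which matters when reporting precision/recall on held-out data.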


2019 ◽  
Vol 20 (S19) ◽  
Author(s):  
Hsin-Yao Wang ◽  
Wen-Chi Li ◽  
Kai-Yao Huang ◽  
Chia-Ru Chung ◽  
Jorng-Tzong Horng ◽  
...  

Abstract Background Group B streptococcus (GBS) is an important pathogen that is responsible for invasive infections, including sepsis and meningitis. GBS serotyping is an essential means of investigating possible infection outbreaks and can identify possible sources of infection. Although it is possible to determine GBS serotypes by either immuno-serotyping or geno-serotyping, both traditional methods are time-consuming and labor-intensive. In recent years, matrix-assisted laser desorption ionization-time of flight mass spectrometry (MALDI-TOF MS) has been reported as an effective tool for determining GBS serotypes more rapidly and accurately. Thus, this work aims to identify GBS serotypes by combining machine learning techniques with MALDI-TOF MS. Results In this study, a total of 787 GBS isolates, obtained from three research and teaching hospitals, were analyzed by MALDI-TOF MS, and the serotype of each isolate was determined by a geno-serotyping experiment. Peaks at various mass-to-charge ratios were used as attributes to characterize the serotypes of GBS. Machine learning algorithms, such as support vector machine (SVM) and random forest (RF), were then used to construct predictive models for the five different serotypes (Types Ia, Ib, III, V, and VI). After optimization of feature selection and model generation based on training datasets, the accuracies of the selected models reached 54.9%–87.1% for the various serotypes on independent testing data. Specifically, for the major serotypes, namely type III and type VI, the accuracies were 73.9% and 70.4%, respectively. Conclusion The proposed models have been adopted to implement a web-based tool (GBSTyper), which is now freely accessible at http://csb.cse.yzu.edu.tw/GBSTyper/, for providing efficient and effective detection of GBS serotypes based on a MALDI-TOF MS spectrum. 
Overall, this work has demonstrated that the combination of MALDI-TOF MS and machine intelligence could provide a practical means of clinical pathogen testing.
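The modeling setup, m/z peak intensities as attributes and a multi-class classifier over five serotypes, can be sketched as below. The synthetic spectra and the serotype-specific "marker peak" structure are assumptions; the study's optimized feature selection is not reproduced.

```python
# Sketch: serotype prediction from synthetic MALDI-TOF-like peak intensities
# with a linear SVM, one of the classifiers used in the study.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_isolates, n_peaks = 400, 50
y = rng.integers(0, 5, size=n_isolates)            # 5 serotypes (Ia, Ib, III, V, VI)
X = rng.normal(size=(n_isolates, n_peaks))         # baseline peak intensities
X[np.arange(n_isolates), y * 5] += 4.0             # serotype-specific marker peak

X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)
clf = SVC(kernel="linear").fit(X_tr, y_tr)
test_acc = clf.score(X_te, y_te)
```

Stratified splitting mirrors the independent-testing evaluation: accuracy is reported per serotype on isolates the model never saw.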


2021 ◽  
pp. 1-13
Author(s):  
Qing Zhou ◽  
Xi Shi ◽  
Liang Ge

The early warning of mental disorders is of great importance for the psychological well-being of college students. The accuracy of conventional scaling methods on questionnaires is generally low in predicting mental disorders, as questionnaires contain much noise and the processing applied to them is rudimentary. To address this problem, we propose a novel anomaly detection framework that represents each questionnaire as a document and applies keyword extraction and machine learning techniques to detect abnormal questionnaires. We also propose a new keyword statistic for calculating option significance and three interpretable machine learning models for calculating question significance. Experiments demonstrate the effectiveness of the proposed methods.
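The questionnaire-as-document idea can be sketched with off-the-shelf components: serialize each respondent's answers as tokens, vectorize them, and flag outliers. TF-IDF and Isolation Forest here are generic stand-ins for the paper's custom keyword statistic and interpretable models, and the answer data is synthetic.

```python
# Sketch: treat each questionnaire as a document and flag abnormal ones.
# TF-IDF + Isolation Forest are stand-ins for the paper's custom methods.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.ensemble import IsolationForest

# Answers serialized as "question_option" tokens; respondent 30 answers unusually.
docs = ["q1_a q2_a q3_b q4_a"] * 30 + ["q1_d q2_d q3_d q4_d"]
X = TfidfVectorizer().fit_transform(docs).toarray()

iso = IsolationForest(contamination=0.05, random_state=0).fit(X)
flagged = np.where(iso.predict(X) == -1)[0]   # indices of abnormal questionnaires
```

The document representation is what lets keyword-level statistics (which options a respondent chose, and how rare they are) drive the anomaly score.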

