Automatic vs. Human Recognition of Pain Intensity from Facial Expression on the X-ITE Pain Database

Sensors ◽  
2021 ◽  
Vol 21 (9) ◽  
pp. 3273
Author(s):  
Ehsan Othman ◽  
Philipp Werner ◽  
Frerk Saxen ◽  
Ayoub Al-Hamadi ◽  
Sascha Gruss ◽  
...  

Prior work on automated methods has demonstrated that pain intensity can be recognized from frontal faces in videos, while humans are commonly assumed to be far more adept at this task than machines. In this paper, we investigate whether this assumption holds by comparing the results achieved by two human observers with those achieved by a Random Forest classifier (RFc) baseline model (called RFc-BL) and by three proposed automated models. The first proposed model is a Random Forest classifying descriptors of Action Unit (AU) time series; the second is a modified MobileNetV2 CNN classifying face images that combine three points in time; and the third is a custom deep network combining two CNN branches that uses the same input as the MobileNetV2 model plus knowledge of the RFc. We conduct experiments with the X-ITE phasic pain database, which comprises videotaped responses to heat and electrical pain stimuli, each at three intensities. Distinguishing these six stimulation types plus no stimulation was the main 7-class classification task for both the human observers and the automated approaches. Further, we conducted reduced 5-class and 3-class classification experiments, applied multi-task learning, and introduced a new sample weighting method. Experimental results show that the pain assessments of the human observers are significantly better than guessing and outperform the automatic baseline approach (RFc-BL) by about 1%; however, human performance is still quite poor, because the pain that may ethically be induced in experimental studies often does not show in the facial reaction. We discovered that downweighting those samples during training improves performance on all samples. The proposed RFc and two-CNNs models (using the proposed sample weighting) significantly outperformed the human observers by about 6% and 7%, respectively.
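The sample-weighting idea (downweight training samples whose pain does not show in the face) can be sketched as follows. The `expressiveness` score and the `floor` value are hypothetical illustrations, not the paper's actual weighting function:

```python
import numpy as np

def sample_weights(expressiveness, floor=0.2):
    """Downweight samples whose facial reaction is weak.

    expressiveness: per-sample score in [0, 1] (a hypothetical measure,
    e.g. summed Action Unit activity); samples near 0 get weight `floor`
    rather than 0, so they still contribute a little to training.
    """
    w = floor + (1.0 - floor) * np.asarray(expressiveness, dtype=float)
    return w / w.mean()  # normalise so the average weight is 1

def weighted_accuracy(y_true, y_pred, w):
    """Accuracy in which each sample counts proportionally to its weight."""
    correct = (np.asarray(y_true) == np.asarray(y_pred)).astype(float)
    return float(np.sum(w * correct) / np.sum(w))

# toy example: the two low-expressiveness samples count less
w = sample_weights([0.0, 0.1, 0.9, 1.0])
```

Weights of this form can be passed to most classifiers' training routines (e.g. a `sample_weight` argument) so that expressive samples dominate the learned decision boundaries.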

2021 ◽  
Author(s):  
Ehsan Othman ◽  
Philipp Werner ◽  
Frerk Saxen ◽  
Ayoub Al-Hamadi ◽  
Sascha Gruss ◽  
...  

Abstract Automatic systems enable continuous monitoring of patients' pain intensity, as shown in prior studies. Facial expression and physiological data such as electrodermal activity (EDA) are highly informative for pain recognition; the features extracted from EDA indicate the stress and anxiety caused by different levels of pain. In this paper, we investigate using the EDA modality and fusing two modalities (frontal RGB video and EDA) for continuous pain intensity recognition on the X-ITE Pain Database. Further, we compare the performance of automated models before and after reducing the imbalance problem in the heat and electrical pain datasets, which include phasic (short) and tonic (long) stimuli. We use three distinct real-time methods: Random Forest (RF) baseline methods [a Random Forest classifier (RFc) and a Random Forest regressor (RFr)], a Long Short-Term Memory network (LSTM), and an LSTM using a sample weighting method (called LSTM-SW). Experimental results (1) report the first results of continuous pain intensity recognition using EDA data on the X-ITE Pain Database, (2) show that LSTM and LSTM-SW outperform guessing and the baseline methods (RFc and RFr), (3) confirm that the EDA modality performs best with most models, and (4) show that fusing the outputs of two LSTM models using facial expression and EDA data (called Decision Fusion, DF) improves results further on some datasets (e.g., the Heat Phasic Dataset, HTD).
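Decision fusion of two modality-specific models can be sketched as a weighted average of their class-probability outputs. The equal weighting (`alpha=0.5`) and the toy probabilities are assumptions for illustration; the paper's exact fusion rule may differ:

```python
import numpy as np

def decision_fusion(p_video, p_eda, alpha=0.5):
    """Fuse two models' class-probability outputs and return class labels.

    p_video, p_eda: arrays of shape (n_samples, n_classes), each row a
    probability distribution over pain intensities. alpha weights the
    video model (0.5 = plain averaging).
    """
    fused = alpha * np.asarray(p_video) + (1 - alpha) * np.asarray(p_eda)
    return fused.argmax(axis=1)

# toy example: the modalities disagree on sample 0; the more confident
# EDA prediction wins under plain averaging
p_v = np.array([[0.6, 0.4], [0.2, 0.8]])
p_e = np.array([[0.3, 0.7], [0.4, 0.6]])
labels = decision_fusion(p_v, p_e)
```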


2016 ◽  
Vol 43 (1) ◽  
pp. 54-74 ◽  
Author(s):  
Baojun Ma ◽  
Hua Yuan ◽  
Ye Wu

Clustering is a powerful unsupervised tool for sentiment analysis of text. However, the clustering results may be affected by any step of the clustering process, such as the data pre-processing strategy, the term weighting method in the Vector Space Model, and the clustering algorithm. This paper presents the results of an experimental study of some common clustering techniques with respect to the task of sentiment analysis. Unlike previous studies, we investigate the combined effects of these factors through a series of comprehensive experiments. The experimental results indicate that, first, K-means-type clustering algorithms show clear advantages on balanced review datasets, while performing rather poorly on unbalanced datasets in terms of clustering accuracy. Second, the more recently designed weighting models outperform the traditional weighting models for sentiment clustering on both balanced and unbalanced datasets. Furthermore, extracting adjectives and adverbs offers clear improvements in clustering performance, whereas stemming and stopword removal negatively influence sentiment clustering. These results should be valuable for both the study and the use of clustering methods in online review sentiment analysis.
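To make the term-weighting step concrete, here is one common scheme (smoothed TF-IDF) applied to a toy tokenised corpus. This is only one of the many weighting models such studies compare, and the corpus is invented for illustration:

```python
import math
from collections import Counter

def tfidf(docs):
    """Compute smoothed TF-IDF weights for a tokenised corpus.

    docs: list of token lists. Returns one {term: weight} dict per
    document. IDF is smoothed as log((1 + N) / (1 + df)) so terms that
    appear in every document still get a finite (near-zero) weight.
    """
    n = len(docs)
    df = Counter(t for d in docs for t in set(d))   # document frequency
    out = []
    for d in docs:
        tf = Counter(d)
        out.append({t: (tf[t] / len(d)) * math.log((1 + n) / (1 + df[t]))
                    for t in tf})
    return out

docs = [["good", "great", "good"], ["bad", "awful"], ["good", "bad"]]
weights = tfidf(docs)
```

The resulting weight vectors are what a K-means-type algorithm would cluster in the Vector Space Model.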


2020 ◽  
Vol 10 (23) ◽  
pp. 8346
Author(s):  
Ni Jiang ◽  
Feihong Yu

Cell counting is a fundamental part of biomedical and pathological research. Predicting a density map is the mainstream method for counting cells. As an easily trained and well-generalized model, the random forest is often used to learn from cell images and predict density maps. However, it cannot predict values beyond the range of its training data, which may result in underestimation. To overcome this problem, we propose a cell counting framework that predicts the density map by detecting cells. The framework contains two parts: training data preparation and the detection framework. The former ensures that cells can be detected even when overlapping, and the latter makes the count accurate and robust. The proposed method uses multiple random forests to predict various probability maps, in which cells are detected via the Hessian matrix. All detection results are then combined to produce the density map, achieving better performance. We conducted experiments on three public cell datasets. Experimental results showed that the proposed model performs better than the traditional random forest (RF) in terms of accuracy and robustness, and is even superior to some state-of-the-art deep learning models. In particular, when the training data are small, which is the usual case in cell counting, the count errors on the VGG cell and MBM cell datasets decreased from 3.4 to 2.9 and from 11.3 to 9.3, respectively. The proposed model obtains the lowest count error and achieves state-of-the-art performance.
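The detection step can be sketched as finding local maxima of a predicted probability map. This is a simplified stand-in for the paper's Hessian-based detector; the 3x3 neighbourhood and the threshold value are assumptions for illustration:

```python
import numpy as np

def count_cells(prob_map, thresh=0.5):
    """Count cells as local maxima of a predicted probability map.

    A pixel counts as a detection if it exceeds `thresh` and is the
    maximum of its 3x3 neighbourhood (simplified; the paper detects
    cells via the Hessian matrix of the map instead).
    """
    p = np.pad(prob_map, 1, mode="constant")  # zero border
    count = 0
    h, w = prob_map.shape
    for i in range(h):
        for j in range(w):
            v = p[i + 1, j + 1]
            if v > thresh and v == p[i:i + 3, j:j + 3].max():
                count += 1
    return count

# synthetic map with two bright blobs
m = np.zeros((8, 8))
m[2, 2], m[5, 6] = 0.9, 0.8
```

Running several forests and merging their detections, as the paper describes, amounts to applying this kind of peak detection to each probability map and aggregating the results into a density map.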


2014 ◽  
Vol 6 (1) ◽  
pp. 1032-1035 ◽  
Author(s):  
Ramzi Suleiman

The research on quasi-luminal neutrinos has sparked several experimental studies testing the "speed of light limit" hypothesis. To date, the overall evidence favors the "null" hypothesis, stating that there is no significant difference between the observed velocities of light and neutrinos. Despite numerous theoretical models proposed to explain the neutrinos' behavior, no attempt has been made to predict the experimentally produced results. This paper presents a simple novel extension of Newton's mechanics to the domain of relativistic velocities. For a typical neutrino-velocity experiment, the proposed model is utilized to derive a general expression for . Comparison of the model's predictions with the results of six neutrino-velocity experiments, conducted by five collaborations, reveals that the model predicts all the reported results with striking accuracy. Because the direction of the neutrino flight matters in the proposed model, its success in accounting for all the tested data indicates a complete collapse of the Lorentz symmetry principle in situations involving quasi-luminal particles moving in two opposite directions. This conclusion is supported by previous findings showing that a Sagnac effect identical to the one documented for radial motion also occurs in linear motion.


2020 ◽  
Vol 27 (3) ◽  
pp. 178-186 ◽  
Author(s):  
Ganesan Pugalenthi ◽  
Varadharaju Nithya ◽  
Kuo-Chen Chou ◽  
Govindaraju Archunan

Background: N-Glycosylation is one of the most important post-translational mechanisms in eukaryotes. It predominantly occurs in the N-X-[S/T] sequon, where X is any amino acid other than proline. However, not all N-X-[S/T] sequons in proteins are glycosylated; therefore, accurate prediction of N-glycosylation sites is essential to understand the N-glycosylation mechanism. Objective: Our motivation is to develop a computational method to predict N-glycosylation sites in eukaryotic protein sequences. Methods: We report a random forest method, Nglyc, to predict N-glycosylation sites from protein sequence using 315 sequence features. The method was trained on a dataset of 600 N-glycosylation sites and 600 non-glycosylation sites, and tested on a dataset containing 295 N-glycosylation sites and 253 non-glycosylation sites. Nglyc predictions were compared with the NetNGlyc, EnsembleGly and GPP methods. Further, the performance of Nglyc was evaluated using human and mouse N-glycosylation sites. Results: Nglyc achieved an overall training accuracy of 0.8033 with all 315 features. Performance comparison with the NetNGlyc, EnsembleGly and GPP methods shows that Nglyc performs better than the other methods, with high sensitivity and specificity rates. Conclusion: Our method achieved an overall accuracy of 0.8248 with 0.8305 sensitivity and 0.8182 specificity. The comparison study shows that our method performs better than the other methods, and its applicability and success were further evaluated using human and mouse N-glycosylation sites. Nglyc is freely available at https://github.com/bioinformaticsML/Ngly.
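The first step of any such predictor is enumerating candidate sequons. A minimal sketch of that scan, using a regex lookahead so that overlapping sequons are all reported (these are candidates only: as the abstract notes, not every sequon is actually glycosylated):

```python
import re

def find_sequons(seq):
    """Return 0-based positions of candidate N-glycosylation sequons.

    Matches N-X-[S/T] where X is any residue except proline (P). The
    lookahead keeps the match zero-width after N, so sequons that
    overlap a previous match are still found.
    """
    return [m.start() for m in re.finditer(r"N(?=[^P][ST])", seq)]

# 'NVT' at 0 and 'NGS' at 5 match; the 'NPS' at 8 is skipped (X = proline)
positions = find_sequons("NVTQANGSNPS")
```

A method like Nglyc would then compute its sequence features in a window around each reported position and classify it with the trained random forest.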


2021 ◽  
Vol 11 (1) ◽  
Author(s):  
Jared Hamwood ◽  
Beat Schmutz ◽  
Michael J. Collins ◽  
Mark C. Allenby ◽  
David Alonso-Caneiro

Abstract This paper proposes a fully automatic method to segment the inner boundary of the bony orbit in two different image modalities: magnetic resonance imaging (MRI) and computed tomography (CT). The method, based on a deep learning architecture, uses two fully convolutional neural networks in series followed by a graph-search method to generate a boundary for the orbit. When compared to human performance for segmentation of both CT and MRI data, the proposed method achieves high Dice coefficients on both orbit and background, with scores of 0.813 and 0.975 in CT images and 0.930 and 0.995 in MRI images, showing a high degree of agreement with a manual segmentation by a human expert. Given the volumetric characteristics of these imaging modalities and the complexity and time-consuming nature of the segmentation of the orbital region in the human skull, it is often impractical to manually segment these images. Thus, the proposed method provides a valid clinical and research tool that performs similarly to the human observer.
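The agreement scores reported above are Dice coefficients. For reference, the standard definition on binary masks (this is the textbook metric, not the authors' code; the masks below are made up):

```python
import numpy as np

def dice(pred, truth):
    """Dice coefficient between two binary masks (1.0 = perfect overlap).

    Defined as 2|A ∩ B| / (|A| + |B|); returns 1.0 when both masks are
    empty, since two empty segmentations agree trivially.
    """
    pred, truth = np.asarray(pred, bool), np.asarray(truth, bool)
    inter = np.logical_and(pred, truth).sum()
    denom = pred.sum() + truth.sum()
    return 2.0 * inter / denom if denom else 1.0

# toy 2x3 masks: 2 overlapping pixels out of 3 each -> Dice = 2/3
a = np.array([[1, 1, 0], [0, 1, 0]])
b = np.array([[1, 0, 0], [0, 1, 1]])
score = dice(a, b)
```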


2021 ◽  
Vol 7 (3) ◽  
pp. 209-219
Author(s):  
Iris J Holzleitner ◽  
Alex L Jones ◽  
Kieran J O’Shea ◽  
Rachel Cassar ◽  
Vanessa Fasolt ◽  
...  

Abstract Objectives A large literature exists investigating the extent to which physical characteristics (e.g., strength, weight, and height) can be accurately assessed from face images. While most of these studies have employed two-dimensional (2D) face images as stimuli, some recent studies have used three-dimensional (3D) face images because they may contain cues not visible in 2D face images. As equipment required for 3D face images is considerably more expensive than that required for 2D face images, we here investigated how perceptual ratings of physical characteristics from 2D and 3D face images compare. Methods We tested whether 3D face images capture cues of strength, weight, and height better than 2D face images do by directly comparing the accuracy of strength, weight, and height ratings of 182 2D and 3D face images taken simultaneously. Strength, height and weight were rated by 66, 59 and 52 raters respectively, who viewed both 2D and 3D images. Results In line with previous studies, we found that weight and height can be judged somewhat accurately from faces; contrary to previous research, we found that people were relatively inaccurate at assessing strength. We found no evidence that physical characteristics could be judged more accurately from 3D than 2D images. Conclusion Our results suggest physical characteristics are perceived with similar accuracy from 2D and 3D face images. They also suggest that the substantial costs associated with collecting 3D face scans may not be justified for research on the accuracy of facial judgments of physical characteristics.
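Rating "accuracy" in studies of this kind is commonly operationalised as the correlation between mean perceived ratings and the actual measurements. A plain implementation of that statistic, with invented example data (the ratings and heights below are hypothetical, not the study's):

```python
import math

def pearson_r(x, y):
    """Pearson correlation coefficient between two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# hypothetical mean height ratings (1-5 scale) vs measured heights (cm)
r = pearson_r([3.1, 4.0, 2.5, 3.6], [172, 181, 165, 176])
```

Comparing this r for ratings of 2D images against ratings of 3D images of the same faces is the kind of contrast the study draws.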


2021 ◽  
Vol 15 (1) ◽  
pp. 151-160
Author(s):  
Hemant P. Kasturiwale ◽  
Sujata N. Kale

The Autonomic Nervous System (ANS) regulates involuntary bodily functions, and Heart Rate Variability (HRV) can be used as a diagnostic tool for heart defects. HRV indices can be classified as linear or nonlinear and are mostly used to measure the efficiency of a model. For predicting cardiac diseases, the feature selection and extraction of the machine learning model are decisive. Models available to date predict cardiac diseases from HRV indices but shed little light on the specifics of the indices, the selection process, or model stability. The proposed model is developed considering all facets: electrocardiogram (ECG) amplitude, frequency components, sampling frequency, extraction methods and acquisition techniques. The machine learning based model and its performance are tested using the standard BioSignal method, both on available data and on data obtained by the author. This is a unique model, developed by considering a vast number of mixture sets and more than four complex cardiac classes. The statistical analysis is performed on a variety of databases, such as MIT/BIH Normal Sinus Rhythm (NSR), MIT/BIH Arrhythmia (AR), MIT/BIH Atrial Fibrillation (AF) and a Peripheral Pulse Analyser, using feature compatibility techniques. The classifiers are trained for prediction with approximately 40000 parameter sets. The proposed model reaches an average accuracy of 97.87 percent and is both sensitive and precise. The best features are chosen from the different HRV features to be used for classification. The model was checked under all possible subject scenarios, such as the raw database and the non-ECG signal; in this sense, robustness is defined not only by the specificity parameter, but also by other measured output parameters.
Support Vector Machine (SVM), K-Nearest Neighbour (KNN) and Ensemble AdaBoost (EAB) with Random Forest (RF) were tested in a 5% higher precision band and a lower band configuration. The Random Forest produced the best results, and its robustness has been established.
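Two of the standard linear (time-domain) HRV indices the abstract refers to are SDNN and RMSSD over the RR-interval series. The definitions below are the conventional ones, not the authors' pipeline, and the RR series is a toy example:

```python
import math

def sdnn(rr):
    """Standard deviation of RR intervals (ms): overall variability."""
    m = sum(rr) / len(rr)
    return math.sqrt(sum((x - m) ** 2 for x in rr) / (len(rr) - 1))

def rmssd(rr):
    """Root mean square of successive RR differences (ms): beat-to-beat
    variability, reflecting parasympathetic (vagal) activity."""
    diffs = [b - a for a, b in zip(rr, rr[1:])]
    return math.sqrt(sum(d * d for d in diffs) / len(diffs))

# toy RR-interval series in milliseconds
rr = [800, 810, 790, 805, 795]
```

Features like these, alongside nonlinear indices, would form the input vectors fed to the SVM, KNN and RF classifiers.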


1964 ◽  
Vol 42 (12) ◽  
pp. 1605-1613 ◽  
Author(s):  
R. J. Moore ◽  
G. A. Mulligan

A third 5-year survey made in 1962 of Carduus acanthoides, C. nutans, and their hybrids in Grey Co., Ontario, revealed that a great decrease in these populations had occurred. C. acanthoides and hybrids similar to this species had survived better than C. nutans but very little spread of either species seemed to have occurred in 1957–1962. In experimental plots the hybrid has been made and backcrossed to the parental species. The species differ in chromosome number (C. acanthoides, 2n = 22; C. nutans, 2n = 16) and hybrids have intermediate numbers. Evidence was found from field and experimental studies that the progeny of the F1 hybrid included a greater proportion of seedlings with the higher chromosome numbers than with the lower and intermediate numbers. It is suggested that this selection may operate through the rejection of the longer chromosomes received from C. nutans, which, in certain zygotic combinations may constitute an excess of chromatin lethal to the zygote.


2010 ◽  
Vol 37-38 ◽  
pp. 116-121
Author(s):  
Yu Lan Li ◽  
Bo Li ◽  
Su Jun Luo

In facility layout decisions, the traditional design principle is to minimize material handling costs, but the objective of older models considers only loaded-trip costs, disregarding empty-vehicle trip costs, which does not match actual demand. In this paper, the unequal-sized unidirectional loop layout problem is analyzed and the facility layout model is improved: the objective of the new model is to minimize the total loaded and empty vehicle trip costs. To solve this model, a heuristic algorithm based on partheno-genetic algorithms is designed. Finally, an unequal-sized unidirectional loop layout problem with 12 devices is simulated. Comparison shows that the result obtained using the proposed model is 20.4% better than that obtained using the original model.
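The loaded-plus-empty objective can be sketched as follows. On a one-way loop the distance from a to b is (pos[b] - pos[a]) mod loop length, and an empty return leg covers the rest of the circuit. This is an illustrative cost model under those assumptions, not the paper's exact formulation:

```python
def loop_cost(positions, loop_len, trips):
    """Total loaded and empty vehicle travel on a unidirectional loop.

    positions: device -> location along the loop (same units as
    loop_len); trips: list of (src, dst, n_trips) material flows.
    Each loaded trip's empty deadhead back to src covers the
    remainder of the loop, so a loaded trip plus its return always
    costs one full circuit.
    """
    loaded = empty = 0.0
    for src, dst, n in trips:
        d = (positions[dst] - positions[src]) % loop_len
        loaded += n * d
        empty += n * (loop_len - d)   # deadhead back to src
    return loaded, empty

# three devices on a loop of length 10; two flows
pos = {"A": 0, "B": 4, "C": 7}
loaded, empty = loop_cost(pos, 10, [("A", "B", 2), ("C", "A", 1)])
```

The improved model in the paper minimizes the sum of both terms when choosing device positions, whereas the older models minimized only the loaded term.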

