Development of a Facial Expression Scale Using Farrowing as a Model of Pain in Sows

Animals ◽  
2020 ◽  
Vol 10 (11) ◽  
pp. 2113
Author(s):  
Elena Navarro ◽  
Eva Mainau ◽  
Xavier Manteca

Changes in facial expression have been shown to be a useful tool to assess pain severity in humans and animals, but facial scales have not yet been developed for all species. A facial expression scale for sows was developed using farrowing as a pain model. Five potential facial zones were identified: (i) Tension above eyes, (ii) Snout angle, (iii) Neck tension, (iv) Temporal tension and ear position, and (v) Cheek tension. Facial zones were examined through 263 images of a total of 21 sows at farrowing, characterizing moments of non-pain (19 days post-farrowing; score 0), moderate pain (time interval between the delivery of two consecutive piglets; score 1) and severe pain (during active piglet delivery; score 2). Images were evaluated by a “Silver Standard” observer with experience in sows’ facial expressions, and by a group of eight animal welfare scientists without such experience who received a one-hour training session on how to assess pain in sows’ faces. Intra- and inter-observer reliability ranged from moderate to very good for all facial expression zones, with Tension above eyes, Snout angle, and Neck tension showing the highest reliability. In conclusion, monitoring facial expressions seems to be a useful tool to assess pain caused by farrowing.

Author(s):  
M. Sultan Zia ◽  
Majid Hussain ◽  
M. Arfan Jaffar

Facial expression recognition is a crucial task in pattern recognition, and it becomes even more crucial when cross-cultural emotions are encountered. Various studies have shown that not all facial expressions are innate and universal; many of them are learned and culture-dependent. Existing facial expression recognition methods employ a dataset for training and later use it for testing, and demonstrate high recognition accuracy, but their performance degrades drastically when expression images are taken from different cultures. Moreover, many facial expression patterns cannot be generated and used as training data in a single training session. A facial expression recognition system can maintain high accuracy and robustness globally and over a longer period if it possesses the ability to learn incrementally. We propose a facial expression recognition system that can learn incrementally, using Local Binary Pattern (LBP) features to represent the expression space. We also propose a novel classification algorithm for multinomial classification problems; it is an efficient classifier and can be a good choice as a base classifier in real-time applications. The performance of the system is tested on static images from six different databases containing expressions from various cultures. The experiments using the incremental learning classification demonstrate promising results.
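The LBP descriptor mentioned above is a standard texture feature. As a rough illustration only (not the authors' code), a basic 3×3 LBP code and its 256-bin histogram, a common form of the expression feature vector, can be sketched in plain NumPy:

```python
import numpy as np

def lbp_3x3(image):
    """Compute basic 3x3 Local Binary Pattern codes for a grayscale image.

    Each interior pixel is compared with its 8 neighbours; a neighbour
    brighter than or equal to the centre contributes a 1-bit, giving an
    8-bit code per pixel.
    """
    img = np.asarray(image, dtype=np.float64)
    center = img[1:-1, 1:-1]
    # Neighbour offsets, clockwise from top-left, with their bit weights.
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    codes = np.zeros_like(center, dtype=np.uint8)
    for bit, (dy, dx) in enumerate(offsets):
        neighbour = img[1 + dy: img.shape[0] - 1 + dy,
                        1 + dx: img.shape[1] - 1 + dx]
        codes |= (neighbour >= center).astype(np.uint8) << bit
    return codes

def lbp_histogram(image):
    """256-bin normalised histogram of LBP codes: the expression feature."""
    codes = lbp_3x3(image)
    hist = np.bincount(codes.ravel(), minlength=256).astype(np.float64)
    return hist / hist.sum()
```

In practice LBP histograms are usually computed per face region and concatenated, so that the feature preserves some spatial layout of the expression.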


2020 ◽  
Vol 2 (1) ◽  
pp. 21-30
Author(s):  
Sri Mulyani Nurhayati ◽  
Siti Ulfah Nurjanah

The purpose of this study was to analyze the effect of Murottal Al-Qur'an therapy on decreasing labor pain intensity in the active phase of the first stage, in the Walnut Room of Pelni Hospital in Jakarta. The design was a descriptive case study. Before the administration of Murottal therapy, subject I had a pain scale of 7 (severe pain) and appeared anxious and tense, with intermittent pain and a grimacing facial expression, while subject II had a pain scale of 6 (moderate pain) and looked worried, with a wincing facial expression. After the Murottal therapy intervention, subject I had a pain scale of 6 (moderate pain). In conclusion, Murottal Al-Qur'an therapy decreases labor pain intensity. Keywords: Murottal Al-Qur'an, Pain, Childbirth


2021 ◽  
Vol 11 (9) ◽  
pp. 1207
Author(s):  
Frank Lobbezoo ◽  
Xuan Mai Lam ◽  
Savannah de la Mar ◽  
Liza J. M. van de Rijt ◽  
Miriam Kunz ◽  
...  

Background: Observational tools have been developed to assess pain in cognitively impaired individuals. It is not known, however, whether these tools are universal enough that even pain depicted in print art can be assessed reliably. Therefore, the aim of this study was to assess the reliability of scoring facial expressions of pain in dental print art from the 17th, 18th, and 19th centuries, using a Short Form of the 15-item Pain Assessment in Impaired Cognition (PAIC15-SF) tool. Methods: Seventeen prints of patients undergoing dental procedures were scored twice by two inexperienced observers and an expert, and once by a Gold Standard observer. Results: All observers achieved high intra-observer reliability for all four items of the category “facial expressions” and for three items of the category “body movements” (ICC: 0.748–0.991). The remaining two items of the category “body movements”, viz., “rubbing” and “restlessness”, were excluded from further research because it was not possible to calculate a reliable ICC. Overall, the intra-observer reliability of the expert was higher than that of the inexperienced observers. The inter-observer reliability scores varied from poor to excellent (ICC: 0.000–0.970). In comparison to the Gold Standard, the inter-observer reliability of the expert was higher than that of the inexperienced observers. Conclusion: The PAIC15-SF tool is universal enough even to allow reliable assessment of facial expressions of pain depicted in dental print art.
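The abstract reports ICC ranges but not the ICC model used. As an illustration only (a sketch, not the authors' analysis code), a two-way random-effects, absolute-agreement, single-rater ICC(2,1), a common choice for inter-observer reliability, can be computed from a subjects-by-raters score matrix:

```python
import numpy as np

def icc_2_1(ratings):
    """Two-way random-effects, absolute-agreement, single-rater ICC(2,1).

    `ratings` is an (n_subjects, k_raters) array. Mean squares follow the
    standard two-way ANOVA decomposition into subject, rater and residual
    components (Shrout & Fleiss convention).
    """
    x = np.asarray(ratings, dtype=np.float64)
    n, k = x.shape
    grand = x.mean()
    row_means = x.mean(axis=1)   # per-subject means
    col_means = x.mean(axis=0)   # per-rater means
    msr = k * np.sum((row_means - grand) ** 2) / (n - 1)   # subjects
    msc = n * np.sum((col_means - grand) ** 2) / (k - 1)   # raters
    sse = np.sum((x - row_means[:, None] - col_means[None, :] + grand) ** 2)
    mse = sse / ((n - 1) * (k - 1))                        # residual
    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)
```

Perfect agreement between raters yields an ICC of 1, and systematic disagreement pulls the value toward 0 or below.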


2020 ◽  
Author(s):  
Jonathan Yi ◽  
Philip Pärnamets ◽  
Andreas Olsson

Responding appropriately to others’ facial expressions is key to successful social functioning. Despite the large body of work on face perception and spontaneous responses to static faces, little is known about responses to faces in dynamic, naturalistic situations, and no study has investigated how goal-directed responses to faces are influenced by learning during dyadic interactions. To experimentally model such situations, we developed a novel method based on online integration of electromyography (EMG) signals from the participants’ face (corrugator supercilii and zygomaticus major) during facial expression exchange with dynamic faces displaying happy and angry facial expressions. Fifty-eight participants learned by trial and error to avoid receiving aversive stimulation by either reciprocating (congruent condition) or responding opposite (incongruent condition) to the expression of the target face. Our results validated our method, showing that participants learned to optimize their facial behavior, and replicated earlier findings of faster and more accurate responses in congruent vs. incongruent conditions. Moreover, participants performed better on trials when confronted with smiling, as compared to frowning, faces, suggesting it might be easier to adapt facial responses to positively associated expressions. Finally, we applied drift diffusion and reinforcement learning models to provide a mechanistic explanation for our findings, which helped clarify the underlying decision-making processes of our experimental manipulation. Our results introduce a new method to study learning and decision-making in facial expression exchange, in which there is a need to gradually adapt facial expression selection to both social and non-social reinforcements.
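The abstract names reinforcement learning models without specifying them. A minimal Rescorla-Wagner learner with softmax choice, a typical baseline for this kind of avoidance-learning task, might look like the following sketch; all parameter values and names are illustrative assumptions, not the authors' model:

```python
import math
import random

def softmax_choice(q_values, beta=3.0):
    """Pick an action with probability proportional to exp(beta * Q)."""
    weights = [math.exp(beta * q) for q in q_values]
    r = random.random() * sum(weights)
    for action, w in enumerate(weights):
        r -= w
        if r <= 0:
            return action
    return len(weights) - 1

def simulate_learner(avoid_prob, n_trials=200, alpha=0.2, beta=3.0):
    """Rescorla-Wagner learner choosing between two facial responses.

    `avoid_prob[a]` is the probability that action `a` avoids the aversive
    stimulus (coded as reward 1) on a given trial.
    """
    q = [0.0, 0.0]
    choices = []
    for _ in range(n_trials):
        a = softmax_choice(q, beta)
        r = 1.0 if random.random() < avoid_prob[a] else 0.0
        q[a] += alpha * (r - q[a])   # prediction-error update
        choices.append(a)
    return q, choices
```

With one response reliably avoiding stimulation, the learner's Q-value for that response grows and its choices converge on it, mirroring the trial-and-error optimization described above.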


2020 ◽  
Author(s):  
Joshua W Maxwell ◽  
Eric Ruthruff ◽  
Michael Joseph

Are facial expressions of emotion processed automatically? Some authors have not found this to be the case (Tomasik et al., 2009). Here we revisited the question with a novel experimental logic – the backward correspondence effect (BCE). In three dual-task studies, participants first categorized a sound (Task 1) and then indicated the location of a target face (Task 2). In Experiment 1, Task 2 required participants to search for one facial expression of emotion (angry or happy). We observed positive BCEs, indicating that facial expressions of emotion bypassed the central attentional bottleneck and thus were processed in a capacity-free, automatic manner. In Experiment 2, we replicated this effect but found that morphed emotional expressions (which were used by Tomasik) were not processed automatically. In Experiment 3, we observed similar BCEs for another type of face processing previously shown to be capacity-free – identification of familiar faces (Jung et al., 2013). We conclude that facial expressions of emotion are identified automatically when sufficiently unambiguous.


2014 ◽  
Vol 4 (1) ◽  
Author(s):  
Maria Frödin ◽  
Margareta Warrén Stomberg

Pain management is an integral challenge in nursing and includes the responsibility of managing patients’ pain, evaluating pain therapy and ensuring the quality of care. The aims of this study were to explore patients’ experiences of pain after lung surgery and evaluate their satisfaction with the postoperative pain management. A descriptive design was used with 51 participants undergoing lung surgery. The incidence of moderate postoperative pain varied from 36–58% among the participants, and of severe pain from 11–26%, during their hospital stay. Thirty-nine percent had more pain than expected. After three months, 20% experienced moderate pain and 4% experienced severe pain, while after six months, 16% experienced moderate pain. The desired quality of care goal was not fully achieved. We conclude that a large number of patients experienced moderate and severe postoperative pain and more than one third had more pain than expected. However, 88% were satisfied with the pain management. The findings confirm the severity of pain experienced after lung surgery and underline the apparent need for continued improvement of postoperative pain management following this procedure.


2021 ◽  
pp. 174702182199299
Author(s):  
Mohamad El Haj ◽  
Emin Altintas ◽  
Ahmed A Moustafa ◽  
Abdel Halim Boudoukha

Future thinking, which is the ability to project oneself forward in time to pre-experience an event, is intimately associated with emotions. We investigated whether emotional future thinking can activate emotional facial expressions. We invited 43 participants to imagine future scenarios, cued by the words “happy,” “sad,” and “city.” Future thinking was video recorded and analysed with facial analysis software to classify whether participants’ facial expressions (i.e., happy, sad, angry, surprised, scared, disgusted, and neutral) were neutral or emotional. Analysis demonstrated higher levels of happy facial expressions during future thinking cued by the word “happy” than by “sad” or “city.” In contrast, higher levels of sad facial expressions were observed during future thinking cued by the word “sad” than by “happy” or “city.” Higher levels of neutral facial expressions were observed during future thinking cued by the word “city” than by “happy” or “sad.” In all three conditions, levels of neutral facial expressions were high compared with happy and sad facial expressions. Taken together, emotional future thinking, at least for future scenarios cued by “happy” and “sad,” seems to trigger the corresponding facial expression. Our study provides an original physiological window into the subjective emotional experience during future thinking.


2021 ◽  
Vol 21 (1) ◽  
Author(s):  
Sooyoung Cho ◽  
Youn Jin Kim ◽  
Minjin Lee ◽  
Jae Hee Woo ◽  
Hyun Jung Lee

Abstract Background Pain assessment and management are important in postoperative circumstances, as overdosing of opioids can induce respiratory depression and critical consequences. This study aimed to assess the reliability of commonly used pain scales in a postoperative setting among Korean adults, and to determine cut-off points between mild and moderate pain and between moderate and severe pain, which can help guide decisions on pain medication. Methods A total of 180 adult patients undergoing elective non-cardiac surgery were included. Postoperative pain intensity was rated with a visual analog scale (VAS), numeric rating scale (NRS), faces pain scale revised (FPS-R), and verbal rating scale (VRS). The VRS rated pain according to four grades: none, mild, moderate, and severe. Pain assessments were performed twice: when the patients were alert enough to communicate after arrival at the postoperative care unit (PACU), and 30 min after arrival at the PACU. The levels of agreement among the scores were evaluated using intraclass correlation coefficients (ICCs). The cut-off points were determined by receiver operating characteristic curves. Results The ICCs among the VAS, NRS, and FPS-R were consistently high (0.839–0.945). The pain categories were as follows: mild ≤ 5.3, moderate 5.4–7.1, severe ≥ 7.2 on the VAS; mild ≤ 5, moderate 6–7, severe ≥ 8 on the NRS; and mild ≤ 4, moderate 6, severe 8 and 10 on the FPS-R. The cut-off points for analgesics request were VAS ≥ 5.5, NRS ≥ 6, FPS-R ≥ 6, and VRS ≥ 2 (moderate or severe pain). Conclusions During the immediate postoperative period, the VAS, NRS, and FPS-R were well correlated. The boundary between mild and moderate pain was around five on the 10-point scales, and it corresponded to the cut-off point for analgesic request. Healthcare providers should consider the VRS and other patient-specific signs to avoid undertreatment of pain or overdosing of pain medication.
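The reported NRS boundaries translate directly into a small triage helper. The cut-offs below (mild ≤ 5, moderate 6–7, severe ≥ 8; analgesics at NRS ≥ 6) are taken from the abstract, while the function names are illustrative:

```python
def nrs_category(score):
    """Classify an 11-point NRS score using the study's reported
    boundaries: mild <= 5, moderate 6-7, severe >= 8."""
    if not 0 <= score <= 10:
        raise ValueError("NRS score must be between 0 and 10")
    if score <= 5:
        return "mild"
    if score <= 7:
        return "moderate"
    return "severe"

def needs_analgesics(score):
    """The reported cut-off for analgesic request was NRS >= 6,
    i.e. moderate pain or worse."""
    return score >= 6
```

Note that the category boundary and the analgesic cut-off coincide, which is the study's central practical finding.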


2021 ◽  
Vol 11 (4) ◽  
pp. 1428
Author(s):  
Haopeng Wu ◽  
Zhiying Lu ◽  
Jianfeng Zhang ◽  
Xin Li ◽  
Mingyue Zhao ◽  
...  

This paper addresses the problem of Facial Expression Recognition (FER), focusing on unobvious facial movements. Traditional methods often cause overfitting problems or incomplete information due to insufficient data and manual selection of features. Instead, our proposed network, called the Multi-features Cooperative Deep Convolutional Network (MC-DCN), maintains focus on both the overall feature of the face and the trend of key parts. The processing of video data is the first stage. The method of ensemble of regression trees (ERT) is used to obtain the overall contour of the face. Then, the attention model is used to pick out the parts of the face that are more susceptible to expressions. Under the combined effect of these two methods, an image which can be called a local feature map is obtained. After that, the video data are sent to MC-DCN, which contains parallel sub-networks. While the overall spatiotemporal characteristics of facial expressions are obtained from the sequence of images, the selection of key parts can better capture the changes in facial expressions brought about by subtle facial movements. By combining local features and global features, the proposed method can acquire more information, leading to better performance. The experimental results show that MC-DCN can achieve recognition rates of 95%, 78.6% and 78.3% on the three datasets SAVEE, MMI, and edited GEMEP, respectively.
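The local/global fusion idea above can be shown schematically. This NumPy sketch is not the MC-DCN itself; the function, its inputs, and the softmax weighting are all illustrative assumptions about how attention-weighted part descriptors might be combined with a whole-face descriptor:

```python
import numpy as np

def fuse_features(global_feat, local_feats, attention_logits):
    """Combine a global face descriptor with attention-weighted local
    (key-part) descriptors, as in a generic local/global fusion scheme.

    global_feat      : (d,) descriptor of the whole face
    local_feats      : (k, d) descriptors of k candidate face parts
    attention_logits : (k,) unnormalised relevance scores for the parts
    """
    logits = np.asarray(attention_logits, dtype=np.float64)
    weights = np.exp(logits - logits.max())
    weights /= weights.sum()                 # softmax over the k parts
    local_summary = weights @ np.asarray(local_feats, dtype=np.float64)
    return np.concatenate([np.asarray(global_feat, dtype=np.float64),
                           local_summary])
```

The fused vector would then feed a classifier; the attention weights let subtle but expression-relevant parts dominate the local summary.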


Sensors ◽  
2021 ◽  
Vol 21 (6) ◽  
pp. 2003 ◽  
Author(s):  
Xiaoliang Zhu ◽  
Shihao Ye ◽  
Liang Zhao ◽  
Zhicheng Dai

The AFEW (Acted Facial Expressions in the Wild) dataset, a sub-challenge of EmotiW (the Emotion Recognition in the Wild challenge), is a popular benchmark for emotion recognition tasks under various real-world constraints, including uneven illumination, head deflection, and varying facial posture. In this paper, we propose a convenient facial expression recognition cascade network comprising spatial feature extraction, hybrid attention, and temporal feature extraction. First, in a video sequence, faces in each frame are detected, and the corresponding face ROI (region of interest) is extracted to obtain the face images. Then, the face images in each frame are aligned based on the position information of the facial feature points in the images. Second, the aligned face images are input to a residual neural network to extract the spatial features of facial expressions corresponding to the face images. The spatial features are input to the hybrid attention module to obtain the fusion features of facial expressions. Finally, the fusion features are input to a gated recurrent unit to extract the temporal features of facial expressions. The temporal features are input to the fully connected layer to classify and recognize facial expressions. Experiments using the CK+ (the extended Cohn-Kanade), Oulu-CASIA (Institute of Automation, Chinese Academy of Sciences) and AFEW datasets obtained recognition accuracy rates of 98.46%, 87.31%, and 53.44%, respectively. This demonstrates that the proposed method not only achieves performance competitive with state-of-the-art methods but also yields a greater than 2% improvement on the AFEW dataset, underlining its strength for facial expression recognition in natural environments.

