Facial Paralysis Detection on Images Using Key Point Analysis

2021 ◽  
Vol 11 (5) ◽  
pp. 2435
Author(s):  
Gemma S. Parra-Dominguez ◽  
Raul E. Sanchez-Yanez ◽  
Carlos H. Garcia-Capulin

The inability to move the muscles of the face on one or both sides is known as facial paralysis, which may affect the ability of the patient to speak, blink, swallow saliva, eat, or communicate through natural facial expressions. The well-being of the patient could also be negatively affected. Computer-based systems to detect facial paralysis are important in the development of standardized tools for medical assessment, treatment, and monitoring; additionally, they are expected to provide user-friendly tools for patient monitoring at home. In this work, a methodology to detect facial paralysis in a face photograph is proposed. A system consisting of three modules—facial landmark extraction, facial measure computation, and facial paralysis classification—was designed. Our facial measures aim to identify asymmetry levels within the face elements using facial landmarks, and a binary classifier based on a multi-layer perceptron approach provides an output label. The Weka suite was selected to design the classifier and implement the learning algorithm. Tests on publicly available databases reveal outstanding classification results on images, showing that the methodology used to design our binary classifier can be extended to other databases with great results, even when the participants do not execute similar facial expressions.
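
To make the three-module pipeline concrete, here is a minimal sketch, assuming a 68-point facial landmark detector upstream and using scikit-learn's MLPClassifier as a stand-in for the Weka multi-layer perceptron; the landmark indices, the asymmetry measures, and the synthetic data are illustrative, not the paper's exact design.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier  # stand-in for the Weka MLP

def asymmetry_features(pts):
    """pts: (68, 2) array of facial landmarks in image coordinates."""
    # Midline estimated from the nose-bridge points (27-30 in the 68-pt scheme).
    midline_x = pts[27:31, 0].mean()
    # Typical asymmetry cues: outer/inner eye corners, brow ends, mouth corners.
    pairs = [(36, 45), (39, 42), (17, 26), (48, 54)]  # (left, right) indices
    feats = []
    for l, r in pairs:
        mirrored_lx = 2 * midline_x - pts[l, 0]       # reflect left point
        feats.append(abs(mirrored_lx - pts[r, 0]))    # horizontal mismatch
        feats.append(abs(pts[l, 1] - pts[r, 1]))      # vertical mismatch
    return np.array(feats)

rng = np.random.default_rng(0)
landmark_sets = rng.uniform(0, 200, size=(40, 68, 2))  # stand-in landmark data
labels = rng.integers(0, 2, size=40)                   # 1 = palsy, 0 = healthy
X = np.vstack([asymmetry_features(p) for p in landmark_sets])
clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
clf.fit(X, labels)
print(clf.predict(X[:5]))
```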

Author(s):  
Julius Yong Wu Jien ◽  
Aslina Baharum ◽  
Shaliza Hayati A. Wahab ◽  
Nordin Saad ◽  
Muhammad Omar ◽  
...  

Face recognition is the use of biometric technologies that can identify or verify a person by detecting and analysing patterns based on the shape of the individual's face. Face recognition is used largely for security purposes, although interest in other areas of application is growing. Overall, face recognition technologies are worth considering because they have potential for broad application in law enforcement and in various business settings, and they are already widely used in many domains. Facial recognition software works by processing facial geometry: the distance between the eyes and the distance from forehead to chin are key variables. The software identifies the facial landmarks that are important for distinguishing a face and uses them to create a facial signature. Against this background, this study gives an overview of age detection using different combinations of machine learning and image processing methods on an image dataset.
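
As an illustration of the geometry-based features described above, the following sketch turns the kinds of distances named in the abstract (between the eyes, along the length of the face) into scale-invariant ratios and trains an age-group classifier; the landmark indices, the random-forest model, and the synthetic data are assumptions, not a method taken from the study.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def geometry_features(pts):
    """pts: (68, 2) landmarks; returns scale-invariant distance ratios."""
    eye_dist = np.linalg.norm(pts[36] - pts[45])   # outer eye corners
    face_len = np.linalg.norm(pts[27] - pts[8])    # nose bridge to chin
    jaw_width = np.linalg.norm(pts[0] - pts[16])   # jawline extremes
    # Ratios remove dependence on image resolution and face size.
    return np.array([eye_dist / face_len, jaw_width / face_len])

rng = np.random.default_rng(1)
pts_batch = rng.uniform(0, 200, size=(60, 68, 2))  # stand-in landmark data
ages = rng.integers(0, 3, size=60)                 # 0=child, 1=adult, 2=senior
X = np.vstack([geometry_features(p) for p in pts_batch])
model = RandomForestClassifier(n_estimators=100, random_state=1).fit(X, ages)
print(model.predict(X[:5]))
```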


2018 ◽  
Vol 7 (4) ◽  
pp. 2325
Author(s):  
Banita . ◽  
Dr Poonam Tanwar

Face recognition is of great interest to researchers in terms of image processing and computer graphics. In recent years, various factors that clearly affect the face model have become popular, namely ageing, universal facial expressions, and muscle movement. Similarly, in medical terminology, facial paralysis can be peripheral or central depending on the level of the motor neuron lesion, which can be below the nucleus of the nerve or supranuclear. The various medical therapies used for facial paralysis are electroacupuncture, electrotherapy, laser acupuncture, and manual acupuncture, which is a traditional form of acupuncture. Imaging plays a great role in evaluating the degree of paralysis and also in face recognition. There is wide research on facial expressions and facial recognition, but limited research work is available on facial paralysis. The House-Brackmann grading system is one of the simplest and easiest methods to evaluate the degree of facial paralysis. During evaluation, common facial expressions are recorded and further evaluated by considering the focal points of the left or right side of the face. This paper presents the classification of face recognition and the respective fuzzy rules used to remove uncertainty in the result after evaluation of facial paralysis.
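
Since the paper's actual fuzzy rules are not reproduced here, the following is a hedged sketch of how fuzzy membership functions could map a normalized asymmetry score onto House-Brackmann grades (I = normal through VI = total paralysis); the triangular memberships, their breakpoints, and the single-input design are illustrative only.

```python
import numpy as np

def triangular(x, a, b, c):
    """Triangular membership function peaking at b, zero outside [a, c]."""
    return max(min((x - a) / (b - a + 1e-9), (c - x) / (c - b + 1e-9)), 0.0)

# One fuzzy set per grade over a normalized asymmetry score in [0, 1].
GRADES = ["I", "II", "III", "IV", "V", "VI"]
CENTERS = [0.0, 0.2, 0.4, 0.6, 0.8, 1.0]

def hb_grade(asymmetry):
    memberships = [triangular(asymmetry, c - 0.2, c, c + 0.2) for c in CENTERS]
    # Rule base: "IF asymmetry IS set_k THEN grade IS grade_k";
    # defuzzify by taking the grade with the highest membership.
    return GRADES[int(np.argmax(memberships))], memberships

grade, mu = hb_grade(0.47)
print(grade, [round(m, 2) for m in mu])  # grade III near mid-range asymmetry
```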


2017 ◽  
Author(s):  
Jennifer S Mascaro ◽  
Sean Kelley ◽  
Alana Darcher ◽  
Lobsang Negi ◽  
Carol Worthman ◽  
...  

Increasing data suggest that for medical school students the stress of academic and psychological demands can impair social emotions that are a core aspect of compassion and ultimately physician competence. Few interventions have proven successful for enhancing physician compassion in ways that persist in the face of suffering and that enable sustained caretaker well-being. To address this issue, the current study was designed to (1) investigate the feasibility of cognitively-based compassion training (CBCT) for second-year medical students, and (2) test whether CBCT decreases depression, enhances compassion, and improves daily functioning in medical students. Compared to the wait-list group, students randomized to CBCT reported increased compassion, and decreased loneliness and depression. Changes in compassion were most robust in individuals reporting high levels of depression at baseline, suggesting that CBCT may benefit those most in need by breaking the link between personal suffering and a concomitant drop in compassion.


Author(s):  
Lion D. Comfort ◽  
Marian C. Neidert ◽  
Oliver Bozinov ◽  
Luca Regli ◽  
Martin N. Stienen

Abstract Background Complications after neurosurgical operations can have a severe impact on patient well-being, which is poorly reflected by current grading systems. The objective of this work was to develop and conduct a feasibility study of a new smartphone application that allows for the longitudinal assessment of postoperative well-being and complications. Methods We developed a smartphone application, “Post OP Tracker,” according to requirements from clinical experience and tested it on simulated patients. Participants received regular notifications through the app, asking them about their well-being and complications, which they had to answer according to their assigned scenarios. After a 12-week period, subjects answered a questionnaire about the app’s functionality, user-friendliness, and acceptability. Results A total of 13 participants (mean age 34.8, range 24–68 years, 4 (30.8%) female) volunteered in this feasibility study. Most of them had a professional background in either health care or software development. All participants downloaded, installed, and used the app for an average of 12.9 weeks. On a scale of 1 (worst) to 4 (best), the app was rated on average 3.6 in overall satisfaction and 3.8 in acceptance. The design achieved a somewhat favorable score of 3.1. One participant (7.7%) reported major technical issues. The gathered patient data can be used to graphically display the simulated outcome and assess the impact of postoperative complications. Conclusions This study suggests that it is feasible to longitudinally gather postoperative data on subjective well-being through a smartphone application. Among potential patients, our application proved to be functional, user-friendly, and well accepted. Using this app-based approach, further studies will enable us to classify postoperative complications according to their impact on the patient’s well-being.
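
The internals of Post OP Tracker are not published in this abstract, so the following is a minimal sketch of the kind of longitudinal record such an app might collect and how the gathered data could be aggregated for a graphical outcome display; the field names, scale, and sample entries are hypothetical.

```python
from dataclasses import dataclass
from datetime import date
from collections import defaultdict
from statistics import mean
from typing import Optional

@dataclass
class WellbeingEntry:
    day: date
    wellbeing: int                 # self-rated well-being score for that day
    complication: Optional[str]    # free-text complication report, if any

def weekly_means(entries):
    """Group entries by ISO year/week and average the well-being score."""
    weeks = defaultdict(list)
    for e in entries:
        weeks[e.day.isocalendar()[:2]].append(e.wellbeing)
    return {wk: mean(v) for wk, v in sorted(weeks.items())}

log = [
    WellbeingEntry(date(2021, 3, 1), 4, "wound pain"),
    WellbeingEntry(date(2021, 3, 3), 6, None),
    WellbeingEntry(date(2021, 3, 9), 8, None),
]
print(weekly_means(log))           # week-by-week well-being trend
```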


Perception ◽  
2021 ◽  
pp. 030100662110270
Author(s):  
Kennon M. Sheldon ◽  
Ryan Goffredi ◽  
Mike Corcoran

Facial expressions of emotion have important communicative functions. It is likely that mask-wearing during pandemics disrupts these functions, especially for expressions defined by activity in the lower half of the face. We tested this by asking participants to rate both Duchenne smiles (DSs; defined by the mouth and eyes) and non-Duchenne or “social” smiles (SSs; defined by the mouth alone), within masked and unmasked target faces. As hypothesized, masked SSs were rated much lower in “a pleasant social smile” and much higher in “a merely neutral expression,” compared with unmasked SSs. Essentially, masked SSs became nonsmiles. Masked DSs were still rated as very happy and pleasant, although significantly less so than unmasked DSs. Masked DSs and SSs were both rated as displaying more disgust than the unmasked versions.


2021 ◽  
pp. 003329412110184
Author(s):  
Paola Surcinelli ◽  
Federica Andrei ◽  
Ornella Montebarocci ◽  
Silvana Grandi

Aim of the research The literature on emotion recognition from facial expressions shows significant differences in recognition ability depending on the proposed stimulus. Indeed, affective information is not distributed uniformly across the face, and recent studies have shown the importance of the mouth and eye regions for correct recognition. However, previous studies mainly used facial expressions presented frontally, and those that used facial expressions in profile view employed a between-subjects design or children's faces as stimuli. The present research aims to investigate differences in emotion recognition between faces presented in frontal and in profile views by using a within-subjects experimental design. Method The sample comprised 132 Italian university students (88 female; mean age = 24.27 years, SD = 5.89). Face stimuli displayed both frontally and in profile were selected from the KDEF set. Two emotion-specific recognition accuracy scores, viz., frontal and in profile, were computed from the average of correct responses for each emotional expression. In addition, viewing times and response times (RT) were recorded. Results Frontally presented facial expressions of fear, anger, and sadness were recognized significantly better than facial expressions of the same emotions in profile, while no differences were found in the recognition of the other emotions. Longer viewing times were also found when faces expressing fear and anger were presented in profile. In the present study, an impairment in recognition accuracy was observed only for those emotions which rely mostly on the eye regions.
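
As a concrete reading of the scoring procedure, the sketch below computes emotion-specific recognition accuracy scores by averaging correct responses per emotion and view; the trial tuples are invented for illustration.

```python
from collections import defaultdict

# One (emotion, view, correct) tuple per trial; 1 = correct, 0 = incorrect.
trials = [
    ("fear", "frontal", 1), ("fear", "profile", 0),
    ("anger", "frontal", 1), ("anger", "profile", 1),
    ("fear", "frontal", 1), ("fear", "profile", 0),
]

totals = defaultdict(lambda: [0, 0])   # (emotion, view) -> [n_correct, n_trials]
for emotion, view, correct in trials:
    totals[(emotion, view)][0] += correct
    totals[(emotion, view)][1] += 1

for key, (n_correct, n) in sorted(totals.items()):
    print(key, round(n_correct / n, 2))   # emotion-specific accuracy score
```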


2021 ◽  
pp. 174702182199299
Author(s):  
Mohamad El Haj ◽  
Emin Altintas ◽  
Ahmed A Moustafa ◽  
Abdel Halim Boudoukha

Future thinking, which is the ability to project oneself forward in time to pre-experience an event, is intimately associated with emotions. We investigated whether emotional future thinking can activate emotional facial expressions. We invited 43 participants to imagine future scenarios, cued by the words “happy,” “sad,” and “city.” Future thinking was video recorded and analysed with facial analysis software to classify whether the facial expressions of participants (i.e., happy, sad, angry, surprised, scared, disgusted, or neutral) were neutral or emotional. The analysis demonstrated higher levels of happy facial expressions during future thinking cued by the word “happy” than by “sad” or “city.” In contrast, higher levels of sad facial expressions were observed during future thinking cued by the word “sad” than by “happy” or “city.” Higher levels of neutral facial expressions were observed during future thinking cued by the word “city” than by “happy” or “sad.” In all three conditions, levels of neutral facial expressions were high compared with happy and sad facial expressions. Together, emotional future thinking, at least for future scenarios cued by “happy” and “sad,” seems to trigger the corresponding facial expression. Our study provides an original physiological window into the subjective emotional experience during future thinking.
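
The facial analysis software used is not named in the abstract, so the sketch below only illustrates the aggregation step: given per-frame expression labels from any such tool, it computes the proportion of each expression within one cue condition; the frame labels are invented.

```python
from collections import Counter

def expression_profile(frame_labels):
    """Fraction of frames per expression category."""
    counts = Counter(frame_labels)
    total = len(frame_labels)
    return {expr: n / total for expr, n in counts.items()}

# Invented frame labels for the "happy" cue condition.
frames_happy_cue = ["neutral"] * 70 + ["happy"] * 25 + ["sad"] * 5
print(expression_profile(frames_happy_cue))
# Mirrors the reported pattern: neutral dominates overall, but "happy"
# rises under the "happy" cue relative to the other conditions.
```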


Author(s):  
Yannick van Hierden ◽  
Timo Dietrich ◽  
Sharyn Rundle-Thiele

In recent years, the relevance of eHealth interventions has become increasingly evident. However, a sequential, procedural approach to cocreating eHealth interventions is currently lacking. This paper demonstrates the implementation of a participatory design (PD) process to inform the design of an eHealth intervention aiming to enhance well-being. PD sessions were conducted with 57 people across four sessions. Within the PD sessions, participants experienced prototype activities, provided feedback, and designed program interventions. A 5-week eHealth well-being intervention focusing on lifestyle, habits, physical activity, and meditation was proposed. The program is proposed to be delivered through online workshops and online community interaction. A five-step PD process emerged, namely: (1) collecting best practices, (2) participatory discovery, (3) initial proof-of-concept, (4) participatory prototyping, and (5) pilot intervention proof-of-concept finalisation. Health professionals, behaviour change practitioners, and program planners can adopt this process to ensure end-user cocreation using the five-step process. The five-step PD process may help to create user-friendly programs.


Animals ◽  
2021 ◽  
Vol 11 (2) ◽  
pp. 442
Author(s):  
Meiqing Wang ◽  
Ali Youssef ◽  
Mona Larsen ◽  
Jean-Loup Rault ◽  
Daniel Berckmans ◽  
...  

Heart rate (HR) is a vital bio-signal that is relatively easy to monitor with contact sensors and is related to a living organism’s state of health, stress and well-being. The objective of this study was to develop an algorithm to extract the HR (in beats per minute) of an anesthetized and a resting pig from raw video data as a first step towards continuous monitoring of the health and welfare of pigs. Data were obtained from two experiments, wherein the pigs were video recorded whilst wearing an electrocardiography (ECG) monitoring system as the gold standard (GS). To develop the algorithm, this study used a bandpass filter to remove noise. Then, a short-time Fourier transform (STFT) method was tested by evaluating different window sizes and window functions to accurately identify the HR. The resulting algorithm was first tested on videos of an anesthetized pig that maintained a relatively constant HR. The GS HR measurements for the anesthetized pig had a mean value of 71.76 bpm and a standard deviation (SD) of 3.57 bpm. The developed algorithm achieved a mean absolute error (MAE) of 2.33 bpm, a root mean square error (RMSE) of 3.09 bpm, and 67% of HR estimates with an error below 3.5 bpm (PE3.5). The sensitivity of the algorithm was then tested on the video of a non-anaesthetized resting pig, as an animal in this state has more fluctuations in HR than an anaesthetized pig, while motion artefacts are still minimized due to resting. The GS HR measurements for the resting pig had a mean value of 161.43 bpm and an SD of 10.11 bpm. The video-extracted HR showed a performance of 4.69 bpm in MAE, 6.43 bpm in RMSE, and 57% in PE3.5. The results showed that HR monitoring using only the green channel of the video signal was better than using all three color channels, which reduces computing complexity. By comparing different regions of interest (ROI), the region around the abdomen was found to be physiologically better suited than the face and front-leg regions. In summary, the developed algorithm based on video data has the potential to be used for contactless HR measurement and may be applied to resting pigs for real-time monitoring of their health and welfare status, which is of significant interest to veterinarians and farmers.
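
The processing chain described above (green-channel ROI signal, bandpass filter, STFT, dominant frequency) can be sketched as follows; the filter band, window size, and window function are assumptions standing in for the values the authors tuned, and the input is a synthetic 72-bpm pulse signal rather than real video data.

```python
import numpy as np
from scipy.signal import butter, filtfilt, stft

fs = 30.0                                    # camera frame rate (Hz), assumed
t = np.arange(0, 60, 1 / fs)                 # 60 s of "video"
# Stand-in for the per-frame mean green-channel intensity of the ROI:
green = 0.5 * np.sin(2 * np.pi * 1.2 * t)    # 1.2 Hz = 72 bpm pulse
green += 0.1 * np.random.default_rng(2).standard_normal(t.size)

# Bandpass 0.7-4 Hz (42-240 bpm) keeps plausible HR and rejects drift.
b, a = butter(3, [0.7 / (fs / 2), 4.0 / (fs / 2)], btype="band")
filtered = filtfilt(b, a, green)

# STFT with a 10-s Hann window; the dominant bin per segment gives the HR.
f, seg_times, Z = stft(filtered, fs=fs, window="hann", nperseg=int(10 * fs))
hr_bpm = f[np.argmax(np.abs(Z), axis=0)] * 60
print(np.round(hr_bpm))                      # ~72 bpm in each segment
```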

