facial expressivity
Recently Published Documents


TOTAL DOCUMENTS

43
(FIVE YEARS 1)

H-INDEX

13
(FIVE YEARS 0)

Author(s):  
Adrianna M. Ratajska ◽  
Anne N. Nisenzon ◽  
Francesca V. Lopez ◽  
Alexandra L. Clark ◽  
Didem Gokcay ◽  
...  

2020 ◽  
Author(s):  
Isaac Galatzer-Levy ◽  
Anzar Abbas ◽  
Vidya Koesmahargyo ◽  
Vijay Yadav ◽  
Mercedes Perez-Rodriguez ◽  
...  

BACKGROUND: Machine learning-based facial and vocal measurements have demonstrated relationships with schizophrenia diagnosis and severity. Here, we determine their accuracy when acquired through automated assessments conducted remotely through smartphones. Demonstrating the utility and validity of remote, automated assessments conducted outside of controlled experimental settings can facilitate scaling such measurement tools to aid in risk assessment and tracking of treatment response in difficult-to-engage populations.

OBJECTIVE: We aim to assess the accuracy of these facial and vocal markers acquired through remote assessments and compare them with traditional clinical assessments of schizophrenia severity.

METHODS: Measurements of facial and vocal characteristics, including facial expressivity, vocal acoustics, and speech prevalence, were assessed in 20 schizophrenia patients over the course of 2 weeks in response to two classes of prompts previously utilized in experimental laboratory assessments: evoked prompts, where subjects are guided to produce specific facial expressions and phonations, and spontaneous prompts, where subjects are presented with emotionally evocative imagery and asked to respond freely. Facial and vocal measurements were assessed in relation to schizophrenia symptom severity using the Positive and Negative Syndrome Scale (PANSS).

RESULTS: Vocal markers, including speech prevalence, vocal jitter, fundamental frequency, and vocal intensity, demonstrated specificity as markers of negative symptom severity, while measurement of facial expressivity proved a robust marker of overall schizophrenia severity.

CONCLUSIONS: Established facial and vocal measurements, collected remotely from schizophrenia patients via smartphones in response to automated task prompts, demonstrated accuracy as markers of schizophrenia severity. Clinical implications are discussed.
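The abstract above names speech prevalence, vocal jitter, fundamental frequency, and vocal intensity as its vocal markers. Below is a minimal sketch, not the authors' pipeline, of how such markers could be derived from a single smartphone audio recording with librosa; the voicing-based speech-prevalence definition and the frame-level jitter proxy are illustrative assumptions.

```python
# Illustrative vocal-marker extraction from one audio file (assumes librosa installed).
import numpy as np
import librosa

def vocal_markers(audio_path: str) -> dict:
    y, sr = librosa.load(audio_path, sr=16000)

    # Frame-wise fundamental frequency (F0) via probabilistic YIN.
    f0, voiced_flag, _ = librosa.pyin(
        y, fmin=librosa.note_to_hz("C2"), fmax=librosa.note_to_hz("C6"), sr=sr
    )

    # Speech prevalence: proportion of frames judged voiced (an assumed definition).
    speech_prevalence = float(np.mean(voiced_flag))

    # Mean F0 over voiced frames only.
    voiced_f0 = f0[voiced_flag & ~np.isnan(f0)]
    mean_f0 = float(np.mean(voiced_f0)) if voiced_f0.size else float("nan")

    # Rough jitter proxy: mean absolute change in period between successive voiced
    # frames relative to the mean period (true jitter is cycle-to-cycle, not frame-wise).
    if voiced_f0.size > 1:
        periods = 1.0 / voiced_f0
        jitter = float(np.mean(np.abs(np.diff(periods))) / np.mean(periods))
    else:
        jitter = float("nan")

    # Vocal intensity: mean RMS energy expressed in decibels.
    rms = librosa.feature.rms(y=y)[0]
    intensity_db = float(np.mean(librosa.amplitude_to_db(rms, ref=1.0)))

    return {
        "speech_prevalence": speech_prevalence,
        "mean_f0_hz": mean_f0,
        "jitter_proxy": jitter,
        "intensity_db": intensity_db,
    }
```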


2020 ◽  
Author(s):  
Isaac R. Galatzer-Levy ◽  
Anzar Abbas ◽  
Vidya Koesmahargyo ◽  
Vijay Yadav ◽  
M. Mercedes Perez-Rodriguez ◽  
...  

Abstract

Background: Machine learning-based facial and vocal measurements have demonstrated relationships with schizophrenia diagnosis and severity. Here, we determine their accuracy when acquired through automated assessments conducted remotely through smartphones. Demonstrating the utility and validity of remote, automated assessments conducted outside of controlled experimental settings can facilitate scaling such measurement tools to aid in risk assessment and tracking of treatment response in difficult-to-engage populations.

Methods: Measurements of facial and vocal characteristics, including facial expressivity, vocal acoustics, and speech prevalence, were assessed in 20 schizophrenia patients over the course of 2 weeks in response to two classes of prompts previously utilized in experimental laboratory assessments: evoked prompts, where subjects are guided to produce specific facial expressions and phonations, and spontaneous prompts, where subjects are presented with emotionally evocative imagery and asked to respond freely. Facial and vocal measurements were assessed in relation to schizophrenia symptom severity using the Positive and Negative Syndrome Scale (PANSS).

Results: Vocal markers, including speech prevalence, vocal jitter, fundamental frequency, and vocal intensity, demonstrated specificity as markers of negative symptom severity, while measurement of facial expressivity proved a robust marker of overall schizophrenia severity.

Conclusion: Established facial and vocal measurements, collected remotely from schizophrenia patients via smartphones in response to automated task prompts, demonstrated accuracy as markers of schizophrenia severity. Clinical implications are discussed.
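As a companion to the abstract above, here is a hedged sketch of how per-patient digital markers might be related to PANSS scores; the DataFrame, its values, and the choice of Spearman correlation are illustrative assumptions rather than the paper's actual analysis.

```python
# Illustrative marker-to-severity correlations on a hypothetical per-patient table.
import pandas as pd
from scipy.stats import spearmanr

# Hypothetical per-patient summary table: one row per participant.
df = pd.DataFrame({
    "facial_expressivity": [0.42, 0.31, 0.55, 0.28, 0.47],
    "speech_prevalence":   [0.61, 0.44, 0.72, 0.39, 0.58],
    "panss_negative":      [22, 28, 18, 31, 20],
    "panss_total":         [68, 82, 59, 90, 64],
})

for marker in ["facial_expressivity", "speech_prevalence"]:
    for scale in ["panss_negative", "panss_total"]:
        rho, p = spearmanr(df[marker], df[scale])
        print(f"{marker} vs {scale}: rho={rho:.2f}, p={p:.3f}")
```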


2020 ◽  
Author(s):  
Isaac Galatzer-Levy ◽  
Anzar Abbas ◽  
Anja Ries ◽  
Stephanie Homan ◽  
Laura Sels ◽  
...  

BACKGROUND: Multiple symptoms of suicide risk are assessed based on visual and auditory information, including flattened affect, reduced movement, and slowed speech. Objective quantification of such symptomatology from novel data sources can increase the sensitivity, scalability, and timeliness of suicide risk assessment.

OBJECTIVE: The goal of this work was to determine whether key indicators of suicide severity could be measured in an objective and automated manner using video data captured during clinical interviews that provided structured questions but were otherwise kept deliberately open to mimic psychiatric interviewing in routine care.

METHODS: In the current study we utilized video to quantify facial, vocal, and movement markers associated with mood, emotion, and motor functioning from a structured clinical conversation in 20 patients admitted to a psychiatric hospital following a suicide attempt. Measures were calculated using open-source deep learning algorithms for processing facial expressivity, head movement, and vocal characteristics. Derived digital measures of flattened affect, reduced movement, and slowed speech were compared to suicide severity using the Beck Suicide Scale (BSS), controlling for age and gender using multiple linear regression.

RESULTS: Suicide severity was associated with multiple visual and auditory markers, including speech prevalence (β = −0.68; P = .017; r² = .40), overall expressivity (β = −0.46; P = .10; r² = .27), and head movement measured as head pitch variability (β = −1.24; P = .006; r² = .48) and head yaw variability (β = −0.54; P = .055; r² = .32).

CONCLUSIONS: Digital measurements of facial affect, movement, and speech prevalence demonstrated strong effect sizes and significant linear associations with severity of suicidal ideation.
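The abstract above describes relating digital measures to Beck Suicide Scale scores with multiple linear regression controlling for age and gender. The sketch below shows one way such a model could be fit with statsmodels; the column names and data are synthetic placeholders, not the study's.

```python
# Illustrative OLS regression: one digital marker predicting BSS, adjusting for age and gender.
import pandas as pd
import statsmodels.formula.api as smf

# Synthetic example data: one row per patient.
data = pd.DataFrame({
    "bss":       [14, 22, 9, 27, 18, 11, 25, 16],
    "pitch_var": [2.1, 1.4, 3.0, 0.9, 1.8, 2.6, 1.1, 2.0],
    "age":       [34, 41, 29, 52, 45, 38, 27, 31],
    "gender":    ["f", "m", "f", "m", "f", "m", "f", "m"],
})

# Ordinary least squares with gender entered as a categorical covariate.
model = smf.ols("bss ~ pitch_var + age + C(gender)", data=data).fit()
print(model.summary())            # coefficients (beta) and p-values
print("R squared:", model.rsquared)
```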


2020 ◽  
Author(s):  
Isaac R. Galatzer-Levy ◽  
Anzar Abbas ◽  
Anja Ries ◽  
Stephanie Homan ◽  
Vidya Koesmahargyo ◽  
...  

Abstract

Background: Multiple symptoms of suicide risk are assessed based on visual and auditory information, including flattened affect, reduced movement, and slowed speech. Objective quantification of such symptomatology from novel data sources can increase the sensitivity, scalability, and timeliness of suicide risk assessment.

Methods: In the current study we utilized video to quantify facial, vocal, and movement markers associated with mood, emotion, and motor functioning from a structured clinical conversation in 20 patients admitted to a psychiatric hospital following a suicide attempt. Measures were calculated using open-source deep learning algorithms for processing facial expressivity, head movement, and vocal characteristics. Derived digital measures of flattened affect, reduced movement, and slowed speech were compared to suicide severity using the Beck Suicide Scale (BSS), controlling for age and gender using multiple linear regression.

Results: Suicide severity was associated with multiple visual and auditory markers, including speech prevalence (β = −0.68; P = .017; r² = .40), overall expressivity (β = −0.46; P = .10; r² = .27), and head movement measured as head pitch variability (β = −1.24; P = .006; r² = .48) and head yaw variability (β = −0.54; P = .055; r² = .32).

Conclusions: Digital measurements of facial affect, movement, and speech prevalence demonstrated strong effect sizes and significant linear associations with severity of suicidal ideation.
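The head movement markers above (pitch and yaw variability) can be summarized from per-frame head pose estimates. The snippet below assumes OpenFace 2.0 output (pose_Rx for pitch, pose_Ry for yaw) and uses the standard deviation as the variability measure; both the file handling and that choice are assumptions for illustration.

```python
# Illustrative head-pose variability summary from an OpenFace 2.0 per-frame CSV.
import pandas as pd

def head_pose_variability(openface_csv: str) -> dict:
    frames = pd.read_csv(openface_csv)
    frames.columns = frames.columns.str.strip()   # OpenFace headers carry leading spaces

    # Keep only frames where the face was tracked successfully.
    tracked = frames[frames["success"] == 1]

    return {
        "pitch_variability": float(tracked["pose_Rx"].std()),
        "yaw_variability": float(tracked["pose_Ry"].std()),
    }
```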


10.2196/24727 ◽  
2020 ◽  
Author(s):  
Radia Zeghari ◽  
Alexandra König ◽  
Rachid Guerchouche ◽  
Garima Sharma ◽  
Jyoti Joshi ◽  
...  

2020 ◽  
Author(s):  
Radia Zeghari ◽  
Alexandra König ◽  
Rachid Guerchouche ◽  
Garima Sharma ◽  
Jyoti Joshi ◽  
...  

BACKGROUND: Neurocognitive disorders are often accompanied by behavioral symptoms such as anxiety, depression, and/or apathy. These symptoms can occur very early in the disease progression and are often difficult to detect and quantify in non-specialized clinical settings. In this study we focus on apathy, one of the most common and debilitating neuropsychiatric symptoms in neurocognitive disorders.

OBJECTIVE: Specifically, we investigated whether facial expressivity extracted through computer vision software correlates with the severity of apathy symptoms in elderly subjects with neurocognitive disorders.

METHODS: 63 subjects (38 females and 25 males) with a neurocognitive disorder participated in the study. Apathy was assessed using the Apathy Inventory (AI), a scale comprising three domains of apathy: loss of interest, loss of initiation, and emotional blunting. The higher the scale's score, the more severe the apathy symptoms. Participants were asked to recall a positive and a negative event of their life while their voice and face were recorded through a tablet device. Action units (AUs), which are basic facial movements, were extracted using OpenFace 2.0. The intensity and presence of 17 AUs were extracted for each frame of the video in both the positive and negative storytelling. Average intensity and frequency of AU activation were calculated for each participant in each video. Partial correlations (controlling for the level of depression and cognitive impairment) were performed between these indices and the AI subscales.

RESULTS: AU intensity and frequency were negatively correlated with apathy scale scores, in particular with the emotional blunting component. The more severe the apathy symptoms, the less expressivity participants displayed in specific emotional and non-emotional AUs while recalling an emotional event. Different AUs showed significant correlations depending on the gender of the participant and the task (positive vs negative story), suggesting the importance of assessing male and female participants independently.

CONCLUSIONS: Our study suggests the value of employing computer vision-based facial analysis to quantify facial expressivity and assess the severity of apathy symptoms in subjects with neurocognitive disorders. This may represent a useful tool for preliminary apathy assessment in non-specialized settings and could complement classical clinical scales. Future studies including larger samples should confirm the clinical relevance of this kind of instrument.

CLINICALTRIAL: N/A
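To make the pipeline above concrete, here is a minimal sketch, under assumed column and score names, of averaging one action unit's intensity from OpenFace 2.0 frame-level output and computing a residualization-based partial correlation with an apathy subscale while controlling for depression and cognition; it is illustrative, not the study's code.

```python
# Illustrative AU summary and partial correlation (hypothetical data and column names).
import numpy as np
import pandas as pd
from scipy.stats import pearsonr

def mean_au_intensity(openface_csv: str, au: str = "AU12_r") -> float:
    """Average intensity of one action unit (e.g. AU12) over all video frames."""
    frames = pd.read_csv(openface_csv)
    frames.columns = frames.columns.str.strip()
    return float(frames[au].mean())

def partial_corr(x, y, covariates):
    """Correlation between x and y after regressing out the covariates."""
    z = np.column_stack([np.ones(len(x)), covariates])
    rx = x - z @ np.linalg.lstsq(z, x, rcond=None)[0]
    ry = y - z @ np.linalg.lstsq(z, y, rcond=None)[0]
    return pearsonr(rx, ry)

# Hypothetical per-participant table (values are placeholders).
df = pd.DataFrame({
    "au_intensity":    [0.8, 1.3, 0.5, 1.1, 0.4, 0.9, 0.7, 1.0],
    "apathy_blunting": [7, 3, 9, 4, 10, 5, 8, 6],
    "depression":      [4, 2, 6, 3, 5, 2, 7, 4],
    "cognition":       [24, 27, 22, 26, 21, 25, 23, 24],
})

r, p = partial_corr(
    df["au_intensity"].to_numpy(),
    df["apathy_blunting"].to_numpy(),
    df[["depression", "cognition"]].to_numpy(),
)
print(f"partial r = {r:.2f}, p = {p:.3f}")
```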


2019 ◽  
Vol 1 ◽  
pp. 147-155
Author(s):  
B. Polityńska ◽  
O. Pokorska ◽  
A. Łukaszyk-Spryszak ◽  
A. Kowalewicz

Communication difficulties in Parkinson's disease (PD) arise not only as the result of the motor symptoms of the disorder, but also as a consequence of cognitive and affective impairments which are recognised as being part of the disease process. These changes are thought to account for much of the stigma associated with the condition, thereby complicating the ability of patients to inter-relate with others, including their closest family. This inevitably affects quality of life for both the patient and those family members involved in his/her care.

The present paper presents an analysis of how the deficits in motor and cognitive function associated with PD, in the form of reduced facial expressivity, altered language skills, motor and cognitive slowness, and disturbances in the pragmatic aspects of language, affect the communication abilities of patients with the disorder and give rise to stigmatisation, which in turn impacts the disability seen in PD.


2019 ◽  
Vol 128 (4) ◽  
pp. 341-351 ◽  
Author(s):  
Tina Gupta ◽  
Claudia M. Haase ◽  
Gregory P. Strauss ◽  
Alex S. Cohen ◽  
Vijay A. Mittal

CNS Spectrums ◽  
2019 ◽  
Vol 24 (1) ◽  
pp. 204-205
Author(s):  
Mina Boazak ◽  
Robert Cotes

Abstract

Introduction: Facial expressivity in schizophrenia has been a topic of clinical interest for the past century. Besides difficulty decoding the facial expressions of others, schizophrenia sufferers often have difficulty encoding facial expressions. Traditionally, evaluations of facial expressions have been conducted by trained human observers using the Facial Action Coding System. The process was slow and subject to intra- and inter-observer variability. In the past decade, the traditional Facial Action Coding System developed by Ekman has been adapted for use in affective computing. Here we assess the applications of this adaptation for schizophrenia, the findings of current groups, and the future role of this technology.

Materials and Methods: We review the applications of computer vision technology in schizophrenia using PubMed and Google Scholar with the search criteria "computer vision" AND "schizophrenia" from January 2010 to June 2018.

Results: Five articles were selected for inclusion, representing 1 case series and 4 case-control analyses. Authors assessed variations in facial action unit presence, intensity, various measures of length of activation, action unit clustering, congruence, and appropriateness. Findings point to variations in each of these areas, except action unit appropriateness, between control and schizophrenia patients. Computer vision techniques were also demonstrated to have high accuracy in classifying schizophrenia from control patients, reaching an AUC just under 0.9 in one study, and to predict psychometric scores, reaching Pearson's correlation values of under 0.7.

Discussion: Our review of the literature demonstrates agreement between the findings of traditional and contemporary assessment techniques of facial expressivity in schizophrenia. It also demonstrates that current computer vision techniques can differentiate schizophrenia from control populations and predict psychometric scores. Nevertheless, the predictive accuracy of these technologies leaves room for growth. On analysis, our group found two modifiable areas that may contribute to improving algorithm accuracy: assessment protocol and feature inclusion. Based on our review, we recommend assessment of facial expressivity during a period of silence in addition to an assessment during a clinically structured interview utilizing emotionally evocative questions. Furthermore, where underfitting is a problem, we recommend progressive inclusion of features including action unit activation, intensity, rate of onset and offset, clustering (including richness, distribution, and typicality), and congruence. Inclusion of each of these features may improve algorithm predictive accuracy.

Conclusion: We review current applications of computer vision in the assessment of facial expressions in schizophrenia, present the results of current innovative works in the field, and discuss areas for continued development.
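As an illustration of the classification analyses summarized above (group discrimination scored by AUC), the sketch below fits a cross-validated logistic regression on placeholder AU-derived features; the feature set, model choice, and data are assumptions and do not reproduce any reviewed study.

```python
# Illustrative group classification from AU-derived features, scored with ROC AUC.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

# Placeholder features: e.g., mean intensity of 17 AUs plus their activation frequencies.
X = rng.normal(size=(60, 34))          # 60 subjects x 34 AU-derived features (synthetic)
y = rng.integers(0, 2, size=60)        # 0 = control, 1 = schizophrenia (synthetic labels)

clf = LogisticRegression(max_iter=1000)
auc_scores = cross_val_score(clf, X, y, cv=5, scoring="roc_auc")
print("Cross-validated AUC:", auc_scores.mean())
```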

