Rapid and Accurate Behavioral Health Diagnostic Screening: Initial Validation Study of a Web-Based, Self-Report Tool (the SAGE-SR) (Preprint)

2017 ◽  
Author(s):  
Benjamin Brodey ◽  
Susan E Purcell ◽  
Karen Rhea ◽  
Philip Maier ◽  
Michael First ◽  
...  

BACKGROUND The Structured Clinical Interview for DSM (SCID) is considered the gold standard assessment for accurate, reliable psychiatric diagnoses; however, because of its length, complexity, and training required, the SCID is rarely used outside of research. OBJECTIVE This paper aims to describe the development and initial validation of a Web-based, self-report screening instrument (the Screening Assessment for Guiding Evaluation-Self-Report, SAGE-SR) based on the Diagnostic and Statistical Manual of Mental Disorders, Fifth Edition (DSM-5) and the SCID-5-Clinician Version (CV) intended to make accurate, broad-based behavioral health diagnostic screening more accessible within clinical care. METHODS First, study staff drafted approximately 1200 self-report items representing individual granular symptoms in the diagnostic criteria for the 8 primary SCID-CV modules. An expert panel iteratively reviewed, critiqued, and revised items. The resulting items were iteratively administered and revised through 3 rounds of cognitive interviewing with community mental health center participants. In the first 2 rounds, the SCID was also administered to participants to directly compare their Likert self-report and SCID responses. A second expert panel evaluated the final pool of items from cognitive interviewing and criteria in the DSM-5 to construct the SAGE-SR, a computerized adaptive instrument that uses branching logic from a screener section to administer appropriate follow-up questions to refine the differential diagnoses. The SAGE-SR was administered to healthy controls and outpatient mental health clinic clients to assess test duration and test-retest reliability. Cutoff scores for screening into follow-up diagnostic sections and criteria for inclusion of diagnoses in the differential diagnosis were evaluated. 
RESULTS The expert panel reduced the initial 1200 test items to 664 items that panel members agreed collectively represented the SCID items from the 8 targeted modules and DSM criteria for the covered diagnoses. These 664 items were iteratively submitted to 3 rounds of cognitive interviewing with 50 community mental health center participants; the expert panel reviewed session summaries and agreed on a final set of 661 clear and concise self-report items representing the desired criteria in the DSM-5. The SAGE-SR constructed from this item pool took an average of 14 min to complete in a nonclinical sample versus 24 min in a clinical sample. Responses to individual items can be combined to generate DSM criteria endorsements and differential diagnoses, as well as provide indices of individual symptom severity. Preliminary measures of test-retest reliability in a small, nonclinical sample were promising, with good to excellent reliability for screener items in 11 of 13 diagnostic screening modules (intraclass correlation coefficient [ICC] or kappa coefficients ranging from .60 to .90), with mania achieving fair test-retest reliability (ICC=.50) and other substance use endorsed too infrequently for analysis. CONCLUSIONS The SAGE-SR is a computerized adaptive self-report instrument designed to provide rigorous differential diagnostic information to clinicians.
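The SAGE-SR abstract reports test-retest reliability as intraclass correlation coefficients (ICC) or kappa coefficients. As an illustration only (the data below are hypothetical, not from the study), both statistics can be computed from paired administrations in plain Python:

```python
from statistics import mean

def icc_2_1(scores):
    """Two-way random-effects, absolute-agreement, single-measure ICC(2,1).
    `scores` is a list of [time1, time2, ...] rows, one row per participant."""
    n = len(scores)          # participants
    k = len(scores[0])       # occasions (e.g. test and retest)
    grand = mean(v for row in scores for v in row)
    row_means = [mean(row) for row in scores]
    col_means = [mean(row[j] for row in scores) for j in range(k)]
    ss_rows = k * sum((r - grand) ** 2 for r in row_means)
    ss_cols = n * sum((c - grand) ** 2 for c in col_means)
    ss_total = sum((v - grand) ** 2 for row in scores for v in row)
    ms_rows = ss_rows / (n - 1)
    ms_cols = ss_cols / (k - 1)
    ms_err = (ss_total - ss_rows - ss_cols) / ((n - 1) * (k - 1))
    return (ms_rows - ms_err) / (
        ms_rows + (k - 1) * ms_err + k * (ms_cols - ms_err) / n)

def cohens_kappa(rating1, rating2):
    """Chance-corrected agreement for two categorical ratings of the same cases."""
    cases = len(rating1)
    categories = set(rating1) | set(rating2)
    p_obs = sum(a == b for a, b in zip(rating1, rating2)) / cases
    p_exp = sum((rating1.count(c) / cases) * (rating2.count(c) / cases)
                for c in categories)
    return (p_obs - p_exp) / (1 - p_exp)
```

ICC suits the ordinal Likert screener scores, while kappa suits dichotomous endorsements; which applies to which module is a study-level decision not detailed in the abstract.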

2019 ◽  
Vol 25 (1) ◽  
pp. 91-104 ◽  
Author(s):  
Chris Margaret Aanondsen ◽  
Thomas Jozefiak ◽  
Kerstin Heiling ◽  
Tormod Rimehaug

Abstract The majority of studies on mental health in deaf and hard-of-hearing (DHH) children report a higher level of mental health problems than in typically hearing children. Inconsistencies in reported prevalence of mental health problems have been found to be related to a number of factors, such as language skills, cognitive ability, and heterogeneous samples, as well as to validity problems caused by using written measures designed for typically hearing children. This study evaluates the psychometric properties of the self-report version of the Strengths and Difficulties Questionnaire (SDQ) in Norwegian Sign Language (NSL; SDQ-NSL) and in written Norwegian (SDQ-NOR). Forty-nine DHH children completed the SDQ-NSL as well as the SDQ-NOR in randomized order, and their parents completed the parent version of the SDQ-NOR and a questionnaire on hearing- and language-related information. Internal consistency was examined using Dillon–Goldstein’s rho, test–retest reliability using intraclass correlations, and construct validity by confirmatory factor analysis (CFA) and partial least squares structural equation modeling. Internal consistency and test–retest reliability were established as acceptable to good. CFA resulted in the best fit for the proposed five-factor model for both versions, although not all fit indices reached acceptable levels. The reliability and validity of the SDQ-NSL seem promising even though the validation was based on a small sample size.
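Dillon–Goldstein’s rho, used above for internal consistency, is a composite-reliability index computed from a construct’s standardized outer loadings. A minimal sketch with hypothetical loadings (not the study’s estimates):

```python
def dillon_goldstein_rho(loadings):
    """Composite reliability (Dillon-Goldstein's rho) for one reflective
    construct, from its standardized outer loadings."""
    squared_sum = sum(loadings) ** 2               # (sum of loadings)^2
    error_var = sum(1 - l ** 2 for l in loadings)  # summed item error variance
    return squared_sum / (squared_sum + error_var)
```

Unlike Cronbach’s alpha, rho weights items by their loadings rather than assuming all items contribute equally, which is why it is the usual choice in PLS-SEM analyses like the one described above.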


Assessment ◽  
2016 ◽  
Vol 25 (1) ◽  
pp. 3-13 ◽  
Author(s):  
David F. Tolin ◽  
Christina Gilliam ◽  
Bethany M. Wootton ◽  
William Bowe ◽  
Laura B. Bragdon ◽  
...  

Three hundred sixty-two adult patients were administered the Diagnostic Interview for Anxiety, Mood, and OCD and Related Neuropsychiatric Disorders (DIAMOND). Of these, 121 provided interrater reliability data, and 115 provided test–retest reliability data. Participants also completed a battery of self-report measures that assess symptoms of anxiety, mood, and obsessive-compulsive and related disorders. Interrater reliability of DIAMOND anxiety, mood, and obsessive-compulsive and related diagnoses ranged from very good to excellent. Test–retest reliability of DIAMOND diagnoses ranged from good to excellent. Convergent validity was established by significant between-group comparisons on applicable self-report measures for nearly all diagnoses. The results of the present study indicate that the DIAMOND is a promising semistructured diagnostic interview for DSM-5 disorders.
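Convergent validity in the DIAMOND study rests on significant between-group comparisons on self-report measures. The abstract does not name the exact test; one common choice for comparing two diagnostic groups without assuming equal variances is Welch’s t statistic, sketched here with hypothetical scores:

```python
from math import sqrt
from statistics import mean, variance

def welch_t(group_a, group_b):
    """Welch's t statistic for comparing two independent group means
    without assuming equal group variances."""
    na, nb = len(group_a), len(group_b)
    va, vb = variance(group_a), variance(group_b)  # sample variances
    return (mean(group_a) - mean(group_b)) / sqrt(va / na + vb / nb)
```

A large positive t for, say, OCD patients versus other patients on an OCD symptom measure would support the diagnosis-measure correspondence the study reports.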


2009 ◽  
Vol 11 (3) ◽  
pp. e35 ◽  
Author(s):  
Janet Brigham ◽  
Christina N Lessov-Schlaggar ◽  
Harold S Javitz ◽  
Ruth E Krasnow ◽  
Mary McElroy ◽  
...  

2020 ◽  
Author(s):  
Fredrik Söderqvist ◽  
Peter Larm

Abstract Background: The Mental Health Continuum – Short Form (MHC-SF) is a self-report measure that has been increasingly used to monitor mental well-being at the population level. The aim of this study was to evaluate, for the first time, the psychometric properties of the MHC-SF in a Swedish population, more specifically adolescents. Methods: First, the evaluation was performed by examining face validity and test–retest reliability obtained in a pre-study (n = 93). Then, using data from the Survey of Adolescent Life in Vestmanland 2020 (n = 3880; participation rate = 71%; females = 51%; mean age = 16.23 years), we performed confirmatory factor analysis on different factor structures based on theory and previous research. Model-based estimates were calculated to assess the internal reliability of the factor structure with the best fit. Convergent validity was assessed by bivariate as well as model-based correlations, and test–retest reliability was evaluated by intraclass correlation coefficients. Results: This study of Swedish adolescents found that the MHC-SF is essentially unidimensional and best described by a bifactor model consisting of a dominant general well-being factor and three specific group factors of emotional, social, and psychological well-being. Its overall reliability and the reliability of the general well-being factor were good to excellent, while the reliability of its subscales (the specific group factors) was poor; the subscales should therefore not be used alone. Test–retest reliability of the total scale was good, and convergent validity was supported by strong to very strong correlations with the Short Warwick–Edinburgh Mental Well-being Scale. Conclusions: We consider the Swedish MHC-SF to be a psychometrically sound instrument for monitoring overall mental well-being in Swedish adolescents.
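Bifactor results like the one above are typically quantified with omega-hierarchical: the share of total-score variance attributable to the general factor alone, as opposed to the group factors and item error. A sketch under hypothetical standardized loadings (not the study’s estimates):

```python
def omega_hierarchical(general, groups):
    """Omega-hierarchical for a bifactor model: proportion of total-score
    variance explained by the general factor alone.
    `general`: standardized general-factor loadings, one per item.
    `groups`: dict mapping group-factor name -> {item_index: loading}."""
    gen_part = sum(general) ** 2
    grp_part = sum(sum(lds.values()) ** 2 for lds in groups.values())
    # item uniquenesses implied by standardized loadings
    grp_load = {i: l for lds in groups.values() for i, l in lds.items()}
    uniq = sum(1 - g ** 2 - grp_load.get(i, 0.0) ** 2
               for i, g in enumerate(general))
    return gen_part / (gen_part + grp_part + uniq)
```

A high omega-hierarchical alongside weak group-factor loadings is exactly the pattern that justifies scoring only the total scale, as the authors recommend.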


Author(s):  
Fredrik Söderqvist ◽  
Peter Larm

Abstract The Mental Health Continuum – Short Form (MHC-SF) is a self-report measure that has been increasingly used to monitor mental well-being at the population level. The aim of this study was to evaluate, for the first time, the psychometric properties of the MHC-SF in a population of Swedish adolescents. First, the evaluation was performed by examining face validity and test–retest reliability obtained in a pre-study. Then, using data from the Survey of Adolescent Life in Vestmanland 2020 (n = 3880), we performed confirmatory factor analysis on different factor structures based on theory and previous research. Model-based estimates were calculated to assess the internal reliability of the factor structure with the best fit. Convergent validity was assessed by bivariate as well as model-based correlations, and test–retest reliability was evaluated by intraclass correlation coefficients. The results show that the MHC-SF is best described by a bifactor model consisting of a dominant general well-being factor and three specific group factors of emotional, social, and psychological well-being. Its overall reliability was high to very high, while the reliability of its subscales was low. A practical implication of the latter is that the subscales should not be used on their own, because they are more likely to reliably measure the general well-being factor than the specific group factors. Test–retest reliability of the total scale was acceptable, and convergent validity was supported. In conclusion, we consider the Swedish MHC-SF to be a psychometrically sound instrument for monitoring overall mental well-being in Swedish adolescents.


2021 ◽  
Author(s):  
Jennifer Newson ◽  
Vladyslav Pastukh ◽  
Tara Thiagarajan

Background: The Mental Health Quotient (MHQ) is an assessment of mental health and wellbeing that comprehensively covers symptoms across 10 major psychiatric disorders as defined by the DSM-5, in addition to constructs defined by RDoC and positive dimensions of mental function, using a novel life-impact scale. An overall measure of mental wellbeing, the MHQ score, is computed from these elements using a nonlinear transformation of the scale followed by a rescaling. The MHQ has been deployed as part of the Mental Health Million Project as a freely available, anonymous online assessment that, on completion, provides a score placing the individual on a spectrum from Distressed to Thriving, along with a personal report spanning their various dimensions of mental wellbeing and strategies for improvement. Since its launch in April 2020, over 200,000 people have taken the MHQ. Here we provide various demonstrations of the reliability and validity of the MHQ. Objective: This paper outlines the reliability and validity of the MHQ, including construct validity of the life-impact scale, sample and test-retest reliability of the assessment, and criterion validation of the MHQ with respect to productivity loss and clinical burden. Methods: To assess sample reliability, random demographically matched samples of 11,033 people from within the same 6-month period were compared. Test-retest reliability was determined using the subset of individuals who had taken the assessment twice at least 3 days apart (N=1907). In addition, a subset of respondents (N=4,247 or 7,625) were asked additional questions (along with the standard MHQ questions) on symptom frequency and severity for an example symptom (Feelings of Sadness, Distress or Hopelessness), days of work missed in the past month, and days with reduced productivity.
In addition, elements whose negative life impact met the threshold to be considered a ‘symptom’ were mapped to the criteria for each of 10 major DSM-5-based mental health disorders to calculate the clinical burden (N=174,618). Results: Distinct samples collected during the same period had indistinguishable MHQ distributions and average ratings for each of the 47 elements, demonstrating the reliability of the assessment, and MHQ scores were correlated at r=0.84 between retakes. The life-impact rating was correlated with both frequency and severity of symptoms, and mean values had a clear linear relationship with R²>0.99. Furthermore, aggregate MHQ scores were systematically related to both productivity and clinical burden. At one end of the scale, those in the Distressed category had an average productivity loss of 15.2±0.5 days per month, with 89.08% (8,986/10,087) mapping to 1 or more DSM-5-based clinical disorders. In contrast, those at the other end of the scale, in the Thriving category, had an average productivity loss of 1.3±0.1 days per month, and 0.00% (1/24,365) had any DSM-5-based clinical disorder. Conclusions: The MHQ is a valid and reliable assessment of mental wellbeing when delivered anonymously online.
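The r=0.84 between retakes reported above is a Pearson correlation between first and second administrations; a minimal sketch with hypothetical score pairs:

```python
from math import sqrt
from statistics import mean

def pearson_r(first_take, retake):
    """Pearson correlation between first-take and retake scores."""
    mx, my = mean(first_take), mean(retake)
    cov = sum((a - mx) * (b - my) for a, b in zip(first_take, retake))
    return cov / sqrt(sum((a - mx) ** 2 for a in first_take) *
                      sum((b - my) ** 2 for b in retake))
```

Squaring the same statistic gives the R² used above for the relationship between life-impact ratings and symptom frequency/severity means.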


2000 ◽  
Vol 16 (1) ◽  
pp. 53-58 ◽  
Author(s):  
Hans Ottosson ◽  
Martin Grann ◽  
Gunnar Kullgren

Summary: Short-term stability, or test-retest reliability, of self-reported personality traits is likely to be biased if the respondent is affected by a depressive or anxiety state. However, in some studies, DSM-oriented self-report instruments have proved to be reasonably stable in the short term, regardless of co-occurring depressive or anxiety disorders. In the present study, we examined the short-term test-retest reliability of a new self-report questionnaire for personality disorder diagnosis (DIP-Q) in a clinical sample of 30 individuals with either a depressive, an anxiety, or no axis-I disorder. Test-retest scorings from subjects with depressive disorders were mostly unstable, with a significant change in fulfilled criteria between entry and retest for three out of ten personality disorders: borderline, avoidant, and obsessive-compulsive personality disorder. Scorings from subjects with anxiety disorders were unstable only for cluster C and dependent personality disorder items. In the absence of co-morbid depressive or anxiety disorders, mean dimensional scores on the DIP-Q showed no significant differences between entry and retest. Overall, the effect of state on trait scorings was moderate, and we conclude that the test-retest reliability of the DIP-Q is acceptable.


Author(s):  
Di Long ◽  
Suzanne Polinder ◽  
Gouke J. Bonsel ◽  
Juanita A. Haagsma

Abstract Purpose To assess the test–retest reliability of the EQ-5D-5L and the reworded Quality of Life After Traumatic Brain Injury Overall Scale (QOLIBRI-OS) for the general population of Italy, the Netherlands, and the United Kingdom (UK). Methods The sample contains 1864 members of the general population (aged 18–75 years) of Italy, the Netherlands, and the UK who completed a web-based questionnaire at two consecutive time points. The survey included items on gender, age, level of education, occupational status, household annual income, chronic health status, and the EQ-5D-5L and reworded QOLIBRI-OS instrument. Test–retest reliability of the EQ-5D-5L dimensions, EQ-5D-5L summary index, EQ VAS, reworded QOLIBRI-OS dimensions and reworded QOLIBRI-OS level sum score was examined by Gwet’s Agreement Coefficient (Gwet’s AC) and Intraclass Correlation Coefficient (ICC). Results Gwet’s AC ranged from 0.64 to 0.97 for EQ-5D-5L dimensions. The ICC ranged from 0.73 to 0.84 for the EQ-5D-5L summary index and 0.61 to 0.68 for EQ VAS in the three countries. Gwet’s AC ranged from 0.35 to 0.55 for reworded QOLIBRI-OS dimensions in the three countries. The ICC ranged from 0.69 to 0.77 for reworded QOLIBRI-OS level sum score. Conclusion Test–retest reliability of the EQ-5D-5L administered via a web-based questionnaire was substantial to almost perfect for the EQ-5D-5L dimensions, good for EQ-5D-5L summary index, and moderate for the EQ VAS. However, test–retest reliability was less satisfactory for the reworded QOLIBRI-OS. This indicates that the web-based EQ-5D-5L is a reliable instrument for the general population, but further research of the reworded QOLIBRI-OS is required.
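Gwet’s AC, used above for the categorical EQ-5D-5L and QOLIBRI-OS dimensions, corrects observed agreement with a chance term based on average category prevalence, which makes it more stable than kappa when one response category dominates. A sketch of the first-order coefficient (AC1) for two ratings of the same cases, with hypothetical data:

```python
def gwets_ac1(rating1, rating2):
    """Gwet's first-order agreement coefficient (AC1) for two nominal
    ratings of the same cases (here: test and retest responses)."""
    n = len(rating1)
    categories = sorted(set(rating1) | set(rating2))
    q = len(categories)
    p_obs = sum(a == b for a, b in zip(rating1, rating2)) / n
    # average probability of classification into each category, both occasions
    pi = [(rating1.count(c) + rating2.count(c)) / (2 * n) for c in categories]
    p_exp = sum(p * (1 - p) for p in pi) / (q - 1)
    return (p_obs - p_exp) / (1 - p_exp)
```

For ordinal dimensions such as the five EQ-5D-5L levels, weighted variants (AC2) are often preferred; the unweighted AC1 above is the simplest case.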


2021 ◽  
Vol 24 ◽  
Author(s):  
Anna Torres-Giménez ◽  
Alba Roca-Lecumberri ◽  
Bàrbara Sureda ◽  
Susana Andrés-Perpiña ◽  
Bruma Palacios-Hernández ◽  
...  

Abstract The aim of the present study was to validate the Spanish Postpartum Bonding Questionnaire (PBQ) against external criteria of bonding disorder, as well as to establish its test-retest reliability. One hundred fifty-six postpartum women consecutively recruited from a perinatal mental health outpatient unit completed the PBQ at 4–6 weeks postpartum. Four weeks later, all mothers completed the PBQ again and were interviewed using the Birmingham Interview for Maternal Mental Health to establish the presence of a bonding disorder. Receiver operating characteristic curve analysis revealed an area under the curve (AUC) value for the PBQ total score of 0.93, 95% CI [0.88, 0.98], with an optimal cut-off of 13 for detecting bonding disorders (sensitivity: 92%, specificity: 87%). Optimal cut-off scores for each scale were also obtained. The test-retest reliability coefficients were moderate to good. Our data confirm the validity of the PBQ for detecting bonding disorders in the Spanish population.
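The ROC analysis above pairs an AUC with an optimal cut-off; a common rule for choosing that cut-off is Youden’s J (maximizing sensitivity + specificity − 1), though the abstract does not name the criterion used. A sketch with hypothetical PBQ totals (not the study’s data):

```python
def roc_auc_and_cutoff(case_scores, control_scores):
    """AUC via pairwise comparison, plus the Youden-optimal cutoff.
    `case_scores`: totals for mothers with a bonding disorder;
    `control_scores`: totals for mothers without one. Higher = more problems."""
    pairs = len(case_scores) * len(control_scores)
    wins = sum((c > h) + 0.5 * (c == h)
               for c in case_scores for h in control_scores)
    auc = wins / pairs
    best_cut, best_j = None, -1.0
    for cut in sorted(set(case_scores) | set(control_scores)):
        sens = sum(c >= cut for c in case_scores) / len(case_scores)
        spec = sum(h < cut for h in control_scores) / len(control_scores)
        j = sens + spec - 1  # Youden's J statistic
        if j > best_j:
            best_cut, best_j = cut, j
    return auc, best_cut
```

The pairwise formulation makes the AUC’s interpretation concrete: the probability that a randomly chosen case scores higher than a randomly chosen control.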


2021 ◽  
Vol ahead-of-print (ahead-of-print) ◽  
Author(s):  
Adam Polnay ◽  
Helen Walker ◽  
Christopher Gallacher

Purpose Relational dynamics between patients and staff in forensic settings can be complicated and demanding for both sides. Reflective practice groups (RPGs) bring clinicians together to reflect on these dynamics. To date, evaluation of RPGs has lacked quantitative focus and a suitable quantitative tool. Therefore, a self-report tool was designed. This paper aims to pilot The Relational Aspects of CarE (TRACE) scale with clinicians in a high-secure hospital and investigate its psychometric properties. Design/methodology/approach A multi-professional sample of 80 clinicians was recruited, completing the TRACE and the Attitudes to Personality Disorder Questionnaire (APDQ). Exploratory factor analysis (EFA) determined the factor structure and internal consistency of the TRACE. A subset was selected to measure test–retest reliability. The TRACE was cross-validated against the APDQ. Findings EFA found five factors underlying the 20 TRACE items: “awareness of common responses,” “discussing and normalising feelings,” “utilising feelings,” “wish to care” and “awareness of complicated affects.” This factor structure is complex, but items clustered logically onto the key areas originally used to generate items. Internal consistency (α = 0.66, 95% confidence interval (CI) = 0.55–0.76) demonstrated borderline acceptability. The TRACE demonstrated good test–retest reliability (intraclass correlation = 0.94, 95% CI = 0.78–0.98) and face validity. The TRACE showed a slight negative correlation with the APDQ. A larger data set is needed to substantiate these preliminary findings. Practical implications Early indications suggest the TRACE is valid and reliable, and suitable for measuring the effectiveness of reflective practice. Originality/value The TRACE is a distinctive measure that fills a methodological gap in the literature.
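The internal-consistency figure above (α = 0.66) is, presumably, Cronbach’s alpha, which can be computed directly from per-item scores; a sketch with hypothetical item data:

```python
from statistics import variance

def cronbach_alpha(item_scores):
    """Cronbach's alpha from `item_scores`, a list of per-item score lists,
    where all items were answered by the same respondents in the same order."""
    k = len(item_scores)
    item_var = sum(variance(item) for item in item_scores)
    totals = [sum(vals) for vals in zip(*item_scores)]
    return k / (k - 1) * (1 - item_var / variance(totals))
```

Alpha near 0.66, as reported for the 20-item TRACE, is consistent with the multi-factor structure the EFA found: alpha assumes the items measure one construct, so heterogeneous item clusters pull it down.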

