Development and psychometric evaluation of the Clericalism Observer Rating Scale.
2020 · Vol 7 (4) · pp. 310-325
Author(s): Martin J. Burnham, Jesse Fox, Leo Mickey Fenzel, Joseph Stewart-Sicking, Stephen Sivo
Author(s): Yannik Terhorst, Paula Philippi, Lasse Sander, Dana Schultchen, Sarah Paganini, ...

BACKGROUND Mobile health apps (MHA) have the potential to improve health care. The commercial MHA market is growing rapidly, but the content and quality of available MHA are largely unknown. Psychometrically sound instruments for assessing the quality and content of MHA are therefore needed. The Mobile Application Rating Scale (MARS) is one of the most widely used tools for evaluating the quality of MHA across health domains, yet only a few validation studies of its psychometric quality exist, all based on selected samples of MHA. No study has evaluated the construct validity of the MARS or its concurrent validity against other instruments.

OBJECTIVE This study evaluates the construct validity, concurrent validity, reliability, and objectivity of the MARS.

METHODS MARS scoring data were pooled from 15 international app quality reviews to evaluate the psychometric properties of the MARS. The MARS measures app quality across four dimensions: engagement, functionality, aesthetics, and information quality; quality is scored for each dimension and overall. Construct validity was evaluated by comparing competing measurement models using confirmatory factor analysis (CFA). A combination of non-centrality (RMSEA), incremental (CFI, TLI), and residual (SRMR) fit indices was used to evaluate goodness of fit. As a measure of concurrent validity, correlations between the MARS and (1) another quality assessment tool, ENLIGHT, and (2) user star ratings extracted from app stores were examined. Reliability was determined using omega, and objectivity was assessed via the intraclass correlation coefficient (ICC).

RESULTS In total, MARS ratings for 1,299 MHA covering 15 health domains were pooled for the analysis. CFA supported a bifactor model with a general quality factor and an additional factor for each subdimension (RMSEA=0.074, TLI=0.922, CFI=0.940, SRMR=0.059). Reliability was good to excellent (omega 0.79 to 0.93), and objectivity was high (ICC=0.82). The overall MARS rating was positively associated with ENLIGHT (r=0.91, P<0.01) and user ratings (r=0.14, P<0.01).

CONCLUSIONS The psychometric evaluation of the MARS demonstrated its suitability for the quality assessment of MHA. As such, the MARS could be used to make the quality of MHA transparent to health care stakeholders and patients. Future studies could extend the present findings by investigating the test–retest reliability and predictive validity of the MARS.
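The reliability coefficient reported above is omega. As an illustrative sketch (not the authors' code), McDonald's omega for a single factor can be computed from standardized factor loadings; the loadings below are made-up values, not MARS estimates:

```python
def omega_total(loadings):
    """McDonald's omega for a single factor with standardized loadings.

    omega = (sum of loadings)^2 / ((sum of loadings)^2 + sum of error variances),
    where each standardized error variance is 1 - loading^2.
    """
    common = sum(loadings) ** 2
    error = sum(1 - l ** 2 for l in loadings)
    return common / (common + error)

# Hypothetical standardized loadings for four items (not MARS data)
print(round(omega_total([0.80, 0.70, 0.75, 0.85]), 3))  # → 0.858
```

In a bifactor model like the one reported here, separate omega values can be obtained per subscale by applying the same formula to each factor's loadings, which is consistent with the range of values (0.79 to 0.93) given in the abstract.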


2021 · Author(s): Julija Stelmokas, Amber D. Rochette, Robert J. Spencer, Lisa Manderino, Alexandra Sciaky, ...

2017 · Vol 13 (4) · pp. 305-311
Author(s): Tracy Hellem, Lindsay Scholl, Young-Hoon Sung, Hayden Ferguson, Erin McGlade, ...

2019 · Vol 30 (7) · pp. 934-947
Author(s): Simone Jennissen, Mary Beth Connolly Gibbons, Paul Crits-Christoph, Julia Huber, Christoph Nikendei, ...

2012 · Vol 94 (1) · pp. 82-91
Author(s): Ilona Papousek, Kai Ruggeri, Daniel Macher, Manuela Paechter, Moritz Heene, ...

2017 · Vol 38 (2) · pp. 68-76
Author(s): Kristina Luhr, Ann Catrine Eldh, Ulrica Nilsson, Marie Holmefur

The Patient Preferences for Patient Participation tool (The 4Ps) was developed to aid clinical dialogue and to help patients (1) depict, (2) prioritise, and (3) evaluate patient participation using 12 pre-set items reiterated across the three sections. An earlier qualitative evaluation of The 4Ps showed promising results. The present study is a psychometric evaluation of The 4Ps in patients with chronic heart or lung disease (n = 108) in primary and outpatient care. Internal scale validity was evaluated using Rasch analysis, and two-week test–retest reliability of the three sections was assessed using kappa/weighted kappa and a prevalence- and bias-adjusted kappa. The 4Ps was found to be reasonably valid, with varied reliability across sections. Proposed amendments are the rephrasing of two items and modification of the rating scale in Section 2. The 4Ps is suggested for use to increase general knowledge of patient participation, but further studies of its implementation are needed.
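The test–retest statistics mentioned above (Cohen's kappa and the prevalence- and bias-adjusted kappa, PABAK) can both be derived from a cross-tabulation of the two rating occasions. A minimal sketch, using a made-up 2×2 agreement table rather than the study's data:

```python
def kappa_and_pabak(table):
    """Cohen's kappa and PABAK from a square agreement table.

    table[i][j] = number of cases rated category i at test and j at retest.
    PABAK replaces the chance-agreement term with its value under uniform
    marginals: PABAK = (k * Po - 1) / (k - 1) for k categories.
    """
    k = len(table)
    n = sum(sum(row) for row in table)
    po = sum(table[i][i] for i in range(k)) / n           # observed agreement
    rows = [sum(row) for row in table]
    cols = [sum(table[i][j] for i in range(k)) for j in range(k)]
    pe = sum(rows[i] * cols[i] for i in range(k)) / n**2  # chance agreement
    kappa = (po - pe) / (1 - pe)
    pabak = (k * po - 1) / (k - 1)
    return kappa, pabak

# Hypothetical test–retest table for one binary item (not data from this study)
kappa, pabak = kappa_and_pabak([[70, 10], [5, 15]])
print(round(kappa, 3), round(pabak, 3))  # → 0.571 0.7
```

The gap between kappa and PABAK in the example illustrates why the authors report both: with skewed prevalence, kappa is depressed even when observed agreement is high, and PABAK adjusts for that.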


2015 ◽  
Vol 25 (4) ◽  
pp. 1229-1234 ◽  
Author(s):  
Joshua M. Nadeau ◽  
Nicole M. McBride ◽  
Brittney F. Dane ◽  
Amanda B. Collier ◽  
Amanda C. Keene ◽  
...  
