Evaluation of Automatic Scoring in Clinical Performance Oral Examination

Author(s): Kam Bee-sung
2007; Vol 115 (5); pp. 384-389

Author(s): Theodorus G. Mettes, Wil J. M. van der Sanden, Henk G. Mokkink, Michel Wensing, Richard P. T. M. Grol, ...
2020; Vol 51 (2); pp. 479-493

Author(s): Jenny A. Roberts, Evelyn P. Altenberg, Madison Hunter

Purpose: The results of automatic machine scoring of the Index of Productive Syntax from the Computerized Language ANalysis (CLAN) tools of the Child Language Data Exchange System of TalkBank (MacWhinney, 2000) were compared to manual scoring to determine the accuracy of the machine-scored method.

Method: Twenty transcripts of 10 children from archival data of the Weismer Corpus from the Child Language Data Exchange System at 30 and 42 months were examined. Measures of absolute point difference and point-to-point accuracy were compared, as well as points erroneously given and missed. Two new measures for evaluating automatic scoring of the Index of Productive Syntax were introduced: Machine Item Accuracy (MIA) and Cascade Failure Rate; these measures further analyze points erroneously given and missed. Differences in total scores, subscale scores, and individual structures were also reported.

Results: Mean absolute point difference between machine and hand scoring was 3.65, point-to-point agreement was 72.6%, and MIA was 74.9%. There were large differences in subscales, with the Noun Phrase and Verb Phrase subscales generally providing greater accuracy and agreement than the Question/Negation and Sentence Structures subscales. There were significantly more erroneous than missed items in machine scoring, attributed to problems of mistagging of elements, imprecise search patterns, and other errors. Cascade failure resulted in an average of 4.65 points lost per transcript.

Conclusions: The CLAN program showed relatively inaccurate outcomes in comparison to manual scoring on both traditional and new measures of accuracy. Recommendations for improvement of the program include accounting for second exemplar violations and applying cascaded credit, among other suggestions. It was proposed that research on machine-scored syntax routinely report accuracy measures detailing erroneous and missed scores, including MIA, so that researchers and clinicians are aware of the limitations of a machine-scoring program.

Supplemental Material: https://doi.org/10.23641/asha.11984364
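To make the reported accuracy measures concrete, the sketch below compares hypothetical machine-scored and hand-scored IPSyn item points. The item codes, score dictionaries, and metric definitions are illustrative assumptions, simplified stand-ins rather than the exact formulas of Roberts, Altenberg, and Hunter or the output format of the CLAN tools.

# Illustrative comparison of machine vs. hand IPSyn item scores.
# Item-level scores are assumed to be dicts mapping item codes to points
# (0, 1, or 2); these definitions are simplified stand-ins, not the
# measures as defined in the article.

def absolute_point_difference(machine_total: int, hand_total: int) -> int:
    """Absolute difference between machine and hand total scores."""
    return abs(machine_total - hand_total)

def point_to_point_agreement(machine: dict, hand: dict) -> float:
    """Proportion of items on which machine and hand scores match exactly."""
    items = set(machine) | set(hand)
    matches = sum(machine.get(i, 0) == hand.get(i, 0) for i in items)
    return matches / len(items)

def erroneous_and_missed(machine: dict, hand: dict) -> tuple[int, int]:
    """Points the machine gave that the hand scorer did not (erroneous),
    and points the hand scorer gave that the machine missed."""
    items = set(machine) | set(hand)
    erroneous = sum(max(machine.get(i, 0) - hand.get(i, 0), 0) for i in items)
    missed = sum(max(hand.get(i, 0) - machine.get(i, 0), 0) for i in items)
    return erroneous, missed

# Hypothetical Verb Phrase items from one transcript
machine_scores = {"V1": 2, "V2": 2, "V3": 0, "V4": 1}
hand_scores = {"V1": 2, "V2": 1, "V3": 1, "V4": 1}

print(point_to_point_agreement(machine_scores, hand_scores))  # 0.5
print(erroneous_and_missed(machine_scores, hand_scores))      # (1, 1)

Under these simplified definitions, point-to-point agreement counts exact item matches, while the erroneous/missed split mirrors the distinction the abstract draws between points the machine awarded in error and points it failed to award.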


2003; Vol 2 (1); pp. 125-126
Author(s): C. Prontera, C. Passino, A. Iervasi, G. Zucchelli, A. Clerico, ...

2018
Author(s): Kate Flavin, Clare Morkane, Sarah Marsh

2006
Author(s): Shyam Balasubramanian, Cyprian Mendonca, Colin Pinnock

1978; Vol 17 (03); pp. 157-161
Author(s): F. T. De Dombal, Jane C. Horrocks

This paper uses simple receiver operating characteristic (ROC) curves (i) to study the effect of varying the computer confidence threshold levels and (ii) to evaluate clinical performance in the diagnosis of acute appendicitis. Over 1300 patients presenting to five centres with abdominal pain of short duration were studied in varying detail. Clinical and computer-aided diagnostic predictions were compared with the "final" diagnosis. From these studies it is concluded that the simplistic setting of a 50/50 confidence threshold for the computer program is as "good" as any other. The proximity of a computer-aided system changed clinical behaviour patterns; a higher overall performance level was achieved, and clinicians' performance levels became associated with the "mildly conservative" end of the computer's ROC curve. Prior forecasts of over-confidence or ultra-caution amongst clinicians using the computer-aided system have not been fulfilled.
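As a concrete illustration of the threshold sweep the abstract describes, the sketch below computes sensitivity and false positive rate at several confidence thresholds, the coordinates from which a simple ROC curve is drawn. The probabilities, labels, and threshold values are hypothetical; they do not reproduce the 1978 abdominal-pain series or the original computer-aided diagnosis program.

# Illustrative sketch: sweeping a diagnostic program's confidence threshold
# and reading sensitivity / false positive rate off the resulting ROC points.
# All numbers below are made up for demonstration.

from typing import List, Tuple

def roc_points(probs: List[float], labels: List[int],
               thresholds: List[float]) -> List[Tuple[float, float, float]]:
    """Return (threshold, sensitivity, false positive rate) for each threshold.

    probs  : predicted probabilities of appendicitis from the program
    labels : final diagnoses (1 = appendicitis, 0 = other cause)
    """
    points = []
    for t in thresholds:
        tp = sum(p >= t and y == 1 for p, y in zip(probs, labels))
        fp = sum(p >= t and y == 0 for p, y in zip(probs, labels))
        fn = sum(p < t and y == 1 for p, y in zip(probs, labels))
        tn = sum(p < t and y == 0 for p, y in zip(probs, labels))
        sensitivity = tp / (tp + fn) if tp + fn else 0.0
        fpr = fp / (fp + tn) if fp + tn else 0.0
        points.append((t, sensitivity, fpr))
    return points

# Hypothetical computer confidence outputs and final diagnoses
probs = [0.92, 0.80, 0.65, 0.55, 0.40, 0.30, 0.20, 0.10]
labels = [1, 1, 0, 1, 0, 0, 1, 0]

for t, sens, fpr in roc_points(probs, labels, [0.3, 0.5, 0.7]):
    print(f"threshold={t:.1f}  sensitivity={sens:.2f}  FPR={fpr:.2f}")

Plotting sensitivity against false positive rate across many thresholds traces the ROC curve; the paper's finding is that, on such a curve, the plain 50/50 threshold performs about as well as any alternative setting.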

