Objective versus subjective methods to assess discipline-specific knowledge: a case for Extended Matching Questions (EMQs)

Author(s):  
Robyn Maree Slattery

Background: Extended matching questions (EMQs) were introduced as an objective assessment tool into third-year immunology undergraduate units at Monash University, Australia. Aim: The performance of students examined objectively by multiple choice questions (MCQs) was compared with their performance assessed by EMQs; the correlation coefficient between the two methods was high. EMQs were then introduced, and the correlation of student performance between related units was measured as a function of the percentage of objective assessment. The correlation of student performance between units increased proportionally with objective assessment. Student performance in tasks assessed objectively and subjectively was then compared. The findings indicate that marker bias contributes to the poor correlation between marks awarded objectively and subjectively. Conclusion: EMQs are a valid method for the objective assessment of students, and their increased inclusion in the assessment process increases the consistency of student marks. The subjective assessment of science communication skills introduces marker bias, indicating a need to identify, validate and implement more objective methods for their assessment. Keywords: Extended matching question (EMQ); Objective assessment (OA); Subjective assessment (SA); Marker bias; Discipline-specific assessment; Science communication assessment
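The abstract's central quantity is a correlation coefficient between paired per-student marks under the two assessment formats. As a minimal illustration only (the paper does not publish its raw marks, so the values below are hypothetical, and `pearson_r` is an illustrative helper, not the authors' code), Pearson's r over paired MCQ and EMQ percentages could be computed like this:

```python
# Illustrative sketch: Pearson correlation between per-student MCQ and EMQ
# marks. All mark values are hypothetical, not data from the paper.
import statistics

def pearson_r(xs, ys):
    """Pearson correlation coefficient for paired samples."""
    mx, my = statistics.fmean(xs), statistics.fmean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

mcq_marks = [62, 74, 81, 55, 90, 68, 77]   # hypothetical per-student MCQ %
emq_marks = [65, 70, 85, 58, 88, 71, 74]   # hypothetical per-student EMQ %
print(round(pearson_r(mcq_marks, emq_marks), 3))  # → 0.96
```

A value near 1 for paired marks is what the abstract means by a "high correlation coefficient" between the two methods.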

Author(s):  
Ademir Garcia Reberti ◽  
Nayme Hechem Monfredini ◽  
Olavo Franco Ferreira Filho ◽  
Dalton Francisco de Andrade ◽  
Carlos Eduardo Andrade Pinheiro ◽  
...  

Abstract: Progress Test is an objective assessment, consisting of 60 to 150 multiple-choice questions, designed to promote an assessment of the cognitive skills expected at the end of undergraduate school. This test is applied to all students on the same day, so that it is possible to compare the results between grades and analyze the development of knowledge performance throughout the course. This study aimed to carry out a systematic literature review of the Progress Test in medical schools in Brazil and around the world, examining the benefits of its implementation for the development of learning for the student, the teacher and the institution. The review was conducted from July 2018 to April 2019 and covered articles published from January 2002 to March 2019. The keywords used were "Progress Test in Medical Schools" and "Item Response Theory in Medicine" in the PubMed, SciELO, and Lilacs platforms. There was no language restriction on article selection, although the searches were conducted in English. A total of 192,026 articles were identified, and after applying advanced search filters, 11 articles were included in the study. The Progress Test (PTMed) has been applied in medical schools, either alone or in groups of partner schools, since the late 1990s. The test results build the students' performance curves, which allow us to identify weaknesses and strengths of the students in the several areas of knowledge related to the course. The Progress Test is not exclusively an instrument for assessing student performance; it is also important as an assessment tool for academic management use, and it is therefore crucial that institutions take an active role in the preparation and analysis of this assessment data. Assessments designed to test clinical competence in medical students need to be valid and reliable. For the evaluative method to be valid, the subject must be extensively reviewed and studied, aiming at improvements and adjustments in test performance.
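One of the review's search terms, "Item Response Theory in Medicine", refers to the modeling framework commonly used to score and equate progress tests across grades. As a hedged sketch of the simplest such model, the one-parameter logistic (Rasch) model, the probability of a correct response depends only on the gap between student ability and item difficulty (the ability and difficulty values below are hypothetical, and `rasch_probability` is an illustrative name):

```python
# Minimal sketch of the one-parameter logistic (Rasch) IRT model often used
# to analyse progress-test items. Theta and b values here are hypothetical.
import math

def rasch_probability(theta, b):
    """Probability that a student of ability theta answers an item of
    difficulty b correctly, under the Rasch model."""
    return 1.0 / (1.0 + math.exp(-(theta - b)))

# A stronger student (theta = 1.0) attempting an easier item (b = -0.5):
print(round(rasch_probability(1.0, -0.5), 3))  # → 0.818
```

Fitting such a model to whole-cohort responses is what turns raw test scores into the comparable "performance curves" the abstract describes.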



2019 ◽  
Author(s):  
Abdullah Ahmed S Albarrak ◽  
Faisal A Alnefaie ◽  
Riyadh Almasoud

Abstract Background Multiple factors have challenged surgical training, leading to the development of instructional videos to accelerate proficiency in performing surgical procedures. However, their educational effect has not yet been objectively studied. This study aims to objectively assess the effect of instructional videos, alongside a subjective assessment of the self-efficacy of the training residents. The videos used were selected from YouTube, currently a valuable resource because of its ubiquitous availability and cost effectiveness. Methods A stratified randomized controlled trial was performed using an objective assessment tool for procedural knowledge, along with a questionnaire to assess the effect of the videos on the residents' perceived self-efficacy. Results Watching the instructional videos had a significant positive effect on procedural knowledge. Although residents reported a positive experience with the videos, there was no significant effect of the videos on the self-efficacy scores the residents reported. Conclusion Instructional videos improved the procedural knowledge of the residents. This positive finding on the use of YouTube videos offers educationalists the opportunity to select from thousands of existing videos instead of making the videos themselves.


Author(s):  
Amitabha Ghosh

A two-loop learning outcomes assessment process was followed to evaluate the core curriculum in Mechanical Engineering at Rochester Institute of Technology. This initiative, originally called the Engineering Sciences Core Curriculum, provided systematic course learning outcomes and assessment data of examination performance in Statics, Mechanics, Dynamics, Thermodynamics, Fluid Mechanics and Heat Transfer. This paper reports longitudinal data and important observations in the Statics-Dynamics sequence to determine efficacy and obstacles in student performance. An earlier paper showed that students' mastery of Dynamics is affected largely by weak retention of the fundamentals of Statics and mathematics. New observations recorded in this report suggest the need for better instructional strategies to teach certain focal areas in Statics. Subsequently offered Dynamics and Fluid Mechanics classes further need reinforcement of some of these fundamental topics in Statics. This report completes a nine-year broader feedback loop designed to achieve the educational goals in the Statics-Dynamics sequence.


2016 ◽  
Vol 40 (3) ◽  
pp. 304-312 ◽  
Author(s):  
Nicholas Cramer ◽  
Abdo Asmar ◽  
Laurel Gorman ◽  
Bernard Gros ◽  
David Harris ◽  
...  

Multiple-choice questions are a gold-standard tool in medical school for the assessment of knowledge and are the mainstay of licensing examinations. However, multiple-choice items can be criticized for lacking the ability to test higher-order learning or integrative thinking across multiple disciplines. Our objective was to develop a novel assessment that would address understanding of pathophysiology and pharmacology, evaluate learning at the levels of application, evaluation, and synthesis, and allow students to demonstrate clinical reasoning. The rubric assesses student write-ups of clinical case problems. The method is based on the physician's traditional post-encounter Subjective, Objective, Assessment and Plan note. Students were required to correctly identify subjective and objective findings in authentic clinical case problems, to ascribe pathophysiological as well as pharmacological mechanisms to these findings, and to justify a list of differential diagnoses. A utility analysis was undertaken to evaluate the new assessment tool by appraising its reliability, validity, feasibility, cost effectiveness, acceptability, and educational impact using a mixed-method approach. The Subjective, Objective, Assessment and Plan assessment tool scored highly in terms of validity and educational impact and had acceptable levels of statistical reliability, but was limited in terms of acceptance, feasibility, and cost effectiveness due to high time demands on expert graders and workload concerns from students. We conclude by making suggestions for improving the tool and recommend deploying the instrument for low-stakes summative assessment or formative assessment.
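For a rubric scored by human graders, "statistical reliability" is typically established with an inter-rater agreement statistic. The abstract does not name the statistic used, so as one common choice, Cohen's kappa for two graders can be sketched as follows (the rubric scores below are hypothetical, and `cohens_kappa` is an illustrative helper):

```python
# Illustrative sketch: Cohen's kappa, a common inter-rater reliability
# statistic for rubric-based scoring. The paper's abstract does not name
# the statistic it used; all ratings below are hypothetical.
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa: chance-corrected agreement between two raters."""
    n = len(rater_a)
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    counts_a, counts_b = Counter(rater_a), Counter(rater_b)
    expected = sum(counts_a[c] * counts_b[c] for c in counts_a) / n ** 2
    return (observed - expected) / (1 - expected)

a = [3, 2, 3, 1, 2, 3, 2, 1]   # hypothetical rubric scores, grader A
b = [3, 2, 2, 1, 2, 3, 2, 2]   # hypothetical rubric scores, grader B
print(round(cohens_kappa(a, b), 3))  # → 0.61
```

Kappa discounts the agreement two graders would reach by chance alone, which is why it is preferred over raw percent agreement for rubric scores.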


Author(s):  
Amitabha Ghosh

Dynamics is a pivotal class in a student's life-long learning profile, since it builds upon the logical extensions of the Statics and Strength of Materials classes and provides a framework on which Fluid Mechanics concepts may be developed for deformable media. This paper establishes the contextual reference of Dynamics in this framework. An earlier paper by the author discussed in detail how the design of proper multiple choice questions is critical for assessment in Statics and Fluid Mechanics. This paper provides a progress report of such evaluations in Dynamics. In addition, this paper explores the pedagogical issues related to building a student's learning profile. While comparing test results obtained in trailer sections of Dynamics with those obtained in sections taught by faculty teams, some structural differences were discovered. This report completes the feedback loop used by faculty in our Engineering Sciences Core Curriculum for improving student performance over time. The process may be developed further by using the similarities and differences in the performance data.


Author(s):  
M. Keefe ◽  
J. Glancey ◽  
N. Cloud

In general, assessing the learning process is difficult because objective measures are not readily available and the time needed for a full evaluation is considerable. This problem is perhaps exacerbated in team-based courses, where learning is largely unstructured and the body of knowledge expected to be learned is variable. Additional issues that complicate assessment include cross-disciplinary teams, project variability, and the involvement of external mentors, including industrial sponsors, guest lecturers, and consultants. Collaborative learning in a team setting is beneficial to improving undergraduate science and engineering courses; however, no specific assessment tool has been used to evaluate its validity. As a result, novel techniques need to be developed to assess the value of team-based learning. This paper describes the experiences and lessons learned in assessing student performance in team-based project courses culminating in a senior capstone experience that integrates industry-sponsored design projects. Analysis of assessment data collected over the last four years indicates that student performance, measured by faculty grades and industry sponsor evaluations, is not significantly affected by the faculty advisor, project type, or sponsoring company size. This is attributed to the focus on assessing student performance in executing the design process, and less on project results. However, faculty assessments of student performance do not correlate well with industry sponsor assessments.


1999 ◽  
Vol 14 (2) ◽  
pp. 80-82
Author(s):  
M. A. Elsharawy ◽  
L. A. Donaldson ◽  
A. K. Samy

Aim: The severity of varicose vein symptoms is no more than a subjective assessment of the underlying disease. The aim of this study was to use an objective method for assessing the severity of the condition. Methods: We describe a test based on measuring the venous reflux time (VRT) using hand-held Doppler (HHD). To evaluate the efficiency of this test, a prospective study of 61 consecutive primary varicose vein patients with sapheno-femoral incompetence was carried out. Patients were scored preoperatively by a self-assessment questionnaire. The score was compared with the VRT of the same patients. Six months after surgery, a similar self-assessment questionnaire was sent to all patients. Results: The VRT was found to have a highly significant relationship to the preoperative score (ρ = 0.73, p < 0.001). It was also found that most of the patients with a low score of ≤ 3 had a VRT of ≤ 13 s, whilst most with a high score of > 3 had a VRT of > 13 s (sensitivity 78%, specificity 100%, accuracy 84%, p < 0.0001). Only 41 patients responded to the postoperative questionnaire, giving symptom scores of 0 in 40 patients and a score of 1 in one patient. Conclusion: VRT is a simple, objective, non-invasive method of assessment of varicose veins, which relates strongly to the magnitude of the patients' symptoms.
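The reported sensitivity, specificity, and accuracy follow directly from a 2×2 confusion matrix for the 13 s VRT cut-off. As a hedged illustration (the paper does not publish its confusion-matrix counts, so the counts below are hypothetical values chosen merely to be consistent with the reported 78% / 100% / 84% at n = 61):

```python
# Illustrative recomputation of diagnostic metrics from a hypothetical
# confusion matrix consistent with the abstract (n = 61; "disease positive"
# = symptom score > 3, "test positive" = VRT > 13 s). Counts are not from
# the paper.
def diagnostic_metrics(tp, fn, tn, fp):
    """Sensitivity, specificity, and accuracy from confusion-matrix counts."""
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    accuracy = (tp + tn) / (tp + fn + tn + fp)
    return sensitivity, specificity, accuracy

sens, spec, acc = diagnostic_metrics(tp=35, fn=10, tn=16, fp=0)
print(round(sens, 2), round(spec, 2), round(acc, 2))  # → 0.78 1.0 0.84
```

The 100% specificity corresponds to zero false positives: in this sketch, no low-score patient has a VRT above the cut-off.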


2017 ◽  
Vol 12 (5) ◽  
pp. 331-336
Author(s):  
Lance Vincent Watkins

Purpose The purpose of this paper is to examine whether the current Royal College of Psychiatrists Membership (MRCPsych) written examination is a suitable assessment tool for distinguishing between candidates in a high-stakes examination. Design/methodology/approach Review of current educational theory and evidence relating to the use of multiple-choice questions (MCQs) as a form of assessment. Findings When MCQs are constructed correctly, they provide an efficient and objective assessment tool. However, when developing assessment tools for high-stakes scenarios, it is important that MCQs are used alongside other tests that scrutinize other aspects of competence. It may be argued that written assessment can satisfy only the first stage of Miller's pyramid; the evidence outlined demonstrates that this may not be the case, and that higher-order thinking and problem solving can be assessed with appropriately constructed questions. MCQs, or any other single assessment alone, cannot demonstrate clinical competence or mastery. Originality/value Increasingly, the MRCPsych examination is used around the world to establish levels of competency and expertise in psychiatry. It is therefore essential that the Royal College of Psychiatrists lead the way in innovating assessment procedures linked to current educational theory. The author has evidenced how the current MRCPsych may, at least in part, hold inherent biases that are unrelated to a candidate's competency.


2017 ◽  
Vol 32 (4) ◽  
pp. 1-17 ◽  
Author(s):  
Dianne Massoudi ◽  
SzeKee Koh ◽  
Phillip J. Hancock ◽  
Lucia Fung

ABSTRACT In this paper we investigate the effectiveness of an online learning resource for introductory financial accounting students using a suite of online multiple choice questions (MCQ) for summative and formative purposes. We found that the availability and use of an online resource resulted in improved examination performance for those students who actively used the online learning resource. Further, we found a positive relationship between formative MCQ and unit content related to challenging financial accounting concepts. However, better examination performance was also linked to other factors, such as prior academic performance, tutorial participation, and demographics, including gender and attending university as an international student. JEL Classifications: I20; M41.

