Computer-based assessment of student performance in programming courses

2011 ◽  
Vol 21 (4) ◽  
pp. 671-683 ◽  
Author(s):  
N. Kalogeropoulos ◽  
I. Tzigounakis ◽  
E. A. Pavlatou ◽  
A. G. Boudouvis


2019 ◽  
Vol 52 (4) ◽  
pp. 757-762
Author(s):  
Besir Ceka ◽  
Andrew J. O’Geen

The use of course-management software such as Blackboard, Moodle, and Canvas has become ubiquitous at all levels of education in the United States. A potentially useful feature of these products is the ability for instructors to administer assessments including quizzes and tests that are flexible, easy to customize, and quick and efficient to grade. Although computer-based assessments offer clear advantages, instructors might be concerned about their effect on student performance. This article evaluates whether student performance differs between handwritten and computer-based exams through a randomized field experiment conducted in a research methods course. Overall, our findings suggest a significant improvement in student performance on computer-based exams that is driven primarily by the relative ease of producing thorough responses on the computer versus by hand.
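Not the authors' actual procedure, but a minimal sketch of the kind of between-group comparison such a randomized design supports; all scores and group sizes below are hypothetical:

    import numpy as np
    from scipy import stats

    # Hypothetical exam scores (0-100) for the two randomly assigned groups.
    handwritten = np.array([72, 65, 80, 70, 68, 75, 74, 69])
    computer = np.array([78, 74, 85, 79, 72, 81, 80, 77])

    # Welch's two-sample t-test: does mean performance differ by exam format?
    t_stat, p_value = stats.ttest_ind(computer, handwritten, equal_var=False)
    print(f"mean difference = {computer.mean() - handwritten.mean():.1f} points, "
          f"t = {t_stat:.2f}, p = {p_value:.3f}")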


2013 ◽  
Vol 8 (2) ◽  
pp. 168-207 ◽  
Author(s):  
John H. Tyler

Student testing and the computer systems that store, manage, analyze, and report the resulting test data have grown hand in hand. Extant research on teacher use of electronically stored data is largely qualitative and focused on the conditions necessary (but not sufficient) for effective teacher data use. Absent from the research is objective information on how much and in what ways teachers use computer-based student test data, even when supposed precursors of usage are in place. This paper addresses this knowledge gap by analyzing the online activities of teachers in one mid-size urban district. Using Web logs collected between 2008 and 2010, I find low teacher interaction with Web-based pages that contain student test information that could potentially inform practice. I also find no evidence that teacher usage of Web-based student data is related to student achievement gains, but there is reason to believe these estimates are downwardly biased.
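A minimal sketch of how such a Web-log analysis might begin, assuming a hypothetical log format of (teacher, page) records; the IDs and URLs below are illustrative only, not the district's actual system:

    from collections import Counter

    # Hypothetical log records: (teacher_id, requested_page) pairs.
    web_log = [
        ("t01", "/reports/student_test_data"),
        ("t01", "/gradebook"),
        ("t02", "/reports/student_test_data"),
        ("t01", "/reports/student_test_data"),
    ]

    # Tally each teacher's visits to pages containing student test data.
    data_page_visits = Counter(
        teacher for teacher, page in web_log if page.startswith("/reports/")
    )
    print(data_page_visits)  # Counter({'t01': 2, 't02': 1})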


1974 ◽  
Vol 3 (1) ◽  
pp. 47-60 ◽  
Author(s):  
Samuel Spero

A computer-based system for the evaluation of instructional strategies and student performance is described. The system provides the teacher with student scores and statistical information, and it returns comments to students. Processing assumes a one-hour turnaround time. The hardware includes a Bell and Howell Mark Document Reader and a General Electric Terminet 1200 printer, connected by telephone to a remote processor.


1978 ◽  
Vol 7 (2) ◽  
pp. 109-117 ◽  
Author(s):  
James V. Schultz ◽  
Richard B. Friedman ◽  
Robert S. Newsom

Computer-based clinical simulations provide a means for health care students to practice independent choice and action without the constraints that patient safety would otherwise impose. A specific diagnostic model is described, along with its use in the training and evaluation of health care personnel. The educational viability of the model is examined in terms of a) the apparent reality of the simulation, b) the stability of student performance, c) the relationship of student performance on simulations to performance on more conventional testing methodologies, and d) the applicability of simulation to allied health care personnel. The implications of computer-based clinical simulations for students, faculty, and curriculum are discussed.


2007 ◽  
Vol 29 (9-10) ◽  
pp. 990-992 ◽  
Author(s):  
E. Robert Burns ◽  
Judith E. Garrett ◽  
Gwen V. Childs


1993 ◽  
Vol 9 (3) ◽  
pp. 397-412 ◽  
Author(s):  
Sylvester Upah ◽  
Rex Thomas

In this study of the learning of programming, two computer-based simulations (manipulative models) of program loops were compared with a computer-based tutorial combined with paper-and-pencil exercises. For the treatment group, one simulation was used prior to and one following classroom instruction on the WHILE-DO and REPEAT-UNTIL looping constructs. For the control group, the tutorial preceded classroom instruction, which was followed by the paper-and-pencil exercises. Students using the manipulative models were more successful in applying their knowledge of loops to a situation requiring transfer, but were no more successful on problems requiring interpretation or direct application. Previous programming experience did not produce a measurable effect on student performance on looping problems.
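The distinction between the two looping constructs studied here is that WHILE-DO tests its condition before each pass and REPEAT-UNTIL tests it after, so a REPEAT-UNTIL body always runs at least once. A short Python illustration (Python lacks a repeat-until construct, so it is emulated here):

    # WHILE-DO: the loop condition is tested BEFORE the body runs,
    # so the body may execute zero times.
    n = 0
    while n < 3:
        print("while-do pass", n)
        n += 1

    # REPEAT-UNTIL: the condition is tested AFTER the body runs,
    # so the body always executes at least once.
    n = 0
    while True:
        print("repeat-until pass", n)
        n += 1
        if n >= 3:  # "UNTIL n >= 3"
            break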


Author(s):  
Daniel Patrick Kelly ◽  
Teomara Rutherford

Khan Academy is a large and popular open educational resource (OER) with little empirical study of its impact on student achievement in mathematics when used in schools. In this study, we examined the use of Khan Academy as a mathematics intervention among seventh-grade students over a 4-week period versus a control group. We also compared differences between students who had supplemental mathematics instruction and those who had not. In both cases, we found no statistically significant differences in student test scores. Khan Academy has several internal metrics used to track student performance and use, and we found significant relationships between these metrics and student test scores in this study. Khan Academy and other OER provide access to information and knowledge to large numbers of the population. This research adds to the discourse on the methods by which Khan Academy and other OER may affect learners.
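As a rough illustration of relating an internal usage metric to test scores (not the study's actual metrics or data), a simple Pearson correlation in Python with hypothetical values:

    from scipy.stats import pearsonr

    # Hypothetical per-student values: one internal usage metric
    # (minutes of Khan Academy practice) and post-test scores.
    minutes = [30, 45, 12, 60, 25, 50, 40, 15]
    scores = [68, 75, 60, 82, 66, 78, 73, 58]

    # Pearson correlation between the usage metric and test scores.
    r, p = pearsonr(minutes, scores)
    print(f"r = {r:.2f}, p = {p:.3f}")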


Author(s):  
William F. Moroney ◽  
Steven Hampton ◽  
David W. Biers ◽  
Thomas Kirton

The in-flight performance of aviation students trained on PC-based training devices (TDs), using the "Elite" and "IFT" software packages, was compared to the in-flight performance of students trained in an FAA-approved generic training device (the Frasca 141). Seventy-nine students enrolled in an Instrument Flight Training Course were trained on one of the three TDs and then flew in a Mooney 20J. Instructors/evaluators used a form, based on criteria specified in the FAA's Practical Test Standards (PTS) for an Instrument Rating, to evaluate student performance on six maneuvers and two categories of general flight skills. Student performance was evaluated by course instructors and independent "Stage Check Pilots" during both the ground-based and in-flight portions of the course. For the factors evaluated, no significant difference was noted among the students taught in any of the TDs in either the number of trials per task or hours to instrument flight proficiency in the aircraft. However, compared to students trained on the Frasca, students trained on the PC-based TDs required significantly fewer hours and trials per task to reach the overall PTS in the TDs. Additionally, training received in the PC-based TDs cost 46% less than training received in the Frasca. Finally, the cost of the PC-based TDs, associated hardware, and software was 7.6% of the $60,000 cost of the Frasca. The authors recommend that steps be initiated to qualify PC-based TDs as Flight Training Devices, in which instrument-rating training credit can be accrued.
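For concreteness, the reported percentages imply a device cost of roughly $4,560; a quick arithmetic check using only the figures from the abstract above:

    # Costs implied by the percentages reported above.
    frasca_cost = 60_000                # reported cost of the Frasca 141 (USD)
    pc_td_cost = 0.076 * frasca_cost    # PC-based TD was 7.6% of that
    print(f"implied PC-based TD cost: ${pc_td_cost:,.0f}")  # $4,560

    relative_training_cost = 1 - 0.46   # PC-based training cost 46% less
    print(f"PC-based training cost: {relative_training_cost:.0%} of Frasca cost")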

