Determinants of Accounting Student Evaluations of Teaching Scores

2015 ◽  
Vol 8 (12) ◽  
pp. 26
Author(s):  
Saad S. Albuloushi ◽  
Mishari M. Alfraih

<p>Given the prevalent use of student evaluations of teaching (SET) as a measure of teaching effectiveness, this study aims to investigate the determinants of SET scores among students attending the College of Business Studies at the Public Authority for Applied Education and Training (PAAET), Kuwait. A total of 678 SET were analysed using univariate and multiple regression analyses. It was found that SET scores were significantly and positively biased by expected grade, student age and course level. In contrast, class size and faculty experience were found to be significantly and negatively related to SET. Expected grade had the strongest impact on SET scores. The study findings raise concerns about the reliability and validity of SET as well as their suitability for evaluation purposes. Because SET scores have an important assessment function and serve as formative and summative measures in personnel decisions, the incentives for faculty to compromise their grading standards to receive good teaching evaluations increase. Accordingly, administrators should devote more effort to ensuring a careful and complete understanding and interpretation of SET if they want to incorporate them effectively into the faculty evaluation process. To the authors’ knowledge, this is the first study to explore determinants of student evaluations of teaching scores in Kuwait.</p>

PeerJ ◽  
2017 ◽  
Vol 5 ◽  
pp. e3299 ◽  
Author(s):  
Bob Uttl ◽  
Dylan Smibert

Anonymous student evaluations of teaching (SETs) are used by colleges and universities to measure teaching effectiveness and to make decisions about faculty hiring, firing, re-appointment, promotion, tenure, and merit pay. Although numerous studies have found that SETs correlate with various teaching effectiveness irrelevant factors (TEIFs) such as subject, class size, and grading standards, it has been argued that such correlations are small and do not undermine the validity of SETs as measures of professors’ teaching effectiveness. However, previous research has generally used inappropriate parametric statistics and effect sizes to examine and evaluate the significance of TEIFs for personnel decisions. Accordingly, we examined the influence of quantitative vs. non-quantitative courses on SET ratings and SET-based personnel decisions using 14,872 publicly posted class evaluations, where each evaluation represents a summary of SET ratings provided by individual students responding in each class. In total, 325,538 individual student evaluations from a mid-size US university contributed to these class evaluations. The results demonstrate that class subject (math vs. English) is strongly associated with SET ratings and has a substantial impact on professors being labeled satisfactory vs. unsatisfactory and excellent vs. non-excellent, and that the impact varies substantially depending on the criteria used to classify professors as satisfactory vs. unsatisfactory. Professors teaching quantitative courses are far more likely not to receive tenure, promotion, and/or merit pay when their performance is evaluated against common standards.


2019 ◽  
Vol 49 (1) ◽  
pp. 85-103
Author(s):  
Luis Francisco Vargas-Madriz ◽  
Norma Nocente ◽  
Rebecca Best-Bertwistle ◽  
Sarah Forgie

Student Evaluations of Teaching (SET) have been the most consistently administered tool, and they are still extensively used in higher education institutions to assess teaching effectiveness. The purpose of this study was to explore how SET are used by administrators in the teaching evaluation process at a large, research-intensive Canadian university. A basic qualitative research design was used in this project, and semi-structured interviews were conducted to capture administrators’ experiences. The research question that guided this study was: How are SET (and other tools) used in the evaluation of teaching at this university? Findings showed that although participants mostly relied on a few SET statements as indicators of effective teaching, they were aware of the intrinsic issues with these tools and continually sought additional evidence when SET results fell below their benchmarks.


Author(s):  
Anne Boring ◽  
Kellie Ottoboni ◽  
Philip Stark

<p>Student evaluations of teaching (SET) are widely used in academic personnel decisions as a measure of teaching effectiveness. We show:</p><ul> <li>SET are biased against female instructors by an amount that is large and statistically significant</li> <li>the bias affects how students rate even putatively objective aspects of teaching, such as how promptly assignments are graded</li> <li>the bias varies by discipline and by student gender, among other things</li> <li>it is not possible to adjust for the bias, because it depends on so many factors</li> <li>SET are more sensitive to students’ gender bias and grade expectations than they are to teaching effectiveness</li> <li>gender biases can be large enough to cause more effective instructors to get lower SET than less effective instructors.</li></ul><p>These findings are based on nonparametric statistical tests applied to two datasets: 23,001 SET of 379 instructors by 4,423 students in six mandatory first-year courses in a five-year natural experiment at a French university, and 43 SET for four sections of an online course in a randomized, controlled, blind experiment at a US university.</p>


2011 ◽  
Vol 2 (3) ◽  
pp. 7
Author(s):  
Catherine S. Neal ◽  
Teressa Elliott

Because student evaluations of teaching effectiveness (SETEs) are an important and widely used tool in the evaluation and reward systems for faculty members in higher education, this paper provides a discussion and analysis of the ethical problems that may arise from the conflict created by performance expectations. The discussion focuses specifically on ethical issues related to setting course expectations and attendance policies to manipulate students’ perceptions of course rigor and the overall evaluation of the course and the instructor.


2014 ◽  
Vol 1 (1) ◽  
pp. 48
Author(s):  
Klarissa Lueg

<p>This paper proposes empirical approaches to testing the reliability, validity, and organizational effectiveness of student evaluations of teaching (SET) as a performance measurement instrument in knowledge management at the institutional level of universities. Departing from Weber’s concept of bureaucracy and critical responses to it, we discuss how contemporary SET are used as an instrument of organizational control at Danish universities. A discussion of the current state of performance measurement within the frame of new public management (NPM) and its impact on knowledge creation and legitimation forms the basis for proposing four steps of investigation. The suggested mixed-methods approach comprises the following: first, thematic analysis can serve as a tool to evaluate the legitimacy discourse as initiated by official SET-affirmative documents from government, university, and students. Second, constructs for the SET questionnaire can be developed and compared to existing SET questionnaires in terms of reliability and validity. Third, data from SET can be used to corroborate the relationship between the qualitative (comments) and quantitative (scaled questionnaire) sections. Fourth, it can be investigated whether SET actually contribute to teaching improvement by examining how the instrument is integrated into systematic ex-ante and ex-post organizational management. We expect to find a discrepancy between the proponents’ stated intent to evaluate teaching and the way the performance measurement instrument is implemented.</p>


Author(s):  
Robert E. Pritchard ◽  
Gregory C. Potter

Based on a detailed literature review and longitudinal analysis, this paper explores the possible underlying causes of the decline in the number of hours per week that graduating business seniors reported studying during their senior year. The study was conducted at an AACSB-accredited college of business at a regional university. It indicates that the decline in hours studied was likely an unintended result of a process designed to demonstrate continuous improvement in teaching, which used the Educational Testing Service’s SIR II student evaluation instrument as the only measure of teaching quality/effectiveness. The study concludes that this process may have pressured some instructors to sacrifice teaching rigor in an attempt to obtain more favorable student evaluations, thereby precipitating the decline in hours studied.

