Student evaluation team focus groups increase students’ satisfaction with the overall course evaluation process

2016 ◽  
Vol 51 (2) ◽  
pp. 215-227 ◽  
Author(s):  
Katharina Brandl ◽  
Jess Mandel ◽  
Babbi Winegarden


Author(s):  
Katharina Brandl ◽  
Soniya V. Rabadia ◽  
Alexander Chang ◽  
Jess Mandel

In addition to online questionnaires, many medical schools use supplemental evaluation tools such as focus groups to evaluate their courses. Although some benefits of using focus groups in program evaluation have been described, it is unknown whether these in-person data collection methods provide sufficient additional information beyond online evaluations to justify them. In this study, we analyzed recommendations gathered from student evaluation team (SET) focus group meetings and assessed whether these items were captured in open-ended comments within the online evaluations. Our results indicate that online evaluations captured only 49% of the recommendations identified via SETs. Surveys of course directors indicated that 74% of the recommendations identified exclusively via the SETs were implemented within their courses. Our results indicate that SET meetings provided information not easily captured in online evaluations and that these recommendations resulted in actual course changes.


Curationis ◽  
2016 ◽  
Vol 39 (1) ◽  
Author(s):  
Ntefeleng E. Pakkies ◽  
Ntombifikile G. Mtshali

Background: Higher education institutions have implemented policies and practices intended to determine and promote good teaching. Students' evaluation of the teaching and learning process is seen as one measure of evaluating the quality and effectiveness of instruction and courses. Policies and procedures guiding this process are discernible in universities, but this is often not the case for nursing colleges.
Objective: To analyse and describe the views of nursing students on block evaluation, and how feedback obtained from this process was managed.
Method: A quantitative descriptive study was conducted amongst nursing students (n = 177) in their second to fourth year of training at one nursing college in KwaZulu-Natal. A questionnaire was administered by the researcher and data were analysed using the Statistical Package for the Social Sciences (SPSS) Version 19.0.
Results: The response rate was 81.9% (145 of 177). The participants perceived the aim of block evaluation as improving the quality of teaching and enhancing their experiences as students. They questioned the significance of their input as stakeholders, given that they had never been consulted about the development or review of the evaluation tool or the administration process, and they often did not receive feedback from the evaluations they participated in.
Conclusion: The college management should develop a clear organisational structure with supporting policies and operational guidelines for administering the evaluation process. The administration, implementation procedures, reporting of results and follow-up mechanisms should be made transparent and communicated to all concerned. Reports and actions related to these evaluations should feed back into the relevant courses or programmes.
Keywords: Student evaluation of teaching; perceptions; undergraduate nursing students; evaluation process


2015 ◽  
Vol 27 (5) ◽  
pp. 3-13
Author(s):  
Jacqueline E. McLaughlin ◽  
Amy Sloane ◽  
Elizabeth Billings ◽  
Mary T. Roth


Author(s):  
Seung Youn (Yonnie) Chyung ◽  
Stacey E. Olachea ◽  
Colleen Olson ◽  
Ben Davis

The College Advisory Program offered by Total Vision Soccer Club aims to provide young players with the opportunity to learn how to navigate the collegiate recruiting process, market themselves to college coaches, and increase their exposure to potential colleges and universities. A team of external evaluators (the authors of this chapter) conducted a formative evaluation to determine what the program needs to do to reach its goal. Following a systematic evaluation process, the evaluation team investigated five dimensions of the program and collected data by reviewing various program materials and conducting surveys and interviews with players and their parents, upstream stakeholders, and downstream impactees. By triangulating the multiple sources of data, the team concluded that most program dimensions were rated as mediocre, although the program had several strengths. The team provided evidence-based recommendations for improving the quality of the program.


2005 ◽  
Vol 5 (3-4) ◽  
pp. 115-120
Author(s):  
Suing-il Choi ◽  
J.Y. Yoon ◽  
S.K. Hong

Implementing an evaluation program is one practical means of encouraging waterworks to compete with one another for efficient operation. Such an evaluation program, coinciding with public notification, was prepared and implemented as a trial in Korea in 2003. However, several unexpected issues arose during the implementation, and the evaluation scheme has since been revised, as described in this paper. Water quality might better be used as a qualifying barrier for the subsequent evaluation process. It is also reasonable to consider that the evaluation team might visit on a highly atypical day for the plant; it might therefore be better to give the evaluation team enough time to assess plants more extensively. The collateral case study reports should be used as a manual to maintain consistency between evaluation teams. Although the case study took place in Korea, the principles revised through this experience may be beneficial to other countries.


2019 ◽  
Vol 19 (4) ◽  
pp. 204-210 ◽  
Author(s):  
Kerrie Ikin ◽  
Peter McClenaghan

In recent years, the New South Wales government education system changed the way whole-school evaluation occurs. Moving away from external school reviews triggered when data suggested underperformance, the system now requires principals to develop 3-year strategic school plans and self-evaluate them in consultation with their staff, parents and students. An external validation process is then undertaken by principal peers. The internal school process presumes a stakeholder-engagement approach to school planning and evaluation. It further presumes that stakeholders are not only consulted but also feel they understand and own the plan. One school principal, realising the challenges that the new model posed for him and his staff, engaged an evaluation team to develop and implement a process that would help his school rise to these challenges. This article describes the empowerment evaluation process that ensued. It first explains the context of the school that led to the choice of empowerment evaluation over other forms of stakeholder-engagement evaluation. It discusses how the literature on values underpinned the conceptual framework and operational model. The article then illustrates how the process enabled the staff to engage explicitly with personal and organisational values and how a focus on these values was built into every stage of the process. Finally, the benefits as well as the challenges of this approach are described.


1993 ◽  
Vol 76 (3) ◽  
pp. 995-1000 ◽  
Author(s):  
Patricia L. Dwinell ◽  
Jeanne L. Higbee

This research examined how 187 students assessed a course evaluation form, the anonymity of the evaluation process, the fairness and accuracy students attribute to the task of completing evaluations of instruction, and students' perceptions of the extent to which teachers and administrators make use of the information provided by evaluations. Ninety-two percent of the participants believed that the rating forms provided an effective means of evaluating instruction. The majority thought instructors pay attention to evaluation results and change their behavior accordingly. Only 2% believed that their anonymity was not protected. Students appeared to have more faith in their own evaluations than in those of other students. They also lacked confidence in the use of evaluations for determining salary increases or tenure and promotion.


1973 ◽  
Vol 10 (2) ◽  
pp. 115-124 ◽  
Author(s):  
Kent L. Granzin ◽  
John J. Painter

This study of the student course evaluation process discovered significant correlations between course ratings and variables representing commitment and course-end attitudes toward the course. It found relationships of lesser significance for attitude change measures, while demographics provided generally nonsignificant correlations. Stepwise regression equations developed for their power to predict course ratings relied most heavily on course-end attitude variables. Factor analysis of the variable set revealed 6 factors underlying the course evaluation structure studied, and this analysis guided formulation of new regression equations having reduced predictive power but greater independence among included predictor variables. Conclusions focused on the study’s contributions to understanding the course evaluation process and suggested steps an instructor might take to improve his ratings.


Author(s):  
David L. Jones ◽  
Roberto Champney ◽  
Par Axelsson ◽  
Kelly Hale

A primary goal of the usability evaluation process is to create interfaces that can be seamlessly integrated into current processes and create an enjoyable experience for the user. Given this, it is critical to capture user input to effectively drive product development and redesign. While many methods are available to usability practitioners, this paper highlights three techniques that can substantially enhance usability evaluation output. Specifically, this paper presents a method that combines focus groups, emotional profiling, and Kano analysis to define user needs, expectations, and desires; to explain why features of a product are liked or disliked; and to add structure to the prioritization of usability shortcomings and related redesign recommendations. A background on each method, the process for incorporating them into usability analyses, and guidelines for successful use are provided for usability practitioners.
