Using the 2018 AEA Evaluator Competencies for Effective Program Evaluation Practice

2020 ◽  
Vol 2020 (168) ◽  
pp. 75-97
Author(s):  
Laurie Stevahn ◽  
Dale E. Berger ◽  
Susan A. Tucker ◽  
Anna Rodell

2000 ◽  
Vol 10 (3) ◽  
pp. 331-339 ◽  
Author(s):  
Allison H. Fine ◽  
Colette E. Thayer ◽  
Anne Coghlan

2001 ◽  
Vol 62 (4) ◽  
pp. 307-315 ◽  
Author(s):  
Julie Brewer ◽  
Mark D. Winston

Academic libraries are turning increasingly to internship/residency programs to enhance their recruitment efforts. Yet, little evaluative information is available to measure the effectiveness of these programs or to justify funding for them. This article outlines the necessary components of an evaluation model for internship/residency programs based on a survey of academic library deans/directors and program coordinators. The study identifies the key evaluation factors that library administrators consider most important for measuring internship/residency programs, as well as the frequency, format, and sources of input for effective program evaluation.


1997 ◽  
Vol 18 (2) ◽  
pp. 127-135
Author(s):  
George A. Johanson

Differential item functioning (DIF) is not often seen in the literature on attitude assessment. A brief discussion of DIF and methods of implementation is followed by an illustrative example from a program evaluation, using an attitude-towards-science scale with 1550 children in grades one through six. An item exhibiting substantial DIF with respect to gender was detected using the Mantel-Haenszel procedure. In a second example, data from workshop evaluations with 1682 adults were recoded to a binary format, and it was found that an item suspected of functioning differentially with respect to age groups was, in fact, not doing so. Implications for evaluation practice are discussed.
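The Mantel-Haenszel procedure mentioned in the abstract compares the odds of endorsing an item for two groups (e.g., boys and girls) within strata matched on total score; a common odds ratio near 1.0 suggests no DIF. A minimal sketch, using fabricated counts rather than the study's data:

```python
# Hypothetical sketch of a Mantel-Haenszel DIF check (not the study's code).
# Each stratum is a 2x2 table: rows = group (reference, focal),
# cols = item response (endorse, not endorse).

def mantel_haenszel(strata):
    """Return the MH common odds ratio and the MH chi-square
    (with continuity correction) over a list of 2x2 tables."""
    num = den = 0.0                 # common odds-ratio accumulators
    sum_a = sum_ea = sum_va = 0.0   # chi-square accumulators
    for (a, b), (c, d) in strata:
        n = a + b + c + d
        num += a * d / n
        den += b * c / n
        row1, col1 = a + b, a + c
        sum_a += a                          # observed count in cell a
        sum_ea += row1 * col1 / n           # expected count under no DIF
        sum_va += (row1 * (c + d) * col1 * (b + d)) / (n * n * (n - 1))
    or_mh = num / den
    chi2 = (abs(sum_a - sum_ea) - 0.5) ** 2 / sum_va
    return or_mh, chi2

# Illustrative (fabricated) strata for one item, matched on total score:
strata = [
    ((30, 10), (20, 20)),   # low-score stratum
    ((40, 10), (30, 20)),   # middle stratum
    ((45, 5), (40, 10)),    # high-score stratum
]
or_mh, chi2 = mantel_haenszel(strata)
# or_mh near 1.0 suggests no DIF; chi2 above ~3.84 flags DIF at alpha = .05
```

Here the pooled odds ratio well above 1 and a large chi-square would flag the item for DIF, which is the kind of result the first example in the article reports for gender.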


2017 ◽  
Vol 32 (2) ◽  
Author(s):  
Ghislaine Hélène Tremblay ◽  
Frédéric Bertrand ◽  
Melissa Fraser

Rubrics are commonly used in the education sector to assess performance, products, or processes of student learning. Rubrics are also gaining importance in organizational performance and program evaluation practice. According to several evaluation practitioners, rubrics can elucidate how excellence and value are defined and applied to evaluation questions or indicators in a given context. This practice note summarizes a pilot project of the National Research Council Canada (NRC) that used evaluative rubrics to characterize relevance and generate conclusions in an evaluation.


2019 ◽  
Vol 3 (Supplement_1) ◽  
pp. S880-S881
Author(s):  
Gabriela Zaragoza ◽  
Jaime Cruz-Martinez ◽  
Paula Reilly ◽  
Jeanette Ross ◽  
Michael J Mader ◽  
...  

Abstract: Dementia awareness training alone does not improve care or outcomes for patients living with dementia. Effective dementia education programs for family caregivers and healthcare providers can lead to improved care practices and patient outcomes. The Dementia Immersion Simulation Experience (DISE) is a face-to-face, 2-hour educational program that includes simulation, videos, a virtual reality station, group debriefing, and a didactic session delivered by faculty with dementia caregiving expertise. The purpose of this project was to evaluate the effectiveness of DISE in a group of 48 interdisciplinary healthcare providers, trainees, and administrative staff. A program evaluation and pre- and post-activity knowledge questionnaires were administered. Before the activity, the mean score of all participants was 8.85; after the activity, it was 10.1 (p < 0.0001). Before DISE, 35.4% of participants were well informed about dementia; after the activity, 70.8% were (p < 0.0005). Qualitative analysis of the comments section of the program evaluation showed that 95% of participants mentioned empathy for those living with dementia. Participants rated DISE on a scale of 1 (Strongly Disagree) to 5 (Strongly Agree) across ten categories covering the objectives, relevance, effectiveness, and value of the learning experience. Over 95% of respondents agreed or strongly agreed (score of 4 or 5) with each evaluation statement, and at least 85% strongly agreed with each statement. These evaluation scores are further evidence of an effective program. DISE is an effective tool for teaching and supporting family caregivers, healthcare workers, healthcare professionals, and trainees.
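The abstract reports a pre/post mean comparison (8.85 vs. 10.1, p < 0.0001) but does not name the statistical test; a paired t-test on each participant's pre and post scores is one plausible analysis. A minimal sketch with fabricated scores, for illustration only:

```python
# Hedged sketch of a paired t-test (the study does not specify its test).
import math
import statistics

def paired_t(pre, post):
    """t statistic and degrees of freedom for a paired t-test."""
    diffs = [b - a for a, b in zip(pre, post)]
    n = len(diffs)
    mean_d = statistics.mean(diffs)      # mean pre-to-post gain
    sd_d = statistics.stdev(diffs)       # sample SD of the gains
    t = mean_d / (sd_d / math.sqrt(n))
    return t, n - 1

# Fabricated knowledge scores for illustration (not the study's data):
pre  = [8, 9, 8, 10, 9, 8, 9, 10, 8, 9]
post = [10, 10, 9, 11, 10, 10, 10, 11, 9, 11]
t, df = paired_t(pre, post)
# a large t with df = n - 1 corresponds to a small p-value for the gain
```

The key design point is pairing: because each participant serves as their own control, the test is run on within-person differences rather than on the two group means.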


2018 ◽  
Vol 10 (1) ◽  
pp. 114-117 ◽  
Author(s):  
Ingrid Philibert ◽  
John H. Beernink ◽  
Barbara H. Bush ◽  
Donna A. Caniano ◽  
John J. Coyle ◽  
...  

2005 ◽  
Vol 19 (1) ◽  
pp. 47-52
Author(s):  
A. Himelfarb ◽  
A. Lazar
