ACT program evaluation studies

2002 ◽  
Vol 18 (3) ◽  
pp. 229-241 ◽  
Author(s):  
Kurt A. Heller ◽  
Ralph Reimann

Summary: In this paper, conceptual and methodological problems of school program evaluation are discussed. The data were collected in conjunction with a 10-year cross-sectional/longitudinal investigation with partial inclusion of control groups. The experiences and conclusions resulting from this long-term study are revealing not only from the vantage point of the scientific evaluation of new scholastic models but also valuable for program evaluation studies in general, particularly in the field of gifted education.


Author(s):  
Betty Onyura ◽  
Hollie Mullins ◽  
Deena Hamza

Logic models are perhaps the most widely used tools in program evaluation work. They provide reasonably straightforward, visual illustrations of plausible links between program activities and outcomes. Consequently, they are employed frequently in stakeholder engagement, communication, and evaluation project planning. However, their relative simplicity comes with multiple drawbacks that can compromise the integrity of evaluation studies. In this Black Ice article, we outline key considerations and provide practical strategies that can help those engaged in evaluation work to identify and mitigate the limitations of logic models.  


1974 ◽  
Vol 2 (3) ◽  
pp. 311-327 ◽  
Author(s):  
Charles Windle ◽  
Rosalyn D. Bass ◽  
Carl A. Taube

2019 ◽  
Vol 42 (4) ◽  
pp. 196-204
Author(s):  
Janice I. Robbins

This article presents a view of barriers to effective gifted program evaluation resulting from ineffective tools for measuring growth in gifted students and from the human barriers that confound the evaluation process. The role of advocacy in the design, implementation, and utilization of evaluation studies is examined. Long-held beliefs and biases related to gifted education are recognized as influencing program evaluations. The strengths and challenges inherent in the educational role of specific stakeholder groups are presented. Suggestions for developing an emerging cadre of advocates for gifted education are detailed.


Education ◽  
2015 ◽  
Author(s):  
Jody Fitzpatrick

Program evaluation involves making use of social science research methods to judge the quality of a program or policy. It is typically designed to provide information about a program and its quality to program stakeholders, including funders; public administrators and policymakers; program managers, deliverers, and clients; or citizens in general. The purpose may be to help plan a program (needs assessment), to improve an existing program (formative evaluation), or to determine whether to continue or expand a program (summative evaluation). Program evaluation emerged in the United States with Lyndon Johnson’s Great Society and emerged in most European countries in the 1980s. Australia, New Zealand, and Canada have also been leaders in evaluation work. In the United States, most professional evaluators come from education and psychology. In Europe and some other countries, evaluators are more likely to come from the fields of political science and economics. These differences in disciplinary training interact with and influence the choice of programs to evaluate and the methods used in evaluation studies. Today, pressures for accountability and transparency have led to an expansion of evaluation around the world. Evaluation associations are emerging in Asia (Asia Pacific Evaluation Association, or APEA, 2012), Africa (African Evaluation Association, or AfrEA, 1999), and South America, with several regional and national associations. Evaluators differ from researchers in that they work with a client to define information needs and collect data to meet those needs, making use of qualitative, quantitative, and mixed methods as appropriate to the issues being addressed. Current issues in the field include a focus on outcomes, randomized control trials (RCTs), the role of evaluators in pursuing social justice, involving others in evaluation, building organizations’ and countries’ capacity for evaluation, and, a long-term concern, maximizing the use of evaluations.


Author(s):  
Saida Hajjaji ◽  
Mounir Zouiten

The evaluation of urban development programs is now a prerequisite for any initiative to improve their effectiveness. The United Nations designated 2015 as the International Year of Evaluation (EvalYear), a global initiative that aims to support the development of an enabling environment for evaluation at international, national, and local levels (UN, 2015). In Morocco, the situation is still characterized by a weak anchoring of the evaluation function in the political-institutional landscape, apart from a few sectoral mechanisms for collecting information and drawing up diagnoses. However, there is real awareness of this gap: the new Constitution of 2011 addresses the deficit and highlights the importance of evaluation in the management of public affairs. In this context, the Moroccan Ministry of Housing has initiated several evaluation studies on specific programs. Accordingly, we analyze three evaluation studies of urban development projects. The objective of our work is to verify to what extent the model of the program evaluation process developed by Hurteau and Houle (2006) was applied in the evaluation reports analyzed, and to issue a well-founded judgment. To do this, we translated the steps of the modeled evaluation process into indicators to create an analysis grid. Our study has a limitation: although the reports analyzed have the advantage of being almost uniform in content, this choice is biased because it does not provide an exhaustive representation of evaluation practice. Finally, the results of our study show that the practice of modeling the evaluation process is not uniform and that it would be important to develop and frame the practice of program evaluation.


2021 ◽  
Vol 11 (1) ◽  
pp. 74-86
Author(s):  
Gülsüm Çonoğlu ◽  
Fatma Orgun

The aim of this study is to evaluate the undergraduate curriculum of a nursing program according to nursing students' and instructors' opinions, using the Context, Input, Process, and Product (CIPP) model. This descriptive study was conducted between September 2017 and July 2018 with 448 students and 82 instructors of a faculty of nursing. The Student and Instructor Information Form and the Nursing Undergraduate Curriculum Evaluation Form (NUCEF) were used to collect data. The NUCEF consists of 50 items under four sub-dimensions: Context, Input, Process, and Product. The obtained data were analyzed using the SPSS 20.0 program; frequencies, percentages, means, and standard deviations were used in the data analysis. The level of satisfaction with the curriculum was found to be 4.47±2.09 for the students and 6.80±1.89 for the instructors. 42% of the students stated that they thought the program outcomes were not achieved, and 42.7% of the instructors stated that they thought the program outcomes were partially achieved. When the distribution of responses across the Context, Input, Process, and Product sub-dimensions of the NUCEF was examined, the mean item score of the students was found to be between 2.27±1.15 and 3.83±1.06, and the mean item score of the instructors was found to be between 2.08±1.06 and 4.06±0.72. Considering all the sub-dimensions of the NUCEF, students think that the nursing undergraduate curriculum is partially adequate, and instructors think that it is adequate. In conclusion, it is recommended that the current undergraduate curriculum be reviewed and revised, that continuous and systematic program evaluation and improvement studies be carried out, and that program evaluation studies in nursing education be increased.
