Assessing Student Team Performance in Industry Sponsored Design Projects

2007 ◽  
Vol 129 (7) ◽  
pp. 692-700 ◽  
Author(s):  
M. Keefe ◽  
J. Glancey ◽  
N. Cloud

Although cooperative learning in a team setting is a common approach for integrating problem-based learning into undergraduate science and engineering, standard assessment tools do not exist to evaluate learning outcomes. As a result, novel techniques need to be developed to assess learning in team-based design projects. This paper describes the experiences and lessons learned in assessing student performance in team-based project courses culminating in a senior capstone experience that integrates industry-sponsored design projects. A set of rubrics linked to the instructional objectives was developed to define and communicate expectations during each of three project phases. Rubrics for each phase incorporate three fundamental areas of team performance assessment: (i) synthesis of a valid concept; (ii) management of resources; and (iii) interpersonal interaction and communication. At the end of each phase, both the faculty and the industry sponsor use the same rubric to assess student team performance. An analysis of variance (ANOVA) of the assessment data collected over the last 5 years indicated that student performance, measured by faculty grades and industry sponsor evaluations, was not significantly affected by the faculty advisor, project type, or sponsoring company size. These results are attributed primarily to the faculty focusing more on assessing student performance in executing the design process and less on the actual project results. The analysis also revealed that faculty assessments of student performance did not correlate well with industry sponsor assessments. To address this, a revised set of evaluation rubrics was developed and is currently being used to better articulate expectations from both faculty and industrial sponsor perspectives.
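As a rough illustration of the analysis this abstract describes, the sketch below runs a three-factor ANOVA on invented rubric scores. The column names, factor levels, and values are assumptions for illustration, not the authors' data.

```python
# Hypothetical sketch of the ANOVA described above: testing whether team
# scores vary with faculty advisor, project type, or sponsor company size.
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols

# Illustrative assessment records (one row per team); values are invented
df = pd.DataFrame({
    "score":        [88, 92, 79, 85, 90, 83, 76, 94, 81, 87, 89, 80],
    "advisor":      ["A", "A", "B", "B", "C", "C", "A", "B", "C", "A", "B", "C"],
    "project_type": ["design", "analysis", "design", "analysis", "design",
                     "analysis", "design", "analysis", "design", "analysis",
                     "design", "analysis"],
    "company_size": ["small", "large", "small", "large", "small", "large",
                     "large", "small", "large", "small", "large", "small"],
})

# Main-effects model; Type II sums of squares test each factor after the others
model = ols("score ~ C(advisor) + C(project_type) + C(company_size)", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))
```

A non-significant p-value for each factor would correspond to the paper's finding that none of the three factors shifted team grades.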

Author(s):  
M. Keefe ◽  
J. Glancey ◽  
N. Cloud

In general, assessing the learning process is difficult because objective measures are not readily available and the time needed for a full evaluation is considerable. This problem is perhaps exacerbated in team-based courses, where learning is largely unstructured and the body of knowledge expected to be learned is variable. Additional issues that complicate assessment include cross-disciplinary teams, project variability, and the involvement of external mentors, including industrial sponsors, guest lecturers, and consultants. Collaborative learning in a team setting is beneficial to improving undergraduate science and engineering courses; however, no specific assessment tool has been used to evaluate its validity. As a result, novel techniques need to be developed to assess the value of team-based learning. This paper describes the experiences and lessons learned in assessing student performance in team-based project courses culminating in a senior capstone experience that integrates industry-sponsored design projects. Analysis of assessment data collected over the last four years indicates that student performance, measured by faculty grades and industry sponsor evaluations, is not significantly affected by the faculty advisor, project type, or sponsoring company size. This is attributed to the focus on assessing student performance in executing the design process, and less on project results. However, faculty assessments of student performance do not correlate well with industry sponsor assessments.
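The weak faculty-sponsor agreement reported here is the kind of finding a simple correlation check makes concrete. The sketch below computes a Pearson correlation between the two rating sets; the scores are invented for illustration.

```python
# Minimal sketch of checking agreement between faculty and sponsor ratings
# of the same teams. All values are hypothetical.
from scipy import stats

faculty_scores = [90, 85, 78, 92, 88, 75, 83, 95]
sponsor_scores = [82, 90, 85, 80, 91, 88, 79, 84]

r, p = stats.pearsonr(faculty_scores, sponsor_scores)
print(f"Pearson r = {r:.2f} (p = {p:.3f})")
```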


2020 ◽  
Vol 12 (23) ◽  
pp. 9826
Author(s):  
Rosa Isusi-Fagoaga ◽  
Adela García-Aracil

The aim of this paper is to provide insights into the appropriateness of teaching-learning and evaluation processes using rubrics for student self-assessments. We studied students enrolled in the Master's in Secondary Education Teaching—Music Specialism course. In the Spanish secondary education system, music is seen as increasing equity and improving student performance in line with the Agenda 2030 Sustainable Development Goals. The training of new teachers and the ongoing professional development of the current teaching force are critical for improving the quality of education. We adopted an action-research approach and obtained feedback from the Master's students via questionnaires administered at the start and end of the process (pre- and post-test). Our results show that using rubrics as formative and shared assessment tools has a positive influence on students' perceptions of their acquisition of both transversal and specific competencies, as well as demonstrating the utility of rubrics for their future professional practice. However, rubrics on their own are not sufficient to improve learning and raise awareness among students.
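A minimal sketch of the pre/post comparison such a questionnaire design supports, assuming paired Likert-style self-ratings from the same students before and after the intervention. The numbers are hypothetical, not the study's data.

```python
# Paired t-test on students' self-rated competencies (1-5 Likert scale),
# comparing pre- and post-test responses. Ratings below are invented.
from scipy import stats

pre  = [2, 3, 2, 3, 4, 2, 3, 3, 2, 4]
post = [4, 4, 3, 4, 5, 3, 4, 4, 3, 5]

t, p = stats.ttest_rel(post, pre)
print(f"paired t = {t:.2f}, p = {p:.4f}")
```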


Author(s):  
Philip E. Doepker ◽  
Andrew P. Murray

This paper outlines many of our experiences in implementing the Product Realization Process (PRP) in industry-sponsored team design projects. There are three areas of emphasis. The first reviews the Product Realization Process as implemented in our senior design courses. The second presents and evaluates data on the time spent on each project phase and on the total project; this aspect has been studied previously with industry projects and found to be a useful way of evaluating projects relative to the PRP. The paper concludes with the lessons learned after 5 years of implementing these projects.
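As a sketch of the time-on-phase summary described above, the snippet below computes each phase's share of total project hours from a simple time log. The phase names and hours are placeholders, not the paper's data or the PRP's actual phase breakdown.

```python
# Summarize logged hours as a percentage of total project time per phase.
# Phase names and hour values are invented for illustration.
import pandas as pd

log = pd.DataFrame({
    "phase": ["concept", "concept", "design", "design", "build_test", "build_test"],
    "hours": [40, 35, 80, 95, 120, 110],
})

by_phase = log.groupby("phase")["hours"].sum()
print((100 * by_phase / by_phase.sum()).round(1).rename("percent_of_total"))
```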


Author(s):  
Natalie Inoue ◽  
Kristie Kaczmarek ◽  
Emily Chen ◽  
Hiroe Ohyama

2018 ◽  
Vol 20 (3) ◽  
pp. 381-389 ◽  
Author(s):  
Gabrielle Turner-McGrievy ◽  
Danielle E. Jake-Schoffman ◽  
Camelia Singletary ◽  
Marquivieus Wright ◽  
Anthony Crimarco ◽  
...  

Background. Wearable physical activity (PA) trackers are becoming increasingly popular for intervention and assessment in health promotion research and practice. The purpose of this article is to present lessons learned from four studies that used commercial PA tracking devices for PA intervention or assessment, to present issues encountered with their use, and to provide guidelines for determining which tools to use. Method. Four case studies are presented that used PA tracking devices (iBitz, Zamzee, FitBit Flex and Zip, Omron Digital Pedometer, Sensewear Armband, and MisFit Flash) in the field; two used the tools for intervention and two used them as assessment methods. Results. The four studies had varying levels of success with the PA devices and experienced several issues that impacted the studies, such as companies going out of business, missing data, and lost devices. Across the studies, the percentage of devices that were lost ranged from 0% to 29%, and the percentage that malfunctioned or lost data ranged from 0% to 87%. Conclusions. There is a need for low-cost, easy-to-use, accurate PA tracking devices for use as both intervention and assessment tools in health promotion research related to PA.
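The attrition figures above reduce to simple per-study rates. The sketch below shows the arithmetic with hypothetical device counts; the studies and numbers are invented, not the article's data.

```python
# Per-study loss and malfunction rates from raw device counts (hypothetical).
studies = {
    # study: (devices_issued, devices_lost, devices_malfunctioned)
    "study_1": (50, 0, 4),
    "study_2": (60, 17, 9),
    "study_3": (40, 3, 35),
    "study_4": (55, 6, 12),
}

for name, (issued, lost, bad) in studies.items():
    print(f"{name}: lost {100 * lost / issued:.0f}%, "
          f"malfunctioned {100 * bad / issued:.0f}%")
```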


2016 ◽  
Author(s):  
Jheng-Wun Su ◽  
Zhengwei Nie ◽  
Jiamin Wang ◽  
Yuyi Lin

1997 ◽  
Vol 13 (04) ◽  
pp. 225-241
Author(s):  
J. Paul Lemoine ◽  
Henry S. Marcus ◽  
Joseph A. Curcio

The use of teams is a critical part of Total Quality Leadership/Management and is a proven method for improving project performance in both civilian and Department of Defense applications. This paper considers whether the Navy should focus more attention on team-based project administration in the ship acquisition process. A series of case studies written at MIT is analyzed to describe how teams have functioned in recent ship acquisition projects. Successful aspects of team performance are discussed. The lessons learned from these case studies are compared with the theory on the value and implementation of teams. The use of ship acquisition teams is analyzed in relation to desired performance. Documented improvements in performance over previous contracts are presented. The paper concludes that the three cases examined were successful team efforts and, based on these projects, that the obstacles unique to the Navy ship acquisition process can be surmounted through partnering approaches, which should be considered for other yards and projects.


2010 ◽  
Vol 12 (3) ◽  
pp. 45-61 ◽  
Author(s):  
Kenneth David Strang

This case presents a best practice in higher education, whereby a balanced scorecard approach was used to assess the effectiveness of a distance education (online) course in an accredited business degree program at an Australian public university. The assessment rubric was created by applying the balanced scorecard concept (from management science) to measure student performance and satisfaction as well as content and delivery effectiveness. Performance was derived from course grades, while a validated survey instrument was used to gather student estimates of all other factors. One of the key lessons learned in the case was that, rather than reinvent the wheel, it was better to reuse accreditation surveys designed for the classroom to assess online courses and to leverage the management science philosophy of measuring more than just performance to evaluate program success. Similar scorecard concepts have already been applied in U.S. universities, and their differences from this case are also discussed.
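A hedged sketch of how a balanced-scorecard rubric of this kind might combine the four measures named above into one course score. The factor weights and values below are assumptions for illustration, not the instrument used in the case.

```python
# Combine grade-based performance with survey-based factors into a single
# weighted score, in the spirit of a balanced scorecard. Weights are invented.
weights = {
    "performance":  0.40,  # derived from course grades
    "satisfaction": 0.20,  # from the validated survey
    "content":      0.20,  # from the validated survey
    "delivery":     0.20,  # from the validated survey
}

# Factor scores normalized to a 0-100 scale (illustrative values)
scores = {"performance": 78.0, "satisfaction": 84.0, "content": 71.0, "delivery": 88.0}

overall = sum(weights[k] * scores[k] for k in weights)
print(f"balanced scorecard result: {overall:.1f}/100")
```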


2020 ◽  
pp. 183335832093680
Author(s):  
Heidi W Reynolds ◽  
Shannon Salentine ◽  
Eva Silvestre ◽  
Elizabeth Millar ◽  
Ashley Strahley ◽  
...  

Background: Evidence-based interventions are necessary for planning and investing in health information systems (HIS) and for strengthening those systems to collect, manage, sort and analyse health data to support informed decision-making. However, evidence and guidance on HIS strengthening in low- and middle-income countries have been historically lacking. Objective: This article describes the approach, methods, lessons learned and recommendations from 5 years of applying our learning agenda to strengthen the evidence base for effective HIS interventions. Methods: The first step was to define key questions about characteristics, stages of progression, and factors and conditions of HIS performance progress. We established a team and larger advisory group to guide the implementation of activities to build the evidence base to answer questions. We strengthened learning networks to share information. Results: The process of applying the learning agenda provided a unique opportunity to learn by doing, strategically collecting information about monitoring and evaluating HIS strengthening interventions and building a body of evidence. There are now models and tools to strengthen HIS, improved indicators and measures, country HIS profiles, documentation of interventions, a searchable database of HIS assessment tools and evidence generated through syntheses and evaluation results. Conclusion: The systematic application of learning agenda processes and activities resulted in increased evidence, information, guidance and tools for HIS strengthening and a resource centre, making that information accessible and available globally. Implications: We describe the inputs, processes and lessons learned, so that others interested in designing a successful learning agenda have access to evidence of how to do so.

