A supervised method for unbiased peer-to-peer evaluation. An experience with engineering students
<p class="Textoindependiente21">Continuous evaluation is an assessment method with appealing advantages, but it increases the teacher’s workload and may be unfeasible for large classes.</p><p class="Textoindependiente21">New technologies can, of course, be used to implement automated assessments, but these are usually difficult to apply when a complex task, such as an engineering problem, must be judged.</p><p class="Textoindependiente21">An interesting alternative is peer-to-peer evaluation, in which the students themselves review each other’s work. One drawback, however, is that the resulting grades tend to be inflated. Although this is a well-known problem, little effort is usually devoted to solving it. In this work we propose a novel method to mitigate it: the teacher randomly supervises a fraction of the students’ evaluations.</p><p class="Textoindependiente21">In this paper we present the results of such an experience carried out in a Signal Processing course within a Robotics Engineering degree. More precisely, four different sets of problems were solved by the teacher in class; at the same time, the students peer-reviewed each other’s solutions, following the indications given by the professor. Later, when the random supervision was performed, a penalty was applied whenever a major flaw was detected in a student’s evaluation. Thanks to this strategy, the scores tended to become increasingly accurate with respect to the teacher’s criteria.</p><p class="Textoindependiente21">Finally, we also present the results of an anonymous survey completed by the students to assess this experience.</p>
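<p class="Textoindependiente21">The supervision mechanism described above can be sketched in a few lines of code. The following is a minimal illustration, not the authors’ actual implementation: the sampling fraction, flaw threshold, and penalty size are all hypothetical parameters chosen for the example.</p>

```python
import random

def supervised_peer_review(reviews, teacher_grades, reviewer_scores,
                           sample_fraction=0.25, flaw_threshold=2.0,
                           penalty=1.0, seed=0):
    """Randomly supervise a fraction of peer evaluations and penalize
    reviewers whose grade deviates too much from the teacher's.

    reviews:         {reviewer: grade that reviewer gave to the assigned work}
    teacher_grades:  {reviewer: teacher's own grade for that same work}
    reviewer_scores: {reviewer: the reviewer's current course score}

    All parameter values are illustrative assumptions, not taken
    from the paper.
    """
    rng = random.Random(seed)
    # Draw a random subset of evaluations for supervision.
    k = max(1, round(sample_fraction * len(reviews)))
    sampled = rng.sample(list(reviews), k)
    penalized = dict(reviewer_scores)
    for r in sampled:
        # A "major flaw": the peer grade differs from the teacher's
        # grade by more than the tolerance threshold.
        if abs(reviews[r] - teacher_grades[r]) > flaw_threshold:
            penalized[r] = max(0.0, penalized[r] - penalty)
    return penalized, sampled
```

<p class="Textoindependiente21">Because only a random fraction of evaluations is checked, the teacher’s workload stays bounded, while the expected cost of an inflated grade still discourages lenient reviewing.</p>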