Pedagogic Exploration in Adapting Automatic Writing Evaluation Software into University Writing Classes

Author(s): Wei-Yan Li


2021 · Vol 27 (1) · pp. 41
Author(s): Meilisa Sindy Astika Ariyanto, Nur Mukminatien, Sintha Tresnadewi

Automated Writing Evaluation (AWE) programs have emerged as the latest trend in EFL writing classes. AWE programs act as a supplement to teacher feedback, offering automated suggestions and corrections for students' linguistic errors in grammar, vocabulary, and mechanics. As different AWE brands need to be better understood in relation to different levels of students, this research sheds light on six university students' views of one AWE program, ProWritingAid (PWA). The six students are categorized as having high or low writing achievement. This descriptive study delineates the students' perceptions qualitatively, with data collected through a semi-structured interview. The findings suggest the students held positive views of PWA: it made class time more effective; it provided useful feedback on grammar, vocabulary choices, and mechanics; and it built students' confidence in their compositions. In addition, for different reasons, the students engaged differently with PWA to enhance their drafts, e.g., using it only for first drafts or for both first and final drafts. Finally, despite the students' constructive views of PWA, there was a risk that students engaged with the program only superficially by accepting its corrections directly.


2015 · Vol 3 (2) · pp. 101
Author(s): Xu Shao, Jingyu Zhang

The efficacy of grammar correction (GC) in second language (L2) writing classes has been the subject of much controversy, and the field seems to take for granted Ferris's (1999) generalization that students believe in GC and want to receive it. To test Ferris's generalization, this study examines Chinese students' perceptions of GC in their English writing. The results of a questionnaire administered to six groups of university students at three proficiency levels, both English majors and non-English majors, show ambivalent perceptions of GC. On the one hand, all learners believe GC has obvious effects and can improve their accuracy in L2 writing. On the other hand, they all agree that GC alone is not enough to improve learners' writing ability and that the time spent on GC should be reallocated to training other writing skills. All groups gave negative to uncertain answers about GC, though perception patterns differed by major: English-major groups' mean expectation scores for GC increased with their English level, while those of non-English-major groups decreased. These results provide strong evidence for Truscott's (1996) view that GC should be abandoned. We believe that the different perceptions of GC shown by English and non-English majors stem from the fact that the former receive more systematic grammar instruction than the latter. The ambivalent perceptions of GC originate in the fact that grammatical accuracy carries considerable weight in various writing evaluation systems.


ReCALL · 2021 · pp. 1-13
Author(s): Aysel Saricaoglu, Zeynep Bilki

Automated writing evaluation (AWE) technologies are common supplementary tools for helping students improve their language accuracy through automated feedback. In most existing studies, AWE has been implemented as a class activity or an assignment requirement in English or academic writing classes; its potential as a voluntary language learning tool is unknown. This study reports on the voluntary use of Criterion by English as a foreign language (EFL) students in two content courses for two assignments. We investigated (a) to what extent students used Criterion and (b) to what extent their revisions based on automated feedback increased the accuracy of their writing from the first submitted draft to the last in both assignments. We analyzed students' performance summary reports from Criterion using descriptive statistics and non-parametric statistical tests. The findings showed that not all students used Criterion or resubmitted a revised draft. However, engagement with automated feedback significantly reduced users' errors from the first draft to the last in 11 error categories in total across the two assignments.
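For readers unfamiliar with this kind of analysis, the sketch below shows one common way a paired, non-parametric comparison of first-draft versus last-draft error counts might be run in Python with SciPy. It is a minimal illustration only: the invented error counts, the choice of the Wilcoxon signed-rank test, and the per-category framing are assumptions for demonstration, not the authors' reported procedure or data.

```python
# Hypothetical illustration: paired, non-parametric comparison of per-student
# error counts between a first and a last draft. All numbers are invented.
from scipy.stats import wilcoxon

# Error counts in one (hypothetical) error category, one value per student,
# paired across drafts.
first_draft_errors = [12, 9, 15, 7, 11, 14, 8, 10]
last_draft_errors = [5, 4, 9, 3, 6, 8, 4, 7]

# The Wilcoxon signed-rank test handles paired samples without assuming
# normally distributed differences, a typical reason to go non-parametric
# with small samples of count data.
stat, p_value = wilcoxon(first_draft_errors, last_draft_errors)
print(f"W = {stat}, p = {p_value:.4f}")
```

In a study like the one described, a test of this kind would be repeated per error category, with a significant result indicating that errors decreased reliably between drafts.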

