assessment use argument
Recently Published Documents

TOTAL DOCUMENTS: 4 (five years: 1)
H-INDEX: 2 (five years: 0)

2021 ◽  
Vol 12 ◽  
Author(s):  
Don Yao ◽  
Matthew P. Wallace

It is not uncommon for immigration-seekers to take various language tests for immigration purposes. Given the large-scale, high-stakes nature of these tests, the validity issues associated with them (e.g., appropriate score-based interpretations and decisions) are of great importance, as test scores may play a gate-keeping role in immigration. Though interest in investigating the validity of language tests for immigration purposes is growing, there has yet to be a systematic review of the research foci and results of this body of work. To address this need, the current paper critically reviews 11 validation studies on language assessment for immigration published over the last two decades, identifying what has been emphasized and what has been overlooked in the empirical research, and discussing current research interests and future research trends. The Assessment Use Argument (AUA) framework of Bachman and Palmer (2010), comprising four inferences (i.e., assessment records, interpretations, decisions, and consequences), was adopted to collect and examine evidence of test validity. Results showed that the consequences inference received the most attention, with studies focusing on immigration-seekers' and policymakers' perceptions of test consequences, while the decisions inference was the least examined, with attention limited to immigration-seekers' attitudes towards the impartiality of decision-making. It is recommended that further studies examine the perceptions of a wider range of stakeholders (e.g., test developers) and further investigate the fairness of decisions made on the basis of test scores. Additionally, the current AUA framework accounts only for the positive and negative consequences an assessment may engender and does not take compounded consequences into account; it is suggested that further research could enrich the framework in this respect. 
The paper sheds light on the field of language assessment for immigration and carries theoretical, practical, and political implications for different kinds of stakeholders (e.g., researchers, test developers, and policymakers).


2017 ◽  
Vol 36 (1) ◽  
pp. 125-144 ◽  
Author(s):  
Beata Beigman Klebanov ◽  
Chaitanya Ramineni ◽  
David Kaufer ◽  
Paul Yeoh ◽  
Suguru Ishizaki

Essay writing is a common constructed-response task in standardized writing assessments. However, the impromptu, timed nature of essay writing tests has drawn increasing criticism for lacking authenticity relative to real-world writing in classroom and workplace settings. The goal of this paper is to contribute evidence to a validity argument for standardized writing tests. Using measurements of distances between rhetorical profiles in the corpora of interest, we examined connections between argumentative writing on standardized assessments and in external writing situations, namely, opinionated writing in academic and real-life settings. The results show that the test corpora, which focus on argumentation in two standardized tests, are rhetorically similar to academic argumentative writing in a graduate-school setting and, to a comparable degree, to a corpus of civic writing in the same genre. The proximity between the test corpora and the corpora representing external criteria of interest supports the assessment use argument: the argumentative writing skills employed on the test are similar to the skills employed in academic and civic settings, despite the differences in the conditions under which the writing samples in these corpora were produced.


2012 ◽  
Vol 29 (4) ◽  
pp. 603-619 ◽  
Author(s):  
Huan Wang ◽  
Ikkyu Choi ◽  
Jonathan Schmidgall ◽  
Lyle F. Bachman
