Trends in International PISA Scores over Time: Which Countries Are Actually Improving?

2019, Vol 8 (8), pp. 231
Author(s): Kristie J. Rowley, Shelby M. McNeill, Mikaela J. Dufur, Chrisse Edmunds, Jonathan A. Jarvis

Many countries attempt to increase their Program for International Student Assessment (PISA) rankings and scores over time. However, although examining growth over multiple assessments provides a more accurate picture of achievement-based improvement across countries, few studies have done so systematically. Using data from the 2006, 2009, and 2012 PISA assessments, we analyzed which countries experienced significant increases in their country-level average PISA scores between 2006 and 2012. To facilitate improved policy decisions, we also examined what country-level conditions were associated with such increases. Contrary to expectations, we found that few countries significantly increased their PISA scores over time. Countries that did experience meaningful improvements in PISA scores were more likely to have had lower PISA scores in 2006 and to have experienced country-level foundational advancements more recently, such as advancing to a more democratic form of government and/or a higher income classification.
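
A minimal sketch of the kind of country-level comparison described above: given published country means and standard errors for two cycles, flag countries whose scores rose significantly. All column names and numbers are illustrative assumptions, and a full analysis would also handle PISA's plausible values, replicate weights, and link error.

```python
# Hedged sketch: flag countries whose mean PISA score rose significantly
# between two cycles, using country-level means and standard errors only.
import numpy as np
import pandas as pd
from scipy import stats

df = pd.DataFrame({
    "country":   ["A", "B", "C"],          # illustrative countries
    "mean_2006": [410.0, 495.0, 520.0],
    "se_2006":   [3.2, 2.8, 2.5],
    "mean_2012": [432.0, 498.0, 519.0],
    "se_2012":   [3.0, 2.9, 2.6],
})

diff = df["mean_2012"] - df["mean_2006"]
se_diff = np.sqrt(df["se_2006"] ** 2 + df["se_2012"] ** 2)  # independent cycles
z = diff / se_diff
df["gain"] = diff
df["p_value"] = 2 * stats.norm.sf(np.abs(z))                # two-sided z-test
df["sig_increase"] = (diff > 0) & (df["p_value"] < 0.05)
print(df[["country", "gain", "p_value", "sig_increase"]])
```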

Author(s): Markus Sauerwein, Désirée Theis

Abstract. In educational research, comparisons are often made between groups or of the development of various (latent) constructs over time (e.g., teaching quality in different countries, or different groups' (girls vs. boys) perceptions of teaching quality). However, before the results of such comparisons can be accurately interpreted, measurement invariance (MI) of the constructs under investigation needs to be established to ensure that their meaning remains consistent across groups, subjects, or assessment points. Thus, if mean-level changes are to be compared between groups, scalar factorial invariance needs to be established. In this chapter, we investigate and discuss how the results of MI analyses should be interpreted and whether they should be reported with regard to content. Using data on teaching quality from the well-known Programme for International Student Assessment (PISA) study, we introduce an approach to examining the conditions under which comparison among cultural groups is possible even if MI is lacking.
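
For readers unfamiliar with the invariance hierarchy the chapter refers to, the block below states the standard multi-group CFA conditions in conventional textbook notation; the symbols are generic and not taken from the chapter itself.

```latex
% Multi-group CFA for item j, person i, group g (generic notation):
\begin{aligned}
\text{Configural:}\quad & x_{ijg} = \tau_{jg} + \lambda_{jg}\,\eta_{ig} + \varepsilon_{ijg}
  && \text{(same factor pattern in every group)}\\
\text{Metric:}\quad     & \lambda_{jg} = \lambda_j \;\;\forall g
  && \text{(equal loadings)}\\
\text{Scalar:}\quad     & \lambda_{jg} = \lambda_j,\;\; \tau_{jg} = \tau_j \;\;\forall g
  && \text{(equal loadings and intercepts)}
\end{aligned}
```

Only under scalar invariance can observed mean differences be attributed to differences in the latent construct, which is why the chapter ties mean-level comparisons to scalar factorial invariance.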


2019, Vol 24 (3), pp. 231-242
Author(s): Herbert W. Marsh, Philip D. Parker, Reinhard Pekrun

Abstract. We simultaneously resolve three paradoxes in academic self-concept research with a single unifying meta-theoretical model based on frame-of-reference effects across 68 countries, 18,292 schools, and 485,490 15-year-old students. Paradoxically, but consistent with predictions, effects on math self-concept were negative for:

• being from countries where country-average achievement was high, explaining the paradoxical cross-cultural self-concept effect;
• attending schools where school-average achievement was high, demonstrating big-fish-little-pond effects (BFLPE) that generalized over 68 countries, Organisation for Economic Co-operation and Development (OECD)/non-OECD countries, high/low-achieving schools, and high/low-achieving students;
• year in school relative to age, unifying different research literatures on the negative effects associated with starting school at a younger age and with acceleration/grade skipping, and the positive effects associated with starting school at an older age ("academic redshirting") and, paradoxically, even with repeating a grade.

Contextual effects matter, resulting in significant and meaningful effects on self-beliefs, not only at the student (year-in-school) and local school level (BFLPE), but remarkably even at the macro-contextual country level. Finally, we juxtapose cross-cultural generalizability based on the Programme for International Student Assessment (PISA) data used here with generalizability based on meta-analyses, arguing that although the two approaches are similar in many ways, the generalizability shown here provides stronger support for the universality of frame-of-reference effects.
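
As a minimal illustration of the contextual specification behind a BFLPE-style finding, the sketch below fits a mixed model in which individual achievement and the school-average of achievement jointly predict self-concept. The simulated data and variable names are assumptions for illustration only, not the article's actual multilevel models.

```python
# Hedged sketch of a big-fish-little-pond-style contextual model:
# individual achievement predicts self-concept positively, while the
# school-average of achievement enters with a negative contextual effect.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n_schools, n_per_school = 50, 30
school = np.repeat(np.arange(n_schools), n_per_school)
school_quality = rng.normal(0, 1, n_schools)[school]        # school-level draw
ach = school_quality + rng.normal(0, 1, school.size)        # individual achievement

df = pd.DataFrame({"school": school, "ach": ach})
df["school_mean_ach"] = df.groupby("school")["ach"].transform("mean")
df["self_concept"] = (0.5 * df["ach"] - 0.3 * df["school_mean_ach"]
                      + rng.normal(0, 1, len(df)))

model = smf.mixedlm("self_concept ~ ach + school_mean_ach",
                    df, groups=df["school"]).fit()
print(model.params)   # school_mean_ach coefficient should come out negative
```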


Author(s): Björn Högberg, Solveig Petersen, Mattias Strandh, Klara Johansson

Abstract. Students’ sense of belonging at school has declined across the world in recent decades, and more so in Sweden than in almost any other high-income country. However, we do not know the characteristics or causes of these worldwide trends. Using data on Swedish students aged 15–16 years from the Programme for International Student Assessment (PISA) between 2000 and 2018, we show that the decline in school belonging in Sweden was driven by a disproportionately large decline at the bottom of the distribution and was greatest for foreign-born students, students from disadvantaged social backgrounds, and low-achieving students. The decline cannot be accounted for by changes in student demographics or by observable characteristics of the school environment. It did, however, coincide with a major education reform characterized by an increased use of summative evaluation and an overall stronger performance orientation.
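
The distributional claim above (a larger decline at the bottom than at the top) can be illustrated with a simple percentile comparison across cycles. The belonging index and the simulated data below are stand-ins, not the PISA scale values used in the study.

```python
# Hedged sketch: compare selected percentiles of a belonging index across
# two cycles to see where in the distribution the decline is concentrated.
import numpy as np

rng = np.random.default_rng(1)
belong_2000 = rng.normal(0.0, 1.0, 4000)     # stand-in index, cycle 2000
belong_2018 = rng.normal(-0.2, 1.2, 4000)    # lower mean, wider spread, cycle 2018

for q in (10, 25, 50, 75, 90):
    p0 = np.percentile(belong_2000, q)
    p1 = np.percentile(belong_2018, q)
    print(f"P{q:>2}: 2000={p0:+.2f}  2018={p1:+.2f}  change={p1 - p0:+.2f}")
```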


2018, Vol 26 (2), pp. 213-226
Author(s): Jörg Blasius

Purpose: Evidence from past surveys suggests that some interviewees simplify their responses even in very well-organized and highly respected surveys. This paper aims to demonstrate that some interviewers, too, simplify their task by at least partly fabricating their data, and that, in some survey research institutes, employees simplify their task by fabricating entire interviews via copy and paste.

Design/methodology/approach: Using data from the principal questionnaires of the Programme for International Student Assessment (PISA) 2012 and the Programme for the International Assessment of Adult Competencies (PIAAC), the author applies statistical methods to search for fraudulent methods used by interviewers and by employees at survey research organizations.

Findings: The author provides empirical evidence for potential fraud performed by interviewers and employees of survey research organizations in several countries that participated in PISA 2012 and PIAAC.

Practical implications: The proposed methods can be used as early as the initial phase of fieldwork to flag potentially problematic interviewer behavior such as copying responses.

Originality/value: The proposed methodology may help to improve data quality in survey research by detecting fabricated data.
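
The abstract does not spell out the specific statistics used, so the sketch below only illustrates one simple screening idea in this spirit: flag interviewers whose cases contain many identical response patterns, which can indicate copy-and-paste fabrication. Column names and data are illustrative assumptions.

```python
# Hedged sketch: share of each interviewer's cases that duplicate another
# case of the same interviewer (a crude copy-and-paste indicator).
import pandas as pd

df = pd.DataFrame({
    "interviewer": ["I1", "I1", "I1", "I2", "I2", "I3"],
    "q1": [3, 3, 3, 2, 4, 1],
    "q2": [5, 5, 5, 1, 2, 4],
    "q3": [2, 2, 2, 5, 3, 3],
})

items = ["q1", "q2", "q3"]
df["pattern"] = df[items].astype(str).apply("-".join, axis=1)  # row as a string

dup_share = (df.groupby("interviewer")["pattern"]
               .apply(lambda s: s.duplicated(keep=False).mean())
               .rename("duplicate_share"))
print(dup_share.sort_values(ascending=False))  # I1 stands out: all cases identical
```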


Author(s): Pei-Yi Lin, Ching Sing Chai, Morris Siu-Yung Jong

The aim of the present study is twofold: (1) to identify a factor structure linking interest in broad science topics, perceived information and communications technology (ICT) competence, environmental awareness, and environmental optimism; and (2) to explore the relations between these variables at the country level. The first part of the aim is addressed using exploratory factor analysis with data from the Program for International Student Assessment (PISA) for 15-year-old students from Singapore and Finland. The results show that a comparable four-factor structure was verified in both countries. Correlation analyses and linear regression were used to address the second part of the aim. The results show that adolescents’ interest in broad science topics predicts perceived ICT competence, and that interest in broad science topics and perceived ICT competence predict environmental awareness in both countries. However, the two countries differ in what predicts environmental optimism: for Singaporean students, interest in broad science topics and perceived ICT competence are positive predictors, whereas environmental awareness is a negative predictor; for Finnish students, environmental awareness negatively predicted environmental optimism.
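
A minimal sketch of the two analytic steps described above: an exploratory factor analysis followed by a regression among the resulting factor scores. The random data, the number of items, and the factor labels are assumptions for illustration, not the PISA item responses analyzed in the study.

```python
# Hedged sketch: EFA on questionnaire items, then a regression among scores.
import numpy as np
from sklearn.decomposition import FactorAnalysis
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(2)
X = rng.normal(size=(500, 16))                 # stand-in for 16 questionnaire items

fa = FactorAnalysis(n_components=4, rotation="varimax", random_state=0)
scores = fa.fit_transform(X)                   # assumed order: interest, ICT,
interest, ict, awareness, optimism = scores.T  # awareness, optimism

# Does interest in science topics predict perceived ICT competence?
reg = LinearRegression().fit(interest.reshape(-1, 1), ict)
print("slope:", reg.coef_[0])
```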


2017, Vol 36 (7), pp. 709-724
Author(s): Lazar Stankov, Jihyun Lee, Matthias von Davier

We examine the construct validity of the anchoring method used with 12 noncognitive scales from the Programme for International Student Assessment (PISA) 2012 project. This method combines individuals’ responses to vignettes with self-rated scores based on Likert-type items. It has been reported that the use of anchoring vignettes can reverse country-level correlations between academic achievement scores and noncognitive measures from negative to positive, and therefore align them with the typically reported individual-level correlations. Using the PISA 2012 data, we show that the construct validity of this approach may be open to question because the anchored scales produce a different set of latent dimensions than the nonanchored scales, even though both sets of scales were created from the same individual responses. We also demonstrate that only one of the three vignettes may be responsible for the resolution of the “paradox,” highlighting that the choice of vignettes may be more important than previously reported.
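
For context, the sketch below implements the usual nonparametric anchoring-vignette recode, in which a respondent's Likert self-rating is re-expressed relative to how that same respondent rated a set of ordered vignettes. This is a generic illustration of the anchoring idea, assumed here for exposition, not the exact PISA 2012 scoring the article examines.

```python
# Hedged sketch of a nonparametric anchoring-vignette recode.
def anchored_score(self_rating, vignette_ratings):
    """Rank the self-rating against the respondent's own vignette ratings
    (vignettes assumed ordered from 'worst' to 'best' scenario).
    With K vignettes the result lies on a 2K+1-point scale."""
    score = 1
    for v in vignette_ratings:
        if self_rating > v:
            score += 2          # strictly above this vignette
        elif self_rating == v:
            score += 1          # tied with this vignette
            break
        else:
            break               # below this vignette; stop
    return score

# A respondent rates three ordered vignettes 2, 3, 4 and themselves 3 (1-5 scale):
print(anchored_score(3, [2, 3, 4]))   # -> 4: above vignette 1, tied with vignette 2
```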


2015, Vol 117 (1), pp. 1-10
Author(s): Nancy Perry, Kadriye Ercikan

The Programme for International Student Assessment (PISA) was designed by the Organisation for Economic Co-operation and Development (OECD) to evaluate the quality, equity, and efficiency of school systems around the world. Specifically, PISA has assessed 15-year-old students’ reading, mathematics, and science literacy on a three-year cycle since 2000. PISA also collects information about how those outcomes are related to key demographic, social, economic, and educational variables. However, the preponderance of reports involving PISA data focus on achievement variables and cross-national comparisons of achievement variables. Challenges in evaluating the achievement of students from different cultural and educational settings, as well as data concerning students’ approaches to learning, motivation for learning, and opportunities for learning, are rarely reported. A main goal of this themed issue of Teachers College Record (TCR) is to move the conversation about PISA data beyond achievement to also include factors that affect achievement (e.g., SES, home environment, strategy use). We also asked authors to consider how international assessment data can be used to improve learning and education, and what inferences can and cannot appropriately be drawn from the data. In this introduction, we synthesize the six articles in this issue and the themes that cut across them. We also examine challenges associated with using data from international assessments, such as PISA, to inform education policy and practice within and across countries. We conclude with recommendations for collecting and using data from international assessments to inform research, policy, and teaching and learning.


2012, Vol 42 (2), pp. 259-279
Author(s): John Jerrim

Abstract. The Programme for International Student Assessment (PISA) and the Trends in International Mathematics and Science Study (TIMSS) are respected cross-national studies of pupil achievement. They have been specifically designed to study how countries’ educational systems are performing against one another and how this is changing over time. These are, however, politically sensitive issues, where different surveys can produce markedly different results. This is shown via a case study of England, where an apparent decline in PISA test performance has caused policymakers much concern. Results suggest that England's drop in the PISA ranking is not replicated in TIMSS, and that this contrast may be due to data limitations in both surveys. Consequently, I argue that the current coalition government should not base educational policies on the assumption that the performance of England's secondary school pupils has declined over the past decade.


2020
Author(s): Jose Marquez, Louise Lambert, Natasha Ridge, Stuart Walker

In most education systems, students with an immigrant background perform worse academically than native students. In the United Arab Emirates (UAE), however, the difference runs in the opposite direction, and the national-expatriate gap in academic competence is equivalent to almost three years of schooling. This gap is a concern in the UAE, where national students mainly attend public schools while expatriate students mostly attend private schools. To investigate the competence gap between national and expatriate students, we estimate group differences and conduct linear regression analyses using data from the 2018 Programme for International Student Assessment. Results show that the gap varies by emirate and country of origin and is greater among boys, among better-off students, and in private schools. Between 33% and 47% of the gap is explained by school type, whether public or private. We offer recommendations; however, in a country whose population is around 85% expatriate and whose education policy is still maturing, challenges remain, but addressing them may pave the way for other nations with large expatriate populations.
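
A minimal sketch of the decomposition logic behind the "33% to 47% explained by school type" result: compare the national-expatriate gap before and after controlling for public versus private school. The data and variable names below are simulated assumptions, not the PISA 2018 microdata used in the study.

```python
# Hedged sketch: how much of a raw group gap is accounted for by school type.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(3)
n = 2000
national = rng.integers(0, 2, n)                      # 1 = national student
private = np.where(national == 1,
                   rng.random(n) < 0.3,               # nationals mostly public
                   rng.random(n) < 0.9).astype(int)   # expatriates mostly private
score = 480 - 40 * national + 30 * private + rng.normal(0, 50, n)
df = pd.DataFrame({"score": score, "national": national, "private": private})

raw = smf.ols("score ~ national", df).fit().params["national"]
adj = smf.ols("score ~ national + private", df).fit().params["national"]
print(f"raw gap: {raw:.1f}, adjusted gap: {adj:.1f}, "
      f"share explained by school type: {1 - adj / raw:.0%}")
```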


2017, Vol 43 (3), pp. 316-353
Author(s): Simon Grund, Oliver Lüdtke, Alexander Robitzsch

Multiple imputation (MI) can be used to address missing data at Level 2 in multilevel research. In this article, we compare joint modeling (JM) and the fully conditional specification (FCS) of MI as well as different strategies for including auxiliary variables at Level 1 using either their manifest or their latent cluster means. We show with theoretical arguments and computer simulations that (a) an FCS approach that uses latent cluster means is comparable to JM and (b) using manifest cluster means provides similar results except in relatively extreme cases with unbalanced data. We outline a computational procedure for including latent cluster means in an FCS approach using plausible values and provide an example using data from the Programme for International Student Assessment 2012 study.
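
The sketch below illustrates only the "manifest cluster means" idea mentioned above: a Level-1 auxiliary variable is aggregated to cluster means and used when imputing a missing Level-2 variable with an FCS-style imputer. It is a deliberately simplified stand-in (sklearn's IterativeImputer, observed rather than latent means, no plausible values), not the procedure developed in the article.

```python
# Hedged sketch: manifest cluster means as auxiliary information for
# imputing a missing Level-2 variable with an FCS-style imputer.
import numpy as np
import pandas as pd
from sklearn.experimental import enable_iterative_imputer  # noqa: F401
from sklearn.impute import IterativeImputer

rng = np.random.default_rng(4)
n_clusters, n_per = 40, 25
cluster = np.repeat(np.arange(n_clusters), n_per)
x1 = rng.normal(0, 1, cluster.size)                  # Level-1 auxiliary variable

x1_mean = pd.Series(x1).groupby(cluster).mean().to_numpy()   # manifest cluster means
w = rng.normal(0, 1, n_clusters) + 0.8 * x1_mean             # Level-2 variable

l2 = pd.DataFrame({"x1_mean": x1_mean, "w": w})
l2.loc[rng.random(n_clusters) < 0.3, "w"] = np.nan           # 30% missing at Level 2

imputed = IterativeImputer(random_state=0).fit_transform(l2)
l2["w_imputed"] = imputed[:, 1]                              # column order: x1_mean, w
print(l2.head())
```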

