EI's Analysis of OECD's PISA 2006 Report
In general, PISA assesses the extent to which 15-year-old students near the end of compulsory education possess the key knowledge and skills for full participation in society. PISA 2006, in particular, focuses on understanding scientific concepts and applying them to real-life situations.
This approach aims to reflect the changing competencies required in modern labour markets and, more broadly, in societies - ranging from the application of new technologies to active citizenship. The main competencies measured by PISA are students' ability to identify scientific issues, to explain phenomena scientifically, and to use scientific evidence in real-life contexts. The report also measures students' knowledge about the natural world and technologies, and about science itself. Finally, students' attitudes towards science are evaluated.

PISA is not designed on the basis of national curricula and programmes; rather, it applies its own, innovative concepts to assess literacy and competencies in mathematics and science. Nor does PISA assess performance across the full range of education. This year's survey completes the first cycle of assessment of the three subject areas - reading (PISA 2000), mathematics (PISA 2003) and science (PISA 2006). Although the main focus of PISA 2006 is science literacy, the report still includes data on mathematics and reading, providing for the first time longitudinal comparisons with the results of the previous cycles in these subjects.

Originally, PISA covered only OECD countries; however, it has gradually extended its scope to include OECD partner countries as well. PISA 2006 encompasses 57 countries and economies worldwide (roughly 90% of the world economy). In PISA, students are randomly selected to sit pencil-and-paper tests. Around 400,000 students were randomly selected to participate in PISA 2006, representing about 20 million 15-year-olds in the schools of the 57 participating countries. At country level, the report covers a representative sample of between 3,500 and 50,000 15-year-old students per country. Most federal countries also draw regional samples. In this year's report, groups of students in some countries are asked to respond to additional questions using ICT.
These tasks require students both to develop their own answers and to answer multiple-choice questions. Furthermore, PISA 2006 asks students to provide information on their personal background, their learning habits and attitudes, and their motivation towards and engagement in science learning. Moreover, in this cycle school principals also complete a special questionnaire about their schools' characteristics (size, demographic composition of the student population, etc.) and learning environment.

PISA 2006 covers three main assessment areas: science, reading and mathematics. In each of these subjects, tasks ask students to demonstrate literacy (i.e. understanding of concepts), knowledge of the domain (i.e. physical systems, living systems, earth and space, technology), competencies (i.e. identifying scientific issues, explaining phenomena, using scientific evidence), and understanding of contexts and situations. On the basis of this methodology, PISA 2006 develops detailed student performance indicators and correlates them with background data about the students and schools; from those correlations the report draws policy conclusions.

Main results

Finland scores at the top of the integrated science scale, reaching 563 points, followed closely by traditionally high-performing countries in PISA (Hong Kong, Canada, Taipei, Japan, Australia, the Netherlands). Surprisingly, Estonia ranks 5th. At the bottom end of the scale, developing countries such as Kyrgyzstan, Qatar, Azerbaijan and Tunisia achieve less than 390 points.

Yet the picture is not that simple. First of all, results are presented not only as an aggregated country ranking in science, but also as rankings in the individual domains, in order to show the real distribution of scores across areas of science learning. For instance, Japan scores second among participating countries in some domains, while ranking ninth in others. Still, Finland takes the lead in all domains.
Moreover, results are presented in terms of the percentage of students reaching six different proficiency levels. These levels are constructed on the basis of students' ability to use science competencies. For example, the requirement for the two lowest levels (levels 1 and 2) is that students have adequate scientific knowledge and cognitive skills. These two levels, considered unsatisfactory by PISA, nevertheless represent a desirable outcome of science learning in many school systems. This illustrates the fact that PISA is not related as such to national standards and the contexts of education systems - and this is an issue.

The OECD approach is based on the assumption that modern labour markets increasingly demand highly skilled workers. Hence, the report devotes significant attention to students at the highest proficiency levels and to students' proportional distribution among levels. Data show great variation in the distributions across the different proficiency levels between countries. However, the gap in students' distribution across proficiency levels within countries is greater than the gap between countries.

Although there are differences between countries' performances, this gap is very narrow. Of the 30 OECD countries, 20 have scores within 25 points of the OECD average of 500, illustrating that results should be presented in clusters rather than in rankings. In the group of best-performing countries (Finland, Canada, Japan, New Zealand, Australia, Hong Kong, Taipei and Estonia), the difference in scores along the scale is between 527 and 542 points (i.e. only 15 points). However, there is a big gap between these average performers and countries at the bottom of the scale: below the score of Greece (473), the next result is 454. To illustrate the significance of these gaps, the report explains that a difference of 74.7 score points represents one proficiency level out of six.
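As an illustrative sketch of this conversion - our own calculation, not part of the report - a score-point gap can be re-expressed in proficiency levels using the report's figure of 74.7 points per level:

```python
# Illustrative conversion of PISA 2006 score-point gaps into
# proficiency levels, using the report's figure of 74.7 score
# points per proficiency level (the scale has six levels).
POINTS_PER_LEVEL = 74.7

def gap_in_levels(score_a: float, score_b: float) -> float:
    """Express the gap between two science scores in proficiency levels."""
    return abs(score_a - score_b) / POINTS_PER_LEVEL

# Finland (563) versus the OECD average (500): a 63-point gap,
# i.e. a bit under one full proficiency level.
print(round(gap_in_levels(563, 500), 2))
```

On this reading, even the distance between the top performer and the OECD average stays within a single proficiency level, which supports the report's suggestion that results are better presented in clusters than in strict rankings.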
The distribution of students across the different proficiency levels in individual countries appears to be the most significant data, as it relates to both quality and equity in the education system. For example, in Finland more than half of students are in the top three levels, while in Kyrgyzstan more than 80% are in levels 1 and 2. Nonetheless, all the countries at the average level and above show a broadly similar proportional distribution across levels. Among the 38 high-performing countries, no more than 25% of students are at proficiency level 1 and below, and no less than 70% are in levels 3 to 6.

As mentioned above, the report places great emphasis on labour market requirements as the leading motivation for science learning. This is illustrated by an analysis of how skill requirements in the US job market have evolved over generations: PISA 2006 arguably shows that the steepest decline over the past decades has been in routine cognitive tasks. Based on that analysis, the authors conclude that if students only learn to memorize and reproduce scientific knowledge and skills, they risk being prepared for jobs that are disappearing from many labour markets worldwide. The question remains: to what extent are labour market specifics of the developed world relevant to developing countries?

On the grounds of this assumption, PISA 2006 correlates students' performance in science with each country's research intensity (i.e. the number of researchers per thousand people employed). There is a close relationship between a country's proportion of students scoring at the highest levels and its research intensity. On the other hand, a large proportion of students at the bottom of the proficiency scale may be an indication of future problems for these students' integration into society.
However, the report provides neither a convincing correlation nor a causal link between better student performance and motivation for a science-related career, as we will illustrate further on. Another innovation of PISA 2006 is the assessment of students' comparative strengths and weaknesses in different knowledge domains.

Data suggest that countries with higher national incomes tend to perform better in science. A significant proportion of the variation in countries' mean scores can be predicted on the basis of their per capita GDP. However, this does not necessarily imply a causal relationship between the two - since many other factors are involved - yet countries with higher national incomes seem to have a relative advantage. For instance, according to the OECD data, Finland, with an average per capita GDP, is clearly the number one performer, while the US, with the highest per capita GDP, performs below the OECD average.

The report goes further, comparing actual spending per student with average performance in science. There is a positive relationship between the two, but it appears less significant than the correlation between national income and performance. New Zealand, for example, with expenditure per student similar to that of Portugal, is among the best performers. On the other hand, Norway's expenditure per student is close to that of the US and Switzerland, but its results are below the OECD average. Yet the aggregated ranking table clearly shows that among the 20 top performers, only 4 countries have a national income below the OECD average, while among the 20 countries at the bottom end of the scale, none has a per capita GDP that reaches the OECD average. In other words, money matters.

The gender gap in science performance tends to be small, both in absolute terms and compared to performance in mathematics and reading.
Nevertheless, there are differences between genders on several competency and knowledge scales. For example, females tend to outperform males in identifying scientific issues, while males outperform females in explaining phenomena scientifically.

PISA 2006 shows a clear difference between learning traditions. In Eastern and Central Europe, students perform better in terms of theoretical knowledge of science (i.e. scientific facts and concepts), while students in Western countries possess a better understanding of scientific processes (knowledge about science). However, this difference does not influence the overall performance of countries from one or the other learning tradition. It is difficult to assess to what extent high performance in school can predict the future success of students and countries.

Generally, students report positive attitudes towards, and engagement in, science. Nevertheless, the report suggests several reasons why governments should wish to develop these attitudes even further. For instance, students seem pessimistic about the capacity of technological innovations to improve social conditions and to solve environmental problems. Furthermore, only a minority of students report aspiring to a career in science, and the majority of these students come from low-performing countries. Hence, there is no direct correlation between performance and career expectations. Virtually all attitudes towards science can to some degree be associated with performance, but some attitudes show a stronger correlation: awareness of environmental issues and self-efficacy, for example, are strongly correlated with higher performance.

Quality and equity

PISA 2006 pays great attention not only to the quality of outcomes but also to the equity of their distribution - in other words, to learning opportunities.
To capture this, the report analyses socioeconomic background data (of both students and schools) and correlates them with learning outcomes. Differences between countries represent 28% of the overall variation in student performance; the remaining 72% is explained by differences between schools and within schools. This shows that inequality in the distribution of learning opportunities at country level is a key issue. According to the OECD, the variation in student performance is attributable to several causes: the socioeconomic background of students and schools, the way in which teaching is organized, the human and financial resources available, and system-level factors (curricula, policies, and so on). However, the OECD has an interest in demonstrating that even countries with disadvantaged socioeconomic conditions can achieve good performance (Poland being an example). Although the report does not show a direct causal link between a disadvantaged economic background and poor performance, socioeconomic background appears to play an important role.

PISA 2006 acknowledges that achieving an equitable distribution of learning opportunities is a key stated goal of public educational policies in most countries. Equity is compatible with quality: although the report does not establish a clear correlation between the two (while previous PISA reports did), it shows that countries can aspire to both; one does not exclude the other. According to the OECD, achieving an equitable distribution of learning opportunities should be a key policy goal for countries in order to avoid higher future social costs (for health care, income support, delinquency and social exclusion) for failing students.

The data show that there is, on average, a significant variance of results in all countries. But in some of them this variance is mostly between schools, while in others it is mostly between students within schools.
This can be explained in the light of both socioeconomic background and the organization of learning (whether or not schools group students by ability). Top-performing countries can be found both among those with high variance between schools (Germany, Slovenia, the Czech Republic, Hungary, Austria) and among those with high variance within schools but not between them (Finland, Estonia, Ireland, Canada, Australia, New Zealand). The OECD fails to establish causal links between performance and the 'nature' of variance (whether between or within schools). However, as the data show, there are more top performers among the countries with the least between-school variation. The report argues against the grouping of students in school (in particular, early tracking or streaming). Poland is mentioned as a significant case where avoiding tracking has arguably led to an improvement in performance over the three PISA cycles.

Immigrant status and student performance are clearly related. According to the report, between 1990 and 2000 the number of people living outside their country of birth nearly doubled worldwide, reaching 175 million. Among 15-year-olds, the proportion of students who are foreign born or who have foreign-born parents now exceeds 10% in Germany, Belgium, Austria, France, the Netherlands, Sweden, Croatia, Estonia and Slovenia, reaches 15% in the US, more than 20% in Switzerland, Australia, New Zealand and Canada, and 36% in Luxembourg. In such countries, immigrant students lag 58 points behind their native counterparts on average. This is a sizable difference, since 38 score points correspond, on average across the OECD, to one school year. However, there is no positive association between the size of the immigrant student population and the size of the performance difference between immigrant and native students.
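To put the figures above in perspective - an illustrative back-of-the-envelope calculation of our own, not taken from the report - a score-point gap can be converted into school years using the report's equivalence of 38 score points per school year:

```python
# Illustrative conversion: the report states that 38 score points
# correspond, on average across the OECD, to one school year.
POINTS_PER_SCHOOL_YEAR = 38

def gap_in_school_years(gap_points: float) -> float:
    """Express a score-point gap as an approximate number of school years."""
    return gap_points / POINTS_PER_SCHOOL_YEAR

# The average 58-point gap between immigrant and native students
# amounts to roughly one and a half years of schooling.
print(round(gap_in_school_years(58), 1))
```

Read this way, the immigrant-native gap is equivalent to about a year and a half of schooling, which underlines why the report treats it as a sizable difference.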
In some countries there are as many immigrant students as native students at the top levels 5 and 6 (Canada, New Zealand, Australia, Hong Kong), while in others there are far fewer immigrant students among top performers (the UK, the US, Denmark). In addition, at the bottom of the scale, 31% of second-generation immigrant students do not reach proficiency level 2 in science.

Socioeconomic background plays a role, too. Countries where student achievement is strongly affected by socioeconomic background (Portugal, Greece, Turkey, Mexico, etc.) also have below-average results. By contrast, the majority of top-performing countries show little or no impact of socioeconomic background on student performance (Finland, Canada, Japan, Korea).

PISA 2006 also suggests several policy directions to address both quality and equity. In particular, targeting low-performing schools and low-performing students within schools (through early prevention or recovery programmes for students at risk, for instance) can be a useful approach. Similarly, educational policies should target disadvantaged children through special curricula and additional resources. Finally, the expansion of education opportunities for all students, especially by raising education quality standards, is recommended (providing a full school day, increasing learning time, improving teaching techniques, etc.).

School and system characteristics

Even if school and system characteristics cannot provide precise policy prescriptions, they can point to educational policies correlated with high performance. Remarkably, the report admits that many important contextual factors cannot be captured by international comparative surveys like PISA, and thus cause and effect cannot be firmly established. Yet there are system and school factors associated with performance after accounting for demographic and socioeconomic backgrounds.
The way in which students are selected by and grouped within schools does make a difference to their results. Early tracking, for example, reinforces the impact of socioeconomic background on students' results, increasing the gap between low and high performers. Selectivity can be based on different criteria: students' residence, students' academic records, recommendations from feeder schools, parents' endorsement of the instructional or religious philosophy of the school, students' need or desire for a specific programme, or the past or present attendance of other family members at the school. In countries with the largest variety of programmes and subsequent tracking of students, socioeconomic background tends to have a significantly higher impact on performance, suggesting that stratification in education tends to be associated with socioeconomic segregation.

PISA 2006 argues that grade repetition negatively affects performance. While grouping students by ability may seem to benefit the performance of the best students, it may diminish the learning opportunities of low-performing students. Hence, the report recommends the creation of a homogeneous learning environment as a policy goal for both quality and equity. After accounting for home background, students in schools that practice no ability grouping outperform those in schools with ability grouping in countries such as the UK, Switzerland, Portugal, Germany, the Czech Republic, Sweden and Luxembourg.

Not surprisingly, schools reporting academic selectivity tend to perform better; however, this does not answer the question of how this benefits the education system as a whole. The report states that, while selective schools tend to perform better, school systems with a larger number of selective schools do not perform better than systems with a smaller number of selective schools - other factors being equal.
Higher performance is associated with privately funded schools and with schools competing for students. Nonetheless, this effect disappears when students' socioeconomic background is taken into account. On average across OECD countries, 4% of 15-year-olds are enrolled in schools that are both privately managed and privately funded. However, a much more common model for private schooling in OECD countries is that of schools that are privately managed but still receive a considerable share of public funding. While PISA 2006 recognizes that private schools do not tend to be superior once socioeconomic factors are accounted for, it suggests that private schools "still pose an attractive alternative for parents looking to maximize the benefits for their children, including being among the students of the same background" (p. 321).

Accountability is regarded as a key factor in improving school performance, in particular external accountability combined with external standards. According to the results, on average across countries, students subject to a standards-based external examination performed 36 score points higher - roughly equivalent to one school year. The OECD also points to the role of parents' involvement and pressure. On average, 21% of students are enrolled in schools whose principals reported constant pressure from parents expecting the school to set very high academic standards, while 32% are in schools that report no such pressure (including Finland, one of the best performers). Keeping track of student achievement at a public level also seems to be associated with high student performance: students in schools that post their results publicly perform 14.7 score points higher than students in schools that do not.

Countries where schools have greater autonomy in budgeting tend to perform better, even after accounting for school- and system-level factors as well as demographic and socioeconomic factors.
The association between different aspects of school autonomy and student performance within a given country is often weak, in many cases because decision-making responsibilities are established at the national level, so that there is little variation. Nevertheless, the OECD argues that in countries where school principals report higher degrees of autonomy in decision-making, average performance in science tends to be higher. For instance, autonomy to decide on course content accounts for 27% of performance differences, decisions on budget allocation within schools for 29%, free choice of textbooks for 26%, and decisions on formulating the school budget for 22%. For the remaining aspects, especially teachers' starting salaries and salary increases, the relationship is not statistically significant. Among the more traditional factors associated with better performance, PISA 2006 highlights the importance of time spent in class.

Comparison of reading and mathematics

Across countries, performance in reading has remained broadly similar to the previous cycles, in spite of a significant increase in investment in education in most countries. At the same time, some countries have achieved significant improvements despite only a moderate increase in education spending. In mathematics, too, the comparison between 2003 and 2006 does not show significant differences in performance.

Conclusion

PISA 2006 is about much more than the ranking of countries. Education unions should underline that it again reveals interesting data on the correlations between the performance of 15-year-old students in science, reading and mathematics, their socioeconomic backgrounds, and the organisation of schools. But PISA does not convey the total picture of education. It can help to stimulate debate about education. But any attempt to use the PISA results to support political agendas would be a misuse of the report and the data it contains.
by Ditte Søbro

For more information, please download the publication "EI's Guide to PISA 2006" (in English) by clicking on the link below.