The Programme for International Student Assessment (PISA) is, as its name suggests, an international survey that assesses students’ performance. The Survey, which runs in three-year cycles, assesses the performance of 15-year-olds in reading literacy, maths and science. It also gathers information about students’ backgrounds and characteristics and indicators about their schools.

PISA is administered by the Organisation for Economic Co-operation and Development (OECD). It ran for the first time in 2000. The number of participating countries/economies has grown since then, and more than 70 now take part in the survey.

What PISA Assesses

The OECD states that PISA:

  • aims to evaluate education systems by testing the knowledge and skills of 15-year-olds; and
  • focuses on the knowledge and skills that are needed for full participation in society.

The primary purpose of the Survey is to determine the extent to which students can apply their knowledge and skills in reading, maths and science to the challenges that they will encounter in adult life.

How PISA Assessments Work

PISA is a sample survey. A minimum of 150 schools in each country/economy participate in the study. Generally, between 4,500 and 10,000 students in each country/economy will take the test.

Students from England, Wales, Scotland and Northern Ireland are included in the UK sample. Wales, Scotland and Northern Ireland are sampled at a higher rate, relative to the size of their student populations, than England. This oversampling enables individual reports to be produced for each nation.

Whilst PISA assesses performance in reading literacy, maths and science, only one of the domains is examined in detail in each cycle. For example, in 2006 the major domain was science, in 2009 it was reading literacy and in 2012, maths.

Students are required to sit a series of tests and complete a questionnaire. The questionnaire asks for information about the student and the school, including, for example, behaviour and discipline at the school. Parents are also asked to complete a questionnaire which covers areas such as socioeconomic background, attitudes to education and learning, and education resources at home.

The Principal of each school participating in the survey completes a questionnaire about the school, including information about teaching staff and school climate. PISA does not survey teachers. However, a teacher questionnaire will be included in the survey from 2015.

How the Results of PISA Surveys are Reported

PISA results are reported in several different ways. The most widely publicised and controversial is the publication of results in performance tables. Other PISA reports look at the relationship between students’ performance and social background, students’ attitudes and approaches to learning, and the relationship between students’ performance and education resources, policies and practice.

Country/economy test scores are reported on a single standardised scale with a mean of 500 and a standard deviation of 100. Each subject’s scale was standardised in the cycle in which that subject was first the major domain: reading literacy in 2000, maths in 2003 and science in 2006.
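The idea of a mean-500, standard-deviation-100 scale can be illustrated with a simple linear standardisation. This is only a sketch of the principle: the OECD’s actual scaling is far more elaborate (it uses item response theory and plausible values), and the raw scores below are invented for illustration.

```python
# Illustrative sketch: linearly rescale a set of raw scores so that
# they have a mean of 500 and a standard deviation of 100.
# This is NOT the OECD's actual scaling methodology.
import statistics

def standardise(raw_scores, target_mean=500, target_sd=100):
    """Map raw scores onto a scale with the given mean and standard deviation."""
    mean = statistics.mean(raw_scores)
    sd = statistics.pstdev(raw_scores)  # population standard deviation
    return [target_mean + target_sd * (x - mean) / sd for x in raw_scores]

raw = [12, 15, 20, 25, 28]      # invented raw test scores
scaled = standardise(raw)        # scaled scores: mean 500, SD 100
```

On such a scale a score of 600 is one standard deviation above the international mean, whichever subject or cycle is being reported.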

Student performance is also reported through the use of proficiency levels on a seven-point scale ranging from 1b to 6. The OECD considers level 2 to be the baseline of proficiency at which students demonstrate the knowledge and skills needed for adult life. A score at level 4 or above is said to indicate strong performance. This information is used to indicate the spread of student performance within a country/economy. Combined with data about students’ backgrounds, this provides information about the relationship between performance and socioeconomic status.
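The banding of scaled scores into proficiency levels can be sketched as a simple threshold lookup. The cut-points below are hypothetical placeholders chosen for illustration only; the OECD’s published thresholds differ by subject and cycle.

```python
# Illustrative sketch: assign a scaled score to a proficiency band.
# The cut-points are hypothetical, NOT the OECD's published thresholds.
import bisect

LEVELS = ["below 1b", "1b", "1a", "2", "3", "4", "5", "6"]
CUT_POINTS = [260, 358, 420, 482, 545, 607, 669]  # hypothetical, ascending

def proficiency_level(score):
    """Return the proficiency band whose range contains the scaled score."""
    return LEVELS[bisect.bisect_right(CUT_POINTS, score)]
```

Under this sketch, a student scoring at the scale mean of 500 would sit in a middle band, while scores below the level 2 cut-point would fall under the OECD’s baseline of proficiency.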

The Main Concerns About PISA

There are many criticisms of PISA. These include criticisms of the survey’s educational and cultural limitations, concerns about its methodological limitations, questions about what PISA actually measures and concerns about political influence and interference.

Critics argue that PISA only focuses on reading literacy, maths and science and pays little attention to other aspects of the school curriculum. They also argue that the skills that are being assessed may not be commensurate with the educational goals and ethos of some countries. Further, they argue that the need for questions to be relevant across cultures and education contexts limits the questions that can be included and risks creating a ‘one size fits all’ assessment that is not a good match for any single system.

Whilst the OECD claims that PISA tests the knowledge and skills that are essential for full participation in society, critics argue that a test lasting just two hours and completed using a pencil and paper is unlikely to assess these things. They also argue that ‘real life’ will be different in different countries and that this cultural bias will disadvantage students from some countries. Some suggest that PISA will lead to a greater standardisation of education internationally. A particular concern is that there will be greater standardisation with those countries that have the greatest influence over the OECD and its work priorities (e.g. the USA, the UK and Australia).

Some argue that the format of PISA tests and students’ familiarity with the types of question will affect the test results. Students who are used to answering PISA-style questions and who regularly sit tests are likely to perform better than students who are not familiar with such questions or test environments.

A major criticism of PISA is that it is overtly political in nature and that reporting students’ results in the form of league tables encourages this, distorting education policies and priorities. For example, some governments seek to introduce policies that will improve performance in PISA (effectively teaching to the PISA test). Governments also use performance in PISA to justify the need for policy change and then ‘cherry pick’ policies from ‘high-performing’ systems that reflect their ideological position.

The Use of International Assessments

The NASUWT believes that evidence about students’ performance in international assessments, including evidence from PISA, can be very useful. For example, it enables policy makers and others to draw comparisons between education systems. It enables them to examine elements of different education systems and consider the reasons for common problems as well as why some policies are effective. The evidence also provides opportunities for policy makers to reflect on the implications of these analyses for education (and social and economic) policy and practice.

The key point is that evidence from international assessments such as PISA is useful as a tool for examining education policy and practice. However, this is very different from using the evidence to say that an education system is better or worse than other systems. It is not appropriate to use international evidence in this way as the following example that compares the results of PISA with those of another international assessment, Trends in Mathematics and Science Study (TIMSS), illustrates:

TIMSS assesses the performance of students aged 9-10 years and 13-14 years in maths and science and uses questions that are based on national curricula. England participates in both PISA and TIMSS, as does New Zealand. In PISA 2009, New Zealand was one of the highest performing countries and performed significantly better than England in reading literacy, maths and science.

However, in TIMSS 2007, New Zealand’s results were below the mean in maths and only just above the mean in science, and England’s results were significantly better than New Zealand’s in both subjects. In other words, choosing PISA rather than TIMSS as the basis for a judgement about the quality of an education system could lead to very different conclusions about that system.

Education systems are about much more than what international assessments test, so it is not appropriate for governments to adopt strategies to simply improve performance in international assessments. The focus of policy reforms should be on agreed education priorities and on ensuring that policy is consistent with and contributes to the aims, purposes and values that underpin the education system. Policy should be based on a wide and rich range of evidence, including evidence from teachers and school leaders about effective practice and the issues that impact on learning and teaching. Of course, this may improve performance in international assessments such as PISA, and international assessments may be useful as one of the indicators that a policy is successful.

What the NASUWT is Doing to Challenge the Inappropriate Use of PISA and Other International Assessments

Many governments are using PISA and other international assessments to justify policy change and to promote a particular education ideology selectively, cherry-picking policies consistent with that ideology.

The NASUWT is challenging the inappropriate use of international evidence publicly and in a variety of forums.

Part of the NASUWT’s international work involves raising awareness of the problems caused by the inappropriate use of international assessments. This includes commissioning research to highlight the problems that occur.

The NASUWT is also working to improve the quality of international assessments. This includes work with the OECD on PISA. One result of this work is that PISA 2015 will include a questionnaire for teachers.