With the UK sitting in 18th position in the latest Pisa rankings, and the predictable furore about whether this is a vindication or an indictment of recent educational policy, it’s worth asking whether we really know what the Pisa tables mean.

On Tuesday 3rd December 2019, the latest set of Programme for International Student Assessment (Pisa) rankings was released. The UK was ranked 18th, up from 27th in 2016, with an average score of 502. The English government has prioritised climbing the Pisa rankings in recent years, and the most recent results might indicate that this policy has been vindicated. But do we really know what Pisa is? The UK ‘scored’ an average of 502 – but 502 what? Utils? Edu-points? And is it sensible that climbing a potentially opaque international ranking should be an educational policy end-goal?

Pisa testing is conducted every three years by the Organisation for Economic Co-operation and Development (OECD) to assess the performance of 15-year-olds across 79 countries in reading, mathematics and science. So to fully understand Pisa, you must first understand the origins of the OECD and why it has come to play an increasingly important role in educational policy.

The OECD, such as it is today, came into being in 1961. It was primarily an international organisation which aimed to promote free-market economic practices in response to the threat of Cold War communism. However, after the collapse of the eastern bloc, the OECD needed a new raison d’être, and so began to re-invent itself. Today, while the OECD is still “primarily concerned with economic policy”, education has an “increasing importance within that mandate, as it has been re-framed as central to national economic competitiveness … linked to an emerging ‘knowledge economy’.” (Grek 2008).

The OECD now takes the view (reasonably or unreasonably, depending on your politics) that a country’s educational investment is part of its future economic investment. It unambiguously states that “… to underpin innovation and growth,” governments “need to ensure that people of all ages can develop the skills to work productively and satisfyingly in the jobs of tomorrow.”

Today, the OECD is perhaps the foremost of a number of international organisations which gather and analyse educational performance data, publishing their findings – in reports such as McKinsey’s 2011 ‘How the world’s best-performing schools come out on top’ and the Global Competitiveness Survey, or in international ranking tables like Pisa – so that international comparisons can be made. It is for this reason that the OECD has become increasingly concerned with gathering international educational data over the past three decades.

But it’s worth noting here that although the OECD has placed an increased emphasis on measuring member states’ educational performance, its main concerns are still economic rather than educational – in essence, the OECD sees education as a central mechanism in helping member states create a modern ‘knowledge-based’ economy. The OECD has identified education as the primary means by which a member state can invest in its stock of ‘human capital’ and thus ensure future economic growth. It therefore collates educational performance data so that the governments of member states can see how their investments are panning out.

However, although the OECD has a clear economic agenda, it differs from other international organisations (like the World Bank, for instance) in that it has no legal mandate by which to influence policy among member states – it has no real ‘hard power’. But by gathering swathes of educational data, then ranking and publishing it in comparative international tables, what the OECD is able to do is provide ‘educational facts’ which policy makers can use to justify their decisions. And in this regard the OECD has been very successful.

Pisa has now become THE international educational performance brand and, as such, exerts significant ‘soft power’. Member states now seek to ‘close the gaps’ with countries above them in the Pisa tables, and so change their educational policies in order to improve their ‘rankings’. Or, as Bieber and Martins (2011) point out: “Education is (now) increasingly regarded as human capital and the wealth of nations … and thus … countries try to improve their education system’s performance by international recommendations.”

Now, there are two points to make here.

The first is that none of this means the OECD – or the Pisa rankings themselves, for that matter – are nefarious. The OECD is quite open about its mission, and it doesn’t demand that countries seek to improve their Pisa performance (although it does ‘encourage’ countries to improve by adopting what it sees as ‘good’ educational practices).

The second point is that it’s hard to overstate the truly enormous effect the Pisa tables now have on current educational policy trends in Britain and beyond. The publication of the Pisa rankings, as recent media attention shows, is now a significant event in the educational calendar, and the results exert huge influence on policy makers.

For instance, when in 2016 it turned out that Britain was ‘languishing’ in 22nd place in the Pisa tables for reading, Michael Gove took this as empirical proof of the need to introduce phonics. The ‘low’ Pisa rankings of 2016 were also a central justification for the widespread implementation of the findings of Paul Kirschner, Richard E Clark and John Sweller. As Gove saw it, the Pisa data “all point(ed) towards the importance of direct instruction … and why minimally guided teaching techniques do not work”. And anyone who is familiar with the new Ofsted framework will know that the policy decisions prompted by the 2016 Pisa tables have now begun to come to fruition.

But my concern here is not whether phonics or direct instruction are objectively good or bad pedagogical approaches, but whether it is healthy for one international ranking brand to influence British educational policy on such a scale.

I mean, if ministers and policy makers are going to see climbing the Pisa rankings as an educational end in itself, then they should at least know a bit more about how Pisa testing works. But I wonder: do they?

Do they know, for instance, that Pisa tests do not ask all of the students tested the same questions?

This is because Pisa, in seeking to see how well students can apply their learning in ‘real world scenarios’, adjusts the questions it asks from country to country to take account of differing cultural norms and expectations. It then applies a statistical adjustment (and not, to my knowledge, one that has ever been publicly explained) to allow answers to be ‘compared’ (and thus ranked). So the idea that the Pisa rankings supply an empirical truth about educational performance is, if not highly dubious, at least worthy of much more rigorous probing.

Secondly, I wonder: do they know that one of the key things measured by Pisa testing is not simply student performance but the gap between top and bottom performers – and that the larger the disparity, the ‘worse’ the Pisa score?

Thus, countries with comparatively larger income gaps between the richest and poorest tend to have more pronounced gaps in Pisa testing – and therefore lower rankings. So one thing the Pisa tables actually reveal is income disparity within member countries. This is one of the reasons why places with relatively small and relatively culturally and economically homogeneous populations – like Singapore and Finland – tend to do better. Ah, but why does China do so well then, I hear you ask? The answer is partly that China is not tested as a whole country; instead, Pisa tests only some of its wealthier urban centres, such as Hong Kong, Beijing, Shanghai and Macao.

However, despite both of the points above, the response of policy makers to a low Pisa ranking is often simply to try and improve Pisa performance – but not to address the underlying causes of a poor ranking, such as a high income disparity between top and bottom attainers. Any government chasing improved Pisa rankings might first take heed of Basil Bernstein’s observation that “Education cannot compensate for society” – an education system cannot outperform the society in which it operates. However, it’s easier for a minister to chase an improved Pisa ranking than it is to tackle the deep-seated causes which invariably underpin poor educational performance – and so another thing the Pisa tables allow is for governments to create a narrative in which the burden of (and responsibility for) social failings is shifted away from politicians and placed solely onto teachers and schools.

The British education system is now, like it or not, heavily influenced by a growing global demand to inform educational policy by international comparison. Whether you see the Pisa comparison tables as a force for good will most likely be shaped by the extent to which you think market forces should shape educational policy. But, at the very least, if we are going to insist on attaching such weight to the Pisa rankings, then we should understand what they seek to measure and where they come from.