PISA 2012: International Comparison of High Achievers’ Performance


This post examines what PISA 2012 can tell us about the comparative performance of high achievers in England, other English-speaking countries and those that top the PISA rankings.

Introductory Brochure for PISA 2012 by Kristjan Paur

It draws on a similar range of evidence to that deployed in my post on the PISA 2009 results (December 2010).

A more recent piece, ‘The Performance of Gifted High Achievers in TIMSS, PIRLS and PISA’ (January 2013) is also relevant.

The post reviews:

  • How the PISA 2012 Assessment Framework defines reading, mathematical and scientific literacy, and how it defines high achievement in each of the three core domains.
  • How average (headline) performance on the three core measures has changed in each jurisdiction compared with PISA 2006 and PISA 2009.
  • By comparison, how high achievers’ performance – and the balance between high and low achievers’ performance – has changed in each jurisdiction over the same period.
  • How jurisdictions compare on the ‘all-rounder’ measure, derived from achievement of a high performance threshold on all three assessments.

The twelve jurisdictions included in the main analysis are: Australia, Canada, England, Finland, Hong Kong (China), Ireland, New Zealand, Shanghai (China), Singapore, South Korea, Taiwan and the USA.

The post also compares the performance of the four home countries (and the UK as a whole) against the high achievement thresholds. This analysis is foregrounded immediately below, preceded only by the headline (but potentially misleading) ‘top 10’ high achiever rankings for 2012.




World Leaders against PISA’s High Achievement Benchmarks

The top 10 performers in PISA 2012 against the high achievement benchmarks (Level 5 and above), in reading, maths and science respectively, are set out in Table 1 below.

The 2009 rankings are shown in brackets and the 2012 overall average rankings in bold, square brackets. I have also included England’s rankings.


Table 1

Rank Reading Maths Science
1 Shanghai (1) [1] Shanghai (1) [1] Shanghai (1) [1]
2 Singapore (3) [3] Singapore (2) [2] Singapore (2) [3]
3 Japan (5) [4] Taiwan (4) [4] Japan (5) [4]
4 Hong Kong (9) [2] Hong Kong (3) [3] Finland (3) [5]
5 S Korea (6) [5] S Korea (5) [5] Hong Kong (6) [2]
6 N Zealand (2) [13] Liechtenstein (13) [8] Australia (7) [16]
7 Finland (4) [6] Macao (15) [6] N Zealand (4) [18]
8 Canada (7=) [8] Japan (8) [7] Estonia (17) [6]
9 France (13) [21] Switzerland (6) [9] Germany (8) [12]
10 Belgium (10) [16] Belgium (9) [15] Netherlands (9) [14]
England 19th (19) [23] England 24th (32) [25] England 11th (12) [18]


On the basis of these crude rankings alone, it is evident that Shanghai has maintained its ascendancy across all three domains.

Singapore has reinforced its runner-up position by overtaking New Zealand in reading. Hong Kong and Japan also make it into the top ten in all three domains.

Notable improvements in the rankings have been made by:

  • Japan, Hong Kong and France in reading
  • Liechtenstein and Macao in maths
  • Japan and Estonia in science



Jurisdictions falling down the rankings include:

  • Australia, New Zealand and Finland in reading
  • Finland and Switzerland in maths
  • Canada and New Zealand in science.

Those whose high achiever rankings significantly exceed their average rankings include:

  • New Zealand, France and Belgium in reading
  • Belgium in maths
  • Australia, New Zealand, Germany and the Netherlands in science

The only one of the top ten jurisdictions exhibiting the reverse pattern with any degree of significance is Hong Kong, in science.

On this evidence, England has maintained its relatively strong showing in science and a mid-table position in reading, but it has slipped several places in maths.

Comparing England’s rankings for high achievers with its rankings for average performance:

  • Reading 19th versus 23rd
  • Maths 24th versus 25th
  • Science 11th versus 18th

This suggests that England is substantively stronger at the top end of the achievement spectrum in science, slightly stronger in reading and almost identical in maths. (The analysis below explores whether this is borne out by the proportions of learners achieving the relevant PISA thresholds.)

Overall, these rankings suggest that England is a respectable performer at the top end, but nothing to write home about. It is not deteriorating, relatively speaking – with the possible exception of mathematics – but it is not improving significantly either. The imbalance is not atypical and it requires attention, but only as part of a determined effort to build performance at both ends.


Comparing the Home Countries’ Performance

Table 2 below shows how each home country has performed at Level 5 and above in each of the three core PISA assessments since 2006.


Table 2

  2012 Level 5+ 2009 Level 5+ 2006 Level 5+
  Read Maths Sci Read Maths Sci Read Maths Sci
England 9.1 12.4 11.7 8.1 9.9 11.6 9.2 11.2 14.0
N Ireland 8.3 10.3 10.3 9.3 10.3 11.8 10.4 12.2 13.9
Scotland 7.8 10.9 8.8 9.2 12.3 11.0 8.5 12.1 12.5
Wales 4.7 5.3 5.7 5.0 5.0 7.8 6.4 7.2 10.9
UK 8.8 11.9 11.1 8.0 9.9 11.4 9.0 11.2 13.8
OECD average 8.4 12.6 8.4 7.6 12.7 8.5 8.6 13.3 9.0


In 2012, England is ahead of the other home countries in all three domains. Northern Ireland is runner-up in reading and science, Scotland in maths. Wales is a long way behind the other four in all three assessments.

Only England tops the OECD average in reading. All the home countries fall below the OECD average in maths, though all but Wales are above it in science.

Compared with 2006, England’s performance has changed little in reading, increased somewhat in maths (having fallen back betweentimes) and fallen quite significantly in science.

In comparison, Northern Ireland is on a downward trend in all three domains, as is Scotland (though it produced small improvements in maths and reading in 2009). Wales has fallen back significantly in science, though somewhat less so in reading and maths.

It seems that none of the home countries is particularly outstanding when it comes to the performance of their high achievers, but England is the strongest of the four, while Wales is clearly the weakest.
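The trend statements above can be reproduced directly from Table 2 by differencing the 2012 and 2006 columns. The sketch below copies the Level 5+ percentages from the table; a positive delta means the share of high achievers has risen since 2006:

```python
# Level 5+ percentages from Table 2, as (reading, maths, science).
results_2012 = {
    "England":   (9.1, 12.4, 11.7),
    "N Ireland": (8.3, 10.3, 10.3),
    "Scotland":  (7.8, 10.9, 8.8),
    "Wales":     (4.7, 5.3, 5.7),
}
results_2006 = {
    "England":   (9.2, 11.2, 14.0),
    "N Ireland": (10.4, 12.2, 13.9),
    "Scotland":  (8.5, 12.1, 12.5),
    "Wales":     (6.4, 7.2, 10.9),
}

DOMAINS = ("reading", "maths", "science")

def change_since_2006(country):
    """Percentage-point change at Level 5+ between 2006 and 2012."""
    return {d: round(new - old, 1)
            for d, new, old in zip(DOMAINS,
                                   results_2012[country],
                                   results_2006[country])}

for country in results_2012:
    print(country, change_since_2006(country))
```

England comes out at -0.1 in reading, +1.2 in maths and -2.3 in science, matching the narrative above; every Northern Ireland and Wales figure is negative, confirming the downward trends.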

A slightly different perspective can be gained by comparing high and low performance in 2012.

Table 3 below shows that the proportion of low achievers is comfortably larger than the proportion of high achievers. This is true of all the home countries and all subjects, though the difference is less pronounced in science across the board and also in Scotland. Conversely, the imbalance is much more significant in Wales.


Table 3

2012 Reading Maths Science
  L5+6 L1+below L5+6 L1+below L5+6 L1+below
England 9.1 16.7 12.4 21.7 11.7 14.9
N Ireland 8.3 16.7 10.3 24.1 10.3 16.8
Scotland 7.8 12.5 10.9 18.2 8.8 12.1
Wales 4.7 20.6 5.3 29.0 5.7 19.4
UK 8.8 16.7 11.9 21.8 11.1 15.0
OECD average 8.4 8.4 12.6 23.0 8.4 17.8


The ‘tail’ in reading is significantly higher than the OECD average in all four countries but – with the exception of Wales – somewhat lower in science.

In maths, the ‘tail’ is higher than the OECD average in Wales and Northern Ireland, but below average in England and Scotland.

The average figures suggest that, across the OECD as a whole, the top and bottom are broadly balanced in reading, there is a small imbalance in science towards the bottom end and a more significant imbalance in maths, again towards the bottom end.

By comparison, the home countries have a major issue at the bottom in reading, but are less significantly out of line in maths and science.

Overall, there is some evidence here of a longish tail of low achievement, but with considerable variation according to country and domain.

The bottom line is that all of the home countries have significant issues to address at both the top and the bottom of the achievement distribution. Any suggestion that they need to concentrate exclusively on low achievers is not supported by this evidence.


Francois Peron National Park by Gifted Phoenix 2013


Background to PISA


What is PISA?

The Programme for International Student Assessment (PISA) is a triennial OECD survey of the performance of 15 year-old students which typically covers maths, science and reading. Science was the main focus in 2006, reading in 2009 and maths in 2012.

PISA 2012 also included a computer-based assessment of problem-solving and a financial literacy assessment. However, some jurisdictions did not participate in the problem-solving exercise owing to ‘technical issues’ and financial literacy was undertaken by some countries only, as an optional extra.

Fifty-eight jurisdictions took part in PISA 2006 and 74 in PISA 2009 (65 undertook the assessment in 2009 and a further nine did so in 2010).

Sixty-five jurisdictions have taken part in PISA 2012 to date.

According to the OECD’s own FAQ:

  • PISA tests reading, mathematical and scientific literacy ‘in terms of general competencies, that is, how well students can apply the knowledge and skills they have learned at school to real-life challenges. PISA does not test how well a student has mastered a school’s specific curriculum.’
  • Student performance in each field is comparable between assessments – one cannot reasonably argue therefore that a drop in performance is attributable to a more difficult assessment.
  • Each participating jurisdiction receives an overall score in each subject area – the average of all its students’ scores. The average score among OECD countries is set at 500 points (with a standard deviation of 100 points).
  • Participating jurisdictions are ranked in each subject area according to their mean scores, but:

‘[It] is not possible to assign a single exact rank in each subject to each country…because PISA tests only a sample of students from each country and this result is then adjusted to reflect the whole population of 15-year-old students in that country. The scores thus reflect a small measure of statistical uncertainty and it is therefore only possible to report the range of positions (upper rank and lower rank) within which a country can be placed.’

Outside the confines of reports by the OECD and its national contractors, this is honoured more in the breach than the observance.

  • Scores are derived from scales applied to each subject area. Each scale is divided into levels, Level 1 being the lowest and Level 6 typically the highest.
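The point about rank ranges can be illustrated with a small simulation. All numbers below are invented for illustration (the jurisdiction names, means and standard errors are hypothetical, and the OECD's contractors use a more formal procedure), but the mechanism is the same: when mean scores are close relative to their sampling uncertainty, a single exact rank is not defensible.

```python
import random

# Hypothetical jurisdictions: means and standard errors are invented
# for illustration only; they are not real PISA figures.
means = {"A": 523, "B": 521, "C": 519}
ses = {"A": 2.0, "B": 2.4, "C": 2.6}

random.seed(42)

def simulated_rank_range(means, ses, n_draws=10_000):
    """Repeatedly redraw each jurisdiction's score from a normal
    distribution (observed mean, reported standard error), record
    every rank it attains, and report its plausible rank range."""
    attained = {c: set() for c in means}
    for _ in range(n_draws):
        draw = {c: random.gauss(means[c], ses[c]) for c in means}
        for rank, c in enumerate(sorted(draw, key=draw.get, reverse=True),
                                 start=1):
            attained[c].add(rank)
    return {c: (min(r), max(r)) for c, r in attained.items()}

print(simulated_rank_range(means, ses))
```

With means this close together, all three jurisdictions plausibly occupy several ranks, which is exactly why the OECD reports an upper and lower rank rather than a single position.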

Further background detail on the 2012 assessments is set out in the ‘PISA 2012 Assessment and Analytical Framework’ (2013).

This explains that the framework for assessing maths was completely revised ahead of the 2012 cycle and ‘introduces three new mathematical processes that form the basis of developments in the reporting of PISA mathematics outcomes’, whereas those for science and reading were unchanged (the science framework was revised when it was the main focus in 2006 and ditto for reading in 2009).

The Framework clarifies the competency-based approach summarised in the FAQ:

‘PISA focuses on competencies that 15-year-old students will need in the future and seeks to assess what they can do with what they have learnt – reflecting the ability of students to continue learning throughout their lives by applying what they learn in school to non-school environments, evaluating their choices and making decisions. The assessment is informed, but not constrained, by the common denominator of national curricula. Thus, while it does assess students’ knowledge, PISA also examines their ability to reflect, and to apply their knowledge and experience to real-life issues in a reflective way. For example, in order to understand and evaluate scientific advice on food safety, an adult would need not only to know some basic facts about the composition of nutrients, but also to be able to apply that information.’

It explains that between 4,500 and 10,000 students drawn from 150 schools are typically tested in each jurisdiction.

Initial reports suggested that England would not take part in the 2012 assessments of problem-solving and financial literacy, but it subsequently emerged that this decision had been reversed in respect of problem-solving.


Setting PISA Outcomes in Context

There are plenty of reasons why one should not place excessive weight on PISA outcomes:

  • The headline rankings carry a significant health warning, which remains important, even though it is commonly ignored.

‘As the PISA 2000 and PISA 2003 samples for the United Kingdom did not meet the PISA response-rate standards, no trend comparisons are possible for these years.’ (p.1)

Hence, for the UK at least, reliable comparisons with pre-2006 results are off the table.

  • PISA’s own analysis has attracted serious methodological criticism:

‘The pressure from policymakers for advice based on PISA interacts with this unhealthy mix of policy and technical people. The technical experts make sure that the appropriate caveats are noted, but the warnings are all too often ignored by the needs of the policy arm of PISA. As a result, PISA reports often list the known problems with the data, but then the policy advice flows as though those problems didn’t exist. Consequently, some have argued that PISA has become a vehicle for policy advocacy in which advice is built on flimsy data and flawed analysis.’

  • PISA is not the only game in town. TIMSS and PIRLS are equally significant, though relatively more focused on content knowledge, whereas PISA is primarily concerned with the application of skills in real life scenarios.
  • There are big political risks associated with worshipping at the PISA altar for, if the next set of outcomes is disappointing, the only possible escape route is to blame the previous administration, a strategy that wears increasingly thin with the electorate the longer the current administration has been in power.



It would be quite wrong to dismiss PISA results out of hand, however. They are a significant indicator of the comparative performance of national (and regional) education systems. But they are solely an indicator, rather than a statement of fact.


What is assessed – and what constitutes high achievement – in each domain

The Assessment and Analytical Framework provides definitions of each domain and level descriptors for each level within the assessments.


Mathematical Literacy

The PISA 2012 mathematics framework defines mathematical literacy as:

‘An individual’s capacity to formulate, employ and interpret mathematics in a variety of contexts. It includes reasoning mathematically and using mathematical concepts, procedures, facts and tools to describe, explain and predict phenomena. It assists individuals to recognise the role that mathematics plays in the world and to make the well-founded judgments and decisions needed by constructive, engaged and reflective citizens.’

Three aspects of maths are identified:

  • Mathematical processes and the fundamental capabilities underlying them. Three processes are itemised: formulating situations mathematically; employing mathematical concepts, facts, procedures and reasoning; and interpreting, applying and evaluating mathematical outcomes. The capabilities are: communication; mathematizing (transforming a real life problem to a mathematical form); representation; reasoning and argument; devising problem-solving strategies; using symbolic, formal and technical language and operations; and using mathematical tools.
  • Content knowledge, comprising four elements: change and relationships; space and shape; quantity; and uncertainty and data.
  • The contexts in which mathematical challenges are presented: personal; occupational; societal and scientific.

Six levels are identified within the PISA 2012 mathematics scale. The top two are described thus:

  • ‘At Level 6 students can conceptualise, generalise and utilise information based on their investigations and modelling of complex problem situations. They can link different information sources and representations and flexibly translate among them. Students at this level are capable of advanced mathematical thinking and reasoning. These students can apply their insight and understandings along with a mastery of symbolic and formal mathematical operations and relationships to develop new approaches and strategies for attacking novel situations. Students at this level can formulate and precisely communicate their actions and reflections regarding their findings, interpretations, arguments and the appropriateness of these to the original situations.’
  • ‘At Level 5 students can develop and work with models for complex situations, identifying constraints and specifying assumptions. They can select, compare and evaluate appropriate problem-solving strategies for dealing with complex problems related to these models. Students at this level can work strategically using broad, well-developed thinking and reasoning skills, appropriate linked representations, symbolic and formal characterisations and insight pertaining to these situations. They can reflect on their actions and formulate and communicate their interpretations and reasoning.’


Reading literacy

Reading Literacy is defined as:

‘An individual’s capacity to understand, use, reflect on and engage with written texts, in order to achieve one’s goals, to develop one’s knowledge and potential, and to participate in society.’

The assessment ‘is built on three major task characteristics’:

  • Situation – the context or purpose for which reading takes place, which may be personal (practical and intellectual interests), public (activities and concerns of society), educational (for learning purposes) or occupational (accomplishment of a task).
  • Text – the range of material that is read, which may be print or digital. In the case of digital text, the environment may be authored (the reader is receptive), message based, or mixed. In the case of both print and digital text, the format may be continuous (sentences and paragraphs), non-continuous (eg graphs, lists), mixed or multiple, while the text type may be description, narration, exposition, argumentation, instruction or transaction.
  • Aspect – how readers engage with the text, which includes accessing and retrieving; integrating and interpreting; and reflecting and evaluating.

Separate proficiency scales are provided for print and digital reading respectively. Both describe achievement in terms of the task rather than the student.

The print reading scale has six levels (Level One is subdivided into two). The top levels are described as follows:

  • Level 6: Tasks at this level typically require the reader to make multiple inferences, comparisons and contrasts that are both detailed and precise. They require demonstration of a full and detailed understanding of one or more texts and may involve integrating information from more than one text. Tasks may require the reader to deal with unfamiliar ideas, in the presence of prominent competing information, and to generate abstract categories for interpretations. Reflect and evaluate tasks may require the reader to hypothesise about or critically evaluate a complex text on an unfamiliar topic, taking into account multiple criteria or perspectives, and applying sophisticated understandings from beyond the text. A salient condition for access and retrieve tasks at this level is precision of analysis and fine attention to detail that is inconspicuous in the texts.
  • Level 5: Tasks at this level that involve retrieving information require the reader to locate and organise several pieces of deeply embedded information, inferring which information in the text is relevant. Reflective tasks require critical evaluation or hypothesis, drawing on specialised knowledge. Both interpretative and reflective tasks require a full and detailed understanding of a text whose content or form is unfamiliar. For all aspects of reading, tasks at this level typically involve dealing with concepts that are contrary to expectations.

For digital reading there are only four levels, categorised as 2-5. Level 5 is described thus:

‘Tasks at this level typically require the reader to locate, analyse and critically evaluate information, related to an unfamiliar context, in the presence of ambiguity. They require generating criteria to evaluate the text. Tasks may require navigation across multiple sites without explicit direction, and detailed interrogation of texts in a variety of formats.’


Scientific literacy

Scientific literacy is defined as:

‘An individual’s scientific knowledge and use of that knowledge to identify questions, to acquire new knowledge, to explain scientific phenomena, and to draw evidence-based conclusions about science-related issues, understanding of the characteristic features of science as a form of human knowledge and enquiry, awareness of how science and technology shape our material, intellectual, and cultural environments, and willingness to engage in science-related issues, and with the ideas of science, as a reflective citizen.’

The domain consists of four interrelated aspects:

  • Context – life situations involving science and technology. Contexts are personal, social or global and may relate to health, natural resources, environment, hazard or the frontiers of science and technology.
  • Knowledge – knowledge of the natural world (covering physical systems, living systems, earth and space systems and technology systems) and knowledge about science itself (scientific enquiry and scientific explanations).
  • Competencies, of which three are identified: identify scientific issues, explain phenomena scientifically and use scientific evidence.
  • Attitudes, including an interest in science, support for scientific enquiry and a motivation to act responsibly towards the natural world.

A 6-level proficiency scale is defined with the top levels explained as follows:

  • At Level 6, students can consistently identify, explain and apply scientific knowledge and knowledge about science in a variety of complex life situations. They can link different information sources and explanations and use evidence from those sources to justify decisions. They clearly and consistently demonstrate advanced scientific thinking and reasoning, and they use their scientific understanding in support of solutions to unfamiliar scientific and technological situations. Students at this level can use scientific knowledge and develop arguments in support of recommendations and decisions that centre on personal, social or global situations.
  • At Level 5, students can identify the scientific components of many complex life situations, apply both scientific concepts and knowledge about science to these situations, and can compare, select and evaluate appropriate scientific evidence for responding to life situations. Students at this level can use well-developed inquiry abilities, link knowledge appropriately and bring critical insights to situations. They can construct explanations based on evidence and arguments based on their critical analysis.


Denham Sunset by Gifted Phoenix


Changes in Average Performance in Reading, Maths and Science

The OECD published PISA outcomes for maths, science and reading on 3 December 2013.

Similarly, the PISA National Report on England, published simultaneously, covers the three core assessments.

This section looks briefly at the headline average scores and rankings across the selected sample of twelve jurisdictions, principally to enable comparisons to be drawn with the subsequent analysis of high achievers’ performance.

I apologise in advance for any transcription errors. Please let me know if you spot any and I will correct the tables accordingly.



Table 4 below gives the headline average numerical scores and ranks in reading from PISA 2006, 2009 and 2012 respectively.


Table 4

Country 2012 2009 2006
score rank score rank score rank
Australia 512↓ 13↓ 515↑ 9↓ 513 7
Canada 523↓ 8↓ 524↓ 6↓ 527 4
Finland 524↓ 6↓ 536↓ 3↓ 547 2
Hong Kong 545↑ 2↑ 533↓ 4↓ 536 3
Ireland 523↑ 7↑ 496↓ 21↓ 517 6
S Korea 536↓ 5↓ 539↓ 2↓ 556 1
New Zealand 512↓ 13↓ 521 7↓ 521 5
Shanghai 570↑ 1= 556 1 N/A N/A
Singapore 542↑ 3↑ 526 5 N/A N/A
Taiwan 523↑ 8↑ 495↓ 23↓ 496 16
UK (England) 500↑ 23↑ 495↓ 25↓ 496 17
US 498↓ 24↓ 500 17 N/A N/A
OECD Average 496↑ 493↓ 495


Shanghai has retained the ascendancy it established in 2009, adding a further 14 points to its average 2009 score. Whereas it was only 17 points beyond its nearest competitor in 2009, that lead has now been extended to 25 points.

South Korea’s performance has fallen slightly and it has been leapfrogged in the rankings by Hong Kong (up 12 points), Singapore (up 16 points), and Japan (not included in the table).

Two countries making even more significant improvements are Taiwan (up 28 points) and Ireland (up 27 points). Conversely, the performance of Finland (down 12 points) and New Zealand (down 9 points) has noticeably declined. Finland’s performance has been declining since 2006.

Results remain broadly unchanged in Australia, Canada, England, South Korea and the USA. South Korea has been unable to make up the ground it lost in 2009.

Ireland’s huge improvement from a very similar starting point in 2009 throws England’s lack of progress into sharper relief, although Ireland is largely recovering ground lost in 2009, having performed relatively well in 2006.

England, like the US, continues to perform slightly above the OECD average, but has fallen further behind the Asian Tigers. The gap with the world’s leader is now 70 points (up from 60 in 2006).



Table 5 below sets out scores and rankings in maths since PISA 2006.


Table 5

Country 2012 2009 2006
  score rank score rank score rank
Australia 504↓ 19↓ 514↓ 15↓ 520 13
Canada 518↓ 13↓ 527= 10↓ 527 7
Finland 519↓ 12↓ 541↓ 6↓ 548 2
Hong Kong 561↑ 3= 555↑ 3 547 3
Ireland 501↑ 20↑ 487↓ 32↓ 501 22
S Korea 554↑ 5↓ 546↓ 4 547 4
New Zealand 500↓ 23↓ 519↓ 13↓ 522 11
Shanghai 613↑ 1= 600 1 N/A N/A
Singapore 573↑ 2= 562 2 N/A N/A
Taiwan 560↑ 4↑ 543↓ 5↓ 549 1
UK (England) 495↑ 25↑ 493↓ 27↓ 495 24
US 481↓ 36↓ 487↑ 31↑ 474 35
OECD Average 494↓   496↓   497  


The overall picture is rather similar to that for reading.

Shanghai (up 13 points) and Singapore (up 11 points) continue to stretch away at the head of the field. Taiwan (up 17 points) has also made significant improvement and is now close behind Hong Kong.

There has been relatively more modest improvement in Hong Kong and South Korea (which has been overtaken by Taiwan).

Elsewhere, Ireland has again made significant headway and is back to the level it achieved in 2006. But Finland’s score has plummeted 22 points. New Zealand is not far behind (down 19). There have also been significant falls in the performance of Australia (down 10) Canada (down 9) and the US (down 6).

The US is now trailing 13 points below the OECD average, having failed to sustain the substantial improvement it made in 2009.

In England meanwhile, results are largely unchanged, though now just above the OECD average rather than just below it.

The gap between England and world leader Shanghai has reached 118 points, compared with a gap in 2006 between England and world leader Taiwan of 54 points. The gap between England and its main Commonwealth competitors has narrowed, but only as a consequence of the significant declines in the latter.



Table 6 below provides the same data in respect of science.


Table 6

Country 2012 2009 2006
  score rank score rank score rank
Australia 521↓ 16↓ 527= 10↓ 527 8
Canada 525↓ 10↓ 529↓ 8↓ 534 3
Finland 545↓ 5↓ 554↓ 2↓ 563 1
Hong Kong 555↑ 2↑ 549↑ 3↓ 542 2
Ireland 522↑ 15↑ 508 20 508 20
S Korea 538= 7↓ 538↑ 6↑ 522 11
New Zealand 516↓ 18↓ 532↑ 7 530 7
Shanghai 580↑ 1= 575 1 N/A N/A
Singapore 551↑ 3↑ 542 4 N/A N/A
Taiwan 523↑ 13↓ 520↓ 12↓ 532 4
UK (England) 516↑ 18↓ 515↓ 16↓ 516 14
US 497↓ 28↓ 502↑ 23↑ 489 29
OECD Average 501=   501↑   498  


Shanghai is again out in front, having repeated the clean sweep it achieved in 2009.

However, it has managed only a 5-point improvement, while Singapore has improved by 9 points, Hong Kong by 6 points and Taiwan by 3 points; South Korea’s score is unchanged from 2009.

New Zealand has dropped by 16 points and Finland by 9 points compared with 2009. There have been comparatively smaller declines in Australia and Canada, while Ireland has once again improved dramatically, by 14 points, and – in this case – the improvement is not simply clawing back ground lost in 2009.

England remains comfortably above the OECD average, but has made negligible improvement since 2006. US performance has dropped back below the OECD average as it has lost some of the ground it made up in 2009.

The gap between England and the world leaders is comparable with that in reading and significantly lower than in maths. It is now 64 points, compared with just 47 points in 2006.



Overall, the Asian Tigers have consolidated their positions by maintaining improvement in all three domains, though South Korea appears to be struggling to maintain the success of earlier years.

Finland and New Zealand are in worrying decline while Ireland is making rapid progress in the opposite direction.



The US results are stagnant, remaining comparatively poor, particularly in maths.

England has broadly maintained its existing performance profile, neither improving nor declining significantly. But, it is conspicuously losing ground on the world leaders, especially in maths. Other than in science it is close to the OECD average.

There is nothing here to give comfort to either the previous Government or the present incumbents. There might be some limited relief – even a degree of schadenfreude – in the fact that several better-placed nations are falling back more severely. But of course one cannot win the ‘global race’ by simply standing still.


Floral by Gifted Phoenix


Changes in High Achievers’ Performance

So much for the average headline figures.

The remainder of this post focuses on high achievement data. The ensuing sections once more examine reading, maths and science in that order, followed by a section on all-rounders.



Table 7 shows how the percentage achieving the higher levels in reading has changed since PISA 2006, providing separate columns for Level 6 alone and for Levels 5 and 6 combined (there was no Level 6 in 2006).


Table 7

Country 2012 2009 2006
Level 6 Levels 5 and 6 Level 6 Levels 5+6 Level 5
Australia 1.9 11.7 2.1 12.8 10.6
Canada 2.1 12.9 1.8 12.8 14.5
Finland 2.2 13.5 1.6 14.5 16.7
Hong Kong 1.9 16.8 1.2 12.4 12.8
Ireland 1.3 11.4 0.7 7.0 11.7
S Korea 1.6 14.2 1.0 12.9 21.7
New Zealand 3.0 13.9 2.9 15.8 15.9
Shanghai 3.8 25.1 2.4 19.4 N/A
Singapore 5.0 21.2 2.6 15.7 N/A
Taiwan 1.4 11.8 0.4 5.2 4.7
UK (England) 1.3 9.1 1.0 8.1 9.2
US 1.0 7.9 1.5 9.9 N/A
OECD Average 1.1 8.4 1.0 7.0 8.6


This reveals that:

  • In 2012, Singapore has a clear lead on its competitors at Level 6, but it is overtaken by Shanghai at Level 5 and above. New Zealand also remains comparatively strong at Level 6, but falls back significantly when Levels 5 and 6 are combined.
  • The other Asian Tigers do not perform outstandingly well at Level 6: Hong Kong, South Korea and Taiwan are all below 2.0%, behind Canada and Finland. However, all but Taiwan outscore their competitors when Levels 5 and 6 are combined.
  • Hong Kong, Shanghai, Singapore and Taiwan are all making fairly strong progress over time. Patterns are rather less discernible for other countries, though there is a downward trend in the US.
  • In Finland, New Zealand and Canada – countries that seem to be falling back overall – the percentage of Level 6 readers continues to improve. This might suggest that the proportion of the highest performers in reading is not significantly affected when national performance begins to slide.
  • When judged against these world leaders, England’s comparative performance is brought into much clearer perspective. At Level 6 it is not far behind Taiwan, South Korea and even Hong Kong. But, at Level 5 and above, the gap is somewhat more pronounced. England is improving, but very slowly.
  • The comparison with Taiwan is particularly stark. In 2006, England had roughly twice as many students performing at Level 5. By 2009 Taiwan had caught up some of this ground and, by 2012, it had overtaken.

Table 8 compares changes since PISA 2006 in national performance at Level 5 and above with changes at Level 1 and below.

This is intended to reveal the balance between top and bottom – and whether this sample of world-leading and other English-speaking jurisdictions is making consistent progress at either end of the spectrum.


 Table 8

Country Levels 5 (and 6 from 2009) Level 1 (or equivalent) and below
2006 2009 2012 2006 2009 2012
Australia 10.6 12.8 11.7 13.4 14.3 14.2
Canada 14.5 12.8 12.9 11.0 10.3 10.9
Finland 16.7 14.5 13.5 4.8 8.1 11.3
Hong Kong 12.8 12.4 16.8 7.2 8.3 6.8
Ireland 11.7 7.0 11.4 12.2 17.2 9.7
S Korea 21.7 12.9 14.2 5.7 5.8 7.6
New Zealand 15.9 15.8 13.9 14.6 14.3 16.3
Shanghai N/A 19.4 25.1 N/A 4.1 2.9
Singapore N/A 15.7 21.2 N/A 12.4 9.9
Taiwan 4.7 5.2 11.8 14.3 15.6 11.5
UK (England) 9.2 8.1 9.1 18.9 18.4 16.7
US N/A 9.9 7.9 N/A 17.7 16.7
OECD Average 8.6 7.0 8.4 20.1 18.8 18


We can see that:

  • The countries with the highest proportion of students at Level 5 and above tend to have the lowest proportion at Level 1 and below. In Shanghai in 2012, there is a 22 percentage point gap between these two populations and fewer than three in every hundred fall into the lower attaining group.
  • Singapore is much closer to Shanghai at the top end than it is at the bottom. But even Shanghai seems to be making faster progress at the top than at the bottom, which might suggest that it is approaching the point at which the proportion of low achievers cannot be further reduced.
  • Compared with Hong Kong and South Korea, Singapore has a higher proportion of both high achievers and low achievers.
  • Whereas Taiwan had three times as many low achievers as high achievers in 2006, by 2012 the proportions were broadly similar, but progress at the top end is much faster than at the bottom.
  • The decline in Finland has less to do with performance at the top end (which has fallen by three percentage points) than with performance at the bottom (which has increased by more than six percentage points).
  • Canada has consistently maintained a higher percentage of high achievers than low achievers, but the reverse is true in Australia. In New Zealand the percentage at the top is declining and the percentage at the bottom is increasing. The gap between the two has narrowed slightly in England, but not significantly so.
  • To catch up with Shanghai, England has to close a gap of some 16 percentage points at the top end, compared with one of around 14 percentage points at the bottom.

The PISA National Report on England offers some additional analysis, noting that 18 jurisdictions had a higher proportion of pupils than England at Level 5 or above in 2012, including all those that outperformed England overall (with the exception of Estonia and Macao), and also France and Norway.

The National Report relies more heavily on comparing the performance of learners at the 5th and 95th percentiles in each country, arguing that:

‘This is a better measure for comparing countries than using the lowest and highest scoring pupils, as such a comparison may be affected by a small number of pupils in a country with unusually high or low scores.’

This is true in the sense that a minimum sample of 4,500 PISA participants would result in fewer than 100 at Level 6 in many jurisdictions.

On the other hand, the National Report fails to point out that analysis on this basis is not particularly informative about comparative achievement of the criterion-referenced standards denoted by the PISA thresholds.

It says rather more about the spread of performance in each country and rather less about direct international comparisons.

Key points include:

  • In England the score of learners at the 5th percentile was 328, compared with 652 at the 95th percentile. This difference of 324 points is slightly larger than the OECD average difference of 310 points. More than two-thirds of OECD countries had a smaller difference between these percentiles.
  • Compared with PISA 2009, England’s score at the 95th percentile increased by six points to 652, while the score at the 5th percentile fell by six points to 328. The resulting 324 point attainment gap is wider than in 2009 (312) but narrower than in 2006 (337). Thirteen OECD countries reported a wider spread of attainment than England.
  • Of countries outperforming England, only Japan (325 points), Singapore (329 points), Belgium (339 points) and New Zealand (347 points) demonstrated a similar or wider spread of attainment. Shanghai had the lowest difference (259 points) followed by Estonia (263).
  • The strongest performing jurisdictions at the 95th percentile were Singapore (698), Shanghai (690) and Japan (689), compared with 652 for England.
  • Amongst jurisdictions ranked higher than England, only the Netherlands, Liechtenstein, Estonia and Macao secured a lower score at the 95th percentile. Only Belgium reported a lower score at the 5th percentile.



Turning to maths, Table 9 illustrates changes in the pattern of high achievement since 2006, again showing the percentages performing at Level 6 and at Levels 5 and 6 combined.


Table 9

Country 2012 2009 2006
  Level 6 Levels 5 + 6 Level 6 Levels 5+6 Level 6 Levels 5+6
Australia 4.3 14.8 4.5 16.4 4.3 16.4
Canada 4.3 16.4 4.4 18.3 4.4 18
Finland 3.5 15.2 4.9 21.6 6.3 24.4
Hong Kong 12.3 33.4 10.8 30.7 9 27.7
Ireland 2.2 10.7 0.9 6.7 1.6 10.2
S Korea 12.1 30.9 7.8 25.5 9.1 27.1
New Zealand 4.5 15.0 5.3 18.9 5.7 18.9
Shanghai 30.8 55.4 26.6 50.7 N/A N/A
Singapore 19.0 40.0 15.6 35.6 N/A N/A
Taiwan 18.0 37.2 11.3 28.5 11.8 31.9
UK (England) 3.1 12.4 1.7 9.9 2.5 11.2
US 2.2 9.0 1.9 9.9 1.3 7.7
Average 3.3 12.6 3.1 12.7 3.3 13.4


The variations between countries tend to be far more pronounced than in reading:

  • There is a huge 28 percentage point spread in performance at Level 6 within this sample – from around 2% to over 30% – compared with a four percentage point spread in reading. The spread at Level 5 and above is also significantly larger – 46 percentage points compared with 17 percentage points in reading.
  • Shanghai has an 11 percentage point lead over its nearest competitor at Level 6 and an even larger 15 percentage point lead for Level 5 and above. Moreover it has improved significantly on both counts since 2009. Well over half its sample is now performing at Level 5 or above and almost a third are at Level 6.
  • Singapore and Taiwan are the next best performers, both relatively close together. Both are improving but, following a small dip in 2009, Taiwan is improving at a faster rate – faster even than Shanghai.
  • Hong Kong and South Korea also have similar 2012 profiles, as they did back in 2006. South Korea also lost ground in 2009, but is now improving at a faster rate than Hong Kong.
  • Finland appears to be experiencing quite significant decline: the proportion of Level 6 performers in 2012 is little more than half what it was in 2006, and performance at Level 5 and above has fallen by more than nine percentage points. This is a somewhat different pattern to reading, in that the top performers are also suffering from the overall decline.



  • Australia, Canada and New Zealand have maintained broadly the same performance over time, though all are showing a slight falling off at Level 5 and above, and in New Zealand this also applies at Level 6.
  • After a serious slump in 2009, Ireland has now overtaken its 2006 position. Meanwhile, the US has been making some progress at Level 6 but is less convincing at Level 5 and above.
  • Once again, this comparison does not particularly flatter England. It is not too far behind the Commonwealth countries and declining Finland at Level 6 but the gap is slightly larger at Level 5 and above. That said, England has consistently performed below the OECD average and remains in that position.
  • There are, however, some grounds for domestic celebration, in that England has improved by 2.5 percentage points at Level 5 and above, and by 1.4 percentage points at Level 6. This rate of improvement bears comparison with Hong Kong, albeit from a much lower base. It suggests a narrowing gap between England and its Commonwealth counterparts.

Table 10 gives the comparison with achievement at the bottom end of the distribution, setting out the percentages performing at different levels.


Table 10

Country Levels 5 and 6 Level 1 and below
  2006 2009 2012 2006 2009 2012
Australia 16.4 16.4 14.8 13.0 15.9 18.6
Canada 18 18.3 16.4 10.8 11.4 13.8
Finland 24.4 21.6 15.2 5.9 7.8 12.2
Hong Kong 27.7 30.7 33.4 9.5 8.8 8.5
Ireland 10.2 6.7 10.7 16.4 20.9 16.9
S Korea 27.1 25.5 30.9 8.8 8.1 9.1
New Zealand 18.9 18.9 15.0 14.0 15.5 22.6
Shanghai N/A 50.7 55.4 N/A 4.8 3.7
Singapore N/A 35.6 40.0 N/A 9.8 8.3
Taiwan 31.9 28.5 37.2 11.9 12.8 12.8
UK (England) 11.2 9.9 12.4 19.9 19.8 21.7
US 7.7 9.9 9.0 28.1 23.4 25.9
Average 13.4 12.7 12.6 21.3 22.0 23.0


Key points include:

  • The same pattern is discernible amongst the strongest performers as was evident with reading: those with the highest percentages at the top end tend to have the lowest percentages at the bottom. If anything this distinction is even more pronounced. Shanghai records a 52 percentage point gap between its highest and lowest performers and the latter group is only slightly larger than the comparable group in the reading assessment.
  • Amongst the Asian Tigers, the ratio between top and bottom is around 3:1 or better in favour of the top. For most of the other countries in the sample the gap between top and bottom stays within about 7 percentage points, but it stretches to more than 9 in the case of England and nearly 17 for the USA. Needless to say, the low achievers are in the majority in both cases.
  • Although the percentages for top and bottom in Australia are now broadly comparable, it has shifted since 2006 from a position where the top end was in the majority by over 3 percentage points to almost a mirror image of that pattern. In New Zealand, the lower achievers have increased by almost 9 percentage points, more than double the rate of decline at the top end, as the ‘long tail’ grows significantly longer.
  • Apart from Shanghai, only Singapore, Hong Kong and South Korea have fewer than 10% in the lower performing category. Despite its reputation as a meritocratic environment, Singapore gets much closer to Shanghai at the bottom of the distribution than it does at the top. The same is true of Hong Kong and South Korea.
  • It is also noticeable that none of the Tigers is making extraordinary progress at the bottom end. Hong Kong has reduced this population by 1 percentage point since 2006, Singapore by 1.5 points and Shanghai by 1.1 points since 2009, while the percentage has increased in South Korea and Taiwan. Improvement has been significantly stronger at the top of the distribution. Again this might suggest that the Tigers are closing in on the point where they cannot improve further at the bottom end.
  • In Finland, the percentage achieving the higher levels has fallen by over 9 percentage points since 2006, while the increase at the lower levels is over 6 percentage points. This compares with a 3 point fall at the top and a 6 point rise at the bottom in reading. The slump amongst Finland’s high achievers is clearly more pronounced in maths.
  • England’s 9.3 percentage point gap between the top and bottom groups in 2012 is slightly larger than the 8.7 point gap in 2006. It has a whopping 43 percentage point gap to make up on Shanghai at the top end, and an 18 point gap at the bottom. England is just on the right side of the OECD average at the bottom and just on the wrong side at the top.



The National Report notes that all jurisdictions ahead of England in the rankings had a higher percentage of learners at Level 5 or above.

As for the percentiles:

  • The difference between the 5th percentile (335 points) and the 95th percentile (652 points) was 316 points in England. The average difference for OECD countries was 301 points, only slightly smaller.
  • Ten countries had a greater difference than this, five of them amongst those with the highest overall mean scores. The others were Israel, Belgium, Slovakia, New Zealand and France.
  • Whereas the difference between the lowest and highest percentiles has increased very slightly across all OECD countries, this is more pronounced in England, increasing from 285 points in 2009 to 316 points in 2012. This is attributable to decreasing scores at the 5th percentile (350 in 2006, 349 in 2009 and 335 in 2012) compared with changes at the 95th percentile (643 in 2006, 634 in 2009 and 652 in 2012).



Table 11 compares the performance of this sample of PISA participants at the higher levels in the science assessment on the last three occasions.


Table 11

Country 2012 2009 2006
  Level 6 Levels 5 + 6 Level 6 Levels 5+6 Level 6 Levels 5+6
Australia 2.6 13.5 3.1 14.6 2.8 14.6
Canada 1.8 11.3 1.6 12.1 2.4 14.4
Finland 3.2 17.1 3.3 18.7 3.9 20.9
Hong Kong 1.8 16.7 2 16.2 2.1 15.9
Ireland 1.5 10.8 1.2 8.7 1.1 9.4
S Korea 1.1 11.7 1.1 11.6 1.1 10.3
New Zealand 2.7 13.4 3.6 17.6 4 17.6
Shanghai 4.2 27.2 3.9 24.3 N/A N/A
Singapore 5.8 22.7 4.6 19.9 N/A N/A
Taiwan 0.6 8.4 0.8 8.8 1.7 14.6
UK (England) 1.9 11.7 1.9 11.6 3.0 14.0
US 1.1 7.4 1.3 9.2 1.5 9.1
Average 1.2 8.4 1.1 8.5 1.3 8.8


In science, the pattern of high achievement has more in common with reading than maths. It shows that:

  • There is again a relatively narrow spread of performance across this sample of jurisdictions – just over five percentage points at Level 6 and around 20 percentage points at Level 5 and above.
  • As in reading, Singapore outscores Shanghai at Level 6, but is outperformed by Shanghai at Level 5 and above. Both are showing steady improvement, but Singapore’s improvement at Level 6 is more pronounced than Shanghai’s.
  • Finland remains the third best performer, although the proportion of learners achieving Level 6, and Level 5 and above, has been declining slightly since 2006.
  • Another similarity with reading is that Australia, Finland and New Zealand all perform significantly better at Level 6 than Hong Kong, South Korea and Taiwan. Hong Kong alone performs equally well at Level 5 and above. None of these three Asian Tigers has made significant progress since 2006.
  • In Australia, Canada, New Zealand and the US there has also been relatively little progress over time – indeed some evidence to suggest a slight decline. Conversely, Ireland seems to be moving forward again after a slight dip at Level 5 and above in 2009.



  • England was a strong performer in 2006, broadly comparable with many of its competitors. But it fell back significantly in 2009 and has made no progress since then. The proportions are holding up but there is no substantive improvement since 2009, unlike in maths and (to a lesser extent) reading. However England continues to perform somewhat higher than the OECD average. There is an interesting parallel with Taiwan, although that country dipped even further than England in 2009.

Table 12 provides the comparison with the proportions achieving the lower thresholds.


Table 12

Country Levels 5 and 6 Levels 1 and Below
  2006 2009 2012 2006 2009 2012
Australia 14.6 14.6 13.5 12.8 12.6 13.6
Canada 14.4 12.1 11.3 10.0 9.5 10.4
Finland 20.9 18.7 17.1 4.1 6.0 7.7
Hong Kong 15.9 16.2 16.7 8.7 6.6 5.6
Ireland 9.4 8.7 10.8 15.5 15.1 11.1
S Korea 10.3 11.6 11.7 11.2 6.3 6.7
New Zealand 17.6 17.6 13.4 13.7 13.4 16.3
Shanghai N/A 24.3 27.2 N/A 3.2 2.7
Singapore N/A 19.9 22.7 N/A 11.5 9.6
Taiwan 14.6 8.8 8.4 11.6 11.1 9.8
UK (England) 14.0 11.6 11.7 16.7 14.8 14.9
US 9.1 9.2 7.4 24.4 18.1 18.2
Average 8.8 8.5 8.4 19.3 18.0 17.8


  • Amongst the top performers the familiar pattern reappears. In 2012 Shanghai has 27% in the top categories against 2.7% in the bottom categories. This is very similar to reading (25.1% against 2.9%). At the bottom end, Shanghai’s nearest competitors are Hong Kong and South Korea, while Singapore and Taiwan are each approaching 10% at these levels. This is another similarity with reading (whereas, in maths, Singapore is more competitive at the lower end).
  • Since 2009, Shanghai has managed only a comparatively modest 0.5% reduction in the proportion of its students at the bottom end, compared with an increase of almost 3% at the top end. This may lend further support to the hypothesis that it is approaching the point at which further bottom end improvement is impossible.
  • No country has made consistently strong progress at the bottom end, though Ireland has made a significant improvement since 2009. There has been steady if unspectacular improvement in Hong Kong, Taiwan and Singapore. South Korea, having achieved a major improvement in 2009, has found itself unable to continue this positive trend.
  • Finland’s negative trend is consistent since 2006 at both ends of the achievement spectrum, though the decline is not nearly as pronounced as in maths. In science Finland is maintaining a ratio of 2:1 in favour of the performers at the top end, while percentages at top and bottom are now much closer together in both reading and maths.
  • There are broadly similar negative trends at top and bottom alike in the Commonwealth countries of Australia, Canada and New Zealand, although they have fallen back in fits and starts. In New Zealand the balance between top and bottom has shifted from being 4 percentage points in favour of the top end in 2006, to 3 percentage points in favour of the bottom end by 2012.
  • A similar gap in favour of lower achievers also exists in England and is unchanged from 2009. By comparison with the US (which is a virtual mirror image of the top-bottom balance in Finland, Singapore or South Korea) it is in a reasonable position, rather similar to New Zealand, now that it has fallen back.
  • England has a 15.5 percentage point gap to make up on Shanghai at the top end of the distribution, compared with a 12.2 percentage point gap at the bottom.

The PISA 2012 National Study reports that only the handful of jurisdictions shown in Table 11 above has a larger percentage of learners achieving Level 6. Conversely, England has a relatively large number of low achievers compared with these jurisdictions.

Rather tenuously, it argues on this basis that:

‘Raising the attainment of lower achievers would be an important step towards improving England’s performance and narrowing the gap between highest and lowest performers.’

When it comes to comparison of the 5th and 95th percentiles:

  • The score at the 5th percentile (343) and at the 95th percentile (674) gives a difference of 331 points, larger than the OECD average of 304 points. Only eight jurisdictions had a wider distribution, amongst them Israel, New Zealand, Luxembourg, Slovakia, Belgium, Singapore and Bulgaria.
  • The OECD average difference between the 5th and 95th percentiles has reduced slightly (from 311 in 2006 to 304 in 2012) and there has also been relatively little change in England.


Top-Performing All-Rounders

Volume 1 of the OECD’s ‘PISA 2012 Results’ document provides additional data about all-round top performers achieving Level 5 or above in each of the three domains.


[Diagram: PISA 2012 top performers]

The diagram shows that 4.4% of learners across OECD countries achieve this feat.

This is up 0.3 percentage points on the PISA 2009 figure revealed in this PISA in Focus publication.
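The all-rounder measure is straightforward to reproduce given student-level scores. Below is a minimal Python sketch with hypothetical data; the Level 5 cut-points are approximate lower score limits for PISA 2012 (the exact values are in the OECD technical documentation).

```python
# Approximate lower score limits of proficiency Level 5 in PISA 2012
# (illustrative values; consult the OECD technical report for exact figures).
LEVEL_5_CUTPOINTS = {"reading": 625.6, "maths": 607.0, "science": 633.3}

def all_rounder_share(students):
    """students: list of dicts with 'reading', 'maths', 'science' scores.
    Returns the percentage achieving Level 5 or above in every domain."""
    top = sum(
        1 for s in students
        if all(s[domain] >= cut for domain, cut in LEVEL_5_CUTPOINTS.items())
    )
    return 100.0 * top / len(students)

# Hypothetical example: one all-rounder out of two students.
sample = [
    {"reading": 640, "maths": 650, "science": 660},  # Level 5+ in all three
    {"reading": 700, "maths": 580, "science": 640},  # below Level 5 in maths
]
print(all_rounder_share(sample))  # 50.0
```

In practice the OECD computes this from plausible values for each student, but the boolean-and logic is the same: a student only counts if every one of the three thresholds is met.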

Performance on this measure in 2012, compared with 2009, amongst the sample of twelve jurisdictions is shown in Table 13 below. (NB the UK figure is for the UK as a whole, not just England.)


Table 13

Country 2012 2009
%age rank %age rank
Australia 7.6 7 8.1 6
Canada 6.5 9 6.8 8
Finland 7.4 8 8.5 4
Hong Kong 10.9 4 8.4 5
Ireland 5.7 15 3.2 23
S Korea 8.1 5 7.2 7
New Zealand 8.0 6 9.9 3
Shanghai 19.6 1 14.6 1
Singapore 16.4 2 12.3 2
Taiwan 6.1 10 3.9 17
UK 5.7 15 4.6 14
US 4.7 18 5.2 11
Average 4.4 4.1


In terms of percentage increases, the fastest progress on this measure is being made by Hong Kong, Ireland, Shanghai, Singapore and Taiwan. Shanghai has improved a full five percentage points and one in five of its students now achieve this benchmark.

The UK is making decent progress, particularly compared with Australia, Canada, Finland, New Zealand and the US, which are moving in the opposite direction.

The Report notes:

‘Among countries with similar mean scores in PISA, there are remarkable differences in the percentage of top-performing students. For example, Denmark has a mean score of 500 points in mathematics in PISA 2012 and 10% of students perform at high proficiency levels in mathematics, which is less than the average of around 13%. New Zealand has a similar mean mathematics score of 500 points, but 15% of its students attain the highest levels of proficiency, which is above the average…these results could signal the absence of a highly educated talent pool for the future.

Having a large proportion of top performers in one subject is no guarantee of having a large proportion of top performers in the others. For example, Switzerland has one of the 10 largest shares of top performers in mathematics, but only a slightly-above-average share of top performers in reading and science.

Across the three subjects and across all countries, girls are as likely to be top performers as boys. On average across OECD countries, 4.6% of girls and 4.3% of boys are top performers in all three subjects…To increase the share of top-performing students, countries and economies need to look at the barriers posed by social background…the relationship between performance and students’… and schools’ organisation, resources and learning environment.’ (p65)


Denizen by Gifted Phoenix




Priorities for Different Countries

On the basis of this evidence, it is possible to draw up a profile of the performance of different countries across the three assessments at these higher levels, and so make a judgement about the prospects in each of ‘a highly educated talent pool for the future’. The twelve jurisdictions in our sample might be advised as follows:

  • Shanghai should be focused on establishing ascendancy at Level 6 in reading and science, particularly if there is substance to the suspicion that scope for improvement at the bottom of the spectrum is now rather limited. Certainly it is likely to be easier to effect further improvement at the very top.
  • Singapore has some ground to catch up with Shanghai at Level 6 in maths. It has narrowed that gap by three percentage points since 2009, but there is still some way to go. Otherwise it should concentrate on strengthening its position above Level 5, where Shanghai is also conspicuously stronger.
  • Hong Kong needs to focus on Level 6 in reading and science, but perhaps also in maths where it has been extensively outpaced by Taiwan since 2009. At levels 5 and above it faces strong pressure to maintain proximity with Shanghai and Singapore, as well as marking the charge made by Taiwan in reading and maths. Progress in science is relatively slow.
  • South Korea should also pay attention to Level 6 in reading and science. It is improving faster than Hong Kong at Level 6 in maths but is also losing ground on Taiwan. And although South Korea now seems back on track at Level 5 and above in maths, progress remains comparatively slow in reading and science, so both Levels 5 and 6 need attention.
  • Taiwan has shown strong improvement in reading and maths since 2009, but is deteriorating in science at both Levels 5 and 6. It still has much ground to pick up at Level 6 in reading. Its profile is not wildly out of kilter with Hong Kong and South Korea.
  • Finland is bucking a downward trend at Level 6 in reading and slipping only slightly in science, so the more noticeable decline is in maths. However, the ground lost is proportionately greater at Level 5 and above, once again more prominently in maths. As Finland fights to stem a decline at the lower achievement levels, it must take care not to neglect those at the top.
  • Australia seems to be slipping back at both Levels 5 and 6 across all three assessments, while also struggling at the bottom end. There are no particularly glaring weaknesses, but it needs to raise its game across the board.
  • Canada is just about holding its own at Level 6, but performance is sliding back at Level 5 and above across all three domains. This coincides with relatively little improvement and some falling back at the lower end of the achievement distribution. It faces a similar challenge to Finland’s although not so pronounced.
  • New Zealand can point to few bright points in an otherwise gloomy picture, one of which is that Level 6 performance is holding up in reading. Elsewhere, there is little to celebrate in terms of high achievers’ performance. New Zealand is another country that, in tackling more serious problems with the ‘long tail’, should not take its eye off the ball at the top.



  • The US is also doing comparatively well in reading at Level 6, but is otherwise either treading water or slipping back a little. Both Level 6 and Level 5 and above need attention. The gap between it and the world’s leading countries continues to increase, suggesting that it faces future ‘talent pool’ issues unless it can turn round its performance.
  • Ireland is a good news story, at the top end as much as the bottom. It has caught up lost ground and is beginning to push beyond where it was in 2006. Given Ireland’s proximity, the home countries might want to understand more clearly why their nearest neighbour is improving at a significantly faster rate. That said, Ireland has significant room for improvement at both Level 6 and Level 5 and above.
  • England’s performance at Level 6 and Level 5 and above has held up surprisingly well compared with 2009, especially in maths. When the comparison is solely historical, there might appear to be no real issue. But many other countries are improving at a much stronger rate and so England (as well as the other home countries) risks being left behind in the ‘global race’ declared by its Prime Minister. The world leaders now manage three times as many Level 6 performers in science, four times as many in reading and ten times as many in maths. It must withstand the siren voices urging it to focus disproportionately at the bottom end.


Addressing These Priorities

It is far more straightforward to pinpoint these different profiles and priorities than to recommend convincingly how they should be addressed.

The present UK Government believes firmly that its existing policy direction will deliver the improvements that will significantly strengthen its international competitiveness, as judged by PISA outcomes. It argues that it has learned these lessons from careful study of the world’s leading performers and is applying them carefully and rigorously, with due attention to national needs and circumstances.



But – the argument continues – it is too soon to see the benefits of its reforms in PISA 2012, such is the extended lag time involved in improving the educational outcomes of 15 year-olds. According to this logic, the next Government will reap the significant benefits of the present Government’s reform programme, as revealed by PISA 2015.

Recent history suggests that this prediction must be grounded more in hope than expectation, not least because establishing causation between indirect policy interventions and improved test performance must surely be the weakest link in the PISA methodology.

But, playing devil’s advocate for a moment, we might reasonably conclude that any bright spots in England’s performance are attributable to interventions that the previous Government got right between five and ten years ago. It would not be unreasonable to suggest that the respectable progress made at the top PISA benchmarks is at least partly attributable to the national investment in gifted education during that period.

We might extend this argument by suggesting a similar relationship between progress in several of the Asian Tigers at these higher levels and their parallel investment in gifted education. Previous posts have drawn attention to the major programmes that continue to thrive in Hong Kong, Singapore, South Korea and Taiwan.

Shanghai might have reached the point where success in mainstream education renders investment in gifted education unnecessary. On the other hand, such a programme might help it to push forward at the top in reading and science – perhaps the only conspicuous chink in its armour. There are lessons to be learned from Singapore. (Gifted education is by no means dormant on the Chinese Mainland and there are influential voices pressing the national government to introduce more substantive reforms.)

Countries like Finland might also give serious consideration to more substantive investment in gifted education geared to strengthening high attainment in these core domains. There is increasing evidence that the Finns need to rethink their approach.



The relationship between international comparisons studies like PISA and national investment in gifted education remains poorly researched and poorly understood, particularly how national programmes can most effectively be aligned with and support such assessments.

The global gifted education community might derive some much-needed purpose and direction by establishing an international study group to investigate this issue, providing concrete advice and support to governments with an interest.



December 2013

Where is New Zealand’s Excellence Gap? – Part 2


This is Part 2 of a Post for the Blog Tour associated with New Zealand’s Gifted Awareness Week 2012. If you missed Part 1 you can find it here


I have set aside until now any discussion of the nature and application of deciles so as not to confuse the treatment of the substantive issue.

For, if it wasn’t bad enough to take New Zealand to task for apparently using ethnic background as a proxy for socio-economic disadvantage, I feel there is also some cause for concern in its tendency to use a school-level measure of disadvantage as a proxy for individual disadvantage.

The two issues are related, in that they potentially create a ‘double whammy’ situation for disadvantaged European/Pakeha learners who have the misfortune to attend a relatively advantaged school.


What are Deciles?

But non-Kiwi readers will first require an explanation of how deciles are derived and how they work.

New Zealand’s Ministry of Education explains that:

‘A decile indicates the extent to which a school draws its students from low socio-economic communities. Decile 1 schools are the 10% of schools with the highest proportion of students from low socio-economic communities. Decile 10 schools are the 10% of schools with the lowest proportion of these students.’

It follows that deciles are not always a reliable measure of the individual socio-economic backgrounds of pupils in a school. A significant minority of the pupils at a low decile school may be relatively advantaged and vice versa.

Deciles are calculated following each 5-yearly New Zealand Census, though schools can apply for a review betweentimes. The calculation is based on the smallest census areas (known as ‘meshblocks’) in which a school’s students live.

It is based on five percentages:

  • households with adjusted income in the lowest 20% nationally;
  • employed parents in unskilled and semi-skilled occupations;
  • crowded households (i.e. fewer than one bedroom per couple, per single person aged 10 or over, or per pair of children aged under 10);
  • parents with no school or tertiary qualifications; and
  • parents who directly received income support from specified sources.

These five factors are weighted to reflect the number of students per meshblock, so those housing more students will have a relatively greater impact on the school decile.

Schools are placed in rank order for each of the five factors, receiving a score for each. The five scores are totalled and the total scores are used to divide schools into the 10 deciles, each containing the same number of schools. No further weighting is applied.
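As a rough sketch of the mechanics just described, the ranking-and-totalling step might look like this in Python. This is an illustration only: the function name and data shapes are my own, and the real calculation weights meshblock data by student numbers before any ranking takes place.

```python
# Simplified sketch of the decile calculation described above.
# The Ministry's actual method weights meshblock data by student
# numbers; here each school simply has one percentage per factor.
def assign_deciles(factor_scores):
    """factor_scores maps school -> list of five factor percentages.
    Rank schools on each factor, total the ranks, then split the
    ordered list into ten equal groups (decile 1 = most disadvantaged)."""
    schools = list(factor_scores)
    n_factors = len(next(iter(factor_scores.values())))
    totals = {s: 0 for s in schools}
    for i in range(n_factors):
        # A higher percentage means more disadvantage, so it earns a
        # lower rank number on that factor.
        ranked = sorted(schools, key=lambda s: factor_scores[s][i], reverse=True)
        for rank, school in enumerate(ranked):
            totals[school] += rank
    # Lowest total score = most disadvantaged overall = decile 1.
    ordered = sorted(schools, key=lambda s: totals[s])
    group_size = max(1, len(schools) // 10)
    return {s: min(10, idx // group_size + 1) for idx, s in enumerate(ordered)}
```

With ten schools this yields one school per decile; in practice each decile contains a tenth of all New Zealand schools.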

The published Schools Directory contains the decile alongside each entry, so parents are fully aware of a school’s decile and that knowledge may influence their admissions preferences.

The Ministry’s guidance on applying for a change of decile notes:

‘In the past, boards and principals seeking a review have provided information on such things as rurality, fluid rolls, the incidence of single parent families or students with special needs. While such matters certainly impose organisational problems on a school, they are not factors that are used to determine the decile.

The decile does not indicate the “average” socio-economic status of families that contribute to the school roll, but focuses on five specific factors that have been shown to affect academic achievement.’

Deciles are used to determine the funding received by a school. Indeed, several different funding elements are allocated on this basis:

  • Targeted Funding for Educational Achievement (TFEA) (deciles 1-9)
  • Special Education Grant (SEG) (deciles 1-10)
  • Careers Information Grant (CIG) (deciles 1-10)
  • Kura Kaupapa Māori Transport (deciles 1-10)
  • Priority Teacher Supply Allowance (PTSA) (deciles 1-2)
  • National Relocation Grant (NRG) (deciles 1-4)
  • Decile Discretionary Funding for Principals (deciles 1-4)
  • Resource Teachers of Learning and Behaviour (RTLBs) Learning Support Funding (deciles 1-10)
  • RTLBs for years 11-13 (deciles 1-10)
  • School Property Financial Assistance scheme (deciles 1-10)
  • Study Support Centres (deciles 1-3)
  • Social Workers in Schools (deciles 1-5)
  • District Truancy Service (deciles 1-10)

Some sources suggest that deciles affect some 15% of schools’ operational funding overall, but the first element in the list above seems by far the most significant for the purposes of this post.

Lakeside courtesy of Chris Gin

Targeted Funding for Educational Achievement

Targeted Funding for Educational Achievement (TFEA) is:

‘a resource provided to assist boards of decile 1–9 schools to lower barriers to learning faced by students from low socio-economic communities. It is calculated and funded on a per pupil, decile related basis’

So, whereas a significant proportion of school funding is dependent on school decile, the TFEA element of that funding is derived on a per capita basis. The text is ambiguous but, as far as I can establish, this sum is payable for every student on the school roll, rather than only for students below a specified personal threshold of disadvantage.

If so, there will be an inevitable element of deadweight in lower decile schools and a degree of rough justice in higher decile schools, because that is the way a proxy operates.

The current rates for TFEA are set out below, with figures provided inclusive and exclusive of Goods and Services Tax.

As one can see, the rate varies significantly according to school decile. The highest rate for decile 1 schools is approaching 50% more than the highest rate for decile 2 and 3 times as much as the highest rate for decile 3.

For decile 5 and below, the sums available are relatively insignificant and, for the highest decile, they disappear entirely.
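The per-capita mechanics can be illustrated with a short sketch. The rates below are placeholder figures invented to echo the ratios just described; they are not the actual published rates.

```python
# How TFEA works mechanically: a per-pupil rate set by the school's
# decile, paid for every student on the roll. The rates below are
# PLACEHOLDER figures chosen only to echo the ratios mentioned above
# (decile 1 approaching 1.5x the decile 2 rate and 3x the decile 3
# rate) - they are not the actual published rates.
TFEA_RATE_PER_PUPIL = {1: 900, 2: 630, 3: 300, 4: 150, 5: 75,
                       6: 50, 7: 35, 8: 20, 9: 10, 10: 0}

def tfea_allocation(decile, roll):
    """Allocation depends on the whole roll, not on how many of the
    school's pupils are individually disadvantaged."""
    return TFEA_RATE_PER_PUPIL[decile] * roll

# Two schools of 500 pupils receive very different sums purely by
# virtue of their decile - the source of the 'deadweight' and 'rough
# justice' described above.
print(tfea_allocation(1, 500))  # 450000
print(tfea_allocation(9, 500))  # 5000
```

The point of the sketch is that the pupil count, not the pupils’ individual circumstances, drives the allocation once the decile is fixed.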

The History of Deciles and TFEA

There is useful background about the development of this system in a December 2003 Inquiry into decile funding by the Education and Science Committee of the New Zealand House of Representatives

TFEA was first introduced in 1995 but, until 1997, it was awarded only to low decile schools. It was the first tranche of funding awarded using the decile system – all the other elements above have been added subsequently.

The 18 funding steps in the TFEA system were already present at the time of the inquiry. (The three different rates for schools in each of deciles 1-4 are presumably calculated by dividing each ranked decile into three equal parts, but the text does not state this.)

When the Inquiry was held in 2003, deciles were derived from six indicators rather than five, the sixth being the proportion of students of Maori or Pacific Islands origin within the meshblock.

Some of the Inquiry’s respondents were reportedly critical of this:

‘Some contended that as Maori and Pacific Island families are over-represented in statistics reflecting socio-economic disadvantage, there may be a ‘double counting’ of disadvantaged minority groups, thus skewing the decile rankings of some areas.’

 In other words, because Maori and Pasifika were already over-represented in the other five ethnically-neutral elements of the calculation, there was a risk that areas with a significant Maori/Pasifika population would benefit disproportionately.

Some of the Inquiry’s members felt this:

 ‘creates an incentive for schools to push the boundaries regarding the proportion of Maori and Pacific Island students on their roll’

though why that should be a bad thing is never explained.

Such was the sensitivity of any adjustment that it was referred to a much wider review of targeted programmes. The Minister responsible declared that:

‘The objective of the review is to give ministers and the public assurance that policy is being developed on the basis of need, not on the basis of race’

showing that the Government of the time was thoroughly alert to the risk that I have been discussing in this post.

But the report of the first results makes it clear that the review did not examine the full range of education policy. The Minister says:

‘The Labour-led government firmly believes in giving everyone a fair go. Unlike the National party, we are committed to lifting Maori and Pacific Island job prospects, educational achievement and health.

We will continue to use targeted programmes and policies for specific ethnic groups that prove effective at addressing proven needs, just as we do for other groups of New Zealanders who need specific help, such as the elderly or those in rural communities.

These reviews have confirmed that for most of these programmes, targeting by ethnicity is appropriate, as there is good evidence that this sort of targeting is addressing need effectively. Because of this these programmes will not be changed.’

In fact, the review concludes that change is required in only a single area – the calculation of deciles.

Even so, the Minister feels it necessary to announce additional targeted support to sweeten the pill:

‘There is increasing evidence and research that suggests that lifting educational achievement for Maori and Pasifika students is better done through tailored programmes that address certain factors – such as giving teachers the support and the skills to teach students from different backgrounds who have different needs.

We are investing in these sorts of programmes already. As well, I am announcing today two new initiatives, worth $11.5 million over three years, that will support more effective teaching. The first will develop, pilot and establish a national approach to training educators who teach teachers. The second will apply recent research findings about what works in the classrooms for Maori and Pasifika students to ten pilot studies involving teachers in clusters of schools.’

The first part of this quotation explains why there has been continuing emphasis on targeted support for Maori and Pasifika learners under successive Governments.

But it raises the question of whether other disadvantaged students would not benefit from similar tailored programmes, rather than relying principally on the distribution of weighted funding to low decile schools.

Meanwhile, the Inquiry into decile funding also recommended that the Ministry of Education should undertake research into the effectiveness of TFEA in improving the learning outcomes of disadvantaged students and disseminate best practice guidance to schools (and that schools should account for how they spend this resource):

 ‘Due to the absence of research in this area, we have been unable to determine how effective the Targeted Funding for Educational Achievement scheme is at improving the outcomes of students who face barriers to learning due to their socio-economic status. We believe that such research is important to ensure that decile-based funding is achieving its stated goals.’

I have not been able to track down any research commissioned in response to this recommendation.

In a submission to the Inquiry the New Zealand Council for Educational Research argues that:

‘Closing the overall gap in student achievement levels in relation to schools serving different socioeconomic communities is a somewhat different purpose from that of improving the learning outcomes for individual low-achieving students, in whichever decile school they attend. There is overlap between these two, which is evident in some of the views expressed by those in high decile schools concerned about meeting the needs of their lower-performing students….

Trying to assess individual student learning needs and provide funding accordingly on an individual basis would prove to be enormously expensive. The testing required would withdraw money from the public funding available which many in the sector already regard as inadequate for the higher expectations we now have that every student will achieve a level of educational performance which will be satisfying, and provide a useful basis for meaningful employment and social participation.

If such a system were used only for those students thought to face additional learning barriers, and based on school applications, then it would face the same problems of lack of fairness and inconsistency that were apparent in the system replaced by the decile rankings, and would not reach all those who might need it.

Individual vouchers would prove costly and very difficult to administer, and put school principals under great pressure, as the experience of principals with parental expectations related to the average per student funding of the ORRS scheme used in special education shows (Wylie 2000). Nor have individual vouchers for students proved to make a difference for student learning… What does make a difference is to build on what we know to ensure teachers are well equipped with the knowledge, curriculum and assessment resources, and time to work with individual students, and to work together to share knowledge of individual students, and to improve their practice.’

This focus on individual low-achieving students rather misses the point. Isn’t the first question to address why individual disadvantaged students should not be targeted in this manner?

The assumption that the only solution lies in vouchers is also misplaced, as the English Government’s decision to adopt a Pupil Premium demonstrates. As far as I can see, there is fundamentally no reason why TFEA could not be awarded to schools on the basis of individual student need. All that would be required is a robust definition of need which could be applied to all learners.


Te Anau courtesy of Stuck in Customs

Distribution of pupils by ethnic background according to School Decile

There is no doubt that pupils from different ethnic background are very differently distributed within high and low decile schools.

The July 2011 data gives the following rounded percentages (the totals also include learners from other backgrounds):

                 European/Pakeha   Maori   Pasifika   Total
Deciles 1-3             8%          44%       60%      22%
Deciles 8-10           50%          16%       12%      39%

It is quite clear that funding and policies targeted at low decile schools will disproportionately benefit learners from Maori and Pasifika backgrounds.

Unfortunately, I have been unable to track down any data showing the distribution of economically disadvantaged learners between schools in different deciles. We cannot see the extent to which learners’ personal background reflects the decile of the institution that they attend.

There is bound to be significant overlap between these two given the way New Zealand school admissions operate and how deciles are calculated, but the match will only be approximate – there will be a minority of individually disadvantaged learners attending schools above decile 3 and, similarly, advantaged learners attending schools below decile 8.

So we have a second proxy in play that will tend to put disadvantaged learners from European/Pakeha backgrounds who have the misfortune to attend mid and high decile schools further towards the back of the queue.

If TFEA funding were tied explicitly to meeting the needs of the disadvantaged pupils who attract it – which is not strictly the case with the Pupil Premium in England – a significant proportion of the deadweight in the current allocation could be eradicated.

It would also help ensure that gifted learners from disadvantaged backgrounds were equal beneficiaries, since there is otherwise a risk that the funding is tied almost exclusively to eradicating New Zealand’s ‘long tail of underachievement’ at the lower end of the attainment spectrum.

The Range at Night courtesy of Stuck in Customs


What do we know about New Zealand’s Disadvantaged Gifted Learners?

As far as I can discover online, relatively little is known about the size and composition of New Zealand’s population of disadvantaged gifted learners.

New Zealand’s guidance does not rely on a fixed percentage definition of gifted learners, pointing out that different approaches to identification can result in very different outcomes – estimates range from 1% to 15% of the school population, even assuming that the guidance is rigorously observed in every school.

If we estimate, for the sake of illustration, a mid-point of around 8% then, on the assumption that giftedness is evenly distributed in the school population – and on the basis of 2011 school rolls – New Zealand’s national gifted and talented population would number some 61,000 learners all told.

  • Overall, over 33,000 European/Pakeha learners will be gifted and talented, with approaching 14,000 learners from a Maori background and around 6,000 from a Pasifika background.
  •  About 13,400 of those will be attending disadvantaged decile 1-3 schools. Over 6,000 of them will be from a Maori background, approaching 3,600 from a Pasifika background and around 2,600 of them European/Pakeha.
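The arithmetic behind these estimates can be reconstructed as follows. The roll figures are my own back-calculated approximations rather than official 2011 statistics, and the decile 1-3 attendance shares are the rounded percentages quoted earlier.

```python
# Back-of-envelope reconstruction of the estimates above. The roll
# figures are approximations inferred from the post's own numbers,
# not official 2011 statistics.
GIFTED_SHARE = 0.08  # illustrative mid-point of the 1-15% range

rolls = {"European/Pakeha": 412_500, "Maori": 175_000, "Pasifika": 75_000}
total_roll = 762_500  # all backgrounds, including 'other'

# Share of each group attending decile 1-3 schools (rounded, July 2011).
low_decile_share = {"European/Pakeha": 0.08, "Maori": 0.44, "Pasifika": 0.60}

gifted_total = total_roll * GIFTED_SHARE
gifted_by_group = {g: r * GIFTED_SHARE for g, r in rolls.items()}
gifted_low_decile = {g: n * low_decile_share[g]
                     for g, n in gifted_by_group.items()}

print(round(gifted_total))                   # 61000
print(round(gifted_by_group["Maori"]))       # 14000
print(round(gifted_low_decile["Pasifika"]))  # 3600
```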

While there is apparently no hard data, we do know – from the sources I have already quoted – that pupils from Maori and Pasifika backgrounds and from disadvantaged backgrounds (whether or not represented by the proxy of low decile schools) are heavily under-represented.

That seems fairly typical of gifted and talented populations worldwide.

Another conclusion we might reasonably draw is that, with approximately this number of disadvantaged gifted and talented learners concentrated in a relatively small land mass, it ought to be feasible to design a single personalised intervention programme to support them all, funding permitting of course…

How Might We Learn More?

Although there are significant political and professional hurdles to overcome, most of the elements are seemingly in place to secure useful national data about New Zealand’s gifted disadvantaged learners.

New Zealand’s National Administration Guidelines require school boards to:

‘on the basis of good quality assessment information, identify students and groups of students:

i. who are not achieving;
ii. who are at risk of not achieving;
iii. who have special needs (including gifted and talented students)’

Such groups must include Maori students by virtue of a separate requirement:

‘in consultation with the school’s Māori community, develop and make known to the school’s community policies, plans and targets for improving the achievement of Māori students’

There are no other specific requirements relating to the ethnic or socio-economic composition of these groups.

However, where a school has students enrolled in Years 1-8, the board is required, from 2011, to use New Zealand’s National Standards to:

‘Report in the board’s annual report on:

i. the numbers and proportions of students at, above, below or well below the standards, including by Māori, Pasifika and by gender (where this does not breach an individual’s privacy); and

ii. how students are progressing against the standards as well as how they are achieving.’

The New Zealand Government seems to have decided in favour of the public dissemination of such data, despite professional fears about the creation of school league tables and their capacity to mislead the public.

The Ministry of Education is reportedly compiling a report based on the first round of data, to be published in September. The Prime Minister has argued that there is parental demand for such data, which will in any case be made available by the media, who can access it under the Official Information Act.

The twin requirements on school boards to identify gifted and talented learners and to report achievement against the standards provide a mechanism which could potentially be used to collect data about the incidence of gifted and talented learners, broken down by ethnicity and gender, and their performance (recognising the limitations of the four-level scale deployed within the standards for this purpose).

Such data would certainly be analysed by school decile – so using the familiar proxy for personal disadvantage – which would permit the derivation of approximate data about the numbers and distribution of gifted disadvantaged learners, though with the shortcomings we have identified above.

If the Government were also prepared to specify that the data collection should include disadvantaged learners identified on the basis of a personal measure of disadvantage, that would of course be far preferable.

Meanwhile, the lack of national data collection on this basis would not prevent the collection of sample data by the gifted education community from schools willing to supply it.

Whakapapa River courtesy of ed37

How Might New Zealand’s Gifted Disadvantaged Learners Be Supported?

What follows is a personal perspective from a distance of several thousand miles – only New Zealand’s gifted educators will know whether these ideas make sense in their particular national context, but here goes anyway!

In researching this post, I have come across several interesting initiatives that were new to me, including the University of Auckland’s Starpath Project and the First Foundation, which both seem commendably focused on disadvantage regardless of the ethnic background of the disadvantaged young people they support (though in both cases school decile seems to be an integral part of the identification process).

A map of such existing provision would help to identify the gaps that need to be filled, and inform any intention to draw existing provision into a single framework servicing the entire gifted disadvantaged population.

Other possibilities include:

  • Ensuring that national and school policy statements explicitly recognise the complex relationship between ethnic background, disadvantage and several other key variables such as gender, special educational need, even month of birth.
  •  Making clear the downside of a proxy-driven approach – specifically that some key parts of the disadvantaged gifted population are overlooked while other, relatively less disadvantaged learners will benefit in their place.
  •  Introducing strategies to encourage schools to identify gifted and talented learners from disadvantaged backgrounds. For example, guidance could promote the idea that schools’ gifted and talented populations should broadly reflect their intake. Schools’ decisions could be audited, perhaps on a sample basis. Effective school level strategies could be developed and disseminated nationally;
  •  Developing greater understanding of how disadvantage impacts on gifted learners, including those from non-Maori and non-Pasifika backgrounds. Is there a distinct poor European/Pakeha population whose needs are not being met? Do Asian learners from disadvantaged backgrounds tend to overcome these more readily, as is the case with some Asian populations in England? If so, what can be learned from that?
  •  Developing personalised solutions that address the very different causes of disadvantage faced by gifted learners, ensuring that support for Maori/Pasifika dimensions is proportionate and part of a sophisticated toolkit of strategies available to schools. These could be targeted on the basis of a needs analysis process, as embodied in the recently published questionnaire.
  •  Developing guidelines for the providers of generic non-targeted out of school programmes to ensure that they too provide support for a population that is broadly representative by gender, ethnic background and other such variables, rather than disproportionately serving one group or another.
  •  Monitoring and evaluating the impact of this strategy and progress towards specified outcomes, tied explicitly to reducing the excellence gap.

For what it’s worth, current interest in charter schools seems largely irrelevant to this discussion because the terms of reference are explicit that such schools will not be selective, so they will not be able to prioritise disadvantaged gifted learners in their admission arrangements.

Whereas in England free schools for students aged 16-19 are not caught by the Government’s moratorium on new selective schools, there doesn’t seem to be a similar escape clause in New Zealand. So charter school pilots might serve at best as models and laboratories for disadvantaged learners of all abilities.

Assuming, of course, that they will really serve disadvantaged learners, rather than acting as a magnet for the middle classes.

The Prospects Are Good

There is evidence to suggest that New Zealand’s disadvantaged gifted learners are already relatively well-placed.

Another PISA publication called ‘Against the Odds: Disadvantaged Students Who Succeed in School’ uses PISA 2006 science results to inform an analysis of resilient students from disadvantaged backgrounds who score highly on the PISA assessment.

The report uses two definitions of resilience:

  • A within-country definition which looks at those students who fall within the bottom third of their country’s distribution by socio-economic background but who nevertheless score within the top third of PISA entrants in that country;
  •  An across-country definition which looks at students who, as above, fall within the bottom third of their national distribution by socio-economic background but who performed in the top third of all PISA entrants after controlling for socio-economic background.
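Expressed as a filter, the within-country definition looks something like this. This is a sketch with invented data: the real analysis uses PISA’s ESCS index of economic, social and cultural status, and handles ties and sampling weights properly.

```python
# Sketch of the 'within-country' resilience definition: students in
# the bottom third of their country's socio-economic distribution who
# nevertheless score in the top third of that country's PISA entrants.
# (The real analysis uses PISA's ESCS index; the data here is invented.)
def resilient(students):
    """students: list of (ses, score) pairs for one country."""
    n = len(students)
    ses_sorted = sorted(ses for ses, _ in students)
    score_sorted = sorted(score for _, score in students)
    ses_cut = ses_sorted[max(0, n // 3 - 1)]  # top of the bottom third
    score_cut = score_sorted[2 * n // 3]      # bottom of the top third
    return [(ses, score) for ses, score in students
            if ses <= ses_cut and score >= score_cut]

# Nine invented students: low SES usually goes with low scores, but
# one student bucks the trend and qualifies as resilient.
students = [(1, 95), (2, 20), (3, 30), (4, 50), (5, 55),
            (6, 60), (7, 80), (8, 85), (9, 90)]
print(resilient(students))  # [(1, 95)]
```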

New Zealand is one of a handful of countries in which the proportion of such students is close to 50%. These resilient students ‘are more motivated, more engaged and more self-confident than their disadvantaged low-achieving peers’.

This finding is supported in domestic studies by Nadine Ballam:

‘Socioeconomic adversity was found to be more intrinsically valuable than damaging in terms of talent development and self-identity. This challenges stereotypic perceptions that may be commonly held about individuals who come from financially disadvantaged backgrounds. It also broadens the picture of what has traditionally been suggested to be characteristic of people living in low socioeconomic situations. Participants reported that drive and determination, a strong work ethic, and an appreciation for things that may be less significant to more financially advantaged young people were the most intrinsically beneficial elements of their financial constraints. While many participants also experienced some negative impacts related to their socioeconomic circumstances, it appeared that their determination to change their situations tended to counteract any long lasting influence that these effects may have had…

Further findings from this study revealed that physical assistance provided in the form of tangible resources and opportunities actually contributed to the participants’ overall sense of wellbeing. However, it could well be that more focus is also required on the intrinsic aspects, and on supporting and empowering these young people to develop a strong and secure sense of their own identity, whatever this may mean for the individual within the context of their challenging situations.’

For many disadvantaged young people, that identity will include belonging to the Maori or Pasifika communities, but for others it will not.

Last Words

This post has explored a perceived tendency for New Zealand to use ethnic proxies for personal educational disadvantage rather than relying on a more targeted measure.

It has:

  • Examined how this is evidenced in key documents and considered whether the available data supports or contradicts the tendency.
  • Suggested that the pre-eminent use of deciles to target funding and support compounds the problem.
  • Pointed out the implications for gifted disadvantaged learners and for the country’s efforts to tackle the excellence gap between the attainment of its advantaged and disadvantaged gifted learners.
  • Cautiously proposed some starting points for further information-gathering about the size and distribution of New Zealand’s gifted disadvantaged learners and for designing a coherent response to their needs.

The result is a jigsaw puzzle with several pieces still missing.

When I do finally visit New Zealand I hope to find out whether the adoption of Maori and Pasifika background as a proxy for disadvantage is a reality.

If so, is it a ‘truth that dare not speak its name’ or an openly acknowledged accommodation that reflects the historical guilt of one community and the historical suffering of two more?



June 2012

Where is New Zealand’s Excellence Gap? – Part 1

This post is my contribution to the Blog Tour for New Zealand’s Gifted Awareness Week 2012. It asks:

  • Whether New Zealand is too ready to adopt proxies for educational disadvantage and
  • If that hinders its capacity to narrow the attainment gap between advantaged and disadvantaged learners, especially the ‘excellence gap’ between gifted learners from advantaged and disadvantaged backgrounds.

It is written with one eye on the New Zealand audience and the other on a global readership. The former may find I am telling them too many things they already know; the latter may feel that some essential background material is lacking. I have tried to steer a middle way.

I present this analysis with all due humility – as appropriate for a non-Kiwi who has never once visited New Zealand and is relying exclusively on material available online – but in the genuine hope that it will stimulate further discussion and debate.

I have divided the text into two parts on an entirely arbitrary basis, simply because it is too long to form a single post.

Reflections on Last Year’s Post

In 2011 my contribution to the New Zealand Gifted Awareness Week (NZGAW) Blog Tour was a two-part analysis – here and here –  of how vouchers could be applied to gifted education, featuring the proposals in Step Change: Success the Only Option.

As we all know, education vouchers are a controversial market-based education reform, increasingly prevalent in the United States but with a relatively limited foothold elsewhere. They are as yet almost entirely unknown in gifted education.

I am afraid I was rather dismissive of the politically-inspired proposals within ‘Step Change’, though I did not dismiss outright the potential of voucher schemes to support gifted education. Despite the shortcomings of the Step Change scheme, its originators deserve some credit for framing the suggestion in the first place.

I thought my post was rather provocative, but it raised barely a whimper.

Vouchers may excite policy wonks but they are some distance away from the everyday concerns of busy educators. As far as New Zealand colleagues were concerned, they were little more than a theoretical irrelevance, because the Step Change proposals had been ditched, publicly and unceremoniously, by the time I published my post.

In Search of a Topic for 2012

Charter schools are the latest ‘big idea’ imported into New Zealand, currently receiving consideration by a dedicated working group. At this early stage it is hard to know whether the report it will produce in due course is destined for the same treatment as ‘Step Change’, though that is a distinct possibility.

I could have written about charter schools but, in reflecting on them as a possible topic, I found myself distracted by a much more fundamental, sensitive and controversial question to which I did not have the answer.

Unlike vouchers – and probably charter schools too – it goes to the very heart of New Zealand’s educational policy and practice, and is directly relevant to how New Zealand policy makers and practitioners envision and implement gifted education.

Quite rightly in my view, New Zealand places very strong emphasis on a socially and culturally inclusive approach to education, and gifted education is no exception. It is rightly expected that gifted learners will be drawn from across society, including from Maori, Pasifika and disadvantaged backgrounds.

But, although this expectation is expressed in terms of ethnicity and disadvantage, it often seems – at least from this distance – that the issue is being addressed almost entirely in terms of ethnicity.

It seems that there is, quite rightly, a big investment in meeting the needs of Maori and Pasifika learners, including gifted learners, much of it on the basis that belonging to those cultures is synonymous with disadvantage.

Now I perfectly understand that learners from those backgrounds are heavily and disproportionately represented amongst the socio-economically disadvantaged population in New Zealand.

But I am also sure that there is a minority of relatively advantaged Maori and Pasifika learners and, perhaps more to the point, a significant number of socio-economically disadvantaged learners who are not from Maori and Pasifika backgrounds.

I wondered whether this appearance is reflected in reality and, if so, why New Zealanders have reached a position where Maori and Pasifika cultural backgrounds have become an imperfect proxy for socio-economic disadvantage.

I was curious if, as a consequence, poor learners from other backgrounds are relatively neglected, perhaps even overlooked. I wondered whether that circumstance might apply equally to gifted education.

This topic seems almost taboo in New Zealand educational circles. I am sure that many readers will feel I am trespassing into territory I do not understand – and clumping around in hobnailed boots where angels fear to tread.

It may be that the evidence overall does not support this analysis, in which case I am more than ready to adjust it accordingly. But I feel the need to pose the questions nevertheless.

A Drive to Remember courtesy of WanderingTheWorld

 New Zealand’s Educational Policy and Priorities

To get a grasp on how national educational priorities are articulated within Government, I began with the Ministry of Education’s Briefing to the Incoming Minister (December 2011).

The Executive Summary illustrates beautifully the disparity between expectation and implementation I outlined above.

The opening paragraph expresses the overarching aim thus (the emphasis is mine):

‘Our over-riding goal is a world-leading education system that equips all learners with the knowledge, skills and values to be successful citizens in the 21st Century. Although New Zealand’s education system has many strengths, with systematic under-achievement for Maori, Pasifika and other learners from poorer backgrounds, we are a considerable way from achieving that goal. New Zealand’s highest achieving learners compare with the best in the world, but those groups least well served by New Zealand’s education system achieve outcomes comparable with the lowest performing OECD countries.  The social consequences of this are all too clear. The economic consequences are equally unacceptable.’

This text might be criticised because it implies that Maori, Pasifika and poor learners on the one hand and high achievers on the other are two mutually exclusive populations but, that aside, it states New Zealand’s fundamental educational problem with admirable clarity.

But, having stated the problem in this manner, the next few paragraphs make no further reference to those ‘other learners from poorer backgrounds’, implying that there is no policy solution targeted specifically at them.

Instead, the issue is addressed entirely in terms of ethnic background:

‘The attainment gaps between learners of different ethnicities are stubborn and in danger of being viewed as inevitable. They are not…

the issue of Maori and Pasifika underachievement is pervasive and needs to be addressed in every setting, and in schools of every decile…

….Educational achievement for all is the single most important issue facing New Zealand education and in order to achieve a step change in outcomes for Maori and Pasifika we need to be relentless in our focus on good education outcomes for every single child and adult learner.  We need to “stress test” all of our current policy settings, including funding mechanisms, programmes and interventions and ask if they are doing all they can to address this fundamental weakness in New Zealand’s education system.’

The original point about the distribution of disadvantage is reinforced later in the Briefing, within an analysis of performance against key indicators by ethnic group:

‘Despite some overall improvements, the gap between our high performing and low performing students remains one of the widest in the Organisation of Economic Cooperation and Development (OECD). These low performing students are likely to be Maori or Pasifika and/or from low socioeconomic communities.

Disparities in education appear early and persist throughout learning. The Table below highlights some of this participation and achievement disparity between Maori, Pasifika and non-Maori/Pasifika…Although there is a relationship between socio-economic status, ethnicity and achievement, these are not pre-determinants for success or failure. There is a spread of achievement within these groups.’

We will return to the Table later. For now the critical point is the recognition of a complex relationship between ethnicity, socio-economic disadvantage and achievement.

Given that understanding, one might expect the next stage of the argument to be insistence on a personalised approach, designed to meet the very different needs of disadvantaged learners, who are affected in complex ways by the interaction of these and several other variables.

Instead, we are told that a key challenge must be addressed:

‘We must support Maori, Pasifika and students with special needs to realise their inherent potential to achieve educational success. This goal requires giving full effect to the Government’s strategies for these groups: Ka Hikitia: Managing for Success, the Pasifika Education Plan and Success for All – Every School, Every Child.’

Special needs makes it into the equation, but what has happened to those from disadvantaged backgrounds who have the misfortune to sit outside the Maori and Pasifika communities?

This is by no means an isolated example. The same elision features in the Ministry of Education’s Statement of Intent 2012-17 which again identifies four priority groups:

‘Improving education outcomes for Maori learners, Pasifika learners, learners with special education needs and learners from low socio-economic backgrounds’

But when the ‘operating intentions’ are spelled out, one searches in vain for any separate and specific reference to targeted support for the latter group:

‘We will improve education outcomes for our priority groups by focusing on the evidence of what works best. We will use policy, accountability and funding levers to maximise improvement for these learners. To make the system work, it is critical to have and use information that informs best practice and makes it possible to target support and resources effectively….

We will report regularly on the progress the system is making towards improving its performance for and with Māori learners, using Ka Hikitia – Managing for Success as the framework. We will implement a refreshed version, Ka Hikitia – Managing for Success 2013-2017, based on emerging research and evidence. This will further focus the Ministry’s activity and that of education providers to improve the education system for and with Māori.

As part of the refresh of Ka Hikitia – Managing for Success, specific targets will be set and communicated. These targets will address the Government’s priorities and will align with the Better Public Services result areas. Targets will be set to increase the proportion of:

    • Māori children participating in early childhood education
    • Māori learners with NCEA Level 2 or an equivalent qualification
    • Māori 25- to 34-years-old, with a qualification at level 4 or above on the New Zealand Qualifications Framework…

…We will implement a new, updated Pasifika Education Plan for 2013-2017, which will support the education system to perform better for Pasifika learners, and to focus on sustainable and continuous improvement. The plan will set ambitious targets to increase Pasifika participation in early childhood education and the percentage of Pasifika learners with NCEA Level 2 or an equivalent qualification, aligning with the Better Public Services result areas.

Setting, and then achieving, the goals and targets of the plan will be a joint project between the Ministry and the Ministry of Pacific Island Affairs. We will work with education agencies to ensure their plans for increasing Pasifika learners’ achievement align with the Pasifika Education Plan….

…We will continue to implement Success for All – Every School, Every Child to ensure all learners with special education needs are able to learn and succeed in the education setting of their choice.

The Government has set a performance target of 80% of schools demonstrating inclusive practice of learners with special education needs by the end of 2014, with the remaining 20% demonstrating good progress. No schools should be doing a poor job of providing an inclusive learning environment for these learners.’

Are we to conclude that, for learners from low socio-economic backgrounds who fall outside the other ‘target groups’, there is no need for targeted intervention? If so, what is the rationale for this decision and where is the evidence presented?

The Remarkables courtesy of WanderingTheWorld

The Elision is Repeated in NZ Gifted Education Documents

Some of the key reference documents for New Zealand’s gifted educators perform exactly the same trick, though this is not universally true. The older documents appear more inclusive, perhaps suggesting that the socio-economically disadvantaged did not disappear from view until midway through the last decade.

The Ministry of Education’s publication: Gifted and Talented Students: Meeting their needs in NZ Schools (2000) notes that:

‘New Zealand is a multicultural society with a wide range of ethnic groups.  The concept of giftedness and talent that belongs to a particular cultural group is shaped by its beliefs, values, attitudes, and customs. The concept varies from culture to culture. It also varies over time.

It is important that each school incorporates relevant cultural values into its concept of giftedness and talent. These values will also influence procedures used for identifying students from different cultural groups and for providing relevant programmes. Culturally diverse and economically disadvantaged students are grossly under-represented in programmes for the gifted and talented. Schools must make a special effort to identify talented students from these groups.’

It moves on to consider identification issues for each of a series of vulnerable groups and offers specific guidance on identifying disadvantaged gifted learners:

‘Students from Low Socio-economic Backgrounds

Disadvantaged gifted and talented students (or gifted and talented students from low socio-economic backgrounds) are difficult to identify and are seriously underrepresented in programmes for the gifted and talented. Since the performance of these students generally declines the longer they are at school (by comparison with students from more advantaged backgrounds), it is critically important to identify them as early as possible. Attention should focus on early childhood education and on the junior school.

Traditional identification methods tend to be ineffective with this group of students. Standardised tests of achievement and intelligence may penalise students from lower socio-economic backgrounds. Non-verbal tests of general ability, such as the Standard Progressive Matrices, are more culturally fair although they do not predict academic performance as well as some tests.

The accuracy of teacher identification can be increased with the use of checklists designed specifically for identifying disadvantaged gifted students. Peer nominations have proved promising, particularly where peers have identified areas of special ability outside the classroom, such as art, music, sport, and leadership. Of particular value, however, has been the responsive learning environment approach for this group of students. When coupled with early identification and intervention, it is usually the most effective method.’

But, moving ahead to 2008, while the ERO Report on ‘Schools’ Provision for Gifted and Talented Students’ follows the earlier Ministry publication in advocating identification processes that:

‘Identify special groups, including Maori, students from other cultures/ethnicities, students with learning difficulties or disabilities, underachievers, and those from low socio-economic backgrounds’,

when it comes to reporting on and exemplifying effective practice, the latter group simply vanishes.

  • In establishing indicators of good practice for defining and identifying giftedness, ERO sought evidence that Maori and multicultural concepts were incorporated and that students identified ‘reflected the diversity of the school’s population’.
  • Only 5% of schools could demonstrate a ‘highly inclusive and appropriate’ approach on these terms, with a further 40% deemed ‘inclusive and appropriate’. Practice in the remaining 55% of schools therefore fell short of this expectation.
  • The ensuing discussion of good practice references the incorporation ‘of Maori or multicultural concepts of giftedness and talents’ in schools’ definitions (the majority of schools had not demonstrated this).
  • Just 15% of schools included Maori theories and knowledge in their identification process and even fewer – 12% – incorporated ‘multi-culturally appropriate methods’.
  • ‘Identified gifted and talented students reflected the diversity of the school’s population at just under half the schools. This diversity included ethnicity, year levels, gender, and curriculum areas’.

Socio-economic factors are neither explicitly identified in ERO’s template of effective practice, nor referenced explicitly in the practice they surveyed. There is a clear problem in respect of Maori and multicultural representation, but the issue of socio-economic representation is entirely invisible.

The only reference to disadvantage is in terms of schools:

‘In general, high decile schools were more likely to have good quality provision for their gifted and talented students than low decile schools. Similarly, urban schools were more likely to have good quality provision for their gifted and talented students than rural schools.’

which leads to a recommendation that the Ministry:

‘Provide targeted, high quality professional development to rural and low decile schools on providing for gifted and talented students’

We shall return later to the issue of support differentiated according to school decile, since that too is a questionable proxy for individual socio-economic disadvantage.

The current TKI Gifted site follows the 2000 publication up to a point:

‘Disadvantaged gifted and talented students (or gifted and talented students from low socio economic backgrounds) are difficult to identify and are seriously underrepresented in programmes for the gifted and talented. Since the performance of these students generally declines the longer they are at school (by comparison with students from more advantaged backgrounds), it is critically important to identify them as early as possible. Attention should focus on early childhood education and on the junior school.’

But it carries no links to programmes or resources that explicitly address this issue.

The letter signed by various New Zealand organisations and just issued to Members of Parliament references their commitment to a vision that:

‘All gifted and talented learners have equitable access to a differentiated and culturally responsive education. They are recognised, valued and empowered to develop their exceptional abilities and qualities.’

But there is no mention of disadvantaged gifted learners in the associated recommendations for practice, though there are references to research in ‘Pasifika concepts of giftedness and Maori perceptions and understanding of giftedness’.

This formulation cannot be criticised on the grounds that it focuses exclusively on Maori and Pasifika disadvantage. Rather, the emphasis on disadvantage is missing entirely – and only the need to account for different cultural perceptions remains.

There is a fascinating – and in my view telling – extract in The Extent, Nature and Effectiveness of Planned Approaches in New Zealand Schools for Providing for Gifted and Talented Students (2004).

It appears during a discussion of cultural issues, and specifically the representation of Maori and Pasifika students:

‘Socioeconomic factors. Keen (2001) hypothesized that the under-representation of Māori and other Polynesian children that emerged in his research could be related to socioeconomic status rather than ethnicity. He notes that children of beneficiaries and unskilled labourers are also under-represented amongst the gifted and that “a disproportionate number of Māori fall within these occupational categories” (p. 9). Similarly, Rata (2000) maintains that ethnicity has been credited with a greater influence than it actually exerts and that poverty is principally responsible for the educational and social inequalities that exist in New Zealand. However, Blair, Blair, and Madamba (1999) argue that it is virtually impossible to separate the potential effects of ethnicity and social class, while Bevan-Brown (2002) and Glynn (cited in Bevan-Brown, 2002) maintain that it is a pointless exercise anyway as both these dimensions need be taken cognisance of in any educational provisions for poor Māori students with special needs and abilities.’

It appears that, around the turn of the century, various experts were arguing that poverty rather than ethnicity was the real problem that required addressing in relation to under-representation in gifted populations.

Others regarded these two factors (quite wrongly in my view) as indistinguishable. Others saw the issue entirely through the lens of support for Maori learners, and so entirely missed the point.

Is this the real heart of the issue? Have the arguments advanced by Keen and Rata been set aside too readily in an effort to address the under-representation of Maori and Pasifika gifted learners?

Earlier in this Report we are told:

‘It is beyond the scope of this review of the literature to examine the recommendations for each potentially under-represented group of gifted and talented students; however, given the cultural diversity of New Zealand, issues related to the identification of minority cultures, and specifically, Māori students, are of utmost importance. This is discussed in the section on cultural issues of this literature review.’

Is that the nub of the problem, and have we identified the turning point in New Zealand’s gifted education discourse?

Waterfalls at Midnight courtesy of Stuck in Customs

Is this Conflation of Ethnicity and Disadvantage Borne Out By the Data?

I want to turn to the statistical evidence about the extent of disadvantage in New Zealand, the composition of the disadvantaged population and the impact of disadvantage on educational outcomes.

The Extent of Disadvantage and Breakdown by Ethnic Background

I haven’t found it an easy matter to derive estimates of New Zealand children living in poverty broken down by ethnic background. Such statistics are less readily available than one might expect.

The 2010 Social Report defines low income as 60% of the 2007 median household disposable income, less a 25% deduction to account for housing costs. The threshold is adjusted for inflation so that it remains constant in real terms.

In the year ending June 2009, 15% of New Zealand’s population had incomes below this threshold. Among children aged 0-17, however, the figure was 22%.

The Report does not provide an analysis by ethnic background because sample sizes are said to be too small to provide a robust time series. I am no statistician, but this seems a rather convenient and only partially persuasive excuse.

The August 2011 publication ‘Household Incomes in New Zealand: Trends in Indicators of Inequality and Hardship 1982 to 2010’ informs us that:

  • New Zealand does not have an official poverty measure. The Report uses both a 60% and a 50% of median household income threshold, noting that both are regularly used by the EU and OECD.
  • Of New Zealand’s total population of 4.26m (2010), some 500,000 to 750,000 people are in poverty, depending on which definition is adopted.
  • The child poverty rate is 22% to 25%, depending on the definition adopted. Of the 1.07m dependent children under 18 in New Zealand (2010), between 170,000 and 270,000 were in households in poverty.
  • Over the period 2007-2010, one in three Maori children, one in four Pasifika children and one in six European/Pakeha children were living in poverty.

(For those readers outside New Zealand, ‘Pakeha’ is the Maori word for New Zealanders of European descent.)

The Social Report tells us that, at 2006, 72% of 0-17 year-olds were reported as of European or ‘Other’ origin (‘Other’ including ‘New Zealander’); 10% were reported as Asian, 24% as Maori and 12% as Pacific peoples. Since these proportions sum to more than 100%, some children were evidently reported as belonging to more than one ethnic group.

Using the Statistics New Zealand Table Builder, one can derive estimated numbers of 0-14 year-olds and 15-19 year-olds by ethnic background in 1996, 2001 and 2006.

So the totals for 0-19 year-olds in 2006 are:

European or Other (including New Zealander) – 645,300 + 222,370 = 867,670

Maori – 215,300 + 65,980 = 281,280

Pacific peoples – 110,300 + 31,830 = 142,130

Asian – 83,600 + 35,840 = 119,440

Recognising the imprecision of these figures, one can roughly estimate an order of magnitude for the number of children from each background (other than Asian) living in poverty, by applying the proportions given in the 2011 Household Incomes report:

European or Other (including New Zealander) – 16.67% of 867,670 = 144,640

Maori – 33% of 281,280 = 92,820

Pacific Peoples – 25% of 142,130 = 35,530

One can conclude that:

  • The total number of children living in poverty in New Zealand is relatively small in absolute terms, but constitutes a significant proportion of the total population of New Zealand children.
  • Only a minority of Maori and Pasifika children live in poverty, albeit a far larger minority than amongst European/Pakeha children.
  • In numerical terms, European/Pakeha children in poverty outnumber Maori children in poverty by roughly three to two, and both groups substantially exceed the size of the Pasifika-in-poverty population.
  • Around 145,000 young New Zealanders – roughly 10% of the total national population of 0-19 year-olds – are neither Maori nor Pasifika yet live in poverty.

It is this group that seems most at risk of neglect when it comes to the delivery of education interventions, including gifted education interventions.
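For transparency, the order-of-magnitude estimate used here – each group’s 0-19 population multiplied by its reported poverty rate – can be reproduced as a quick sketch (population totals from the 2006 figures above; rates of one in six, one in three and one in four from the 2011 Household Incomes report):

```python
# Rough order-of-magnitude estimates of NZ children (0-19) living in poverty,
# applying reported 2007-2010 poverty rates to 2006 population totals.
populations = {
    "European/Other": 867_670,   # 645,300 + 222,370
    "Maori": 281_280,            # 215,300 + 65,980
    "Pacific peoples": 142_130,  # 110,300 + 31,830
}
poverty_rates = {
    "European/Other": 1 / 6,     # one in six European/Pakeha children
    "Maori": 1 / 3,              # one in three Maori children
    "Pacific peoples": 1 / 4,    # one in four Pasifika children
}
in_poverty = {
    group: round(populations[group] * poverty_rates[group])
    for group in populations
}
```

These are, of course, only broad-brush figures: the rates and the population totals come from different years and different sources, and children reported under more than one ethnic group are counted more than once.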

Data on Educational Performance by Ethnic and Socio-Economic Background 

Ethnic Background

As noted above, the Ministry of Education’s Briefing to the Incoming Minister carries a Table showing several indicators of relatively poor Maori/Pasifika educational performance. This is reproduced below.

These figures tell a bleak story and they are reinforced elsewhere, though the data does not always give a consistent picture.

The Social Report 2010 provides evidence of performance by both ethnic background and disadvantage, but unfortunately no analysis of the relative impact of each of these two factors.

In relation to ethnic background:

  • The proportion of secondary school leavers who left school with an upper secondary qualification at NCEA Level 2 or above: in 2008, 71% of all school leavers achieved this benchmark. The comparable figures by ethnic background were: European – 75.2%; Maori – 50.4%, Pacific peoples – 62.9%.
  • The proportion of the population aged 15 and over enrolled at any time during the year in formal tertiary education leading to a recognised NZ qualification: during 2009, 426,000 people were enrolled (12.4%). The age-standardised ethnic breakdown was: Maori – 17.1%; Pacific peoples – 12.1%; Europeans – 11.4%. The age-standardised percentages for enrolment in bachelor’s degree courses were: Europeans – 3.5%; Maori – 3.1%; Pacific peoples – 3.0%. Females from Maori and Pacific backgrounds were more likely to be enrolled than males from European backgrounds.

Education Counts provides an analysis of the proportion of students leaving school with a university entrance standard in 2010. Overall, 42% of leavers achieved this measure. The ethnic breakdown was: Asian 65.3%, European/Pakeha 47.5%, Pasifika 25.8%, Maori 20%.

Data from PISA 2009 adds a further dimension. The NZ Ministry of Education publication ‘PISA 2009: Our 21st Century Learners at Age 15’ provides useful evidence of the impact of ethnic background on achievement in literacy.

We learn that:

  • Overall, 16% of New Zealand’s students achieved Level 5 and above on the PISA 2009 literacy test and 14% achieved below Level 2. The former is comparable with or exceeds the outcome in other high-scoring countries, but the proportion of weaker readers is larger than in most other high-scoring countries, Australia and Japan excepted.
  • 19% of Pakeha/European students achieved level 5 and above, as did 16% of Asian students. The comparable figures for Maori and Pasifika were 7% and 4% respectively. Conversely, the figures for those achieving below Level 2 were 11% Pakeha/European, 18% Asian, 30% Maori and 48% Pasifika.
  • Amongst the eight highest-performing countries, New Zealand had the widest gap between the scores of its top 5% and its bottom 5% of performers.

Punctuated Sky courtesy of Chris Gin

Socio-economic Disadvantage

The Social Report 2010 breaks down by school decile the proportion of secondary school leavers who left school with an upper secondary qualification at NCEA Level 2 or above: 57% of pupils at relatively disadvantaged schools in deciles 1-3 achieved this benchmark, compared with 67% at schools in deciles 4-7 and 82% at schools in deciles 8-10.

Education Counts similarly deploys school decile when considering the proportion of students leaving school with a university entrance standard in 2010. It notes:

‘A clear positive correlation between the socio-economic mix of the school the student attended and the percentage of school leavers attaining a university entrance standard…Students from schools in deciles 9 and 10 were three times more likely to leave school having achieved a university entrance standard than students from schools in deciles 1 and 2’


‘There is a large variation in the proportion of school leavers achieving a university entrance standard amongst schools within each decile.’

This is exemplified in the table below. If similar distinctions occur in the achievement of disadvantaged pupils in these schools, then the shortcomings of a decile-based approach are clear.

Interestingly, New Zealand’s domestic analysis of PISA 2009 does not examine variations according to socio-economic background, so we must turn to the original PISA 2009 Results (Volume 2).

This provides useful international comparisons of:

  • The percentage variation in student performance in reading explained by students’ socio-economic background (the strength of the gradient showing the association between student performance and background) and
  • The average gap in reading performance of students from different socio-economic backgrounds (the slope of the gradient measuring by how much student performance changes when socio-economic status changes).
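These two quantities are simply the R² and the slope of a regression of student scores on a socio-economic index. A minimal sketch on synthetic data (invented numbers, purely illustrative, not PISA data) shows how they are computed:

```python
import numpy as np

# Synthetic illustration of PISA's socio-economic 'gradient':
# regress student scores on a socio-economic index (ESCS).
rng = np.random.default_rng(0)
escs = rng.normal(0.0, 1.0, 5000)                      # socio-economic index
score = 500 + 40 * escs + rng.normal(0.0, 90.0, 5000)  # synthetic scores

slope, intercept = np.polyfit(escs, score, 1)  # 'slope' of the gradient
r2 = np.corrcoef(escs, score)[0, 1] ** 2       # 'strength': variance explained
```

Here the true slope is 40 score points per unit of ESCS and the true strength about 0.16 (16% of variance explained), in the same region as the figures quoted below; a steeper slope means a bigger performance gap between rich and poor, a stronger gradient means background predicts performance more reliably.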

The table reproduced below shows that, on the first of these measures, New Zealand is three percentage points above the OECD average of 14%, so in the upper part of the distribution but not too far distant from other high-performing jurisdictions (eg Singapore 15%, Shanghai 12%, Korea 11%, Canada 9%).

But on the second measure, New Zealand’s score of 52 exceeds that of every other country in the table. Competitors’ scores include: Singapore 47, Korea 32, Canada 32, Finland 31, Shanghai 27 and Hong Kong 17.

The text tells us:

‘Where the slope of the gradient is steep and the gradient is strong, the challenges are greatest because this combination implies that students and schools are unlikely to “escape” the close relationship between socioeconomic background and learning outcomes. In these countries, this strong relationship also produces marked differences in performance between students from advantaged and disadvantaged backgrounds. Where the slope is steep and the gradient weak, the relationship between socio-economic background and learning outcomes is an average tendency with many students performing above or below what is expected by this general trend.’

Only Belgium and New Zealand demonstrate ‘high average performance and large socio-economic inequalities’.

I sought in vain for a publicly-available and reliable outcome measure – whether of achievement or destination – that would throw further light on the existence of an excellence gap between advantaged and disadvantaged high attainers.

But one can reasonably assume that the relationships identified in this PISA analysis apply at each level of performance, so that New Zealand’s excellence gap is likely to be fairly pronounced.

Cross-referencing Data on Ethnic and Socio-Economic Underachievement

Maybe I haven’t been looking in the right place, but educational achievement data that cross-references ethnic and socio-economic background seems conspicuously thin on the ground.

This Table offers a beguiling glimpse into analysis across both these variables. It too uses school deciles as a proxy, but groups them into quintiles:

From this we can infer that, although European/Pakeha tend to achieve more highly:

  • Maori in decile 7-10 schools (quintiles 4-5) and Pasifika in decile 5-10 schools (quintiles 3-5) are more likely to achieve a university entrance standard than European/Pakeha in decile 1-2 schools (quintile 1)
  • Maori and Pasifika in decile 9-10 schools (quintile 5) are more likely to achieve a university entrance standard than European/Pakeha in decile 1-6 schools (quintiles 1-3)

However, the overall variation we have already noted between schools in the same decile on this measure suggests that there will be similar variation as far as disadvantaged students are concerned (to the extent that they are represented in higher decile schools). It is perhaps likely that the strongest schools in deciles 1-5 will tend to out-perform the weakest in deciles 6-10.

So we have evidence of a significant ethnicity-based performance gap and a significant socio-economically based performance gap with a degree of overlap between them, though not to the extent that one entirely explains the other.

The New Zealand Institute’s NZahead report card explains it thus:

‘New Zealand’s overall strong performance in PISA masks three important problems.  First, wide disparities in student achievement exist between ethnic groups.  Māori and Pacific peoples’ average PISA scores are much lower than the average for Pakeha/European students…. the gap has not been narrowing fast enough over the years for Māori and not at all for Pacific peoples.

Over the seven years from 2004 to 2010 Māori and Pacific candidates for NCEA at all three levels and for University Entrance were consistently less successful than European and Asian candidates.  For example in 2010, 61% of Māori and 52% of Pacific candidates gained NCEA Level 3 compared to 79% for NZ European and 78% for Asian candidates.

Second, wide performance disparities exist for students from different socio-economic backgrounds.  In Education at a Glance 2011, New Zealand is shown to have the greatest difference in reading performance between students from different socio-economic backgrounds out of all OECD countries.  Although the relationship between students’ background and school performance is evident in all countries, New Zealand is the least successful at mitigating the effect a student’s background has.

Third, too many young New Zealanders are becoming disengaged and not remaining in education as long as their OECD peers.’

These are clearly overlapping problems, but here they are presented as quite distinct, which raises the question of why they are conflated when it comes to implementing educational policy solutions.


June 2012


A Comparative Review of Gifted Education Quality Standards: Part 2

This is the second part of a post dedicated to a comparative analysis of gifted education quality standards.

As far as I can establish, a total of 10 standards have been developed in six countries since the publication of NAGC’s original district-level Gifted Program Standards in 1998. Two countries – England and the US – have updated and published revised standards. The other four – the Netherlands, New Zealand, Saudi Arabia and Wales – have each produced a single edition.

Part One set out a personal perspective on what constitutes a really good set of quality standards, summed up in the paradox of a ‘flexible framework’, offered a basic typology and concluded with the history of their development.

Part Two is a comparative assessment of the 10 standards, concluding with an in-depth review of the content of eight of them.

I will conclude the post in due course with a Coda which examines in more detail the multiple uses to which well-designed quality standards can be put and the significant benefits they can deliver.

Comparative Analysis: Structure and Purpose

We begin with an examination of the shape and structure of the 10 standards. Table 1 sets out the basic factual information, showing the standards in the order of their development.

Table 1

| Standard | Country | Date | No. of elements | No. of levels |
| --- | --- | --- | --- | --- |
| NAGC v1 | USA | 1998 | 7 | 2 (Minimum, Exemplary) |
| IQS v1 | England | 2005 | 14 | 3 (Entry, Developing, Exemplary) |
| CPS | Netherlands | 2005 | 6 | 1 |
| CQS | England | 2007 | 7 | 3 (Entry, Developing, Exemplary) |
| Welsh Assembly | Wales | 2008 | 10* | 1 |
| LAQS | England | 2009 | 13 | 3 (Entry, Developing, Exemplary) |
| TKI | New Zealand | 2009 | 9 | 3 (Entry, Developing, Exemplary)* |
| MSP | Saudi Arabia | 2009 | 9 | 4 (Limited, Developing, Good, Excellent)* |
| NAGC v2 | USA | 2010 | 6 | 1 |
| IQS v2 | England | 2010 | 14 | 3 (Entry, Developing, Exemplary) |


*It could be argued that the TKI standard has five levels, because there is a column devoted to what it means to fall short of the entry level standard, while the improving standard contains two different columns.

*The MSP standard explains that schools will be assessed on this 4-level scale but, additionally, one of the elements relates exclusively to schools aspiring to advanced partnership, whereas other schools need only to meet the standards in the eight other elements.

*The Welsh Assembly standard divides two of its elements into three significant sub-elements, so it is arguable that it really comprises 14 elements.

The number of levels is typically either one or three, with just a couple of exceptions. In the MSP example, the four gradings are not actually built into the standard, but imposed on a single set of statements. Something similar is found in the Dutch standard, which invites schools to score themselves on a 1-5 scale against each statement.

The inclusion of a ‘not meeting the standard’ column is unique to the New Zealand example and is worthy of wider consideration. It could be helpful to settings considering whether or not they currently meet the entry level statements, giving them additional context for that judgement.

The number of elements within each standard ranges from 6 to 14, with the UK Standards at the upper end of the range and the US and Dutch examples at the lower end.

Table 2, below, shows that there is relatively little common practice in the division into elements or the order in which they appear. (This is also true of the placement of material within specific elements).

Table 2

Identification Standards and progress Conditions for learning Leadership Student identification Learning and development Organisation and policy Professional learning Student achievement A whole school strategy/action plan
Effective provision in the classroom Effective provision in the classroom Development of Learning Policy Professional development Assessment Education and learning Definition Leadership and management Identification strategies and criteria
Standards Identification Knowledge of subjects + themes Ethos and pastoral care Socio-emotional guidance and counselling Curriculum planning and instruction Support and counselling Policies/procedure School ethos A target for improvement of school’s provision/pupils’ performance
Enabling curriculum entitlement + choice Assessment Understanding learners’ needs Resources Program Evaluation Learning environments Communication with parents pupil and environment Resources Teaching and learning Learning styles, teaching approaches, organisational strategies Curriculum offers breadth, depth and flexibility Provision addresses pastoral care
Assessment for learning Transfer and transition Planning Engaging with the community families + beyond Program design Programming Quality improvement and assurance Identification Classroom management Reviews to identify underachievement and support individual pupils
Transfer and transition Enabling curriculum entitlement + choice Engagement with learners + learning Identification Program administration and management Professional development Benefit to other pupils Maori dimension Student personal development Improve the skills of all staff
Leadership Leadership Links beyond the classroom Effective provision in the classroom Curriculum and instruction Cultural diversity Parental involvement Support for exceptionally able
Policy Monitoring and evaluation Learning beyond the classroom Effective Teaching and Learning Commitment to and evidence of Resources including ICT
Ethos + pastoral care Policy Enabling curriculum entitlement + choice Beyond the regular classroom Advanced Partnership Taking account of pupils views + encouraging them to take responsibility for learning       Taking account of parents’ views + encouraging them to take responsibility for supporting their child’s learning; Working with partners to enhance provision
Staff development Ethos + pastoral care Transfer + transition Monitoring action plan and effectiveness of school’s policy.
Resources Staff development Staff development
Monitoring and evaluation Resources Standards
Engaging with community families and beyond Engaging with community families and beyond Monitoring + evaluation
Learning beyond the classroom Learning beyond the classroom

There is no suggestion that those preparing the standards are drawing on a shared understanding of how gifted education practice should be broken down into its constituent parts. Nor is there any evidence to suggest a tendency towards consensus over the 12-year period.

The elements provide a basic architecture for the standard that the authors believe will be logical and rational for the users. There is no particular merit in having a specific number of elements, although one can see that the range we have probably marks the parameters of usability. Arguably, 14 is at the upper end of manageable while six is the barest minimum for such a complex set of processes.

Table 3 shows that the standards and their supporting resources together identify some 20 underpinning aims and purposes that they are intended to address.

There is inevitably a degree of subjectivity in this analysis, since some objectives are more overtly stated than others – and different descriptions of purpose are sometimes provided in different materials, depending on the intended audience.

Self-evaluation and improvement planning by settings are by far the most common purposes, appearing in almost every standard.

Table 3

Purpose (number of standards citing it):

Define the shape and constituent elements of gifted education (1)
Establish generic understanding across subjects and phases (4)
Common language for discussion (1)
Reflection by teachers on their own practice (1)
Improve pupil and school level achievement (3)
Improve gifted education locally, regionally and nationally (1)
Set minimum expectations for schools (1)
Self-evaluation (9)
External assessment (4)
Improvement planning (9)
Peer review (3)
Curriculum planning (1)
Professional development (4)
Innovation (1)
Advocacy (1)
Cross-school collaboration (3)
Select schools into a partnership (1)
Accreditation of schools (1)
Structure guidance (2)
Catalogue resources (3)

The table also demonstrates that, as far as public declaration is concerned, the English quality standards are relatively more ambitious in terms of the number of tricks they seek to take.

This is arguably because the design and development process was tied explicitly to the expansion of a national programme for gifted education and overseen by the body responsible for the country’s wider education policy. The standards were designed with a strategic function, rather than being produced for a specific project or subset of schools, or by an advocacy-driven organisation such as NAGC.

The ways in which the standard could support other aspects of the national programme – and vice versa – were at the forefront of our thinking, as were the opportunities to anchor the standards firmly in other areas of policy. To give an example, we tried very hard to persuade OFSTED, our schools inspectorate, that they should adopt the standards publicly as the basis for inspection judgements in schools.

We were only partly successful. Although the User Guide makes clear that the three levels of the IQS are explicitly aligned with specific OFSTED grades – satisfactory, good, excellent – and the standards are mapped against the self-evaluation framework OFSTED had in place for schools at the time, it was a bridge too far for the fiercely independent inspectorate to adopt entirely a quality framework developed through a process that they did not control.

The substantive point remains that, with the right degree of support and influence, gifted education quality standards can be embedded in the very fabric of education accountability measures, and indeed in many other dimensions of national or state education policy, so making them potentially a very powerful policy lever indeed.

No other country has come closer than England to achieving this outcome.

Comparative Analysis: Content

The remainder of this analysis considers eight of the ten standards (I have excluded the English classroom and local authority quality standards because they add relatively little additional value given their specialist focus. It is worth remembering their existence, however: they explain apparent omissions in the English IQS, which does not need to address in detail matters of pedagogy and supra-school administration because they are covered elsewhere.)

Rather than undertake an exhaustive analysis of every single similarity and difference between the eight examples, I have tried to highlight some of the more interesting variations.

I have also focused on the incidence of wording that appears to invite all settings to follow specific practice – even though this may not be supported universally as best or even effective practice – rather than giving them flexibility to implement the standards as they see fit.

The selection of such ‘non-negotiables’ throws a particularly interesting light on the priorities of the standards’ authors. It is a perfectly valid aim for a quality standard to embed such practice universally, across all settings, though an excessive number of ‘non negotiables’ will inevitably compromise a flexible framework approach.

Some – the Welsh Assembly example springs to mind – contain quite a few of these apparent requirements, others – notably the Saudi MSP standard – are almost bereft of them. Sometimes of course it is hard to tell, for there are several different ways to promote aspects of provision on the face of quality standards while stopping short of absolute compulsion.

I have divided this treatment into sections headed by the name of particular standards, though I have taken the two IQS standards and the two NAGC standards together. Each section also addresses specific themes, so readers will find that they are flitting constantly between standards. I could find no other way to organise the material short of a huge and unreadable grid.

The English IQS (2005 and 2010)

Courtesy of Richard Croft

The IQS contains an explicit requirement for a co-ordinator or lead teacher in each school with overall responsibility for gifted education. In fact this requirement is common to all the standards except the two NAGC versions, which may be explained by their status as district standards. (The 1998 edition does include an oblique reference to a district co-ordinator, specifying that such a post-holder should have appropriate qualifications).

Other ‘non-negotiables’ in the IQS (though they may only apply at certain levels) include:

  • securing through identification a gifted and talented population representative of the whole school population (something similar can be found in the later NAGC standards but this is otherwise unique);
  • supporting those with multiple exceptionalities and the exceptionally able (2005 only). With the exception of NAGC (2010), the former do not seem to get the same positive treatment in any other standards, but the latter feature even more significantly in the Welsh Assembly standards (though nowhere else). Significantly, both references are dropped from IQS 2010. Support for pupils of different cultures and backgrounds is also referenced in the 2005 edition but dropped in the later version;
  • links with local and national providers of out-of-school gifted education (2005 only) replaced in the 2010 edition by a reference to collaboration with other schools. It is as if provision offered by universities and voluntary organisations has become irrelevant given a policy change around this time that shifted English gifted education towards being more school-led.

The English standards are also notable for incorporating expectations about the academic performance of gifted learners. These change subtly between the two editions: the earlier focuses on standards relative to gifted learners in similar schools; the later switches to national averages and also introduces expectations for pupils’ progress. Both refer exclusively to high attainers, who are of course only a subset of the gifted population.

There is nothing as explicit in the later NAGC standards, even though they are built entirely around pupil outcomes. They seem to address every dimension of competence except academic achievement. Students are to demonstrate ‘important learning progress’ but nowhere is this quantified, whether in relative or absolute terms.

The MSP standards do refer to high student achievement, but only in general terms. The Welsh Assembly Standards demand targets for students’ performance as well as for the improvement of the school’s provision, though without setting any kind of benchmark for either.

It may be that the authors of these standards decided to do without such a reference because it was impossible to find a formulation that would apply equally to all gifted learners. But, if we accept that high attainers are a subset of the gifted population, it seems rather absurd for standards (especially those based on outcomes like NAGC 2010) to exclude some sort of expectation relating to their academic achievement.

Conversely, the English standards are typically coy about funding, referring only to ‘appropriate budgets’. The Welsh opt for a similar reference to the school governors ‘allocating appropriate resources’.

The first NAGC standards are slightly better, calling for gifted education to be equitably and adequately funded compared with other programmes, and for funding to be tied to programme goals (and adequate to meet them at exemplary level). Similar terminology is retained in the 2010 edition. The other standards are almost silent on this critical issue.

The NAGC Standards (1998 and 2010)

courtesy AlaskaGM Photos

‘Non-negotiables’ include:

  • the evidence base supporting identification should draw on multiple assessments ‘including off-level testing’ and make use of ‘culturally sensitive checklists’ (2010);
  • a personalised assessment profile must be developed for each student (1998);
  • gifted programmes must be ‘an integral part of the general education school day’ (1998);
  • flexible grouping and, ‘suitable adaptations’ which are specified at exemplary level as early entrance, grade-skipping, ability grouping and dual enrolment (1998); by 2010 this is modified to: ‘educators regularly use multiple alternative approaches to accelerate learning’;
  • students interact with educators who meet the national teacher preparation standards in gifted education; educators participate in ongoing professional development to support students’ social and emotional needs (2010).

The US standards are typical in devoting significant attention and space to professional development, although this is clearly an input rather than an outcome.

At exemplary level, the TKI standards require that all teachers in the school have undertaken relevant professional learning, that an induction process is available for new staff, and that gifted education specialists have specialist qualifications.

Similarly, Saudi schools must ensure that a differentiated professional development programme is available for all teachers which incorporates some work on the theory and practice of giftedness and creativity.

In Wales, staff training must cover a range of bases including identification, formative assessment, strengthening pupils’ self-esteem, differentiation, learning styles, thinking skills and problem-solving. Support staff must also receive appropriate training.

While the English standards maintain a curious separation between identification and the assessment of gifted learners, the later US standards sensibly take the view that identification is integral to assessment.

Identification is subsumed within ‘support and counselling’ by the Dutch, but is almost entirely absent from the MSP standard, presumably on the grounds that it is undertaken outside the school.

The 1998 NAGC standards place heavy emphasis on what they call ‘socio-emotional guidance and counselling’, devoting an entire element to this. All learners must receive guidance to support their socio-emotional development, on top of dedicated guidance and counselling for vulnerable learners.

The phrasing is redolent of a ‘deficit model’ approach, consistent with a strand of US gifted education thinking built on the assumption that gifted students typically have social and emotional ‘issues’. This is redressed somewhat in the 2010 standards where emphasis is rightly placed instead on developing all students’ personal, social, cultural and communications competence, as well as their leadership skills.

There is also another element called ‘Learning and Development’ which addresses students’ personal development, including: self-knowledge and understanding; understanding of and respect for similarities and differences within their peer group; and understanding of their own cognitive and affective development.

Such ‘soft skills’ are almost entirely lacking from either version of the IQS, which merely contain brief references to support for learners’ social and emotional needs and action to combat bullying and stress. The Welsh also tend to concentrate on the more tangible issues within this spectrum, such as careers education and guidance and pupils’ attitudes to learning.

The pastoral support dimension is slightly better developed in the Dutch CPS standards but only the Saudi version rivals the coverage in the later NAGC standards, covering students’ self-esteem, resilience and perseverance, as well as their tolerance and respect for each other.

CPS – Netherlands (2005)

Courtesy of Tambako the Jaguar

The Dutch standards are amongst those with relatively few ‘non-negotiables’ other than expectations that there will be a school co-ordinator, partnerships with higher education and business and links into a regional and national network of schools.

There is some emphasis placed on action planning for improvement, a feature that is also found in the English IQS and, in pronounced fashion, in the Welsh Assembly standards.

But, whereas this tends to foreground SMART targets and data, the Dutch begin a stage earlier, with an expectation that schools will have articulated a vision for gifted education and defined the model of provision they will follow. Only then can an action plan be produced.

New Zealand incorporates something similar: entry level involves developing an appropriate definition of gifted and talented which recognises different types of giftedness.

The CPS standards are particularly noteworthy for the inclusion of an element called ‘Benefits to Other Pupils’, which does not appear in any of the other examples (although there are briefer references elsewhere to a rising tide lifting all boats).

TKI – New Zealand (2009)

courtesy of etnobofin

The TKI standards are again more tightly specified:

  • entry level identification must involve more than two sources of information (eg parents, teachers, peers) and more than two types of information (eg tests, observations, interviews). A register and individual profiles are required at improving level;
  • exemplary teaching and learning involves the co-construction of differentiated work modules by students and staff, something similar also features in the Welsh standards and in NAGC 2010;
  • out of school provision in improving schools will include the deployment of expert coaches, tutors or mentors.

But the tightest level of specification is reserved for schools’ response to diversity. The TKI standards devote an entire element to The Maori Dimension and another to Cultural Differences.

Under the former, all schools are expected to demonstrate an understanding of the Maori world view and consultation with Maori staff. At the middle level, Maori theories and conceptions of giftedness should be acknowledged and respected, while exemplary schools should reflect Maori beliefs throughout their provision.

This progression is repeated in the Cultural Differences segment, which causes some unnecessary duplication. It is as if inclusion of a separate Maori element of the standard was itself a ‘non-negotiable’, even though a single section could have covered both quite comfortably.

This heavy diet of diversity puts into perspective claims by the authors of the 2010 NAGC standards that they give significant attention to the same issue. The 2010 standards are certainly a big advance on their predecessors, but do not begin to match the Kiwi approach.

One can appreciate the distance between the two by replacing ‘Maori’ with ‘African American’ or ‘Hispanic’ and parachuting the TKI element into the NAGC standards…

MSP – Saudi Arabia (2009)

courtesy of Saudi

We have noted already that there are few ‘non-negotiables’ in the Saudi standards. Perhaps the only substantive example is the reference to a school co-ordinator – a post for a committed key professional working within the school’s senior leadership team who is supported with the time and resources to model best practice, be the resident expert in teaching and learning and act as a key driver in bringing about ‘deep’ change.

The MSP standards are strong on teaching and learning, encompassing much of the material that is covered by England’s separate CQS. They advocate a wide repertoire of teaching strategies, the development of subject knowledge and an understanding of how to use and apply it, student involvement in negotiating their work, collaborative learning, higher order questioning, problem-solving, independent research and a risk-taking culture.

Parental engagement is strongly featured relative to most other standards, and is probably only matched by the Welsh document. A flavour is given by the expectation that:

‘parents are helped to understand the complex nature of ability and the importance of opportunities and personal motivation in the fulfilment of a child’s potential’

Maintaining a positive school ethos is also prominent, exemplified by a reference to the school community working:

‘together in harmony, upholding a shared set of values such as respect, honesty, courage and responsibility’.

Welsh Assembly (2008)

courtesy of Plaid

The Welsh standards seem rather paternalistic compared with most of the others. The list of ‘non-negotiables’ is relatively extensive, encompassing several of the themes addressed above, such as providing a school co-ordinator, staff training and development, support for the exceptionally able and careers guidance. It even extends to a requirement that learners can access the school library and IT facilities out of school hours!

This is not in itself an unreasonable expectation, but it is surely too insignificant to feature on the face of the standards.

As one might expect, these standards are very much consistent with NACE policy. There is no reference to accelerative practice or a faster pace of learning, even in the section about supporting exceptionally able learners. Whereas the US standards may seem a little too ready to embrace acceleration, the Welsh are very much the opposite.

Welsh schools should also have:

‘a clear rationale for identification that is inclusive and encompasses all children who have abilities and talents above those normally found in the school’.

There is very strong emphasis on multi-faceted progress monitoring against the required action plan. Alongside the priority placed on parental engagement, a sub-element is also devoted to pupil voice, which is much more developed in these standards than any of the others.

Schools must listen regularly to pupils’ views about the experience of being a gifted and talented pupil, including feedback about their aspirations, what helps them to learn and what barriers exist to their achievement. They must also demonstrate that they have acted on such views.

What Lessons Can We Draw?

It is not my purpose to produce a league table of gifted education quality standards. They must be judged against the objectives set by those who designed them and considered in the very different educational contexts to which they apply.

None of the standards is head-and-shoulders above the rest. All have outstanding features; all have shortcomings. This reinforces the importance of taking a global perspective and reviewing all existing standards whenever a new one is produced. Insularity is never the route to best practice.

But I very much prefer those that come closest to the ‘flexible framework’ ideal, rather than those which seem overly prescriptive and over-detailed.

This optimal approach, while it encompasses the full span of what is important in gifted education, is relatively sparing in its insistence on specific practice, confining such prescription to fundamentally significant issues and a handful of policy priorities.

It is otherwise all too easy to devise standards that become a straitjacket, serving only to constrict the divergence and innovation that is always necessary to improve our shared understanding of what works.

For the fundamental purpose of quality standards must be to support and release the creativity and commitment displayed in every single setting, harnessing it – though loosely – for the ultimate benefit of all gifted learners.

As indicated above, this post will conclude with a Coda dedicated to reviewing the multiple purposes of gifted education quality standards and how these can be pursued simultaneously without compromising each other.

We will look at the score or so which feature in Table 3 above but my own work suggests that there are several others besides. I am at 25 and counting…

Whenever I am asked which gifted education reform has had most impact I always reference quality standards. Six countries to date have understood their power and value. Let’s hope that many more will follow their example.


November 2011

A Comparative View of Gifted Education Quality Standards: Part 1

This post looks at the nature and purpose of gifted education quality standards, reviews the history of their development and draws lessons from a comparative analysis.

It is designed partly as background reading for those interested in developing state or national gifted education quality standards of their own. As far as I can ascertain, no such analysis has ever been published before, although one hopes that the authors of all quality standards produced to date have conducted their own private analysis.

Colleagues in Hong Kong are considering whether to proceed with such standards and this post is partly a spin-off from a consultancy and professional development package prepared specifically for them. In matters of gifted education, Hong Kong often leads where other states follow.

The post is organised into two main sections and a coda:

  • Part 1 sets out my idea of what constitutes a good quality standard, which differs somewhat from most of those in existence, offers a basic typology of quality standards and outlines the history of their development to date;
  • Part 2 is a comparative analysis of eight gifted education quality standards produced in five different countries;
  • The Coda will take a closer look at the many and varied purposes of a well-designed quality standard and will be published a little further downstream.

What is a Good Quality Standard?

There is a difference between describing the nature of extant gifted education quality standards and explaining what such standards should be like.

Paradoxically, the notion of a standard betokens a certain level of precision that does not necessarily coincide with my idea of best practice.

The august British Standards Institution has defined a standard as:

‘A published document that contains a technical specification or other precise criteria designed to be used consistently as a rule, guideline or definition. Standards help to make life simpler and to increase the reliability and the effectiveness of many goods and services we use. They are a summary of best practice…’

But, during several years creating and working with gifted education quality standards, the phrase I have found most neatly captures their fundamental essence is ‘a flexible framework’.

For effective quality standards must be precise enough to:

  • capture, clearly and succinctly, all the elements of effective practice in gifted education at a specified level in the education system; and so
  •  equip all stakeholders with a common language to describe effective practice, so they can communicate effectively with each other.

On the other hand, they must be imprecise enough that they can:

  • apply universally, to every setting, regardless of phase, sector, status, funding or any other variable;
  • reconcile into consensus the wildly differing perspectives of experts – be they practitioners, academics or policy makers – and the full range of other stakeholders; and
  • allow sufficient scope to meet widely varying circumstances, support divergent interpretation, promote innovation and allow for changes to the paradigm and the wider policy context (at least up to the point where they need to be revised).

Hence producing a quality standard should be a careful balancing act between these two conflicting priorities. One is not always convinced that this important principle has been grasped by those charged with their development.

Flexibility in Design and Development

This spirit of compromise is part of a bigger bargain between the centre – typically an educational arm of government – and the myriad of local settings to which the standard applies.

The centre has a vested interest in deploying the quality standards as a policy lever that drives improvement throughout the system – and also as an instrument to push particular priorities that it deems significant (though too many of these will overload the standard, so it needs to be selective).

Meanwhile, local settings are typically seeking a practical tool they can use to evaluate, improve and validate their own practice; an instrument that will support school improvement and professional development alike; and a platform for local partnership and collaboration.

While expert practitioners may be relatively inclined towards pragmatism and eclecticism when developing an understanding and appreciation of what works best in gifted education, expert academics in the research community may be more inclined to argue for a particular model, approach or paradigm in gifted education.

They need to be prepared for the possibility that their most cherished beliefs will not appear on the face of the standard (though the standard can nevertheless accommodate them).

To give an example, it may not be desirable for a gifted education quality standard to explicitly advocate grade-skipping, so making it in effect a universal requirement, even though grade-skipping may be perfectly permissible within the terms of the standard.

Sometimes researchers can take the opposite track. I well remember resistance to our classroom quality standards (see below) on the grounds that such a standard would be based by definition on one pedagogy, so preventing other pedagogies from being freely expressed in the classroom!

Such a criticism is based on a fundamental misunderstanding of the flexible framework paradox.

By virtue of being a consensual compromise, a single framework can support many different outcomes, whereas a more tightly-drawn document could not reconcile the very different objectives of stakeholders.

But, even with this principle of flexibility, quality standards have a limited shelf life. Once the gifted education paradigm has shifted significantly, or wider education policy has undergone radical reform – perhaps as a consequence of the election of a new government – they need to be revised and updated, given a new lease of life.

Otherwise they will ossify gifted education by holding practitioners to outdated assumptions of what constitutes effective practice.

Flexibility in Application

And a gifted education quality standard should be applied in a similar vein.

Because there is a need to break down provision into its constituent elements, often to define more than one level of performance and invariably to ensure that the resulting tool is readily accessible, quality standards are typically published as a grid.

The rows and columns might suggest precision and tight specification, but closer analysis should reveal a more elastic approach.

In constructing standards containing different levels, it is not imperative to maintain consistency of approach between the levels. Statements relating to the same element at a higher level can be cumulative – covering the same issue but with extra ‘demand’ built in – or they can add greater breadth by introducing another related issue, or they may do both simultaneously.

Settings should be encouraged not to apply the standards to their practice with a slavish ‘tick-box’ mentality. The process of discussing and agreeing which ratings apply in a particular setting is at least as important as the outcome, quite probably more important, because it invites communication and supports the building of consensus.

Moreover, reaching an overall judgement is not a matter of calculating an overall ‘score’ which determines a given category: settings should use discretion and professional judgement to reach a ‘best fit’ judgement which they can collectively agree and which is supported by the evidence.

Varieties of Gifted Education Quality Standard

Quality Standards can be categorised according to three key variables:

First, the layer of the education system to which they apply

Standards may be applied to:

  • learning settings (I am using this term in preference to ‘classroom’ in recognition that such a standard may be fully applicable to out-of-hours and out-of-school learning);
  • institutions – most typically schools, but we opted to call our whole-school standards ‘institutional quality standards’ (IQS) in recognition that they should apply equally to colleges of further education, nursery schools, pupil referral units (PRUs) and so on;
  • local authorities, or school districts, or indeed any other grouping of several institutions, regardless of whether it is based on geography or some other relationship. So, for example, a ‘local authority quality standard’ could apply just as well to a chain of academies or charter schools;
  • regional, state or national gifted education programmes and services. Although none exist as far as I am aware, I have argued before on this blog that the development of a national – or, more exactly, an international – quality standard would be an important step towards effective international collaboration.

Second, the levels of performance that they specify

  • some standards have a single level, typically pitched to be reachable by all settings, or almost all settings, allowing for the fact that there will always be some outliers – such as failing schools – that we should not seek to accommodate;
  • others have two levels – typically a baseline standard and a standard for advanced or exemplary performance to which all settings should aspire and which some centres of excellence will be able to achieve;
  • others still have three levels, inserting a level for improving settings between the entry and exemplary levels, so providing an extra step in the ladder to support a process of steady but continuous improvement (though it should be possible for settings to continue working at their current level if they prefer and, certainly, there should be no ceiling on the top level, so even the very best schools can never say that they have ‘completed’ a standard).

I know of no gifted education quality standards with more than three declared levels, though a couple get close to specifying five, as we shall see later.

Third, the core purpose(s) of the standard

  • If a standard is intended primarily as an instrument for self-review, or external assessment, rather than the ideal of a multi-faceted instrument with several different purposes, it will look somewhat different;
  • It is critical to understand that a quality standard cannot have as one of its purposes the assessment of personal competence. Personal competence frameworks are entirely different animals, necessary to personal training and development and to performance management.

Quality standards are fundamentally school (institutional) improvement tools. They should align with personal competence frameworks, and both should inform professional development (because one is about the acquisition and demonstration of personal knowledge, understanding and skills; the other about the application of that knowledge and skills to bring about institutional improvement).

This distinction is sometimes difficult to maintain, particularly when a quality standard is pitched at the level of classroom settings, but it is important to recognise that, even there, many factors other than personal competence will affect the quality of education provided, most obviously if one or more para-professionals are present. A quality standard should reflect the cumulative impact of all inputs and processes, not just the single teacher.

The History of Quality Standards

This map shows the geographical spread and the historical development of gifted education quality standards, taking account of all those of which I am aware. All are available in the English language, which makes comparison that much easier.

1998: The first gifted education standards were produced, under the auspices of the US National Association for Gifted Children (NAGC), by an 18-strong task force. They operated at the school district level and embodied the following principles, which begin to reflect some (but not all) of the ideas set out above:

  • Standards should encourage but not dictate approaches of high quality;
  • Standards represent requisite programme outcomes and standards for excellence;
  • Standards establish the level of performance to which all school districts and educational agencies should aspire;
  • Standards represent professional consensus on critical practice in gifted education that almost everyone is likely to find acceptable;
  • Standards are observable aspects of educational programming and are directly connected to the continuous growth and development of gifted learners.

2005: Originally conceived in 2003, influenced in part by the NAGC standards, but envisaged from the outset as a school-level tool, the original English Institutional Quality Standards were developed, trialled and consulted on by a small team of consultants working with the support of an expert advisory group and eventually published in 2005.

The original User Guide embodies much of the thinking that we developed from the initial idea, which first emerged from a series of discussions between yours truly and the first director of NAGTY’s Student Academy.

The IQS were updated in 2010 though changes were fairly minimal.

The institutional standards were followed by classroom quality standards (CQS) in 2007, which amplified the teaching and learning dimensions of the IQS and applied them to learning settings rather than to whole school practice.

The CQS were conceived as a scaffolded support tool with three different layers, each undertaking a subtly different function.

  • The first layer was designed as a set of prompts to encourage classroom teachers to reflect on and discuss how seven key features of challenge and support in teaching and learning apply to all learners.
  • The middle layer applied these features specifically to gifted learners and provided a basis for a more thorough self-evaluation process. This was initially conducted within a generic rather than a subject-specific context, but a subject-specific treatment was also provided for English, maths, science, ICT and PE.
  • The third layer was originally envisaged as a comprehensive online resource base containing exemplification, case studies, action research and interactive discussion, predominantly provided ‘from the bottom up’, not least to exemplify the ownership and shaping of these standards by the professionals using them.

In 2009, the set of English quality standards was completed with the introduction of local authority quality standards (LAQS) – analogous to the US district standards, but based on the assumption that the role of local authorities is, first and foremost, to support the improvement processes instigated by schools.

All subsequent standards developed outside the US were influenced to some extent by the IQS:

  • the Quality Standards in Education for More Able and Talented Pupils, published in 2008. These were developed by the Welsh Assembly Government in collaboration with NACE and based on NACE’s Challenge Award, a commercially available standard which emerged at about the same time as the IQS (each informed the other’s development). The Challenge Award materials cost £250, while assessment costs range from £700 to over £1,900, depending on the size of the school;
  • the self-evaluation instrument published in New Zealand in 2009 (though this was also informed by several earlier versions developed in that country);
  • the assessment instrument developed in Saudi Arabia for the Mawhiba Schools Partnership in 2009 (though it also drew on professional standards for teachers and research on school effectiveness).

NAGC radically revised and updated its US district standards in 2010.

A working group undertook the work, according to a new (and rather curious) set of principles:

  • ‘Giftedness is dynamic and is constantly developing; therefore, students are defined as those with gifts and talents rather than those with stable traits.
  • Giftedness is found among students from a variety of backgrounds; therefore, a deliberate effort was made to ensure that diversity was included across all standards. Diversity was defined as differences among groups of people and individuals based on ethnicity, race, socioeconomic status, gender, exceptionalities, language, religion, sexual orientation, and geographical area.
  • Standards should focus on student outcomes rather than practices. The number of practices used or how they are used is not as important as whether or not the practice is effective with students. Consequently, the workgroup decided not to identify acceptable versus exemplary standards. Moreover, such a distinction would be difficult to support with the research.
  • Because all educators are responsible for the education of students with gifts and talents, educators were broadly defined as administrators, teachers, counsellors, and other instructional support staff from a variety of professional backgrounds (e.g., general education, special education, and gifted education).
  • Students with gifts and talents should receive services throughout the day and in all environments based on their abilities, needs, and interests. Therefore, the Workgroup decided to use the word “programming” rather than the word “program,” which might connote a one-dimensional approach (e.g., a once-a-week type of programme option)’.

Source: ‘Frequently Asked Questions about the 2010 pre-K-Grade 12 Gifted Programming Standards’

Most of these are unexceptionable, though the last two are perhaps a little prosaic to be regarded as ‘principles’ in their own right. The third principle – that standards should focus on student outcomes rather than practices – is the most problematic.

For, while the new standards quite reasonably include student outcomes, they continue to include a whole range of practices alongside. There is nothing on the face of the standards – or in the guidance available on NAGC’s website – to suggest that the practices are intended to be illustrative rather than binding.

Indeed, the Q and A explains that:

‘The revised standards will elucidate the next steps toward excellence in gifted programming by helping school districts move beyond the focus on practices alone to the relationship between certain practices and desired student outcomes’

It is as if the group developing the standards has been persuaded of the case for a flexible framework, has considered offering maximum flexibility by basing its framework on student outcomes alone, only to decide that ‘evidence-based practice’ must be included alongside so that users of the standards can anchor their effective practice in inputs and processes as well as outcomes.

The two-fold justification for removing the exemplary level is even more puzzling. Presumably, had they wished to, they could have defined ‘exemplariness’ entirely in terms of student outcomes.

There is no explanation of why ‘the research’ would not easily support the exemplary distinction. One can only conclude that the researchers engaged on this project found it impossible to agree, on the basis of aggregated research findings, a framework to define flexibly what constitutes exemplary practice in gifted education.

If so, that is a sad indictment of the contribution of the gifted education research community. It suggests to me that researchers may have had too much control of the revision process relative to other stakeholders.

The documentation does not say whether the working group examined international examples of quality standards before revising their own.

Let us hope that they did, for neglecting to review and learn from other models would not be consistent with good research practice – and also runs counter to the fundamental principles upon which this blog rests!

Actually I think the 2010 Standards are rather good, though some way from my idea of perfection…


November 2011

School Vouchers and Gifted Education (Part Two)

In the first part of this Post we looked at school vouchers in theoretical terms before embarking on a detailed critique of the ‘Step Change’ proposals insofar as they impact on gifted learners.

Part Two concludes that critique, then broadens out the argument to take in other contexts and finally offers a preliminary framework for designing a viable gifted education voucher scheme.

The ‘Step Change’ Voucher

‘Step Change’ does not define specific objectives for the voucher scheme it proposes, other than to state that it will produce unspecified ‘measurably improved outcomes’, so it is not possible to trace how the various elements of the scheme are expected to impact on the two very different sets of problems that have been identified.

Following a rather cursory review of free schools in Sweden, charter schools in the US and academies in England (there is nothing on the Netherlands and Ireland as required by the terms of reference), it devotes much more space to a review of personalised education.

Although this section is rather confused, it does convey the central ideas that:

  • learning no longer takes place exclusively in school and may involve a range of different providers;
  • a tailored programme can be captured in a personal learning plan that draws together these different elements into a coherent whole;
  • there is a potential role for ‘learning broker mentors’ to negotiate these plans with learners, secure provision against the plans from one or more providers and monitor and support learners’ progress.

This provides the central foundation for the eight-stage proposal that follows, which is expressed as a series of sequential steps in a voucher-driven process. The treatment below includes my commentary on the proposal:

First, learners are identified to participate in the programme on the basis of ‘National Standards and other age-based assessments’. We know that the Standards are not properly calibrated to identify G&T learners, showing only whether a learner is achieving at, above or below the expected standard for their age. The other assessments are unspecified. We are left unclear whether ‘gifted’ in this context is intended to denote high achievers, but the reference to a fixed ‘quota’ of 5% might support this assumption.

Second, education providers are selected for the programme on the basis of their record of success, reputation for high quality leadership and teaching and capacity to deliver the specified outcomes. This must be assumed to apply to all kinds of providers covered by the scheme, whether or not they are schools. The wording implies that a high quality threshold will be imposed on the supply side from the outset, so we might expect a significant proportion of schools to be excluded on the basis of the ERO evidence. Later the report says that providers who fail to deliver will be dropped from the scheme. Capacity is clearly critical, especially for schools, since voucher holders will need to be accommodated alongside their existing students.

Third, the selected providers publish information (‘prospectuses’) about ‘how they lift and extend student performance’ including details of pedagogy, curriculum, IT, learner-teacher ratios ‘and other factors leading to student success and satisfaction’. This is to ensure that the demand side of the market has sufficient information to make sensible and rational decisions. As we have seen above, the media selected and the arrangements for dissemination of such material are critical, especially if the scheme is targeting ‘hard to reach’ families.

Fourth, voucher holders choose one or more providers who will meet their needs. The use of the term ‘provider’ allows for the possibility that none may be a school, eg in the case of home-educated learners. The drafting seems to suggest – rather sensibly – that, if a range of providers is involved, one must take the role of ‘principal provider’. Part of that role is to co-ordinate assessment and monitoring. There is to be scope for learners to change their provider(s) if their learning pathways change, but it is not clear how frequently such opportunities will be available.

Fifth, the selected provider(s) ‘sign up’ to the voucher holder’s personal learning plan. This formulation implies that the plan will have been prepared at an earlier stage in the process, but this is recognised only in three ‘optional steps’ which cover the selection of a learning broker mentor. It is therefore unclear how and when the plan is prepared if no broker is involved. If the arrangements are such that the plan is drawn up by the school which then provides the bulk of the learning programme, that seems rather at odds with the principles of market choice, because the supply side has too much influence. For the system to work properly, there really needs to be ‘clear blue water’ between these two functions.

Sixth, providers receive a first tranche of the voucher payment ‘up front’. The value of the voucher (‘Step Change’ calls it a scholarship) is determined through a formula weighted to reflect the students’ needs. It will be necessary to remove funding from existing generic grants and reallocate it to voucher holders on a per capita basis. Reference is made to initial thinking on the design of the Pupil Premium in England – of which more later. We are told here and later that the scheme will be fiscally neutral – ie it will require no additional funding – but also that providers will ‘be incentivised by receiving more per capita than they currently receive’. It is hard to reconcile these two statements.

Dunedin courtesy of Zanthia

Seventh, the pupil’s performance is reviewed and assessed and the personal plan revised as appropriate. Monitoring information is collected into a central database which is accessible to all the users, including non-school providers. Data analysis may also inform professional development associated with the scheme.

Eighth, the provider(s) receive a second tranche of the fee as a ‘success bonus’ for:

‘substantially lifting the performance of low achieving students or gifted students to new levels’.

This is odd given that we thought we had already established that the problem to be addressed by the vouchers is not the performance of gifted students but the quality of the support provided to them by their schools.

It might be explained by making the assumption that students’ performance is the sole measure of their schools’ success, but we have already shown that New Zealand’s high achievers achieve their results in spite of the questionable support provided by a sizeable minority of their schools, so a much better success measure would be the proportion of providers judged to be of suitably high quality by ERO. The bonus would then depend on that assessment rather than on students’ performance.

It is not clear how the two payments would relate to the actual value of the voucher but I infer that the bonus would not be received unless the improvement is secured. Failure to deliver would therefore impact on the provider’s budget and, if sufficient learners were involved, could potentially put it out of business. The division of these payments between multiple providers would also be potentially problematic.

For all these reasons the bonus is a non-starter. Besides, the voucher concept rests on the assumption that sufficient incentive is generated by the increased demand from families for the most popular providers, so an added incentive in the shape of a financial bonus is not strictly necessary.

Because we are assured that the overall funding will be fiscally neutral, any additional costs, such as those attributable to the scheme’s administration and the potential employment of ‘learning broker mentors’, would need to be found from savings elsewhere in the education system. There is no costing whatsoever in the report, so we have no idea how much this would cost compared with the current system.

The Report suggests that the scheme might support several different models of provision, eg ‘wrap around’ school-based provision, a school within a school, a school plus provision offered by independent providers, pupils working with several different providers, even one-to-one tuition.

It identifies implications for the recruitment and training of staff, especially the ‘learning broker mentors’. It anticipates that a range of new providers will enter the education system and that arrangements will need to be in place to facilitate the expansion of popular providers. There will also need to be a relaxation of the rules governing school admissions.

The text gets rather repetitive at this stage, before concluding with the proposal that a Taskforce be established to work up the initiative for implementation in 2011 (a pretty heroic timetable, given the huge number of policy issues to be resolved and the accommodations that would be necessary to get even a relatively small pilot scheme off the ground).

What does ‘Free to Learn’ add to the case for vouchers?

The companion minority report ‘Free to Learn’ has a much wider canvas which approximates more closely to the terms of reference given to the Working Group.

It considers the wider school choice reforms necessary to implement a universal voucher scheme: increased choice, increased autonomy for schools, improved teacher quality, capacity for the expansion of popular schools and the decline and closure of poorly performing schools, more and better information to support choice of school, freedom to choose between different schooling options and, finally, funding tied to the student.

This final section opens with a treatment of education reform through choice and competition which utilises many of the arguments outlined above:

‘The key, then, arguably, to improving education outcomes…is to permit public provision of schooling to be decentralised. It is to allow a competitive market in education and the funding to facilitate it; flexibility with ease of entry and exit for schools and learning environments; and families’ choice from among competing providers. Such a market flourishes when there are clear pricing signals for providers and the profit motive. Such a market also shifts education out of the hands of government quangos and into the hands of parents and teachers.’

It uses two of the three principles articulated by Mark Harrison in ‘Education Matters: Government Markets and New Zealand Schools’ (2004) to justify a neutral per capita funding scheme: schools are funded according to the number of learners they attract and the per capita sum is the same regardless of whether the school is public, private or integrated, so creating a ‘level playing field’ for market competition.

It considers the third principle – that parents should be allowed to top up this funding – to be ‘the most politically unpalatable’ and eventually discards it in favour of weighted funding to reflect students’ needs:

‘Governments weight scholarships for equity reasons to try to ensure that every child obtains a quality education. Recognising that some children are more difficult or costly to teach than others or require travel subsidies, central agencies add extra value to some of their scholarships in an attempt to increase the incentives for educators to take them’.

It suggests the decision is finely balanced since a weighted voucher arguably:

‘distorts the market making it difficult for providers to add improvements, ascertain their value and to meet the indicated demands of the people they serve…generating perverse incentives that keep those in need where they are because of the benefits accrued to them for remaining there’.

Following an analysis of current NZ funding arrangements and practice abroad, it devotes significant space to the consideration of tax credits as an alternative payment mechanism to a straightforward voucher scheme, but fails to choose between them.

There are no further practical details to add to the treatment in the main report.

Have gifted education voucher schemes been tried elsewhere?

Much of the preceding analysis is devoted to the not inconsiderable shortcomings of the IPWG’s work. Frankly, it is no surprise that this half-baked report failed to gain traction in New Zealand. But what of the case in principle for a gifted education voucher scheme?

I wondered whether the idea of targeting gifted learners came from an external source, or whether it originated with Heather Roy’s own personal interest in this issue. All the evidence suggests the latter, because there are no references in ‘Step Change’ or ‘Free to Learn’ to any existing schemes with such a focus.

NZ Bridge courtesy of Lockhear

I am personally aware of one near-precedent in the UK and, having sifted through the available online literature, I have traced just one further reference. So, unless readers of this post can tell us otherwise, it appears that there are no targeted voucher schemes explicitly designed to support gifted learners (as opposed to generic schemes that include them alongside other learners, or scholarship schemes that offer places only in private schools).

The single reference I have found is an article published in 1998 by B D Baker, called ‘Equity Through Vouchers: The Special Case of Gifted Education’.

It argues that the widespread cuts then being made to free public sector gifted education services across the United States coincided with significant increases in fee-paying programmes, such as the residential summer programmes offered by the Center for Talented Youth at Johns Hopkins and many other similar university-based providers.

These tend not to offer significant financial aid so the vast majority of attendees are from relatively advantaged backgrounds. As a consequence, many poor gifted students are denied any support. One solution would be to provide ‘wealth-equalised vouchers’ to enable gifted disadvantaged students to attend these out-of-school programmes (though it is questionable whether these would compensate for a year-round school-based programme).

Compared with the ‘Step Change’ proposal, such a scheme would have very narrow scope. It could be regarded as little more than a variant on private school scholarships, but it is at least an earlier exposition of the idea that vouchers can be deployed to support access by gifted learners to education provided outside a school setting.

Almost a decade later, the UK Government awarded a contract to CfBT – a private contractor – to support the out-of-school education of gifted and talented learners throughout England, then numbered at about 800,000.

CfBT’s initial ‘pitch’ for the contract was voucher-based, as demonstrated by these contemporary press reports, from the Daily Telegraph, The Times and The Guardian.

The nub of the proposal was that all learners identified by their schools as gifted and talented would receive, regardless of parental income or socio-economic background, an initial stock of 151 learning credits with a monetary value, to be used by their schools to purchase a range of additional learning opportunities from approved providers.

The reports suggest that £65 million would initially be allocated for this purpose with additional support drawn from the national budget for personalised learning.

The Telegraph says:

‘The scheme also introduces to schools for the first time the concept of “vouchers” as part of an education market in which pupils are the consumers and decide how and what they want to learn. It follows a decision by the Tories last month to drop plans for a full-blown voucher, in which parents would get £5,000 a year to spend at the school of their choice — state or private.’

I can’t say whether Ministers were attracted by the political advantages of ‘borrowing’ a policy previously espoused by (and closely associated with) the Conservative Opposition – it would not have been the first time if so.

I do know that the scheme did not proceed in line with the ‘pitch’ because, when it came to the crunch, the Government would not contemplate additional ringfenced funding on this scale for gifted education, particularly at a time when their wider policy was to create larger pools of generic funding that schools could use to address their own priorities.

Although the basic idea was retained for subsequent use in a smaller element of the overall programme – the City Challenge Gifted and Talented Scheme, designed to support progression by gifted disadvantaged young people aged 14-18 to competitive universities – it did not amount to a genuine voucher scheme.

More information about how CfBT’s programme developed is set out in their Memorandum to the Education Select Committee which examined gifted education in April 2010.

Both of these examples fall short of the ‘Step Change’ proposal because they do not extend into mainstream schooling, covering only the additional out-of-school activities that complement the normal classroom experience. Although it is riddled with inconsistencies and has major shortcomings, ‘Step Change’ does seem to break genuinely new ground.

Other targeted voucher schemes (and broadly similar funding models)

Several US voucher schemes have been targeted specifically at learners with special educational needs and disabilities.

According to this article, four states – Florida (1999), Georgia (2007), Ohio (2003) and Utah (2005) – operate schemes that together support over 22,000 students.

Most of these are scholarship schemes that allow parents to transfer their children into selected private sector institutions, but not all.

For example the Georgia scheme outlined here provides for learners who meet the eligibility criteria to request transfer to:

  • Another public school within their district;
  • Another public school district;
  • One of the three state schools for the visually or hearing-impaired; or
  • A private school within the programme.

The article makes clear that special education vouchers do not escape the criticisms levelled at vouchers more generally. But they do demonstrate that vouchers can be designed to support other groups of learners whose needs are not entirely met by the schools in which they are currently enrolled.

A similar strain of thinking has informed England’s recent Special Needs Green Paper, which commits to the further development of personal budgets for families of children with SEN and disabilities:

‘Personal budgets…will enable parents to have a much greater say in the way their child is supported and give them a clear role in designing a personalised package of support for their child and family…

…the Government is already testing approaches to personal budgets, through the personal health budgets pilots and the children’s individual budget pilots. The children’s individual budget pilots have given parents control over funding for elements of their child’s support. This involves a combination of notional budgets, where parents can say how the funding for their child is spent (but do not receive this in a cash payment), and direct payments, where they receive the cash for the services they need and can then purchase the support they need directly….

We want to build on the positive experiences of these pilots and extend the scope of what can be included in personal budgets in a way that is beneficial to families…In particular, we want the pilot areas to test whether any school-based services could be included, and to provide more evidence about the cost and impact of providing support in this way.’

While on the subject of UK reforms, I should also mention briefly the Pupil Premium, because it is referenced in the two New Zealand reports.

The original concept for the Premium has been described as a ‘positively discriminating voucher’ – a more equitable version of the free market voucher. But in its current form it is not strictly a voucher at all. It is a supply-side per capita payment to schools for each learner aged up to 16 who is eligible for free school meals. This year the payment is just £430 per student, but it is expected to increase to something over £1,500 by 2014-15.

Moreover, the money does not need to be spent exclusively on the learner who brings the entitlement. Schools are free to decide how to use the Premium payments they receive, including by bundling them together into a single purchase, though they will need to publish details of how they have been spent.

So the Pupil Premium is really a red herring in the context of this discussion, and it is time to draw the argument to a conclusion.

Oceania New Zealand courtesy of Anita363

Key features of a workable gifted education voucher scheme

It is no easy task to pull these various strands together.

The commentary that follows is very much a work in progress. It offers some starting points for further discussion of the core elements within a workable gifted education voucher scheme, benefiting from the example offered by ‘Step Change’.

But it is tricky to get the tone exactly right. For, if the statements that follow are too generic and vague they will have little practical value. Conversely, they cannot be too specific, for every voucher scheme must be carefully designed to achieve SMART objectives that respond to identified needs in a given educational environment. It is not feasible to generate hard and fast rules with universal application.

‘Step Change’ offers the salutary lesson that the ‘problem’ a voucher scheme is designed to tackle must be very carefully pinned down. Its failure to engage with this essential groundwork ensures that the proposal it outlines is a house of cards.

I have divided the commentary into two sections: ‘upside’ captures the elements that seem to me to be relatively positive; ‘downside’ outlines the most significant problems that would need to be resolved, and for which I have only limited solutions.

Upside


Despite all its faults, ‘Step Change’ introduces the important idea that the increasingly fragmented nature of contemporary education provides a new and compelling justification for the introduction of vouchers.

A voucher is seemingly a good mechanism for funding a personalised education plan that draws together inputs from a range of different providers.

Gifted learners are amongst those least likely to have their needs fully met by the school at which they are enrolled. They are relatively more likely to access significant external enrichment and extension opportunities to supplement their core in-school learning experience.

A voucher would provide a mechanism to ensure that all eligible learners receive the same quantum of support, which is then divided as appropriate between these providers. So if the core provision from the learner’s own school covers 50% of the plan, it would receive 50% of the voucher’s total value.
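The pro-rata split described above can be sketched in a few lines. This is purely illustrative: the provider names, voucher value and plan shares are invented, not drawn from any actual scheme.

```python
def split_voucher(total_value, plan_shares):
    """Divide a voucher's total value between providers in proportion
    to each provider's share of the learner's personalised plan.
    plan_shares maps provider name -> fraction of the plan (sums to 1)."""
    assert abs(sum(plan_shares.values()) - 1.0) < 1e-9, "shares must sum to 100%"
    return {provider: round(total_value * share, 2)
            for provider, share in plan_shares.items()}

# Hypothetical example: the learner's own school delivers 50% of the plan,
# a university outreach course 30%, and an online provider 20%.
allocation = split_voucher(6000.00, {
    "home school": 0.50,
    "university course": 0.30,
    "online provider": 0.20,
})
```

On these assumed figures the home school would receive half the voucher's value, with the remainder divided between the two external providers.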

If the voucher is set at the average annual per capita cost of public schooling in the host education system, gifted learners will not receive preferential funding through the voucher compared with their ineligible peers.

But a voucher set at this level would be unlikely to match the fees at the majority of private schools, which tend to be higher. So if one wanted to introduce a fully transferable voucher, one would need to ensure that, when the voucher is deployed at a private school, the gap between the fees and the voucher's value is met. One way to achieve this is through a means-tested bursary, with full scholarships available to those from disadvantaged backgrounds.
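The means-tested top-up works out as simple arithmetic on the fee gap. A minimal sketch, assuming invented figures and a single `means_tested_fraction` parameter (1.0 for a full scholarship, 0.0 for no bursary support):

```python
def fee_top_up(fees, voucher_value, means_tested_fraction):
    """Split the gap between private school fees and the voucher into a
    means-tested bursary and a residual parental contribution."""
    gap = max(fees - voucher_value, 0.0)          # shortfall the voucher leaves
    bursary = gap * means_tested_fraction          # publicly/charitably funded share
    parental_contribution = gap - bursary          # remainder met by the family
    return gap, bursary, parental_contribution

# Hypothetical example: fees of 9,000 against a 6,000 voucher, with a
# fully disadvantaged family receiving a full scholarship.
gap, bursary, parental = fee_top_up(9000.00, 6000.00, 1.0)
```

The design choice here is that means-testing applies only to the gap, not to the voucher itself, so every eligible learner keeps the same base entitlement.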

It should not be necessary to include in the policy design a system for financially penalising providers who fail to improve student outcomes and rewarding those who succeed. It would be complex to attribute responsibility for a student’s success or failure between several different providers. If results are published openly, it would be sufficient to rely on the market to deliver reward through increased numbers attracted to successful provision.

A gifted education voucher may be a valid policy response to the reduction or even the removal of funding from many public sector gifted education programmes – part of the squeeze on public expenditure now enforced in many countries around the world.

The elimination of poor quality publicly-funded gifted education may be no bad thing. Parents of gifted learners are often persistent critics of the quality of gifted education and support in the schools their children attend. Parent participants in #gtchat complain that their children’s teachers are inadequately trained, the leadership is unresponsive to parental concerns and the school ethos is concentrated not on educational excellence but on bringing all students up to the same minimum level of achievement.

If the market mechanism works properly, a gifted education voucher scheme will widen choice, enabling the families of gifted learners to ‘vote with their feet’. Funding will be concentrated on the most effective providers, who will expand, while poorer providers will go to the wall. So supply-side funding should end up being concentrated on the best providers rather than being used to subsidise poor and good providers alike.

A voucher scheme can also be used as an instrument of equity, targeting limited public funding at those from disadvantaged backgrounds who cannot afford to meet the costs of private sector provision. This may reduce the deadweight cost associated with provision for all students regardless of income – those that can afford to pay from their own pockets will still do so.

Developing this equity argument further, gifted learners from disadvantaged backgrounds – including twice exceptional learners – have a double justification for voucher support. This might be addressed by weighting the voucher, as suggested in ‘Step Change’ to reflect their additional needs, perhaps by adding a flat rate per capita addition along the lines of the Pupil Premium. If the weighting reflected that which already applied to existing per capita school funding, gifted disadvantaged students would continue to receive exactly the same level of support as their disadvantaged peers.

The additional weighted funding available to gifted disadvantaged learners might be used to meet the cost of additional support, designed to equip them with the skills, aspirations, social and cultural capital of their more advantaged peers, helping to address the ‘excellence gap’ and strengthening social mobility through improved progression to university and on into professional careers.
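The flat-rate weighting suggested above amounts to a single addition on top of the base voucher. A minimal sketch, using the £430 Pupil Premium figure mentioned earlier purely as an illustrative default (the base value is invented):

```python
def weighted_voucher(base_value, disadvantaged, premium=430.00):
    """Return the voucher value for a learner, adding a flat-rate per
    capita premium (modelled here on the illustrative £430 Pupil
    Premium figure) for gifted learners from disadvantaged backgrounds."""
    return base_value + (premium if disadvantaged else 0.0)

# Hypothetical example: a 5,000 base voucher.
standard = weighted_voucher(5000.00, disadvantaged=False)
weighted = weighted_voucher(5000.00, disadvantaged=True)
```

If the premium mirrors the weighting already applied to per capita school funding, gifted disadvantaged learners receive the same uplift as their disadvantaged peers, as the text argues.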

Returning to the impact on the supply side, a voucher scheme should lead to a situation where some schools in both the public and private sectors would emerge as specialist centres of excellence in core delivery.

A wider range of providers – including universities, private sector businesses and charitable foundations – could be expected to compete with schools, especially as ‘non-core’ providers, offering a choice of face-to-face, blended and online out-of-school learning opportunities.

Schools themselves would be likely to invest more significantly in out-of-school provision for gifted learners, so that they could recoup income potentially lost to competing providers. Collaborative arrangements would emerge enabling gifted students to spend part of their time in other schools within a network or partnership – and the partnership might offer joint out-of-school opportunities to all their gifted students.

Productive partnerships would also be likely to emerge between schools and other providers. For example, a group of schools could work with a university to enable school-aged gifted learners to take courses at undergraduate level.

Auckland by Night courtesy of Light Knight

Downside


It is all too easy to design a scheme that does not reach the students who most need support. The market will always favour those equipped to make rational choices and act on them. Strong safeguards are needed to ensure that any given scheme does not become dominated by learners from relatively advantaged backgrounds, or even the ‘impoverished middle classes’.

It might be a requirement that the proportion of disadvantaged gifted learners engaged in the programme broadly reflects their wider distribution in their school or local authority. Crude quotas would be avoided, but arrangements should embody the principle that ability is evenly distributed throughout the population, whether by gender, ethnicity or socio-economic background.

But it is arguably the case that any market-driven model will leave behind the hardest-to-reach, because the safeguards we can introduce will not address the fundamental problem of low aspirations, low expectations and disengagement amongst some elements of the learner population.

It is also necessary to contain the additional central costs associated with a scheme, particularly the financial management of voucher payments and the ‘learning broker mentor’ role which seems essential to the successful implementation of a programme that brings together several different providers.

Some central funding will have to be devoted to providing the necessary quality assurance systems, a management information system and thorough formative and summative evaluation. There is also further cost attributable to the additional places that have to be maintained in the system to allow choice to operate.

Part of this extra cost might be recouped by charging all approved providers a relatively modest subscription in return for their inclusion in the programme. But it will not be feasible to achieve a fiscally neutral scheme if all these elements are included in the balance sheet.

Thirdly, and most problematic of all, is how to avoid a negative impact on students not selected for the programme, particularly if they are stuck in schools denuded of their gifted peers.

One might make a case that such negative peer effects would be balanced and potentially outweighed by the additional attention the school can now give to addressing the needs of the remaining, more homogeneous population.

The more fundamental question is whether selection and ‘labelling’ for inclusion in a voucher scheme would inevitably put a brake on the whole system’s capacity to develop excellence in the maximum proportion of its learners.

The answer may depend on your personal philosophy of gifted education – and how you position yourself against the three polarities I identified in this early post.

Last words

I conclude that a targeted voucher scheme could potentially form a valuable element of a more holistic policy to improve the quality of gifted education, though it would be unlikely to work as a solitary measure.

If the purpose is to improve the quality of supply side provision, as seems to be the case in New Zealand, the market effects of a voucher might usefully be supplemented by collaborative efforts to disseminate and embed effective practice across the system, provided that competition and collaboration can co-exist.

It is unlikely that any voucher scheme can rid itself entirely of some of the negative effects outlined above, so a decision to proceed would depend on a judgement that the potential benefits outweigh the disbenefits.

From time to time, vouchers rise back to the top of the agenda in different parts of the world. It may be that Prime Minister Key’s response to ‘Step Change’ means that they are now off limits in New Zealand, although they may perhaps reappear on the other side of the General Election, depending on the outcome.

For the time being in England the Liberal Democrat element of the Coalition will probably ensure that any voucher proposals are ‘translated’ into something more palatable to their supporters, although this story from November 2010 shows that proponents of the pure voucher concept still nurse their ambitions.

Although the term itself has not appeared in recent policy documents and consultations on school funding reform, it would not be impossible for ‘real vouchers’ to be introduced on top of existing reforms at a later date, should our politics lurch to the Right.

But it is in the USA where interest in vouchers remains at its strongest. Perhaps we should look in that direction for the first substantive pilot of vouchers targeted at gifted learners.

Meanwhile, all credit to the ‘Step Change’ working group for generating such food for thought from a distinctly dodgy report!


June 2011

School Vouchers and Gifted Education (Part One)

It’s a great honour to be included in the Blog Tour celebrating New Zealand Gifted Awareness Week 2011. This post, which is Kiwi-inspired, considers whether school vouchers could be an effective tool to improve gifted education.

I ought to begin with a health warning, intended for those who may encounter this post outside its normal environment. Gifted Phoenix defies some of the standard blogging conventions, in that the posts are typically long and rather complex.

My idealised imaginary reader has an informed, possibly academic or professional interest in gifted education and is attracted by evidence-based argument, thorough analysis and synthesis of existing online material and an effort to offer a different perspective, occasionally even to inject some small element of new thinking.

Put another way, Gifted Phoenix posts are a touch idiosyncratic, an acquired taste, not everyone’s cup of tea, maybe just the tiniest bit…Aspergery.

This one is no exception.

Why Vouchers?

To add insult to injury, I’ve chosen for your delectation a topic that offers no hint of empathy or practical support to those wrestling daily with the challenges presented by the education and parenting of gifted learners.

While researching a short piece on gifted education in New Zealand – part of a series I’ve been writing for G&T Update(£) on gifted education worldwide – I re-encountered the two reports produced in February 2010 by Heather Roy’s Inter-Party Working Group (IPWG) for School Choice.

I’d scanned them when they were first published but hadn’t really engaged with them properly. As far as I could establish from this distance, they met with a fair degree of hostility from the professional audience in New Zealand, but otherwise sank without trace.

For, on the same August day that Roy was sacked from her posts as Associate Education Minister and Deputy ACT Party Leader, New Zealand’s Prime Minister quietly let it be known that the IPWG’s recommendations had also bitten the dust.

The school choice debate is politically polarised. It is rare to find a balanced treatment of the arguments for and against because we tend to adopt our different positions on ideological grounds.

The IPWG reports were pro-voucher propaganda, typically selective in their use of evidence, but were also quite rightly criticised for failing to address many of the practical implications of the reforms they proposed. So, like everyone else, I pretty much dismissed them.

But, this time round, I paused over the innovative proposal for a voucher scheme targeted specifically (but not exclusively) at gifted learners. I began to consider more seriously whether such a scheme might help to address the issues currently facing gifted education in New Zealand, England and many other countries around the world.

I was also curious to find out how far the academic debate on education vouchers had moved on since I last engaged with it seriously, back in the 1980s.

So this post will examine the IPWG proposal in that wider context, explore the arguments for and against vouchers – hopefully in an even-handed and non-partisan fashion – make connections with related English education policy and, finally, offer some starting points for the development of a viable gifted education voucher scheme.

As I write this introduction I feel like an experimental chemist, about to mix two highly combustible elements that are not normally forced together. The compound created by the ‘V’ word and the ‘G’ word may be extremely unstable, even potentially toxic.

It won’t be a panacea, but it may have sufficient potential to warrant further consideration. Or maybe you were right first time and it’s merely a damp squib that should have been left to fizzle out.

NZ Parliament Building courtesy of wiifm

What are vouchers and what are they for?

Education vouchers are a funding instrument, normally advocated by those who believe that a competitive, market-based approach is more likely to deliver quality and efficiency across the education system.

They contend that government-led education is monopolistic and bureaucratic – inherently unresponsive to the various and rapidly changing needs of parents as consumers – and inefficient because there is too little incentive to control costs.

Vouchers are a mechanism for distributing to families public funding for the education of their children. A proportion of education funding is tied to the individual learner and typically channelled through the demand side of the market, instead of being paid as a block grant on the supply side.

Parents and learners choose which schools will benefit from their custom, so they have much more influence on the education those schools provide. The balance of power switches away from the supply side and towards the family as consumers.

The proponents – I will call them ‘the centre’ (convenient shorthand for the body or bodies responsible for the education system in question) – must decide to what extent they wish to regulate the market they have created. This they can do by superimposing a selection of ‘checks and balances’.

These will impact on parents’ free choice of schools and/or on schools’ freedom to deliver a distinctive educational offer to the maximum number of learners.

The exercise of choice depends critically on popular schools having the flexibility to expand in response to greater demand from parents for places. Since overall demand is relatively fixed (determined by the total number of learners in the system) it follows that less popular schools will contract. If they become unsustainable they will ultimately close unless the ‘checks and balances’ prevent this.

Supply side flexibility will be controlled by the continued imposition of any universal requirements that are judged necessary to protect standards. They will include the framework set in place to hold schools accountable for their performance. This typically sets out arrangements for inspection and review, and for the publication of performance data, both of which also help to inform school choice decisions on the demand side.

There may also be requirements or incentives for schools to collaborate for the purpose of system-wide improvement. But there is inevitably a tension between competition and collaboration.

The circle is squared if it can be demonstrated that successful schools are driving up their own standards while also improving – rather than damaging – standards in their competitors. Although it is sometimes argued that competition alone can drive system-wide improvement, a judicious blend of competition and collaboration may be more successful, provided that it can be made to work.

But, if improvements in one school invariably result in falling standards elsewhere in the system, opponents of a market-driven approach will quite rightly object that the overall impact on system-wide standards is undermined, especially if the distribution of high quality provision continues to favour learners from advantaged backgrounds.

It is relatively rare now to find educators in advanced education systems who support a rigid, top-down ‘command and control’ system that tightly prescribes the freedom of schools. This method of ensuring that every learner in every school achieves a defined national educational standard is more associated with developing systems.

So this is not really a debate between proponents of two polarised approaches. Since the majority accept that market forces are helpful to some degree, the real issue is how to secure the right balance between market forces and market regulation.

Vouchers are wrongly regarded as a kind of badge or label denoting the more extreme market-driven models, but the reality is that voucher systems can support very different degrees of marketisation.

The arguments for and against vouchers

It follows from the point above that the advantages and disadvantages of a voucher scheme will depend on the particular design of that scheme and whether it is achieving the outcomes it was intended to achieve.

One cannot reasonably extend to all vouchers the benefits and disbenefits identified in the evaluations of specific schemes. Nor can one assume that the theoretical pros and cons will apply to each and every real-life scheme.

That said, a brief resume of the standard arguments for and against vouchers will provide helpful context for the remainder of this post, allowing us to take a more rounded view of the IPWG’s proposals.

Some of the arguments in favour have already been touched on in the previous section and I will not repeat them here. But the proponents of vouchers will also assert that:

  •  Parents from disadvantaged backgrounds should not have their choice of school limited to lower-performing public sector institutions in which their children are already concentrated (because more advantaged families typically secure the available places in higher-achieving schools or else pay for private education). Since these parents normally have limited opportunity to exercise meaningful choice, they are empowered by voucher schemes. The consequent redistribution of learners strengthens inclusion and cohesion and promotes social mobility.
  • When voucher-bearing learners move from a low-performing school to a higher-performing school, they achieve more highly because they are exposed to higher-quality teaching and benefit from learning alongside higher-achieving peers. This narrows achievement gaps between rich and poor and so improves educational standards overall.
  •  The market generates efficiency and drives up performance across the system as good schools strive to keep their market share and poorer schools strive to improve or face closure. In the case of transfer to the private sector, private schools can often provide a personalised education at lower cost because they do not have to meet expensive centrally-imposed requirements (various of the ‘checks and balances’) that apply across the public sector.

Auckland courtesy of Sids 1

Those opposed to vouchers will respond that:

  • Take-up is typically dominated by motivated learners from families with strong educational aspirations. Because they do not tackle the underlying issue of low educational aspiration, vouchers tend to increase the segregation of poor students with relatively less motivation and less parental support. They are left behind in the low-performing public sector schools and no longer benefit from the proximity of their higher-aspiring peers. Choice is further limited because the families who most need the support are least able to afford to move house or meet the cost of transport to another school. They are also least likely to access information about such schemes in the first place. As a result, vouchers may actually increase achievement gaps between rich and poor, so pulling down standards overall.
  •  Parents may not choose schools on the basis of educational standards, particularly if they do not access, or give relatively little weight to, school performance data. They may be more influenced by geographical proximity, local reputation, or the attendance of friends and family. They may be disinclined to select schools with low proportions of learners from similar backgrounds or from their own minority ethnic group. In any case, it is not always possible to judge reliably in which school a particular pupil will achieve the best outcomes.
  • Vouchers weaken the public sector as a whole because, other than in quasi-voucher schemes (see ‘types of voucher schemes’ below) they divert resources away from it and into the private sector. Because accountability is less strong in the private sector, taxpayers’ money can be misused, or it can be applied in educational settings to which taxpayers might reasonably object. Overall costs will increase as private sector recipients become over-reliant on voucher funding. Administrative costs of the voucher scheme must also be factored in, as well as the cost of maintaining an over-supply of school places to enable the market to operate.
  •  Schools that lose students as a consequence of vouchers may find it hard to turn round their performance. Whereas a commercial operation might ditch non-core business, relocate to a cheaper building or change its suppliers, none of these options is available to a school. In practice, the closure of schools is not straightforward. If families and staff offer resistance, this creates additional political and economic costs.

The pragmatic and rather simplistic conclusion I draw from this argument is that an effective voucher scheme must get as close as possible to securing the advantages whilst making every effort to avoid the disadvantages. There may not be such a thing as a perfect voucher scheme, but a really good scheme will have the minimum of unintended, negative consequences.

This creates significant implications for key aspects of any scheme’s design, such as:

  • The rules governing eligibility;
  • How information about the scheme is disseminated, especially to those least likely to access it;
  • Whether parents must opt in or opt out of a scheme;
  • How vouchers are allocated if demand exceeds supply;
  • The monetary value of the voucher and whether means-testing is applied;
  •  Whether receiving schools have any control over the voucher holders they take in;
  • Whether there are costs not covered by the voucher and, if so, how those are met;
  •  How lower-performing schools are managed (to improve or to close);
  • The overall costs of the voucher scheme – including the full administrative cost – and, of course,
  •  The SMART (specific, measurable, achievable, realistic, timebound) objectives the particular scheme is designed to achieve.

Different kinds of voucher

The design of a voucher scheme is, in itself, part of the system of ‘checks and balances’. For example, the centre can regulate the market by providing vouchers to those learners who meet specific criteria, or by imposing restrictions on where vouchers can be spent, so insuring against market effects deemed undesirable.

Different kinds of voucher scheme have been developed for different contexts:

  • Universal schemes (sometimes called full schemes) are introduced across all schools in an education system, regardless of whether they are in the public or private sectors;
  •  Scholarship schemes support learners currently educated in the public sector to attend private sector schools (and are often designed specifically for learners from disadvantaged backgrounds);
  • Quasi-voucher schemes operate only in public sector schools, acting as a redistributive mechanism by increasing pupil numbers in popular schools at the expense of those that are relatively less popular;
  •  Targeted schemes are for a defined subset of learners, such as those from disadvantaged backgrounds, or those with special educational needs or, indeed, those identified as gifted and talented.

This post is not directly concerned with universal schemes which, by definition, are designed to cater for all students in all schools. Nor is it concerned with scholarship models, although they account for a relatively large proportion of the voucher schemes currently under way in different parts of the world.

I want to concentrate on targeted schemes that select eligible pupils at least partly on the basis of academic ability or academic achievement but which operate across the public and private sectors, or (as quasi-vouchers) in the public sector only.

Heather Roy courtesy of cleOpatra

The Inter-Party Working Group (IPWG) for School Choice

As Kiwi readers will know, a minority National Government was elected in New Zealand in November 2008. The National Party did not establish a formal coalition but instead formed ‘Confidence and Supply Agreements’ with three smaller parties: ACT, the Maori Party and United Future.

The agreement between ACT and the National Party included provision for a report on

‘policy options relating to the funding and regulation of schools that will increase parental choice and school autonomy.’

So a Working Group was convened in April 2009, comprising representatives of the National Party, ACT and the Maori Party under the Chairmanship of Heather Roy.

The terms of reference were to:

  • ‘Review school funding and examine options that will reduce central control and treat all schools on a more equal basis according to enrolments;
  • Consider whether funding mechanisms should include alternative arrangements for special factors (eg transport, special needs) and decile funding, and for additional fees;
  • Review enrolment scheme policy and other regulations which may limit parental choice and the ability of schools to respond to parental demand;
  • Examine the concept of trust schools and other models which might facilitate greater self management and innovation, and the registration and accountability mechanisms for such schools that might accompany the relaxation of detailed controls;
  •  Consider the interface elements of the education system such as Maori education, school property, school transport, special education and the Correspondence School with a more choice-oriented system; and
  • Review policies in other countries, in particular Australia, Sweden, the Netherlands and Ireland, for lessons that may be relevant to the Working Group’s task.’

In the event, the Group could not agree how to address all of the items on this agenda and so produced two separate reports. The main report is called ‘Step Change: Success the Only Option’ but there is also a much longer minority report called ‘Free to Learn’, which carries the signatures of the two ACT members: Roy herself and Sir Roger Douglas.

The preface to the minority report describes the relationship between the two in the following terms:

‘Free to Learn shares the chief concerns of Step Change: Success the Only Option, the final report of the IPWG. It commends Step Change: Success the Only Option for its emphasis on New Zealand’s poorest performing (20 percent) students and its gifted and talented (5 percent) students.

 Beyond this, however, Free to Learn holds that the recommendations in Step Change: Success the Only Option will have much greater impact if the remaining 75 percent of New Zealand students are allowed to benefit from them. Freedom of choice for parents, school innovation, better results and cost savings as a result of competition should be available to every New Zealander.’

So ‘Free to Learn’ is advocating a universal voucher scheme, whereas ‘Step Change’ is recommending a targeted scheme. Whereas ‘Free to Learn’ draws significantly (and rather one-sidedly) on the international evidence base, the main report barely references it. Indeed the main report is markedly narrow in scope – it does not properly address any of the terms of reference given above.

Nor does ‘Step Change’ propose its targeted scheme as a formal pilot for the universal scheme recommended in ‘Free to Learn’ (and it is arguable whether it could properly operate as a pilot, given that its target groups are likely to have somewhat different needs from the bulk of the school-age population). Instead it simply declares, with some naivety:

‘If the initiative is successful it can be extended to the remaining 75 percent of New Zealand students’.

I have no insight into the internal discussion and disagreement that prevented the working group from producing a single report. Perhaps some Kiwi readers may be able to shed more light on this.

It would be particularly interesting to understand the rationale that led the non-ACT members to support vouchers for gifted and low-attaining learners, but to oppose their universal application. Presumably they must have been convinced that vouchers were relatively better suited to meeting the needs of the two minority groups. But neither report addresses this issue. Perhaps it was simply a political compromise and the educational arguments were irrelevant.

What are the target groups for the proposed ‘Step Change’ vouchers?

‘Step Change’ begins with a brief review of the performance of the New Zealand education system, focusing particularly on either end of the achievement spectrum. It concentrates on the 20% lowest-achieving and ‘the top 5% who are gifted and talented’ (without defining the latter apart from specifying a 6-16 age range).

There is no justification whatsoever for the choice of either percentage. The 5% figure for the gifted and talented may be better interpreted as an achievement measure than an ability measure, but the report offers no clue beyond the choice of a fixed percentage.

These two groups are of course very different, with markedly different needs. The report identifies them as equally deserving of support, but its very different approach to the diagnosis of their predicaments is revealing.

The lowest-achieving 20% are defined exclusively by low student performance, against NCEA levels 1 and 2 (those not achieving Level 2 are dismissed by the Report as ‘this failing 33%’). There is specific reference to the significant proportions of Maori and Pasifika students who did not achieve NCEA level 1.

Conversely, the focus on gifted students is not justified by evidence of their underachievement, as one might expect, but by poor school-based provision and support:

‘The Education Review Office (ERO) has found that provision is highly responsive and appropriate in only one in five schools, with 58 percent of schools, programmes and provisions being either somewhat or not appropriate and responsive.’

This is in fact an incorrect reference to ERO’s 2008 report ‘Schools’ Provision for Gifted and Talented Students’. The table on page 25 of that report shows that provision is ‘highly appropriate and responsive’ in only 5% (one in 20) of schools, not 20% as ‘Step Change’ suggests.

It is ‘responsive and appropriate’ in a further 37%, ‘somewhat responsive and appropriate’ in 41% and ‘not responsive and appropriate’ in 17% (the latter two categories giving the 58% figure).

It is not clear why this indicator is singled out from the other four that ERO considered, and why the Report did not rely instead on the overall findings:


  • 17% of schools had good provision across all five areas;
  • 48% had good provision in some areas but not others; and
  • 35% did not have good provision in any of the five areas.

But the bigger issue is the decision to ignore pupil performance evidence in the case of gifted learners. It could potentially be explained by the lack of national achievement statistics for a properly identified gifted and talented population, but the (admittedly imperfect) proxy offered by the results of high achievers would surely have been better than nothing.

Then again, it would be problematic to use those results to support a claim that vouchers would help to secure a much-needed improvement in the achievement of gifted learners.

For the inconvenient truth is that New Zealand’s high achievers are performing very well indeed, especially when compared with their peers in other countries.

This was acknowledged by Education Minister Anne Tolley following the publication of PISA 2009:

‘“We have a lot to be proud of, as this study confirms our top students are among the best in the world,” says Mrs Tolley…

“Our challenge is to work together to address the issues raised in the report.

“New Zealand continues to have a disproportionate number of lower achievers, and this hasn’t changed in the past nine years.”’

My own analysis of PISA data on the performance of high achievers confirms that New Zealand’s high achievers are outperforming most other countries, including England’s, particularly in maths and science.

So, on the basis of this evidence, one can only conclude that those gifted students in New Zealand who are also high achievers produce very high levels of performance in spite of the limited support offered to gifted and talented learners by one-third of New Zealand schools.

This is a serious issue for ‘Step Change’ because it raises the question of whether a single voucher scheme can be designed to meet the markedly different circumstances affecting the two ends of the target population.

If the primary objective for the low-achieving 20% is to improve their personal achievement, while the primary objective for the gifted 5% is to improve the quality of support provided to them by their schools, then – arguably at least – a single scheme would be too crude an instrument to address both these issues simultaneously.

If we accept that voucher schemes are not a panacea and need to be designed carefully to address specific policy problems, then it would be far better to create a separate customised scheme for the gifted 5%.

But ‘Step Change’ is far from consistent on the critical question of what exactly the gifted element of the scheme is supposed to achieve, as we shall see next time.

In Part Two…

The arguments advanced in ‘Step Change’ have already begun to unravel under this initial scrutiny, but it may still be possible to salvage a reasonable case in principle for introducing vouchers for New Zealand’s gifted learners.

In Part Two of this post we will explore whether the central voucher proposal in ‘Step Change’ is likely to address the identified problem. Then we can position this analysis in a wider international and historical context. Finally we can consider whether the accumulated evidence provides a basis for some guiding principles for a workable gifted education voucher scheme.


June 2011

PISA 2009: International Comparisons of Gifted High Achievers’ Performance

This post is an initial review of what PISA 2009 tells us about the performance of gifted high achievers in England and other English-speaking countries compared with the countries at the top of the PISA 2009 rankings.

It concentrates on what we can deduce from the figures rather than causation: that will be addressed in subsequent posts. It examines:

  • average performance by country, including changes between PISA 2006 and PISA 2009
  • the performance of high achievers, comparing the relative results of different countries in 2009 and how those have changed since PISA 2006
  • relative differences between the performance of high achievers and average performance by country – expressed in terms of rankings – and how those have altered between 2006 and 2009.

The twelve countries and regions included in the analysis are the highest performers – Hong Kong (China), Korea, Taiwan, Finland and, for 2009 only, Shanghai (China) and Singapore – plus Australia, Canada, Ireland, New Zealand, the UK and the USA.

I should state at the outset that I am not a statistician: this is a lay analysis and I apologise in advance for any transcription errors. Nevertheless, I hope it reveals some significant findings, including points which have received scant attention in the wider media coverage of the PISA results.

Background to PISA

The Programme for International Student Assessment (PISA) is a triennial OECD survey of the performance of 15 year-old students in science, mathematics and reading. Science was the main focus in 2006; reading was the main focus in 2009.

Fifty-seven countries took part in PISA 2006; a total of sixty-seven countries have taken part in PISA 2009. The effect of this increase in numbers on rankings should be borne in mind, especially the inclusion of very high-performing areas, notably Shanghai and Singapore.

It is also worth noting at the outset that PISA rankings do not reflect the overall numbers of students achieving specific levels: a small country that has a high percentage of its students achieving a high achievement level outscores a bigger country with a lower percentage of high achievers, even though the overall number of high achievers in the bigger country is greater.
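To make the point concrete, here is a minimal sketch with invented figures (the cohort sizes and shares below are hypothetical, not PISA data):

```python
# Hypothetical example: a small jurisdiction with a higher *share* of
# high achievers outranks a large one on a PISA-style percentage
# measure, even though the large one has far more high achievers in
# absolute terms. All figures below are invented for illustration.
small = {"cohort": 50_000, "high_pct": 15}      # 15% reach the benchmark
large = {"cohort": 3_000_000, "high_pct": 10}   # 10% reach the benchmark

small_count = small["cohort"] * small["high_pct"] // 100
large_count = large["cohort"] * large["high_pct"] // 100

print(f"small: {small['high_pct']}% of cohort = {small_count:,} high achievers")
print(f"large: {large['high_pct']}% of cohort = {large_count:,} high achievers")
# The small jurisdiction ranks higher (15% > 10%), yet the large one
# has 40 times as many high achievers (300,000 against 7,500).
```

In other words, the rankings measure relative prevalence, not the absolute size of each country’s high-achieving population.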

PISA assesses reading, mathematical and scientific literacy. It is important to have a clear understanding of exactly what is being assessed, not least so we can understand to what extent this differs from the nature of our own national assessments.

If a country’s national assessments are congruent with PISA then it will be likely to perform much better in PISA than a similar country which is domestically focused on quite different priorities.

According to the PISA 2009 Assessment Framework:

‘Reading literacy…is defined in terms of students’ ability to understand, use and reflect on written text to achieve their purposes…the capacity not just to understand a text but to reflect on it, drawing on one’s own thoughts and experiences. In PISA, reading literacy is assessed in relation to the:

Text format…continuous texts or prose organised in sentences and paragraphs…non-continuous texts that present information in other ways, such as in lists, forms, graphs, or diagrams… a range of prose forms, such as narration, exposition and argumentation…both print and electronic texts…these distinctions are based on the principle that individuals will encounter a range of written material in their civic and work-related adult life (e.g. application, forms, advertisements) and that it is not sufficient to be able to read a limited number of types of text typically encountered in school.

Reading processes (aspects): Students are not assessed on the most basic reading skills, as it is assumed that most 15-year-old students will have acquired these. Rather, they are expected to demonstrate their proficiency in accessing and retrieving information, forming a broad general understanding of the text, interpreting it, reflecting on its contents and reflecting on its form and features.

Situations: These are defined by the use for which the text was constructed. For example, a novel, personal letter or biography is written for people’s personal use; official documents or announcements for public use; a manual or report for occupational use; and a textbook or worksheet for educational use. Since some groups may perform better in one reading situation than in another, it is desirable to include a range of types of reading in the assessment items.

Mathematical literacy… is concerned with the ability of students to analyse, reason, and communicate ideas effectively as they pose, formulate, solve, and interpret solutions to mathematical problems in a variety of situations. The PISA mathematics assessment has, so far, been designed in relation to the:

Mathematical content: This is defined mainly in terms of four overarching ideas (quantity, space and shape, change and relationships, and uncertainty) and only secondarily in relation to curricular strands (such as numbers, algebra and geometry).

Mathematical processes: These are defined by individual mathematical competencies. These include the use of mathematical language, modelling and problem-solving skills…

Situations: These are defined in terms of the ones in which mathematics is used, based on their distance from the students. The framework identifies five situations: personal, educational, occupational, public and scientific.

However, a major revision of the PISA mathematics framework is currently underway in preparation for the PISA 2012 assessment.

Scientific literacy… is defined as the ability to use scientific knowledge and processes not only to understand the natural world but to participate in decisions that affect it. The PISA science assessment is designed in relation to:

Scientific knowledge or concepts: These constitute the links that aid understanding of related phenomena. In PISA, while the concepts are the familiar ones relating to physics, chemistry, biological sciences and earth and space sciences, they are applied to the content of the items and not just recalled.

Scientific processes: These are centred on the ability to acquire, interpret and act upon evidence. Three such processes present in PISA relate to: 1) describing, explaining and predicting scientific phenomena, 2) understanding scientific investigation, and 3) interpreting scientific evidence and conclusions.

Situations or contexts: These concern the application of scientific knowledge and the use of scientific processes applied. The framework identifies three main areas: science in life and health, science in Earth and environment, and science in technology.’

Defining high achievers in PISA

PISA performance scales are designed so that the average student score in OECD countries is 500 or thereabouts. Student performance is divided into 6 proficiency levels (only 5 for reading in PISA 2006), defined in terms of the competences demonstrated by students achieving that level.


In PISA 2006 reading, the highest proficiency level, level 5, was achieved by 8.6% of OECD students, with a lower score limit of 625.6. In PISA 2009 a level 6 was introduced (lower score limit of 698.3), which was achieved by 0.8% of OECD students. Levels 5 and 6 combined (lower score limit of 625.6) were achieved by 7.6% of OECD students. This analysis therefore assumes that levels 5 and 6 together in 2009 can be compared with level 5 in 2006.

We can conclude that higher-level reading performance in OECD countries fell by 1.0 percentage points between 2006 and 2009. This may well be attributable to changes in the level of demand in the assessment framework rather than an overall dip in performance.

According to the PISA 2009 Assessment Framework (or the PISA Results book Volume I in the case of reading) tasks at level 6:

‘typically require the reader to make multiple inferences, comparisons and contrasts that are both detailed and precise. They require demonstration of a full and detailed understanding of one or more texts and may involve integrating information from more than one text. Tasks may require the reader to deal with unfamiliar ideas, in the presence of prominent competing information, and to generate abstract categories for interpretations. Reflect and evaluate tasks may require the reader to hypothesise about or critically evaluate a complex text on an unfamiliar topic, taking into account multiple criteria or perspectives, and applying sophisticated understandings from beyond the text. A salient condition for access and retrieve tasks at this level is precision of analysis and fine attention to detail that is inconspicuous in the texts.’

And tasks at level 5:

‘that involve retrieving information require the reader to locate and organise several pieces of deeply embedded information, inferring which information in the text is relevant. Reflective tasks require critical evaluation or hypothesis, drawing on specialised knowledge. Both interpretative and reflective tasks require a full and detailed understanding of a text whose content or form is unfamiliar. For all aspects of reading, tasks at this level typically involve dealing with concepts that are contrary to expectations.’


In PISA 2006, science level 6 was achieved by 1.3% of OECD students and required a lower score limit of 707.9. Level 5 and above was achieved by 9.0% requiring a lower score of 633.3.

In 2009, these figures were: level 6 achieved by 1.1% of OECD students with a lower score limit of 707.9; level 5 and above achieved by 8.5% of OECD students with a lower score limit of 633.3.

The science framework does not seem to have changed significantly between the two assessments, so we can provisionally identify a small overall dip in higher level performance between 2006 and 2009.

Level 6 students can:

‘consistently identify, explain and apply scientific knowledge and knowledge about science in a variety of complex life situations. They can link different information sources and explanations and use evidence from those sources to justify decisions. They clearly and consistently demonstrate advanced scientific thinking and reasoning, and they use their scientific understanding in support of solutions to unfamiliar scientific and technological situations. Students at this level can use scientific knowledge and develop arguments in support of recommendations and decisions that centre on personal, social or global situations.’

At Level 5, students can:

‘identify the scientific components of many complex life situations, apply both scientific concepts and knowledge about science to these situations, and can compare, select and evaluate appropriate scientific evidence for responding to life situations. Students at this level can use well-developed inquiry abilities, link knowledge appropriately and bring critical insights to situations. They can construct explanations based on evidence and arguments based on their critical analysis.’


In PISA 2006 mathematics, level 6 was achieved by 3.3% of OECD students with a lower score limit of 669.3, and level 5 and above by 13.3% of OECD students with a lower score limit of 607.

In PISA 2009, level 6 was achieved by 3.1% of OECD students with a lower score limit of 669.3 and level 5 and above by 12.7% of OECD students with a lower score of 607.

As with science, the framework does not appear significantly changed and so we can provisionally identify a small drop overall in the proportion of OECD students achieving these higher levels.

The PISA 2009 rubric says:

‘At Level 6 students can conceptualise, generalise, and utilise information based on their investigations and modelling of complex problem situations. They can link different information sources and representations and flexibly translate among them. Students at this level are capable of advanced mathematical thinking and reasoning. These students can apply this insight and understandings along with a mastery of symbolic and formal mathematical operations and relationships to develop new approaches and strategies for attacking novel situations. Students at this level can formulate and precisely communicate their actions and reflections regarding their findings, interpretations, arguments, and the appropriateness of these to the original situations.

‘At Level 5 students can develop and work with models for complex situations, identifying constraints and specifying assumptions. They can select, compare, and evaluate appropriate problem solving strategies for dealing with complex problems related to these models. Students at this level can work strategically using broad, well-developed thinking and reasoning skills, appropriate linked representations, symbolic and formal characterisations, and insight pertaining to these situations. They can reflect on their actions and formulate and communicate their interpretations and reasoning.’
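Before turning to the country-level tables, the OECD-average movements quoted above for the three domains can be pulled together in one place; a minimal sketch in Python, with the shares transcribed from the figures above (for reading, levels 5 and 6 combined in 2009 are set against level 5 in 2006, per the comparability assumption stated earlier):

```python
# OECD-average shares (%) at the high-achievement benchmark,
# as quoted in the text above: (2006, 2009) per domain.
oecd_high_share = {
    "reading": (8.6, 7.6),    # level 5 in 2006 vs levels 5+6 in 2009
    "maths":   (13.3, 12.7),  # levels 5+6 in both cycles
    "science": (9.0, 8.5),    # levels 5+6 in both cycles
}

for domain, (y2006, y2009) in oecd_high_share.items():
    change = round(y2009 - y2006, 1)  # percentage points, not per cent
    print(f"{domain}: {y2006}% -> {y2009}% ({change:+} pts)")
```

Each domain shows a small decline of between 0.5 and 1.0 percentage points across the OECD as a whole.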

Comparing PISA 2006 and 2009 results by country for all participants

Table 1 below compares average scores by country in PISA 2006 and PISA 2009. These are essentially the headline figures which attract most media attention and they are included here primarily for the purposes of comparison.

(Cells show the average score, with the rank in brackets.)

| Country | Reading 2009 | Reading 2006 | Maths 2009 | Maths 2006 | Science 2009 | Science 2006 |
| --- | --- | --- | --- | --- | --- | --- |
| Aus | 515 (9) | 513 (7) | 514 (15) | 520 (13) | 527 (10) | 527 (8) |
| Can | 524 (6) | 527 (4) | 527 (10) | 527 (7) | 529 (8) | 534 (3) |
| Fin | 536 (3) | 547 (2) | 541 (6) | 548 (2) | 554 (2) | 563 (1) |
| HK | 533 (4) | 536 (3) | 555 (3) | 547 (3) | 549 (3) | 542 (2) |
| Ire | 496 (21) | 517 (6) | 487 (32) | 501 (22) | 508 (20) | 508 (20) |
| Korea | 539 (2) | 556 (1) | 546 (4) | 547 (4) | 538 (6) | 522 (11) |
| NZ | 521 (7) | 521 (5) | 519 (13) | 522 (11) | 532 (7) | 530 (7) |
| Shang | 556 (1) | N/A | 600 (1) | N/A | 575 (1) | N/A |
| Sing | 526 (5) | N/A | 562 (2) | N/A | 542 (4) | N/A |
| Taiwan | 495 (23) | 496 (16) | 543 (5) | 549 (1) | 520 (12) | 532 (4) |
| UK | 494 (25) | 495 (17) | 492 (27) | 495 (24) | 514 (16) | 515 (14) |
| US | 500 (17) | N/A | 487 (31) | 474 (35) | 502 (23) | 489 (29) |
| OECD average | 493 | 495 | 496 | 497 | 501 | 498 |

However, it is worth drawing attention to some key points arising from the table:

  • As indicated above, there have been small falls in overall OECD performance in reading and maths between 2006 and 2009 and a corresponding small increase in science performance. The change in reading in particular may be attributable more to a tightening of the assessment framework than to a genuine decline in performance
  • In reading, the average score has increased slightly in Australia, remained unchanged in New Zealand, and fallen slightly in Canada, Hong Kong, Taiwan and the UK. Given the relatively tougher assessment framework and the associated overall dip in cross-OECD performance, these countries have arguably done well to maintain their scores
  • However, there have been more significant falls in reading performance in Finland, Ireland and Korea – all three strong performers in PISA 2006. Only Ireland has experienced a significant drop in ranking as a consequence, but these results should be a matter of concern in all three countries, perhaps suggesting they may need to focus more on aspects of reading newly introduced into the 2009 assessment framework
  • In maths, the average score has increased significantly in Hong Kong and the US, remained largely unchanged in Canada, New Zealand and the UK and fallen significantly in Australia, Finland, Ireland and Taiwan. Only Ireland has experienced a significant drop in its ranking
  • Nevertheless, Australia, Finland and Taiwan should be concerned about the dip in their performance of 6-7 points in each case. This cannot be attributed to other countries leapfrogging them in the table
  • In science, Hong Kong, Korea and the US have all made significant improvements since 2006, while performance is largely unchanged in Australia, Ireland, New Zealand and the UK and has declined significantly in Canada, Finland and Taiwan. The latter should be concerned.
  • In all three areas, loss of rank combined with fairly static performance is attributable to other countries improving at a faster rate and is a matter of relative competition. It is not possible to depress the performance of a competitor so these countries must concentrate on improving their own performance. That said, they should take some comfort from their capacity to sustain their 2006 performance when their competitors are clearly not doing so
  • On the basis of this evidence, the countries with the biggest overall headaches are Canada, Finland and especially Taiwan, all three lauded to some degree as PISA world leaders.

Comparing Percentages of High Achievers in PISA 2009 and PISA 2006

Table 2 compares the percentages of high achievers in each of our 12 countries who achieved the higher levels in reading, maths and science in 2006 and 2009 respectively.

(Cells show the percentage of students reaching each level; in reading, only level 5 was reported in 2006.)

| Country | Rdg L6 ’09 | Rdg L5+6 ’09 | Rdg L5 ’06 | Maths L6 ’09 | Maths L5+6 ’09 | Maths L6 ’06 | Maths L5+6 ’06 | Sci L6 ’09 | Sci L5+6 ’09 | Sci L6 ’06 | Sci L5+6 ’06 |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Aus | 2.1 | 12.8 | 10.6 | 4.5 | 16.4 | 4.3 | 16.4 | 3.1 | 14.6 | 2.8 | 14.6 |
| Can | 1.8 | 12.8 | 14.5 | 4.4 | 18.3 | 4.4 | 18.0 | 1.6 | 12.1 | 2.4 | 14.4 |
| Fin | 1.6 | 14.5 | 16.7 | 4.9 | 21.6 | 6.3 | 24.4 | 3.3 | 18.7 | 3.9 | 20.9 |
| HK | 1.2 | 12.4 | 12.8 | 10.8 | 30.7 | 9.0 | 27.7 | 2.0 | 16.2 | 2.1 | 15.9 |
| Ire | 0.7 | 7.0 | 11.7 | 0.9 | 6.7 | 1.6 | 10.2 | 1.2 | 8.7 | 1.1 | 9.4 |
| Korea | 1.0 | 12.9 | 21.7 | 7.8 | 25.5 | 9.1 | 27.1 | 1.1 | 11.6 | 1.1 | 10.3 |
| NZ | 2.9 | 15.8 | 15.9 | 5.3 | 18.9 | 5.7 | 18.9 | 3.6 | 17.6 | 4.0 | 17.6 |
| Shang | 2.4 | 19.4 | N/A | 26.6 | 50.7 | N/A | N/A | 3.9 | 24.3 | N/A | N/A |
| Sing | 2.6 | 15.7 | N/A | 15.6 | 35.6 | N/A | N/A | 4.6 | 19.9 | N/A | N/A |
| Taiwan | 0.4 | 5.2 | 4.7 | 11.3 | 28.5 | 11.8 | 31.9 | 0.8 | 8.8 | 1.7 | 14.6 |
| UK | 1.0 | 8.0 | 9.0 | 1.8 | 9.9 | 2.5 | 11.2 | 1.9 | 11.4 | 2.9 | 13.7 |
| US | 1.5 | 9.9 | N/A | 1.9 | 9.9 | 1.3 | 7.7 | 1.3 | 9.2 | 1.5 | 9.1 |
| OECD average | 1.0 | 7.0 | 8.6 | 3.1 | 12.7 | 3.3 | 13.4 | 1.1 | 8.5 | 1.3 | 8.8 |
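The year-on-year movements discussed in the bullets that follow can be recomputed directly from Table 2; a minimal sketch for maths at levels 5 and 6 combined, with the percentages transcribed by hand (Shanghai and Singapore are omitted because they have no 2006 figures):

```python
# Maths, levels 5 and 6 combined (%), transcribed from Table 2:
# (2006, 2009) per country; Shanghai and Singapore have no 2006 data.
maths_5plus6 = {
    "Australia": (16.4, 16.4), "Canada": (18.0, 18.3),
    "Finland": (24.4, 21.6), "Hong Kong": (27.7, 30.7),
    "Ireland": (10.2, 6.7), "Korea": (27.1, 25.5),
    "New Zealand": (18.9, 18.9), "Taiwan": (31.9, 28.5),
    "UK": (11.2, 9.9), "US": (7.7, 9.9),
}

# Change in percentage points, most severe decline first.
changes = sorted(
    ((country, round(y09 - y06, 1)) for country, (y06, y09) in maths_5plus6.items()),
    key=lambda item: item[1],
)
for country, delta in changes:
    print(f"{country}: {delta:+} pts")
```

Ireland shows the sharpest fall on this measure, with Hong Kong and the US the clearest improvers.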


Reading

  • The 2006 leaders amongst our subset of countries were Korea, Finland and New Zealand respectively whereas, in 2009, the leaders were Shanghai, New Zealand and Singapore (Shanghai and Singapore did not take part in the 2006 assessment).
  • All except Taiwan and Ireland exceeded the OECD average, although the percentage of the highest level 6 achievers in 2009 was lower than the OECD average in Taiwan and Ireland and equivalent to it in the UK and Korea. These four countries arguably need to concentrate more on the very top of their achievement range.
  • The percentage achieving levels 5/6 has increased over the 3-year period in Australia and Taiwan, remained largely unchanged in Hong Kong and New Zealand, fallen slightly in the UK and fallen substantially in Canada, Finland, Ireland and Korea. The decline in Korea is particularly startling.


Maths

  • The 2006 leaders in our subset were Taiwan, Korea and Hong Kong respectively at level 6 and Taiwan, Hong Kong and Korea respectively at levels 5/6. In 2009, the leaders are Shanghai, Singapore and Taiwan respectively at level 6 and Shanghai, Singapore and Hong Kong respectively at levels 5/6.
  • In 2006, the UK, US and Ireland were below the OECD average for level 6 performance and the other nine countries were above it. This continued to be the case in 2009 though, whereas the US was moving in the right direction, level 6 performance declined in the UK and Ireland, identifying this as an aspect potentially requiring attention in both countries;
  • In 2006, the same three countries were below the OECD average for level 5/6 performance and this continued to be the case in 2009. As with level 6, the US has improved its performance, drawing level with the UK, but the UK’s performance has declined somewhat and Ireland’s has declined significantly. This suggests that higher achievers also need more attention in both countries
  • Between 2006 and 2009, other countries improving their performance included Australia and Hong Kong (level 6) and Canada and Hong Kong (levels 5 and 6) though only Hong Kong managed significant improvement. Performance was relatively unchanged in Canada (level 6) and New Zealand (levels 5 and 6). There was a decline in Finland, Korea, New Zealand and Taiwan at level 6, most noticeably in Finland and Korea, and in Finland, Korea and Taiwan at levels 5 and 6 together.
  • If we compare rates of change for level 6 and levels 5/6 respectively, we see that countries doing relatively better with their highest achievers (level 6) include Australia, New Zealand, Taiwan, the UK and the US, while countries doing relatively better with their higher achievers (levels 5 and 6) include Canada, Finland and Korea.


Science

  • The 2006 leaders in terms of level 6 performance were New Zealand, Finland and the UK. Finland, New Zealand and Hong Kong led the field for level 5 and 6 performance. In 2009 Singapore, Shanghai and New Zealand respectively were leaders in level 6 performance and Shanghai, Singapore and Finland respectively for levels 5 and 6 together
  • In 2006, Ireland and Korea were below the OECD average for level 6 performance but all countries were above the average for levels 5 and 6 combined. In 2009, Taiwan had fallen below the OECD average for level 6, Korea matched it and Ireland had exceeded it; all countries were still above the OECD average for levels 5 and 6 together. This suggests that both Ireland and Korea deserve credit for the progress made with their highest achievers in science.
  • Australia was the only other country to improve its level 6 performance in science during this period while Hong Kong, Korea and the US (very slightly) improved their performance for levels 5 and 6 together.
  • There were declines in performance at level 6 for Canada, Finland, Hong Kong, New Zealand, Taiwan, the UK and the US, while Korea flatlined. The worst declines were in Canada, Taiwan and the UK.
  • In terms of levels 5 and 6 combined, improvement was made over the period by Hong Kong, Korea and the US (very slightly in the case of the US). There were declines in performance in Canada, Finland, Ireland, Taiwan and the UK, the fall in Taiwan being particularly marked.
  • Examining the rate of change for level 6 compared with levels 5 and 6, it is hard to detect a clear pattern, but Australia and Ireland seem to be doing relatively better with level 6 while, conversely, Canada, Korea, the UK and the US seem to be doing relatively worse.

As we have noted above, performance across all OECD countries fell slightly across the board between 2006 and 2009. Insofar as this is not attributable to changes to the assessment frameworks, we might reasonably note that the OECD’s effort in producing PISA has not of itself resulted in improved performance across OECD countries for high achievers over this 3-year period.

Comparing ranks for high achievers versus all achievers, 2006 and 2009

The third and final table compares rank positions for high achievers and all achievers in 2006 and 2009 respectively.

This comparison could also be undertaken on the basis of percentages achieving the different levels and/or the average scores achieved, but the rankings are more readily available and are a reasonable guide to changes in the relative performance of countries, if not absolute changes.

(R = reading, M = maths, S = science; cells show rank positions. In reading, 2006 reported level 5 only.)

| Country | R L6 ’09 | R L5+6 ’09 | R all ’09 | R L5 ’06 | R all ’06 | M L6 ’09 | M L5+6 ’09 | M all ’09 | M L6 ’06 | M L5+6 ’06 | M all ’06 | S L6 ’09 | S L5+6 ’09 | S all ’09 | S L6 ’06 | S L5+6 ’06 | S all ’06 |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Aust | 4 | 7 | 9 | 9 | 7 | 13 | 16 | 15 | 14 | 14 | 13 | 5 | 7 | 10 | 4 | 5 | 8 |
| Can | 6 | 7 | 6 | 4 | 4 | 14 | 12 | 10 | 13 | 12 | 7 | 10 | 9 | 8 | 6 | 7 | 3 |
| Fin | 7 | 4 | 3 | 2 | 2 | 11 | 7 | 6 | 6 | 4 | 2 | 4 | 3 | 2 | 2 | 1 | 1 |
| HK | 10 | 9 | 4 | 5 | 3 | 4 | 3 | 3 | 3 | 2 | 3 | 7 | 6 | 3 | 9 | 4 | 2 |
| Ire | 17 | 22 | 21 | 6 | 6 | 40 | 36 | 32 | 32 | 28 | 22 | 15 | 19 | 20 | 18 | 19 | 20 |
| Korea | 12 | 6 | 2 | 1 | 1 | 5 | 5 | 4 | 2 | 3 | 4 | 18 | 18 | 10 | 18 | 16 | 11 |
| NZ | 1 | 2 | 7 | 3 | 5 | 9 | 11 | 13 | 9 | 8 | 11 | 3 | 4 | 7 | 1 | 2 | 7 |
| Shang | 3 | 1 | 1 | N/A | N/A | 1 | 1 | 1 | N/A | N/A | N/A | 2 | 1 | 1 | N/A | N/A | N/A |
| Sing | 2 | 3 | 5 | N/A | N/A | 2 | 2 | 2 | N/A | N/A | N/A | 1 | 2 | 4 | N/A | N/A | N/A |
| Taiw | 27 | 29 | 23 | 29 | 16 | 3 | 4 | 5 | 1 | 1 | 1 | 23 | 18 | 12 | 12 | 5 | 4 |
| UK | 12 | 18 | 25 | 9 | 27 | 30 | 30 | 27 | 24 | 23 | 22 | 8 | 11 | 16 | 3 | 8 | 14 |
| US | 8 | 11 | 17 | N/A | N/A | 29 | 30 | 31 | 33 | 29 | 35 | 14 | 17 | 23 | 14 | 9 | 29 |

  • For reading, Taiwan and the UK were relatively unusual in 2006 because of the dissonance between their rankings – the UK because it did so much better for its higher achievers; Taiwan because it did so much worse. By 2009, Hong Kong and Korea are beginning to follow the same trend as Taiwan, while Australia, New Zealand and the US have joined the UK. These latter four countries might therefore be expected to concentrate disproportionately on their lower achievers in future.
  • For maths, there are some clear disparities between relative national ranks in 2006. In the case of Canada and Ireland, the rank is significantly lower for level 5 and above than it is for all students. By 2009, however, these gaps have almost invariably narrowed, perhaps suggesting a degree of responsiveness to PISA 2006, although in some cases it appears that slippage down the overall rankings has influenced matters. Certainly there is no real evidence here that high achievers in maths are particularly neglected compared with their peers, or vice versa.
  • For science, it is clear that in 2006, Australia, New Zealand, the UK and the US were stronger performers, relatively speaking, with their higher achievers – and this is particularly pronounced in the last three of these countries. By 2009, these distinctions continue but are becoming relatively less clear-cut, following a similar pattern to maths.


The analysis above provides some detailed pointers for future support for high achievers, but what overall assessment can we offer for each of our English-speaking countries?


Australia

In PISA 2006, Australia achieved high overall rankings in reading (7) and science (8) and a relatively good ranking in maths (13). It fell two places in the rankings in all three areas in PISA 2009, although its average score increased slightly in reading, was unchanged in science and fell significantly in maths.

In 2006, its ranking for high achievers (levels 5 and 6) was slightly higher than its overall ranking in science, but not in reading or maths. By 2009, this remained true of science and had become true of reading as well.

The percentage of higher achievers (levels 5 and 6) in reading increased significantly between 2006 and 2009, but the equivalent percentages in maths and science remained largely unchanged, except for small improvements among the highest achievers (level 6).

Moving forward, the priorities for Australia are likely to be improvement in maths across the board and probably for relatively low achievers in reading and science.


Canada

PISA 2006 showed Canada achieving very highly overall in reading (4) and science (3) and highly in maths (7). In 2009 it fell two places in reading (6), three places in maths (10) and five places in science (8), although average scores remained unchanged in maths and fell somewhat in science and reading.

Its 2006 rankings for high achievers were significantly lower than its overall ranking in maths and science but identical in reading. In 2009, there was still little difference in the relative rankings for science and reading and now little difference in maths either, although the change in maths is attributable to a fall in overall ranking rather than an improvement for high achievers.

The percentage of higher achievers has declined in reading and in science between 2006 and 2009 but has increased slightly in maths.

Canada has a relatively ‘balanced scorecard’ and will likely continue to focus on improving its results in all three areas and all achievement levels, though maths may be a relatively higher priority.


Ireland

Ireland’s overall rankings from PISA 2006 were high for reading (6) and mid-table for maths (22) and science (20). In PISA 2009 its ranking for science remained unchanged (20) but fell very significantly in maths (32) and especially reading (21). Average scores also fell significantly in maths and reading and were unchanged in science.

The 2006 rankings for higher achievers showed very little difference from overall rankings in science and reading, but somewhat lower relative rankings for high achievers in maths. The position is similar in 2009, with marked slippage down the rankings in reading – and to a lesser extent maths – for higher achievers as well as for all achievers.

The percentage of higher achievers has fallen significantly in maths and reading and slightly in science.

For the future, Ireland will need to reverse its downward trend in maths and reading while not neglecting improvements in science. It needs to focus on all levels of achievement, including its higher achievers.

New Zealand

In PISA 2006, New Zealand achieved a very high overall ranking in reading (5), a high ranking in science (7) and a relatively high ranking in maths (11). In PISA 2009, it slipped two places in reading and maths but retained its position in science. Average scores were unchanged in reading, fell slightly in maths and increased slightly in science.

Rankings for higher achievers in 2006 were significantly higher than overall rankings in science and slightly higher in reading and maths. By 2009 the difference between science rankings had closed somewhat, but this is attributable to slippage in the higher achieving rankings. In maths the position is broadly unchanged, but in reading the relatively higher ranking of the higher achievers is now more pronounced.

In terms of the percentages achieving higher levels, there has been relatively little change in reading, maths or science.

New Zealand is another country with a relatively ‘balanced scorecard’, but its higher achievers seem to be doing relatively well and it may wish to concentrate more on the lower end of the achievement spectrum.


The UK

The UK achieved good to mid-table rankings in PISA 2006 for science (14), reading (17) and maths (24). In PISA 2009 it fell slightly in science (16) and maths (27) and significantly in reading (25). Average scores fell slightly in all three areas.

In 2006, rankings for higher achievers were significantly higher than overall rankings in science and reading, but very similar in maths. This continues to be the case in 2009 with the decline shared across achievement levels.

The percentage achieving higher levels has fallen significantly between 2006 and 2009 in science and maths, and fallen slightly in reading.

The UK has to improve in all three areas, but particularly maths and reading. High achievers must be a priority in maths especially, but effort is required across all levels of achievement to ensure that lower achievers do not improve at the expense of their higher-achieving peers.


The US

The PISA 2006 overall rankings for the US were low to mid-table in science (29) and maths (35). No result was declared for reading because of problems with the administration of the assessment. The PISA 2009 outcomes show that the US has improved its ranking by six places in science (23) and four places in maths (31) while it achieved a ranking of 17 in reading. Average scores increased significantly in both maths and science.

The 2006 rankings for higher achievers were much higher than the overall ranking in science and slightly higher in maths. By 2009, the gap had narrowed in both science and maths. In reading, higher achievers are ranked significantly higher than the overall ranking.

The percentage achieving higher levels is little changed in science between 2006 and 2009 but there is a significant improvement in maths.

The US is moving in broadly the right direction but has to continue to improve in all three areas, especially maths. This evidence suggests that the focus should be predominantly on lower achievers – except in maths where there is a problem across the board – but, as with the UK, care is needed to ensure that higher achievers are not neglected as a result.

The UK and the US are therefore in very similar positions, but whereas the UK needs to arrest a downward trajectory, the US is already moving in the right direction.

There is an agenda for improvement in all these countries, should they choose – as the UK has done – to align their priorities firmly with those assessed by PISA and other international comparison studies.

And this analysis has also shown that there is clear room for improvement in the performance of other world leaders, such as Finland, Hong Kong and Korea: we should take with a big pinch of salt the news headlines that say we need only emulate them to be successful.


December 2010

Building a Federation of UK G&T interests – Learning from New Zealand

The History

In Autumn 2009 I invited England’s Gifted and Talented Stakeholder Group to consider a short paper I had prepared about the potential benefits of closer collaboration between the interests they represented.

I argued that this would be necessary to ensure the survival of a national programme for G&T education given the:

  • imminent end of the contract with CfBT to run the Young Gifted and Talented (YG&T) programme;
  • limited transfer of responsibility from CfBT to the National Strategies;
  • limited scope of National Strategies activity in their final year, culminating in their termination in March 2011; and
  • impending cuts to public expenditure.

The ensuing discussion was predictably disappointing. Many of the stakeholders had become accustomed to – perhaps even dependent on – a ‘top-down’ programme and couldn’t easily visualise the picture of the future that I was painting.

I suppose I had anticipated that it would be too soon for the Group to engage seriously with the issues, but it seemed to me important to plant the seeds of subsequent discussion.

Unknown to me, that discussion began very shortly afterwards during the last few months of 2009.

Recent developments

I first became aware that talks were under way when invited to get involved in April 2010, following my retirement.

I argued for rapid action to establish a national federation or network. This was slightly before a General Election that the Conservatives were expected to win.

Their policy agenda was built around a ‘big society’ concept which involves delegating responsibilities away from Government to the voluntary and third sectors. At least part of the purpose – if undeclared – was to help them to manage the swingeing public expenditure cuts that they were also committed to securing.

I produced a first draft plan for the network – designed to secure initial consensus about its aims and purposes.

I offered to undertake the related development and secretariat work necessary to secure its establishment on a firm footing…only to be asked to stop because some factions were reportedly suspicious of my proximity to the Government. And this on the verge of an Election that was about to usher in an entirely new Government!

It was cowardly of those factions not to discuss their concerns with me face to face. But the situation was also intensely frustrating, as I was convinced that having a network in place as soon as the new Government assumed power could pay major dividends.

It would have allowed us to ‘get in on the ground floor’ in terms of the new Government’s policy agenda for education and ‘the big society’ and to make vital policy connections with other interests while their plans were at the earliest stage of development. It might even have secured a fleeting reference in the forthcoming Schools White Paper.

In May I wrote an article for G&T Update (subscription required) setting out the case for an inclusive ‘G&T coalition’ and outlining some important links to the Coalition Government’s policy agenda.

I ended the article by urging that an entity must be in place by September at the latest, with an agreed 5-year strategy and an outline business plan. That timetable will not now be met and the potential benefits I identified are much less likely to be realised.

The draft proposal

The article was published in July. Meanwhile, an initial open meeting had taken place in June 2010 to discuss the prospects for a network. Progress felt painfully slow: there was lots of talking around the issue, but the only practical outcomes were a draft outline proposal and a commitment to meet again in September.

The draft proposal says:

‘There was broad agreement at the meeting that the establishment of a national group to enhance and promote the profile of GT education is imperative.

GT education is unlikely to be a Government priority in the foreseeable future and impending funding cuts will impact significantly on this policy area. It is only as a unified group of GT education supporters that we will be able to provide a degree of clarity to those seeking support and serve as a pressure group for change at local, national and international levels, by:

  • advocating for equitable educational opportunities for those with high learning potential, including GT students;
  • working pro-actively to raise the profile of the needs of GT learners with a range of stakeholders;
  • working collaboratively to develop policy and delivery models that take account of wider educational change, and helping to secure funding where appropriate;
  • developing a professional community to network, support and learn from each other;
  • encouraging the pursuit and sharing of best practice in GT education;
  • helping ensure that GT education can make a significant contribution to social mobility;
  • engaging in practical research that sets out to demonstrate the value of focusing on GT provision.

As far as possible, the group will undertake these activities without compromising the autonomy, influence and income-generating capacity of its members.’

The international dimension

I was responsible for the introduction of the word ‘international’, not least because such a network could have an important role in supporting Hungary’s plans for an EU initiative – a welcome development that I have covered in a previous post.

I also suggested the final sentence.

I should mention in passing that discussion at the meeting was confined largely to England. An important future consideration is whether we can and should create a UK-wide network taking full account of the interests of Northern Ireland, Scotland and Wales. Maybe Eire would also like to be affiliated.

We can learn much from giftEDnz, a similar coalition of interests established in New Zealand.

giftEDnz is impressive in many ways. It successfully attracted start-up and development funding from the Todd Foundation ($NZ 15,900 and $NZ 47,130 respectively). It has an established constitution, a website and a newsletter.

It is piloting special interest groups (using the second tranche of Todd Foundation funding). It has already hosted a mini-conference and is working towards its first major event in 2011.

But the New Zealand organisation has one major weakness – it is not fully inclusive. By confining itself to professional interests and not including the New Zealand Association for Gifted Children (NZAGC) it is potentially missing a trick.

I am clear from discussion with the chair of giftEDnz that this is all perfectly amicable and that her organisation enjoys a close relationship with NZAGC.

But I can see no reason for the UK to follow the same path.

A professional network or an inclusive network?

As I write, the draft proposal is circulating with the following proposed title and strapline:

G&T – One Voice

the national professional community
for the support and nurture
of gifted and talented young people,
and their families and educators

One doesn’t need a Nobel Prize to spot the contradiction in this statement, emphasised as it is by the italicisation of the word ‘professional’.

I for one shall be arguing strongly against such an exclusive approach when we meet again in September.

One fundamental purpose of the network is to bring all parties to the table in an inclusive fashion. No-one’s interests are served by excluding parents, carers and learners from proceedings.

Such exclusivity means that key topics such as parental engagement and student voice will be addressed from a narrow professional perspective. It also runs directly counter to the Government’s direction of travel in encouraging groups of parents to establish their own free schools.

Were I a betting man, I would lay a wager that this emphasis originates with…

…Well perhaps I won’t name them, for the time being at least. Discretion is the better part of valour – and I want to give them an opportunity to prove me wrong.

For now I will confine myself to making three cautious observations of a general nature:

  • Firstly, there are players in UK G&T education that take considerable pride in their professional credentials. In some cases there are widely divergent views as to whether such pride is justified by the quality of output and the capacity to improve provision. When some of the most positive statements emanate from the entity itself, that tends to indicate a degree of insecurity rather than full confidence in its own performance;
  • Secondly, anyone bringing a ‘not invented here’ mentality to future discussions will sabotage our best efforts to secure a full and effective partnership between G&T interests in this country. That would not be in the best interests of our gifted and talented learners, even if some believe that it would better serve the needs of their educators;
  • Thirdly, by the same token, anyone susceptible to that mentality will need to be thoroughly confident of their capacity to ‘go it alone’, potentially in head-to-head competition with a coalition of all the other interests in the field. They may be wise to adopt a ‘wait and see’ stance, reserving their position until they can judge more accurately whether or not the network is likely to be successful.

Let’s wait and see what happens.

For the avoidance of doubt, these are my personal views and not those of any organisation with which I am associated.


July 2010