‘Poor but Bright’ v ‘Poor but Dim’

 

This post explores whether, in supporting learners from disadvantaged backgrounds, educators should prioritise low attainers over high attainers, or give them equal priority.

Introduction

Last week I took umbrage at a blog post and found myself engaged in a Twitter discussion with the author, one Mr Thomas.

Put crudely, the discussion hinged on the question whether the educational needs of ‘poor but dim’ learners should take precedence over those of the ‘poor but bright’. (This is Mr Thomas’s shorthand, not mine.)

He argued that the ‘poor but dim’ are the higher priority; I countered that all poor learners should have equal priority, regardless of their ability and prior attainment.

We began to explore the issue:

  • as a matter of educational policy and principle
  • with reference to inputs – the allocation of financial and human resources between these competing priorities and
  • in terms of outcomes – the comparative benefits to the economy and to society from investment at the top or the bottom of the attainment spectrum.

This post presents the discussion, adding more flesh and gloss from the Gifted Phoenix perspective.

It might or might not stimulate some interest in how this slightly different take on a rather hoary old chestnut plays out in England’s current educational landscape.

But I am particularly interested in how gifted advocates in different countries respond to these arguments. What is the consensus, if any, on the core issue?

Depending on the answer to this first question, how should gifted advocates frame the argument for educationalists and the wider public?

To help answer the first question I have included a poll at the end of the post.

Do please respond to that – and feel free to discuss the second question in the comments section below.

The structure of the post is fairly complex, comprising:

  • A (hopefully objective) summary of Mr Thomas’s original post.
  • An embedded version of the substance of our Twitter conversation. I have removed some Tweets – mostly those from third parties – and reordered a little to make this more accessible. I don’t believe I’ve done any significant damage to either case.
  • Some definition of terms, because there is otherwise much cause for confusion as we push further into the debate.
  • A digressionary exploration of the evidence base, dealing with attainment data and budget allocations respectively. The former exposes what little we are told about how socio-economic gaps vary across the attainment spectrum; the latter is relevant to the discussion of inputs. Those pressed for time may wish to proceed directly to…
  • …A summing up, which expands in turn the key points we exchanged on the point of principle, on inputs and on outcomes respectively.

I have reserved until close to the end a few personal observations about the encounter and how it made me feel.

And I conclude with the customary brief summary of key points and the aforementioned poll.

It is an ambitious piece and I am in two minds as to whether it hangs together properly, but you are ultimately the judges of that.

 

What Mr Thomas Blogged

The post was called ‘The Romance of the Poor but Bright’ and the substance of the argument (incorporating several key quotations) ran like this:

  • The ‘effort and resources, of schools but particularly of business and charitable enterprise, are directed disproportionately at those who are already high achieving – the poor but bright’.
  • Moreover ‘huge effort is expended on access to the top universities, with great sums being spent to make marginal improvements to a small set of students at the top of the disadvantaged spectrum. They cite the gap in entry, often to Oxbridge, as a significant problem that blights our society.’
  • This however is ‘the pretty face of the problem. The far uglier face is the gap in life outcomes for those who take least well to education.’
  • Popular discourse is easily caught up in ‘the romance of the poor but bright’ but ‘we end up ignoring the more pressing problem – of students for whom our efforts will determine whether they ever get a job or contribute to society’. For ‘when did you last hear someone advocate for the poor but dim?’
  • ‘The gap most damaging to society is in life outcomes for the children who perform least well at school.’ Three areas should be prioritised to improve their educational outcomes:

    • Improving alternative provision (AP), which ‘operates as a shadow school system, largely unknown and wholly unappreciated’ – ‘developing a national network of high-quality alternative provision…must be a priority if we are to close the gap at the bottom’.

    • Improving ‘consistency in SEN support’, because ‘schools are often ill equipped to cope with these, and often manage only because of the extraordinary effort of dedicated staff’. There is ‘inconsistency in funding and support between local authorities’.

    • Introducing clearer assessment of basic skills, ‘so that a student could not appear to be performing well unless they have mastered the basics’.

  • While ‘any student failing to meet their potential is a dreadful thing’, the educational successes of ‘students with incredibly challenging behaviour’ and ‘complex special needs…have the power to change the British economy, far more so than those of their brighter peers.’

A footnote adds ‘I do not believe in either bright or dim, only differences in epigenetic coding or accumulated lifetime practice, but that is a discussion for another day.’

Indeed it is.

 

Our ensuing Twitter discussion

The substance of our Twitter discussion is captured in the embedded version immediately below. (Scroll down to the bottom for the beginning and work your way back to the top.)

Defining Terms

 

Poor, Bright and Dim

I take poor to mean socio-economic disadvantage, as opposed to any disadvantage attributable to the behaviours, difficulties, needs, impairments or disabilities associated with AP and/or SEN.

I recognise of course that such a distinction is more theoretical than practical, because, when learners experience multiple causes of disadvantage, the educational response must be holistic rather than disaggregated.

Nevertheless, the meaning of ‘poor’ is clear – that term cannot be stretched to include these additional dimensions of disadvantage.

The available performance data foregrounds two measures of socio-economic disadvantage: current eligibility for and take up of free school meals (FSM) and qualification for the deprivation element of the Pupil Premium, determined by FSM eligibility at some point within the last 6 years (known as ‘ever-6’).

Both are used in this post. Distinctions are typically drawn between disadvantaged and non-disadvantaged learners, though some of the supporting data compares outcomes for disadvantaged learners with outcomes for all learners, advantaged and disadvantaged alike.

The gaps that need closing are therefore:

  • between ‘poor and bright’ and other ‘bright’ learners (The Excellence Gap) and 
  • between ‘poor and dim’ and other ‘dim’ learners. I will christen this The Foundation Gap.

The core question is whether The Foundation Gap takes precedence over The Excellence Gap or vice versa, or whether they should have equal billing.

This involves immediate and overt recognition that classification as AP and/or SEN is not synonymous with the epithet ‘poor’, because there are many comparatively advantaged learners within these populations.

But such a distinction is not properly established in Mr Thomas’ blog, which applies the epithet ‘poor’ but then treats the AP and SEN populations as homogenous and somehow associated with it.

 

By ‘dim’ I take Mr Thomas to mean the lowest segment of the attainment distribution – one of his tweets specifically mentions ‘the bottom 20%’. The AP and/or SEN populations are likely to be disproportionately represented within these bottom two deciles, but they are not synonymous with that group either.

This distinction will not be lost on gifted advocates who are only too familiar with the very limited attention paid to twice exceptional learners.

Those from poor backgrounds within the AP and/or SEN populations are even more likely to be disproportionately represented in ‘the bottom 20%’ than their more advantaged peers, but even they will not constitute the entirety of ‘the bottom 20%’. A Venn diagram would likely show significant overlap, but that is all.

Hence disadvantaged AP/SEN learners are almost certainly a relatively poor proxy for the ‘poor but dim’.

That said I could find no data that quantifies these relationships.

The School Performance Tables distinguish a ‘low attainer’ cohort. (In the Secondary Tables the definition is determined by prior KS2 attainment and in the Primary Tables by prior KS1 attainment.)

These populations comprise some 15.7% of the total population in the Secondary Tables and about 18.0% in the Primary Tables. But neither set of Tables applies the distinction in their reporting of the attainment of those from disadvantaged backgrounds.

 

It follows from the definition of ‘dim’ that, by ‘bright’, Mr Thomas probably intends the two corresponding deciles at the top of the attainment distribution (even though he seems most exercised about the subset with the capacity to progress to competitive universities, particularly Oxford and Cambridge. This is a far more select group of exceptionally high attainers – and an even smaller group of exceptionally high attainers from disadvantaged backgrounds.)

A few AP and/or SEN students will likely fall within this wider group, fewer still within the subset of exceptionally high attainers. AP and/or SEN students from disadvantaged backgrounds will be fewer again, if indeed there are any at all.

The same issues with data apply. The School Performance Tables distinguish ‘high attainers’, who constitute over 32% of the secondary cohort and 25% of the primary cohort. As with low attainers, we cannot isolate the performance of those from disadvantaged backgrounds.

We are forced to rely on what limited data is made publicly available to distinguish the performance of disadvantaged low and high attainers.

At the top of the distribution there is a trickle of evidence about performance on specific high attainment measures and access to the most competitive universities. Still greater transparency is fervently to be desired.

At the bottom, I can find very little relevant data at all – we are driven inexorably towards analyses of the SEN population, because that is the only dataset differentiated by disadvantage, even though we have acknowledged that such a proxy is highly misleading. (Equivalent AP attainment data seems conspicuous by its absence.)

 

AP and SEN

Before exploring these datasets I ought to provide some description of the different programmes and support under discussion here, if only for the benefit of readers who are unfamiliar with the English education system.

 

Alternative Provision (AP) is intended to meet the needs of a variety of vulnerable learners:

‘They include pupils who have been excluded or who cannot attend mainstream school for other reasons: for example, children with behaviour issues, those who have short- or long-term illness, school phobics, teenage mothers, pregnant teenagers, or pupils without a school place.’

AP is provided in a variety of settings where learners engage in timetabled education activities away from their school and school staff.

Providers include further education colleges, charities, businesses, independent schools and the public sector. Pupil Referral Units (PRUs) are perhaps the best-known settings – there are some 400 nationally.

A review of AP was undertaken by Taylor in 2012 and the Government subsequently embarked on a substantive improvement programme. This rather gives the lie to Mr Thomas’ contention that AP is ‘largely unknown and wholly unappreciated’.

Taylor complains of a lack of reliable data about the number of learners in AP but notes that the DfE’s 2011 AP census recorded 14,050 pupils in PRUs and a further 23,020 in other settings on a mixture of full-time and part-time placements. This suggests a total of slightly over 37,000 learners, though the FTE figure is unknown.

He states that AP learners are:

‘…twice as likely as the average pupil to qualify for free school meals’

A supporting Equality Impact Assessment qualifies this somewhat:

‘In Jan 2011, 34.6% of pupils in PRUs and 13.8%* of pupils in other AP, were eligible for and claiming free school meals, compared with 14.6% of pupils in secondary schools. [*Note: in some AP settings, free school meals would not be available, so that figure is under-stated, but we cannot say by how much.]’

If the PRU population is typical of the wider AP population, approximately one third qualify under this FSM measure of disadvantage, meaning that the substantial majority are not ‘poor’ according to our definition above.

Taylor confirms that overall GCSE performance in AP is extremely low, pointing out that in 2011 just 1.4% achieved five or more GCSE grades A*-C including [GCSEs in] maths and English, compared to 53.4% of pupils in all schools.

By 2012/13 the comparable percentages were 1.7% and 61.7% respectively (the latter for all state-funded schools), suggesting an increasing gap in overall performance. This is a cause for concern but not directly relevant to the issue under consideration.

The huge disparity is at least partly explained by the facts that many AP students take alternative qualifications and that the national curriculum does not apply to PRUs.

Data is available showing the full range of qualifications pursued. Taylor recommended that all students in AP should continue to receive ‘appropriate and challenging English and Maths teaching’.

Interestingly, he also pointed out that:

‘In some PRUs and AP there is no provision for more able pupils who end up leaving without the GCSE grades they are capable of earning.’

However, he fails to offer a specific recommendation to address this point.

 

Special Educational Needs (SEN) are needs or disabilities that affect children’s ability to learn. These may include behavioural and social difficulties, learning difficulties or physical impairments.

This area has also been subject to a major Government reform programme now being implemented.

There is significant overlap between AP and SEN, with Taylor’s review of the former noting that the population in PRUs is 79% SEN.

We know from the 2013 SEN statistics that 12.6% of all pupils on roll at PRUs had SEN statements and 68.9% had SEN without statements. But these populations represent only a tiny proportion of the total SEN population in schools.

SEN learners also have higher than typical eligibility for FSM. In January 2013, 30.1% of all SEN categories across all primary, secondary and special schools were FSM-eligible, roughly twice the rate for all pupils. However, this means that almost seven in ten are not caught by the definition of ‘poor’ provided above.

In 2012/13 23.4% of all SEN learners achieved five or more GCSEs at A*-C or equivalent, including GCSEs in English and maths, compared with 70.4% of those having no identified SEN – another significant overall gap, but not directly relevant to our comparison of the ‘poor but bright’ and the ‘poor but dim’.

 

Data on socio-economic attainment gaps across the attainment spectrum

Those interested in how socio-economic attainment gaps vary at different attainment levels cannot fail to be struck by how little material of this kind is published, particularly in the secondary sector, where such gaps tend to increase in size.

One cannot entirely escape the conviction that this reticence deliberately masks some inconvenient truths.

  • The ideal would be to have the established high/middle/low attainer distinctions mapped directly onto performance by advantaged/disadvantaged learners in the Performance Tables but, as we have indicated, this material is conspicuous by its absence. Perhaps it will appear in the Data Portal now under development.
  • Our next best option is to examine socio-economic attainment gaps on specific attainment measures that will serve as decent proxies for high/middle/low attainment. We can do this to some extent but the focus is disproportionately on the primary sector because the Secondary Tables do not include proper high attainment measures (such as measures based exclusively on GCSE performance at grades A*/A). Maybe the Portal will come to the rescue here as well. We can however supply some basic Oxbridge fair access data.
  • The least preferable option is to deploy our admittedly poor proxies for low attainers – SEN and AP. But there isn’t much information from this source either.

The analysis below looks consecutively at data for the primary and secondary sectors.

 

Primary

We know, from the 2013 Primary School Performance Tables, that the percentages of disadvantaged and other learners achieving different KS2 levels in reading, writing and maths combined, in 2013 and 2012 respectively, were as follows:

 

Table 1: Percentage of disadvantaged and all other learners achieving each national curriculum level at KS2 in 2013 in reading, writing and maths combined

              L3 or below        L4 or above        L4B or above       L5 or above
              Dis  Oth  Gap      Dis  Oth  Gap      Dis  Oth  Gap      Dis  Oth  Gap
2013           13    5   +8       63   81  -18       49   69  -20       10   26  -16
2012            x    x    x       61   80  -19        x    x    x        9   24  -15

 

This tells us relatively little, apart from the fact that disadvantaged learners are heavily over-represented at L3 and below and heavily under-represented at L5 and above.

The L5 gap is somewhat lower than the gaps at L4 and 4B respectively, but not markedly so. However, the L5 gap has widened slightly since 2012 while the reverse is true at L4.

This next table synthesises data from SFR51/13: ‘National curriculum assessments at key stage 2: 2012 to 2013’. It also shows gaps for disadvantage, as opposed to FSM gaps.

 

Table 2: Percentage of disadvantaged and all other learners achieving each national curriculum level, including differentiation by gender, in each 2013 end of KS2 test

                   L3               L4               L4B              L5               L6
                   D   O   Gap      D   O   Gap      D   O   Gap      D   O   Gap      D   O   Gap
Reading   All     12   6   +6      48  38  +10      63  80  -17      30  50  -20      0   1   -1
          B       13   7   +6      47  40   +7      59  77  -18      27  47  -20      0   0    0
          G       11   5   +6      48  37  +11      67  83  -16      33  54  -21      0   1   -1
GPS       All     28  17  +11      28  25   +3      52  70  -18      33  51  -18      1   2   -1
          B       30  20  +10      27  27    0      45  65  -20      28  46  -18      0   2   -2
          G       24  13  +11      28  24   +4      58  76  -18      39  57  -18      1   3   -2
Maths     All     16   9   +7      50  41   +9      62  78  -16      24  39  -15      2   8   -6
          B       15   8   +7      48  39   +9      63  79  -16      26  39  -13      3  10   -7
          G       17   9   +8      52  44   +8      61  78  -17      23  38  -15      2   7   -5

 

This tells a relatively consistent story across each test and for boys as well as girls.

We can see that, at Level 4 and below, learners from disadvantaged backgrounds are clearly over-represented, perhaps with the exception of L4 GPS. But at L4B and above they are markedly under-represented.

Moreover, with the exception of L6 where low percentages across the board mask the true size of the gaps, disadvantaged learners tend to be significantly more under-represented at L4B and above than they are over-represented at L4 and below.

A different way of looking at this data is to compare the percentages of advantaged and disadvantaged learners respectively at L4 and L5 in each assessment.

  • Reading: amongst disadvantaged learners the proportion at L5 is 18 percentage points lower than the proportion at L4, but amongst advantaged learners the proportion at L5 is 12 percentage points higher than at L4.
  • GPS: amongst disadvantaged learners the proportion at L5 is 5 percentage points higher than the proportion at L4, but amongst advantaged learners it is 26 percentage points higher.
  • Maths: amongst disadvantaged learners the proportion at L5 is 26 percentage points lower than the proportion at L4, but amongst advantaged learners it is only 2 percentage points lower. (A minimal sketch reproducing these calculations from Table 2 follows below.)
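
For readers who want to check the arithmetic, here is a short sketch in Python – the data structure and variable names are mine, not part of the original analysis – that reproduces these within-group differences from the ‘All’ rows of Table 2:

```python
# Within-group change in the share of pupils at L5 rather than L4,
# taken from the 'All' rows of Table 2 (values are percentages).
table2_all = {
    # subject: {level: (disadvantaged %, other %)}
    "Reading": {"L4": (48, 38), "L5": (30, 50)},
    "GPS":     {"L4": (28, 25), "L5": (33, 51)},
    "Maths":   {"L4": (50, 41), "L5": (24, 39)},
}

for subject, levels in table2_all.items():
    d_change = levels["L5"][0] - levels["L4"][0]  # disadvantaged: L5 share minus L4 share
    o_change = levels["L5"][1] - levels["L4"][1]  # other: L5 share minus L4 share
    print(f"{subject}: disadvantaged {d_change:+d}pp, other {o_change:+d}pp")
```

Running it prints -18/+12 for reading, +5/+26 for GPS and -26/-2 for maths, matching the bullets above.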

If we look at 2013 gaps compared with 2012 (with teacher assessment of writing included in place of the GPS test introduced in 2013) we can see there has been relatively little change across the board, with the exception of L5 maths, which has been affected by the increasing success of advantaged learners at L6.

 

Table 3: Percentage of disadvantaged and all other learners achieving national curriculum levels 3-6 in each of reading, writing and maths in 2012 and 2013 respectively

                   L3               L4               L5               L6
                   D   O   Gap      D   O   Gap      D   O   Gap      D   O   Gap
Reading   2012    11   6   +5      46  36  +10      33  54  -21      0   0    0
          2013    12   6   +6      48  38  +10      30  50  -20      0   1   -1
Writing   2012    22  11  +11      55  52   +3      15  32  -17      0   1   -1
          2013    19  10   +9      56  52   +4      17  34  -17      1   2   -1
Maths     2012    17   9   +8      50  41   +9      23  43  -20      1   4   -3
          2013    16   9   +7      50  41   +9      24  39  -15      2   8   -6

 

To summarise, as far as KS2 performance is concerned, there are significant imbalances at both the top and the bottom of the attainment distribution and these gaps have not changed significantly since 2012. There is some evidence to suggest that gaps at the top are larger than those at the bottom.

 

Secondary

Unfortunately there is a dearth of comparable data at secondary level, principally because of the absence of published measures of high attainment.

SFR05/2014 provides us with FSM gaps (as opposed to disadvantaged gaps) for a series of GCSE measures, none of which serve our purpose particularly well:

  • 5+ A*-C GCSE grades: gap = 16.0%
  • 5+ A*-C grades including English and maths GCSEs: gap = 26.7%
  • 5+ A*-G grades: gap = 7.6%
  • 5+ A*-G grades including English and maths GCSEs: gap = 9.9%
  • A*-C grades in English and maths GCSEs: gap = 26.6%
  • Achieving the English Baccalaureate: gap = 16.4%

Perhaps all we can deduce is that the gaps vary considerably in size, but tend to be smaller for the relatively less demanding and larger for the relatively more demanding measures.

For specific high attainment measures we are forced to rely principally on data snippets released in answer to occasional Parliamentary Questions.

For example:

  • In 2003, 1.0% of FSM-eligible learners achieved five or more GCSEs at A*/A including English and maths but excluding equivalents, compared with 6.8% of those not eligible, giving a gap of 5.8%. By 2009 the comparable percentages were 1.7% and 9.0% respectively, giving an increased gap of 7.3% (Col 568W)
  • In 2006/07, the percentage of FSM-eligible pupils securing A*/A grades at GCSE in different subjects, compared with the percentage of all pupils in maintained schools doing so were as shown in the table below (Col 808W)

 

Table 4: Percentage of FSM-eligible and all pupils achieving GCSE A*/A grades in different GCSE subjects in 2007

            FSM    All pupils    Gap
Maths        3.7      15.6      11.9
Eng lit      4.1      20.0      15.9
Eng lang     3.5      16.4      12.9
Physics      2.2      49.0      46.8
Chemistry    2.5      48.4      45.9
Biology      2.5      46.8      44.3
French       3.5      22.9      19.4
German       2.8      23.2      20.4

 

  • In 2008, 1% of FSM-eligible learners in maintained schools achieved A* in GCSE maths compared with 4% of all pupils in maintained schools. The comparable percentages for Grade A were 3% and 10% respectively, giving an A*/A gap of 10% (Col 488W)

There is much variation in the subject-specific outcomes at A*/A described above. But, when it comes to the overall 5+ GCSEs high attainment measure based on grades A*/A, the gap is much smaller than on the corresponding standard measure based on grades A*-C.

Further material of broadly the same vintage is available in a 2007 DfE statistical publication: ‘The Characteristics of High Attainers’.

There is a complex pattern in evidence here which is very hard to explain with the limited data available. More time series data of this nature – illustrating Excellence and Foundation Gaps alike – should be published annually so that we have a more complete and much more readily accessible dataset.

I could find no information at all about the comparative performance of disadvantaged learners in AP settings compared with those not from disadvantaged backgrounds.

Data is published showing the FSM gap for SEN learners on all the basic GCSE measures listed above. I have retained the generic FSM gaps in brackets for the sake of comparison:

  • 5+ A*-C GCSE grades: gap = 12.5% (16.0%)
  • 5+ A*-C grades including English and maths GCSEs: gap = 12.1% (26.7%)
  • 5+ A*-G grades: gap = 10.4% (7.6%)
  • 5+ A*-G grades including English and maths GCSEs: gap = 13.2% (9.9%)
  • A*-C grades in English and maths GCSEs: gap = 12.3% (26.6%)
  • Achieving the English Baccalaureate: gap = 3.5% (16.4%)

One can see that the FSM gaps for the more demanding measures are generally lower for SEN learners than they are for all learners. This may be interesting but, for the reasons given above, this is not a reliable proxy for the FSM gap amongst ‘dim’ learners.

 

When it comes to fair access to Oxbridge, I provided a close analysis of much relevant data in this post from November 2013.

The chart below shows the number of learners eligible for and claiming FSM at age 15 who progressed to Oxford or Cambridge by age 19. The figures are rounded to the nearest five.

 

Chart 1: FSM-eligible learners admitted to Oxford and Cambridge 2005/06 to 2010/11

[Chart image not reproduced.]

 

In sum, there has been no change in these numbers over the last six years for which data has been published. So while there may have been consistently significant expenditure on access agreements and multiple smaller mentoring programmes, it has had negligible impact on this measure at least.

My previous post set out a proposal for what to do about this sorry state of affairs.

 

Budgets

For the purposes of this discussion we need ideally to identify and compare total national budgets for the ‘poor but bright’ and the ‘poor but dim’. But that is simply not possible. 

Many funding streams cannot be disaggregated in this manner. As we have seen, some – including the AP and SEN budgets – may be aligned erroneously with the second of these groups, although they also support learners who are neither ‘poor’ nor ‘dim’ and have a broader purpose than raising attainment. 

There may be some debate, too, about which funding streams should be weighed in the balance.

On the ‘bright but poor’ side, do we include funding for grammar schools, even though the percentage of disadvantaged learners attending many of them is virtually negligible (despite recent suggestions that some are now prepared to do something about this)? Should the Music and Dance Scheme (MDS) be within scope of this calculation?

The best I can offer is a commentary that gives a broad sense of orders of magnitude, to illustrate in very approximate terms how the scales tend to tilt towards the ‘poor but dim’ rather than the ‘poor but bright’, but also to weave in a few relevant asides about some of the funding streams in question.

 

Pupil Premium and the EEF

I begin with the Pupil Premium – providing schools with additional funding to raise the attainment of disadvantaged learners.

The Premium is not attached to the learners who qualify for it, so schools are free to aggregate the funding and use it as they see fit. They are held accountable for these decisions through Ofsted inspection and the gap-narrowing measures in the Performance Tables.

Mr Thomas suggests in our Twitter discussion that AP students are not significant beneficiaries of such support, although provision in PRUs features prominently in the published evaluation of the Premium. It is for local authorities to determine how Pupil Premium funding is allocated in AP settings.

One might also make a case that ‘bright but poor’ learners are not a priority either, despite suggestions from the Pupil Premium Champion to the contrary.

As we have seen, the Performance Tables are not sharply enough focused on the excellence gaps at the top of the distribution and I have shown elsewhere that Ofsted’s increased focus on the most able does not yet extend to the impact on those attracting the Pupil Premium, even though there was a commitment that it would do so. 

If there is Pupil Premium funding heading towards high attainers from disadvantaged backgrounds, the limited data to which we have access does not yet suggest a significant impact on the size of Excellence Gaps. 

The ‘poor but bright’ are not a priority for the Education Endowment Foundation (EEF) either.

This 2011 paper explains that it is prioritising the performance of disadvantaged learners in schools below the floor targets. At one point it says:

‘Looking at the full range of GCSE results (as opposed to just the proportions who achieve the expected standards) shows that the challenge facing the EEF is complex – it is not simply a question of taking pupils from D to C (the expected level of attainment). Improving results across the spectrum of attainment will mean helping talented pupils to achieve top grades, while at the same time raising standards amongst pupils currently struggling to pass.’

But this is just after it has shown that the percentages of disadvantaged high attainers in its target schools are significantly lower than elsewhere. Other things being equal, the ‘poor but dim’ will be the prime beneficiaries.

It may now be time for the EEF to expand its focus to all schools. A diagram from this paper – reproduced below – demonstrates that, in 2010, the attainment gap between FSM and non-FSM was significantly larger in schools above the floor than in those below the floor that the EEF is prioritising. This is true in both the primary and secondary sectors.

It would be interesting to see whether this is still the case.

[EEF diagram not reproduced.]

AP and SEN

Given the disaggregation problems discussed above, this section is intended simply to give some basic sense of orders of magnitude – lending at least some evidence to counter Mr Thomas’ assertion that the ‘effort and resources, of schools… are directed disproportionately at those who are already high achieving – the poor but bright’.

It is surprisingly hard to get a grip on the overall national budget for AP. A PQ from early 2011 (Col 75W) supplies a net current expenditure figure for all English local authorities of £530m.

Taylor’s Review fails to offer a comparable figure but my rough estimates, based on the per pupil costs he supplies, suggest a revenue budget of at least £400m. (Taylor suggests average per pupil costs of £9,500 per year for full-time AP, although PRU places are said to cost between £12,000 and £18,000 per annum.)
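
To show roughly where an estimate of that order comes from, here is an illustrative back-of-envelope calculation using only the figures already quoted; the full-time-equivalent count is unknown, so these are my assumptions rather than Taylor’s own arithmetic:

```python
# Illustrative reconstruction of an order-of-magnitude AP revenue estimate,
# from the 2011 AP census headcounts and the per-pupil costs quoted above.
pru_pupils, other_ap_pupils = 14_050, 23_020   # 2011 AP census headcounts
flat_rate = 9_500                              # average full-time AP cost (GBP/year)

flat_estimate = (pru_pupils + other_ap_pupils) * flat_rate
print(f"Flat-rate estimate: ~GBP {flat_estimate / 1e6:.0f}m")     # ~GBP 352m

# Costing PRU places at the midpoint of their quoted GBP 12,000-18,000 range:
blended_estimate = pru_pupils * 15_000 + other_ap_pupils * flat_rate
print(f"Blended estimate:   ~GBP {blended_estimate / 1e6:.0f}m")  # ~GBP 429m
```

Given that the headcount mixes full- and part-time placements, this is only indicative, but it supports a figure of broadly the order of £400m.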

I found online a consultation document from Kent – England’s largest local authority – stating its revenue costs at over £11m in FY2014-15. Approximately 454 pupils attended Kent’s AP/PRU provision in 2012-13.

There must also be a significant capital budget. There are around 400 PRUs, not to mention a growing cadre of specialist AP academies and free schools. The total capital cost of the first AP free school – Derby Pride Academy – was £2.147m for a 50 place setting.

In FY2011-12, total annual national expenditure on SEN was £5.77 billion (Col 391W). There will have been some cost-cutting as a consequence of the latest reforms, but the order of magnitude is clear.

The latest version of the SEN Code of Practice outlines the panoply of support available, including the compulsory requirement that each school has a designated teacher to be responsible for co-ordinating SEN provision (the SENCO).

In short, the national budget for AP is sizeable and the national budget for SEN is huge. Per capita expenditure is correspondingly high. If we could isolate the proportion of these budgets allocated to raising the attainment of the ‘poor but dim’, the total would be substantial.

 

Fair Access, especially to Oxbridge, and some related observations

Mr Thomas refers specifically to funding to support fair access to universities – especially Oxbridge – for those from disadvantaged backgrounds. This is another area in which it is hard to get a grasp on total expenditure, not least because of the many small-scale mentoring projects that exist.

Mr Thomas is quite correct to remark on the sheer number of these, although they are relatively small beer in budgetary terms. (One suspects that they would be much more efficient and effective if they could be linked together within some sort of overarching framework.)

The Office for Fair Access (OFFA) estimates University access agreement expenditure on outreach in 2014-15 at £111.9m and this has to be factored in, as does DfE’s own small contribution – the Future Scholar Awards.

Were any expenditure in this territory to be criticised, it would surely be the development and capital costs for new selective 16-19 academies and free schools that specifically give priority to disadvantaged students.

The sums are large, perhaps not outstandingly so compared with national expenditure on SEN for example, but they will almost certainly benefit only a tiny localised proportion of the ‘bright but poor’ population.

There are several such projects around the country. Some of the most prominent are located in London.

The London Academy of Excellence (capacity 420) is fairly typical. It cost an initial £4.7m to establish, plus a lease requiring a further £400K annually.

But this is dwarfed by the projected costs of the Harris Westminster Sixth Form, scheduled to open in September 2014. Housed in a former government building, the capital cost is reputed to be £45m for a 500-place institution.

There were reportedly disagreements within Government:

‘It is understood that the £45m cost was subject to a “significant difference of opinion” within the DfE where critics say that by concentrating large resources on the brightest children at a time when budgets are constrained means other children might miss out…

But a spokeswoman for the DfE robustly defended the plans tonight. “This is an inspirational collaboration between the country’s top academy chain and one of the best private schools in the country,” she said. “It will give hundreds of children from low income families across London the kind of top quality sixth-form previously reserved for the better off.”’

Here we have in microcosm the debate to which this post is dedicated.

One blogger – a London College Principal – pointed out that the real issue was not whether the brightest should benefit over others, but how few of the ‘poor but bright’ would do so:

‘£45m could have a transformative effect on thousands of 16-19 year olds across London… £45m could have funded at least 50 extra places in each college for over 10 years, helped build excellent new facilities for all students and created a city-wide network to support gifted and talented students in sixth forms across the capital working with our partner universities and employers.’

 

Summing Up

There are three main elements to the discussion: the point of principle, the inputs and the impact. The following sections deal with each of these in turn.

 

Principle

Put bluntly, should ‘poor but dim’ kids have higher priority for educators than ‘poor but bright’ kids (Mr Thomas’ position) or should all poor kids have equal priority and an equal right to the support they need to achieve their best (the Gifted Phoenix position)?

For Mr Thomas, it seems this priority is determined by whether – and how far – the learner is behind undefined ‘basic levels of attainment’ and/or mastery of ‘the basics’ (presumably literacy and numeracy).

Those below the basic attainment threshold have higher priority than those above it. He does not say so but this logic suggests that those furthest below the threshold are the highest priority and those furthest above are the lowest.

So, pursued to its logical conclusion, this would mean that the highest attainers would get next to no support while a human vegetable would be the highest priority of all.

However, since Mr Thomas’ focus is on marginal benefit, it may be that those nearest the threshold would be first in the queue for scarce resources, because they would require the least effort and resources to lift above it.

This philosophy drives the emphasis on achievement of national benchmarks and predominant focus on borderline candidates that, until recently, dominated our assessment and accountability system.

For Gifted Phoenix, every socio-economically disadvantaged learner has an equal claim to the support they need to improve their attainment, by virtue of that disadvantage.

There is no question of elevating some ahead of others in the pecking order because they are further behind on key educational measures since, in effect, that is penalising some disadvantaged learners on the grounds of their ability or, more accurately, their prior attainment.

This philosophy underpins the notion of personalised education and is driving the recent and welcome reforms of the assessment and accountability system, designed to ensure that schools are judged by how well they improve the attainment of all learners, rather than predominantly on the basis of the proportion achieving the standard national benchmarks.

I suggested that, in deriding ‘the romance of the poor but bright’, Mr Thomas ran the risk of falling into ‘the slough of anti-elitism’. He rejected that suggestion, while continuing to emphasise the need to ‘concentrate more’ on ‘those at risk of never being able to engage with society’.

I have made the assumption that Mr Thomas is interested primarily in KS2 and in GCSE or equivalent qualifications at KS4, given his references to KS2 L4, basic skills and ‘paper qualifications needed to enter meaningful employment’.

But his additional references to ‘real qualifications’ (as opposed to paper ones) and engaging with society could well imply a wider range of personal, social and work-related skills for employability and adult life.

My preference for equal priority would apply regardless: there is no guarantee that high attainers from disadvantaged backgrounds will necessarily possess these vital skills.

But, as indicated in the definition above, there is an important distinction to be maintained between:

  • educational support to raise the attainment, learning and employability skills of socio-economically disadvantaged learners and prepare them for adult life and
  • support to manage a range of difficulties – whether behavioural problems, disability, physical or mental impairment – that impact on the broader life chances of the individuals concerned.

Such a distinction may well be masked in the everyday business of providing effective holistic support for learners facing such difficulties, but this debate requires it to be made and sustained given Mr Thomas’s definition of the problem in terms of the comparative treatment of the ‘poor but bright’ and the ‘poor but dim’.

Having made this distinction, it is not clear whether he himself sustains it consistently through to the end of his post. In the final paragraphs the term ‘poor but dim’ begins to morph into a broader notion encompassing all AP and SEN learners regardless of their socio-economic status.

Additional dimensions of disadvantage are potentially being brought into play. This is inconsistent and radically changes the nature of the argument.

 

Inputs

By inputs I mean the resources – financial and human – made available to support the education of ‘dim’ and ‘bright’ disadvantaged learners respectively.

Mr Thomas also shifts his ground as far as inputs are concerned.

His post opens with a statement that ‘the effort and resources’ of schools, charities and businesses are ‘directed disproportionately’ at the poor but bright – and he exemplifies this with reference to fair access to competitive universities, particularly Oxbridge.

When I point out the significant investment in AP compared with fair access, he changes tack – ‘I’m measuring outcomes not just inputs’.

Then later he says ‘But what some need is just more expensive’, to which I respond that ‘the bottom end already has the lion’s share of funding’.

At this point we have both fallen into the trap of treating the entirety of the AP and SEN budgets as focused on the ‘poor but dim’.

We are failing to recognise that they are poor proxies because the majority of AP and SEN learners are not ‘poor’, many are not ‘dim’, these budgets are focused on a wider range of needs and there is significant additional expenditure directed at ‘poor but dim’ learners elsewhere in the wider education budget.

Despite Mr Thomas’s opening claim, it should be reasonably evident from the preceding commentary that my ‘lion’s share’ point is factually correct. His suggestion that AP is ‘largely unknown and wholly unappreciated’ flies in the face of the Taylor Review and the Government’s subsequent work programme.

SEN may depend heavily on the ‘extraordinary effort of dedicated staff’, but at least there are such dedicated staff! There may be inconsistencies in local authority funding and support for SEN, but the global investment is colossal by comparison with the funding dedicated to the other side of the balance.

Gifted Phoenix’s position acknowledges that inputs are heavily loaded in favour of the SEN and AP budgets. This is as it should be since, as Mr Thomas rightly notes, many of the additional services these learners need are frequently more expensive to provide. These services are not simply dedicated to raising their attainment, but also to tackling more substantive problems associated with their status.

Whether the balance of expenditure on the ‘bright’ and ‘dim’ respectively is optimal is a somewhat different matter. Contrary to Mr Thomas’s position, gifted advocates are often convinced that too much largesse is focused on the latter at the expense of the former.

Turning to advocacy, Mr Thomas says ‘we end up ignoring the more pressing problem’ of the poor but dim. He argues in the Twitter discussion that too few people are advocating for these learners, adding that they are failed ‘because it’s not popular to talk about them’.

I could not resist countering that advocacy for gifted learners is equally unpopular, indeed ‘the word is literally taboo in many settings’. I cannot help thinking – from his footnote reference to ‘epigenetic coding’ – that Mr Thomas is amongst those who are distinctly uncomfortable with the term.

Where advocacy does survive it is focused exclusively on progression to competitive universities and, to some extent, high attainment as a route towards that outcome. The narrative has shifted away from concepts of high ability or giftedness, because of the very limited consensus about that condition (even amongst gifted advocates) and even considerable doubt in some quarters whether it exists at all.

 

Outcomes

Mr Thomas maintains in his post that the successes of his preferred target group ‘have the power to change the British economy, far more so than those of their brighter peers’. This is because ‘the gap most damaging to society is in life outcomes for the children who perform least well at school’.

As noted above, it is important to remember that we are discussing here the addition of educational and economic value by tackling underachievement amongst learners from disadvantaged backgrounds, rather than amongst all the children who perform least well.

We are also leaving to one side the addition of value through any wider engagement by health and social services to improve life chances.

It is quite reasonable to advance the argument that improving the outcomes of ‘the bottom 20%’ (the Tail) will have ‘a huge socio-economic impact’ and ‘make the biggest marginal difference to society’.

But one could equally make the case that society would derive similar or even higher returns from a decision to concentrate disproportionately on the highest attainers (the Smart Fraction).

Or, as Gifted Phoenix would prefer, one could reasonably propose that the optimal returns should be achieved by means of a balanced approach that raises both the floor and the ceiling, avoiding any arbitrary distinctions on the basis of prior attainment.

From the Gifted Phoenix perspective, one should balance the advantages of removing the drag on productivity of an educational underclass against those of developing the high-level human capital needed to drive economic growth and improve our chances of success in what Coalition ministers call the ‘global race’.

According to this perspective, by eliminating excellence gaps between disadvantaged and advantaged high attainers we will secure a stream of benefits broadly commensurate with that at the bottom end.

These will include substantial spillover benefits, achieved as a result of broadening the pool of successful leaders in political, social, educational and artistic fields, not to mention significant improvements in social mobility.

It is even possible to argue that, by creating a larger pool of more highly educated parents, we can also achieve a significant positive impact on the achievement of subsequent generations, thus significantly reducing the size of the tail.

And in the present generation we will create many more role models: young people from disadvantaged backgrounds who become educationally successful and who can influence the aspirations of younger disadvantaged learners.

This avoids the risk that low expectations will be reinforced and perpetuated through a ‘deficit model’ approach that places excessive emphasis on removing the drag from the tail by producing a larger number of ‘useful members of society’.

This line of argument is integral to the Gifted Phoenix Manifesto.

It seems to me entirely conceivable that economists might produce calculations to justify any of these different paths.

But it would be highly inequitable to put all our eggs in the ‘poor but bright’ basket, because that penalises some disadvantaged learners for their failure to achieve high attainment thresholds.

And it would be equally inequitable to focus exclusively on the ‘poor but dim’, because that penalises some disadvantaged learners for their success in becoming high attainers.

The more equitable solution must be to opt for a ‘balanced scorecard’ approach that generates a proportion of the top end benefits and a proportion of the bottom end benefits simultaneously.

There is a risk that this reduces the total flow of benefits, compared with one or other of the inequitable solutions, but there is a trade-off here between efficiency and a socially desirable outcome that balances the competing interests of the two groups.

 

The personal dimension

After we had finished our Twitter exchanges, I thought to research Mr Thomas online. Turns out he’s quite the Big-Cheese-in-Embryo. Provided he escapes the lure of filthy lucre, he’ll be a mover and shaker in education within the next decade.

I couldn’t help noticing his own educational experience – public school, a First in PPE from Oxford, leading light in the Oxford Union – then graduation from Teach First alongside internships with Deutsche Bank and McKinsey.

Now he’s serving his educational apprenticeship as joint curriculum lead for maths at a prominent London Academy. He’s also a trustee of ‘a university mentoring project for highly able 11-14 year old pupils from West London state schools’.

Lucky I didn’t check earlier. Such a glowing CV might have been enough to cow this grammar school Oxbridge reject, even if I did begin this line of work several years before he was born. Not that I have a chip on my shoulder…

The experience set me wondering about the dominant ideology amongst the Teach First cadre, and how it is tempered by extended exposure to teaching in a challenging environment.

There’s more than a hint of idealism about someone from this privileged background espousing the educational philosophy that Mr Thomas professes. But didn’t he wonder where all the disadvantaged people were during his own educational experience, and doesn’t he want to change that too?

His interest in mentoring highly able pupils would suggest that he does, but also seems directly to contradict the position he’s reached here. It would be a pity if the ‘poor but bright’ could not continue to rely on his support, equal in quantity and quality to the support he offers the ‘poor but dim’.

For he could make a huge difference at both ends of the attainment spectrum – and, with his undeniable talents, he should certainly be able to do so.

 

Conclusion

We are entertaining three possible answers to the question whether in principle to prioritise the needs of the ‘poor but bright’ or the ‘poor but dim’:

  • Concentrate principally – perhaps even exclusively – on closing the Excellence Gaps at the top
  • Concentrate principally – perhaps even exclusively – on closing the Foundation Gaps at the bottom
  • Concentrate equally across the attainment spectrum, at the top and bottom and all points in between.

Speaking as an advocate for those at the top, I favour the third option.

It seems to me incontrovertible – though hard to quantify – that, in the English education system, the lion’s share of resources goes towards closing the Foundation Gaps.

That is perhaps as it should be, although one could wish that the financial scales were not tipped so excessively in their direction, for ‘poor but bright’ learners do in my view have an equal right to challenge and support, and should not be penalised for their high attainment.

Our current efforts to understand the relative size of the Foundation and Excellence Gaps and how these are changing over time are seriously compromised by the limited data in the public domain.

There is a powerful economic case to be made for prioritising the Foundation Gaps as part of a deliberate strategy for shortening the tail – but an equally powerful case can be constructed for prioritising the Excellence Gaps, as part of a deliberate strategy for increasing the smart fraction.

Neither of these options is optimal from an equity perspective however. The stream of benefits might be compromised somewhat by not focusing exclusively on one or the other, but a balanced approach should otherwise be in our collective best interests.

You may or may not agree. Here is a poll so you can register your vote. Please use the comments facility to share your wider views on this post.

Epilogue

 

We must beware the romance of the poor but bright,

But equally beware

The romance of rescuing the helpless

Poor from their sorry plight.

We must ensure the Tail

Does not wag the disadvantaged dog!

 

GP

May 2014

Air on the ‘G’ String: Hoagies’ Bloghop, May 2014

 

As I see it, there are three sets of issues with the ‘G’ word:

  • Terminological – the term carries with it associations that make some advocates uncomfortable and predispose others to resist such advocacy.
  • Definitional – there are many different ways to define the term and the subset of the population to which it can be applied; there is much disagreement about this, even amongst advocates.
  • Labelling – the application of the term to individuals can have unintended negative consequences, for them and for others.

 

Terminological issues

We need shared terminology to communicate effectively about this topic. A huge range of alternatives is available: able, more able, highly able, most able, talented, asynchronous, high potential, high learning potential… and so on.

These terms – the ‘g’ word in particular – are often qualified by an adjective – profoundly, highly, exceptionally – which adds a further layer of complexity. Then there is the vexed question of dual and multiple exceptionality…

Those of us who are native English speakers conveniently forget that there are also numerous terms available in other languages: surdoué, Hochbegabung, Hochbegabte, altas capacidades, superdotados, altas habilidades, evnerik and many, many more!

Each of these terms has its own good and bad points, its positive and negative associations.

The ‘g’ word has a long history, is part of the lingua franca and is still most widely used. But its long ascendancy has garnered a richer mix of associations than some of the alternatives.

The negative associations can be unhelpful to those seeking to persuade others to respond positively and effectively to the needs of these children and young people. Some advocates feel uncomfortable using the term and this hampers effective communication, both within the community and outside it.

Some react negatively to its exclusive, elitist connotations; on the other hand, it can be used in a positive way to boost confidence and self-esteem.

But, ultimately, the term we use is less significant than the way in which we define it. There may be some vague generic distaste for the ‘g’ word, but logic should dictate that most reactions will depend predominantly on the meaning that is applied to the term.

 

Definitional issues

My very first blog post drew attention to the very different ways in which this topic is approached around the world. I identified three key polarities:

  • Nature versus nurture – the perceived predominance of inherited disposition over effort and practice, or vice versa.
  • Excellence versus equity – whether priority is given to raising absolute standards and meritocracy or narrowing excellence gaps and social mobility.
  • Special needs versus personalisation – whether the condition or state defined by the term should be addressed educationally as a special need, or through mainstream provision via differentiation and tailored support.

These definitional positions may be associated with the perceived pitch or incidence of the ‘g’ condition. When those at the extreme of the distribution are under discussion, or the condition is perceived to be extremely rare, a nature-excellence-special needs perspective is more likely to predominate. A broader conceptualisation pushes one towards the nurture-equity-personalisation nexus.

Those with a more inclusive notion of ‘g’-ness – who do not distinguish between ‘bright’ and ‘g’, include all high attainers amongst the latter and are focused on the belief that ‘g’-ness is evenly distributed in the population by gender, ethnic and socio-economic background – are much more likely to hold the latter perspective, or at least tend towards it.

There are also differences according to whether the focus is the condition itself – ‘g’-ness – or schooling for the learners to whom the term is applied – ‘g’ education. In the first case, nature, excellence and special needs tend to predominate; in the second the reverse is true. This can compromise interaction between parents and educators.

In my experience, if the ‘g’ word is qualified by a careful definition that takes account of these three polarities, a mature discussion about needs and how best to meet them is much more likely to occur.

In the absence of a shared definition, the associations of the term will likely predominate unchecked. Effective communication will be impossible; common ground cannot be established; the needs that the advocate is pressing will remain unfulfilled. That is in no-one’s best interests, least of all those who are ‘g’.

 

Labelling Issues 

When the ‘g’ word is applied to an individual, it is likely to influence how that individual perceives himself and how others perceive him.

Labelling is normally regarded as negative, because it implies a fixed and immutable state and may subject the bearers of the label to impossibly high expectations, whether of behaviour or achievement, that they cannot always fulfil.

Those who do not carry the label may see themselves as second class citizens, become demotivated and much less likely to succeed.

But, as noted above, it is also possible to use the ‘g’ label to confer much-needed status and attention on those who do not possess the former or receive enough of the latter. This can boost confidence and self-esteem, making the owners of the label more likely to conform to the expectations that it carries.

This is particularly valuable for those who strive to promote equity and narrow excellence gaps between those from advantaged and disadvantaged backgrounds.

Moreover, much depends on whether the label is permanently applied or confers a temporary status.

I recently published a Twitter conversation explaining how the ‘g’ label can be used as a marker to identify those learners who for the time being need additional learning support to maximise their already high achievement.

This approach reflects the fact that children and young people do not develop through a consistent linear process, but experience periods of rapid development and comparative stasis.

The timing and duration of these periods will vary so, at any one time in any group of such individuals, some will be progressing rapidly and others will not. Over the longer term some will prove precocious; others late developers.

This is not to deny that a few learners at the extreme of the distribution will retain the marker throughout their education, because they are consistently far ahead of their peers and so need permanent additional support to maximise their achievement.

But, critically, the label is earned through evidence of high achievement rather than through a test of intelligence or cognitive ability that might have been administered once only and in the distant past. ‘G’-ness depends on educational success. This approach also forces educators to address underachievement at the top of the attainment spectrum.

If a label is more typically used as a temporary marker it must be deployed sensitively, in a way that is clearly understood by learners and their parents. They must appreciate that the removal of the marker is not a punishment or downgrading that leads to loss of self-esteem.

Because the ‘g’ label typically denotes a non-permanent state that defines need rather than expectation, most if not all of the negative connotations can be avoided.

Nevertheless, this may be anathema to those with a nature-excellence-special needs perspective!

 

Conclusion 

I have avoided using the ‘g’ word within this post, partly to see if it could be done and partly out of respect for those of you who dislike it so much.

But I have also advanced some provocative arguments using terminology that some of you will find equally disturbing. That is deliberate and designed to make you think!

The ‘g’ word has substantial downside, but this can be minimised through careful definition and the application of the label as a non-permanent marker.

It may be that the residual negative associations are such that an alternative is still preferable. The question then arises whether there is a better term with the same currency and none of the negative connotations.

As noted above there are many contenders – not all of them part of the English language – but none stands head-and-shoulders above its competitors.

And of course it is simply impossible to ban a word. Indeed, any attempt to do so would provoke many of us – me included – to use the ‘g’ word even more frequently and with much stronger conviction.

 

 

Hoagies bloghop

 

This blog is part of the Hoagies’ Gifted Education Page inaugural Blog Hop on The “G” Word (“Gifted”). To read more blogs in the hop, visit www.hoagiesgifted.org/blog_hop_the_g_word.htm

 

 

GP

May 2014


photo credit: neurollero (http://www.flickr.com/photos/neurollero/17873944/) via photopin (http://photopin.com), licensed under CC BY-SA 2.0 (http://creativecommons.org/licenses/by-sa/2.0/)

How well is Ofsted reporting on the most able?

 

 

This post considers how Ofsted’s new emphasis on the attainment and progress of the most able learners is reflected in school inspection reports.

My analysis is based on the 87 Section 5 secondary school inspection reports published in the month of March 2014.

I shall not repeat here previous coverage of how Ofsted’s emphasis on the most able has been framed; interested readers may wish to refer to my earlier posts for details.

The more specific purpose of this post is to explore how consistently Ofsted inspectors are applying their guidance and, in particular, whether there is substance to some of the concerns I expressed in those earlier posts, drawn together in the next section.

The remainder of the post provides an analysis of the sample and a qualitative review of the material about the most able (and analogous terms) included in the sample of 87 inspection reports.

It concludes with a summary of the key points, a set of associated recommendations and an overall inspection grade for inspectors’ performance to date. Those who prefer to skip the substance of the post can jump straight to that final section.

 

Background

Before embarking on the real substance of this argument I need to restate briefly some of the key issues raised in those earlier posts:

  • Ofsted’s definition of ‘the most able’ in its 2013 survey report is idiosyncratically broad, including around half of all learners on the basis of their KS2 outcomes.
  • The evidence base for this survey report included material suggesting that the most able students are supported well or better in only 20% of lessons – and are not making the progress of which they are capable in about 40% of schools.
  • The survey report’s recommendations included three commitments on Ofsted’s part. It would:

  • ‘focus more closely in its inspections on the teaching and progress of the most able students, the curriculum available to them, and the information, advice and guidance provided to the most able students’;

  • ‘consider in more detail during inspection how well the pupil premium is used to support the most able students from disadvantaged backgrounds’ and

  • ‘report its inspection findings about this group of students more clearly in school inspection, sixth form and college reports.’

  • Subsequently the school inspection guidance was revised somewhat haphazardly, resulting in the parallel use of several undefined terms (‘able pupils’, ‘most able’, ‘high attaining’, ‘highest attaining’), the underplaying of the attainment and progress of the most able learners attracting the Pupil Premium, and very limited reference to appropriate curriculum and IAG.
  • Within the inspection guidance, emphasis was placed primarily on learning and progress. I edited together the two relevant sets of level descriptors in the guidance to provide this summary for the four different inspection categories:

In outstanding schools the most able pupils’ learning is consistently good or better and they are making rapid and sustained progress.

In good schools the most able pupils’ learning is generally good, they make good progress and achieve well over time.

In schools requiring improvement the teaching of the most able pupils and their achievement are not good.

In inadequate schools the most able pupils are underachieving and making inadequate progress.

  • No published advice has been made available to inspectors on the interpretation of these amendments to the inspection guidance. In October 2013 I wrote:

‘Unfortunately, there is a real risk that the questionable clarity of the Handbook and Subsidiary Guidance will result in some inconsistency in the application of the Framework, even though the fundamental purpose of such material is surely to achieve the opposite.’

  • Analysis of a very small sample of reports for schools reporting poor results for high attainers in the school performance tables suggested inconsistency both before and after the amendments were introduced into the guidance. I commented:

‘One might expect that, unconsciously or otherwise, inspectors are less ready to single out the performance of the most able when a school is inadequate across the board, but the small sample above does not support this hypothesis. Some of the most substantive comments relate to inadequate schools.

It therefore seems more likely that the variance is attributable to the differing capacity of inspection teams to respond to the new emphases in their inspection guidance. This would support the case made in my previous post for inspectors to receive additional guidance on how they should interpret the new requirement.’

The material below considers the impact of these revisions on a more substantial sample of reports and whether this justifies some of the concerns expressed above.

It is important to add that, in January 2014, Ofsted revised its guidance document ‘Writing the report for school inspections’ to include the statement that:

‘Inspectors must always report in detail on the progress of the most able pupils and how effectively teaching engages them with work that is challenging enough.’ (p8)

This serves to reinforce the changes to the inspection guidance and clearly indicates that coverage of this issue – at least in these terms – is a non-negotiable: we should expect to see appropriate reference in every single section 5 report.

 

The Sample

The sample comprises 87 secondary schools whose Section 5 inspection reports were published by Ofsted in the month of March 2014.

The inspections were conducted between 26 November 2013 and 11 March 2014, so the inspectors will have had time to become familiar with the revised guidance.

However, up to 20 of the inspections took place before Ofsted felt it necessary to emphasise that coverage of the progress and teaching of the most able is compulsory.

The sample happens to include several institutions inspected as part of wider-ranging reviews of schools in Birmingham and schools operated by the E-ACT academy chain. It also incorporates several middle-deemed secondary schools.

Chart 1 shows the regional breakdown of the sample, adopting the regions Ofsted uses to categorise reports, as opposed to its own regional structure (ie with the North East identified separately from Yorkshire and Humberside).

It contains a disproportionately large number of schools from the West Midlands while the South-West is significantly under-represented. All the remaining regions supply between 5 and 13 schools. A total of 57 local authority areas are represented.

 

Chart 1: Schools within the sample by region

Ofsted chart 1

 

Chart 2 shows the different statuses of schools within the sample. Over 40% are community schools, while almost 30% are sponsored academies. There are no academy converters but sponsored academies, free schools and studio schools together account for some 37% of the sample.

 

Chart 2: Schools within the sample by status

Ofsted chart 2

 

The vast majority of schools in the sample are 11-16 or 11-18 institutions, but four are all-through schools, five provide for learners aged 13 or 14 upwards and 10 are middle schools. There are four single sex schools.

Chart 3 shows the variation in school size. Some of the studio schools, free schools and middle schools are very small by secondary standards, while the largest secondary school in the sample has some 1,600 pupils. A significant proportion of schools have between 600 and 1,000 pupils.

 

Chart 3: Schools within the sample by number on roll

Ofsted chart 3

The distribution of overall inspection grades across the sample is illustrated by Chart 4 below. Eight schools were rated outstanding, 28 good, 35 requiring improvement and 16 inadequate.

Of those rated inadequate, 12 were subject to special measures and four had serious weaknesses.

 

Chart 4: Schools within the sample by overall inspection grade

 Ofsted chart 4

The eight schools rated outstanding include:

  • A mixed 11-18 sponsored academy
  • A mixed 14-19 studio school
  • A mixed 11-18 free school
  • A mixed 11-16 VA comprehensive
  • A girls’ 11-18 VA comprehensive
  • A boys’ 11-18 VA selective school
  • A girls’ 11-18 community comprehensive and
  • A mixed 11-18 community comprehensive

The sixteen schools rated inadequate include:

  • Eight mixed 11-18 sponsored academies
  • Two mixed 11-16 sponsored academies
  • A mixed all-through sponsored academy
  • A mixed 11-16 free school
  • Two mixed 11-16 community comprehensives
  • A mixed 11-18 community comprehensive and
  • A mixed 13-19 community comprehensive

 

Coverage of the most able in main findings and recommendations

 

Terminology 

Where they were mentioned, such learners were most often described as ‘most able’, but a wide range of other terminology is deployed, including ‘most-able’, ‘the more able’, ‘more-able’, ‘higher attaining’, ‘high-ability’, ‘higher-ability’ and ‘able students’.

The idiosyncratic adoption of redundant hyphenation is an unresolved mystery.

It is not unusual for two or more of these terms to be used in the same report. In the absence of a glossary, this makes some reports rather less straightforward to interpret accurately.

It is also more difficult to compare and contrast reports. Helpful services like Watchsted’s word search facility become less useful.

 

Incidence of commentary in the main findings and recommendations

Thirty of the 87 inspection reports (34%) explicitly addressed the school’s most able learners (under that or a similar term) in both of the sections setting out the report’s main findings and its recommendations.

The analysis showed that 28% of reports on academies (including studios and free schools) met this criterion, whereas 38% of reports on non-academy schools did so.

Chart 5 shows how the incidence of reference in both main findings and recommendations varies according to the overall inspection grade awarded.

One can see that this level of attention is most prevalent in schools requiring improvement, followed by those with inadequate grades. It was less common in schools rated good and less common still in outstanding schools. The gap between these two categories is perhaps smaller than expected.

The slight lead for schools requiring improvement over inadequate schools may be attributable to a view that inadequate schools face more pressing priorities, or it may have something to do with the varying proportions of high attainers in such schools; both of these factors could be in play, amongst others.

 

Chart 5: Most able covered in both main findings and recommendations by overall inspection rating (percentage)

Ofsted chart 5

A further eleven reports (13%) addressed the most able learners in the recommendations but not the main findings.

Only one report managed to feature the most able in the main findings but not in the recommendations and this was because the former recorded that ‘the most able students do well’.

Consequently, a total of 45 reports (52%) did not mention the most able in either the main findings or the recommendations.

This applied to some 56% of reports on academies (including free schools and studio schools) and 49% of reports on other state-funded schools.

So, according to these proxy measures, the most able in academies appear to receive comparatively less attention from inspectors than those in non-academy schools. It is not clear why. (The samples are almost certainly too small to support reliable comparison of academies and non-academies with different inspection ratings.)
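The categorisation underpinning these figures is simple enough to reproduce. Below is a minimal sketch in Python, assuming a hand-coded pair of flags for each report recording whether the most able featured in the main findings and in the recommendations; the flags here simply reproduce the counts given above rather than the underlying coding of each report.

```python
from collections import Counter

def categorise(in_findings: bool, in_recs: bool) -> str:
    """Assign a report to one of the four categories used in this analysis."""
    if in_findings and in_recs:
        return "both"
    if in_recs:
        return "recommendations only"
    if in_findings:
        return "main findings only"
    return "neither"

# Hypothetical hand-coded flags for the 87 reports, reproducing the
# counts reported above (30 / 11 / 1 / 45).
flags = ([(True, True)] * 30 + [(False, True)] * 11 +
         [(True, False)] * 1 + [(False, False)] * 45)

tally = Counter(categorise(f, r) for f, r in flags)
for category in ("both", "recommendations only", "main findings only", "neither"):
    n = tally[category]
    print(f"{category}: {n}/{len(flags)} = {n / len(flags):.0%}")
# both: 30/87 = 34%, recommendations only: 11/87 = 13%,
# main findings only: 1/87 = 1%, neither: 45/87 = 52%
```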

Chart 6 below shows the inspection ratings for this subset of reports.

 

Chart 6: Most able covered in neither main findings nor recommendations by overall inspection rating (percentage)

Ofsted chart 6

Here is further evidence that the significant majority of outstanding schools are regarded as having no significant problems in respect of provision for the most able.

On the other hand, this is far from being universally true, since it is an issue for one in four of them. This ratio of 3:1 does not lend complete support to the oft-encountered truism that outstanding schools invariably provide outstandingly for the most able – and vice versa.

At the other end of the spectrum, and perhaps even more surprisingly, over 30% of inadequate schools are assumed not to have issues significant enough to warrant reference in these sections. Sometimes this may be because they are equally poor at providing for all their learners, so the most able are not separately singled out.

Chart 7 below shows differences by school size, giving the percentage of reports mentioning the most able in both main findings and recommendations and in neither.

It divides schools into three categories: small (24 schools with a NOR of 599 or lower), medium (35 schools with a NOR of 600-999) and large (28 schools with a NOR of 1,000 or higher), as in the sketch below.
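For anyone wishing to replicate the banding, here is a one-function sketch using the NOR cut-points just described (the function name and band labels are mine):

```python
def size_band(nor: int) -> str:
    """Band a school by number on roll (NOR), using the cut-points above."""
    if nor <= 599:
        return "small"
    if nor <= 999:
        return "medium"
    return "large"

assert size_band(550) == "small"
assert size_band(600) == "medium"
assert size_band(1600) == "large"
```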

 

Chart 7: Reports mentioning the most able in main findings and recommendations by school size 

 Ofsted chart 7

It is evident that ‘neither’ exceeds ‘both’ in all three categories. Small and large schools record very similar profiles.

But there is a much more significant difference for medium-sized schools. They demonstrate a much smaller percentage of ‘both’ reports and comfortably the largest percentage of ‘neither’ reports.

This pattern – suggesting that inspectors are markedly less likely to emphasise provision for the most able in medium-sized schools – is worthy of further investigation.

It would be particularly interesting to explore further the relationship between school size, the proportion of high attainers in a school and their achievement.

 

Typical references in the main findings and recommendations

I could detect no obvious and consistent variation in these references by school status or size, but there was a noticeably different emphasis between schools rated outstanding and those rated inadequate.

Where the most able featured in reports on outstanding schools, these included recommendations such as:

‘Further increase the proportion of outstanding teaching in order to raise attainment even higher, especially for the most able students.’ (11-16 VA comprehensive).

‘Ensure an even higher proportion of students, including the most able, make outstanding progress across all subjects’ (11-18 sponsored academy).

These statements suggest that such schools have made good progress in eradicating underachievement amongst the most able but still have further room for improvement.

But where the most able featured in recommendations for inadequate schools, they were typically of this nature:

‘Improve teaching so that it is consistently good or better across all subjects, but especially in mathematics, by: raising teachers’ expectations of the quality and amount of work students of all abilities can do, especially the most and least able.’  (11-16 sponsored academy).

‘Improve the quality of teaching in order to speed up the progress students make by setting tasks that are at the right level to get the best out of students, especially the most able.’ (11-18 sponsored academy).

‘Rapidly improve the quality of teaching, especially in mathematics, by ensuring that teachers: have much higher expectations of what students can achieve, especially the most able…’ (11-16 community school).

These make clear that poor and inconsistent teaching quality is causing significant underachievement at the top end (and ‘especially’ suggests that this top end underachievement is particularly pronounced compared with other sections of the attainment spectrum in such schools).

Recommendations for schools requiring improvement are akin to those for inadequate schools but typically more specific, pinpointing particular dimensions of good quality teaching that are absent, so limiting effective provision for the most able. It is as if these schools have some of the pieces in place but not yet the whole jigsaw.

By comparison, recommendations for good schools can seem rather more impressionistic and/or formulaic, focusing more generally on ‘increasing the proportion of outstanding teaching’. In such cases the assessment is less about missing elements and more about the consistent application of all of them across the school.

One gets the distinct impression that inspectors have a clearer grasp of the ‘fit’ between provision for the most able and the other three inspection outcomes, at least as far as the distinction between ‘good’ and ‘outstanding’ is concerned.

But it would be misleading to suggest that these lines of demarcation are invariably clear. The boundary between ‘good’ and ‘requires improvement’ seems comparatively distinct, but there was more evidence of overlap at the intersections between the other grades.

 

Coverage of the most able in the main body of reports 

References to the most able rarely turn up in the sections dealing with behaviour and safety and leadership and management. I counted no examples of the former and no more than one or two of the latter.

I could find no examples where information, advice and guidance available to the most able are separately and explicitly discussed and little specific reference to the appropriateness of the curriculum for the most able. Both are less prominent than the recommendations in the June 2013 survey report led us to expect.

Within this sample, the vast majority of reports include some description of the attainment and/or progress of the most able in the section about pupils’ achievement, while roughly half pick up the issue in relation to the quality of teaching.

The extent of the coverage of most able learners varied enormously. Some devoted a single sentence to the topic while others referred to it separately in main findings, recommendations, pupils’ achievement and quality of teaching. In a handful of cases reports seemed to give disproportionate attention to the topic.

 

Attainment and progress

Analyses of attainment and progress are sometimes entirely generic, as in:

‘The most able students make good progress’ (inadequate 11-18 community school).

‘The school has correctly identified a small number of the most able who could make even more progress’ (outstanding 11-16 RC VA school).

‘The most able students do not always secure the highest grades’ (11-16 community school requiring improvement).

‘The most able students make largely expected rates of progress. Not enough yet go on to attain the highest GCSE grades in all subjects.’ (Good 11-18 sponsored academy).

Sometimes such statements can be damning:

‘The most-able students in the academy are underachieving in almost every subject. This is even the case in most of those subjects where other students are doing well. It is an academy-wide issue.’ (Inadequate 11-18 sponsored academy).

These do not in my view constitute reporting ‘in detail on the progress of the most able pupils’ and so probably fall foul of Ofsted’s guidance to inspectors on writing reports.

More specific comments on attainment typically refer explicitly to the achievement of A*/A grades at GCSE and ideally to specific subjects, for example:

‘In 2013, standards in science, design and technology, religious studies, French and Spanish were also below average. Very few students achieved the highest A* and A grades.’ (Inadequate 11-18 sponsored academy)

‘Higher-ability students do particularly well in a range of subjects, including mathematics, religious education, drama, art and graphics. They do as well as other students nationally in history and geography.’ (13-18 community school  requiring improvement)

More specific comments on progress include:

‘The progress of the most able students in English is significantly better than that in other schools nationally, and above national figures in mathematics. However, the progress of this group is less secure in science and humanities.’  (Outstanding 11-18 sponsored academy)

‘In 2013, when compared to similar students nationally, more-able students made less progress than less-able students in English. In mathematics, where progress is less than in English, students of all abilities made similar progress.’ (11-18 sponsored academy requiring improvement).

Statements about progress rarely extend beyond English and maths (the first example above is exceptional) but, when attainment is the focus, some reports take a narrow view based exclusively on the core subjects, while others are far wider-ranging.

Despite the reference in Ofsted’s survey report, and subsequently the revised subsidiary guidance, to coverage of high attaining learners in receipt of the Pupil Premium, this is hardly ever addressed.

I could find only two examples amongst the 87 reports:

‘The gap between the achievement in English and mathematics of students for whom the school receives additional pupil premium funding and that of their classmates widened in 2013… During the inspection, it was clear that the performance of this group is a focus in all lessons and those of highest ability were observed to be achieving equally as well as their peers.’ (11-16 foundation school requiring improvement)

‘Students eligible for the pupil premium make less progress than others do and are consequently behind their peers by approximately one GCSE grade in English and mathematics. These gaps reduced from 2012 to 2013, although narrowing of the gaps in progress has not been consistent over time. More-able students in this group make relatively less progress.’ (11-16 sponsored academy requiring improvement)

More often than not it seems that the most able and those in receipt of the Pupil Premium are assumed to be mutually exclusive groups.

 

Quality of teaching 

There was little variation in the issues raised under teaching quality. Most inspectors select two or three options from a standard menu:

‘Where teaching is best, teachers provide suitably challenging materials and through highly effective questioning enable the most able students to be appropriately challenged and stretched…. Where teaching is less effective, teachers are not planning work at the right level of difficulty. Some work is too easy for the more able students in the class.’ (Good 11-16 community school)

 ‘In teaching observed during the inspection, the pace of learning for the most able students was too slow because the activities they were given were too easy. Although planning identified different activities for the most able students, this was often vague and not reflected in practice.  Work lacks challenge for the most able students.’ (Inadequate 11-16 community school)

‘In lessons where teaching requires improvement, teachers do not plan work at the right level to ensure that students of differing abilities build on what they already know. As a result, there is a lack of challenge in these lessons, particularly for the more able students, and the pace of learning is slow. In these lessons teachers do not have high enough expectations of what students can achieve.’ (11-18 community school requiring improvement)

‘Tasks set by teachers are sometimes too easy and repetitive for pupils, particularly the most able. In mathematics, pupils are sometimes not moved on quickly enough to new and more challenging tasks when they have mastered their current work.’ (9-13 community middle school requiring improvement)

‘Targets which are set for students are not demanding enough, and this particularly affects the progress of the most able because teachers across the year groups and subjects do not always set them work which is challenging. As a result, the most able students are not stretched in lessons and do not achieve as well as they should.’ (11-16 sponsored academy rated inadequate)

All the familiar themes are present – assessment informing planning, careful differentiation, pace and challenge, appropriate questioning, the application of subject knowledge, the quality of homework, high expectations and extending effective practice between subject departments.

 

Negligible coverage of the most able

Only one of the 87 reports failed to make any mention of the most able whatsoever. This is the report on North Birmingham Academy, an 11-19 mixed school requiring improvement.

This clearly does not meet the injunction to:

‘…report in detail on the progress of the most able pupils and how effectively teaching engages them with work that is challenging enough’.

It ought not to have passed through Ofsted’s quality assurance processes unscathed. The inspection was conducted in February 2014, after this guidance was issued, so there is no excuse.

Several other inspections make only cursory references to the most able in the main body of the report, for example:

‘Where teaching is not so good, it was often because teachers failed to check students’ understanding or else to anticipate when to intervene to support students’ learning, especially higher attaining students in the class.’ (Good 11-18 VA comprehensive).

‘… the teachers’ judgements matched those of the examiners for a small group of more-able students who entered early for GCSE in November 2013.’ (Inadequate 11-18 sponsored academy).

‘More-able students are increasingly well catered for as part of the academy’s focus on raising levels of challenge.’ (Good 11-18 sponsored academy).

‘The most able students do not always pursue their work to the best of their capability.’ (11-16 free school requiring improvement).

These would also fall well short of the report writing guidance. At least 6% of my sample falls into this category.

Some reports note explicitly that the most able learners are not making sufficient progress, but fail to capture this in the main findings or recommendations, for example:

‘The achievement of more able students is uneven across subjects. More able students said to inspectors that they did not feel they were challenged or stretched in many of their lessons. Inspectors agreed with this view through evidence gathered in lesson observations…lessons do not fully challenge all students, especially the more able, to achieve the grades of which they are capable.’ (11-19 sponsored academy requiring improvement).

‘The 2013 results of more-able students show they made slower progress than is typical nationally, especially in mathematics.  Progress is improving this year, but they are still not always sufficiently challenged in lessons.’ (11-18 VC CofE school requiring improvement).

‘There is only a small proportion of more-able students in the academy. In 2013 they made less progress in English and mathematics than similar students nationally. Across all of their subjects, teaching is not sufficiently challenging for more-able students and they leave the academy with standards below where they should be.’ (Inadequate 11-18 sponsored academy).

‘The proportion of students achieving grades A* and A was well below average, demonstrating that the achievement of the most able also requires improvement.’  (11-18 sponsored academy requiring improvement).

Something approaching 10% of the sample fell into this category. It was not always clear why this issue was not deemed significant enough to feature amongst schools’ priorities for improvement. This state of affairs was more typical of schools requiring improvement than inadequate schools, so one could not so readily argue that the schools concerned were overwhelmed with the need to rectify more basic shortcomings.

That said, the example from an inadequate academy above may be significant. It is almost as if the small number of more able students is the reason why this shortcoming is not taken more seriously.

Inspectors must carry in their heads a somewhat subjective hierarchy of issues that schools are expected to tackle. Some inspectors appear to feature the most able at a relatively high position in this hierarchy; others push it further down the list. Some appear more flexible in the application of this hierarchy to different settings than others.

 

Formulaic and idiosyncratic references 

There is clear evidence of formulaic responses, especially in the recommendations for how schools can improve their practice.

Many reports adopt the strategy of recommending a series of actions featuring the most able, either in the target group:

‘Improve the quality of teaching to at least good so that students, including the most able, achieve higher standards, by ensuring that:…’ [followed by a list of actions] (9-13 community middle school requiring improvement)

Or in the list of actions:

‘Improve the quality of teaching in order to raise the achievement of students by ensuring that teachers:…use assessment information to plan their work so that all groups of students, including those supported by the pupil premium and the most-able students, make good progress.’ (11-16 community school requiring improvement)

It was rare indeed to come across a report that referred explicitly to interesting or different practice in the school, or approached the topic in a more individualistic manner, but here are a few examples:

‘More-able pupils are catered for well and make good progress. Pupils enjoy the regular, extra challenges set for them in many lessons and, where this happens, it enhances their progress. They enjoy that extra element which often tests them and gets them thinking about their work in more depth. Most pupils are keen to explore problems which will take them to the next level or extend their skills.’  (Good 9-13 community middle school)

‘Although the vast majority of groups of students make excellent progress, the school has correctly identified a small number of the most able who could make even more progress. It has already started an impressive programme of support targeting the 50 most able students called ‘Students Targeted A grade Results’ (STAR). This programme offers individualised mentoring using high-quality teachers to give direct intervention and support. This is coupled with the involvement of local universities. The school believes this will give further aspiration to these students to do their very best and attend prestigious universities.’  (Outstanding 11-16 VA school)

I particularly liked:

‘Policies to promote equality of opportunity are ineffective because of the underachievement of several groups of students, including those eligible for the pupil premium and the more-able students.’ (Inadequate 11-18 academy) 

 

Conclusion

 

Main Findings

The principal findings from this survey, admittedly based on a rather small and not entirely representative sample, are that:

  • Inspectors are terminologically challenged in addressing this issue, because there are too many synonyms or near-synonyms in use.
  • Approximately one-third of inspection reports address provision for the most able in both main findings and recommendations. This is less common in academies than in community, controlled and aided schools. It is most prevalent in schools with an overall ‘requires improvement’ rating, followed by those rated inadequate. It is least prevalent in outstanding schools, although one in four outstanding schools is dealt with in this way.
  • Slightly over half of inspection reports address provision for the most able in neither the main findings nor the recommendations. This is relatively more common in the academies sector and in outstanding schools. It is least prevalent in schools rated inadequate, though almost one-third of inadequate schools fall into this category. Sometimes this is the case even though provision for the most able is identified as a significant issue in the main body of the report.
  • There is an unexplained tendency for reports on medium-sized schools to be significantly less likely to feature the most able in both main findings and recommendations and significantly more likely to feature it in neither. This warrants further investigation.
  • Overall coverage of the topic varies excessively between reports. One ignored it entirely, while several provided only cursory coverage and a few covered it to excess. The scope and quality of the coverage does not necessarily correlate with the significance of the issue for the school.
  • Coverage of the attainment and progress of the most able learners is variable. Some reports offer only generic descriptions of attainment and progress combined, some are focused exclusively on attainment in the core subjects while others take a wider curricular perspective. Outside the middle school sector, desirable attainment outcomes for the most able are almost invariably defined exclusively in terms of A* and A grade GCSEs.
  • Hardly any reports consider the attainment and/or progress of the most able learners in receipt of the Pupil Premium.
  • None of these reports make specific and explicit reference to IAG for the most able. It is rarely stated whether the school’s curriculum satisfies the needs of the most able.
  • Too many reports adopt formulaic approaches, especially in the recommendations they offer the school. Too few include reference to interesting or different practice.

In my judgement, too much current inspection reporting falls short of the commitments contained in the original Ofsted survey report and of the more recent requirement to:

‘always report in detail on the progress of the most able pupils and how effectively teaching engages them with work that is challenging enough.’

 

Recommendations

  • Ofsted should publish a glossary defining clearly all the terms for the most able that it employs, so that both inspectors and schools understand exactly what is intended when a particular term is deployed and which learners should be in scope when the most able are discussed.
  • Ofsted should co-ordinate the development of supplementary guidance clarifying its expectations of schools in respect of provision for the most able. This should set out in more detail what would be expected for such provision to be rated outstanding, good, requiring improvement and inadequate respectively, and should cover the most able in receipt of the Pupil Premium, the suitability of the curriculum and the provision of IAG.
  • Ofsted should provide supplementary guidance for inspectors outlining and exemplifying the full range of evidence they might interrogate concerning the attainment and progress of the most able learners, including those in receipt of the Pupil Premium.
  • This guidance should specify the essential minimum coverage expected in reports and the ‘triggers’ that would warrant it being referenced in the main findings and/or recommendations for action.
  • This guidance should discourage inspectors from adopting formulaic descriptors and recommendations and specifically encourage them to identify unusual or innovative examples of effective practice.
  • The school inspection handbook and subsidiary guidance should be amended to reflect the supplementary guidance.
  • The School Data Dashboard should be expanded to include key data highlighting the attainment and progress of the most able.
  • These actions should also be undertaken for inspection of the primary and 16-19 sectors respectively.

 

Overall assessment: Requires Improvement.

 

GP

May 2014


PISA 2012 Creative Problem Solving: International Comparison of High Achievers’ Performance

This post compares the performance of high achievers from selected jurisdictions on the PISA 2012 creative problem solving test.

It draws principally on the material in the OECD Report ‘PISA 2012 Results: Creative Problem Solving’ published on 1 April 2014.

The sample of jurisdictions includes England, other English-speaking countries (Australia, Canada, Ireland and the USA) and those that typically top the PISA rankings (Finland, Hong Kong, South Korea, Shanghai, Singapore and Taiwan).

With the exception of New Zealand, which did not take part in the problem solving assessment, this is deliberately identical to the sample I selected for a parallel post reviewing comparable results in the PISA 2012 assessments of reading, mathematics and science: ‘PISA 2012: International Comparisons of High Achievers’ Performance’ (December 2013).

These eleven jurisdictions account for nine of the top twelve performers ranked by mean overall performance in the problem solving assessment. (The USA and Ireland lie outside the top twelve, while Japan, Macao and Estonia are the three jurisdictions that are in the top twelve but outside my sample.)

The post is divided into seven sections:

  • Background to the problem solving assessment: How PISA defines problem solving competence; how it defines performance at each of the six levels of proficiency; how it defines high achievement; the nature of the assessment and who undertook it.
  • Average performance, the performance of high achievers and the performance of low achievers (proficiency level 1) on the problem solving assessment. This comparison includes my own sample and all the other jurisdictions that score above the OECD average on the first of these measures.
  • Gender and socio-economic differences amongst high achievers on the problem solving assessment  in my sample of eleven jurisdictions.
  • The relative strengths and weaknesses of jurisdictions in this sample on different aspects of the problem solving assessment. (This treatment is generic rather than specific to high achievers.)
  • What proportion of high achievers on the problem-solving assessment in my sample of jurisdictions are also high achievers in reading, maths and science respectively.
  • What proportion of students in my sample of jurisdictions achieves highly in one or more of the four PISA 2012 assessments – and against the ‘all-rounder’ measure, which is based on high achievement in all of reading, maths and science (but not problem solving).
  • Implications for education policy makers seeking to improve problem solving performance in each of the sample jurisdictions.

Background to the Problem Solving Assessment


Definition of problem solving

PISA’s definition of problem-solving competence is:

‘…an individual’s capacity to engage in cognitive processing to understand and resolve problem situations where a method of solution is not immediately obvious. It includes the willingness to engage with such situations in order to achieve one’s potential as a constructive and reflective citizen.’

The commentary on this definition points out that:

  • Problem solving requires identification of the problem(s) to be solved, planning and applying a solution, and monitoring and evaluating progress.
  • A problem is ‘a situation in which the goal cannot be achieved by merely applying learned procedures’, so the problems encountered must be non-routine for 15 year-olds, although ‘knowledge of general strategies’ may be useful in solving them.
  • Motivational and affective factors are also in play.

The Report is rather coy about the role of creativity in problem solving, and hence the justification for the inclusion of this term in its title.

Perhaps the nearest it gets to an exposition is when commenting on the implications of its findings:

‘In some countries and economies, such as Finland, Shanghai-China and Sweden, students master the skills needed to solve static, analytical problems similar to those that textbooks and exam sheets typically contain as well or better than 15-year-olds, on average, across OECD countries. But the same 15-year-olds are less successful when not all information that is needed to solve the problem is disclosed, and the information provided must be completed by interacting with the problem situation. A specific difficulty with items that require students to be open to novelty, tolerate doubt and uncertainty, and dare to use intuitions (“hunches and feelings”) to initiate a solution suggests that opportunities to develop and exercise these traits, which are related to curiosity, perseverance and creativity, need to be prioritised.’


Assessment framework

PISA’s framework for assessing problem solving competence is set out in the following diagram.

 

PISA problem solving framework Capture

 

In solving a particular problem it may not be necessary to apply all these steps, or to apply them in this order.

Proficiency levels

The proficiency scale was designed to have a mean score across OECD countries of 500. The six levels of proficiency applied in the assessment each have their own profile.

The lowest, level 1 proficiency is described thus:

‘At Level 1, students can explore a problem scenario only in a limited way, but tend to do so only when they have encountered very similar situations before. Based on their observations of familiar scenarios, these students are able only to partially describe the behaviour of a simple, everyday device. In general, students at Level 1 can solve straightforward problems provided there is a simple condition to be satisfied and there are only one or two steps to be performed to reach the goal. Level 1 students tend not to be able to plan ahead or set sub-goals.’

This level equates to a range of scores from 358 to 423. Across the OECD sample, 91.8% of participants are able to perform tasks at this level.

By comparison, level 5 proficiency is described in this manner:

‘At Level 5, students can systematically explore a complex problem scenario to gain an understanding of how relevant information is structured. When faced with unfamiliar, moderately complex devices, such as vending machines or home appliances, they respond quickly to feedback in order to control the device. In order to reach a solution, Level 5 problem solvers think ahead to find the best strategy that addresses all the given constraints. They can immediately adjust their plans or backtrack when they detect unexpected difficulties or when they make mistakes that take them off course.’

The associated range of scores is from 618 to 683, and 11.4% of all OECD students achieve at this level or above.

Finally, level 6 proficiency is described in this way:

‘At Level 6, students can develop complete, coherent mental models of diverse problem scenarios, enabling them to solve complex problems efficiently. They can explore a scenario in a highly strategic manner to understand all information pertaining to the problem. The information may be presented in different formats, requiring interpretation and integration of related parts. When confronted with very complex devices, such as home appliances that work in an unusual or unexpected manner, they quickly learn how to control the devices to achieve a goal in an optimal way. Level 6 problem solvers can set up general hypotheses about a system and thoroughly test them. They can follow a premise through to a logical conclusion or recognise when there is not enough information available to reach one. In order to reach a solution, these highly proficient problem solvers can create complex, flexible, multi-step plans that they continually monitor during execution. Where necessary, they modify their strategies, taking all constraints into account, both explicit and implicit.’

The range of level 6 scores is from 683 points upwards and 2.5% of all OECD participants score at this level.

PISA defines high achieving students as those securing proficiency level 5 or higher, so proficiency levels 5 and 6 together. The bulk of the analysis it supplies relates to this cohort, while relatively little attention is paid to the more exclusive group achieving proficiency level 6, even though almost 10% of students in Singapore reach this standard in problem solving.
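Only some of the cut-scores are quoted above – level 1 runs from 358 to 423, level 5 from 618 to 683 and level 6 from 683 upwards – so any classifier built from this post alone is necessarily partial. A sketch on that basis, with levels 2 to 4 collapsed into a single placeholder band and the treatment of scores falling exactly on a cut-point assumed rather than taken from the Report:

```python
def proficiency_band(score: float) -> str:
    """Map a PISA 2012 problem-solving score to the bands quoted in this post.

    Levels 2-4 share a placeholder band because their boundaries are not
    reproduced here; handling of scores exactly on a cut-point is assumed.
    """
    if score < 358:
        return "below level 1"
    if score < 423:
        return "level 1"
    if score < 618:
        return "levels 2-4 (boundaries not quoted here)"
    if score < 683:
        return "level 5"
    return "level 6"

def is_high_achiever(score: float) -> bool:
    """PISA's high achievers: proficiency level 5 or higher (618 points up)."""
    return score >= 618

assert proficiency_band(500) == "levels 2-4 (boundaries not quoted here)"
assert is_high_achiever(620) and proficiency_band(700) == "level 6"
```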


The sample

Sixty-five jurisdictions took part in PISA 2012, including all 34 OECD countries and 31 partners. But only 44 jurisdictions took part in the problem solving assessment, including 28 OECD countries and 16 partners. As noted above, that included all my original sample of twelve jurisdictions, with the exception of New Zealand.

I could find no stated reason why New Zealand chose not to take part. Press reports initially suggested that England would do likewise, but it was subsequently reported that this decision had been reversed.

The assessment was computer-based and comprised 16 units divided into 42 items. The units were organised into four clusters, each designed to take 20 minutes to complete. Participants completed one or two clusters, depending on whether they were also undertaking computer-based assessments of reading and maths.

In each jurisdiction a random sample of those who took part in the paper-based maths assessment was selected to undertake the problem solving assessment. About 85,000 students took part in all. The unweighted sample sizes in my selected jurisdictions are set out in Table 1 below, together with the total population of 15 year-olds in each jurisdiction.

 

Table 1: Sample sizes undertaking PISA 2012 problem solving assessment in selected jurisdictions

Country Unweighted Sample Total 15 year-olds
Australia 5,612 291,976
Canada 4,601 417,873
Finland 3,531 62,523
Hong Kong 1,325 84,200
Ireland 1,190 59,296
Shanghai 1,203 108,056
Singapore 1,394 53,637
South Korea 1,336 687,104
Taiwan 1,484 328,356
UK (England) 1,458 738,066
USA 1,273 3,985,714

Those taking the assessment were aged between 15 years and three months and 16 years and two months at the time of the assessment. All were enrolled at school and had completed at least six years of formal schooling.

Average performance compared with the performance of high and low achievers

The overall table of mean scores on the problem solving assessment is shown below.

PISA problem solving raw scores Capture


There are some familiar names at the top of the table, especially Singapore and South Korea, the two countries that comfortably lead the rankings. Japan is some ten points behind in third place but it in turn has a lead of twelve points over a cluster of four other Asian competitors: Macao, Hong Kong, Shanghai and Taiwan.

A slightly different picture emerges if we compare average performance with the proportion of learners who achieve the bottom proficiency level and the top two proficiency levels. Table 2 below compares these groups.

This table includes all the jurisdictions that exceeded the OECD average score. I have marked out in bold the countries in my sample of eleven, which includes Ireland, the only one of them that did not exceed the OECD average.

Table 2: PISA Problem Solving 2012: Comparing Average Performance with Performance at Key Proficiency Levels

 

Jurisdiction Mean score Level 1 (%) Level 5 (%) Level 6 (%) Levels 5+6 (%)
Singapore 562 6.0 19.7 9.6 29.3
South Korea 561 4.8 20.0 7.6 27.6
Japan 552 5.3 16.9 5.3 22.2
Macao 540 6.0 13.8 2.8 16.6
Hong Kong 540 7.1 14.2 5.1 19.3
Shanghai 536 7.5 14.1 4.1 18.2
Taiwan 534 8.2 14.6 3.8 18.4
Canada 526 9.6 12.4 5.1 17.5
Australia 523 10.5 12.3 4.4 16.7
Finland 523 9.9 11.4 3.6 15.0
England (UK) 517 10.8 10.9 3.3 14.2
Estonia 515 11.1 9.5 2.2 11.7
France 511 9.8 9.9 2.1 12.0
Netherlands 511 11.2 10.9 2.7 13.6
Italy 510 11.2 8.9 1.8 10.7
Czech Republic 509 11.9 9.5 2.4 11.9
Germany 509 11.8 10.1 2.7 12.8
USA 508 12.5 8.9 2.7 11.6
Belgium 508 11.6 11.4 3.0 14.4
Austria 506 11.9 9.0 2.0 11.0
Norway 503 13.2 9.7 3.4 13.1
Ireland 498 13.3 7.3 2.1 9.4
OECD Ave. 500 13.2 8.9 2.5 11.4


The jurisdictions at the top of the table also have a familiar profile, with a small ‘tail’ of low performance combined with high levels of performance at the top end.

Nine of the top ten have fewer than 10% of learners at proficiency level 1, though only South Korea pushes below 5%.

Five of the top ten have 5% or more of their learners at proficiency level 6, but only Singapore and South Korea have a higher percentage at level 6 than level 1 (with Japan managing the same percentage at both levels).

The top three performers – Singapore, South Korea and Japan – are the only three jurisdictions that have over 20% of their learners at proficiency levels 5 and 6 together.

South Korea slightly outscores Singapore at level 5 (20.0% against 19.7%). Japan is in third place, followed by Taiwan, Hong Kong and Shanghai.

But at level 6, Singapore has a clear lead, followed by South Korea, Japan, Hong Kong and Canada respectively.

England’s overall place in the table is relatively consistent on each of these measures, but the gaps between England and the top performers vary considerably.

The best have fewer than half England’s proportion of learners at proficiency level 1, almost twice as many learners at proficiency level 5 and more than twice as many at proficiency levels 5 and 6 together. But at proficiency level 6 they have almost three times as many learners as England.
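The arithmetic behind those comparisons can be checked directly against the relevant rows of Table 2 – a worked verification rather than new analysis:

```python
# Percentages at each proficiency level, taken from Table 2 above.
england     = {"L1": 10.8, "L5": 10.9, "L6": 3.3, "L5+6": 14.2}
south_korea = {"L1": 4.8,  "L5": 20.0, "L6": 7.6, "L5+6": 27.6}
singapore   = {"L1": 6.0,  "L5": 19.7, "L6": 9.6, "L5+6": 29.3}

print(england["L1"] / south_korea["L1"])    # 2.25: under half England's tail
print(south_korea["L5"] / england["L5"])    # 1.83: almost twice as many at level 5
print(singapore["L5+6"] / england["L5+6"])  # 2.06: more than twice at levels 5 and 6
print(singapore["L6"] / england["L6"])      # 2.91: almost three times at level 6
```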

Chart 1 below compares performance on these four measures across my sample of eleven jurisdictions.

All but Ireland are comfortably below the OECD average for the percentage of learners at proficiency level 1. The USA and Ireland are atypical in having a bigger tail (proficiency level 1) than their cadres of high achievers (levels 5 and 6 together).

At level 5 all but Ireland and the USA are above the OECD average, but the USA leapfrogs the OECD average at level 6.

There is a fairly strong correlation between the proportions of learners achieving the highest proficiency thresholds and average performance in each jurisdiction. However, Canada stands out by having an atypically high proportion of students at level 6.


Chart 1: PISA 2012 Problem-solving: Comparing performance at specified proficiency levels

Problem solving chart 1


PISA’s Report discusses the variation in problem-solving performance within different jurisdictions. However it does so without reference to the proficiency levels, so we do not know to what extent these findings apply equally to high achievers.

Amongst those above the OECD average, those with least variation are Macao, Japan, Estonia, Shanghai, Taiwan, Korea, Hong Kong, USA, Finland, Ireland, Austria, Singapore and the Czech Republic respectively.

Perhaps surprisingly, the degree of variation in Finland is identical to that in the USA and Ireland, while Estonia has less variation than many of the Asian jurisdictions. Singapore, while top of the performance table, is only just above the OECD average in terms of variation.

The countries below the OECD average on this measure – listed in order of increasing variation – include England, Australia and Canada, though all three are relatively close to the OECD average. So these three countries and Singapore are all relatively close together.

Gender and socio-economic differences amongst high achievers


Gender differences

On average across OECD jurisdictions, boys score seven points higher than girls on the problem solving assessment. There is also more variation amongst boys than girls.

Across the OECD participants, 3.1% of boys achieved proficiency level 6 but only 1.8% of girls did so. This imbalance was repeated at proficiency level 5, achieved by 10% of boys and 7.7% of girls.

The table and chart below show the variations within my sample of eleven countries. The performance of boys exceeds that of girls in all cases, except in Finland at proficiency level 5, and in that instance the gap in favour of girls is relatively small (0.4%).


Table 3: PISA Problem-solving: Gender variation at top proficiency levels

Jurisdiction Level 5 (%) Level 6 (%) Levels 5+6 (%)
  Boys Girls Diff Boys Girls Diff Boys Girls Diff
Singapore 20.4 19.0 +1.4 12.0 7.1 +4.9 32.4 26.1 +6.3
South Korea 21.5 18.3 +3.2 9.4 5.5 +3.9 30.9 23.8 +7.1
Hong Kong 15.7 12.4 +3.3 6.1 3.9 +2.2 21.8 16.3 +5.5
Shanghai 17.0 11.4 +5.6 5.7 2.6 +3.1 22.7 14.0 +8.7
Taiwan 17.3 12.0 +5.3 5.0 2.5 +2.5 22.3 14.5 +7.8
Canada 13.1 11.8 +1.3 5.9 4.3 +1.6 19.0 16.1 +2.9
Australia 12.6 12.0 +0.6 5.1 3.7 +1.4 17.7 15.7 +2.0
Finland 11.2 11.6 -0.4 4.1 3.0 +1.1 15.3 14.6 +0.7
England (UK) 12.1 9.9 +2.2 3.6 3.0 +0.6 15.7 12.9 +2.8
USA 9.8 7.9 +1.9 3.2 2.3 +0.9 13.0 10.2 +2.8
Ireland 8.0 6.6 +1.4 3.0 1.1 +1.9 11.0 7.7 +3.3
OECD Average 10.0 7.7 +2.3 3.1 1.8 +1.3 13.1 9.5 +3.6

There is no consistent pattern in whether boys are more heavily over-represented at proficiency level 5 than proficiency level 6, or vice versa.

There is a bigger difference at level 6 than at level 5 in Singapore, South Korea, Canada, Australia, Finland and Ireland, but the reverse is true in the five remaining jurisdictions.

At level 5, boys are in the greatest ascendancy in Shanghai and Taiwan while, at level 6, this is true of Singapore and South Korea.

When proficiency levels 5 and 6 are combined, all five of the Asian tigers show a difference in favour of males of 5.5% or higher, significantly in advance of the six ‘Western’ countries in the sample and significantly ahead of the OECD average.

Amongst the six ‘Western’ representatives, boys have the biggest advantage at proficiency level 5 in England, while at level 6 boys in Ireland have the biggest advantage.

Within this group of jurisdictions, the gap between boys and girls at level 6 is comfortably the smallest in England. But, in terms of performance at proficiency levels 5 and 6 together, Finland is ahead.
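The claim a few paragraphs back about where the gap is bigger at level 6 than at level 5 can be verified directly from the difference columns of Table 3:

```python
# Boys-minus-girls gaps in percentage points from Table 3: (level 5, level 6).
gaps = {
    "Singapore": (1.4, 4.9), "South Korea": (3.2, 3.9), "Hong Kong": (3.3, 2.2),
    "Shanghai": (5.6, 3.1), "Taiwan": (5.3, 2.5), "Canada": (1.3, 1.6),
    "Australia": (0.6, 1.4), "Finland": (-0.4, 1.1), "England": (2.2, 0.6),
    "USA": (1.9, 0.9), "Ireland": (1.4, 1.9),
}

bigger_at_6 = sorted(j for j, (l5, l6) in gaps.items() if l6 > l5)
print(bigger_at_6)
# ['Australia', 'Canada', 'Finland', 'Ireland', 'Singapore', 'South Korea']
```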


Chart 2: PISA Problem-solving: Gender variation at top proficiency levels

Problem solving chart 2

The Report includes a generic analysis of gender differences in performance for boys and girls with similar levels of performance in reading, maths and science.

It concludes that girls perform above their expected level in both England and Australia (though the difference is statistically significant only in the latter).

The Report comments:

‘It is not clear whether one should expect there to be a gender gap in problem solving. On the one hand, the questions posed in the PISA problem-solving assessment were not grounded in content knowledge, so boys’ or girls’ advantage in having mastered a particular subject area should not have influenced results. On the other hand… performance in problem solving is more closely related to performance in mathematics than to performance in reading. One could therefore expect the gender difference in performance to be closer to that observed in mathematics – a modest advantage for boys, in most countries – than to that observed in reading – a large advantage for girls.’


Socio-economic differences

The Report considers variations in performance against PISA’s index of economic, social and cultural status (ESCS), finding them weaker overall than for reading, maths and science.

It calculates that the percentage of variation in performance attributable to these factors is about 10.6% (compared with 14.9% in maths, 14.0% in science and 13.2% in reading).

Amongst the eleven jurisdictions in my sample, the weakest associations were found in Canada (4%), followed by Hong Kong (4.9%), South Korea (5.4%), Finland (6.5%), England (7.8%), Australia (8.5%), Taiwan (9.4%), the USA (10.1%) and Ireland (10.2%), in that order. All of these fell below the OECD average.

Perhaps surprisingly, there were above-average associations in Shanghai (14.1%) and, to a lesser extent (and less surprisingly), in Singapore (11.1%).
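
These ‘percentage of variation’ figures are, in essence, R² values from regressing performance on the index. A minimal sketch of that calculation with synthetic data (PISA’s actual methodology uses plausible values and survey weights, which are omitted here):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 5_000
index = rng.normal(size=n)                       # synthetic ESCS-style index
score = 500 + 25 * index + rng.normal(0, 90, n)  # synthetic assessment score

# For simple linear regression, R-squared equals the squared
# Pearson correlation between predictor and outcome.
r = np.corrcoef(index, score)[0, 1]
print(f"Share of variation attributable to the index: {r ** 2:.1%}")
```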

The Report suggests that students with parents working in semi-skilled and elementary occupations tend to perform above their expected level in problem-solving in Taiwan, England, Canada, the USA, Finland and Australia (in that order – with Australia closest to the OECD average).

The jurisdictions where these students tend to underperform their expected level are – in order of severity – Ireland, Shanghai, Singapore, Hong Kong and South Korea.

A parallel presentation on the Report provides some additional data about the performance in different countries of what the OECD calls ‘resilient’ students – those in the bottom quartile of the ESCS but in the top quartile by performance, after accounting for socio-economic status.

It supplies the graph below, which shows all the Asian countries in my sample clustered at the top, but also with significant gaps between them. Canada is the highest-performing of the remainder in my sample, followed by Finland, Australia, England and the USA respectively. Ireland is some way below the OECD average.

[Graph: proportion of ‘resilient’ students by jurisdiction]
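
To make the ‘resilient’ definition concrete, here is a minimal sketch with synthetic student-level data (note that the published figures also adjust performance for socio-economic status before ranking, a step simplified away here):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1_000
escs = rng.normal(size=n)                       # synthetic socio-economic index
score = 480 + 35 * escs + rng.normal(0, 80, n)  # synthetic problem-solving score

# 'Resilient' students: bottom quartile of socio-economic status,
# top quartile of performance.
disadvantaged = escs <= np.quantile(escs, 0.25)
high_performing = score >= np.quantile(score, 0.75)
resilient = disadvantaged & high_performing

print(f"Resilient students: {resilient.mean():.1%} of the cohort")
```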

Unfortunately, I can find no analysis of how performance varies according to socio-economic variables at each proficiency level. It would be useful to see which jurisdictions have the smallest ‘excellence gaps’ at levels 5 and 6 respectively.


How different jurisdictions perform on different aspects of problem-solving

The Report’s analysis of comparative strengths and weaknesses in different elements of problem-solving does not take account of variations at different proficiency levels.

It explains that aspects of the assessment were found easier by students in different jurisdictions, employing a four-part distinction between:

‘Exploring and understanding. The objective is to build mental representations of each of the pieces of information presented in the problem. This involves:

  • exploring the problem situation: observing it, interacting with it, searching for information and finding limitations or obstacles; and
  • understanding given information and, in interactive problems, information discovered while interacting with the problem situation; and demonstrating understanding of relevant concepts.

Representing and formulating. The objective is to build a coherent mental representation of the problem situation (i.e. a situation model or a problem model). To do this, relevant information must be selected, mentally organised and integrated with relevant prior knowledge. This may involve:

  • representing the problem by constructing tabular, graphic, symbolic or verbal representations, and shifting between representational formats; and
  • formulating hypotheses by identifying the relevant factors in the problem and their inter-relationships; and organising and critically evaluating information.

Planning and executing. The objective is to use one’s knowledge about the problem situation to devise a plan and execute it. Tasks where “planning and executing” is the main cognitive demand do not require any substantial prior understanding or representation of the problem situation, either because the situation is straightforward or because these aspects were previously solved. “Planning and executing” includes:

  • planning, which consists of goal setting, including clarifying the overall goal, and setting subgoals, where necessary; and devising a plan or strategy to reach the goal state, including the steps to be undertaken; and
  • executing, which consists of carrying out a plan.

Monitoring and reflecting. The objective is to regulate the distinct processes involved in problem solving, and to critically evaluate the solution, the information provided with the problem, or the strategy adopted. This includes:

  • monitoring progress towards the goal at each stage, including checking intermediate and final results, detecting unexpected events, and taking remedial action when required; and
  • reflecting on solutions from different perspectives, critically evaluating assumptions and alternative solutions, identifying the need for additional information or clarification and communicating progress in a suitable manner.’

Amongst my sample of eleven jurisdictions:

  • ‘Exploring and understanding’ items were found easier by students in Singapore, Hong Kong, South Korea, Australia, Taiwan and Finland. 
  • ‘Representing and formulating’ items were found easier in Taiwan, Shanghai, South Korea, Singapore, Hong Kong, Canada and Australia. 
  • ‘Planning and executing’ items were found easier in Finland only. 
  • ‘Monitoring and reflecting’ items were found easier in Ireland, Singapore, the USA and England.

The Report concludes:

‘This analysis shows that, in general, what differentiates high-performing systems, and particularly East Asian education systems, such as those in Hong Kong-China, Japan, Korea [South Korea], Macao-China, Shanghai-China, Singapore and Chinese Taipei [Taiwan], from lower-performing ones, is their students’ high level of proficiency on “exploring and understanding” and “representing and formulating” tasks.’

It also distinguishes those jurisdictions that perform best on interactive problems, requiring students to discover some of the information required to solve the problem, rather than being presented with all the necessary information. This seems to be the nearest equivalent to a measure of creativity in problem solving.

Comparative strengths and weaknesses in respect of interactive tasks are captured in the following diagram.

[Diagram: relative strengths on interactive and knowledge-acquisition tasks]

One can see that several of my sample – Ireland, the USA, Canada, Australia, South Korea and Singapore – are placed in the top right-hand quarter of the diagram, indicating stronger than expected performance on both interactive and knowledge acquisition tasks.

England is stronger than expected on the former but not on the latter.

Jurisdictions that are weaker than expected on interactive tasks only include Hong Kong, Taiwan and Shanghai, while Finland is weaker than expected on both.

We have no information about whether these distinctions were maintained at different proficiency levels.


Comparing jurisdictions’ performance at higher proficiency levels

Table 4 and Charts 3 and 4 below show variations in the performance of countries in my sample across the four different assessments at level 6, the highest proficiency level.

The charts in particular emphasise how far ahead the Asian Tigers are in maths at this level, compared with the cross-jurisdictional variation in the other three assessments.

In all five cases, each ‘Asian Tiger’s’ level 6 performance in maths also vastly exceeds its level 6 performance in the other three assessments. The proportion of students achieving level 6 proficiency in problem solving lags far behind, even though there is a fairly strong correlation between these two assessments (see below).

In contrast, all the ‘Western’ jurisdictions in the sample – with the sole exception of Ireland – achieve a higher percentage at proficiency level 6 in problem solving than they do in maths, although the difference is always less than a full percentage point. (Even in Ireland the difference is only 0.1 of a percentage point in favour of maths.)

Shanghai is the only jurisdiction in the sample which has more students achieving proficiency level 6 in science than in problem solving. It also has the narrowest gap between level 6 performance in problem solving and in reading.

Meanwhile, England, the USA, Finland and Australia all have broadly similar profiles across the four assessments, with the largest percentage of level 6 performers in problem solving, followed by maths, science and reading respectively.

The proximity of the lines marking level 6 performance in reading and science is also particularly evident in the second chart below.


Table 4: Percentage achieving proficiency Level 6 in each domain

Jurisdiction  PS L6  Ma L6  Sci L6  Re L6
Singapore 9.6 19.0 5.8 5.0
South Korea 7.6 12.1 1.1 1.6
Hong Kong 5.1 12.3 1.8 1.9
Shanghai 4.1 30.8 4.2 3.8
Taiwan 3.8 18.0 0.6 1.4
Canada 5.1 4.3 1.8 2.1
Australia 4.4 4.3 2.6 1.9
Finland 3.6 3.5 3.2 2.2
England (UK) 3.3 3.1 1.9 1.3
USA 2.7 2.2 1.1 1.0
Ireland 2.1 2.2 1.5 1.3
OECD Average 2.5 3.3 1.2 1.1

 Charts 3 and 4: Percentage achieving proficiency level 6 in each domain

[Charts 3 and 4]

The pattern is materially different at proficiency level 5 and above, as the table and chart below illustrate. These also include the proportion of all-rounders, who achieved proficiency level 5 or above in each of maths, science and reading (but not in problem-solving).

The lead enjoyed by the ‘Asian Tigers’ in maths is somewhat less pronounced. The gap between performance within these jurisdictions on the different assessments also tends to be less marked, although maths accounts for comfortably the largest proportion of level 5+ performance in all five cases.

Conversely, level 5+ performance on the different assessments is typically much closer in the ‘Western’ countries. Problem solving leads the way in Australia, Canada, England and the USA, but in Finland science is in the ascendant and reading is strongest in Ireland.

Some jurisdictions have a far ‘spikier’ profile than others. Ireland is closest to achieving equilibrium across all four assessments. Australia and England share very similar profiles, though Australia outscores England in each assessment.

The second chart in particular shows how Shanghai’s ‘spike’ applies in all the other three assessments but not in problem solving.

Table 5: Percentage achieving Proficiency level 5 and above in each domain

Jurisdiction  PS L5+  Ma L5+  Sci L5+  Re L5+  Ma + Sci + Re L5+
Singapore 29.3 40.0 22.7 21.2 16.4
South Korea 27.6 30.9 11.7 14.2 8.1
Hong Kong 19.3 33.4 16.7 16.8 10.9
Shanghai 18.2 55.4 27.2 25.1 19.6
Taiwan 18.4 37.2 8.4 11.8 6.1
Canada 17.5 16.4 11.3 12.9 6.5
Australia 16.7 14.8 13.5 11.7 7.6
Finland 15.0 15.2 17.1 13.5 7.4
England (UK) 14.2 12.4 11.7 9.1 5.7* all UK
USA 11.6 9.0 7.4 7.9 4.7
Ireland 9.4 10.7 10.8 11.4 5.7
OECD Average 11.4 12.6 8.4 8.4 4.4


Charts 5 and 6: Percentage Achieving Proficiency Level 5 and above in each domain

[Charts 5 and 6]

How high-achieving problem solvers perform in other assessments


Correlations between performance in different assessments

The Report provides an analysis of the proportion of students achieving proficiency levels 5 and 6 on problem solving who also achieved that outcome on one of the other three assessments: reading, maths and science.

It argues that problem solving is a distinct and separate domain. However:

‘On average, about 68% of the problem-solving score reflects skills that are also measured in one of the three regular assessment domains. The remaining 32% reflects skills that are uniquely captured by the assessment of problem solving. Of the 68% of variation that problem-solving performance shares with other domains, the overwhelming part is shared with all three regular assessment domains (62% of the total variation); about 5% is uniquely shared between problem solving and mathematics only; and about 1% of the variation in problem solving performance hinges on skills that are specifically measured in the assessments of reading or science.’

It discusses the correlation between these different assessments:

‘A key distinction between the PISA 2012 assessment of problem solving and the regular assessments of mathematics, reading and science is that the problem-solving assessment does not measure domain-specific knowledge; rather, it focuses as much as possible on the cognitive processes fundamental to problem solving. However, these processes can also be used and taught in the other subjects assessed. For this reason, problem-solving tasks are also included among the test units for mathematics, reading and science, where their solution requires expert knowledge specific to these domains, in addition to general problem-solving skills.

It is therefore expected that student performance in problem solving is positively correlated with student performance in mathematics, reading and science. This correlation hinges mostly on generic skills, and should thus be about the same magnitude as between any two regular assessment subjects.’

These overall correlations are set out in the table below, which shows that maths has a higher correlation with problem solving than either science or reading, but that this correlation is lower than those between the three subject-related assessments.

The correlation between maths and science (0.90) is comfortably the strongest (despite the relationship between reading and science at the top end of the distribution noted above).

[Table: correlations between performance in the four assessments]
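
For readers curious how such a correlation matrix is derived from student-level results, here is a minimal sketch with synthetic scores (the shared ‘ability’ factor and all coefficients are purely illustrative):

```python
import numpy as np

rng = np.random.default_rng(2)
n = 2_000
ability = rng.normal(size=n)  # latent factor shared across domains

# Synthetic domain scores: a heavier loading on 'ability' and less
# noise yields a higher pairwise correlation.
maths   = 500 + 80 * ability + rng.normal(0, 27, n)
science = 500 + 80 * ability + rng.normal(0, 27, n)
solving = 500 + 70 * ability + rng.normal(0, 55, n)

corr = np.corrcoef(np.vstack([maths, science, solving]))
labels = ["maths", "science", "problem solving"]
for i in range(len(labels)):
    for j in range(i + 1, len(labels)):
        print(f"{labels[i]} v {labels[j]}: {corr[i, j]:.2f}")
```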

Correlations are broadly similar across jurisdictions, but the Report notes that the association is comparatively weak in some of these, including Hong Kong. Students here are more likely to perform poorly on problem solving and well on other assessments, or vice versa.

There is also broad consistency at different performance levels, but the Report identifies those jurisdictions where students with the same level of performance exceed expectations in relation to problem-solving performance. These include South Korea, the USA, England, Australia, Singapore and – to a lesser extent – Canada.

Those with lower than expected performance include Shanghai, Ireland, Hong Kong, Taiwan and Finland.

The Report notes:

‘In Shanghai-China, 86% of students perform below the expected level in problem solving, given their performance in mathematics, reading and science. Students in these countries/economies struggle to use all the skills that they demonstrate in the other domains when asked to perform problem-solving tasks.’

However, there is variation according to students’ maths proficiency:

  • Jurisdictions whose high scores on problem solving are mainly attributable to strong performers in maths include Australia, England and the USA. 
  • Jurisdictions whose high scores on problem solving are more attributable to weaker performers in maths include Ireland. 
  • Jurisdictions whose lower scores in problem solving are more attributable to weakness among strong performers in maths include South Korea. 
  • Jurisdictions whose lower scores in problem solving are more attributable to weakness among weak performers in maths include Hong Kong and Taiwan. 
  • Jurisdictions whose weakness in problem solving is fairly consistent regardless of performance in maths include Shanghai and Singapore.

The Report adds:

‘In Italy, Japan and Korea, the good performance in problem solving is, to a large extent, due to the fact that lower performing students score beyond expectations in the problem-solving assessment….This may indicate that some of these students perform below their potential in mathematics; it may also indicate, more positively, that students at the bottom of the class who struggle with some subjects in school are remarkably resilient when it comes to confronting real-life challenges in non-curricular contexts…

In contrast, in Australia, England (United Kingdom) and the United States, the best students in mathematics also have excellent problem-solving skills. These countries’ good performance in problem solving is mainly due to strong performers in mathematics. This may suggest that in these countries, high performers in mathematics have access to – and take advantage of – the kinds of learning opportunities that are also useful for improving their problem-solving skills.’

What proportion of high performers in problem solving are also high performers in one of the other assessments?

The percentages of high achieving students (proficiency level 5 and above) in my sample of eleven jurisdictions who perform equally highly in each of the three domain-specific assessments are shown in Table 6 and Chart 7 below.

These show that Shanghai leads the way in each case, with 98.0% of all students who achieve proficiency level 5+ in problem solving also achieving the same outcome in maths. For science and reading the comparable figures are 75.1% and 71.7% respectively.

Taiwan is the nearest competitor in respect of problem solving plus maths, Finland in the case of problem solving plus science and Ireland in the case of problem solving plus reading.

South Korea, Taiwan and Canada are atypical of the rest in recording a higher proportion of problem solving plus reading at this level than problem solving plus science.

Singapore, Shanghai and Ireland are the only three jurisdictions that score above 50% on all three of these combinations. However, the only jurisdictions that exceed the OECD averages in all three cases are Singapore, Hong Kong, Shanghai and Finland.

Table 6: PISA problem-solving: Percentage of students achieving proficiency level 5+ in domain-specific assessments

Jurisdiction  PS + Ma  PS + Sci  PS + Re
Singapore 84.1 57.0 50.2
South Korea 73.5 34.1 40.3
Hong Kong 79.8 49.4 48.9
Shanghai 98.0 75.1 71.7
Taiwan 93.0 35.3 43.7
Canada 57.7 43.9 44.5
Australia 61.3 54.9 47.1
Finland 66.1 65.4 49.5
England (UK) 59.0 52.8 41.7
USA 54.6 46.9 45.1
Ireland 59.0 57.2 52.0
OECD Average 63.5 45.7 41.0

Chart 7: PISA Problem-solving: Percentage of students achieving proficiency level 5+ in domain-specific assessments

[Chart 7]
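
Note that the figures in Table 6 are conditional percentages: of the students at level 5+ in problem solving, the share who also reach level 5+ in the other domain. A minimal sketch of that computation over synthetic top-performer flags:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 10_000
# Synthetic level 5+ flags: problem solving, then maths with a
# higher hit rate among strong problem solvers.
ps = rng.random(n) < 0.15
ma = np.where(ps, rng.random(n) < 0.70, rng.random(n) < 0.08)

# Conditional share: P(level 5+ in maths | level 5+ in problem solving).
share = (ps & ma).sum() / ps.sum()
print(f"Of level 5+ problem solvers, {share:.1%} are also level 5+ in maths")
```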

What proportion of students achieve highly in one or more assessments?

Table 7 and Chart 8 below show how many students in each of my sample achieved proficiency level 5 or higher in problem solving only, in problem solving and one or more other assessments, in one or more assessments but not problem solving, and in at least one assessment (i.e. the total of the three preceding columns).

I have also repeated in the final column the percentage achieving this proficiency level in each of maths, science and reading. (PISA has not released information about the proportion of students who achieved this feat across all four assessments.)

These reveal that the percentages of students who achieve proficiency level 5+ only in problem solving are very small, ranging from 0.3% in Shanghai to 6.7% in South Korea.

Conversely, the percentages of students achieving proficiency level 5+ in any one of the other assessments but not in problem solving are typically significantly higher, ranging from 4.5% in the USA to 38.1% in Shanghai.

There is quite a bit of variation in whether jurisdictions score more highly on ‘problem solving and at least one other’ (second column) or ‘at least one other excluding problem solving’ (third column).

More importantly, the fourth column shows that the jurisdiction with the most students achieving proficiency level 5 or higher in at least one assessment is clearly Shanghai, followed by Singapore, Hong Kong, South Korea and Taiwan in that order.

The proportion of students achieving this outcome in Shanghai is close to three times the OECD average, more than twice the rate achieved in any of the ‘Western’ countries and roughly three and a half times the rate achieved in the USA.

The same is true of the proportion of students achieving this level in the three domain-specific assessments.

On this measure, South Korea and Taiwan fall significantly behind their Asian competitors, and the latter is overtaken by Australia, Finland and Canada.


Table 7: Percentage achieving proficiency level 5+ in different combinations of PISA assessments

Jurisdiction  PS only %  PS + 1 or more %  1+ but not PS %  L5+ in at least one %  L5+ in Ma + Sci + Re %
Singapore 4.3 25.0 16.5 45.8 16.4
South Korea 6.7 20.9 11.3 38.9 8.1
Hong Kong 3.4 15.9 20.5 39.8 10.9
Shanghai 0.3 17.9 38.1 56.3 19.6
Taiwan 1.2 17.1 20.4 38.7 6.1
Canada 5.5 12.0 9.9 27.4 6.5
Australia 4.7 12.0 7.7 24.4 7.6
Finland 3.0 12.0 11.9 26.9 7.4
England (UK) 4.4 9.8 6.8 21.0 5.7* all UK
USA 4.1 7.5 4.5 16.1 4.7
Ireland 2.6 6.8 10.1 19.5 5.7
OECD Average 3.1 8.2 8.5 19.8 4.4

Chart 8: Percentage achieving proficiency level 5+ in different combinations of PISA assessments

[Chart 8]
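
Since the first three columns of Table 7 are mutually exclusive categories, they should sum to the ‘at least one’ column. A quick consistency check against a few of the published rows:

```python
# (PS only, PS + 1 or more, 1+ but not PS, L5+ in at least one),
# transcribed from Table 7.
rows = {
    "Singapore":    (4.3, 25.0, 16.5, 45.8),
    "Shanghai":     (0.3, 17.9, 38.1, 56.3),
    "England (UK)": (4.4, 9.8, 6.8, 21.0),
    "OECD Average": (3.1, 8.2, 8.5, 19.8),
}

for name, (ps_only, ps_plus_other, other_not_ps, at_least_one) in rows.items():
    total = ps_only + ps_plus_other + other_not_ps
    # Tolerate small discrepancies caused by rounding to one decimal place.
    status = "consistent" if abs(total - at_least_one) <= 0.15 else "check"
    print(f"{name}: {total:.1f} v {at_least_one} -> {status}")
```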

The Report comments:

‘The proportion of students who reach the highest levels of proficiency in at least one domain (problem solving, mathematics, reading or science) can be considered a measure of the breadth of a country’s/economy’s pool of top performers. By this measure, the largest pool of top performers is found in Shanghai-China, where more than half of all students (56%) perform at the highest levels in at least one domain, followed by Singapore (46%), Hong Kong-China (40%), Korea and Chinese Taipei (39%)…Only one OECD country, Korea, is found among the five countries/economies with the largest proportion of top performers. On average across OECD countries, 20% of students are top performers in at least one assessment domain.

The proportion of students performing at the top in problem solving and in either mathematics, reading or science, too, can be considered a measure of the depth of this pool. These are top performers who combine the mastery of a specific domain of knowledge with the ability to apply their unique skills flexibly, in a variety of contexts. By this measure, the deepest pools of top performers can be found in Singapore (25% of students), Korea (21%), Shanghai-China (18%) and Chinese Taipei (17%). On average across OECD countries, only 8% of students are top performers in both a core subject and in problem solving.’

There is no explanation of why proficiency level 5 should be equated by PISA with the breadth of a jurisdiction’s ‘pool of top performers’. The distinction between proficiency levels 5 and 6 in this respect requires further discussion.

In addition to updated ‘all-rounder’ data showing what proportion of students achieved this outcome across all four assessments, it would be really interesting to see the proportion of students achieving at proficiency level 6 across different combinations of these four assessments – and to see what proportion of students achieving that outcome in different jurisdictions are direct beneficiaries of targeted support, such as a gifted education programme.

In the light of this analysis, what are jurisdictions’ priorities for improving problem solving performance?

Leaving aside strengths and weaknesses in different elements of problem solving discussed above, this analysis suggests that the eleven jurisdictions in my sample should address the following priorities:

Singapore has a clear lead at proficiency level 6, but falls behind South Korea at level 5 (though Singapore re-establishes its ascendancy when levels 5 and 6 are considered together). It also has more level 1 performers than South Korea. It should perhaps focus on reducing the size of this tail and pushing through more of its mid-range performers to level 5. There is a pronounced imbalance in favour of boys at level 6, so enabling more girls to achieve the highest level of performance is a clear priority. There may also be a case for prioritising the children of semi-skilled workers.

South Korea needs to focus on getting a larger proportion of its level 5 performers to level 6. This effort should be focused disproportionately on girls, who are significantly under-represented at both levels 5 and 6. South Korea has a very small tail to worry about – and may even be getting close to minimising this. It needs to concentrate on improving the problem solving skills of its stronger performers in maths.

Hong Kong has a slightly bigger tail than Singapore’s but is significantly behind at both proficiency levels 5 and 6. In the case of level 6 it is equalled by Canada. Hong Kong needs to focus simultaneously on reducing the tail and lifting performance across the top end, where girls and weaker performers in maths are a clear priority.

Shanghai has a similar profile to Hong Kong’s in all respects, though with somewhat fewer level 6 performers. It also needs to focus effort simultaneously at the top and the bottom of the distribution. Amongst this sample, Shanghai has the worst under-representation of girls at level 5 and levels 5 and 6 together, so addressing that imbalance is an obvious priority. It also demonstrated the largest variation in performance against PISA’s IESC index, which suggests that it should target young people from disadvantaged backgrounds, as well as the children of semi-skilled workers.

Taiwan is rather similar to Hong Kong and Shanghai, but its tail is slightly bigger and its level 6 cadre slightly smaller, while it does somewhat better at level 5. It may need to focus more at the very bottom, but also at the very top. Taiwan also has a problem with high-performing girls, second only to Shanghai as far as level 5 and levels 5 and 6 together are concerned. However, like Shanghai, it does comparatively better than the other ‘Asian Tigers’ in terms of girls at level 6. It also needs to consider the problem solving performance of its weaker performers in maths.

Canada is the closest western competitor to the ‘Asian Tigers’ in terms of the proportions of students at levels 1 and 5 – and it already outscores Shanghai and Taiwan at level 6. It needs to continue cutting down the tail without compromising achievement at the top end. Canada also has small but significant gender imbalances in favour of boys at the top end.

Australia by comparison is significantly worse than Canada at level 1, broadly comparable at level 5 and somewhat worse at level 6. It too needs to improve scores at the very bottom and the very top. Australia’s gender imbalance is more pronounced at level 6 than level 5.

Finland has the same mean score as Australia but a smaller tail (though not quite as small as Canada’s). It needs to improve across the piece but might benefit from concentrating rather more heavily at the top end. Finland has a slight gender imbalance in favour of girls at level 5, but boys are more in the ascendancy at level 6 than in either England or the USA. As in Australia, this latter point needs addressing.

England has a profile similar to Australia’s, but is less effective at all three selected proficiency levels. It is further behind at the top than at the bottom of the distribution, but needs to work hard at both ends to catch up with the strongest Western performers and maintain its advantage over the USA and Ireland. Gender imbalances are small but nonetheless significant.

The USA has a comparatively long tail of low achievement at proficiency level 1 and, with the exception of Ireland, the fewest high achievers. This profile is very close to the OECD average. As in England, the relatively small size of the gender imbalances in favour of boys does not mean that they can be ignored.

Ireland has the longest tail of low achievement and the smallest proportion of students at proficiency levels 5, 6 and 5 and 6 combined. It needs to improve at both ends of the achievement distribution. Ireland has a larger preponderance of boys at level 6 than its Western competitors and this needs addressing. The limited socio-economic evidence suggests that Ireland should also be targeting the children of parents in semi-skilled and elementary occupations.

So there is further scope for improvement in all eleven jurisdictions. Meanwhile the OECD could usefully provide a more in-depth analysis of high achievers on its assessments that features:

  • Proficiency level 6 performance across the board.
  • Socio-economic disparities in performance at proficiency levels 5 and 6.
  • ‘All-rounder’ achievement at these levels across all four assessments and
  • Correlations between success at these levels and specific educational provision for high achievers including gifted education programmes.


GP

April 2014

What Has Become of the European Talent Network? Part One

This post discusses recent progress by the European Talent Centre towards a European Talent Network.

It is a curtain-raiser for an imminent conference on this topic and poses the critical questions I would like to see addressed at that event.

It should serve as a briefing document for prospective delegates and other interested parties, especially those who want to dig beneath the invariably positive publicity surrounding the initiative.

It continues the narrative strand of posts I have devoted to the Network, concentrating principally on developments since my last contribution in December 2012.

 

The post is organised part thematically and part chronologically and covers the following ground:

  • An updated description of the Hungarian model for talent support and its increasingly complex infrastructure.
  • The origins of the European Talent project and how its scope and objectives have changed since its inception.
  • The project’s advocacy effort within the European Commission and its impact to date.
  • Progress on the European Talent Map and promised annual European Talent Days and conferences.
  • The current scope and effectiveness of the network, its support structures and funding.
  • Key issues and obstacles that need to be addressed.

To improve readability I have divided the text into two sections of broadly equivalent length. Part One is dedicated largely to bullets one to three above, while Part Two deals with bullets three to six.

Previous posts in this series

If I am to do justice to this complex narrative, I must necessarily draw to some extent on material I have already published in earlier posts. I apologise for the repetition, which I have tried to keep to a minimum.

On re-reading those earlier posts and comparing them with this, it is clear that my overall assessment of the EU talent project has shifted markedly since 2010, becoming progressively more troubled and pessimistic.

This seems to me justified by an objective assessment of progress, based exclusively on evidence in the public domain – evidence that I have tried to draw together in these posts.

However, I feel obliged to disclose the influence of personal frustration at this slow progress, as well as an increasing sense of personal exclusion from proceedings – which seems completely at odds with the networking principles on which the project is founded.

I have done my best to control this subjective influence in the assessment below, confining myself as far as possible to an objective interpretation of the facts.

However I refer you to my earlier posts if you wish to understand how I reached this point.

  • In April 2011 I attended the inaugural conference in Budapest, publishing a report on the proceedings and an analysis of the Declaration produced, plus an assessment of the Hungarian approach to talent support as it then was and its potential scalability to Europe as a whole.
  • In December 2012 I described the initial stages of EU lobbying, an ill-fated 2012 conference in Poland, the earliest activities of the European Talent Centre and the evolving relationship between the project and ECHA, the European Council for High Ability.

I will not otherwise comment on my personal involvement, other than to say that I do not expect to attend the upcoming Conference, judging that the benefits of attending will not exceed the costs.

This post conveys more thoroughly and more accurately the points I would have wanted to make during the proceedings, had suitable opportunities been provided.

A brief demographic aside

It is important to provide some elementary information about Hungary’s demographics, to set in context the discussion below of its talent support model and the prospects for Europe-wide scalability.

Hungary is a medium-sized central European country with an area roughly one-third of the UK’s and broadly similar to South Korea or Portugal.

It has a population of around 9.88 million (2013), about a sixth of the UK’s and similar in size to Portugal’s or Sweden’s.

Hungary is the 16th most populous European country, accounting for about 1.4% of the total European population and about 2% of the total population of the European Union (EU).

It is divided into 7 regions and 19 counties, plus the capital, Budapest, which has a population of 1.7 million in its own right.

[Map: regions of Hungary]

Almost 84% of the population are ethnic Hungarians but there is a Roma minority estimated (some say underestimated) at 3.1% of the population.

Approximately 4 million Hungarians are aged below 35 and approximately 3.5m are aged 5-34.

GDP per capita (purchasing power parity) is $19,497 (source: IMF), slightly over half the comparable UK figure.

The Hungarian Talent Support Model

The Hungarian model has grown bewilderingly complex and there is an array of material describing it, often in slightly different terms.

Some of the English language material is not well translated and there are gaps that can be filled only with recourse to documents in Hungarian (which I can only access through online translation tools).

Much of this documentation is devoted to publicising the model as an example of best practice, so it can be somewhat economical with the truth.

The basic framework is helpfully illustrated by this diagram, which appeared in a presentation dating from October 2012.

[Diagram: NTP structure and funding]


It shows how the overall Hungarian National Talent Programme (NTP) comprises a series of time-limited projects paid for by the EU Social Fund, but also a parallel set of activities supported by a National Talent Fund which is fed mainly by the Hungarian taxpayer.

The following sections begin by outlining the NTP, as described in a Parliamentary Resolution dating from 2008.

Secondly, they describe the supporting infrastructure for the NTP as it exists today.

Thirdly, they outline the key features of the time-limited projects: The Hungarian Genius Programme (HGP) (2009-13) and the Talent Bridges Programme (TBP) (2012-14).

Finally, they try to make sense of the incomplete and sometimes conflicting information about the funding allocated to different elements of the NTP.

Throughout this treatment my principal purpose is to show how the European Talent project fits into the overall Hungarian plan, as precursor to a closer analysis of the former in the second half of the post.

I also want to show how the direction of the NTP has shifted since its inception.


The National Talent Programme (NTP) (2008-2028)

The subsections below describe the NTP as envisaged in the original 2008 Parliamentary Resolution. This remains the most thorough exposition of the broader direction of travel that I could find.

Governing principles

The framework set out in the Resolution is built on ten general principles that I can best summarise as follows:

  • Talent support covers the period from early childhood to age 35, so extends well beyond compulsory education.
  • The NTP must preserve the traditions of existing successful talent support initiatives.
  • Talent is complex and so requires a diversity of provision – standardised support is a false economy.
  • There must be equality of access to talent support by geographical area, ethnic and socio-economic background.
  • Continuity is necessary to support individual talents as they change and develop over time; special attention is required at key transition points.
  • In early childhood one must provide opportunities for talent to emerge, but selection on the basis of commitment and motivation becomes increasingly significant and older participants increasingly self-select.
  • Differentiated support is needed to support different levels of talent; there must be opportunities to progress and to step off the programme without loss of esteem.
  • In return for talent support, the talented individual has a social responsibility to support talent development in others.
  • Those engaged in talent support – here called talent coaches – need time and support.
  • Wider social support for talent development is essential to success and sustainability.

Hence the Hungarians are focused on a system-wide effort to promote talent development that extends well beyond compulsory education, but only up to the age of 35. As noted above, if 0-4 year-olds are excluded, this represents an eligible population of about 3.5 million people.

The choice of this age 35 cut-off seems rather arbitrary. Having decided to push beyond compulsory education into adult provision, it is not clear why the principle of lifelong learning is then set aside – or exactly what happens when participants reach their 36th birthdays.

Otherwise the principles above seem laudable and broadly reflect one tradition of effective practice in the field.

Goals

The NTP’s goals are illustrated by this diagram:

[Diagram: NTP goals]


The elements in the lower half of the diagram can be expanded thus:

  • Talent support traditions: support for existing provision; development of new provision to fill gaps; minimum standards and professional development for providers; applying models of best practice; co-operation with ethnic Hungarian programmes outside Hungary (‘cross border programmes’); and ‘systematic exploration and processing of the talent support experiences’ of EU and other countries which excel in this field. 
  • Integrated programmes: compiling and updating a map of the talent support opportunities available in Hungary as well as ‘cross border programmes’; action to support access to the talent map; a ‘detailed survey of the international talent support practice’; networking between providers with cooperation and collaboration managed through a set of talent support councils; monitoring of engagement to secure continuity and minimise drop-out. 
  • Social responsibility: promoting the self-organisation of talented youth; developing their innovation and management skills; securing counselling; piloting a ‘Talent Bonus – Talent Coin’ scheme to record in virtual units the monetary value of support received and provided, leading to consideration of a LETS-type scheme; support for ‘exceptionally talented youth’; improved social integration of talented youth and development of a talent-friendly society. 
  • Equal opportunities: providing targeted information about talent support opportunities; targeted programming for disadvantaged, Roma and disabled people and wider emphasis on integration; supporting the development of Roma talent coaches; and action to secure ‘the desirable gender distribution’. 
  • Enhanced recognition: improving financial support for talent coaches; reducing workload and providing counselling for coaches; improving recognition and celebrating the success of coaches and others engaged in talent support. 
  • Talent-friendly society: awareness-raising activity for parents, family and friends of talented youth; periodic talent days to mobilise support and ‘promote the local utilisation of talent’; promoting talent in the media, as well as international communication about the programme and ‘introduction in both the EU and other countries by exploiting the opportunities provided by Hungary’s EU Presidency in 2011’; ‘preparation for the foreign adaptation of the successful talent support initiatives’ and organisation of EU talent days. 

Hence the goals incorporate a process of learning from European and other international experience, but also one of feeding back to the international community information about the Hungarian talent support effort and extending the model into other European countries.

There is an obvious tension in these goals between preserving the traditions of existing successful initiatives and imposing a framework with minimum standards and built-in quality criteria. This applies equally to the European project discussed below.

The reference to a LETS-type (Local Exchange Trading System) scheme is intriguing, but I could trace nothing about its subsequent development.


Planned Infrastructure

In 2008 the infrastructure proposed to undertake the NTP comprised:

  • A National Talent Co-ordination Board, chaired at Ministerial level, to oversee the programme and to allocate a National Talent Fund (see below).
  • A National Talent Support Circle [I’m not sure whether this should be ‘Council’] consisting of individuals from Hungary and abroad who would promote talent support through professional opportunities, financial contribution or ‘social capital opportunities’.
  • A National Talent Fund comprising a Government contribution and voluntary contributions from elsewhere. The former would include the proceeds of a 1% voluntary income tax levy (being one of the good causes towards which Hungarian taxpayers could direct this contribution). Additional financial support would come from ‘the talent support-related programmes of the New Hungary Development Plan’.
  • A system of Talent Support Councils to co-ordinate activity at regional and local level.
  • A national network of Talent Points – providers of talent support activity.
  • A biennial review of the programme presented to Parliament, the first being in 2011.

Presumably there have been two of these biennial reviews to date. They would make interesting reading, but I could find no material in English that describes the outcomes.

The NTP Infrastructure Today

The supporting infrastructure as described today has grown considerably more complex and bureaucratic than the basic model above.

  • The National Talent Co-ordination Board continues to oversee the programme as a whole. Its membership is set out here.
  • The National Talent Support Council was established in 2006 and devised the NTP as set out above. Its functions are more substantial than originally described (assuming this is the ‘Circle’ mentioned in the Resolution), although it now seems to be devolving some of these. Until recently at least, the Council: oversaw the national database of talent support initiatives and monitored coverage, matching demand – via an electronic mailing list – with the supply of opportunities; initiated and promoted regional talent days; supported the network of talent points and promoted the development of new ones; invited tenders for niche programmes of various kinds; collected and analysed evidence of best practice and the research literature; and promoted international links paying ‘special attention to the reinforcement of the EU contacts’. The Council has a Chair and six Vice Presidents as well as a Secretary and Secretariat. It operates nine committees: Higher Education, Support for Socially Disadvantaged Gifted People, Innovations, Public Education, Foreign Relations, Public and Media Relations, Theory of Giftedness, Training and Education and Giftedness Network.
  • The National Talent Point has only recently been identified as an entity in its own right, distinct from the National Council. Its role is to maintain the Talent Map and manage the underpinning database. Essentially it seems to have acquired the Council’s responsibilities for delivery, leaving the Council to concentrate on policy. It recently acquired a new website.
  • The Association of Hungarian Talent Support Organizations (MATEHETZ) is also a new addition. Described as ‘a non-profit umbrella organization that legally represents its members and the National Talent Support Council’, it is funded by the National Council and through membership fees. The Articles of Association date from February 2010 and list 10 founding organisations. The Association provides ‘representation’ for the National Council (which I take to mean the membership). It manages the time-limited programmes (see below) as well as the National Talent Point and the European Talent Centre.
  • Talent Support Councils: Different numbers of these are reported. One source says 76; another 65, of which some 25% were newly-established through the programme. Their role seems broadly unchanged, involving local and regional co-ordination, support for professionals, assistance to develop new activities, helping match supply with demand and supporting the tracking of those with talent.
  • Talent Point Network: there were over 1,000 talent points by the end of 2013. (Assuming 3.5 million potential participants, that is a talent point for every 3,500 people.) Talent points are providers of talent support services – whether identification, provision or counselling. They are operated by education providers, the church and a range of other organisations and may have a local, regional or national reach. They join the network voluntarily but are accredited. In 2011 there were reportedly 400 talent points and 200 related initiatives, so there has been strong growth over the past two years.
  • Ambassadors of Talent: Another new addition, introduced by the National Talent Support Council in 2011. There is a separate Ambassador Electing Council which appoints three new ambassadors per year. The current list has thirteen entries and is markedly eclectic.
  • Friends of Talent Club: described in 2011 as ‘a voluntary organisation that holds together those, who are able and willing to support talents voluntarily and serve the issue of talent support…Among them, there are mentors, counsellors and educators, who voluntarily help talented people develop in their professional life. The members of the club can be patrons and/or supporters. “Patrons” are those, who voluntarily support talents with a considerable amount of service. “Supporters” are those, who voluntarily support the movement of talent support with a lesser amount of voluntary work, by mobilizing their contacts or in any other way.’ This sounds similar to the originally envisioned ‘National Talent Support Circle’ [sic]. I could find little more about the activities of this branch of the structure.
  • The European Talent Centre: The National Talent Point says that this:

‘…supports and coordinates European actions in the field of talent support in order to find gifted people and develop their talent in the interest of Europe as a whole and the member states.’

Altogether this is a substantial endeavour requiring large numbers of staff and volunteers and demanding a significant budgetary topslice.

I could find no reliable estimate of the ratio of the running cost to the direct investment in talent support, but there must be cause to question the overall efficiency of the system.

My hunch is that this level of bureaucracy must consume a significant proportion of the overall budget.

Clearly the Hungarian talent support network is a long, long way from being financially self-sustaining, if indeed it ever could be.

[Photo: Hungarian Parliament Building]

The Hungarian Genius Programme (HGP) (2009-13)

Launched in June 2009, the HGP had two principal phases lasting from 2009 to 2011 and from 2011 to 2013. The fundamental purpose was to establish the framework and infrastructure set out in the National Talent Plan.

This English language brochure was published in 2011. It explains that the initial focus is on adults who support talents, establishing a professional network and training experts, as well as creating the network and map of providers.

It mentions that training courses lasting 10 to 30 hours have been developed and accredited in over 80 subjects to:

‘…bring concepts and methods of gifted and talented education into the mainstream and reinforce the professional talent support work… These involve the exchange of experience and knowledge expansion training, as well as programs for those who deal with talented people in developing communities, and awareness-raising courses aimed at the families and environment of young pupils, on the educational, emotional and social needs of children showing special interest and aptitude in one or more subject(s). The aims of the courses are not only the exchange of information but to produce and develop the professional methodology required for teaching talents.’

The brochure also describes an extensive talent survey undertaken in 2010, the publication of several good practice studies and the development of a Talent Loan modelled on the Hungarian student loan scheme.

It lists a seven-strong strategic management group including an expert adviser, project manager, programme co-ordinator and a finance manager. There are also five operational teams, each led by a named manager, one of which focused on ‘international relations: collecting and disseminating international best practices; international networking’.

A subsequent list of programme outputs says:

  • 24,000 new talents were identified
  • The Talent Map was drawn and the Talent Network created (including 867 talent points and 76 talent councils).
  • 23,500 young people took part in ‘subsidised talent support programmes’
  • 118 new ‘local educational talent programmes’ were established
  • 25 professional development publications were written and made freely available
  • 13,987 teachers (about 10% of the total in Hungary) took part in professional development.

Evidence in English of rigorous independent evaluation is, however, limited:

‘The efficiency of the Programme has been confirmed by public opinion polls (increased social acceptance of talent support) and impact assessments (training events: expansion of specialised knowledge and of the methodological tool kit).’


The Talent Bridges Project (TBP) (2012-2014)

TBP began in November 2012 and is scheduled to last until ‘mid-2014’.

The TBP, which initially ran in parallel with the HGP, is mentioned in the 2011 brochure referenced above:

‘In the strategic plan of the Talent Bridges Program to begin in 2012, we have identified three key areas for action: bridging the gaps in the Talent Point network, encouraging talents in taking part in social responsibility issues and increasing media reach. In order to become sustainable, much attention should be payed [sic] to maintaining and expanding the support structure of this system, but the focus will significantly shift towards direct talent care work with the youth.’

Later on it says:

‘Within the framework of the Talent Bridges Program the main objectives are: to further improve the contact system between the different levels of talent support organisations; to develop talent peer communities based on the initiatives coming from young people themselves; to engage talents in taking an active role in social responsibility; to increase media reach in order to enhance the recognition and social support for both high achievers and talent support; and last, but not least, to arrange the preliminary steps of setting up an EU Institute of Talent Support in Budapest.’

A list of objectives published subsequently contains the following items:

  • Creating a national talent registration and tracking system
  • Developing programmes for 3,000 talented young people from disadvantaged backgrounds and with special educational needs
  • Supporting the development of ‘outstanding talents’ in 500 young people
  • Supporting 500 enrichment programmes
  • Supporting ‘the peer age groups of talented young people’
  • Introducing programmes to strengthen interaction between parents, teachers and talented youth, benefiting 5,000 young people
  • Introducing ‘a Talent Marketplace’ to support ‘the direct social utilisation of talent’ involving ‘150 controlled co-operations’
  • Engaging 2,000 mentors in supporting talented young people and training 5,000 talent support facilitators and mentors
  • Launching a communication campaign to reach 100,000 young people and
  • Realising European-Union-wide communication (involving 10 more EU Member States in the Hungarian initiatives, in addition to the current 10, in co-operation with the European Talent Centre in Budapest, established in the summer of 2012).

Various sources describe how the TBP is carved up into a series of sub-projects. The 2013 Brochure ‘Towards a European Talent Support Network’ lists 14 of these, but none mention the European work.

However, what appears to be the bid for TBP (in Hungarian) calls the final sub-project ‘an EU Communications Programme’ (p29), which appears to involve:

  • Raising international awareness of Hungary’s talent support activities
  • Strengthening Hungary’s position in the EU talent network
  • Providing a foreign exchange experience for talented young Hungarians
  • Influencing policy makers.

Later on (p52) this document refers to an international campaign, undertaken with support from the European Talent Centre, targeting international organisations and the majority of EU states.

Work to be covered includes the preparation of promotional publications in foreign languages, the operation of a ‘multilingual online platform’, participation in international conferences (such as those of ECHA, the World Council, IRATDE and ICIE); and ‘establishing new professional collaborations with at least 10 new EU countries or international organisations’.

Funding

It is not a straightforward matter to reconcile the diverse and sometimes conflicting sources of information about the budgets allocated to the National Talent Fund, HGP and the TBP, but this is my best effort, with all figures converted into pounds sterling.


                    2009   2010             2011             2012      2013     2014     Total
NTF                 x      £2.34m or £4.1m  £2.34m or £4.1m  £8.27m    tbc      tbc      tbc
 of which ETC       x      x                x                £80,000   £37,500  £21,350  £138,850
HGP                 £8.0m (2009–11)         £4.6m (2011–13)  –         –        –        £12.6m
TBP                 x      x                x                £5.3m (2012–14)             £5.3m
 of which EU comms  x      x                x                £182,000 (2012–14)          £182,000

Several sources say that the Talent Fund is set to increase in size over the period.

‘This fund has an annual 5 million EUR support from the national budget and an additional amount from tax donations of the citizens of a total sum of 1.5 million EUR in the first year doubled to 3 million EUR and 6 million EUR in the second and third years respectively.’ (Csermely 2012)

That would translate into a budget of £5.4m/£6.7m/£9.2m over the three years in question, but it is not quite clear which three years are included.

Even if we assume that the NTF budget remains the same in 2013 and 2014 as in 2012, the total investment over the period 2009-2014 amounts to approximately £60m.

That works out at about £17 per eligible Hungarian. Unfortunately I could find no reliable estimate of the total number of Hungarians that have benefited directly from the initiative to date.
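
For transparency, the back-of-envelope arithmetic behind these figures is sketched below (the exchange rate of roughly 0.835 GBP per EUR is my inference from the conversions above, not a published figure):

```python
# National Talent Fund support quoted by Csermely (2012), in EUR.
EUR_TO_GBP = 0.835  # assumed rate, inferred from the conversions above
national_budget = 5_000_000
tax_donations = [1_500_000, 3_000_000, 6_000_000]  # years one to three

for year, donation in enumerate(tax_donations, start=1):
    total_gbp = (national_budget + donation) * EUR_TO_GBP
    print(f"Year {year}: ~£{total_gbp / 1e6:.1f}m")

# Rough per-head figure for 2009-2014.
total_investment_gbp = 60_000_000  # estimated total (see above)
eligible_population = 3_500_000    # Hungarians aged 5-34
print(f"~£{total_investment_gbp / eligible_population:.0f} per eligible Hungarian")
```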

On the basis of the figures I have seen, my guesstimate is that the total will be below 10% of the total eligible population – so under 350,000. But I must stress that there is no evidence to support this.

Whether the intention is to reach 100% of the population, or whether there is an in-built assumption that only a proportion of the population is amenable to talent development, is a moot point. I found occasional references to a 25% assumption, but it was never clear whether this was official policy.

Even if this applies, there is clearly a significant scalability challenge even within Hungary’s national programme.

It is also evident that the Hungarians have received some £18m from the European Social Fund over the past five years and have invested at least twice as much of their own money. That is a very significant budget indeed for a country of this size.

Hungary’s reliance on EU funding is so heavy that it will find it very difficult to sustain the current effort if that largesse disappears.

One imagines that they will be seeking continued support from EU sources over the period 2014-2020. But, equally, one would expect the EU to demand robust evidence that continued heavy dependency on EU funding will not be required.

And of course a budget of this size also raises questions about scalability to Europe, given the conspicuous absence of any commensurate figure. There is zero prospect of equivalent funding being available to extend the model across Europe. The total bill would run into billions of pounds!

A ‘Hungarian-lite’ model would not be as expensive, but it would require a considerable budget.

However, it is clear from the table that the present level of expenditure on the European network has been tiny by comparison with the domestic investment – probably not much more than £100,000 per year.

Initially this came from the National Talent Fund budget but it seems as though the bulk is now provided through the ESF, until mid-2014 at least.

This shift seems to have removed the necessity for the European Talent Centre to receive its funding in six-monthly tranches through a perpetual retendering process, since the sums expended from the NTF budget are apparently tied to periods of six months or less.

The European Talent Centre website currently bears the legend:

‘Operation of the European Talent Centre – Budapest between 15th December 2012 and 30th June 2013 is realised with the support of Grant Scheme No. NTP-EUT-M-12 announced by the Institute for Educational Research and Development and the Human Resources Support Manager on commission of the Ministry of Human Resources “To support international experience exchange serving the objectives of the National Talent Programme, and to promote the operation and strategic further development of the European Talent Centre – Budapest”.’

But when I wrote my 2012 review it said:

‘The operation of the European Talent Centre — Budapest is supported from 1 July 2012 through 30 November 2012 by the grant of the National Talent Fund. The grant is realised under Grant Scheme No. NTP-EU-M-12 announced by the Hungarian Institute for Educational Research and Development and the Sándor Wekerle Fund Manager of the Ministry of Administration and Justice on commission of the Ministry of Human Resources, from the Training Fund Segment of the Labour Market Fund.’

A press release confirmed the funding for this period as HUF 30m.

Presumably it will now need to be amended to reflect the arrival of £21.3K under Grant Scheme No. NTP-EU-M-13 – and possibly to reflect income from the ESF-supported TBP too.

A comparison between the Hungarian http://tehetseg.hu/ website and the European Talent Centre website illustrates the huge funding imbalance in favour of the former.

Danube Bend at Visegrad courtesy of Phillipp Weigell

.

Origins of the European Talent Project: Evolution to December 2012

Initial plans

Hungary identified talent support as a focus during its EU Presidency, in the first half of 2011, citing four objectives:

  • A talent support conference scheduled for April 2011
  • A first European Talent Day to coincide with the conference, initially ‘a Hungarian state initiative…expanding it into a public initiative by 2014’.
  • Talent support to feature in EU strategies and documents, as well as a Non-Legislative Act (NLA). It is not specified whether this should be a regulation, decision, recommendation or opinion. (Under EU legislation the two latter categories have no binding force.)
  • An OMC expert group on talent support – ie an international group run under the aegis of the Commission.

The Budapest Declaration

The Conference duly took place, producing a Budapest Declaration on Talent Support in which conference participants:

  • ‘Call the European Commission and the European Parliament to make every effort to officially declare the 25th of March the European Day of the Talented and Gifted.’
  • ‘Stress the importance of…benefits and best practices appearing in documents of the European Commission, the European Council and the European Parliament.’
  • ‘Propose to establish a European Talent Resource and Support Centre in Budapest’ to ‘coordinate joint European actions in the field’.
  • ‘Agree to invite stakeholders from every country of the European Union to convene annually to discuss the developments and current questions in talent support. Upon the invitation of the Government of Poland the next conference will take place in Warsaw.’

The possibility of siting a European Centre anywhere other than Budapest was not seriously debated.

 .

Evolution of a Written Declaration to the EU

Following the Conference an outline Draft Resolution of the European Parliament was circulated for comment.

This proposed that:

 ‘A Europe-wide talent support network should be formed and supported with an on-line and physical presence to support information-sharing, partnership and collaborations. This network should be open for co-operation with all European talent support efforts, use the expertise and networking experiences of existing multinational bodies such as the European Council of High Ability and support both national and multinational efforts to help talents not duplicating existing efforts but providing an added European value.’

Moreover, ‘A European Talent Support Centre should be established…in Budapest’. This:

‘…should have an Advisory Board having the representatives of interested EU member states, all-European talent support-related institutions as well as key figures of European talent support.’

The Centre’s functions are five-fold:

‘Using the minimum bureaucracy and maximising its use of online solutions the European Talent Support Centre should:

  • facilitate the development and dissemination of best curricular and extra-curricular talent support practices;
  • coordinate the trans-national cooperation of Talent Points forming an EU Talent Point network;
  • help  the spread of the know-how of successful organization of Talent Days;
  • organize annual EU talent support conferences in different EU member states overseeing the progress of cooperation in European talent support;
  • provide a continuously updated easy Internet access for all the above information.’

Note the references on the one hand to an inclusive approach, a substantial advisory group (though without the status of an EU-hosted OMC expert group) and a facilitating/co-ordinating role, but also – on the other hand – the direct organisation of annual EU-wide conferences and provision of a sophisticated supporting online environment.

MEPs were lined up to submit the Resolution in Autumn 2011 but, for whatever reason, this did not happen.

Instead a new draft Written Declaration was circulated in January 2012. This called on:

  • Member States to consider measures helping curricular and extracurricular forms of talent support including the training of educational professionals to recognize and help talent;
  • The Commission to consider talent support as a priority of future European strategies, such as the European Research Area and the European Social Fund;
  • Member States and the Commission to support the development of a Europe-wide talent support network, formed by talent support communities, Talent Points and European Talent Centres facilitating cooperation, development and dissemination of best talent support practices;
  • Member States and the Commission to celebrate the European Day of the Talented and Gifted.

The focus has shifted from the Budapest-centric network to EU-led activity amongst member states collectively. Indeed, no specific role for Hungary is mentioned.

There is a new emphasis on professional development and – critically – a reference to ‘European talent centres’. All mention of NLAs and OMC expert groups has disappeared.

There followed an unexplained 11-month delay before a Final Written Declaration was submitted by four MEPs in November 2012.

 .

The 2012 Written Declaration 

There are some subtle adjustments in the final version of WD 0034/2012. The second bullet point has become:

  • ‘The Commission to consider talent support as part of ‘non-formal learning’ and a priority in future European strategies, such as the strategies guiding the European Research Area and the European Social Fund’.

While the third now says:

  • ‘Member States and the Commission to support the development of a Europe-wide talent support network bringing together talent support communities, Talent Points and European Talent Centres in order to facilitate cooperation and the development and dissemination of the best talent support practices.’

And the fourth is revised to:

  • ‘Member States and the Commission to celebrate the European Day of Highly Able People.’

The introduction of a phrase that distinguishes between education and talent support is curious.

CEDEFOP – which operates a European Inventory on Validation of Non-formal and Informal Learning – defines the latter as:

‘…learning resulting from daily work-related, family or leisure activities. It is not organised or structured (in terms of objectives, time or learning support). Informal learning is in most cases unintentional from the learner’s perspective. It typically does not lead to certification.’

One assumes that a distinction is being attempted between learning organised by a school or other formal education setting and that which takes place elsewhere – presumably because EU member states are so fiercely protective of their independence when it comes to compulsory education.

But surely talent support encompasses formal and informal learning alike?

Moreover, the adoption of this terminology appears to rule out any provision that is ‘organised or structured’, excluding huge swathes of activity (including much of that featured in the Hungarian programme). Surely this cannot have been intentional.

Such a distinction is increasingly anachronistic, especially in the case of gifted learners, who might be expected to access their learning from a far richer blend of sources than simply in-school classroom teaching.

Their schools are no longer the sole providers of gifted education, but facilitators and co-ordinators of diverse learning streams.

The ‘gifted and talented’ terminology has also disappeared, presumably on the grounds that it would risk frightening the EU horses.

Both of these adjustments seem to have been a temporary aberration. One wonders who exactly they were designed to accommodate and whether they were really necessary.

 .

Establishment and early activity of the EU Talent Centre in Budapest

The Budapest centre was initially scheduled to launch in February 2012, but funding issues delayed this, first until May and then the end of June.

The press release marking the launch described the long-term goal of the Centre as:

‘…to contribute on the basis of the success of the Hungarian co-operation model to organising the European talent support actors into an open and flexible network overarching the countries of Europe.’

Its mission is to:

‘…offer the organisations and individuals active in an isolated, latent form or in a minor network a framework structure and an opportunity to work together to achieve the following:

  • to provide talent support an emphasis commensurate with its importance in every European country
  • to reduce talent loss to the minimum in Europe,
  • to give talent support a priority role in the transformation of the sector of education; to provide talented young persons access to the most adequate forms of education in every Member State,
  • to make Europe attractive for the talented youth,
  • to create talent-friendly societies in every European country.’

The text continues:

‘It is particularly important that network hubs setting targets similar to those of the European Talent Centre in Budapest should proliferate in the longer term.

The first six months represent the first phase of the work: we shall lay the bases [sic] for establishing the European Talent Support Network. The expected key result is to set up a team of voluntary experts from all over Europe who will contribute to that work and help draw the European talent map.’

But what exactly are these so-called network hubs? We had to wait some time for an explanation.

There was relatively little material on the website at this stage and this was also slow to change.

My December 2012 post summarised progress thus:

‘The Talent Map includes only a handful of links, none in the UK.

The page of useful links is extensive but basically just a very long list, hard to navigate and not very user-friendly. Conversely, ‘best practices’ contains only three resources, all of them produced in house.

The whole design is rather complex and cluttered, several of the pages are too text-heavy and occasionally the English leaves something to be desired.’

 

Here ends the first part of this post. Part Two explains the subsequent development of the ‘network hubs’ concept, charts the continuation of the advocacy effort and reviews progress in delivering the services for which the Budapest Centre is responsible.

It concludes with an overall assessment of the initiative highlighting some of its key weaknesses.

GP

March 2014

How Well Does Gifted Education Use Social Media?

.

This post reviews the scope and quality of gifted education coverage across selected social media.

It uses this evidence base to reflect on progress in the 18 months since I last visited this topic and to establish a benchmark against which to judge future progress.

More specifically, it:

  • Proposes two sets of quality criteria – one for blogs and other websites, the other for effective use of social media;
  • Reviews gifted education-related social media activity:

By a sample of six key players  – the World Council (WCGTC) and the European Council for High Ability (ECHA), NAGC and SENG in the United States and NACE and Potential Plus UK over here

Across the Blogosphere and five of the most influential English language social media platforms – Facebook, Google+, LinkedIn, Twitter and You Tube and

Utilising four content curation tools particularly favoured by gifted educators, namely PaperLi, Pinterest, ScoopIt and Storify.

  • Considers the gap between current practice and the proposed quality criteria – and whether there has been an improvement in the application of social media across the five dimensions of gifted education identified in my previous post.

I should declare at the outset that I am a Trustee of Potential Plus UK and have been working with them to improve their online and social media presence. This post lies outside that project, but some of the underlying research is the same.

.

I have been this way before

This is my second excursion into this territory.

In September 2012 I published a two-part response to the question ‘Can Social Media Help Overcome the Problems We Face in Gifted Education?’

  • Part One outlined an analytical framework based on five dimensions of gifted education. Each dimension is stereotypically associated with a particular stakeholder group though, in reality, each group operates across more than one area. The dimensions (with their associated stakeholder groups in brackets) are: advocacy (parents); learning (learners); policy-making (policy makers); professional development (educators); and research (academics).
  • Part Two used this framework to review the challenges faced by gifted education, to what extent these were being addressed through social media and how social media could be applied more effectively to tackle them. It also outlined the limitations of a social media-driven approach and highlighted some barriers to progress.

The conclusions I reached might be summarised as follows:

  • Many of the problems associated with gifted education are longstanding and significant, but not insurmountable. Social media will not eradicate these problems but can make a valuable contribution towards that end by virtue of their unrivalled capacity to ‘only connect’.
  • Gifted education needs to adapt if it is to thrive in a globalised environment with an increasingly significant online dimension driven by a proliferation of social media. The transition from early adoption to mainstream practice has not yet been effected, but rapid acceleration is necessary otherwise gifted education will be left behind.
  • Gifted education is potentially well-placed to pioneer new developments in social media but there is limited awareness of this opportunity, or the benefits it could bring.

The post was intended to inform discussion at a Symposium at the ECHA Conference in Munster, Germany in September 2012. I published the participants’ presentations and a report on proceedings (which is embedded within a review of the Conference as a whole).

.

Defining quality

I have not previously attempted to pin down what constitutes a high quality website or blog and effective social media usage, not least because so many have gone before me.

But, on reviewing their efforts, I could find none that embodied every dimension I considered important, while several appeared unduly restrictive.

It seems virtually impossible to reconcile these two conflicting pressures, defining quality with brevity but without compromising flexibility. Any effort to pin down quality risks reductionism while also fettering innovation and wilfully obstructing the pioneering spirit.

I am a strong advocate of quality standards in gifted education but, in this context, it seemed beyond my capacity to find or generate the ideal ‘flexible framework’, offering clear guidance without compromising innovation and capacity to respond to widely varying needs and circumstances.

But the project for Potential Plus UK required us to consult stakeholders on their understanding of quality provision, so that we could reconcile any difference between their perceptions and our own.

And, in order to consult effectively, we needed to make a decent stab at the task ourselves.

So I prepared some draft success criteria, drawing on previous efforts I could find online as well as my own experience over the last four years.

I have reproduced the draft criteria below, with slight amendment to make them more universally applicable. The first set – for a blog or website – are generic, while those relating to wider online and social media presence are made specific to gifted education.

.

Draft Quality Criteria for a Blog or Website

1. The site is inviting to regular and new readers alike; its purpose is up front and explicit; as much content as possible is accessible to all.

2. Readers are encouraged to interact with the content through a variety of routes – and to contribute their own (moderated) content.

3. The structure is logical and as simple as possible, supported by clear signposting and search.

4. The design is contemporary, visually attractive but not obtrusive, incorporating consistent branding and a complementary colour scheme. There is no external advertising.

5. The layout makes generous and judicious use of space and images – and employs other media where appropriate.

6. Text is presented in small blocks and large fonts to ensure readability on both tablet and PC.

7. Content is substantial, diverse and includes material relevant to all the site’s key audiences.

8. New content is added weekly; older material is frequently archived (but remains accessible).

9. The site links consistently to – and is linked to consistently by – all other online and social media outlets maintained by the authors.

10. Readers can access site content by multiple routes, including other social media, RSS and email.

.

Draft quality criteria for wider online/social media activity

1. A body’s online and social media presence should be integral to its wider communications strategy which should, in turn, support its purpose, objectives and priorities.

2. It should:

a. Support existing users – whether they are learners, parents/carers, educators, policy-makers or academics – and help to attract new users;

b. Raise the entity’s profile and build its reputation – both nationally and internationally – as a first-rate provider in one or more of the five areas of gifted education;

c. Raise the profile of gifted education as an issue and support campaigning for stronger provision;

d. Help to generate income to support the pursuit of these objectives and the body’s continued existence.

3. It should aim to:

a. Provide a consistently higher quality and more compelling service than its main competitors, generating maximum benefit for minimum cost.

b. Use social media to strengthen interaction with and between users and provide more effective ‘bottom-up’ collaborative support.

c. Balance diversity and reach against manageability and effectiveness, prioritising media favoured by users but resisting pressure to diversify without justification and resource.

d. Keep the body’s online presence coherent and uncomplicated, with clear and consistent signposting so users can navigate quickly and easily between different online locations.

e. Integrate all elements of the body’s online presence, ensuring they are mutually supportive.

4. It should monitor carefully the preferences of users, as well as the development of online and social media services, adjusting the approach only when there is a proven business case for doing so.

.


Perth Pelicans by Gifted Phoenix

.

Applying the Criteria

These draft criteria reflect the compromise I outlined above. They are not the final word. I hope that you will help us to refine them as part of the consultation process now underway and I cannot emphasise too much that they are intended as guidelines, to be applied with some discretion.

I continue to maintain my inalienable right – as well as yours – to break any rules imposed by self-appointed arbiters of quality.

To give an example, readers will know that I am particularly exercised by any suggestion that good blog posts are, by definition, brief!

I also maintain your inalienable right to impose your own personal tastes and preferences alongside (or in place of) these criteria. But you might prefer to do so having reflected on the criteria – and having dismissed them for logical reasons.

There are also some fairly obvious limitations to these criteria.

For example, bloggers like me who use hosted platforms are constrained to some extent by the restrictions imposed by the host, as well as by our preparedness to pay for premium features.

Moreover, the elements of effective online and social media practice have been developed with a not-for-profit charity in mind and some in particular may not apply – or may not apply so rigorously – to other kinds of organisations, or to individuals engaged in similar activity.

In short, these are not templates to be followed slavishly, but rather a basis for reviewing existing provision and prompting discussion about how it might be further improved.

It would be forward of me to attempt a rigorous scrutiny of the six key players mentioned above against each of the criteria – or of any of the host of smaller players, including the 36 active gifted education blogs now listed on my blogroll.

I will confine myself instead to reporting factually all that I can find in the public domain about the activity of the six bodies, comparing and contrasting their approaches with broad reference to the criteria and arriving at an overall impressionistic judgement.

As for the blogs, I will be even more tactful, pointing out that my own quick and dirty self-review of this one – allocating a score out of ten for each of the ten items in the first set of criteria – generated a not very impressive 62%.

Of course I am biased. I still think my blog is better than yours, but now I have some useful pointers to how I might make it even better!

.

Comparing six major players

I wanted to compare the social media profile of the most prominent international organisations, the most active national organisations based in the US (which remains the dominant country in gifted education and in supporting gifted education online) and the two major national organisations in the UK.

I could have widened my reach to include many similar organisations around the world, but that would have made this post still less accessible. It also struck me that I could evidence my key messages by analysing this small sample alone – and that my conclusions would be equally applicable to others in the field, wherever they are located geographically.

My analysis focuses on these organisations’:

  • Principal websites, including any information they contain about their wider online and social media activity;
  • Profile across the five selected social media platforms and use of blogs plus the four featured curational tools.

I have confined myself to universally accessible material, since several of these organisations have additional material available only to their memberships.

I have included only what I understand to be official channels, tied explicitly to the main organisation. I have included accounts that are linked to franchised operations – typically conferences – but have excluded personal accounts that belong to individual employees or trustees of the organisations in question.

Table 1 below shows which of the six organisations are using which social media. The table includes hyperlinks to the principal accounts and I have also repeated these in the commentary that follows.

.

Table 1: The social media used by the sample of six organisations

            WCGTC   ECHA   SENG    NAGC   PPUK   NACE
Blog        No      No     [Yes]   No     No     No
Facebook    Yes     Yes    Yes     Yes    Yes    No
Google+     Yes     No     Yes     No     Yes    Yes
LinkedIn    Yes     No     Yes     No     Yes    No
Twitter     Yes     No     Yes     Yes    Yes    Yes
You Tube    Yes     No     Yes     Yes    No     Yes
Paper.li    Yes     No     No      No     No     No
Pinterest   No      No     No      Yes    Yes    No
Scoop.it    No      No     No      No     No     No
Storify     No      No     No      Yes    No     No

.

The table gives no information about the level or quality of activity on each account – that will be addressed in the commentary below – but it gives a broadly reliable indication of which organisations are comparatively active in social media and which are less so.

The analysis shows that Facebook and Twitter are somewhat more popular platforms than Google+, LinkedIn and You Tube, while Pinterest leads the way amongst the curational tools. This distribution of activity is broadly representative of the wider gifted education community.

The next section takes a closer look at this wider activity on each of the ten platforms and tools.

.

Comparing gifted-related activity on the ten selected platforms and tools

 .

Blogs

As far as I can establish, none of the six organisations currently maintains a blog. SENG does have what it describes as a Library of Articles, which is a blog to all intents and purposes – and Potential Plus UK is currently planning a blog.

Earlier this year I noticed that my blogroll was extremely out of date and that several of the blogs it contained were no longer active. I reviewed all the blogs I could find in the field and sought recommendations from others.

I imposed a rule to distinguish live blogs from those that are dead or dormant – they had to have published three or more relevant posts in the previous six months.
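Expressed as code, the rule is a simple date filter. This sketch is my own illustration of it, not a script I actually ran against the blogs:

```python
from datetime import datetime, timedelta

def is_live(relevant_post_dates, now=None):
    """The 'live blog' rule: three or more relevant posts
    in the previous six months (taken here as 183 days)."""
    now = now or datetime.now()
    cutoff = now - timedelta(days=183)
    return sum(1 for d in relevant_post_dates if d >= cutoff) >= 3
```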

I also applied a slightly more subjective rule, in an effort to sift out those that had little relevance to anyone beyond the author (being cathartic diaries of sorts) and those that are entirely devoted to servicing a small local advocacy group.

I ended up with a long shortlist of 36 blogs, which now constitutes the revised blogroll in the right-hand column. Most are written in English but I have also included a couple of particularly active blogs in other languages.

The overall number of active blogs is broadly comparable with what I remember in 2010 when I first began, but the number of posts has probably fallen.

I don’t know to what extent this reflects wider changes in the volume of active blogs and posts, whether generically or in the field of education. In England there has been a marked renaissance in edublogging over the last twelve months, yet only three bloggers venture regularly into the territory of gifted education.

.

Facebook

Alongside Twitter, Facebook has the most active gifted education community.

There are dozens of Facebook Groups focused on giftedness and high ability. At the time of writing, the largest and most active are:

The Facebook Pages with the most ‘likes’ have been established by bodies located in the United States. The most favoured include:

There is a Gifted Phoenix page, which is rigged up to my Twitter account so that all my tweets are relayed there. Only those carrying a #gtchat or #gtvoice hashtag are relevant to gifted education.

.

Google+

To date there is comparatively little activity on Google+, though many have established an initial foothold there.

Part of the problem is lack of familiarity with the platform, but another obstacle is the limited capacity to connect other parts of one’s social media footprint with one’s Google+ presence.

There is only one Google+ Community to speak of: ‘Gifted and Talented’ currently with 134 members.

A search reveals a large number of people and pages ostensibly relevant to gifted education, but few are useful and many are dormant.

Amongst the early adopters are:

My own Google+ page is dormant. It should now be possible to have WordPress.com blogposts appear automatically on a Google+ page, but the service seems unreliable. There is no capacity to link Twitter and Google+ in this fashion. I am waiting on Google to improve the connectivity of their service.

.

LinkedIn

LinkedIn is also comparatively little used by the gifted education community. There are several groups:

But none is particularly active, despite the rather impressive numbers above. Similarly, a handful of organisations have company pages on LinkedIn, but only one or two are active.

The search purports to include a staggering 98,360 people who mention ‘gifted’ in their profiles, but basic account holders can only see 100 results at a time.

My own LinkedIn page is registered under my real name rather than my social media pseudonym and is focused principally on my consultancy activity. I often forget it exists.

 .

Twitter

By comparison, Twitter is much more lively.

My brief January post mentioned my Twitter list containing every user I could find who mentions gifted education (or a similar term, whether in English or a selection of other languages) in their profile.

The list currently contains 1,263 feeds. You are welcome to subscribe to it. If you want to see it in action first, it is embedded in the right-hand column of this Blog, just beneath the blogroll.

The majority of the gifted-related activity on Twitter takes place under the #gtchat hashtag, which tends to be busier than even the most popular Facebook pages.

This hashtag also accommodates an hour-long real-time chat every Friday (at around midnight UK time) and, at least once a month, on Sundays at a time more conducive to European participants.

Other hashtags carrying information about gifted education include: #gtvoice (UK-relevant), #gtie (Ireland-relevant), #hoogbegaafd (Dutch-speaking); #altascapacidades (Spanish-speaking), #nagc and #gifteded.

Chats also take place on the #gtie and #nagc hashtags, though the latter may now be discontinued.

Several feeds provide gifted-relevant news and updates from around the world. Amongst the most followed are:

  • NAGC (4,240 followers)
  • SENG (2,709 followers)

Not forgetting Gifted Phoenix (5,008 followers) who publishes gifted-relevant material under the #gtchat (globally relevant material) and #gtvoice (UK-relevant material) hashtags.

.


Map of Gifted Phoenix’s Twitter Followers March 2014

.

You Tube

You Tube is of course primarily an audio-visual channel, so it tends to be used to store public presentations and commercials.

A search on ‘gifted education’ generates some 318,000 results including 167,000 videos and 123,000 channels, but it is hard to see the wood for the trees.

The most viewed videos and the most used channels are an eclectic mix and vary tremendously in quality.

Honourable mention should be made of:

The most viewed video is called ‘Top 10 Myths in Gifted Education’, a dramatised presentation which was uploaded in March 2010 by the Gifted and Talented Association of Montgomery County. This has had almost 70,000 views.

Gifted Phoenix does not have a You Tube presence.

.

Paper.li

Paper.li describes itself as ‘a content curation service’ which ‘enables people to publish newspapers based on topics they like and treat their readers to fresh news, daily.’

It enables curators to draw on material from Facebook, Twitter, Google+, embeddable You Tube videos and websites via RSS feeds.

In September 2013 it reported 3.7m users each month.

I found six gifted-relevant ‘papers’ with over 1,000 subscriptions:

There is, as yet, no Gifted Phoenix presence on paper.li, though I have been minded for some months to give it a try.

.

Pinterest

Pinterest is built around a pinboard concept.  Pins are illustrated bookmarks designating something found online or already on Pinterest, while Boards are used to organise a collection of pins. Users can follow each other and others’ boards.

Pinterest is said to have 70 million users, of whom 80% are female.

A search on ‘gifted education’ reveals hundreds of boards dedicated to the topic, but unfortunately there is no obvious way to rank them by number of followers or number of pins.

Since advanced search capability is conspicuous by its absence, the user apparently has little choice but to sift laboriously through each board. I have not undertaken this task so I can bring you no useful information about the most used and most popular boards.

Judging by the names attached to these boards, they are owned almost exclusively by women. It is interesting to hypothesise about what causes this gender imbalance – and whether Pinterest is actively pursuing female users at the expense of males.

There are, however, some organisations in the field making active use of Pinterest. A search of ‘pinners’ suggests that amongst the most popular are:

  • IAGC Gifted which has 26 boards, 734 pins and 400 followers.

Gifted Phoenix is male and does not have a presence on Pinterest…yet!

 .

Scoop.it

Scoop.it stores material on a page somewhere between a paper.li-style newspaper and a Pinterest-style board. It is reported to have almost seven million unique visitors each month.

‘Scoopable’ material is drawn together via URLs, a programmable ‘suggestions engine’ and other social media, including all the ‘big four’. However, the free version permits a user to link only two social media accounts, which significantly restricts Scoop.it’s curational capacity.

Scoop.it also has limited search engine capability. It is straightforward to conduct an elementary search like this one on ‘gifted’ which reveals 107 users.

There is no quick way of finding those pages that are most used or most followed, but one can hover over the search results for topics to find out which have most views:

Gifted Phoenix has a Scoop.it topic which is still very much a work in progress.

.

Storify

Storify is a slightly different animal to the other three tools. It describes itself as:

‘the leading social storytelling platform, enabling users to easily collect tweets, photos, videos and media from across the web to create stories that can be embedded on any website.  With Storify, anyone can curate stories from the social web to embed on their own site and share on the Storify platform.’

Estimates of user numbers vary but are typically from 850,000 to 1m.

Storify is a flexible tool whose free service permits one to collect material already located on the platform and from a range of other sources including Twitter, Facebook, You Tube, Flickr, Instagram, Google search, Tumblr – or via RSS or URL.

The downside is that there is no way to search within Storify for stories or users, so one cannot provide information about the level of activity or users that it might be helpful to follow.

However, a Google search reveals that users of Storify include:

  • IGGY with 9 followers

These tiny numbers show that Storify has not really taken off as a curational platform in its own right, though it is an excellent supporting tool, particularly for recording transcripts of Twitter chats.

Gifted Phoenix has a Storify profile and uses the service occasionally.

 .

The Cold Shoulder in Perth Zoo by Gifted Phoenix

.

Comparing the six organisations

So, having reviewed wider gifted education-related activity on these ten social media platforms and tools, it is time to revisit the online and social media profile of the six selected organisations.

.

World Council

The WCGTC website was revised in 2012 and has a clear and contemporary design.

The Council’s Mission Statement has a strong networking feel to it and elsewhere the website emphasises the networking benefits associated with membership:

‘…But while we’re known for our biennial conference the spirit of sharing actually goes on year round among our membership.

By joining the World Council you can become part of this vital network and have access to hundreds of other peers while learning about the latest developments in the field of gifted children.’

The home page includes direct links to the organisation’s Facebook Page and Twitter feed. There is also an RSS feed symbol but it is not active.

Both Twitter and Facebook are of course available to members and non-members alike.

At the time of writing, the Facebook page has 1,616 ‘likes’ and is relatively current, with five posts in the last month, though there is relatively little comment on these.

The Twitter feed typically manages a daily Tweet. Hashtags are rarely if ever employed. At the time of writing the feed has 1,076 followers.

Almost all the Tweets are links to a daily paper.li production ‘WCGTC Daily’ which was first published in late July 2013, just before the last biennial conference. This has 376 subscribers at the present time, although the gifted education coverage is selective and limited.

However, the Council’s most recent biennial conference was unusual in making extensive use of social media. It placed photographs on Flickr, videos of keynotes on YouTube and podcasts of keynotes on Mixlr.

There was also a Blog – International Year of Giftedness and Creativity – which was busy in the weeks immediately preceding the Conference, but has not been active since.

There are early signs that the 2015 Conference will also make strong use of social media. In addition to its own website, it already has its own presence on Twitter and Facebook.

One of the strands of the 2015 Conference is:

‘Online collaboration

  • Setting the stage for future sharing of information
  • E-networking
  • E-learning options’

And one of the sponsors is a social media company.

As noted above, the World Council website provides links to two of its six strands of social media activity, but not the remaining four. It is not yet serving as an effective hub for the full range of this activity.

Some of the strands link together well – eg Twitter to paper.li – but there is considerable scope to improve the incidence and frequency of cross-referencing.

.

ECHA

Of the six organisations in this sample, ECHA is comfortably the least active in social media with only a Facebook page available to supplement its website.

The site itself is rather old-fashioned and could do with a refresh. It includes a section ‘Introducing ECHA’ which emphasises the organisation’s networking role:

‘The major goal of ECHA is to act as a communications network to promote the exchange of information among people interested in high ability – educators, researchers, psychologists, parents and the highly able themselves. As the ECHA network grows, provision for highly able people improves and these improvements are beneficial to all members of society.’

This is reinforced in a parallel Message from the President.

There is no reference on the website to the Facebook group, which is closed but not confined solely to ECHA members. There are currently 191 members. The group is fairly active, but does not rival the much larger groups listed above.

There’s not much evidence of cross-reference between the Facebook group and the website, but that may be because the website is infrequently updated.

As with the World Council, ECHA conferences have their own social media profile.

At the 2012 Conference in Munster this was left largely to the delegates. Several of us live Tweeted the event.

I blogged about the Conference and my part in it, providing links to transcripts of the Twitter record. The post concluded with a series of learning points for this year’s ECHA Conference in Slovenia.

The Conference website explains that the theme of the 2014 event is ‘Rethinking Giftedness: Giftedness in the Digital Age’.

Six months ahead of the event, there is a Twitter feed with 29 followers that has been dormant for three months at the time of writing and a LinkedIn group with 47 members that has been quiet for five months.

A Forum was also established which has not been used for over a year. There is no information on the website about how the event will be supported by social media.

I sincerely hope that my low expectations will not be fulfilled!

.

SENG

SENG is far more active across social media. Its website carries a 2012 copyright notice and has a more contemporary feel than many of the others in this sample.

The bottom of the home page extends an invitation to ‘connect with the SENG community’ and carries links to Facebook, Twitter and LinkedIn (though not to Google+ or You Tube).

In addition, each page carries a set of buttons to support the sharing of this information across a wide range of social media.

The organisation’s Strategic Plan 2012-2017 makes only fleeting reference to social media, in relation to the creation of a ‘SENG Liaison Facebook page’ to facilitate inter-state and international support.

It does, however, devote one of its nine goals to the further development of its webinar programme (each webinar costs $40 to attend, or non-participants can purchase a recording for $40).

SENG offers online parent support groups but does not state which platform is used to host these. It has a Technology/Social Media Committee but its proceedings are not openly available.

Reference has already been made above to the principal Facebook Page which is popular, featuring posts on most days and a fair amount of interaction from readers.

The parallel group for SENG Liaisons is also in place, but is closed to outsiders, which rather seems to defeat the object.

The SENG Twitter feed is relatively well followed and active on most days. The LinkedIn page is somewhat less active but can boast 142 followers while Google+ is clearly a new addition to the fold.

However, the You Tube channel has 257 subscribers and carries 16 videos, most of them featuring presentations by James Webb. Rather strangely, these don’t seem to feature in the media library carried by the website.

SENG is largely a voluntary organisation with little staff resource, but it is successfully using social media to extend its footprint and global influence. There is, however, scope to improve coherence and co-ordination.

.

National Association for Gifted Children

The NAGC’s website is also in some need of refreshment. Its copyright notice dates from 2008, which was probably when it was designed.

There are no links to social media on the home page but ‘NAGC at a glance’ carries a direct link to the Facebook group and a Twitter logo without a link, while the page listing NAGC staff has working links to both Facebook and Twitter.

In the past, NAGC has been more active in this field.

There was for a time a Parenting High Potential Blog but the site is now marked private.

NAGC’s Storify account contains the transcripts of 6 Twitter chats conducted under the hashtag #nagcchat between June and August 2012. These were hosted by NAGC’s Parent Outreach Specialist.

But, by November 2012 I was tweeting:

.

.

And in February 2013:

.

.

That post – NAGC’s Parent Outreach Specialist – was filled by July 2013. The postholder seems to have been concentrating primarily on editing the magazine edition of Parenting High Potential, which is confined to members only (but also has a Facebook presence – see below).

NAGC’s website carries a document called ‘NAGC leadership initiatives 2013-14’ which suggests further developments in the next few months.

The initiatives include:

‘Leverage content to intentionally connect NAGC resources, products and programs to targeted audiences through an organization-wide social media strategy.’

and

‘Implement a new website and membership database that integrates with social media and provides a state-of-the-art user interface.’

One might expect NAGC to build on its current social media profile which features:

  • A Facebook Group which currently has 2,420 members and is reasonably active, though not markedly so. Relatively few posts generate significant comments.
  • A Twitter feed boasting an impressive 4,287 followers. Tweets are published on a fairly regular basis.

There is additional activity associated with the Annual NAGC Convention. There was extensive live Tweeting from the 2013 Convention under the rival hashtags #NAGC2013 and #NAGC13. #NAGC14 looks the favourite for this year’s Convention, which has also established a Facebook presence.

NAGC also has its own networks. The website lists 15 of these but hardly any of their pages give details of their social media activity. A cursory review reveals that:

Overall, NAGC has a fairly impressive array of social media activity but demonstrates relatively little evidence of strategic coherence and co-ordination. This may be expected to improve in the next six months, however.

.

NACE

NACE is not quite the poorest performer in our sample but, like ECHA, it has so far made relatively little progress towards effective engagement with social media.

Its website dates from 2010 but looks older. Prominent links to Twitter and Facebook appear on the front page as well as – joy of joys – an RSS feed.

However, the Facebook link is not to a NACE-specific page or group and the RSS feed doesn’t work.

There are references on the website to the networking benefits of NACE membership, but not to any role for the organisation in wider networking activity via social media. Current efforts seem focused primarily on advertising NACE and its services to prospective members and purchasers.

The Twitter feed has a respectable 1,426 followers but Tweets tend to appear in blocks of three or four spaced a few days apart. Quality and relevance are variable.

The Google+ page and You Tube channel contain the same two resources, posted last November.

There is much room for improvement.

.

Potential Plus UK

All of which brings us back to Potential Plus and the work I have been supporting to strengthen its online and social media presence.

.

Current Profile

Potential Plus’s current social media profile is respectably diverse but somewhat lacking in coherence.

The website is old-fashioned. There is a working link to Facebook on the home page, but this takes readers to the old NAGC Britain page which is no longer used, rather than directing them to the new Potential Plus UK page.

Whereas the old Facebook page had reached 1,344 likes, the new one is currently at roughly half that level – 683 – but the level of activity is reasonably impressive.

There is a third Facebook page dedicated to the organisation’s ‘It’s Alright to Be Bright’ campaign, which is not quite dormant.

All website pages carry buttons supporting information-sharing via a wide range of social media outlets. But there is little reference in the website content to its wider social media activity.

The Twitter feed is fairly lively, boasting 1,093 followers. It currently has some 400 fewer followers than NACE but has published about 700 more Tweets. Both are publishing at about the same rate. Quality and relevance are similarly variable.

The LinkedIn page is little more than a marker and does not list the products offered.

The Google+ presence uses the former NAGC Britain name and is also no more than a marker.

But the level of activity on Pinterest is more significant. There are 14 boards containing a total of 271 pins and attracting 26 followers. This material has been uploaded during 2014.

There is at present no substantive blog activity, although the stub of an old wordpress.com site still exists and there is also a parallel stub of an old wordpress.com children’s area.

There are no links to any of these services from the website – nor do these services link clearly and prominently with each other.

.

Future Strategy

The new wordpress.com test site sets out our plans for Potential Plus UK, which have been shaped in accordance with the two sets of draft success criteria above.

The purpose of the project is to help the organisation to:

  • improve how it communicates and engages with its different audiences clearly and effectively
  • improve support for members and benefit all its stakeholder groups
  • provide a consistently higher quality and more compelling service than its main competitors that generates maximum benefit for minimum cost

Subject to consultation and if all goes well, the outcome will be:

  • A children’s website on wordpress.org
  • A members’ and stakeholders’ website on wordpress.com (which may transfer to wordpress.org in due course)
  • A new forum and a new ‘bottom-up’ approach to support that marries curation and collaboration and
  • A coherent social media strategy that integrates these elements and meets audiences’ needs while remaining manageable for PPUK staff.

You can help us to develop this strategy by responding to the consultation here by Friday 18 April.

.

La Palma Panorama by Gifted Phoenix

La Palma Panorama by Gifted Phoenix

.

Conclusion

.

Gifted Phoenix

I shall begin by reflecting on Gifted Phoenix’s profile across the ten elements included in this analysis:

  • He has what he believes is a reasonable Blog.
  • He is one of the leading authorities on gifted education on Twitter (if not the leading authority).
  • His Facebook profile consists almost exclusively of ‘repeats’ from his Twitter feed.
  • His LinkedIn page reflects a different identity and is not connected properly to the rest of his profile.
  • His Google+ presence is embryonic.
  • He has used Scoop.it and Storify to some extent, but not Paper.li or Pinterest.

GP currently has a rather small social media footprint, since he is concentrating on doing only two things – blogging and microblogging – effectively.

He might be advised to extend his sphere of influence by distributing the limited available human resource more equitably across the range of available media.

On the other hand he is an individual with no organisational objectives to satisfy. Fundamentally he can follow his own preferences and inclinations.

Maybe he should experiment with this post, publishing it as widely as possible and monitoring the impact via his blog analytics…

.

The Six Organisations

There is a strong correlation between the size of each organisation’s social media footprint and the effectiveness with which it is used.

There are no obvious examples – in this sample at least – of organisations that have a small footprint because of a deliberate choice to specialise in a narrow range of media.

If we were to rank the six in order of effectiveness, the World Council, NAGC and SENG would be vying for top place, while ECHA and NACE would be competing for bottom place and Potential Plus UK would be somewhere in the middle.

But none of the six organisations would achieve more than a moderate assessment against the two sets of quality criteria. All of them have huge scope for improvement.

Their priorities will vary, according to what is set out in their underlying social media strategies. (If they have no social media strategy, the obvious priority is to develop one, or to revise it if it is outdated.)

.

The Overall Picture across the Five Aspects of Gifted Education

This analysis has been based on the activities of a small sample of six generalist organisations in the gifted education field, as well as wider activity involving a cross-section of tools and platforms.

It has not considered providers who specialise in one of the five aspects – advocacy, learning, professional development, policy-making and research – or the use being made of specialist social media, such as MOOCs and research tools.

So the judgements that follow are necessarily approximate. But nothing I have seen across the wider spectrum of social media over the past 18 months would seriously call into question the conclusions reached below.

  • Advocacy via social media is slightly stronger than it was in 2012 but there is still much insularity and too little progress has been made towards a joined up global movement. The international organisations remain fundamentally inward-looking and have been unable to offer the leadership and sense of direction required.  The grip of the old guard has been loosened and some of the cliquey atmosphere has dissipated, but academic research remains the dominant culture.
  • Learning via social media remains limited. There are still several niche providers but none has broken through in a global sense. The scope for fruitful partnership between gifted education interests and one or more of the emerging MOOC powerhouses remains unfulfilled. The potential for social media to support coherent and targeted blended learning solutions – and to support collaborative learning amongst gifted learners worldwide – is still largely unexploited.
  • Professional development via social media has been developed at a comparatively modest level by several providers, but the prevailing tendency seems to be to regard this as a ‘cash cow’ generating income to support other activities. There has been negligible progress towards securing the benefits that would accrue from systematic international collaboration.
  • Policy-making via social media is still the poor relation. The significance of policy-making (and of policy makers) within gifted education is little appreciated and little understood. What engagement there is seems focused disproportionately on lobbying politicians, rather than on developing at working level practical solutions to the policy problems that so many countries face in common.
  • Research via social media is negligible. The vast majority of academic researchers in the field are still caught in a 20th Century paradigm built around publication in paywalled journals and a perpetual round of face-to-face conferences. I have not seen any significant examples of collaboration between researchers. A few make a real effort to convey key research findings through social media but most do not. Some of NAGC’s networks are beginning to make progress and the 2013 World Conference went further than any of its predecessors in sharing proceedings with those who could not attend. Now the pressure is on the EU Talent Conference in Budapest and ECHA 2014 in Slovenia to push beyond this new standard.

Overall progress has been limited and rather disappointing. The three conclusions I drew in 2012 remain valid.

In September 2012 I concluded that ‘rapid acceleration is necessary otherwise gifted education will be left behind’. Eighteen months on, there are some indications of slowly gathering speed, but the gap between practice in gifted education and leading practice has widened meanwhile – and the chances of closing it seem increasingly remote.

Back in 2010 and 2011 several of my posts had an optimistic ring. It seemed then that there was an opportunity to ‘only connect’ globally, but also at European level via the EU Talent Centre and in the UK via GT Voice. But both those initiatives are faltering.

My 2012 post also finished on an optimistic note:

‘Moreover, social media can make a substantial and lasting contribution to the scope, value and quality of gifted education, to the benefit of all stakeholders, but ultimately for the collective good of gifted learners.

No, ‘can’ is too cautious, non-assertive, unambitious. Let’s go for WILL instead!’

Now in 2014 I am resigned to the fact that there will be no great leap forward. The very best we can hope for is disjointed incremental improvement achieved through competition rather than collaboration.

I will be doing my best for Potential Plus UK. Now what about you?

.

GP

March 2014

A Brief Discussion about Gifted Labelling and its Permanency

.

Some of my readership may be interested in this Twitter exchange with Ellen Spencer, a researcher at the Centre for Real-World Learning, the Claxton-Lucas vehicle based at the University of Winchester.

The sequence of Tweets is embedded below (scroll down to the bottom for the start).

.

.

We discussed the issue of labelling gifted learners and the idea that such labels may not be permanent sifting devices, but temporary markers attached to such learners only while they need additional challenge and support.

This is not to deny that some gifted learners may warrant a permanent marker, but it does imply that many – probably most – will move in and out of scope as they develop in non-linear fashion and differentially to their peers.

Of course much depends on one’s understanding of giftedness and gifted education, a topic I have addressed frequently, starting with my inaugural post in May 2010.

Three-and-a-half years on, it seems to me that the default position has shifted somewhat further towards the Nurture, Equity and Personalisation polarities.

But the notion of giftedness as dynamic in both directions – with learners shifting in and out of scope as they develop – may be an exception to that broader direction of travel.

Of course there’s been heavy emphasis on movement into scope (the broader notion of giftedness as learned behaviour and achievable through effort) but very little attention given to progress in the opposite direction.

It is easy to understand why the former notion would be a red rag to several bulls in the gifted education field, while outward movement raises difficult questions for everybody – advocates for gifted education or not – about communication and the management of self-esteem.

But reform and provocation are often stalwart bedfellows. Feel free to vent your spleen in the comments section below.

.

GP

February 2014

Gifted Education Activity in the Blogosphere and on Twitter

.

I have been doing some groundwork for an impending analysis of the coverage of gifted education (and related issues) in social media – and reflecting on how that has changed in the four years I have been involved.

As a first step I revised my Blogroll (normally found in the right hand margin, immediately below the Archives).

I decided to include only Blogs that have published three or more relevant posts in the last six months – and came up with the following list of 23, which I have placed in alphabetical order.
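The inclusion rule is mechanical enough to express as a simple filter. Here is a minimal sketch in Python – the post dates, the toy blog entries and the 182-day approximation of six months are all invented for illustration:

```python
from datetime import datetime, timedelta

# Toy data standing in for the real candidates: blog name -> dates of relevant posts.
candidates = {
    "Begabungs": ["2013-12-01", "2013-10-15", "2013-08-20"],
    "A Dormant Blog": ["2012-03-01", "2011-11-05"],
}

def qualifies(post_dates, today, min_posts=3, window_days=182):
    """The Blogroll rule: three or more relevant posts in the last six months."""
    cutoff = today - timedelta(days=window_days)
    recent = [d for d in post_dates if datetime.strptime(d, "%Y-%m-%d") >= cutoff]
    return len(recent) >= min_posts

today = datetime(2014, 1, 6)
blogroll = sorted(name for name, dates in candidates.items() if qualifies(dates, today))
print(blogroll)  # ['Begabungs'] with this toy data
```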

.

Begabungs

Belin-Blank Center

Distilling G and T Ideas

Dona Matthews

Gifted and Talented Ireland

Gifted Challenges

Gifted Education Perspectives

Gifted Exchange

Gifted Parenting Support

Global #gtchat powered by TAGT

headguruteacher (posts tagged #gtvoice)

Irish Gifted Education Blog

Krummelurebloggen

Laughing at Chaos

Living the Life Fantastic

Ramblings of a Gifted Teacher

smarte barn

Talent Igniter

Talent Talk

Talento y Educacion

The Deep End

The Prufrock Press Blog

Unwrapping the Gifted

WeAreGifted2

.

This is rather a short list, which might suggest a significant falling off of blogging activity since 2010. I had to delete the majority of the entries in the previous version of the Blogroll because they were dormant or dead.

But I might have missed some deserving blogs, particularly in other languages. Most on this list are written in English.

If you have other candidates for inclusion do please suggest them through the comments facility below, or pass them on via Twitter.

You may have views about the quantity and quality of blogging activity – and whether there is an issue here that needs to be addressed. Certainly the apparent decline in gifted education blogging comes at a time when edublogging in England has never been more popular. Perhaps you have ideas for stimulating more posts.

On the other hand, you might take the view that blogging is increasingly irrelevant, given the inexorable rise of microblogging – aka Twitter – and the continued popularity of Facebook, let alone the long list of alternatives.

Speaking of Twitter, I thought it might be an interesting exercise to compile a public list of every feed I could find that references gifted education (or an equivalent term, whether in English or another language) in its profile.

The full list – which you can find at https://twitter.com/GiftedPhoenix/lists/gifted-education – contains 1,245 members at present.

I have embedded the timeline below, and you can also find it in the right hand margin, immediately below the Blogroll.

.

.

The list includes some leading academic authorities on the subject, but is dominated by gifted education teachers and the parents of gifted learners, probably in roughly equal measure.

The clear majority is based in the United States, but there is a particularly strong community in the Netherlands and reasonable representation in Australia, Canada, Spain and the UK. Several other countries are more sparsely represented.

(One authority – who shall remain nameless – has unaccountably blocked me, which prevents his inclusion in the list. But he has only produced eight tweets, the most recent over a year old, so I suppose he is no great loss.)

I cannot compare this with earlier lists, but it feels as though there has been a significant expansion of the gifted Twittersphere since I began in 2010.

That said I have no information yet about how many of the feeds are active – and just how active they are.

If I have inadvertently omitted you from the list, please Tweet to let me know. Please feel free to make use of the list as you wish, or to offer suggestions for how I might use it.

There will be further segmented lists in due course.

 

Postscript 13 January:

Many thanks for your really positive response. The blogroll now has 34 entries…and there’s always room for more.

If you’d like to subscribe to the Twitter list but are not sure how, here’s Twitter’s guide (see bottom of page).

If you’re not on the list but would like to be, please either follow me (making sure there’s a reference to gifted or similar in your profile) or send me a tweet requesting to be added.

You can follow or tweet me direct from this blog by going to the ‘Gifted Phoenix on Twitter’ embed in the right hand column.

 

.

GP

January 2014

Gifted Phoenix’s 2013 Review and Retrospective

.

This final post of 2013 takes a reflective look back at this year’s activity.


One purpose is straightforward self-congratulation – a self-administered pat on the back for all my hard work!

This is also an opportunity to review the bigger picture, to reflect on the achievements and disappointments of the year now ending and to consider the prospects for 2014 and beyond.

Perhaps I can also get one or two things off my chest…

…So, by way of an aside, let me mention here that I provide this information to you entirely free of charge, partly because I believe that global progress in (gifted) education is obstructed by the rationing of knowledge, partly to encourage those who construct and shelter behind paywalls to reflect on the negative consequences of their behaviour.

I try my best to offer you a factual, balanced and objective assessment, to flag up weaknesses as well as strengths. In short, I tell it like it is. I have no interest in self-aggrandisement, in reputation or the trappings of academia. You will search in vain for those trappings in my CV, but I speak and write with commensurate authority, based on extended experience as a national policy maker and student of the field …

Another purpose is to provide an annotated list of my posts, so that readers can catch up with anything they missed.

I make this my 35th post of 2013, five fewer than I managed in 2012. I took an extended break during August and September this year, half of it spent on tour in Western Australia and the remainder engaged on other projects.

During the course of the year I’ve made a conscious effort simultaneously to narrow and diversify my focus.

I’ve devoted around two-thirds of my posts to educational reform here in England, while the remainder continued to address global issues.

Some of the Anglocentric posts were intended to draw out the wider implications of these reforms, rather than confining themselves exclusively to gifted education and the impact on gifted learners.

I wanted to paint on a broader canvas. It is all too easy to exist in a gifted education ghetto, forgetting that it must be integral to our national educational systems as well as a global endeavour in its own right.

 .

Global Gifted Education

During 2013 I published two feature-length posts about the performance of high achievers in international comparisons studies:

Like it or not, these international tests are becoming increasingly influential in most countries around the world. Those involved in gifted education ignore them at their peril.

Many of the countries that top the rankings already invest significantly in gifted education – and some of those that do not (invest significantly and/or top the rankings) ought seriously to consider this as a potential route to further improvement.

Other posts with a global gifted focus include:

My best effort at a personal credo, derived from the experience of writing this Blog. Colleagues were very flattering.

.

.

I supplemented the post with a vision for delivery, primarily to inform UK-based discussion within GT Voice, but also relevant to Europe (the EU Talent Centre) and globally (the World Council).

I took a second look at this nascent field, exploring developments since I first blogged about it in 2010. I like to flatter myself that I invented the term.

The post tells of the passing interest exhibited by IRATDE and notes the reference in the July 2012 World Council Newsletter to a special issue of Gifted and Talented International (GTI) that will be devoted to the topic.

I heard in May that an unnamed specialist had been invited to prepare a ‘target paper’, but nothing has materialised to date. The wheels of academic publishing turn parlous slow.

I concluded the post with a tongue-in-cheek contribution of my own – the Gifted Phoenix Equation!

Minimising the Excellence Gap and Optimising the Smart Fraction maximises impact on Economic Growth (Min EG + Optimal SF = Max EG)

This post opened with a self-confessed rant about the ‘closed shop’ operated by academics in the field, defended by research paywalls and conference keynote monopolies.

But I set aside my prejudices to review the nine leading academic journals in gifted education, examine the rights the publishers offer their authors and offer a constructive set of proposals for improving the accessibility of research.

There were also a handful of new national studies:

the last of which is strictly a transatlantic study of support for low income high ability students, developed from analysis of the US NAGC publication of the same name.

.

Gifted Education in England

Two posts examined material within England’s national school performance tables relating to high attainment and high attainers.

The latter is the second such analysis I have provided, following one on the 2012 Tables published last December. The former will be supplanted by a new version when the Secondary Tables are published in January.

I also offered a detailed treatment of the underlying accountability issues in:

These posts explored the rather haphazard treatment now afforded ‘the most able students’ in documents supporting the School Inspection Framework, as well as the different definitions deployed in the Performance Tables and how these might change as a consequence of the trio of accountability consultations launched this year.

.

.

During the Spring I wrote:

Despite the Government’s reported intention to establish a national network of up to twelve of these, still only two have been announced – sponsored by King’s College London and Exeter University respectively.

I might devote a 2014 post to updating my progress report.

There was also a special mini-series, corralled under the speculatively optimistic title ‘A Summer of Love for Gifted Education?’

This is fundamentally a trilogy:

The original conceit had been to build each episode around a key publication expected during the year. Episodes One and Two fitted this description but the third, an ‘Investigation of school- and college- level strategies to raise the Aspirations of High-Achieving Disadvantaged Pupils to pursue higher education’ was (is still) overdue, so I had to adjust the focus.

Episode Two was a particularly rigorous examination of the Ofsted report that led to the changes to the inspection documentation.

.

.

In Episode Three, I took the opportunity to expose some questionable use of statistics on the part of selective universities and their representative bodies, setting out a 10-point plan to strengthen the representation of disadvantaged students at Oxford and Cambridge. This was accompanied by a flying pig.

.

.

There were also some supplementary posts associated with the Summer of Love:

And some material I produced at the time that Ofsted published ‘The Most Able Students’:

Did it turn out to be a ‘Summer of Love’? Looking back now, I have mixed feelings. Significant attention was paid to meeting the needs of high attaining learners, and those needs are likely to be better recognised and responded to as a consequence.

But the response, such as it is, relies almost exclusively on the accountability system. There is still a desperate need for authoritative updated national framework guidance. Ideally this should be developed by the national gifted education community, working collaboratively with government seed funding.

But the community shows little sign of readiness to take on that responsibility. Collaboration is virtually non-existent: GT Voice has failed thus far to make any impact (justifying my decision to stand down from the board in protest at frustratingly slow progress).

Meanwhile, several players are pursuing their own diverse agendas. Most are prioritising income generation, either to survive or simply for commercial gain. Everyone is protecting their corner. Too many scores are being settled. Quality suffers.

For completeness, I should also mention a couple of shorter posts:

a piece I wrote for another publisher about how free schools might be rolled into this national collaborative effort, and

which was my best effort to summarise the ‘current state’ on the other side of Ofsted’s Report, as well as an alternative future vision, avoiding the Scylla of top-down centralised prescription and the Charybdis of bottom-up diffused autonomy.

 

Wider English Educational Reform

Almost all the posts I have written within this category are associated with emerging national policy on curriculum and assessment:

.

.

There was even

which I still expect to see in a manifesto come 2015!

As things stand, there are still many unanswered questions, not least where Labour stands on these issues.

Only one of three accountability consultations has so far received a Government response. The response to the primary consultation – comfortably the least persuasive of the three – was due in ‘the autumn’ but hadn’t appeared by Christmas.

The decision to remove National Curriculum levels looks set to have several unintended negative consequences, not least HMCI Wilshaw’s recent call for the reintroduction of national testing at KS1 and KS3.

I am still to be persuaded that this decision is in the best interest of high attainers.

 

Social Media

This year I have spent more time tweeting and less time producing round-ups of my Twitter activity.

At the time of writing, my follower count has reached 4,660 and I have published something approaching 18,700 Tweets on educational topics.

I try to inform my readers about wider developments in UK (especially English) education policy, keeping a particularly close eye on material published by the Government and by Parliament.

I continue to use #gtchat (global) and #gtvoice (UK) to hashtag material on gifted education and related issues. I look out particularly for news about developments worldwide. I publish material that seems interesting or relevant, even though I might disagree with it. I try to avoid promotional material or anything that is trying to sell you something.

I began 2013 intending to produce round-ups on ‘a quarterly-cum-termly basis’ but have managed only two editions:

The next volume is already overdue but I simply can’t face the grinding effort involved in the compilation process. I may not continue with this sequence in 2014.

I was also invited to answer the question:

ResearchED was a conference organised via Twitter which took place in September.

The post argued for a national network of UK education bloggers. This hasn’t materialised, although the status and profile of edublogging has improved dramatically during 2013, partly as a consequence of the interest taken by Michael Gove.

There are many more blogs and posts than a year ago, several co-ordinated through Blogsync and/or reblogged via The Echo Chamber.

Precious few bloggers enter the field of gifted education, though honourable mentions must go to Distilling G&T Ideas and Headguruteacher.

Elsewhere in the world, not too many gifted education bloggers are still generating a constant flow of material.

Exceptions include Lisa Conrad, who maintains two blogs in the US: Gifted Parenting Support and Global #gtchat powered by TAGT. Kari Kolberg produces Krummelurebloggen (in Norwegian) and Javier Touron writes Talento y Educacion (in Spanish).

I need urgently to revisit my Blogroll. I might also write a post about the general state of global gifted education blogging in the early part of 2014.

 

Reference

I have made only limited progress this year with the reference pages on this Blog:

  • Who’s Who?  remains embryonic. I had plans to force myself to produce a handful of entries each day, but managed only two days in succession! There isn’t a great deal of intellectual challenge in this process – life may be too short!
  • Key Documents is a mixed bag. The UK pages are fully stocked. You should be able to find every significant national publication since 2000. The Rest of the World section is still largely empty.

Rightly or wrongly, the production of blog posts is taking priority.

 

Analytics

Compared with 2012, the number of page views has increased by over 30%, although the number of posts is down by 12.5%. I’m happy with that.
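Taken together, those two figures imply an even larger rise in views per post. A quick back-of-the-envelope check, using only the 30% and 12.5% quoted above:

```python
# Views rose ~30% while the number of posts fell 12.5%,
# so views per post changed by a factor of 1.30 / 0.875.
views_factor = 1.30
posts_factor = 1 - 0.125
print(f"{views_factor / posts_factor - 1:.1%}")  # ~48.6% more views per post
```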

Some 40% of views originate in the UK. Other countries displaying significant interest include the US, Singapore, Australia, India, Hong Kong, Saudi Arabia, New Zealand, Canada and Spain. Altogether there have been visits from 169 countries.

The most popular posts published this year are, in order of popularity:

  • Whither National Curriculum Assessment Without Levels?
  • What the KS2/KS4 Transition Matrices Show About High Attainers’ Performance
  • High Attaining Students in the 2012 Secondary School Performance Tables
  • Analysis of the Primary Assessment and Accountability Consultation Document and
  • A Summer of Love for English Gifted Education Episode 2: Ofsted’s ‘The Most Able Students’

.

Visuals

I have changed the theme of my Blog twice this year – initially to Zoren and more recently to Highwind. I wanted a clearer, spacier look and a bigger font.

During the course of the year I have alternated between using my photographs within posts and producing work that is largely free of illustration. I have mixed feelings about this.

It seems somehow incongruous to intersperse unrelated photographs within a post about educational matters, but the stock of education-relevant non-copyrighted illustration is severely limited. Then again, screeds of unbroken text can be rather dreary to the eye.

So readers can expect some more views of Western Australia (especially) during 2014! Here’s one to whet your appetite.

.

Flora 2 by Gifted Phoenix

 

The Future

I close 2013 in a pessimistic mood. Despite the more favourable domestic policy climate, I am markedly less optimistic about the future of gifted education than I was at the start of the year.

Disillusion is setting in, reinforced by negligible progress towards the objectives I hold most dear.

The ‘closed shop’ is as determinedly closed as ever; vested interests are shored up; governance is weak. There is fragmentation and vacuum where there should be inclusive collaboration for the benefit of learners. Too many are on the outside, looking in. Too many on the inside are superannuated and devoid of fresh ideas.

Every so often I witness dispiriting egotism, duplicity or even vengefulness. Disagreements fester because one or both of the parties is unwilling to work towards resolution.

The world of gifted education is often not a happy place – and while it remains that way there is no real prospect of achieving significant improvements in the education and life chances of gifted learners.

To mix some metaphors, it may soon be time to cut my losses, stop flogging this moribund horse and do something else instead.

Happy New Year!

.

GP

December 2013

PISA 2012: International Comparison of High Achievers’ Performance

.

This post examines what PISA 2012 can tell us about the comparative performance of high achievers in England, other English-speaking countries and those that top the PISA rankings.

Introductory Brochure for PISA 2012 by Kristjan Paur

It draws on a similar range of evidence to that deployed in my post on the PISA 2009 results (December 2010).

A more recent piece, ‘The Performance of Gifted High Achievers in TIMSS, PIRLS and PISA’ (January 2013) is also relevant.

The post reviews:

  • How the PISA 2012 Assessment Framework defines reading, mathematical and scientific literacy and its definitions of high achievement in each of the three core domains.
  • How average (headline) performance on the three core measures has changed in each jurisdiction compared with PISA 2006 and PISA 2009.
  • By comparison, how high achievers’ performance – and the balance between high and low achievers’ performance – has changed in each jurisdiction over the same period.
  • How jurisdictions compare on the ‘all-rounder’ measure, derived from achievement of a high performance threshold on all three assessments.

The twelve jurisdictions included in the main analysis are: Australia, Canada, England, Finland, Hong Kong (China), Ireland, New Zealand, Shanghai (China), Singapore, South Korea, Taiwan and the USA.

The post also compares the performance of the five home countries against the high achievement thresholds. I have foregrounded this analysis: it appears immediately below, preceded only by the headline (but potentially misleading) ‘top 10’ high achiever rankings for 2012.

.

Headlines

 .

World Leaders against PISA’s High Achievement Benchmarks

The top 10 performers in PISA 2012 against the high achievement benchmarks (Level 5 and above), in reading, maths and science respectively, are set out in Table 1 below.

The 2009 rankings are shown in brackets and the 2012 overall average rankings in bold, square brackets. I have also included England’s rankings.

.

Table 1

Rank Reading Maths Science
1 Shanghai (1) [1] Shanghai (1) [1] Shanghai (1) [1]
2 Singapore (3) [3] Singapore (2) [2] Singapore (2) [3]
3 Japan (5) [4] Taiwan (4) [4] Japan (5) [4]
4 Hong Kong (9) [2] Hong Kong (3) [3] Finland (3) [5]
5 S. Korea (6) [5] S Korea (5) [5] Hong Kong (6) [2]
6 N Zealand (2) [13] Liechtenstein (13) [8] Australia (7) [16]
7 Finland (4) [6] Macao (15) [6] N Zealand (4) [18]
8 Canada (7=) [8] Japan (8) [7] Estonia (17) [6]
9 France (13) [21] Switzerland (6) [9] Germany (8) [12]
10 Belgium (10) [16] Belgium (9) [15] Netherlands (9) [14]
England 19th (19) [23] England 24th (32) [25] England 11th (12) [18]

 .

On the basis of these crude rankings alone, it is evident that Shanghai has maintained its ascendancy across all three domains.

Singapore has reinforced its runner-up position by overtaking New Zealand in reading. Hong Kong and Japan also make it into the top ten in all three domains.

Notable improvements in the rankings have been made by:

  • Japan, Hong Kong and France in reading
  • Liechtenstein and Macao in maths
  • Japan and Estonia in science

.

.

Jurisdictions falling down the rankings include:

  • Australia, New Zealand and Finland in reading
  • Finland and Switzerland in maths
  • Canada and New Zealand in science.

Those whose high achiever rankings significantly exceed their average rankings include:

  • New Zealand, France and Belgium in reading
  • Belgium in maths
  • Australia, New Zealand, Germany and the Netherlands in science

The only one of the top ten jurisdictions exhibiting the reverse pattern with any degree of significance is Hong Kong, in science.

On this evidence, England has maintained its relatively strong showing in science and a mid-table position in reading, but it has slipped several places in maths.

Comparing England’s rankings for high achievers with its rankings for average performance:

  • Reading 19th versus 23rd
  • Maths 24th versus 25th
  • Science 11th versus 18th

This suggests that England is substantially stronger at the top end of the achievement spectrum in science, slightly stronger in reading and almost identical in maths. (The analysis below explores whether this is borne out by the proportions of learners achieving the relevant PISA thresholds.)
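That comparison reduces to a single ‘top-end advantage’ figure per domain – the number of places by which the high-achiever ranking betters the average ranking. A minimal sketch using England’s 2012 figures from the list above:

```python
# England's PISA 2012 rankings: (high-achiever rank, overall average rank).
# A positive difference indicates relative strength at the top of the spectrum.
england = {"reading": (19, 23), "maths": (24, 25), "science": (11, 18)}

for domain, (high_rank, avg_rank) in england.items():
    print(f"{domain}: top-end advantage of {avg_rank - high_rank} place(s)")
# reading: 4, maths: 1, science: 7
```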

Overall, these rankings suggest that England is a respectable performer at the top end, but nothing to write home about. It is not deteriorating, relatively speaking – with the possible exception of mathematics – but it is not improving significantly either. The imbalance is not atypical and it requires attention, but only as part of a determined effort to build performance at both ends.

.

Comparing the Home Countries’ Performance

Table 2 below shows how each home country has performed at Level 5 and above in each of the three core PISA assessments since 2006.

.

Table 2

  2012 Level 5+ 2009 Level 5+ 2006 Level 5+
  Read Maths Sci Read Maths Sci Read Maths Sci
England 9.1 12.4 11.7 8.1 9.9 11.6 9.2 11.2 14.0
N Ireland 8.3 10.3 10.3 9.3 10.3 11.8 10.4 12.2 13.9
Scotland 7.8 10.9 8.8 9.2 12.3 11.0 8.5 12.1 12.5
Wales 4.7 5.3 5.7 5.0 5.0 7.8 6.4 7.2 10.9
UK 8.8 11.9 11.1 8.0 9.9 11.4 9.0 11.2 13.8
OECD average 8.4 12.6 8.4 7.6 12.7 8.5 8.6 13.3 9.0

.

In 2012, England is ahead of the other home countries in all three domains. Northern Ireland is runner-up in reading and science, Scotland in maths. Wales is a long way behind the other four in all three assessments.

Only England tops the OECD average in reading. All the home countries fall below the OECD average in maths, though all but Wales are above it in science.

Compared with 2006, England’s performance has changed little in reading, increased somewhat in maths (having fallen back betweentimes) and fallen quite significantly in science.

In comparison, Northern Ireland is on a downward trend in all three domains, as is Scotland (though it produced small improvements in maths and reading in 2009). Wales has fallen back significantly in science, though somewhat less so in reading and maths.

It seems that none of the home countries is particularly outstanding when it comes to the performance of their high achievers, but England is the strongest of the four, while Wales is clearly the weakest.

A slightly different perspective can be gained by comparing high and low performance in 2012.

Table 3 below shows that the proportion of low achievers is comfortably larger than the proportion of high achievers. This is true of all the home countries and all subjects, though the difference is less pronounced in science across the board and also in Scotland. Conversely, the imbalance is much more significant in Wales.

 .

Table 3

2012 Reading Maths Science
  L5+6 L1+below L5+6 L1+below L5+6 L1+below
England 9.1 16.7 12.4 21.7 11.7 14.9
N Ireland 8.3 16.7 10.3 24.1 10.3 16.8
Scotland 7.8 12.5 10.9 18.2 8.8 12.1
Wales 4.7 20.6 5.3 29.0 5.7 19.4
UK 8.8 16.7 11.9 21.8 11.1 15.0
OECD average 8.4 18.0 12.6 23.0 8.4 17.8

.

In both reading and science, the ‘tail’ is somewhat lower than the OECD average in England, Northern Ireland and Scotland, but higher in Wales.

In maths, the ‘tail’ is higher than the OECD average in Wales and Northern Ireland, but below average in England and Scotland.

The average figures suggest that, across the OECD as a whole, low achievers comfortably outnumber high achievers in all three domains – by roughly ten percentage points in each case.

By comparison, England and Scotland fall below the OECD average at the bottom in all three domains, Northern Ireland is close to it, and Wales is above it across the board.

Overall, there is some evidence here of a longish tail of low achievement, but with considerable variation according to country and domain.

The bottom line is that all of the home countries have significant issues to address at both the top and the bottom of the achievement distribution. Any suggestion that they need to concentrate exclusively on low achievers is not supported by this evidence.
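For those who want to reproduce the comparison, the imbalance is easily computed from Table 3 as the percentage-point excess of low achievers over high achievers in each domain. A short Python sketch, using the figures exactly as tabulated:

```python
# Table 3 (PISA 2012): (% at Levels 5+6, % at Level 1 and below) per domain.
table3 = {
    "England":   {"reading": (9.1, 16.7), "maths": (12.4, 21.7), "science": (11.7, 14.9)},
    "N Ireland": {"reading": (8.3, 16.7), "maths": (10.3, 24.1), "science": (10.3, 16.8)},
    "Scotland":  {"reading": (7.8, 12.5), "maths": (10.9, 18.2), "science": (8.8, 12.1)},
    "Wales":     {"reading": (4.7, 20.6), "maths": (5.3, 29.0), "science": (5.7, 19.4)},
}

for country, domains in table3.items():
    excess = {d: round(low - high, 1) for d, (high, low) in domains.items()}
    print(country, excess)
# Wales shows by far the largest excess in every domain: 15.9, 23.7 and 13.7 points.
```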

.

Francois Peron National Park by Gifted Phoenix 2013

.

Background to PISA

 .

What is PISA?

The Programme for International Student Assessment (PISA) is a triennial OECD survey of the performance of 15 year-old students which typically covers maths, science and reading. Science was the main focus in 2006, reading in 2009 and maths in 2012.

PISA 2012 also included a computer-based assessment of problem-solving and a financial literacy assessment. However, some jurisdictions did not participate in the problem-solving exercise owing to ‘technical issues’ and financial literacy was undertaken by some countries only, as an optional extra.

Fifty-eight jurisdictions took part in PISA 2006 and 74 in PISA 2009 (65 undertook the assessment in 2009 and a further nine did so in 2010).

A total of 65 jurisdictions took part in PISA 2012.

According to the OECD’s own FAQ:

  • PISA tests reading, mathematical and scientific literacy ‘in terms of general competencies, that is, how well students can apply the knowledge and skills they have learned at school to real-life challenges. PISA does not test how well a student has mastered a school’s specific curriculum.’
  • Student performance in each field is comparable between assessments – one cannot reasonably argue therefore that a drop in performance is attributable to a more difficult assessment.
  • Each participating jurisdiction receives an overall score in each subject area – the average of all its students’ scores. The average score among OECD countries is set at 500 points (with a standard deviation of 100 points).
  • Participating jurisdictions are ranked in each subject area according to their mean scores, but:

‘It is not possible to assign a single exact rank in each subject to each country…because PISA tests only a sample of students from each country and this result is then adjusted to reflect the whole population of 15-year-old students in that country. The scores thus reflect a small measure of statistical uncertainty and it is therefore only possible to report the range of positions (upper rank and lower rank) within which a country can be placed.’

Outside the confines of reports by the OECD and its national contractors, this caveat is honoured more in the breach than the observance (the short simulation after this list shows why it matters).

  • Scores are derived from scales applied to each subject area. Each scale is divided into levels, Level 1 being the lowest and Level 6 typically the highest.
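To see why only a rank range is defensible, consider a toy simulation. It assumes a simple random sample of 5,000 students per country and the PISA standard deviation of 100 (the survey’s clustered design would inflate the uncertainty further), with invented country means a few points apart:

```python
import random

random.seed(1)
true_means = {"A": 523, "B": 522, "C": 520, "D": 516}  # hypothetical countries
n_students, sd = 5000, 100
se = sd / n_students ** 0.5  # standard error of each country mean, ~1.4 points

rank_counts = {c: [0] * len(true_means) for c in true_means}
for _ in range(10000):
    sampled = {c: random.gauss(mu, se) for c, mu in true_means.items()}
    for rank, c in enumerate(sorted(sampled, key=sampled.get, reverse=True)):
        rank_counts[c][rank] += 1

for c, counts in rank_counts.items():
    print(c, [round(n / 10000, 2) for n in counts])
# A usually tops the table but B overtakes it in a sizeable minority of samples;
# only D's position is close to certain.
```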

Further background detail on the 2012 assessments is set out in the ‘PISA 2012 Assessment and Analytical Framework’ (2013).

This explains that the framework for assessing maths was completely revised ahead of the 2012 cycle and ‘introduces three new mathematical processes that form the basis of developments in the reporting of PISA mathematics outcomes’, whereas those for science and reading were unchanged (the science framework was revised when it was the main focus in 2006 and ditto for reading in 2009).

The Framework clarifies the competency-based approach summarised in the FAQ:

‘PISA focuses on competencies that 15-year-old students will need in the future and seeks to assess what they can do with what they have learnt – reflecting the ability of students to continue learning throughout their lives by applying what they learn in school to non-school environments, evaluating their choices and making decisions. The assessment is informed, but not constrained, by the common denominator of national curricula. Thus, while it does assess students’ knowledge, PISA also examines their ability to reflect, and to apply their knowledge and experience to real-life issues in a reflective way. For example, in order to understand and evaluate scientific advice on food safety, an adult would need not only to know some basic facts about the composition of nutrients, but also to be able to apply that information.’

It explains that between 4,500 and 10,000 students drawn from 150 schools are typically tested in each jurisdiction.

Initial reports suggested that England would not take part in the 2012 assessments of problem-solving and financial literacy, but it subsequently emerged that this decision had been reversed in respect of problem-solving.

.

Setting PISA Outcomes in Context

There are plenty of reasons why one should not place excessive weight on PISA outcomes:

  • The headline rankings carry a significant health warning, which remains important, even though it is commonly ignored.

‘As the PISA 2000 and PISA 2003 samples for the United Kingdom did not meet the PISA response-rate standards, no trend comparisons are possible for these years.’ (p.1)

Hence, for the UK at least, reliable comparisons with pre-2006 results are off the table.

  • The methodology also has its academic critics. One critique puts it thus:

‘The pressure from policymakers for advice based on PISA interacts with this unhealthy mix of policy and technical people. The technical experts make sure that the appropriate caveats are noted, but the warnings are all too often ignored by the needs of the policy arm of PISA. As a result, PISA reports often list the known problems with the data, but then the policy advice flows as though those problems didn’t exist. Consequently, some have argued that PISA has become a vehicle for policy advocacy in which advice is built on flimsy data and flawed analysis.’

  • PISA is not the only game in town. TIMSS and PIRLS are equally significant, though relatively more focused on content knowledge, whereas PISA is primarily concerned with the application of skills in real life scenarios.
  • There are big political risks associated with worshipping at the PISA altar for, if the next set of outcomes is disappointing, the only possible escape route is to blame the previous administration, a strategy that wears increasingly thin with the electorate the longer the current administration has been in power.

 .

.

It would be quite wrong to dismiss PISA results out of hand, however. They are a significant indicator of the comparative performance of national (and regional) education systems. But they are solely an indicator, rather than a statement of fact.

.

What is assessed – and what constitutes high achievement – in each domain

The Assessment and Analytical Framework provides definitions of each domain and level descriptors for each level within the assessments.

.

Mathematical Literacy

The PISA 2012 mathematics framework defines mathematical literacy as:

‘An individual’s capacity to formulate, employ and interpret mathematics in a variety of contexts. It includes reasoning mathematically and using mathematical concepts, procedures, facts and tools to describe, explain and predict phenomena. It assists individuals to recognise the role that mathematics plays in the world and to make the well-founded judgments and decisions needed by constructive, engaged and reflective citizens.’

Three aspects of maths are identified:

  • Mathematical processes and the fundamental capabilities underlying them. Three processes are itemised: formulating situations mathematically; employing mathematical concepts, facts, procedures and reasoning; and interpreting, applying and evaluating mathematical outcomes. The capabilities are: communication; mathematizing (transforming a real life problem to a mathematical form); representation; reasoning and argument; devising problem-solving strategies; using symbolic, formal and technical language and operations; and using mathematical tools.
  • Content knowledge, comprising four elements: change and relationships; space and shape; quantity; and uncertainty and data.
  • The contexts in which mathematical challenges are presented: personal; occupational; societal and scientific.

Six levels are identified within the PISA 2012 mathematics scale. The top two are described thus:

  • ‘At Level 6 students can conceptualise, generalise and utilise information based on their investigations and modelling of complex problem situations. They can link different information sources and representations and flexibly translate among them. Students at this level are capable of advanced mathematical thinking and reasoning. These students can apply their insight and understandings along with a mastery of symbolic and formal mathematical operations and relationships to develop new approaches and strategies for attacking novel situations. Students at this level can formulate and precisely communicate their actions and reflections regarding their findings, interpretations, arguments and the appropriateness of these to the original situations.’
  • ‘At Level 5 students can develop and work with models for complex situations, identifying constraints and specifying assumptions. They can select, compare and evaluate appropriate problem-solving strategies for dealing with complex problems related to these models. Students at this level can work strategically using broad, well-developed thinking and reasoning skills, appropriate linked representations, symbolic and formal characterisations and insight pertaining to these situations. They can reflect on their actions and formulate and communicate their interpretations and reasoning.’

.

Reading literacy

Reading Literacy is defined as:

‘An individual’s capacity to understand, use, reflect on and engage with written texts, in order to achieve one’s goals, to develop one’s knowledge and potential, and to participate in society.’

The assessment ‘is built on three major task characteristics’:

  • Situation – the context or purpose for which reading takes place, which may be personal (practical and intellectual interests), public (activities and concerns of society), educational (for learning purposes) or occupational (accomplishment of a task).
  • Text – the range of material that is read, which may be print or digital. In the case of digital text, the environment may be authored (the reader is receptive), message based, or mixed. In the case of both print and digital text, the format may be continuous (sentences and paragraphs), non-continuous (eg graphs, lists), mixed or multiple, while the text type may be description, narration, exposition, argumentation, instruction or transaction.
  • Aspect – how readers engage with the text, which includes accessing and retrieving; integrating and interpreting; and reflecting and evaluating.

Separate proficiency scales are provided for print and digital reading respectively. Both describe achievement in terms of the task rather than the student.

The print reading scale has six levels (Level One is subdivided into two). The top levels are described as follows:

  • Level 6: Tasks at this level typically require the reader to make multiple inferences, comparisons and contrasts that are both detailed and precise. They require demonstration of a full and detailed understanding of one or more texts and may involve integrating information from more than one text. Tasks may require the reader to deal with unfamiliar ideas, in the presence of prominent competing information, and to generate abstract categories for interpretations. Reflect and evaluate tasks may require the reader to hypothesise about or critically evaluate a complex text on an unfamiliar topic, taking into account multiple criteria or perspectives, and applying sophisticated understandings from beyond the text. A salient condition for access and retrieve tasks at this level is precision of analysis and fine attention to detail that is inconspicuous in the texts.
  • Level 5: Tasks at this level that involve retrieving information require the reader to locate and organise several pieces of deeply embedded information, inferring which information in the text is relevant. Reflective tasks require critical evaluation or hypothesis, drawing on specialised knowledge. Both interpretative and reflective tasks require a full and detailed understanding of a text whose content or form is unfamiliar. For all aspects of reading, tasks at this level typically involve dealing with concepts that are contrary to expectations.

For digital reading there are only four levels, categorised as 2-5. Level 5 is described thus:

‘Tasks at this level typically require the reader to locate, analyse and critically evaluate information, related to an unfamiliar context, in the presence of ambiguity. They require generating criteria to evaluate the text. Tasks may require navigation across multiple sites without explicit direction, and detailed interrogation of texts in a variety of formats.’

 .

Scientific literacy

Scientific literacy is defined as:

‘An individual’s scientific knowledge and use of that knowledge to identify questions, to acquire new knowledge, to explain scientific phenomena, and to draw evidence-based conclusions about science-related issues, understanding of the characteristic features of science as a form of human knowledge and enquiry, awareness of how science and technology shape our material, intellectual, and cultural environments, and willingness to engage in science-related issues, and with the ideas of science, as a reflective citizen.’

The domain consists of four interrelated aspects:

  • Context – life situations involving science and technology. Contexts are personal, social or global and may relate to health, natural resources, environment, hazard or the frontiers of science and technology.
  • Knowledge – knowledge of the natural world (covering physical systems, living systems, earth and space systems and technology systems) and knowledge about science itself (scientific enquiry and scientific explanations).
  • Competencies, of which three are identified: identify scientific issues, explain phenomena scientifically and use scientific evidence.
  • Attitudes, including an interest in science, support for scientific enquiry and a motivation to act responsibly towards the natural world.

A 6-level proficiency scale is defined with the top levels explained as follows:

  • At Level 6, students can consistently identify, explain and apply scientific knowledge and knowledge about science in a variety of complex life situations. They can link different information sources and explanations and use evidence from those sources to justify decisions. They clearly and consistently demonstrate advanced scientific thinking and reasoning, and they use their scientific understanding in support of solutions to unfamiliar scientific and technological situations. Students at this level can use scientific knowledge and develop arguments in support of recommendations and decisions that centre on personal, social or global situations.
  • At Level 5, students can identify the scientific components of many complex life situations, apply both scientific concepts and knowledge about science to these situations, and can compare, select and evaluate appropriate scientific evidence for responding to life situations. Students at this level can use well-developed inquiry abilities, link knowledge appropriately and bring critical insights to situations. They can construct explanations based on evidence and arguments based on their critical analysis.

.

Denham Sunset by Gifted Phoenix

.

 Changes in Average Performance in Reading, Maths and Science

The OECD published PISA outcomes for maths, science and reading on 3 December 2013.

Similarly, the PISA National Report on England, published simultaneously, covers the three core assessments.

This section looks briefly at the headline average scores and rankings across the selected sample of twelve jurisdictions, principally to enable comparisons to be drawn with the subsequent analysis of high achievers’ performance.

I apologise in advance for any transcription errors. Please let me know if you spot any and I will correct the tables accordingly.

.

Reading

Table 4 below gives the headline average numerical scores and ranks in reading from PISA 2006, 2009 and 2012 respectively.

.

Table 4

Country 2012 2009 2006
score rank score rank score rank
Australia 512↓ 13↓ 515↑ 9↓ 513 7
Canada 523↓ 8↓ 524↓ 6↓ 527 4
Finland 524↓ 6↓ 536↓ 3↓ 547 2
Hong Kong 545↑ 2↑ 533↓ 4↓ 536 3
Ireland 523↑ 7↑ 496↓ 21↓ 517 6
S Korea 536↓ 5↓ 539↓ 2↓ 556 1
New Zealand 512↓ 13↓ 521 7↓ 521 5
Shanghai 570↑ 1= 556 1 N/A N/A
Singapore 542↑ 3↑ 526 5 N/A N/A
Taiwan 523↑ 8↑ 495↓ 23↓ 496 16
UK (England) 500↑ 23↑ 495↓ 25↓ 496 17
US 498↓ 24↓ 500 17 N/A N/A
OECD Average 496↑ 493↓ 495

.

Shanghai has retained the ascendancy it established in 2009, adding a further 14 points to its average 2009 score. Whereas it was only 17 points beyond its nearest competitor in 2009, that lead has now been extended to 25 points.

South Korea’s performance has fallen slightly and it has been leapfrogged in the rankings by Hong Kong (up 12 points), Singapore (up 16 points), and Japan (not included in the table).

Two countries making even more significant improvements are Taiwan (up 28 points) and Ireland (up 27 points). Conversely, the performance of Finland (down 12 points) and New Zealand (down 9 points) has noticeably declined. Finland’s performance has been declining since 2006.

Results remain broadly unchanged in Australia, Canada, England and the USA, while South Korea has been unable to make up the ground it lost in 2009.

Ireland’s huge improvement from a very similar starting point in 2009 throws England’s lack of progress into sharper relief, although Ireland is largely recovering ground lost in 2009, having performed relatively well in 2006.

England, like the US, continues to perform slightly above the OECD average, but has fallen further behind the Asian Tigers. The gap with the world leader is now 70 points (up from 60 in 2006).

.

Maths

Table 5 below sets out scores and rankings in maths since PISA 2006

.

Table 5

Country 2012 2009 2006
  score rank score rank score rank
Australia 504↓ 19↓ 514↓ 15↓ 520 13
Canada 518↓ 13↓ 527= 10↓ 527 7
Finland 519↓ 12↓ 541↓ 6↓ 548 2
Hong Kong 561↑ 3= 555↑ 3 547 3
Ireland 501↑ 20↑ 487↓ 32↓ 501 22
S Korea 554↑ 5↓ 546↓ 4 547 4
New Zealand 500↓ 23↓ 519↓ 13↓ 522 11
Shanghai 613↑ 1= 600 1 N/A N/A
Singapore 573↑ 2= 562 2 N/A N/A
Taiwan 560↑ 4↑ 543↓ 5↓ 549 1
UK (England) 495↑ 25↑ 493↓ 27↓ 495 24
US 481↓ 36↓ 487↑ 31↑ 474 35
OECD Average 494↓   496↓   497  

 .

The overall picture is rather similar to that for reading.

Shanghai (up 13 points) and Singapore (up 11 points) continue to stretch away at the head of the field. Taiwan (up 17 points) has also made significant improvement and is now close behind Hong Kong.

There has been relatively more modest improvement in Hong Kong and South Korea (which has been overtaken by Taiwan).

Elsewhere, Ireland has again made significant headway and is back to the level it achieved in 2006. But Finland’s score has plummeted 22 points. New Zealand is not far behind (down 19). There have also been significant falls in the performance of Australia (down 10) Canada (down 9) and the US (down 6).

The US is now trailing 13 points below the OECD average, having failed to sustain the substantial improvement it made in 2009.

In England meanwhile, results are largely unchanged, though now just above the OECD average rather than just below it.

The gap between England and world leader Shanghai has reached 118 points, compared with a gap in 2006 between England and world leader Taiwan of 54 points. The gap between England and its main Commonwealth competitors has narrowed, but only as a consequence of the significant declines in the latter.

.

Science

Table 6 below provides the same data in respect of science.

.

Table 6

Country 2012 2009 2006
  score rank score rank score rank
Australia 521↓ 16↓ 527= 10↓ 527 8
Canada 525↓ 10↓ 529↓ 8↓ 534 3
Finland 545↓ 5↓ 554↓ 2↓ 563 1
Hong Kong 555↑ 2↑ 549↑ 3↓ 542 2
Ireland 522↑ 15↑ 508 20 508 20
S Korea 538= 7↓ 538↑ 6↑ 522 11
New Zealand 516↓ 18↓ 532↑ 7 530 7
Shanghai 580↑ 1= 575 1 N/A N/A
Singapore 551↑ 3↑ 542 4 N/A N/A
Taiwan 523↑ 13↓ 520↓ 12↓ 532 4
UK (England) 516↑ 18↓ 515↓ 16↓ 516 14
US 497↓ 28↓ 502↑ 23↑ 489 29
OECD Average 501=   501↑   498  

 .

Shanghai is again out in front, having repeated the clean sweep it achieved in 2009.

However, it has managed only a 5-point improvement, while Singapore has improved by 9 points, Hong Kong by 6 and Taiwan by 3; South Korea’s score is unchanged from 2009.

New Zealand has dropped by 16 points and Finland by 9 points compared with 2009. There have been comparatively smaller declines in Australia and Canada, while Ireland has once again improved dramatically, by 14 points, and – in this case – the improvement is not simply clawing back ground lost in 2009.

England remains comfortably above the OECD average, but has made negligible improvement since 2006. US performance has dropped back below the OECD average as it has lost some of the ground it made up in 2009.

The gap between England and the world leaders is comparable with that in maths and significantly lower than in reading. The gap is now 64 points, compared with just 47 points in 2006.

.

Overall

Overall, the Asian Tigers have consolidated their positions by maintaining improvement in all three domains, though South Korea appears to be struggling to maintain the success of earlier years.

Finland and New Zealand are in worrying decline while Ireland is making rapid progress in the opposite direction.

.

.

The US results are stagnant, remaining comparatively poor, particularly in maths.

England has broadly maintained its existing performance profile, neither improving nor declining significantly. But, it is conspicuously losing ground on the world leaders, especially in maths. Other than in science it is close to the OECD average.

There is nothing here to give comfort to either the previous Government or the present incumbents. There might be some limited relief – even a degree of schadenfreude – in the fact that several better-placed nations are falling back more severely. But of course one cannot win the ‘global race’ by simply standing still.

.

Floral by Gifted Phoenix

 .

Changes in High Achievers’ Performance

So much for the average headline figures.

The remainder of this post is focused on high achievement data. The ensuing sections once more examine reading, maths and science in that order, followed by a section on all-rounders.

.

Reading

Table 7 shows how the percentage achieving higher levels in reading has changed since PISA 2006, providing separate columns for Level 6 and for Levels 5 and 6 combined (there was no Level 6 in 2006).

.

Table 7

Country 2012 2009 2006
Level 6 Levels 5+6 Level 6 Levels 5+6 Level 5
Australia 1.9 11.7 2.1 12.8 10.6
Canada 2.1 12.9 1.8 12.8 14.5
Finland 2.2 13.5 1.6 14.5 16.7
Hong Kong 1.9 16.8 1.2 12.4 12.8
Ireland 1.3 11.4 0.7 7.0 11.7
S Korea 1.6 14.2 1.0 12.9 21.7
New Zealand 3.0 13.9 2.9 15.8 15.9
Shanghai 3.8 25.1 2.4 19.4 N/A
Singapore 5.0 21.2 2.6 15.7 N/A
Taiwan 1.4 11.8 0.4 5.2 4.7
UK (England) 1.3 9.1 1.0 8.1 9.2
US 1.0 7.9 1.5 9.9 N/A
OECD Average 1.1 8.4 1.0 7.0 8.6

 

This reveals that:

  • In 2012, Singapore has a clear lead on its competitors at Level 6, but it is overtaken by Shanghai at Level 5 and above. New Zealand also remains comparatively strong at Level 6, but falls back significantly when Levels 5 and 6 are combined.
  • The other Asian Tigers do not perform outstandingly well at Level 6: Hong Kong, South Korea and Taiwan are all below 2.0%, behind Canada and Finland. However, all but Taiwan outscore their competitors when Levels 5 and 6 are combined.
  • Hong Kong, Shanghai, Singapore and Taiwan are all making fairly strong progress over time. Patterns are rather less discernible for other countries, though there is a downward trend in the US.
  • In Finland, New Zealand and Canada – countries that seem to be falling back overall – the percentage of Level 6 readers continues to improve. This might suggest that the proportion of the highest performers in reading is not significantly affected when national performance begins to slide.
  • When judged against these world leaders, England’s comparative performance is brought into much clearer perspective. At Level 6 it is not far behind Taiwan, South Korea and even Hong Kong. But, at Level 5 and above, the gap is somewhat more pronounced. England is improving, but very slowly.
  • The comparison with Taiwan is particularly stark. In 2006, England had roughly twice as many students performing at Level 5. By 2009 Taiwan had caught up some of this ground and, by 2012, it had overtaken.

Table 8 compares changes since PISA 2006 in national performance at Level 5 and above with changes at Level 1 and below.

This is intended to reveal the balance between top and bottom – and whether this sample of world-leading and other English-speaking jurisdictions is making consistent progress at either end of the spectrum.

.

 Table 8

Country Levels 5 (and 6 from 2009) Level 1 (or equivalent) and below
2006 2009 2012 2006 2009 2012
Australia 10.6 12.8 11.7 13.4 14.3 14.2
Canada 14.5 12.8 12.9 11.0 10.3 10.9
Finland 16.7 14.5 13.5 4.8 8.1 11.3
Hong Kong 12.8 12.4 16.8 7.2 8.3 6.8
Ireland 11.7 7.0 11.4 12.2 17.2 9.7
S Korea 21.7 12.9 14.2 5.7 5.8 7.6
New Zealand 15.9 15.8 13.9 14.6 14.3 16.3
Shanghai N/A 19.4 25.1 N/A 4.1 2.9
Singapore N/A 15.7 21.2 N/A 12.4 9.9
Taiwan 4.7 5.2 11.8 14.3 15.6 11.5
UK (England) 9.2 8.1 9.1 18.9 18.4 16.7
US N/A 9.9 7.9 N/A 17.7 16.7
OECD Average 8.6 7.0 8.4 20.1 18.8 18

 

We can see that:

  • The countries with the highest proportion of students at Level 5 and above tend to have the lowest proportion at Level 1 and below. In Shanghai in 2012, there is a 22 percentage point gap between these two populations and fewer than 3 in every hundred fall into the lower attaining group.
  • Singapore is much closer to Shanghai at the top end than it is at the bottom. But even Shanghai seems to be making faster progress at the top than at the bottom, which might suggest that it is approaching the point at which the proportion of low achievers cannot be further reduced.
  • Compared with Hong Kong and South Korea, Singapore has a higher proportion of both high achievers and low achievers.
  • Whereas Taiwan had three times as many low achievers as high achievers in 2006, by 2012 the proportions were broadly similar, but progress at the top end is much faster than at the bottom.
  • The decline in Finland has less to do with performance at the top end (which has fallen by three percentage points) than with performance at the bottom (which has increased by more than six percentage points).
  • Canada has consistently maintained a higher percentage of high achievers than low achievers, but the reverse is true in Australia. In New Zealand the percentage at the top is declining and the percentage at the bottom is increasing. The gap between the two has narrowed slightly in England, but not significantly so.
  • To catch up with Shanghai, England has to close a gap of some 16 percentage points at the top end, compared with one of around 14 percentage points at the bottom (see the quick check below).
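A quick check of that arithmetic against the 2012 columns of Table 8:

```python
# Table 8, PISA 2012 reading: (% at Levels 5+6, % at Level 1 and below).
shanghai = (25.1, 2.9)
england = (9.1, 16.7)

top_gap = shanghai[0] - england[0]     # gap to close at the top end
bottom_gap = england[1] - shanghai[1]  # gap to close at the bottom end
print(round(top_gap, 1), round(bottom_gap, 1))  # 16.0 and 13.8 points
```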

The PISA National Report on England offers some additional analysis, noting that 18 jurisdictions had a higher proportion of pupils than England at Level 5 or above in 2012, including all those that outperformed England overall (with the exception of Estonia and Macao), and also France and Norway.

The National Report relies more heavily on comparing the performance of learners at the 5th and 95th percentiles in each country, arguing that:

‘This is a better measure for comparing countries than using the lowest and highest scoring pupils, as such a comparison may be affected by a small number of pupils in a country with unusually high or low scores.’

This is true in the sense that PISA's minimum sample of 4,500 participants would yield fewer than 100 Level 6 readers in many jurisdictions: a 2% incidence at Level 6, for example, equates to just 90 students.

On the other hand, the National Report fails to point out that analysis on this basis is not particularly informative about the comparative achievement of the criterion-referenced standards denoted by the PISA thresholds.

It says rather more about the spread of performance in each country and rather less about direct international comparisons.

Key points include:

  • In England the score of learners at the 5th percentile was 328, compared with 652 at the 95th percentile. This difference of 324 points is slightly larger than the OECD average difference of 310 points. More than two-thirds of OECD countries had a smaller difference between these percentiles.
  • Compared with PISA 2009, the score of high achievers at the 95th percentile increased by six points to 652, while the score of low achievers at the 5th percentile fell by six points to 328. The resulting 324-point gap is wider than in 2009 (312 points) but narrower than in 2006 (337 points); the sketch after this list reproduces the arithmetic. Thirteen OECD countries reported a wider spread of attainment than England.
  • Of countries outperforming England, only Japan (325 points), Singapore (329 points), Belgium (339 points) and New Zealand (347 points) demonstrated a similar or wider spread of attainment. Shanghai had the lowest difference (259 points) followed by Estonia (263).
  • The strongest performing jurisdictions at the 95th percentile were Singapore (698), Shanghai (690) and Japan (689), compared with 652 for England.
  • Amongst jurisdictions ranked higher than England, only the Netherlands, Liechtenstein, Estonia and Macao secured a lower score at the 95th percentile. Only Belgium reported a lower score at the 5th percentile.
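The spread calculation itself is easy to verify. Here is a quick sketch using the England reading scores quoted above; note that the 2009 percentile scores are implied by the quoted six-point shifts, so treat them as reconstructed rather than official figures:

```python
# England's reading spread between the 5th and 95th percentiles.
percentiles = {
    2009: (334, 646),  # implied by the six-point shifts quoted above
    2012: (328, 652),  # as reported
}

for year, (p5, p95) in sorted(percentiles.items()):
    print(year, "spread:", p95 - p5)  # 2009: 312, 2012: 324

print("Excess over OECD average (310):", (652 - 328) - 310)  # 14 points
```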


Maths

Turning to maths, Table 9 illustrates changes in the pattern of high achievement since 2006, again showing the percentages performing at Level 6 and at Level 5 and above respectively.


Table 9: Maths – percentage achieving Level 6 (L6) and Levels 5 and 6 combined (L5+6)

| Country | L6 2012 | L5+6 2012 | L6 2009 | L5+6 2009 | L6 2006 | L5+6 2006 |
| --- | --- | --- | --- | --- | --- | --- |
| Australia | 4.3 | 14.8 | 4.5 | 16.4 | 4.3 | 16.4 |
| Canada | 4.3 | 16.4 | 4.4 | 18.3 | 4.4 | 18.0 |
| Finland | 3.5 | 15.2 | 4.9 | 21.6 | 6.3 | 24.4 |
| Hong Kong | 12.3 | 33.4 | 10.8 | 30.7 | 9.0 | 27.7 |
| Ireland | 2.2 | 10.7 | 0.9 | 6.7 | 1.6 | 10.2 |
| S Korea | 12.1 | 30.9 | 7.8 | 25.5 | 9.1 | 27.1 |
| New Zealand | 4.5 | 15.0 | 5.3 | 18.9 | 5.7 | 18.9 |
| Shanghai | 30.8 | 55.4 | 26.6 | 50.7 | N/A | N/A |
| Singapore | 19.0 | 40.0 | 15.6 | 35.6 | N/A | N/A |
| Taiwan | 18.0 | 37.2 | 11.3 | 28.5 | 11.8 | 31.9 |
| UK (England) | 3.1 | 12.4 | 1.7 | 9.9 | 2.5 | 11.2 |
| US | 2.2 | 9.0 | 1.9 | 9.9 | 1.3 | 7.7 |
| OECD Average | 3.3 | 12.6 | 3.1 | 12.7 | 3.3 | 13.4 |


The variations between countries tend to be far more pronounced than in reading:

  • There is a huge 28 percentage point spread in performance at Level 6 within this sample – from 2% to 30% – compared with a three percentage point spread in reading. The spread at Level 5 and above is also significantly larger – 46 percentage points compared with 17 percentage points in reading.
  • Shanghai has an 11 percentage point lead over its nearest competitor at Level 6 and an even larger 15 percentage point lead for Level 5 and above. Moreover it has improved significantly on both counts since 2009. Well over half its sample is now performing at Level 5 or above and almost a third are at Level 6.
  • Singapore and Taiwan are the next best performers and are relatively close together. Both are improving but, following a small dip in 2009, Taiwan is improving at a faster rate – faster even than Shanghai.
  • Hong Kong and South Korea also have similar 2012 profiles, as they did back in 2006. South Korea also lost ground in 2009, but is now improving at a faster rate than Hong Kong.
  • Finland appears to be experiencing quite significant decline: the proportion of Level 6 performers in 2012 is not far short of half what it was in 2006 and performance above Level 5 has fallen by more than nine percentage points. This is a somewhat different pattern to reading, in that the top performers are also suffering from the overall decline.


  • Australia, Canada and New Zealand have maintained broadly the same performance over time, though all are showing a slight falling off at Level 5 and above, and in New Zealand this also applies at Level 6.
  • After a serious slump in 2009, Ireland has recovered and overtaken its 2006 position. Meanwhile, the US has been making some progress at Level 6 but is less convincing at Level 5 and above.
  • Once again, this comparison does not particularly flatter England. It is not too far behind the Commonwealth countries and declining Finland at Level 6 but the gap is slightly larger at Level 5 and above. That said, England has consistently performed below the OECD average and remains in that position.
  • There are, however, some grounds for domestic celebration, in that England has improved since 2009 by 2.5 percentage points at Level 5 and above, and by 1.4 percentage points at Level 6 (see the sketch below). This rate of improvement bears comparison with Hong Kong's, albeit from a much lower base. It suggests a narrowing gap between England and its Commonwealth counterparts.
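The improvement rates are simple differences between the 2009 and 2012 columns of Table 9. A short sketch comparing England with Hong Kong (figures transcribed from the table, country selection mine):

```python
# 2009 -> 2012 improvement at the top maths benchmarks (from Table 9).
levels_5_6 = {"UK (England)": (9.9, 12.4), "Hong Kong": (30.7, 33.4)}  # (2009, 2012)
level_6 = {"UK (England)": (1.7, 3.1), "Hong Kong": (10.8, 12.3)}

for label, data in (("Levels 5+6", levels_5_6), ("Level 6", level_6)):
    for country, (y09, y12) in data.items():
        print(f"{label}, {country}: {y12 - y09:+.1f} percentage points")
```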

Table 10 gives the comparison with achievement at the bottom end of the distribution, setting out the percentages performing at different levels.


Table 10: Maths – percentage achieving Levels 5 and 6 (L5+6) v Level 1 and below (≤L1)

| Country | L5+6 2006 | L5+6 2009 | L5+6 2012 | ≤L1 2006 | ≤L1 2009 | ≤L1 2012 |
| --- | --- | --- | --- | --- | --- | --- |
| Australia | 16.4 | 16.4 | 14.8 | 13.0 | 15.9 | 18.6 |
| Canada | 18.0 | 18.3 | 16.4 | 10.8 | 11.4 | 13.8 |
| Finland | 24.4 | 21.6 | 15.2 | 5.9 | 7.8 | 12.2 |
| Hong Kong | 27.7 | 30.7 | 33.4 | 9.5 | 8.8 | 8.5 |
| Ireland | 10.2 | 6.7 | 10.7 | 16.4 | 20.9 | 16.9 |
| S Korea | 27.1 | 25.5 | 30.9 | 8.8 | 8.1 | 9.1 |
| New Zealand | 18.9 | 18.9 | 15.0 | 14.0 | 15.5 | 22.6 |
| Shanghai | N/A | 50.7 | 55.4 | N/A | 4.8 | 3.7 |
| Singapore | N/A | 35.6 | 40.0 | N/A | 9.8 | 8.3 |
| Taiwan | 31.9 | 28.5 | 37.2 | 11.9 | 12.8 | 12.8 |
| UK (England) | 11.2 | 9.9 | 12.4 | 19.9 | 19.8 | 21.7 |
| US | 7.7 | 9.9 | 9.0 | 28.1 | 23.4 | 25.9 |
| OECD Average | 13.4 | 12.7 | 12.6 | 21.3 | 22.0 | 23.0 |


Key points include:

  • The same pattern is discernible amongst the strongest performers as was evident with reading: those with the highest percentages at the top end tend to have the lowest percentages at the bottom. If anything this distinction is even more pronounced. Shanghai records a 52 percentage point gap between its highest and lowest performers and the latter group is only slightly larger than the comparable group in the reading assessment.
  • Amongst the Asian Tigers, the ratio between top and bottom is broadly 3:1 or better in favour of the top (see the sketch after this list). For most of the other countries in the sample there is never more than a 7 percentage point gap between top and bottom, but this stretches to more than 9 points in England and to almost 17 points in the US. Needless to say, the low achievers are in the majority in both cases.
  • Although the percentages for top and bottom in Australia are broadly comparable, it has shifted since 2006 from a position where the top end was in the majority by 3 percentage points to almost a mirror image of that pattern. In New Zealand, the lower achievers have increased by almost 9 percentage points, more than double the rate of decline at the top end, as their ‘long tail’ grows significantly longer.
  • Apart from Shanghai, only Singapore, Hong Kong and South Korea have fewer than 10% in the lower performing category. Despite its reputation as a meritocratic environment, Singapore gets much closer to Shanghai at the bottom of the distribution than it does at the top. The same is true of Hong Kong and South Korea.
  • It is also noticeable that none of the Tigers is making extraordinary progress at the bottom end. Hong Kong has reduced this population by one percentage point since 2006, while Singapore and Shanghai have managed reductions of only 1.5 and 1.1 percentage points respectively since 2009. The percentage has increased in South Korea and Taiwan. Improvement has been significantly stronger at the top of the distribution. Again this might suggest that the Tigers are closing in on the point where they cannot improve further at the bottom end.
  • In Finland, the percentage achieving the higher levels has fallen by over 9 percentage points since 2006, while the increase at the lower levels is over 6 percentage points. This compares with a 3 point fall at the top and a 6 point rise at the bottom in reading. The slump amongst Finland’s high achievers is clearly more pronounced in maths.
  • England’s 9.3 percentage point gap between the top and bottom groups in 2012 is slightly larger than the 8.7 point gap in 2006. It has a whopping 43 percentage point gap to make up on Shanghai at the top end, and an 18 point gap at the bottom. England is just on the right side of the OECD average at the bottom and just on the wrong side at the top.
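The ratios and gaps quoted above can be reproduced directly from the 2012 columns of Table 10. A minimal sketch (figures transcribed from the table):

```python
# Top:bottom ratios and gaps in PISA 2012 maths (from Table 10).
# Each entry: (% at Levels 5 and 6, % at Level 1 and below).
maths_2012 = {
    "Shanghai": (55.4, 3.7),
    "Singapore": (40.0, 8.3),
    "Hong Kong": (33.4, 8.5),
    "S Korea": (30.9, 9.1),
    "Taiwan": (37.2, 12.8),
    "UK (England)": (12.4, 21.7),
    "US": (9.0, 25.9),
}

for country, (top, bottom) in maths_2012.items():
    print(f"{country}: ratio {top / bottom:.1f}:1, gap {top - bottom:+.1f} points")
```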


The National Report notes that all jurisdictions ahead of England in the rankings had a higher percentage of learners at Level 5 or above.

As for the percentile comparison:

  • The difference between the 5th percentile (335 points) and the 95th percentile (652 points) was 316 points in England, larger than the average difference of 301 points across OECD countries.
  • Ten countries had a greater difference than this, five of them amongst those with the highest overall mean scores. Others were Israel, Belgium, Slovakia, New Zealand and France.
  • Whereas the difference between the lowest and highest percentiles has increased only slightly across OECD countries as a whole, the increase in England is more pronounced: from 285 points in 2009 to 316 points in 2012. This is attributable to falling scores at the 5th percentile (350 in 2006, 349 in 2009 and 335 in 2012) combined with a recovery at the 95th percentile (643 in 2006, 634 in 2009 and 652 in 2012) – see the sketch below.
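Again the spread arithmetic is easily checked. Note that the rounded percentile scores give 317 points for 2012, where the National Report quotes 316, presumably because it works from unrounded data:

```python
# England's maths spread between the 5th and 95th percentiles.
scores = {2006: (350, 643), 2009: (349, 634), 2012: (335, 652)}  # (5th, 95th)

for year, (p5, p95) in sorted(scores.items()):
    print(year, "spread:", p95 - p5)  # 2006: 293, 2009: 285, 2012: 317 (reported as 316)
```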


Science

Table 11 compares the performance of this sample of PISA participants at the higher levels in the science assessment on the last three occasions.


Table 11: Science – percentage achieving Level 6 (L6) and Levels 5 and 6 combined (L5+6)

| Country | L6 2012 | L5+6 2012 | L6 2009 | L5+6 2009 | L6 2006 | L5+6 2006 |
| --- | --- | --- | --- | --- | --- | --- |
| Australia | 2.6 | 13.5 | 3.1 | 14.6 | 2.8 | 14.6 |
| Canada | 1.8 | 11.3 | 1.6 | 12.1 | 2.4 | 14.4 |
| Finland | 3.2 | 17.1 | 3.3 | 18.7 | 3.9 | 20.9 |
| Hong Kong | 1.8 | 16.7 | 2.0 | 16.2 | 2.1 | 15.9 |
| Ireland | 1.5 | 10.8 | 1.2 | 8.7 | 1.1 | 9.4 |
| S Korea | 1.1 | 11.7 | 1.1 | 11.6 | 1.1 | 10.3 |
| New Zealand | 2.7 | 13.4 | 3.6 | 17.6 | 4.0 | 17.6 |
| Shanghai | 4.2 | 27.2 | 3.9 | 24.3 | N/A | N/A |
| Singapore | 5.8 | 22.7 | 4.6 | 19.9 | N/A | N/A |
| Taiwan | 0.6 | 8.4 | 0.8 | 8.8 | 1.7 | 14.6 |
| UK (England) | 1.9 | 11.7 | 1.9 | 11.6 | 3.0 | 14.0 |
| US | 1.1 | 7.4 | 1.3 | 9.2 | 1.5 | 9.1 |
| OECD Average | 1.2 | 8.4 | 1.1 | 8.5 | 1.3 | 8.8 |


In science, the pattern of high achievement has more in common with reading than maths. It shows that:

  • There is again a relatively narrow spread of performance across this sample of jurisdictions – just over five percentage points at Level 6 and around 20 percentage points at Level 5 and above.
  • As in reading, Singapore outscores Shanghai at Level 6, but is outperformed by Shanghai at Level 5 and above. Both are showing steady improvement, but Singapore’s improvement at Level 6 is more pronounced than Shanghai’s.
  • Finland remains the third best performer, although the proportion of learners achieving at both Level 6 and Level 5 plus has been declining slightly since 2006.
  • Another similarity with reading is that Australia, Finland and New Zealand all perform significantly better at Level 6 than Hong Kong, South Korea and Taiwan. Hong Kong alone performs equally well at Level 5 and above. None of these three Asian Tigers has made significant progress since 2006.
  • In Australia, Canada, New Zealand and the US there has also been relatively little progress over time – indeed some evidence to suggest a slight decline. Conversely, Ireland seems to be moving forward again after a slight dip at Level 5 and above in 2009.


  • England was a strong performer in 2006, broadly comparable with many of its competitors, but it fell back significantly in 2009 and has made no progress since. The proportions are holding up, but there has been no substantive improvement since 2009, unlike in maths and (to a lesser extent) reading. However, England continues to perform somewhat above the OECD average. There is an interesting parallel with Taiwan, although that country dipped even further than England in 2009.

Table 12 provides the comparison with the proportions achieving the lower thresholds.


Table 12: Science – percentage achieving Levels 5 and 6 (L5+6) v Level 1 and below (≤L1)

| Country | L5+6 2006 | L5+6 2009 | L5+6 2012 | ≤L1 2006 | ≤L1 2009 | ≤L1 2012 |
| --- | --- | --- | --- | --- | --- | --- |
| Australia | 14.6 | 14.6 | 13.5 | 12.8 | 12.6 | 13.6 |
| Canada | 14.4 | 12.1 | 11.3 | 10.0 | 9.5 | 10.4 |
| Finland | 20.9 | 18.7 | 17.1 | 4.1 | 6.0 | 7.7 |
| Hong Kong | 15.9 | 16.2 | 16.7 | 8.7 | 6.6 | 5.6 |
| Ireland | 9.4 | 8.7 | 10.8 | 15.5 | 15.1 | 11.1 |
| S Korea | 10.3 | 11.6 | 11.7 | 11.2 | 6.3 | 6.7 |
| New Zealand | 17.6 | 17.6 | 13.4 | 13.7 | 13.4 | 16.3 |
| Shanghai | N/A | 24.3 | 27.2 | N/A | 3.2 | 2.7 |
| Singapore | N/A | 19.9 | 22.7 | N/A | 11.5 | 9.6 |
| Taiwan | 14.6 | 8.8 | 8.4 | 11.6 | 11.1 | 9.8 |
| UK (England) | 14.0 | 11.6 | 11.7 | 16.7 | 14.8 | 14.9 |
| US | 9.1 | 9.2 | 7.4 | 24.4 | 18.1 | 18.2 |
| OECD Average | 8.8 | 8.5 | 8.4 | 19.3 | 18.0 | 17.8 |


  • Amongst the top performers the familiar pattern reappears. In 2012 Shanghai has 27% in the top categories against 2.7% in the bottom categories. This is very similar to reading (25.1% against 2.9%). At the bottom end, Shanghai’s nearest competitors are Hong Kong and South Korea, while Singapore and Taiwan are each approaching 10% at these levels. This is another similarity with reading (whereas, in maths, Singapore is more competitive at the lower end).
  • Since 2009, Shanghai has managed only a comparatively modest 0.5 percentage point reduction in the proportion of its students at the bottom end, compared with an increase of almost three percentage points at the top end. This may lend further support to the hypothesis that it is approaching the point at which further bottom-end improvement is impossible.
  • No country has made consistently strong progress at the bottom end, though Ireland has made a significant improvement since 2009. There has been steady if unspectacular improvement in Hong Kong, Taiwan and Singapore. South Korea, having achieved a major improvement in 2009, has found itself unable to continue this positive trend.
  • Finland’s negative trend is consistent since 2006 at both ends of the achievement spectrum, though the decline is not nearly as pronounced as in maths. In science Finland is maintaining a ratio of 2:1 in favour of the performers at the top end, while percentages at top and bottom are now much closer together in both reading and maths.
  • There are broadly similar negative trends at top and bottom alike in the Commonwealth countries of Australia, Canada and New Zealand, although they have fallen back in fits and starts. In New Zealand the balance between top and bottom has shifted from being 4 percentage points in favour of the top end in 2006, to 3 percentage points in favour of the bottom end by 2012.
  • A similar gap in favour of lower achievers also exists in England, unchanged from 2009. Compared with the US (whose top-bottom balance is a virtual mirror image of Finland’s, Singapore’s or South Korea’s), England is in a reasonable position – rather similar to New Zealand’s, now that the latter has fallen back.
  • England has a 15.5 percentage point gap to make up on Shanghai at the top end of the distribution, compared with a 12.2 percentage point gap at the bottom.

The PISA 2012 National Study reports that only a handful of jurisdictions – shown in Table 11 above – have a larger percentage of learners achieving Level 6. Conversely, England has a relatively large number of low achievers compared with these jurisdictions.

Rather tenuously, it argues on this basis that:

‘Raising the attainment of lower achievers would be an important step towards improving England’s performance and narrowing the gap between highest and lowest performers.’

When it comes to comparison of the 5th and 95th percentiles:

  • The scores at the 5th percentile (343) and the 95th percentile (674) give a difference of 331 points, larger than the OECD average of 304 points. Only eight jurisdictions had a wider distribution, among them Israel, New Zealand, Luxembourg, Slovakia, Belgium, Singapore and Bulgaria.
  • The OECD average difference between the 5th and 95th percentiles has reduced slightly (from 311 in 2006 to 304 in 2012) and there has also been relatively little change in England.


Top-Performing All-Rounders

Volume 1 of the OECD’s ‘PISA 2012 Results’ document provides additional data about all-round top performers achieving Level 5 or above in each of the three domains.

[Diagram: proportion of all-round top performers – students achieving Level 5 or above in maths, reading and science – PISA 2012 (OECD)]

The diagram shows that 4.4% of learners across OECD countries achieve this feat.

This is up 0.3 percentage points on the PISA 2009 figure revealed in this PISA in Focus publication.

Performance on this measure in 2012, compared with 2009, amongst the sample of twelve jurisdictions is shown in Table 13 below. (NB the UK figure is for the UK as a whole, not just England.)


Table 13: Percentage of all-round top performers (Level 5 or above in all three domains), with rank

| Jurisdiction | 2012 % | 2012 rank | 2009 % | 2009 rank |
| --- | --- | --- | --- | --- |
| Australia | 7.6 | 7 | 8.1 | 6 |
| Canada | 6.5 | 9 | 6.8 | 8 |
| Finland | 7.4 | 8 | 8.5 | 4 |
| Hong Kong | 10.9 | 4 | 8.4 | 5 |
| Ireland | 5.7 | 15 | 3.2 | 23 |
| S Korea | 8.1 | 5 | 7.2 | 7 |
| New Zealand | 8.0 | 6 | 9.9 | 3 |
| Shanghai | 19.6 | 1 | 14.6 | 1 |
| Singapore | 16.4 | 2 | 12.3 | 2 |
| Taiwan | 6.1 | 10 | 3.9 | 17 |
| UK | 5.7 | 15 | 4.6 | 14 |
| US | 4.7 | 18 | 5.2 | 11 |
| OECD Average | 4.4 | – | 4.1 | – |


In terms of percentage increases, the fastest progress on this measure is being made by Hong Kong, Ireland, Shanghai, Singapore and Taiwan. Shanghai has improved a full five percentage points and one in five of its students now achieve this benchmark.

The UK is making decent progress, particularly compared with Australia, Canada, Finland, New Zealand and the US, all of which are moving in the opposite direction.
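The direction of travel on this measure is easy to extract from Table 13. A minimal sketch (figures transcribed from the table):

```python
# 2009 -> 2012 movement on the all-rounder measure (from Table 13).
all_rounders = {  # jurisdiction: (% in 2012, % in 2009)
    "Australia": (7.6, 8.1), "Canada": (6.5, 6.8), "Finland": (7.4, 8.5),
    "Hong Kong": (10.9, 8.4), "Ireland": (5.7, 3.2), "S Korea": (8.1, 7.2),
    "New Zealand": (8.0, 9.9), "Shanghai": (19.6, 14.6), "Singapore": (16.4, 12.3),
    "Taiwan": (6.1, 3.9), "UK": (5.7, 4.6), "US": (4.7, 5.2),
}

changes = {c: round(y12 - y09, 1) for c, (y12, y09) in all_rounders.items()}
print("Improving:", {c: d for c, d in changes.items() if d > 0})  # led by Shanghai, +5.0
print("Declining:", {c: d for c, d in changes.items() if d < 0})
```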

The Report notes:

‘Among countries with similar mean scores in PISA, there are remarkable differences in the percentage of top-performing students. For example, Denmark has a mean score of 500 points in mathematics in PISA 2012 and 10% of students perform at high proficiency levels in mathematics, which is less than the average of around 13%. New Zealand has a similar mean mathematics score of 500 points, but 15% of its students attain the highest levels of proficiency, which is above the average…these results could signal the absence of a highly educated talent pool for the future.

Having a large proportion of top performers in one subject is no guarantee of having a large proportion of top performers in the others. For example, Switzerland has one of the 10 largest shares of top performers in mathematics, but only a slightly-above-average share of top performers in reading and science.

Across the three subjects and across all countries, girls are as likely to be top performers as boys. On average across OECD countries, 4.6% of girls and 4.3% of boys are top performers in all three subjects…To increase the share of top-performing students, countries and economies need to look at the barriers posed by social background…the relationship between performance and students’… and schools’ organisation, resources and learning environment.’ (p65)


[Image: Denizen by Gifted Phoenix]

 

Conclusions

Priorities for Different Countries

On the basis of this evidence, it is possible to draw up a profile of each country’s performance across the three assessments at these higher levels, and so make a judgement about its prospects of securing ‘a highly educated talent pool for the future’. The twelve jurisdictions in our sample might be advised as follows:

  • Shanghai should focus on establishing ascendancy at Level 6 in reading and science, particularly if there is substance to the suspicion that scope for improvement at the bottom of the spectrum is now rather limited. Certainly it is likely to be easier to effect further improvement at the very top.
  • Singapore has some ground to catch up with Shanghai at Level 6 in maths. It has improved its own score by more than three percentage points since 2009, but Shanghai has advanced at least as quickly, so there is still some way to go. Otherwise it should concentrate on strengthening its position at Level 5 and above, where Shanghai is also conspicuously stronger.
  • Hong Kong needs to focus on Level 6 in reading and science, but perhaps also in maths where it has been extensively outpaced by Taiwan since 2009. At levels 5 and above it faces strong pressure to maintain proximity with Shanghai and Singapore, as well as marking the charge made by Taiwan in reading and maths. Progress in science is relatively slow.
  • South Korea should also pay attention to Level 6 in reading and science. It is improving faster than Hong Kong at Level 6 in maths but is also losing ground on Taiwan. And although South Korea now seems back on track at Level 5 and above in maths, progress remains comparatively slow in reading and science, so both Levels 5 and 6 need attention.
  • Taiwan has improved strongly in reading and maths since 2009, but is deteriorating in science at both Level 5 and Level 6. It still has much ground to pick up at Level 6 in reading. Its profile is not wildly out of kilter with Hong Kong’s and South Korea’s.
  • Finland is bucking a downward trend at Level 6 in reading and slipping only slightly in science, so the more noticeable decline is in maths. However, the ground lost is proportionately greater at Level 5 and above, once again more prominently in maths. As Finland fights to stem a decline at the lower achievement levels, it must take care not to neglect those at the top.
  • Australia seems to be slipping back at both Levels 5 and 6 across all three assessments, while also struggling at the bottom end. There are no particularly glaring weaknesses, but it needs to raise its game across the board.
  • Canada is just about holding its own at Level 6, but performance is sliding back at Level 5 and above across all three domains. This coincides with relatively little improvement and some falling back at the lower end of the achievement distribution. It faces a similar challenge to Finland’s, although not so pronounced.
  • New Zealand can point to only a few bright spots in an otherwise gloomy picture, one being that Level 6 performance in reading is holding up. Elsewhere, there is little to celebrate in terms of high achievers’ performance. New Zealand is another country that, in tackling more serious problems with the ‘long tail’, should not take its eye off the ball at the top.


  • The US is also doing comparatively well in reading at Level 6, but is otherwise either treading water or slipping back a little. Both Level 6 and Level 5 and above need attention. The gap between it and the world’s leading countries continues to increase, suggesting that it faces future ‘talent pool’ issues unless it can turn round its performance.
  • Ireland is a good news story, at the top end as much as the bottom. It has caught up lost ground and is beginning to push beyond where it was in 2006. Given Ireland’s proximity, the home countries might want to understand more clearly why their nearest neighbour is improving at a significantly faster rate. That said, Ireland has significant room for improvement at both Level 6 and Level 5 and above.
  • England’s performance at Level 6 and Level 5 and above has held up surprisingly well compared with 2009, especially in maths. When the comparison is solely historical, there might appear to be no real issue. But many other countries are improving at a much stronger rate and so England (as well as the other home countries) risks being left behind in the ‘global race’ declared by its Prime Minister. The world leaders now manage three times as many Level 6 performers in science, four times as many in reading and ten times as many in maths. It must withstand the siren voices urging it to focus disproportionately at the bottom end.


Addressing These Priorities

It is far more straightforward to pinpoint these different profiles and priorities than to recommend convincingly how they should be addressed.

The present UK Government believes firmly that its existing policy direction will deliver the improvements that will significantly strengthen its international competitiveness, as judged by PISA outcomes. It argues that it has learned these lessons from careful study of the world’s leading performers and is applying them carefully and rigorously, with due attention to national needs and circumstances.


But – the argument continues – it is too soon to see the benefits of its reforms in PISA 2012, such is the extended lag time involved in improving the educational outcomes of 15 year-olds. According to this logic, the next Government will reap the significant benefits of the present Government’s reform programme, as revealed by PISA 2015.

Recent history suggests that this prediction must be grounded more in hope than expectation, not least because establishing causation between indirect policy interventions and improved test performance must surely be the weakest link in the PISA methodology.

But, playing devil’s advocate for a moment, we might reasonably conclude that any bright spots in England’s performance are attributable to interventions that the previous Government got right between five and ten years ago. It would not be unreasonable to suggest that the respectable progress made at the top PISA benchmarks is at least partly attributable to the national investment in gifted education during that period.

We might extend this argument by suggesting a similar relationship between progress in several of the Asian Tigers at these higher levels and their parallel investment in gifted education. Previous posts have drawn attention to the major programmes that continue to thrive in Hong Kong, Singapore, South Korea and Taiwan.

Shanghai might have reached the point where success in mainstream education renders investment in gifted education unnecessary. On the other hand, such a programme might help it to push forward at the top in reading and science – perhaps the only conspicuous chink in its armour. There are lessons to be learned from Singapore. (Gifted education is by no means dormant on the Chinese Mainland and there are influential voices pressing the national government to introduce more substantive reforms.)

Countries like Finland might also give serious consideration to more substantive investment in gifted education geared to strengthening high attainment in these core domains. There is increasing evidence that the Finns need to rethink their approach.


The relationship between international comparisons studies like PISA and national investment in gifted education remains poorly researched and poorly understood, particularly how national programmes can most effectively be aligned with and support such assessments.

The global gifted education community might derive some much-needed purpose and direction by establishing an international study group to investigate this issue, providing concrete advice and support to governments with an interest.


GP

December 2013