Labour’s announcement is obviously timed to anticipate Ofsted’s report.
By bringing forward his report to this side of the General Election, HMCI has certainly ensured that it will exert much more leverage on political decision-making. He will want that to impact on the Conservatives as well as Labour.
What exactly is Labour’s commitment?
The original newspaper report is so far our only source. (I will add any further details from material that appears subsequently.)
It says that, if elected:
Labour would establish an independently-administered Gifted and Talented Fund, which is likely to ‘have a £15m pot initially’.
Schools would be able to bid for money from the Fund to ‘help their work in stretching the most able pupils’.
The Fund would help to establish ‘a new evidence base on how to encourage talented children’
The use of ‘gifted and talented’ terminology may be misleading, in that the remainder of the text suggests Labour is focused on high attainers including (but not exclusively) those from disadvantaged backgrounds.
It is not clear whether the £15m funding commitment is an annual commitment or an initial investment that might or might not be topped up subsequently.
It seems to be available to both primary and secondary schools, but this is not made explicit.
It is not clear how bids for the funding would be assessed, or who would assess them.
The purpose of the funding seems primarily to support teachers and schools rather than to support high attaining learners themselves.
The relationship between the Fund and building the evidence base is not made clear. Will there be an expectation of school-based action research, for example?
There is no explicit ‘joining up’ with wider Labour action on social mobility or fair access to selective higher education (and there is an unfortunate allusion to the pupil premium which suggests it is exclusively to help lower attainers).
Is anyone on the inside track?
The word on the street is that Labour developed its policy through an internal review.
But the inclusion of a statement from Peter Lampl might suggest that they are in cahoots with the Sutton Trust, where an ex-Labour SPAD is ensconced as Director of Research and Communications.
‘…an effective national programme for highly able state school pupils, with ring-fenced funding to support evidence-based activities and tracking of pupils’ progress.’
Unfortunately, it is also wedded to the misguided Open Access scheme, which involves denuding state-funded schools of high attainers and diverting them to independent schools instead. (For a more balanced and careful analysis, see this post from April 2012.)
It cannot be entirely accidental that Lampl published his latest article pushing this wheeze on the same day as Labour’s announcement.
The Education Endowment Foundation might be a potential home for the Fund – and of course the Sutton Trust has a close relationship with the EEF.
Pressure on the Tories?
The combined weight of Labour’s announcement and HMCI’s report will put significant pressure on the Tories, in particular, to follow suit.
They are already in a difficult position in this territory, having publicly wavered between advocating selection and universal setting as an alternative to it.
The Prime Minister has recently announced himself less opposed to selection, and the as-yet-unresolved decision on the Sevenoaks satellite is keeping this a live issue as we approach the Election.
His dalliance with universal setting was advertised as sidestepping the arguments for increased selection, but was subsequently relegated to one of a menu of options available to regional schools commissioners when tackling failing schools.
The Tories’ only other fallback is the claim that the Coalition Government’s more generic policies will raise standards across the board, including at the top of the attainment spectrum. This seems increasingly threadbare, however.
If they are not careful, they could be squeezed between Labour’s new-found commitment to gifted education and UKIP’s espousal of grammar schools.
Initial reaction to Labour’s announcement?
This is the first time Labour have expressed support for high attainers since Andy Burnham was Shadow Minister.
If the sum they have announced is an annual commitment, this broadly matches the budget for the National Gifted and Talented Programme when it was at its height in the mid-2000s.
They are clearly anxious to keep this support at arm’s length from Government – they don’t want to return to a national programme.
The disadvantages of full autonomy could be avoided if bids are invited against a framework of priorities, rather than left entirely for schools to determine. Labour presumably want this funding to make a difference to the statistics they cite from the evidence base.
If the funding is for educators rather than learners, that raises the question of whether those from disadvantaged backgrounds might also be supported through a £50m pupil premium topslice, as I have suggested elsewhere.
It would also be helpful if the funding was linked to a national effort to reach consensus on the education of high attainers, as embodied in these ten core principles.
But this is a decent start. ‘Better than a poke in the eye with a blunt stick’, as my favourite colloquialism has it.
I am rounding out this year’s blogging with my customary backwards look at the various posts I published during 2014.
This is partly an exercise in self-congratulation but also flags up to readers any potentially useful posts they might have missed.
Norwegian Panorama by Gifted Phoenix
This is my 32nd post of the year, three fewer than the 35 I published in 2013. Even so, total blog views have increased by 20% compared with 2013.
Almost exactly half of these views originate in the UK. Other countries generating a large number of views include the United States, Singapore, India, Australia, Hong Kong, Saudi Arabia, Germany, Canada and South Korea. The site has been visited this year by readers located in 157 different countries.
This illustrates just how strongly the accountability regime features in the priorities of English educators.
I have continued to focus more on domestic topics: approximately 75% of my posts this year have been about the English education system. I have not ventured beyond these shores since September.
The first section below reviews the minority of posts with a global perspective; the second covers the English material. A brief conclusion offers my take on future prospects.
This proposed some quality criteria for social media usage and blogs/websites that operate within the field of gifted education.
It also reviewed the social media activity of six key players (WCGTC, ECHA, NAGC, SENG, NACE and Potential Plus UK) as well as wider activity within the blogosphere, on five leading social media platforms and utilising four popular content creation tools.
Some of the websites mentioned above have been recast since the post was published and are now much improved (though I claim no direct influence).
These posts were scheduled just ahead of a conference organised by the Hungarian sponsors of the network. I did not attend, fearing that the proceedings would have limited impact on the future direction of this once promising initiative. I used the posts to set out my reservations, which include a failure to engage with constructive criticism.
Part One scrutinises the Hungarian talent development model on which the European Network is based. Part Two describes the halting progress made by the Network to date. It identifies several deficiencies that need to be addressed if the Network is to have a significant and lasting impact on pan-European support for talent development and gifted education.
This analyses the performance of high achievers from a selection of 11 jurisdictions – either world leaders or prominent English-speaking nations – on the PISA 2012 Creative Problem Solving assessment.
It is a companion piece to a 2013 post which undertook a similar analysis of the PISA 2012 assessments in Reading, Maths and Science.
In May I contributed to the Hoagies’ Bloghop for that month.
Air on the ‘G’ String: Hoagies’ Bloghop, May 2014 was my input to discussion about the efficacy of ‘the G word’ (gifted). I deliberately produced a provocative and thought-provoking piece which stirred typically intense reactions in several quarters.
This takes a closer look at the relatively little-known PISA ‘resilient students’ measure – focused on high achievers from disadvantaged socio-economic backgrounds – and how well different jurisdictions perform against it.
The title reflects the post’s conclusion that, like many other countries, England:
‘…should be worrying as much about our ‘short head’ as our ‘long tail’’.
And so I pass seamlessly on to the series of domestic posts I published during 2014…
The purpose of these annual posts (and the primary equivalent which appears each December) is to synthesise data about the performance of high attainers and high attainment at national level, so that schools can more easily benchmark their own performance.
It examines the subsequent history of schools that recorded particularly poor results with high attainers in the Secondary Performance Tables. (The asterisk references a footnote apologising ‘for this rather tabloid title’.)
Some of the issues I highlighted eight months ago are now being more widely discussed – not least the nature of the performance descriptors, as set out in the recent consultation exercise dedicated to those.
But the reform process is slow. Many other issues remain unresolved and it seems increasingly likely that some of the more problematic will be delayed deliberately until after the General Election.
May was particularly productive, witnessing four posts, three of them substantial:
How well is Ofsted reporting on the most able? explores how Ofsted inspectors are interpreting the references to the attainment and progress of the most able added to the Inspection Handbook late last year. The sample comprises the 87 secondary inspection reports that were published in March 2014. My overall assessment? Requires Improvement.
A Closer Look at Level 6 is a ‘data-driven analysis of Level 6 performance’. As well as providing a baseline against which to assess future Level 6 achievement, this also identifies several gaps in the published data and raises as yet unanswered questions about the nature of the new tests to be introduced from 2016.
One For The Echo Chamber was prompted by The Echo Chamber reblogging service, whose founder objected that my posts are too long, together with the ensuing Twitter debate. Throughout the year the vast majority of my posts have been unapologetically detailed and thorough. They are intended as reference material, to be quarried and revisited, rather than the disposable vignettes that so many seem to prefer. To this day they get reblogged on The Echo Chamber only when a sympathetic moderator is undertaking the task.
‘Poor but Bright’ v ‘Poor but Dim’ arose from another debate on Twitter, sparked by a blog post which argued that the latter are a higher educational priority than the former. I argued that both deserved equal priority, since it is inequitable to discriminate between disadvantaged learners on the basis of prior attainment and the economic arguments cut both ways. This issue continues to bubble like a subterranean stream, only to resurface from time to time, most recently when the Fair Education Alliance proposed that the value of pupil premium allocations attached to disadvantaged high attainers should be halved.
The principles should be valuable to schools considering how best to respond to Ofsted’s increased scrutiny of their provision for the most able. Any institution considering how best to revitalise its provision might discuss how the principles should be interpreted to suit their particular needs and circumstances.
Test entries increased significantly. So did the success rates on the other level 6 tests (in maths and in grammar, punctuation and spelling (GPS)). Even teacher assessment of L6 reading showed a marked upward trend.
Despite all this, the number of pupils successful on the L6 reading test fell from 2,062 in 2013 to 851 (provisional). The final statistics – released only this month – show a marginal improvement to 935, but the outcome is still extremely disappointing. No convincing explanation has been offered and the impact on 2015 entries is unlikely to be positive.
These present the evidence base relating to high attainment gaps between disadvantaged and other learners, to distinguish what we know from what remains unclear and so to provide a baseline for further research.
The key finding is that the evidence base is both sketchy and fragmented. We should understand much more than we do about the size and incidence of excellence gaps. We should be strengthening the evidence base as part of a determined strategy to close the gaps.
@GiftedPhoenix very useful summary – the importance of both high achievement and subject choice at GCSE needs more investigation.
In October, 16-19 Maths Free Schools Revisited marked a third visit to the 16-19 maths free schools programme, concentrating on progress since my previous post in March 2013, especially at the two schools which have opened to date.
The two small institutions at KCL and Exeter University (both very similar to each other) constitute a rather limited outcome for a project that was intended to generate a dozen innovative university-sponsored establishments. There is reportedly a third school in the pipeline but, as 2014 closes, details have yet to be announced.
Excellence Gaps Quality Standard: Version One is an initial draft of a standard encapsulating effective whole school practice in supporting disadvantaged high attainers. It updates and adapts the former IQS for gifted and talented education.
This first iteration needs to be trialled thoroughly, developed and refined but, even as it stands, it offers another useful starting point for schools reviewing the effectiveness of their own provision.
The baseline standard captures the essential ‘non-negotiables’ intended to be applicable to all settings. The exemplary standard is pitched high and should challenge even the most accomplished of schools and colleges.
All comments and drafting suggestions are welcome.
These issues have become linked since Prime Minister Cameron has regularly proposed an extension of the former as a response to calls on the right wing of his party for an extension of the latter.
This was almost certainly the source of autumn media rumours that a strategy, originating in Downing Street, would be launched to incentivise and extend setting.
Newly installed Secretary of State Morgan presumably insisted that existing government policy (which leaves these matters entirely to schools) should remain undisturbed. However, the idea might conceivably be resuscitated for the Tory election manifesto.
Now that UKIP has confirmed its own pro-selection policy there is pressure on the Conservative party to resolve its internal tensions on the issue and identify a viable alternative position. But the pro-grammar lobby is unlikely to accept increased setting as a consolation prize…
This shows that HMCI’s recent distinction between positive support for the most able in the primary sector and a much weaker record in secondary schools is not entirely accurate. There are conspicuous weaknesses in the primary sector too.
Meanwhile, Chinese learners continue to perform extraordinarily well on the Level 6 maths test, achieving an amazing 35% success rate, up six percentage points since 2013. This domestic equivalent of the Shanghai phenomenon bears closer investigation.
My penultimate post of the year HMCI Ups the Ante on the Most Able collates all the references to the most able in HMCI’s 2014 Annual Report and its supporting documentation.
It sets out Ofsted’s plans for the increased scrutiny of schools and for additional survey reports that reflect this scrutiny.
It asks the question whether Ofsted’s renewed emphasis will be sufficient to rectify the shortcomings they themselves identify and – assuming it will not – outlines an additional ten-step plan to secure system-wide improvement.
‘The ‘closed shop’ is as determinedly closed as ever; vested interests are shored up; governance is weak. There is fragmentation and vacuum where there should be inclusive collaboration for the benefit of learners. Too many are on the outside, looking in. Too many on the inside are superannuated and devoid of fresh ideas.’
Despite evidence of a few ‘green shoots’ during 2014, my overall sense of pessimism remains.
Meanwhile, future prospects for high attainers in England hang in the balance.
Several of the Coalition Government’s education reforms have been designed to shift schools’ focus away from borderline learners, so that every learner improves, including those at the top of the attainment distribution.
On the other hand, Ofsted’s judgement that a third of secondary inspections this year
‘…pinpointed specific problems with teaching the most able’
would suggest that schools’ everyday practice falls some way short of this ideal.
HMCI’s commitment to champion the interests of the most able is decidedly positive but, as suggested above, it might not be enough to secure the necessary system-wide improvement.
Ofsted is itself under pressure and faces an uncertain future, regardless of the election outcome. HMCI’s championing might not survive the arrival of a successor.
It seems increasingly unlikely that any political party’s election manifesto will have anything significant to say about this topic, unless the enthusiasm for selection in some quarters can be harnessed and redirected towards the much more pertinent question of how best to meet the needs of all high attainers in all schools and colleges, especially those from disadvantaged backgrounds.
But the entire political future is shrouded in uncertainty. Let’s wait and see how things are shaping up on the other side of the election.
From a personal perspective I am closing in on five continuous years of edutweeting and edublogging.
I once expected to extract from this commitment benefits commensurate with the time and energy invested. But that is no longer the case, if indeed it ever was.
I plan to call time at the end of this academic year.
Put crudely, the discussion hinged on the question whether the educational needs of ‘poor but dim’ learners should take precedence over those of the ‘poor but bright’. (This is Mr Thomas’s shorthand, not mine.)
He argued that the ‘poor but dim’ are the higher priority; I countered that all poor learners should have equal priority, regardless of their ability and prior attainment.
We began to explore the issue:
as a matter of educational policy and principle
with reference to inputs – the allocation of financial and human resources between these competing priorities and
in terms of outcomes – the comparative benefits to the economy and to society from investment at the top or the bottom of the attainment spectrum.
This post presents the discussion, adding more flesh and gloss from the Gifted Phoenix perspective.
It might or might not stimulate some interest in how this slightly different take on a rather hoary old chestnut plays out in England’s current educational landscape.
But I am particularly interested in how gifted advocates in different countries respond to these arguments. What is the consensus, if any, on the core issue?
Depending on the answer to this first question, how should gifted advocates frame the argument for educationalists and the wider public?
To help answer the first question I have included a poll at the end of the post.
Do please respond to that – and feel free to discuss the second question in the comments section below.
The structure of the post is fairly complex, comprising:
A (hopefully objective) summary of Mr Thomas’s original post.
An embedded version of the substance of our Twitter conversation. I have removed some Tweets – mostly those from third parties – and reordered a little to make this more accessible. I don’t believe I’ve done any significant damage to either case.
Some definition of terms, because there is otherwise much cause for confusion as we push further into the debate.
A digressionary exploration of the evidence base, dealing with attainment data and budget allocations respectively. The former exposes what little we are told about how socio-economic gaps vary across the attainment spectrum; the latter is relevant to the discussion of inputs. Those pressed for time may wish to proceed directly to…
…A summing up, which expands in turn the key points we exchanged on the point of principle, on inputs and on outcomes respectively.
I have reserved until close to the end a few personal observations about the encounter and how it made me feel.
And I conclude with the customary brief summary of key points and the aforementioned poll.
It is an ambitious piece and I am in two minds as to whether it hangs together properly, but you are ultimately the judges of that.
What Mr Thomas Blogged
The post was called ‘The Romance of the Poor but Bright’ and the substance of the argument (incorporating several key quotations) ran like this:
The ‘effort and resources, of schools but particularly of business and charitable enterprise, are directed disproportionately at those who are already high achieving – the poor but bright’.
Moreover ‘huge effort is expended on access to the top universities, with great sums being spent to make marginal improvements to a small set of students at the top of the disadvantaged spectrum. They cite the gap in entry, often to Oxbridge, as a significant problem that blights our society.’
This however is ‘the pretty face of the problem. The far uglier face is the gap in life outcomes for those who take least well to education.’
‘Popular discourse is easily caught up in the romance of the poor but bright’ but ‘we end up ignoring the more pressing problem – of students for whom our efforts will determine whether they ever get a job or contribute to society’. For ‘when did you last hear someone advocate for the poor but dim?’
‘The gap most damaging to society is in life outcomes for the children who perform least well at school.’ Three areas should be prioritised to improve their educational outcomes:
- Improving alternative provision (AP), which ‘operates as a shadow school system, largely unknown and wholly unappreciated’ – ‘developing a national network of high-quality alternative provision…must be a priority if we are to close the gap at the bottom’.
- Improving ‘consistency in SEN support’ because ‘schools are often ill equipped to cope with these, and often manage only because of the extraordinary effort of dedicated staff’. There is ‘inconsistency in funding and support between local authorities’.
- Introducing clearer assessment of basic skills, ‘so that a student could not appear to be performing well unless they have mastered the basics’.
While ‘any student failing to meet their potential is a dreadful thing’, the educational successes of ‘students with incredibly challenging behaviour’ and ‘complex special needs…have the power to change the British economy, far more so than those of their brighter peers.’
A footnote adds ‘I do not believe in either bright or dim, only differences in epigenetic coding or accumulated lifetime practice, but that is a discussion for another day.’
Indeed it is.
Our ensuing Twitter discussion
The substance of our Twitter discussion is captured in the embedded version immediately below. (Scroll down to the bottom for the beginning and work your way back to the top.)
I take poor to mean socio-economic disadvantage, as opposed to any disadvantage attributable to the behaviours, difficulties, needs, impairments or disabilities associated with AP and/or SEN.
I recognise of course that such a distinction is more theoretical than practical, because, when learners experience multiple causes of disadvantage, the educational response must be holistic rather than disaggregated.
Nevertheless, the meaning of ‘poor’ is clear – that term cannot be stretched to include these additional dimensions of disadvantage.
The available performance data foregrounds two measures of socio-economic disadvantage: current eligibility for and take up of free school meals (FSM) and qualification for the deprivation element of the Pupil Premium, determined by FSM eligibility at some point within the last 6 years (known as ‘ever-6’).
Both are used in this post. Distinctions are typically between the disadvantaged learners and non-disadvantaged learners, though some of the supporting data compares outcomes for disadvantaged learners with outcomes for all learners, advantaged and disadvantaged alike.
The gaps that need closing are therefore:
between ‘poor and bright’ and other ‘bright’ learners (The Excellence Gap) and
between ‘poor and dim’ and other ‘dim’ learners. I will christen this The Foundation Gap.
The core question is whether The Foundation Gap takes precedence over The Excellence Gap or vice versa, or whether they should have equal billing.
This involves immediate and overt recognition that classification as AP and/or SEN is not synonymous with the epithet ‘poor’, because there are many comparatively advantaged learners within these populations.
But such a distinction is not properly established in Mr Thomas’s blog, which applies the epithet ‘poor’ but then treats the AP and SEN populations as homogenous and somehow associated with it.
By ‘dim’ I take Mr Thomas to mean the lowest segment of the attainment distribution – one of his tweets specifically mentions ‘the bottom 20%’. The AP and/or SEN populations are likely to be disproportionately represented within these two deciles, but they are not synonymous with them either.
This distinction will not be lost on gifted advocates who are only too familiar with the very limited attention paid to twice exceptional learners.
Those from poor backgrounds within the AP and/or SEN populations are even more likely to be disproportionately represented in ‘the bottom 20%’ than their more advantaged peers, but even they will not constitute the entirety of ‘the bottom 20%’. A Venn diagram would likely show significant overlap, but that is all.
Hence disadvantaged AP/SEN are almost certainly a relatively poor proxy for the ‘poor but dim’.
That said I could find no data that quantifies these relationships.
The School Performance Tables distinguish a ‘low attainer’ cohort. (In the Secondary Tables the definition is determined by prior KS2 attainment and in the Primary Tables by prior KS1 attainment.)
These populations comprise some 15.7% of the total population in the Secondary Tables and about 18.0% in the Primary Tables. But neither set of Tables applies the distinction in their reporting of the attainment of those from disadvantaged backgrounds.
It follows from the definition of ‘dim’ that, by ‘bright’, Mr Thomas probably intends the two corresponding deciles at the top of the attainment distribution (even though he seems most exercised about the subset with the capacity to progress to competitive universities, particularly Oxford and Cambridge. This is a far more select group of exceptionally high attainers – and an even smaller group of exceptionally high attainers from disadvantaged backgrounds.)
A few AP and/or SEN students will likely fall within this wider group, fewer still within the subset of exceptionally high attainers. AP and/or SEN students from disadvantaged backgrounds will be fewer again, if indeed there are any at all.
The same issues with data apply. The School Performance Tables distinguish ‘high attainers’, who constitute over 32% of the secondary cohort and 25% of the primary cohort. As with low attainers, we cannot isolate the performance of those from disadvantaged backgrounds.
We are forced to rely on what limited data is made publicly available to distinguish the performance of disadvantaged low and high attainers.
At the top of the distribution there is a trickle of evidence about performance on specific high attainment measures and access to the most competitive universities. Still greater transparency is fervently to be desired.
At the bottom, I can find very little relevant data at all – we are driven inexorably towards analyses of the SEN population, because that is the only dataset differentiated by disadvantage, even though we have acknowledged that such a proxy is highly misleading. (Equivalent AP attainment data seems conspicuous by its absence.)
AP and SEN
Before exploring these datasets I ought to provide some description of the different programmes and support under discussion here, if only for the benefit of readers who are unfamiliar with the English education system.
‘They include pupils who have been excluded or who cannot attend mainstream school for other reasons: for example, children with behaviour issues, those who have short- or long-term illness, school phobics, teenage mothers, pregnant teenagers, or pupils without a school place.’
AP is provided in a variety of settings where learners engage in timetabled education activities away from their school and school staff.
Providers include further education colleges, charities, businesses, independent schools and the public sector. Pupil Referral Units (PRUs) are perhaps the best-known settings – there are some 400 nationally.
Taylor complains of a lack of reliable data about the number of learners in AP but notes that the DfE’s 2011 AP census recorded 14,050 pupils in PRUs and a further 23,020 in other settings on a mixture of full-time and part-time placements. This suggests a total of slightly over 37,000 learners, though the FTE figure is unknown.
He states that AP learners are:
‘…twice as likely as the average pupil to qualify for free school meals’
‘In Jan 2011, 34.6% of pupils in PRUs and 13.8%* of pupils in other AP, were eligible for and claiming free school meals, compared with 14.6% of pupils in secondary schools. [*Note: in some AP settings, free school meals would not be available, so that figure is under-stated, but we cannot say by how much.]’
If the PRU population is typical of the wider AP population, approximately one third qualify under this FSM measure of disadvantage, meaning that the substantial majority are not ‘poor’ according to our definition above.
Taylor confirms that overall GCSE performance in AP is extremely low, pointing out that in 2011 just 1.4% achieved five or more GCSE grades A*-C including [GCSEs in] maths and English, compared to 53.4% of pupils in all schools.
By 2012/13 the comparable percentages were 1.7% and 61.7% respectively (the latter for all state-funded schools), suggesting an increasing gap in overall performance. This is a cause for concern but not directly relevant to the issue under consideration.
The huge disparity is at least partly explained by the facts that many AP students take alternative qualifications and that the national curriculum does not apply to PRUs.
Data is available showing the full range of qualifications pursued. Taylor recommended that all students in AP should continue to receive ‘appropriate and challenging English and Maths teaching’.
Interestingly, he also pointed out that:
‘In some PRUs and AP there is no provision for more able pupils who end up leaving without the GCSE grades they are capable of earning.’
However, he fails to offer a specific recommendation to address this point.
Special Educational Needs (SEN) are needs or disabilities that affect children’s ability to learn. These may include behavioural and social difficulties, learning difficulties or physical impairments.
There is significant overlap between AP and SEN, with Taylor’s review of the former noting that the population in PRUs is 79% SEN.
We know from the 2013 SEN statistics that 12.6% of all pupils on roll at PRUs had SEN statements and 68.9% had SEN without statements. But these populations represent only a tiny proportion of the total SEN population in schools.
SEN learners also have higher than typical eligibility for FSM. In January 2013, 30.1% of all SEN categories across all primary, secondary and special schools were FSM-eligible, roughly twice the rate for all pupils. However, this means that almost seven in ten are not caught by the definition of ‘poor’ provided above.
In 2012/13 23.4% of all SEN learners achieved five or more GCSEs at A*-C or equivalent, including GCSEs in English and maths, compared with 70.4% of those having no identified SEN – another significant overall gap, but not directly relevant to our comparison of the ‘poor but bright’ and the ‘poor but dim’.
Data on socio-economic attainment gaps across the attainment spectrum
Those interested in how socio-economic attainment gaps vary at different attainment levels cannot fail to be struck by how little material of this kind is published, particularly in the secondary sector, where such gaps tend to increase in size.
One cannot entirely escape the conviction that this reticence deliberately masks some inconvenient truths.
The ideal would be to have the established high/middle/low attainer distinctions mapped directly onto performance by advantaged/disadvantaged learners in the Performance Tables but, as we have indicated, this material is conspicuous by its absence. Perhaps it will appear in the Data Portal now under development.
Our next best option is to examine socio-economic attainment gaps on specific attainment measures that will serve as decent proxies for high/middle/low attainment. We can do this to some extent but the focus is disproportionately on the primary sector because the Secondary Tables do not include proper high attainment measures (such as measures based exclusively on GCSE performance at grades A*/A). Maybe the Portal will come to the rescue here as well. We can however supply some basic Oxbridge fair access data.
The least preferable option is to deploy our admittedly poor proxies for low attainers – SEN and AP. But there isn’t much information from this source either.
The analysis below looks consecutively at data for the primary and secondary sectors.
We know, from the 2013 Primary School Performance Tables, that the percentages of disadvantaged and other learners achieving different KS2 levels in reading, writing and maths combined, in 2013 and 2012 respectively, were as follows:
Table 1: Percentage of disadvantaged and all other learners achieving each national curriculum level at KS2 in 2013 in reading, writing and maths combined
[Table data not reproduced here: the rows give the percentages achieving L3 or below, L4 or above, L4B or above and L5 or above.]
This tells us relatively little, apart from the fact that disadvantaged learners are heavily over-represented at L3 and below and heavily under-represented at L5 and above.
The L5 gap is somewhat lower than the gaps at L4 and 4B respectively, but not markedly so. However, the L5 gap has widened slightly since 2012 while the reverse is true at L4.
This next table synthesises data from SFR51/13: ‘National curriculum assessments at key stage 2: 2012 to 2013’. It also shows gaps for disadvantage, as opposed to FSM gaps.
Table 2: Percentage of disadvantaged and all other learners achieving each national curriculum level, including differentiation by gender, in each 2013 end of KS2 test
This tells a relatively consistent story across each test and for boys as well as girls.
We can see that, at Level 4 and below, learners from disadvantaged backgrounds are clearly over-represented, perhaps with the exception of L4 GPS. But at L4B and above they are markedly under-represented.
Moreover, with the exception of L6 where low percentages across the board mask the true size of the gaps, disadvantaged learners tend to be significantly more under-represented at L4B and above than they are over-represented at L4 and below.
A different way of looking at this data is to compare the percentages of advantaged and disadvantaged learners respectively at L4 and L5 in each assessment.
Reading: amongst disadvantaged learners the proportion at L5 is 18 percentage points lower than the proportion at L4, whereas amongst advantaged learners the proportion at L5 is 12 percentage points higher than at L4.
GPS: amongst disadvantaged learners the proportion at L5 is 5 percentage points higher than the proportion at L4, whereas amongst advantaged learners the proportion at L5 is 26 percentage points higher than at L4.
Maths: amongst disadvantaged learners the proportion at L5 is 26 percentage points lower than the proportion at L4, whereas amongst advantaged learners the proportion at L5 is only 2 percentage points lower than at L4.
If we look at 2013 gaps compared with 2012 (with teacher assessment of writing included in place of the GPS test introduced in 2013) we can see there has been relatively little change across the board, with the exception of L5 maths, which has been affected by the increasing success of advantaged learners at L6.
Table 3: Percentage of disadvantaged and all other learners achieving national curriculum levels 3-6 in each of reading, writing and maths in 2012 and 2013 respectively
To summarise, as far as KS2 performance is concerned, there are significant imbalances at both the top and the bottom of the attainment distribution and these gaps have not changed significantly since 2012. There is some evidence to suggest that gaps at the top are larger than those at the bottom.
Unfortunately there is a dearth of comparable data at secondary level, principally because of the absence of published measures of high attainment.
SFR05/2014 provides us with FSM gaps (as opposed to disadvantaged gaps) for a series of GCSE measures, none of which serve our purpose particularly well:
5+ A*-C GCSE grades: gap = 16.0%
5+ A*-C grades including English and maths GCSEs: gap = 26.7%
5+ A*-G grades: gap = 7.6%
5+ A*-G grades including English and maths GCSEs: gap = 9.9%
A*-C grades in English and maths GCSEs: gap = 26.6%
Achieving the English Baccalaureate: gap = 16.4%
Perhaps all we can deduce is that the gaps vary considerably in size, but tend to be smaller for the relatively less demanding and larger for the relatively more demanding measures.
For specific high attainment measures we are forced to rely principally on data snippets released in answer to occasional Parliamentary Questions.
In 2003, 1.0% of FSM-eligible learners achieved five or more GCSEs at A*/A including English and maths but excluding equivalents, compared with 6.8% of those not eligible, giving a gap of 5.8%. By 2009 the comparable percentages were 1.7% and 9.0% respectively, giving an increased gap of 7.3% (Col 568W)
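The gap arithmetic in that answer is simple to reproduce. The sketch below (an illustrative Python fragment, using only the figures quoted above) defines the gap as the non-FSM percentage minus the FSM percentage, expressed in percentage points.

```python
# FSM attainment gap, in percentage points: the share of non-FSM pupils
# achieving a measure minus the share of FSM-eligible pupils achieving it.
# Figures are those quoted in the Parliamentary Answer (Col 568W).
def fsm_gap(non_fsm_pct, fsm_pct):
    return round(non_fsm_pct - fsm_pct, 1)

gap_2003 = fsm_gap(6.8, 1.0)  # 5.8 percentage points
gap_2009 = fsm_gap(9.0, 1.7)  # 7.3 percentage points
print(gap_2003, gap_2009)
```

The same subtraction underlies every ‘gap’ figure cited in this post, which is why percentage-point gaps on different measures can be compared directly.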
In 2006/07, the percentages of FSM-eligible pupils and of all pupils in maintained schools securing A*/A grades at GCSE in different subjects were as shown in the table below (Col 808W)
Table 4: Percentage of FSM-eligible and all pupils achieving GCSE A*/A grades in different GCSE subjects in 2007
In 2008, 1% of FSM-eligible learners in maintained schools achieved A* in GCSE maths compared with 4% of all pupils in maintained schools. The comparable percentages for Grade A were 3% and 10% respectively, giving an A*/A gap of 10% (Col 488W)
There is much variation in the subject-specific outcomes at A*/A described above. But, when it comes to the overall 5+ GCSEs high attainment measure based on grades A*/A, the gap is much smaller than on the corresponding standard measure based on grades A*-C.
There is a complex pattern in evidence here which is very hard to explain with the limited data available. More time series data of this nature – illustrating Excellence and Foundation Gaps alike – should be published annually so that we have a more complete and much more readily accessible dataset.
I could find no information at all about the comparative performance of disadvantaged learners in AP settings compared with those not from disadvantaged backgrounds.
Data is published showing the FSM gap for SEN learners on all the basic GCSE measures listed above. I have retained the generic FSM gaps in brackets for the sake of comparison:
5+ A*-C GCSE grades: gap = 12.5% (16.0%)
5+ A*-C grades including English and maths GCSEs: gap = 12.1% (26.7%)
5+ A*-G grades: gap = 10.4% (7.6%)
5+ A*-G grades including English and maths GCSEs: gap = 13.2% (9.9%)
A*-C grades in English and maths GCSEs: gap = 12.3% (26.6%)
Achieving the English Baccalaureate: gap = 3.5% (16.4%)
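For convenience, the bracketed comparison can be tabulated programmatically. This illustrative Python snippet simply restates the six pairs of figures from the list above and flags whether the SEN-specific FSM gap is narrower or wider than the generic one.

```python
# (measure, FSM gap for SEN learners, FSM gap for all learners),
# all in percentage points, exactly as listed in the text.
gaps = [
    ("5+ A*-C grades", 12.5, 16.0),
    ("5+ A*-C incl. English and maths", 12.1, 26.7),
    ("5+ A*-G grades", 10.4, 7.6),
    ("5+ A*-G incl. English and maths", 13.2, 9.9),
    ("A*-C in English and maths", 12.3, 26.6),
    ("English Baccalaureate", 3.5, 16.4),
]

# For each measure, is the SEN gap narrower or wider than the generic gap?
comparison = {m: ("narrower" if sen < generic else "wider")
              for m, sen, generic in gaps}
for measure, verdict in comparison.items():
    print(f"{measure}: SEN gap is {verdict}")
```

Run as written, this confirms the point made above: the SEN gap is narrower on four of the six measures, and it is wider only on the two least demanding A*-G measures.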
One can see that the FSM gaps on the more demanding measures are generally lower for SEN learners than for all learners. This may be interesting but, for the reasons given above, SEN is not a reliable proxy for the FSM gap amongst ‘dim’ learners.
The chart below shows the number of learners eligible for and claiming FSM at age 15 who progressed to Oxford or Cambridge by age 19. The figures are rounded to the nearest five.
Chart 1: FSM-eligible learners admitted to Oxford and Cambridge 2005/06 to 2010/11
In sum, there has been no change in these numbers over the last six years for which data has been published. So while there may have been consistently significant expenditure on access agreements and multiple smaller mentoring programmes, it has had negligible impact on this measure at least.
My previous post set out a proposal for what to do about this sorry state of affairs.
For the purposes of this discussion we need ideally to identify and compare total national budgets for the ‘poor but bright’ and the ‘poor but dim’. But that is simply not possible.
Many funding streams cannot be disaggregated in this manner. As we have seen, some – including the AP and SEN budgets – may be aligned erroneously with the second of these groups, although they also support learners who are neither ‘poor’ nor ‘dim’ and have a broader purpose than raising attainment.
There may be some debate, too, about which funding streams should be weighed in the balance.
On the ‘bright but poor’ side, do we include funding for grammar schools, even though the percentage of disadvantaged learners attending many of them is virtually negligible (despite recent suggestions that some are now prepared to do something about this)? Should the Music and Dance Scheme (MDS) be within scope of this calculation?
The best I can offer is a commentary that gives a broad sense of orders of magnitude, to illustrate in very approximate terms how the scales tend to tilt more towards the ‘poor but dim’ rather than the ‘poor but bright’, but also to weave in a few relevant asides about some of the funding streams in question.
Pupil Premium and the EEF
I begin with the Pupil Premium – providing schools with additional funding to raise the attainment of disadvantaged learners.
The Premium is not attached to the learners who qualify for it, so schools are free to aggregate the funding and use it as they see fit. They are held accountable for these decisions through Ofsted inspection and the gap-narrowing measures in the Performance Tables.
Mr Thomas suggests in our Twitter discussion that AP students are not significant beneficiaries of such support, although provision in PRUs features prominently in the published evaluation of the Premium. It is for local authorities to determine how Pupil Premium funding is allocated in AP settings.
One might also make a case that ‘bright but poor’ learners are not a priority either, despite suggestions from the Pupil Premium Champion to the contrary.
As we have seen, the Performance Tables are not sharply enough focused on the excellence gaps at the top of the distribution and I have shown elsewhere that Ofsted’s increased focus on the most able does not yet extend to the impact on those attracting the Pupil Premium, even though there was a commitment that it would do so.
If there is Pupil Premium funding heading towards high attainers from disadvantaged backgrounds, the limited data to which we have access does not yet suggest a significant impact on the size of Excellence Gaps.
This 2011 paper explains that the EEF is prioritising the performance of disadvantaged learners in schools below the floor targets. At one point it says:
‘Looking at the full range of GCSE results (as opposed to just the proportions who achieve the expected standards) shows that the challenge facing the EEF is complex – it is not simply a question of taking pupils from D to C (the expected level of attainment). Improving results across the spectrum of attainment will mean helping talented pupils to achieve top grades, while at the same time raising standards amongst pupils currently struggling to pass.’
But this is just after it has shown that the percentages of disadvantaged high attainers in its target schools are significantly lower than elsewhere. Other things being equal, the ‘poor but dim’ will be the prime beneficiaries.
It may now be time for the EEF to expand its focus to all schools. A diagram from this paper – reproduced below – demonstrates that, in 2010, the attainment gap between FSM and non-FSM was significantly larger in schools above the floor than in those below the floor that the EEF is prioritising. This is true in both the primary and secondary sectors.
It would be interesting to see whether this is still the case.
AP and SEN
Given the disaggregation problems discussed above, this section is intended simply to give some basic sense of orders of magnitude – lending at least some evidence to counter Mr Thomas’ assertion that the ‘effort and resources, of schools… are directed disproportionately at those who are already high achieving – the poor but bright’.
It is surprisingly hard to get a grip on the overall national budget for AP. A PQ from early 2011 (Col 75W) supplies a net current expenditure figure for all English local authorities of £530m.
Taylor’s Review fails to offer a comparable figure but my rough estimates, based on the per pupil costs he supplies, suggest a revenue budget of at least £400m. (Taylor suggests average per pupil costs of £9,500 per year for full-time AP, although PRU places are said to cost between £12,000 and £18,000 per annum.)
I found online a consultation document from Kent – England’s largest local authority – stating its revenue costs at over £11m in FY2014-15. Approximately 454 pupils attended Kent’s AP/PRU provision in 2012-13.
There must also be a significant capital budget. There are around 400 PRUs, not to mention a growing cadre of specialist AP academies and free schools. The total capital cost of the first AP free school – Derby Pride Academy – was £2.147m for a 50-place setting.
In FY2011-12, total annual national expenditure on SEN was £5.77 billion (Col 391W). There will have been some cost-cutting as a consequence of the latest reforms, but the order of magnitude is clear.
The latest version of the SEN Code of Practice outlines the panoply of support available, including the compulsory requirement that each school has a designated teacher to be responsible for co-ordinating SEN provision (the SENCO).
In short, the national budget for AP is sizeable and the national budget for SEN is huge. Per capita expenditure is correspondingly high. If we could isolate the proportion of these budgets allocated to raising the attainment of the ‘poor but dim’, the total would be substantial.
Fair Access, especially to Oxbridge, and some related observations
Mr Thomas refers specifically to funding to support fair access to universities – especially Oxbridge – for those from disadvantaged backgrounds. This is another area in which it is hard to get a grasp on total expenditure, not least because of the many small-scale mentoring projects that exist.
Mr Thomas is quite correct to remark on the sheer number of these, although they are relatively small beer in budgetary terms. (One suspects that they would be much more efficient and effective if they could be linked together within some sort of overarching framework.)
The Office for Fair Access (OFFA) estimates University access agreement expenditure on outreach in 2014-15 at £111.9m and this has to be factored in, as does DfE’s own small contribution – the Future Scholar Awards.
Were any expenditure in this territory to be criticised, it would surely be the development and capital costs for new selective 16-19 academies and free schools that specifically give priority to disadvantaged students.
The sums are large, perhaps not outstandingly so compared with national expenditure on SEN for example, but they will almost certainly benefit only a tiny localised proportion of the ‘bright but poor’ population.
There are several such projects around the country. Some of the most prominent are located in London.
The London Academy of Excellence (capacity 420) is fairly typical. It cost an initial £4.7m to establish, plus a lease costing a further £400K annually.
There were reportedly disagreements within Government:
‘It is understood that the £45m cost was subject to a “significant difference of opinion” within the DfE where critics say that by concentrating large resources on the brightest children at a time when budgets are constrained means other children might miss out…
But a spokeswoman for the DfE robustly defended the plans tonight. “This is an inspirational collaboration between the country’s top academy chain and one of the best private schools in the country,” she said. “It will give hundreds of children from low income families across London the kind of top quality sixth-form previously reserved for the better off.”’
Here we have in microcosm the debate to which this post is dedicated.
One blogger – a London College Principal – pointed out that the real issue was not whether the brightest should benefit over others, but how few of the ‘poor but bright’ would do so:
‘£45m could have a transformative effect on thousands of 16-19 year olds across London… £45m could have funded at least 50 extra places in each college for over 10 years, helped build excellent new facilities for all students and created a city-wide network to support gifted and talented students in sixth forms across the capital working with our partner universities and employers.’
There are three main elements to the discussion: the point of principle, the inputs and the impact. The following sections deal with each of these in turn.
Put bluntly, should ‘poor but dim’ kids have higher priority for educators than ‘poor but bright’ kids (Mr Thomas’ position) or should all poor kids have equal priority and an equal right to the support they need to achieve their best (the Gifted Phoenix position)?
For Mr Thomas, it seems this priority is determined by whether – and how far – the learner is behind undefined ‘basic levels of attainment’ and/or mastery of ‘the basics’ (presumably literacy and numeracy).
Those below the basic attainment threshold have higher priority than those above it. He does not say so but this logic suggests that those furthest below the threshold are the highest priority and those furthest above are the lowest.
So, pursued to its logical conclusion, this would mean that the highest attainers would get next to no support while a human vegetable would be the highest priority of all.
However, since Mr Thomas’ focus is on marginal benefit, it may be that those nearest the threshold would be first in the queue for scarce resources, because they would require the least effort and resources to lift above it.
This philosophy drives the emphasis on achievement of national benchmarks and predominant focus on borderline candidates that, until recently, dominated our assessment and accountability system.
For Gifted Phoenix, every socio-economically disadvantaged learner has an equal claim to the support they need to improve their attainment, by virtue of that disadvantage.
There is no question of elevating some ahead of others in the pecking order because they are further behind on key educational measures since, in effect, that is penalising some disadvantaged learners on the grounds of their ability or, more accurately, their prior attainment.
This philosophy underpins the notion of personalised education and is driving the recent and welcome reforms of the assessment and accountability system, designed to ensure that schools are judged by how well they improve the attainment of all learners, rather than predominantly on the basis of the proportion achieving the standard national benchmarks.
I suggested that, in deriding ‘the romance of the poor but bright’, Mr Thomas ran the risk of falling into ‘the slough of anti-elitism’. He rejected that suggestion, while continuing to emphasise the need to ‘concentrate more’ on ‘those at risk of never being able to engage with society’.
I have made the assumption that Thomas is interested primarily in KS2 and GCSE or equivalent qualifications at KS4 given his references to KS2 L4, basic skills and ‘paper qualifications needed to enter meaningful employment’.
But his additional references to ‘real qualifications’ (as opposed to paper ones) and engaging with society could well imply a wider range of personal, social and work-related skills for employability and adult life.
My preference for equal priority would apply regardless: there is no guarantee that high attainers from disadvantaged backgrounds will necessarily possess these vital skills.
But, as indicated in the definition above, there is an important distinction to be maintained between:
educational support to raise the attainment, learning and employability skills of socio-economically disadvantaged learners and prepare them for adult life and
support to manage a range of difficulties – whether behavioural problems, disability, physical or mental impairment – that impact on the broader life chances of the individuals concerned.
Such a distinction may well be masked in the everyday business of providing effective holistic support for learners facing such difficulties, but this debate requires it to be made and sustained given Mr Thomas’s definition of the problem in terms of the comparative treatment of the ‘poor but bright’ and the ‘poor but dim’.
Having made this distinction, it is not clear whether he himself sustains it consistently through to the end of his post. In the final paragraphs the term ‘poor but dim’ begins to morph into a broader notion encompassing all AP and SEN learners regardless of their socio-economic status.
Additional dimensions of disadvantage are potentially being brought into play. This is inconsistent and radically changes the nature of the argument.
By inputs I mean the resources – financial and human – made available to support the education of ‘dim’ and ‘bright’ disadvantaged learners respectively.
Mr Thomas also shifts his ground as far as inputs are concerned.
His post opens with a statement that ‘the effort and resources’ of schools, charities and businesses are ‘directed disproportionately’ at the poor but bright – and he exemplifies this with reference to fair access to competitive universities, particularly Oxbridge.
When I point out the significant investment in AP compared with fair access, he changes tack – ‘I’m measuring outcomes not just inputs’.
Then later he says ‘But what some need is just more expensive’, to which I respond that ‘the bottom end already has the lion’s share of funding’.
At this point we have both fallen into the trap of treating the entirety of the AP and SEN budgets as focused on the ‘poor but dim’.
We are failing to recognise that they are poor proxies because the majority of AP and SEN learners are not ‘poor’, many are not ‘dim’, these budgets are focused on a wider range of needs and there is significant additional expenditure directed at ‘poor but dim’ learners elsewhere in the wider education budget.
Despite Mr Thomas’s opening claim, it should be reasonably evident from the preceding commentary that my ‘lion’s share’ point is factually correct. His suggestion that AP is ‘largely unknown and wholly unappreciated’ flies in the face of the Taylor Review and the Government’s subsequent work programme.
SEN may depend heavily on the ‘extraordinary effort of dedicated staff’, but at least there are such dedicated staff! There may be inconsistencies in local authority funding and support for SEN, but the global investment is colossal by comparison with the funding dedicated on the other side of the balance.
Gifted Phoenix’s position acknowledges that inputs are heavily loaded in favour of the SEN and AP budgets. This is as it should be since, as Thomas rightly notes, many of the additional services they need are frequently more expensive to provide. These services are not simply dedicated to raising their attainment, but also to tackling more substantive problems associated with their status.
Whether the balance of expenditure on the ‘bright’ and ‘dim’ respectively is optimal is a somewhat different matter. Contrary to Mr Thomas’s position, gifted advocates are often convinced that too much largesse is focused on the latter at the expense of the former.
Turning to advocacy, Mr Thomas says ‘we end up ignoring the more pressing problem’ of the poor but dim. He argues in the Twitter discussion that too few people are advocating for these learners, adding that they are failed ‘because it’s not popular to talk about them’.
I could not resist countering that advocacy for gifted learners is equally unpopular, indeed ‘the word is literally taboo in many settings’. I cannot help thinking – from his footnote reference to ‘epigenetic coding’ – that Mr Thomas is amongst those who are distinctly uncomfortable with the term.
Where advocacy does survive it is focused exclusively on progression to competitive universities and, to some extent, high attainment as a route towards that outcome. The narrative has shifted away from concepts of high ability or giftedness, because of the very limited consensus about that condition (even amongst gifted advocates) and even considerable doubt in some quarters whether it exists at all.
Mr Thomas maintains in his post that the successes of his preferred target group ‘have the power to change the British economy, far more so than those of their brighter peers’. This is because ‘the gap most damaging to society is in life outcomes for the children that perform least well at school’.
As noted above, it is important to remember that we are discussing here the addition of educational and economic value by tackling underachievement amongst learners from disadvantaged backgrounds, rather than amongst all the children that perform least well.
We are also leaving to one side the addition of value through any wider engagement by health and social services to improve life chances.
It is quite reasonable to advance the argument that improving the outcomes of ‘the bottom 20%’ (the Tail) will have ‘a huge socio-economic impact’ and ‘make the biggest marginal difference to society’.
But one could equally make the case that society would derive similar or even higher returns from a decision to concentrate disproportionately on the highest attainers (the Smart Fraction).
Or, as Gifted Phoenix would prefer, one could reasonably propose that the optimal returns should be achieved by means of a balanced approach that raises both the floor and the ceiling, avoiding any arbitrary distinctions on the basis of prior attainment.
From the Gifted Phoenix perspective, one should balance the advantages of removing the drag on productivity of an educational underclass against those of developing the high-level human capital needed to drive economic growth and improve our chances of success in what Coalition ministers call the ‘global race’.
According to this perspective, by eliminating excellence gaps between disadvantaged and advantaged high attainers we will secure a stream of benefits broadly commensurate to that at the bottom end.
These will include substantial spillover benefits, achieved as a result of broadening the pool of successful leaders in political, social, educational and artistic fields, not to mention significant improvements in social mobility.
It is even possible to argue that, by creating a larger pool of more highly educated parents, we can also achieve a significant positive impact on the achievement of subsequent generations, thus significantly reducing the size of the tail.
And in the present generation we will create many more role models: young people from disadvantaged backgrounds who become educationally successful and who can influence the aspirations of younger disadvantaged learners.
This avoids the risk that low expectations will be reinforced and perpetuated through a ‘deficit model’ approach that places excessive emphasis on removing the drag from the tail by producing a larger number of ‘useful members of society’.
It seems to me entirely conceivable that economists might produce calculations to justify any of these different paths.
But it would be highly inequitable to put all our eggs in the ‘poor but bright’ basket, because that penalises some disadvantaged learners for their failure to achieve high attainment thresholds.
And it would be equally inequitable to focus exclusively on the ‘poor but dim’, because that penalises some disadvantaged learners for their success in becoming high attainers.
The more equitable solution must be to opt for a ‘balanced scorecard’ approach that generates a proportion of the top end benefits and a proportion of the bottom end benefits simultaneously.
There is a risk that this reduces the total flow of benefits, compared with one or other of the inequitable solutions, but there is a trade-off here between efficiency and a socially desirable outcome that balances the competing interests of the two groups.
The personal dimension
After we had finished our Twitter exchanges, I thought to research Mr Thomas online. Turns out he’s quite the Big-Cheese-in-Embryo. Provided he escapes the lure of filthy lucre, he’ll be a mover and shaker in education within the next decade.
I couldn’t help noticing his own educational experience – public school, a First in PPE from Oxford, leading light in the Oxford Union – then graduation from Teach First alongside internships with Deutsche Bank and McKinsey.
Now he’s serving his educational apprenticeship as joint curriculum lead for maths at a prominent London Academy. He’s also a trustee of ‘a university mentoring project for highly able 11-14 year old pupils from West London state schools’.
Lucky I didn’t check earlier. Such a glowing CV might have been enough to cow this grammar school Oxbridge reject, even if I did begin this line of work several years before he was born. Not that I have a chip on my shoulder…
The experience set me wondering about the dominant ideology amongst the Teach First cadre, and how it is tempered by extended exposure to teaching in a challenging environment.
There’s more than a hint of idealism about someone from this privileged background espousing the educational philosophy that Mr Thomas professes. But didn’t he wonder where all the disadvantaged people were during his own educational experience, and doesn’t he want to change that too?
His interest in mentoring highly able pupils would suggest that he does, but also seems directly to contradict the position he’s reached here. It would be a pity if the ‘poor but bright’ could not continue to rely on his support, equal in quantity and quality to the support he offers the ‘poor but dim’.
For he could make a huge difference at both ends of the attainment spectrum – and, with his undeniable talents, he should certainly be able to do so.
We are entertaining three possible answers to the question whether in principle to prioritise the needs of the ‘poor but bright’ or the ‘poor but dim’:
Concentrate principally – perhaps even exclusively – on closing the Excellence Gaps at the top
Concentrate principally – perhaps even exclusively – on closing the Foundation Gaps at the bottom
Concentrate equally across the attainment spectrum, at the top and bottom and all points in between.
Speaking as an advocate for those at the top, I favour the third option.
It seems to me incontrovertible – though hard to quantify – that, in the English education system, the lion’s share of resources goes towards closing the Foundation Gaps.
That is perhaps as it should be, although one could wish that the financial scales were not tipped so excessively in their direction, for ‘poor but bright’ learners do in my view have an equal right to challenge and support, and should not be penalised for their high attainment.
Our current efforts to understand the relative size of the Foundation and Excellence Gaps and how these are changing over time are seriously compromised by the limited data in the public domain.
There is a powerful economic case to be made for prioritising the Foundation Gaps as part of a deliberate strategy for shortening the tail – but an equally powerful case can be constructed for prioritising the Excellence Gaps, as part of a deliberate strategy for increasing the smart fraction.
Neither of these options is optimal from an equity perspective, however. The stream of benefits might be compromised somewhat by not focusing exclusively on one or the other, but a balanced approach should otherwise be in our collective best interests.
You may or may not agree. Here is a poll so you can register your vote. Please use the comments facility to share your wider views on this post.
We must beware the romance of the poor but bright.
As I see it, there are three sets of issues with the ‘G’ word:
Terminological – the term carries with it associations that make some advocates uncomfortable and predispose others to resist such advocacy.
Definitional – there are many different ways to define the term and the subset of the population to which it can be applied; there is much disagreement about this, even amongst advocates.
Labelling – the application of the term to individuals can have unintended negative consequences, for them and for others.
We need shared terminology to communicate effectively about this topic. A huge range of alternatives is available: able, more able, highly able, most able, talented, asynchronous, high potential, high learning potential… and so on.
These terms – the ‘g’ word in particular – are often qualified by an adjective – profoundly, highly, exceptionally – which adds a further layer of complexity. Then there is the vexed question of dual and multiple exceptionality…
Those of us who are native English speakers conveniently forget that there are also numerous terms available in other languages: surdoué, Hochbegabung, Hochbegabte, altas capacidades, superdotados, altas habilidades, evnerik and many, many more!
Each of these terms has its own good and bad points, its positive and negative associations.
The ‘g’ word has a long history, is part of the lingua franca and is still most widely used. But its long ascendancy has garnered a richer mix of associations than some of the alternatives.
The negative associations can be unhelpful to those seeking to persuade others to respond positively and effectively to the needs of these children and young people. Some advocates feel uncomfortable using the term and this hampers effective communication, both within the community and outside it.
Some react negatively to its exclusive, elitist connotations; on the other hand, it can be used in a positive way to boost confidence and self-esteem.
But, ultimately, the term we use is less significant than the way in which we define it. There may be some vague generic distaste for the ‘g’ word, but logic should dictate that most reactions will depend predominantly on the meaning that is applied to the term.
My very first blog post drew attention to the very different ways in which this topic is approached around the world. I identified three key polarities:
Nature versus nurture – the perceived predominance of inherited disposition over effort and practice, or vice versa.
Excellence versus equity – whether priority is given to raising absolute standards and rewarding merit, or to narrowing excellence gaps and promoting social mobility.
Special needs versus personalisation – whether the condition or state defined by the term should be addressed educationally as a special need, or through mainstream provision via differentiation and tailored support.
These definitional positions may be associated with the perceived pitch or incidence of the ‘g’ condition. When those at the extreme of the distribution are under discussion, or the condition is perceived to be extremely rare, a nature-excellence-special needs perspective is more likely to predominate. A broader conceptualisation pushes one towards the nurture-equity-personalisation nexus.
Those with a more inclusive notion of ‘g’-ness – who do not distinguish between ‘bright’ and ‘g’, include all high attainers amongst the latter and are focused on the belief that ‘g’-ness is evenly distributed in the population by gender, ethnic and socio-economic background – are much more likely to hold the latter perspective, or at least tend towards it.
There are also differences according to whether the focus is the condition itself – ‘g’-ness – or schooling for the learners to whom the term is applied – ‘g’ education. In the first case, nature, excellence and special needs tend to predominate; in the second the reverse is true. This can compromise interaction between parents and educators.
In my experience, if the ‘g’ word is qualified by a careful definition that takes account of these three polarities, a mature discussion about needs and how best to meet them is much more likely to occur.
In the absence of a shared definition, the associations of the term will likely predominate unchecked. Effective communication will be impossible; common ground cannot be established; the needs that the advocate is pressing will remain unfulfilled. That is in no-one’s best interests, least of all those who are ‘g’.
When the ‘g’ word is applied to an individual, it is likely to influence how that individual perceives himself and how others perceive him.
Labelling is normally regarded as negative, because it implies a fixed and immutable state and may subject the bearers of the label to impossibly high expectations, whether of behaviour or achievement, that they cannot always fulfil.
Those who do not carry the label may see themselves as second class citizens, become demotivated and much less likely to succeed.
But, as noted above, it is also possible to use the ‘g’ label to confer much-needed status and attention on those who do not possess the former or receive enough of the latter. This can boost confidence and self-esteem, making the owners of the label more likely to conform to the expectations that it carries.
This is particularly valuable for those who strive to promote equity and narrow excellence gaps between those from advantaged and disadvantaged backgrounds.
Moreover, much depends on whether the label is permanently applied or confers a temporary status.
I recently published a Twitter conversation explaining how the ‘g’ label can be used as a marker to identify those learners who for the time being need additional learning support to maximise their already high achievement.
This approach reflects the fact that children and young people do not develop through a consistent linear process, but experience periods of rapid development and comparative stasis.
The timing and duration of these periods will vary so, at any one time in any group of such individuals, some will be progressing rapidly and others will not. Over the longer term some will prove precocious; others late developers.
This is not to deny that a few learners at the extreme of the distribution will retain the marker throughout their education, because they are consistently far ahead of their peers and so need permanent additional support to maximise their achievement.
But, critically, the label is earned through evidence of high achievement rather than through a test of intelligence or cognitive ability that might have been administered once only and in the distant past. ‘G’-ness depends on educational success. It also forces educators to address underachievement at the top of the attainment spectrum.
If a label is more typically used as a temporary marker it must be deployed sensitively, in a way that is clearly understood by learners and their parents. They must appreciate that the removal of the marker is not a punishment or downgrading that leads to loss of self-esteem.
Because the ‘g’ label typically denotes a non-permanent state that defines need rather than expectation, most if not all of the negative connotations can be avoided.
Nevertheless, this may be anathema to those with a nature-excellence-special needs perspective!
I have avoided using the ‘g’ word within this post, partly to see if it could be done and partly out of respect for those of you who dislike it so much.
But I have also advanced some provocative arguments using terminology that some of you will find equally disturbing. That is deliberate and designed to make you think!
The ‘g’ word carries substantial downsides, but these can be minimised through careful definition and the application of the label as a non-permanent marker.
It may be that the residual negative associations are such that an alternative is still preferable. The question then arises whether there is a better term with the same currency and none of the negative connotations.
As noted above there are many contenders – not all of them part of the English language – but none stands head-and-shoulders above its competitors.
And of course it is simply impossible to ban a word. Indeed, any attempt to do so would provoke many of us – me included – to use the ‘g’ word even more frequently and with much stronger conviction.
The more specific purpose of the post is to explore how consistently Ofsted inspectors are applying their guidance and, in particular, whether there is substance for some of the concerns I expressed in these earlier posts, drawn together in the next section.
The remainder of the post provides an analysis of the sample and a qualitative review of the material about the most able (and analogous terms) included in the sample of 87 inspection reports.
It concludes with a summary of the key points, a set of associated recommendations and an overall inspection grade for inspectors’ performance to date. Here is a link to this final section for those who prefer to skip the substance of the post.
Before embarking on the real substance of this argument I need to restate briefly some of the key issues raised in those earlier posts:
Ofsted’s definition of ‘the most able’ in its 2013 survey report is idiosyncratically broad, including around half of all learners on the basis of their KS2 outcomes.
The evidence base for this survey report included material suggesting that the most able students are supported well or better in only 20% of lessons – and are not making the progress of which they are capable in about 40% of schools.
The survey report’s recommendations included three commitments on Ofsted’s part. It would:
‘focus more closely in its inspections on the teaching and progress of the most able students, the curriculum available to them, and the information, advice and guidance provided to the most able students’;
‘consider in more detail during inspection how well the pupil premium is used to support the most able students from disadvantaged backgrounds’ and
‘report its inspection findings about this group of students more clearly in school inspection, sixth form and college reports.’
Subsequently the school inspection guidance was revised somewhat haphazardly, resulting in the parallel use of several undefined terms (‘able pupils’, ‘most able’, ‘high attaining’, ‘highest attaining’), the underplaying of the attainment and progress of the most able learners attracting the Pupil Premium and very limited reference to appropriate curriculum and IAG.
Within the inspection guidance, emphasis was placed primarily on learning and progress. I edited together the two relevant sets of level descriptors in the guidance to provide this summary for the four different inspection categories:
In outstanding schools the most able pupils’ learning is consistently good or better and they are making rapid and sustained progress.
In good schools the most able pupils’ learning is generally good, they make good progress and achieve well over time.
In schools requiring improvement the teaching of the most able pupils and their achievement are not good.
In inadequate schools the most able pupils are underachieving and making inadequate progress.
No published advice has been made available to inspectors on the interpretation of these amendments to the inspection guidance. In October 2013 I wrote:
‘Unfortunately, there is a real risk that the questionable clarity of the Handbook and Subsidiary Guidance will result in some inconsistency in the application of the Framework, even though the fundamental purpose of such material is surely to achieve the opposite.’
Analysis of a very small sample of reports for schools reporting poor results for high attainers in the school performance tables suggested inconsistency both before and after the amendments were introduced into the guidance. I commented:
‘One might expect that, unconsciously or otherwise, inspectors are less ready to single out the performance of the most able when a school is inadequate across the board, but the small sample above does not support this hypothesis. Some of the most substantive comments relate to inadequate schools.
It therefore seems more likely that the variance is attributable to the differing capacity of inspection teams to respond to the new emphases in their inspection guidance. This would support the case made in my previous post for inspectors to receive additional guidance on how they should interpret the new requirement.’
The material below considers the impact of these revisions on a more substantial sample of reports and whether this justifies some of the concerns expressed above.
‘Inspectors must always report in detail on the progress of the most able pupils and how effectively teaching engages them with work that is challenging enough.’ (p8)
This serves to reinforce the changes to the inspection guidance and clearly indicates that coverage of this issue – at least in these terms – is a non-negotiable: we should expect to see appropriate reference in every single section 5 report.
The sample comprises 87 secondary schools whose Section 5 inspection reports were published by Ofsted in the month of March 2014.
The inspections were conducted between 26 November 2013 and 11 March 2014, so the inspectors will have had time to become familiar with the revised guidance.
However up to 20 of the inspections took place before Ofsted felt it necessary to emphasise that coverage of the progress and teaching of the most able is compulsory.
The sample happens to include several institutions inspected as part of wider-ranging reviews of schools in Birmingham and schools operated by the E-ACT academy chain. It also incorporates several middle-deemed secondary schools.
Chart 1 shows the regional breakdown of the sample, adopting the regions Ofsted uses to categorise reports, as opposed to its own regional structure (ie with the North East identified separately from Yorkshire and Humberside).
It contains a disproportionately large number of schools from the West Midlands while the South-West is significantly under-represented. All the remaining regions supply between 5 and 13 schools. A total of 57 local authority areas are represented.
Chart 1: Schools within the sample by region
Chart 2 shows the different statuses of schools within the sample. Over 40% are community schools, while almost 30% are sponsored academies. There are no academy converters but sponsored academies, free schools and studio schools together account for some 37% of the sample.
Chart 2: Schools within the sample by status
The vast majority of schools in the sample are 11-16 or 11-18 institutions, but four are all-through schools, five provide for learners aged 13 or 14 upwards and ten are middle schools. There are four single sex schools.
Chart 3 shows the variation in school size. Some of the studio schools, free schools and middle schools are very small by secondary standards, while the largest secondary school in the sample has some 1,600 pupils. A significant proportion of schools have between 600 and 1,000 pupils.
Chart 3: Schools within the sample by number on roll
The distribution of overall inspection grades between the sample schools is illustrated by Chart 4 below. Eight of the sample were rated outstanding, 28 good, 35 requiring improvement and 16 inadequate.
Of those rated inadequate, 12 were subject to special measures and four had serious weaknesses.
Chart 4: Schools within the sample by overall inspection grade
The eight schools rated outstanding include:
A mixed 11-18 sponsored academy
A mixed 14-19 studio school
A mixed 11-18 free school
A mixed 11-16 VA comprehensive
A girls’ 11-18 VA comprehensive
A boys’ 11-18 VA selective school
A girls’ 11-18 community comprehensive and
A mixed 11-18 community comprehensive
The sixteen schools rated inadequate include:
Eight mixed 11-18 sponsored academies
Two mixed 11-16 sponsored academies
A mixed all-through sponsored academy
A mixed 11-16 free school
Two mixed 11-16 community comprehensives
A mixed 11-18 community comprehensive and
A mixed 13-19 community comprehensive
Coverage of the most able in main findings and recommendations
Where they were mentioned, such learners were most often described as ‘most able’, but a wide range of other terminology was deployed, including ‘most-able’, ‘the more able’, ‘more-able’, ‘higher attaining’, ‘high-ability’, ‘higher-ability’ and ‘able students’.
The idiosyncratic adoption of redundant hyphenation is an unresolved mystery.
It is not unusual for two or more of these terms to be used in the same report. In the absence of a glossary, this makes some reports rather less straightforward to interpret accurately.
It is also more difficult to compare and contrast reports. Helpful services like Watchsted’s word search facility become less useful.
Incidence of commentary in the main findings and recommendations
Thirty of the 87 inspection reports (34%) addressed the school’s most able learners explicitly (or applied a similar term) in both the main findings and the recommendations sections.
The analysis showed that 28% of reports on academies (including studios and free schools) met this criterion, whereas 38% of reports on non-academy schools did so.
Chart 5 shows how the incidence of reference in both main findings and recommendations varies according to the overall inspection grade awarded.
One can see that this level of attention is most prevalent in schools requiring improvement, followed by those with inadequate grades. It was less common in schools rated good and less common still in outstanding schools. The gap between these two categories is perhaps smaller than expected.
The slight lead for schools requiring improvement over inadequate schools may be attributable to a view that the latter face more pressing priorities, or it may have something to do with the varying proportions of high attainers in such schools, or both of these factors could be in play, amongst others.
Chart 5: Most able covered in both main findings and recommendations by overall inspection rating (percentage)
A further eleven reports (13%) addressed the most able learners in the recommendations but not the main findings.
Only one report managed to feature the most able in the main findings but not in the recommendations and this was because the former recorded that ‘the most able students do well’.
Consequently, a total of 45 reports (52%) did not mention the most able in either the main findings or the recommendations.
This applied to some 56% of reports on academies (including free schools and studio schools) and 49% of reports on other state-funded schools.
So, according to these proxy measures, the most able in academies appear to receive comparatively less attention from inspectors than those in non-academy schools. It is not clear why. (The samples are almost certainly too small to support reliable comparison of academies and non-academies with different inspection ratings.)
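For readers who want to check the arithmetic behind these proxy measures, the breakdown reproduces in a few lines. This is a minimal sketch: the counts are those reported in this section, and the category labels are my own shorthand.

```python
# Tally of where the 87 sampled reports mention the most able,
# using the counts reported in this section of the post.
counts = {
    "both main findings and recommendations": 30,
    "recommendations only": 11,
    "main findings only": 1,
    "neither": 45,
}

total = sum(counts.values())
assert total == 87  # the whole sample is accounted for

for category, n in counts.items():
    print(f"{category}: {n}/{total} = {n / total:.0%}")
# → 30/87 = 34%, 11/87 = 13%, 1/87 = 1%, 45/87 = 52%
```

The four categories sum exactly to the sample of 87, and the rounded percentages match those quoted above (34%, 13% and 52%).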
Chart 6 below shows the inspection ratings for this subset of reports.
Chart 6: Most able covered in neither main findings nor recommendations by overall inspection rating (percentage)
Here is further evidence that the significant majority of outstanding schools are regarded as having no significant problems in respect of provision for the most able.
On the other hand, this is far from being universally true, since it is an issue for one in four of them. This ratio of 3:1 does not lend complete support to the oft-encountered truism that outstanding schools invariably provide outstandingly for the most able – and vice versa.
At the other end of the spectrum, and perhaps even more surprisingly, over 30% of inadequate schools are assumed not to have issues significant enough to warrant reference in these sections. Sometimes this may be because they are equally poor at providing for all their learners, so the most able are not separately singled out.
Chart 7 below shows differences by school size, giving the percentage of reports mentioning the most able in both main findings and recommendations and in neither.
It divides schools into three categories: small (24 schools with a NOR of 599 or lower), medium (35 schools with a NOR of 600-999) and large (28 schools with a NOR of 1,000 or higher).
Chart 7: Reports mentioning the most able in main findings and recommendations by school size
It is evident that ‘neither’ exceeds ‘both’ for all three categories. Small and large schools record very similar profiles.
But there is a much more significant difference for medium-sized schools. They demonstrate a much smaller percentage of ‘both’ reports and comfortably the largest percentage of ‘neither’ reports.
This pattern – suggesting that inspectors are markedly less likely to emphasise provision for the most able in medium-sized schools – is worthy of further investigation.
It would be particularly interesting to explore further the relationship between school size, the proportion of high attainers in a school and their achievement.
Typical references in the main findings and recommendations
I could detect no obvious and consistent variations in these references by school status or size, but there was a noticeably different emphasis between schools rated outstanding and those rated inadequate.
Where the most able featured in reports on outstanding schools, these included recommendations such as:
‘Further increase the proportion of outstanding teaching in order to raise attainment even higher, especially for the most able students.’ (11-16 VA comprehensive).
‘Ensure an even higher proportion of students, including the most able, make outstanding progress across all subjects’ (11-18 sponsored academy).
These statements suggest that such schools have made good progress in eradicating underachievement amongst the most able but still have further room for improvement.
But where the most able featured in recommendations for inadequate schools, they were typically of this nature:
‘Improve teaching so that it is consistently good or better across all subjects, but especially in mathematics, by: raising teachers’ expectations of the quality and amount of work students of all abilities can do, especially the most and least able.’ (11-16 sponsored academy).
‘Improve the quality of teaching in order to speed up the progress students make by setting tasks that are at the right level to get the best out of students, especially the most able.’ (11-18 sponsored academy).
‘Rapidly improve the quality of teaching, especially in mathematics, by ensuring that teachers: have much higher expectations of what students can achieve, especially the most able…’ (11-16 community school).
These make clear that poor and inconsistent teaching quality is causing significant underachievement at the top end (and ‘especially’ suggests that this top end underachievement is particularly pronounced compared with other sections of the attainment spectrum in such schools).
Recommendations for schools requiring improvement are akin to those for inadequate schools but typically more specific, pinpointing particular dimensions of good quality teaching that are absent, so limiting effective provision for the most able. It is as if these schools have some of the pieces in place but not yet the whole jigsaw.
By comparison, recommendations for good schools can seem rather more impressionistic and/or formulaic, focusing more generally on ‘increasing the proportion of outstanding teaching’. In such cases the assessment is less about missing elements and more about the consistent application of all of them across the school.
One gets the distinct impression that inspectors have a clearer grasp of the ‘fit’ between provision for the most able and the other three inspection outcomes, at least as far as the distinction between ‘good’ and ‘outstanding’ is concerned.
But it would be misleading to suggest that these lines of demarcation are invariably clear. The boundary between ‘good’ and ‘requires improvement’ seems comparatively distinct, but there was more evidence of overlap at the intersections between the other grades.
Coverage of the most able in the main body of reports
References to the most able rarely turn up in the sections dealing with behaviour and safety and leadership and management. I counted no examples of the former and no more than one or two of the latter.
I could find no examples where information, advice and guidance available to the most able are separately and explicitly discussed and little specific reference to the appropriateness of the curriculum for the most able. Both are less prominent than the recommendations in the June 2013 survey report led us to expect.
Within this sample, the vast majority of reports include some description of the attainment and/or progress of the most able in the section about pupils’ achievement, while roughly half pick up the issue in relation to the quality of teaching.
The extent of the coverage of most able learners varied enormously. Some devoted a single sentence to the topic while others referred to it separately in main findings, recommendations, pupils’ achievement and quality of teaching. In a handful of cases reports seemed to give disproportionate attention to the topic.
Attainment and progress
Analyses of attainment and progress are sometimes entirely generic, as in:
‘The most able students make good progress’ (inadequate 11-18 community school).
‘The school has correctly identified a small number of the most able who could make even more progress’ (outstanding 11-16 RC VA school).
‘The most able students do not always secure the highest grades’ (11-16 community school requiring improvement).
‘The most able students make largely expected rates of progress. Not enough yet go on to attain the highest GCSE grades in all subjects.’ (Good 11-18 sponsored academy).
Sometimes such statements can be damning:
‘The most-able students in the academy are underachieving in almost every subject. This is even the case in most of those subjects where other students are doing well. It is an academy-wide issue.’ (Inadequate 11-18 sponsored academy).
These do not in my view constitute reporting ‘in detail on the progress of the most able pupils’ and so probably fall foul of Ofsted’s guidance to inspectors on writing reports.
More specific comments on attainment typically refer explicitly to the achievement of A*/A grades at GCSE and ideally to specific subjects, for example:
‘In 2013, standards in science, design and technology, religious studies, French and Spanish were also below average. Very few students achieved the highest A* and A grades.’ (Inadequate 11-18 sponsored academy)
‘Higher-ability students do particularly well in a range of subjects, including mathematics, religious education, drama, art and graphics. They do as well as other students nationally in history and geography.’ (13-18 community school requiring improvement)
More specific comments on progress include:
‘The progress of the most able students in English is significantly better than that in other schools nationally, and above national figures in mathematics. However, the progress of this group is less secure in science and humanities.’ (Outstanding 11-18 sponsored academy)
‘In 2013, when compared to similar students nationally, more-able students made less progress than less-able students in English. In mathematics, where progress is less than in English, students of all abilities made similar progress.’ (11-18 sponsored academy requiring improvement).
Statements about progress rarely extend beyond English and maths (the first example above is exceptional) but, when attainment is the focus, some reports take a narrow view based exclusively on the core subjects, while others are far wider-ranging.
Despite the reference in Ofsted’s survey report, and subsequently the revised subsidiary guidance, to coverage of high attaining learners in receipt of the Pupil Premium, this is hardly ever addressed.
I could find only two examples amongst the 87 reports:
‘The gap between the achievement in English and mathematics of students for whom the school receives additional pupil premium funding and that of their classmates widened in 2013… During the inspection, it was clear that the performance of this group is a focus in all lessons and those of highest ability were observed to be achieving equally as well as their peers.’ (11-16 foundation school requiring improvement)
‘Students eligible for the pupil premium make less progress than others do and are consequently behind their peers by approximately one GCSE grade in English and mathematics. These gaps reduced from 2012 to 2013, although narrowing of the gaps in progress has not been consistent over time. More-able students in this group make relatively less progress.’ (11-16 sponsored academy requiring improvement)
More often than not it seems that the most able and those in receipt of the Pupil Premium are assumed to be mutually exclusive groups.
Quality of teaching
There was little variation in the issues raised under teaching quality. Most inspectors select two or three options from a standard menu:
‘Where teaching is best, teachers provide suitably challenging materials and through highly effective questioning enable the most able students to be appropriately challenged and stretched…. Where teaching is less effective, teachers are not planning work at the right level of difficulty. Some work is too easy for the more able students in the class.’ (Good 11-16 community school)
‘In teaching observed during the inspection, the pace of learning for the most able students was too slow because the activities they were given were too easy. Although planning identified different activities for the most able students, this was often vague and not reflected in practice. Work lacks challenge for the most able students.’ (Inadequate 11-16 community school)
‘In lessons where teaching requires improvement, teachers do not plan work at the right level to ensure that students of differing abilities build on what they already know. As a result, there is a lack of challenge in these lessons, particularly for the more able students, and the pace of learning is slow. In these lessons teachers do not have high enough expectations of what students can achieve.’ (11-18 community school requiring improvement)
‘Tasks set by teachers are sometimes too easy and repetitive for pupils, particularly the most able. In mathematics, pupils are sometimes not moved on quickly enough to new and more challenging tasks when they have mastered their current work.’ (9-13 community middle school requiring improvement)
‘Targets which are set for students are not demanding enough, and this particularly affects the progress of the most able because teachers across the year groups and subjects do not always set them work which is challenging. As a result, the most able students are not stretched in lessons and do not achieve as well as they should.’ (11-16 sponsored academy rated inadequate)
All the familiar themes are present – assessment informing planning, careful differentiation, pace and challenge, appropriate questioning, the application of subject knowledge, the quality of homework, high expectations and extending effective practice between subject departments.
Negligible coverage of the most able
Only one of the 87 reports failed to make any mention of the most able whatsoever. This is the report on North Birmingham Academy, an 11-19 mixed school requiring improvement.
This clearly does not meet the injunction to:
‘…report in detail on the progress of the most able pupils and how effectively teaching engages them with work that is challenging enough’.
It ought not to have passed through Ofsted’s quality assurance processes unscathed. The inspection was conducted in February 2014, after this guidance was issued, so there is no excuse.
Several other inspections make only cursory references to the most able in the main body of the report, for example:
‘Where teaching is not so good, it was often because teachers failed to check students’ understanding or else to anticipate when to intervene to support students’ learning, especially higher attaining students in the class.’ (Good 11-18 VA comprehensive).
‘… the teachers’ judgements matched those of the examiners for a small group of more-able students who entered early for GCSE in November 2013.’ (Inadequate 11-18 sponsored academy).
‘More-able students are increasingly well catered for as part of the academy’s focus on raising levels of challenge.’ (Good 11-18 sponsored academy).
‘The most able students do not always pursue their work to the best of their capability.’ (11-16 free school requiring improvement).
These would also fall well short of the report writing guidance. At least 6% of my sample falls into this category.
Some reports note explicitly that the most able learners are not making sufficient progress, but fail to capture this in the main findings or recommendations, for example:
‘The achievement of more able students is uneven across subjects. More able students said to inspectors that they did not feel they were challenged or stretched in many of their lessons. Inspectors agreed with this view through evidence gathered in lesson observations…lessons do not fully challenge all students, especially the more able, to achieve the grades of which they are capable.’ (11-19 sponsored academy requiring improvement).
‘The 2013 results of more-able students show they made slower progress than is typical nationally, especially in mathematics. Progress is improving this year, but they are still not always sufficiently challenged in lessons.’ (11-18 VC CofE school requiring improvement).
‘There is only a small proportion of more-able students in the academy. In 2013 they made less progress in English and mathematics than similar students nationally. Across all of their subjects, teaching is not sufficiently challenging for more-able students and they leave the academy with standards below where they should be.’ (Inadequate 11-18 sponsored academy).
‘The proportion of students achieving grades A* and A was well below average, demonstrating that the achievement of the most able also requires improvement.’ (11-18 sponsored academy requiring improvement).
Something approaching 10% of the sample fell into this category. It was not always clear why this issue was not deemed significant enough to feature amongst schools’ priorities for improvement. This state of affairs was more typical of schools requiring improvement than of inadequate schools, so one could not so readily argue that the schools concerned were overwhelmed by the need to rectify more basic shortcomings.
That said, the example from an inadequate academy above may be significant. It is almost as if the small number of more able students is the reason why this shortcoming is not taken more seriously.
Inspectors must carry in their heads a somewhat subjective hierarchy of issues that schools are expected to tackle. Some inspectors appear to feature the most able at a relatively high position in this hierarchy; others push it further down the list. Some appear more flexible in the application of this hierarchy to different settings than others.
Formulaic and idiosyncratic references
There is clear evidence of formulaic responses, especially in the recommendations for how schools can improve their practice.
Many reports adopt the strategy of recommending a series of actions featuring the most able, either in the target group:
‘Improve the quality of teaching to at least good so that students, including the most able, achieve higher standards, by ensuring that: [followed by a list of actions]’ (9-13 community middle school requiring improvement)
Or in the list of actions:
‘Improve the quality of teaching in order to raise the achievement of students by ensuring that teachers:…use assessment information to plan their work so that all groups of students, including those supported by the pupil premium and the most-able students, make good progress.’ (11-16 community school requiring improvement)
It was rare indeed to come across a report that referred explicitly to interesting or different practice in the school, or approached the topic in a more individualistic manner, but here are a few examples:
‘More-able pupils are catered for well and make good progress. Pupils enjoy the regular, extra challenges set for them in many lessons and, where this happens, it enhances their progress. They enjoy that extra element which often tests them and gets them thinking about their work in more depth. Most pupils are keen to explore problems which will take them to the next level or extend their skills.’ (Good 9-13 community middle school)
‘Although the vast majority of groups of students make excellent progress, the school has correctly identified a small number of the most able who could make even more progress. It has already started an impressive programme of support targeting the 50 most able students called ‘Students Targeted A grade Results’ (STAR). This programme offers individualised mentoring using high-quality teachers to give direct intervention and support. This is coupled with the involvement of local universities. The school believes this will give further aspiration to these students to do their very best and attend prestigious universities.’ (Outstanding 11-16 VA school)
I particularly liked:
‘Policies to promote equality of opportunity are ineffective because of the underachievement of several groups of students, including those eligible for the pupil premium and the more-able students.’ (Inadequate 11-18 academy)
The principal findings from this survey, admittedly based on a rather small and not entirely representative sample, are that:
Inspectors are terminologically challenged in addressing this issue, because there are too many synonyms or near-synonyms in use.
Approximately one-third of inspection reports address provision for the most able in both main findings and recommendations. This is less common in academies than in community, controlled and aided schools. It is most prevalent in schools with an overall ‘requires improvement’ rating, followed by those rated inadequate. It is least prevalent in outstanding schools, although one in four outstanding schools is dealt with in this way.
Slightly over half of inspection reports address provision for the most able in neither the main findings nor the recommendations. This is relatively more common in the academies sector and in outstanding schools. It is least prevalent in schools rated inadequate, though almost one-third of inadequate schools fall into this category. Sometimes this is the case even though provision for the most able is identified as a significant issue in the main body of the report.
There is an unexplained tendency for reports on medium-sized schools to be significantly less likely to feature the most able in both main findings and recommendations and significantly more likely to feature it in neither. This warrants further investigation.
Overall coverage of the topic varies excessively between reports. One ignored it entirely, while several provided only cursory coverage and a few covered it to excess. The scope and quality of the coverage does not necessarily correlate with the significance of the issue for the school.
Coverage of the attainment and progress of the most able learners is variable. Some reports offer only generic descriptions of attainment and progress combined, some are focused exclusively on attainment in the core subjects while others take a wider curricular perspective. Outside the middle school sector, desirable attainment outcomes for the most able are almost invariably defined exclusively in terms of A* and A grade GCSEs.
Hardly any reports consider the attainment and/or progress of the most able learners in receipt of the Pupil Premium.
None of these reports makes specific and explicit reference to information, advice and guidance (IAG) for the most able. It is rarely stated whether the school’s curriculum satisfies the needs of the most able.
Too many reports adopt formulaic approaches, especially in the recommendations they offer the school. Too few include reference to interesting or different practice.
In my judgement, too much current inspection reporting falls short of the commitments contained in the original Ofsted survey report and of the more recent requirement to:
‘always report in detail on the progress of the most able pupils and how effectively teaching engages them with work that is challenging enough.’
Ofsted should publish a glossary defining clearly all the terms for the most able that it employs, so that both inspectors and schools understand exactly what is intended when a particular term is deployed and which learners should be in scope when the most able are discussed.
Ofsted should co-ordinate the development of supplementary guidance clarifying its expectations of schools in respect of provision for the most able. This should set out in more detail what would be expected for such provision to be rated outstanding, good, requiring improvement or inadequate respectively. It should include the most able in receipt of the Pupil Premium, the suitability of the curriculum and the provision of IAG.
Ofsted should provide supplementary guidance for inspectors outlining and exemplifying the full range of evidence they might interrogate concerning the attainment and progress of the most able learners, including those in receipt of the Pupil Premium.
This guidance should specify the essential minimum coverage expected in reports and the ‘triggers’ that would warrant it being referenced in the main findings and/or recommendations for action.
This guidance should discourage inspectors from adopting formulaic descriptors and recommendations and specifically encourage them to identify unusual or innovative examples of effective practice.
The school inspection handbook and subsidiary guidance should be amended to reflect the supplementary guidance.
The School Data Dashboard should be expanded to include key data highlighting the attainment and progress of the most able.
These actions should also be undertaken for inspection of the primary and 16-19 sectors respectively.
The sample of jurisdictions includes England, other English-speaking countries (Australia, Canada, Ireland and the USA) and those that typically top the PISA rankings (Finland, Hong Kong, South Korea, Shanghai, Singapore and Taiwan).
Apart from omitting New Zealand, which did not take part in the problem solving assessment, this is deliberately identical to the sample I selected for a parallel post reviewing comparable results in the PISA 2012 assessments of reading, mathematics and science: ‘PISA 2012: International Comparisons of High Achievers’ Performance’ (December 2013).
These eleven jurisdictions account for nine of the top twelve performers ranked by mean overall performance in the problem solving assessment. (The USA and Ireland lie outside the top twelve, while Japan, Macao and Estonia are the three jurisdictions that are in the top twelve but outside my sample.)
The post is divided into seven sections:
Background to the problem solving assessment: How PISA defines problem solving competence; how it defines performance at each of the six levels of proficiency; how it defines high achievement; the nature of the assessment and who undertook it.
Average performance, the performance of high achievers and the performance of low achievers (proficiency level 1) on the problem solving assessment. This comparison includes my own sample and all the other jurisdictions that score above the OECD average on the first of these measures.
Gender and socio-economic differences amongst high achievers on the problem solving assessment in my sample of eleven jurisdictions.
The relative strengths and weaknesses of jurisdictions in this sample on different aspects of the problem solving assessment. (This treatment is generic rather than specific to high achievers.)
What proportion of high achievers on the problem-solving assessment in my sample of jurisdictions are also high achievers in reading, maths and science respectively.
What proportion of students in my sample of jurisdictions achieves highly in one or more of the four PISA 2012 assessments – and against the ‘all-rounder’ measure, which is based on high achievement in all of reading, maths and science (but not problem solving).
Implications for education policy makers seeking to improve problem solving performance in each of the sample jurisdictions.
Background to the Problem Solving Assessment
Definition of problem solving
PISA’s definition of problem-solving competence is:
‘…an individual’s capacity to engage in cognitive processing to understand and resolve problem situations where a method of solution is not immediately obvious. It includes the willingness to engage with such situations in order to achieve one’s potential as a constructive and reflective citizen.’
The commentary on this definition points out that:
Problem solving requires identification of the problem(s) to be solved, planning and applying a solution, and monitoring and evaluating progress.
A problem is ‘a situation in which the goal cannot be achieved by merely applying learned procedures’, so the problems encountered must be non-routine for 15 year-olds, although ‘knowledge of general strategies’ may be useful in solving them.
Motivational and affective factors are also in play.
The Report is rather coy about the role of creativity in problem solving, and hence about the justification for including the term in its title.
Perhaps the nearest it gets to an exposition is when commenting on the implications of its findings:
‘In some countries and economies, such as Finland, Shanghai-China and Sweden, students master the skills needed to solve static, analytical problems similar to those that textbooks and exam sheets typically contain as well or better than 15-year-olds, on average, across OECD countries. But the same 15-year-olds are less successful when not all information that is needed to solve the problem is disclosed, and the information provided must be completed by interacting with the problem situation. A specific difficulty with items that require students to be open to novelty, tolerate doubt and uncertainty, and dare to use intuitions (“hunches and feelings”) to initiate a solution suggests that opportunities to develop and exercise these traits, which are related to curiosity, perseverance and creativity, need to be prioritised.’
PISA’s framework for assessing problem solving competence is set out in the following diagram:
In solving a particular problem it may not be necessary to apply all these steps, or to apply them in this order.
The proficiency scale was designed to have a mean score across OECD countries of 500. The six levels of proficiency applied in the assessment each have their own profile.
The lowest, level 1 proficiency is described thus:
‘At Level 1, students can explore a problem scenario only in a limited way, but tend to do so only when they have encountered very similar situations before. Based on their observations of familiar scenarios, these students are able only to partially describe the behaviour of a simple, everyday device. In general, students at Level 1 can solve straightforward problems provided there is a simple condition to be satisfied and there are only one or two steps to be performed to reach the goal. Level 1 students tend not to be able to plan ahead or set sub-goals.’
This level equates to a range of scores from 358 to 423. Across the OECD sample, 91.8% of participants are able to perform tasks at this level.
By comparison, level 5 proficiency is described in this manner:
‘At Level 5, students can systematically explore a complex problem scenario to gain an understanding of how relevant information is structured. When faced with unfamiliar, moderately complex devices, such as vending machines or home appliances, they respond quickly to feedback in order to control the device. In order to reach a solution, Level 5 problem solvers think ahead to find the best strategy that addresses all the given constraints. They can immediately adjust their plans or backtrack when they detect unexpected difficulties or when they make mistakes that take them off course.’
The associated range of scores is from 618 to 683 and 11.4% of all OECD students achieve at this level.
Finally, level 6 proficiency is described in this way:
‘At Level 6, students can develop complete, coherent mental models of diverse problem scenarios, enabling them to solve complex problems efficiently. They can explore a scenario in a highly strategic manner to understand all information pertaining to the problem. The information may be presented in different formats, requiring interpretation and integration of related parts. When confronted with very complex devices, such as home appliances that work in an unusual or unexpected manner, they quickly learn how to control the devices to achieve a goal in an optimal way. Level 6 problem solvers can set up general hypotheses about a system and thoroughly test them. They can follow a premise through to a logical conclusion or recognise when there is not enough information available to reach one. In order to reach a solution, these highly proficient problem solvers can create complex, flexible, multi-step plans that they continually monitor during execution. Where necessary, they modify their strategies, taking all constraints into account, both explicit and implicit.’
The range of level 6 scores is from 683 points upwards and 2.5% of all OECD participants score at this level.
PISA defines high achieving students as those securing proficiency level 5 or higher, so proficiency levels 5 and 6 together. The bulk of the analysis it supplies relates to this cohort, while relatively little attention is paid to the more exclusive group achieving proficiency level 6, even though almost 10% of students in Singapore reach this standard in problem solving.
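The score bands quoted above can be expressed in a few lines of code. This is only an illustrative sketch using the boundary scores stated in the text (358, 423, 618 and 683); the cut-points for levels 2 to 4 are not given here, so those levels are reported as a single middle band.

```python
def proficiency_band(score: float) -> str:
    """Map a PISA problem-solving score to the bands quoted in the text.

    Only the boundaries stated in the report extract are used:
    level 1 runs from 358 to 423, level 5 from 618 to 683, and
    level 6 from 683 upwards. Levels 2-4 are lumped together
    because their cut-points are not stated here.
    """
    if score >= 683:
        return "Level 6"
    if score >= 618:
        return "Level 5"
    if score >= 423:
        return "Levels 2-4"
    if score >= 358:
        return "Level 1"
    return "Below Level 1"


def is_high_achiever(score: float) -> bool:
    """PISA counts proficiency level 5 or higher as high achievement."""
    return score >= 618


print(proficiency_band(700))   # Level 6
print(is_high_achiever(600))   # False
```

On this definition, the 2.5% of OECD participants at level 6 are a subset of the 13.9% counted as high achievers (levels 5 and 6 together).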
Sixty-five jurisdictions took part in PISA 2012, including all 34 OECD countries and 31 partners. But only 44 jurisdictions took part in the problem solving assessment, including 28 OECD countries and 16 partners. As noted above, that included all my original sample of twelve jurisdictions, with the exception of New Zealand.
I could find no stated reason why New Zealand chose not to take part. Press reports initially suggested that England would do likewise, but it was subsequently reported that this decision had been reversed.
The assessment was computer-based and comprised 16 units divided into 42 items. The units were organised into four clusters, each designed to take 20 minutes to complete. Participants completed one or two clusters, depending on whether they were also undertaking computer-based assessments of reading and maths.
In each jurisdiction a random sample of those who took part in the paper-based maths assessment was selected to undertake the problem solving assessment. About 85,000 students took part in all. The unweighted sample sizes in my selected jurisdictions are set out in Table 1 below, together with the total population of 15 year-olds in each jurisdiction.
Table 1: Sample sizes undertaking PISA 2012 problem solving assessment in selected jurisdictions
Those taking the assessment were aged between 15 years and three months and 16 years and two months at the time of the assessment. All were enrolled at school and had completed at least six years of formal schooling.
Average performance compared with the performance of high and low achievers
The overall table of mean scores on the problem solving assessment is shown below
There are some familiar names at the top of the table, especially Singapore and South Korea, the two countries that comfortably lead the rankings. Japan is some ten points behind in third place but it in turn has a lead of twelve points over a cluster of four other Asian competitors: Macao, Hong Kong, Shanghai and Taiwan.
A slightly different picture emerges if we compare average performance with the proportion of learners who achieve the bottom proficiency level and the top two proficiency levels. Table 2 below compares these groups.
This table includes all the jurisdictions that exceeded the OECD average score. I have marked in bold the countries in my sample of eleven, which includes Ireland, the only one of them that did not exceed the OECD average.
Table 2: PISA Problem Solving 2012: Comparing Average Performance with Performance at Key Proficiency Levels
(Table columns: Level 1 (%), Level 5 (%), Level 6 (%), Levels 5+6 (%))
The jurisdictions at the top of the table also have a familiar profile, with a small ‘tail’ of low performance combined with high levels of performance at the top end.
Nine of the top ten have fewer than 10% of learners at proficiency level 1, though only South Korea pushes below 5%.
Five of the top ten have 5% or more of their learners at proficiency level 6, but only Singapore and South Korea have a higher percentage at level 6 than level 1 (with Japan managing the same percentage at both levels).
The top three performers – Singapore, South Korea and Japan – are the only three jurisdictions that have over 20% of their learners at proficiency levels 5 and 6 together.
South Korea slightly outscores Singapore at level 5 (20.0% against 19.7%). Japan is in third place, followed by Taiwan, Hong Kong and Shanghai.
But at level 6, Singapore has a clear lead, followed by South Korea, Japan, Hong Kong and Canada respectively.
England’s overall place in the table is relatively consistent on each of these measures, but the gaps between England and the top performers vary considerably.
The best have fewer than half England’s proportion of learners at proficiency level 1, almost twice as many learners at proficiency level 5 and more than twice as many at proficiency levels 5 and 6 together. But at proficiency level 6 they have almost three times as many learners as England.
Chart 1 below compares performance on these four measures across my sample of eleven jurisdictions.
All but Ireland are comfortably below the OECD average for the percentage of learners at proficiency level 1. The USA and Ireland are atypical in having a bigger tail (proficiency level 1) than their cadres of high achievers (levels 5 and 6 together).
At level 5 all but Ireland and the USA are above the OECD average, but the USA leapfrogs the OECD average at level 6.
There is a fairly strong correlation between the proportions of learners achieving the highest proficiency thresholds and average performance in each jurisdiction. However, Canada stands out by having an atypically high proportion of students at level 6.
PISA’s Report discusses the variation in problem-solving performance within different jurisdictions. However, it does so without reference to the proficiency levels, so we do not know to what extent these findings apply equally to high achievers.
Amongst those above the OECD average, those with least variation are Macao, Japan, Estonia, Shanghai, Taiwan, Korea, Hong Kong, USA, Finland, Ireland, Austria, Singapore and the Czech Republic respectively.
Perhaps surprisingly, the degree of variation in Finland is identical to that in the USA and Ireland, while Estonia has less variation than many of the Asian jurisdictions. Singapore, while top of the performance table, is only just above the OECD average in terms of variation.
The countries below the OECD average on this measure – listed in order of increasing variation – include England, Australia and Canada, though all three are relatively close to the OECD average. So these three countries and Singapore are all relatively close together.
Gender and socio-economic differences amongst high achievers
On average across OECD jurisdictions, boys score seven points higher than girls on the problem solving assessment. There is also more variation amongst boys than girls.
Across the OECD participants, 3.1% of boys achieved proficiency level 6 but only 1.8% of girls did so. This imbalance was repeated at proficiency level 5, achieved by 10% of boys and 7.7% of girls.
The table and chart below show the variations within my sample of eleven jurisdictions. The performance of boys exceeds that of girls in all cases, except in Finland at proficiency level 5, and in that instance the gap in favour of girls is relatively small (0.4%).
Table 3: PISA Problem-solving: Gender variation at top proficiency levels
(Table columns: Level 5 (%), Level 6 (%), Levels 5+6 (%))
There is no consistent pattern in whether boys are more heavily over-represented at proficiency level 5 than proficiency level 6, or vice versa.
There is a bigger difference at level 6 than at level 5 in Singapore, South Korea, Canada, Australia, Finland and Ireland, but the reverse is true in the five remaining jurisdictions.
At level 5, boys are in the greatest ascendancy in Shanghai and Taiwan while, at level 6, this is true of Singapore and South Korea.
When proficiency levels 5 and 6 are combined, all five of the Asian tigers show a difference in favour of males of 5.5% or higher, significantly in advance of the six ‘Western’ countries in the sample and significantly ahead of the OECD average.
Amongst the six ‘Western’ representatives, boys have the biggest advantage at proficiency level 5 in England, while at level 6 boys in Ireland have the biggest advantage.
Within this group of jurisdictions, the gap between boys and girls at level 6 is comfortably the smallest in England. But, in terms of performance at proficiency levels 5 and 6 together, Finland is ahead.
Chart 2: PISA Problem-solving: Gender variation at top proficiency levels
The Report includes a generic analysis of gender differences in performance for boys and girls with similar levels of performance in reading, maths and science.
It concludes that girls perform above their expected level in both England and Australia (though the difference is statistically significant only in the latter).
The Report comments:
‘It is not clear whether one should expect there to be a gender gap in problem solving. On the one hand, the questions posed in the PISA problem-solving assessment were not grounded in content knowledge, so boys’ or girls’ advantage in having mastered a particular subject area should not have influenced results. On the other hand… performance in problem solving is more closely related to performance in mathematics than to performance in reading. One could therefore expect the gender difference in performance to be closer to that observed in mathematics – a modest advantage for boys, in most countries – than to that observed in reading – a large advantage for girls.’
The Report considers variations in performance against PISA’s index of economic, social and cultural status (ESCS), finding them weaker overall than for reading, maths and science.
It calculates that the overall proportion of variation in performance attributable to these factors is about 10.6% (compared with 14.9% in maths, 14.0% in science and 13.2% in reading).
Amongst the eleven jurisdictions in my sample, the weakest correlations were found in Canada (4%), followed by Hong Kong (4.9%), South Korea (5.4%), Finland (6.5%), England (7.8%), Australia (8.5%), Taiwan (9.4%), the USA (10.1%) and Ireland (10.2%) in that order. All those jurisdictions had correlations below the OECD average.
Perhaps surprisingly, there were above average correlations in Shanghai (14.1%) and, to a lesser extent (and less surprisingly) in Singapore (11.1%).
The report suggests that students with parents working in semi-skilled and elementary occupations tend to perform above their expected level in problem-solving in Taiwan, England, Canada, the USA, Finland and Australia (in that order – with Australia closest to the OECD average).
The jurisdictions where these students tend to underperform their expected level are – in order of severity – Ireland, Shanghai, Singapore, Hong Kong and South Korea.
A parallel presentation accompanying the Report provides some additional data about the performance in different countries of what the OECD calls ‘resilient’ students – those in the bottom quartile of the ESCS but in the top quartile of performance, after accounting for socio-economic status.
It supplies the graph below, which shows all the Asian countries in my sample clustered at the top, but also with significant gaps between them. Canada is the highest-performing of the remainder in my sample, followed by Finland, Australia, England and the USA respectively. Ireland is some way below the OECD average.
Unfortunately, I can find no analysis of how performance varies according to socio-economic variables at each proficiency level. It would be useful to see which jurisdictions have the smallest ‘excellence gaps’ at levels 5 and 6 respectively.
How different jurisdictions perform on different aspects of problem-solving
The Report’s analysis of comparative strengths and weaknesses in different elements of problem-solving does not take account of variations at different proficiency levels.
It explains which aspects of the assessment students in different jurisdictions found easier, employing a four-part distinction between:
‘Exploring and understanding. The objective is to build mental representations of each of the pieces of information presented in the problem. This involves:
exploring the problem situation: observing it, interacting with it, searching for information and finding limitations or obstacles; and
understanding given information and, in interactive problems, information discovered while interacting with the problem situation; and demonstrating understanding of relevant concepts.
Representing and formulating. The objective is to build a coherent mental representation of the problem situation (i.e. a situation model or a problem model). To do this, relevant information must be selected, mentally organised and integrated with relevant prior knowledge. This may involve:
representing the problem by constructing tabular, graphic, symbolic or verbal representations, and shifting between representational formats; and
formulating hypotheses by identifying the relevant factors in the problem and their inter-relationships; and organising and critically evaluating information.
Planning and executing. The objective is to use one’s knowledge about the problem situation to devise a plan and execute it. Tasks where “planning and executing” is the main cognitive demand do not require any substantial prior understanding or representation of the problem situation, either because the situation is straightforward or because these aspects were previously solved. “Planning and executing” includes:
planning, which consists of goal setting, including clarifying the overall goal, and setting subgoals, where necessary; and devising a plan or strategy to reach the goal state, including the steps to be undertaken; and
executing, which consists of carrying out a plan.
Monitoring and reflecting. The objective is to regulate the distinct processes involved in problem solving, and to critically evaluate the solution, the information provided with the problem, or the strategy adopted. This includes:
monitoring progress towards the goal at each stage, including checking intermediate and final results, detecting unexpected events, and taking remedial action when required; and
reflecting on solutions from different perspectives, critically evaluating assumptions and alternative solutions, identifying the need for additional information or clarification and communicating progress in a suitable manner.’
Amongst my sample of eleven jurisdictions:
‘Exploring and understanding’ items were found easier by students in Singapore, Hong Kong, South Korea, Australia, Taiwan and Finland.
‘Representing and formulating’ items were found easier in Taiwan, Shanghai, South Korea, Singapore, Hong Kong, Canada and Australia.
‘Planning and executing’ items were found easier in Finland only.
‘Monitoring and reflecting’ items were found easier in Ireland, Singapore, the USA and England.
The Report concludes:
‘This analysis shows that, in general, what differentiates high-performing systems, and particularly East Asian education systems, such as those in Hong Kong-China, Japan, Korea [South Korea], Macao-China, Shanghai-China, Singapore and Chinese Taipei [Taiwan], from lower-performing ones, is their students’ high level of proficiency on “exploring and understanding” and “representing and formulating” tasks.’
It also distinguishes those jurisdictions that perform best on interactive problems, requiring students to discover some of the information required to solve the problem, rather than being presented with all the necessary information. This seems to be the nearest equivalent to a measure of creativity in problem solving.
Comparative strengths and weaknesses in respect of interactive tasks are captured in the following diagram.
One can see that several of my sample – Ireland, the USA, Canada, Australia, South Korea and Singapore – are placed in the top right-hand quarter of the diagram, indicating stronger than expected performance on both interactive and knowledge acquisition tasks.
England is stronger than expected on the former but not on the latter.
Jurisdictions that are weaker than expected on interactive tasks only include Hong Kong, Taiwan and Shanghai, while Finland is weaker than expected on both.
We have no information about whether these distinctions were maintained at different proficiency levels.
Comparing jurisdictions’ performance at higher proficiency levels
Table 4 and Charts 3 and 4 below show variations in the performance of countries in my sample across the four different assessments at level 6, the highest proficiency level.
The charts in particular emphasise how far ahead the Asian Tigers are in maths at this level, compared with the cross-jurisdictional variation in the other three assessments.
Each ‘Asian Tiger’s’ level 6 performance in maths also vastly exceeds its level 6 performance in the other three assessments. The proportion of students achieving level 6 proficiency in problem solving lags far behind, even though there is a fairly strong correlation between these two assessments (see below).
In contrast, all the ‘Western’ jurisdictions in the sample – with the sole exception of Ireland – achieve a higher percentage at proficiency level 6 in problem solving than they do in maths, although the difference is always less than a full percentage point. (Even in Ireland the difference is only 0.1 of a percentage point in favour of maths.)
Shanghai is the only jurisdiction in the sample which has more students achieving proficiency level 6 in science than in problem solving. It also has the narrowest gap between level 6 performance in problem solving and in reading.
Meanwhile, England, the USA, Finland and Australia all have broadly similar profiles across the four assessments, with the largest percentage of level 6 performers in problem solving, followed by maths, science and reading in that order.
The proximity of the lines marking level 6 performance in reading and science is also particularly evident in the second chart below.
Table 4: Percentage achieving proficiency level 6 in each domain
Charts 3 and 4: Percentage achieving proficiency level 6 in each domain
The pattern is materially different at proficiency level 5 and above, as the table and chart below illustrate. These also include the proportion of all-rounders, who achieved proficiency level 5 or above in each of maths, science and reading (but not in problem solving).
The lead enjoyed by the ‘Asian Tigers’ in maths is somewhat less pronounced. The gap between performance within these jurisdictions on the different assessments also tends to be less marked, although maths accounts for comfortably the largest proportion of level 5+ performance in all five cases.
Conversely, level 5+ performance on the different assessments is typically much closer in the ‘Western’ countries. Problem solving leads the way in Australia, Canada, England and the USA, but in Finland science is in the ascendant and reading is strongest in Ireland.
Some jurisdictions have a far ‘spikier’ profile than others. Ireland is closest to achieving equilibrium across all four assessments. Australia and England share very similar profiles, though Australia outscores England in each assessment.
The second chart in particular shows how Shanghai’s ‘spike’ applies in all three of the other assessments but not in problem solving.
Table 5: Percentage achieving proficiency level 5 and above in each domain
Ma + Sci + Re L5+
(* The figure of 5.7 refers to the whole UK.)
Charts 5 and 6: Percentage achieving proficiency level 5 and above in each domain
How high-achieving problem solvers perform in other assessments
Correlations between performance in different assessments
The Report provides an analysis of the proportion of students achieving proficiency levels 5 and 6 on problem solving who also achieved that outcome on one of the other three assessments: reading, maths and science.
It argues that problem solving is a distinct and separate domain. However:
‘On average, about 68% of the problem-solving score reflects skills that are also measured in one of the three regular assessment domains. The remaining 32% reflects skills that are uniquely captured by the assessment of problem solving. Of the 68% of variation that problem-solving performance shares with other domains, the overwhelming part is shared with all three regular assessment domains (62% of the total variation); about 5% is uniquely shared between problem solving and mathematics only; and about 1% of the variation in problem solving performance hinges on skills that are specifically measured in the assessments of reading or science.’
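The Report’s decomposition can be sanity-checked with a few lines of arithmetic. The sketch below simply reproduces the quoted percentages (the figures are the Report’s; the variable names are mine):

```python
# Variance decomposition of problem-solving performance,
# using the percentages quoted in the Report.
shared_all_three_domains = 62  # shared with maths, reading and science together
shared_maths_only = 5          # uniquely shared with mathematics
shared_reading_or_science = 1  # uniquely shared with reading or science

shared_total = (shared_all_three_domains
                + shared_maths_only
                + shared_reading_or_science)    # the Report's "about 68%"
unique_to_problem_solving = 100 - shared_total  # the Report's "remaining 32%"

print(shared_total, unique_to_problem_solving)  # 68 32
```

The three shared components do indeed sum to the 68% headline figure, leaving 32% of the variation unique to problem solving.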
It discusses the correlation between these different assessments:
‘A key distinction between the PISA 2012 assessment of problem solving and the regular assessments of mathematics, reading and science is that the problem-solving assessment does not measure domain-specific knowledge; rather, it focuses as much as possible on the cognitive processes fundamental to problem solving. However, these processes can also be used and taught in the other subjects assessed. For this reason, problem-solving tasks are also included among the test units for mathematics, reading and science, where their solution requires expert knowledge specific to these domains, in addition to general problem-solving skills.
It is therefore expected that student performance in problem solving is positively correlated with student performance in mathematics, reading and science. This correlation hinges mostly on generic skills, and should thus be about the same magnitude as between any two regular assessment subjects.’
These overall correlations are set out in the table below, which shows that maths has a higher correlation with problem solving than either science or reading, but that this correlation is lower than those between the three subject-related assessments.
The correlation between maths and science (0.90) is comfortably the strongest (despite the relationship between reading and science at the top end of the distribution noted above).
Correlations are broadly similar across jurisdictions, but the Report notes that the association is comparatively weak in some of these, including Hong Kong. Students here are more likely to perform poorly on problem solving and well on other assessments, or vice versa.
There is also broad consistency at different performance levels, but the Report identifies those jurisdictions where students with the same level of performance exceed expectations in relation to problem-solving performance. These include South Korea, the USA, England, Australia, Singapore and – to a lesser extent – Canada.
Those with lower than expected performance include Shanghai, Ireland, Hong Kong, Taiwan and Finland.
The Report notes:
‘In Shanghai-China, 86% of students perform below the expected level in problem solving, given their performance in mathematics, reading and science. Students in these countries/economies struggle to use all the skills that they demonstrate in the other domains when asked to perform problem-solving tasks.’
However, there is variation according to students’ maths proficiency:
Jurisdictions whose high scores on problem solving are mainly attributable to strong performers in maths include Australia, England and the USA.
Jurisdictions whose high scores on problem solving are more attributable to weaker performers in maths include Ireland.
Jurisdictions whose lower scores in problem solving are more attributable to weakness among strong performers in maths include Korea.
Jurisdictions whose lower scores in problem solving are more attributable to weakness among weak performers in maths include Hong Kong and Taiwan.
Jurisdictions whose weakness in problem solving is fairly consistent regardless of performance in maths include Shanghai and Singapore.
The Report adds:
‘In Italy, Japan and Korea, the good performance in problem solving is, to a large extent, due to the fact that lower performing students score beyond expectations in the problem-solving assessment….This may indicate that some of these students perform below their potential in mathematics; it may also indicate, more positively, that students at the bottom of the class who struggle with some subjects in school are remarkably resilient when it comes to confronting real-life challenges in non-curricular contexts…
In contrast, in Australia, England (United Kingdom) and the United States, the best students in mathematics also have excellent problem-solving skills. These countries’ good performance in problem solving is mainly due to strong performers in mathematics. This may suggest that in these countries, high performers in mathematics have access to – and take advantage of – the kinds of learning opportunities that are also useful for improving their problem-solving skills.’
What proportion of high performers in problem solving are also high performers in one of the other assessments?
The percentages of high achieving students (proficiency level 5 and above) in my sample of eleven jurisdictions who perform equally highly in each of the three domain-specific assessments are shown in Table 6 and Chart 7 below.
These show that Shanghai leads the way in each case, with 98.0% of all students who achieve proficiency level 5+ in problem solving also achieving the same outcome in maths. For science and reading the comparable figures are 75.1% and 71.7% respectively.
Taiwan is the nearest competitor in respect of problem solving plus maths, Finland in the case of problem solving plus science and Ireland in the case of problem solving plus reading.
South Korea, Taiwan and Canada are atypical of the rest in recording a higher proportion of problem solving plus reading at this level than problem solving plus science.
Singapore, Shanghai and Ireland are the only three jurisdictions that score above 50% on all three of these combinations. However, the only jurisdictions that exceed the OECD averages in all three cases are Singapore, Hong Kong, Shanghai and Finland.
Table 6: PISA problem-solving: Percentage of students achieving proficiency level 5+ in domain-specific assessments
PS + Ma
PS + Sci
PS + Re
Chart 7: PISA problem-solving: Percentage of students achieving proficiency level 5+ in domain-specific assessments
What proportion of students achieve highly in one or more assessments?
Table 7 and Chart 8 below show how many students in each of my sample achieved proficiency level 5 or higher in problem solving only, in problem solving and one or more other assessments, in one or more other assessments but not problem solving, and in at least one assessment (i.e. the total of the three preceding columns).
I have also repeated in the final column the percentage achieving this proficiency level in each of maths, science and reading. (PISA has not released information about the proportion of students who achieved this feat across all four assessments.)
These reveal that the percentages of students who achieve proficiency level 5+ only in problem solving are very small, ranging from 0.3% in Shanghai to 6.7% in South Korea.
Conversely, the percentages of students achieving proficiency level 5+ in any one of the other assessments but not in problem solving are typically significantly higher, ranging from 4.5% in the USA to 38.1% in Shanghai.
There is quite a bit of variation in whether jurisdictions score more highly on ‘problem solving and at least one other’ (second column) or on ‘at least one other excluding problem solving’ (third column).
More importantly, the fourth column shows that the jurisdiction with the most students achieving proficiency level 5 or higher in at least one assessment is clearly Shanghai, followed by Singapore, Hong Kong, South Korea and Taiwan in that order.
The proportion of students achieving this outcome in Shanghai is close to three times the OECD average, comfortably more than twice the rate achieved in any of the ‘Western’ countries and three times the rate achieved in the USA.
The same is true of the proportion of students achieving this level in the three domain-specific assessments.
On this measure, South Korea and Taiwan fall significantly behind their Asian competitors, and the latter is overtaken by Australia, Finland and Canada.
Table 7: Percentage achieving proficiency level 5+ in different combinations of PISA assessments
PS + 1 or more%
1+ but not PS %
L5+ in at least one %
L5+ in Ma + Sci + Re %
(* The figure of 5.7 refers to the whole UK.)
Chart 8: Percentage achieving proficiency level 5+ in different combinations of PISA assessments
The Report comments:
‘The proportion of students who reach the highest levels of proficiency in at least one domain (problem solving, mathematics, reading or science) can be considered a measure of the breadth of a country’s/economy’s pool of top performers. By this measure, the largest pool of top performers is found in Shanghai-China, where more than half of all students (56%) perform at the highest levels in at least one domain, followed by Singapore (46%), Hong Kong-China (40%), Korea and Chinese Taipei (39%)…Only one OECD country, Korea, is found among the five countries/economies with the largest proportion of top performers. On average across OECD countries, 20% of students are top performers in at least one assessment domain.
‘The proportion of students performing at the top in problem solving and in either mathematics, reading or science, too, can be considered a measure of the depth of this pool. These are top performers who combine the mastery of a specific domain of knowledge with the ability to apply their unique skills flexibly, in a variety of contexts. By this measure, the deepest pools of top performers can be found in Singapore (25% of students), Korea (21%), Shanghai-China (18%) and Chinese Taipei (17%). On average across OECD countries, only 8% of students are top performers in both a core subject and in problem solving.’
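One way to read these two quoted measures together is to ask what share of each jurisdiction’s ‘breadth’ pool also belongs to its ‘depth’ pool. The sketch below derives that ratio from the Report’s quoted figures; the ratio itself is my own illustrative comparison, not a statistic the Report publishes:

```python
# Breadth: top performer (level 5+) in at least one domain (%).
# Depth: top performer in problem solving plus a core subject (%).
# All figures are as quoted in the Report.
breadth = {"Shanghai": 56, "Singapore": 46, "South Korea": 39,
           "Taiwan": 39, "OECD average": 20}
depth = {"Shanghai": 18, "Singapore": 25, "South Korea": 21,
         "Taiwan": 17, "OECD average": 8}

# Share of each top-performer pool that also excels in problem solving
# (a derived ratio, not the Report's own metric).
for name in breadth:
    print(f"{name}: {depth[name] / breadth[name]:.0%}")
```

On this derived measure Shanghai’s pool is much the shallowest of the four Asian jurisdictions (roughly a third, against more than half in Singapore and South Korea), which is consistent with the Report’s observation that Shanghai’s students underperform in problem solving relative to the other domains.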
There is no explanation of why proficiency level 5 should be equated by PISA with the breadth of a jurisdiction’s ‘pool of top performers’. The distinction between proficiency levels 5 and 6 in this respect requires further discussion.
In addition to updated ‘all-rounder’ data showing what proportion of students achieved this outcome across all four assessments, it would be really interesting to see the proportion of students achieving at proficiency level 6 across different combinations of these four assessments – and to see what proportion of students achieving that outcome in different jurisdictions are direct beneficiaries of targeted support, such as a gifted education programme.
In the light of this analysis, what are jurisdictions’ priorities for improving problem solving performance?
Leaving aside strengths and weaknesses in different elements of problem solving discussed above, this analysis suggests that the eleven jurisdictions in my sample should address the following priorities:
Singapore has a clear lead at proficiency level 6, but falls behind South Korea at level 5 (though Singapore re-establishes its ascendancy when levels 5 and 6 are considered together). It also has more level 1 performers than South Korea. It should perhaps focus on reducing the size of this tail and pushing through more of its mid-range performers to level 5. There is a pronounced imbalance in favour of boys at level 6, so enabling more girls to achieve the highest level of performance is a clear priority. There may also be a case for prioritising the children of semi-skilled workers.
South Korea needs to focus on getting a larger proportion of its level 5 performers to level 6. This effort should be focused disproportionately on girls, who are significantly under-represented at both levels 5 and 6. South Korea has a very small tail to worry about – and may even be getting close to minimising this. It needs to concentrate on improving the problem solving skills of its stronger performers in maths.
Hong Kong has a slightly bigger tail than Singapore’s but is significantly behind at both proficiency levels 5 and 6. In the case of level 6 it is equalled by Canada. Hong Kong needs to focus simultaneously on reducing the tail and lifting performance across the top end, where girls and weaker performers in maths are a clear priority.
Shanghai has a similar profile to Hong Kong’s in all respects, though with somewhat fewer level 6 performers. It also needs to focus effort simultaneously at the top and the bottom of the distribution. Amongst this sample, Shanghai has the worst under-representation of girls at level 5 and levels 5 and 6 together, so addressing that imbalance is an obvious priority. It also demonstrated the largest variation in performance against PISA’s IESC index, which suggests that it should target young people from disadvantaged backgrounds, as well as the children of semi-skilled workers.
Taiwan is rather similar to Hong Kong and Shanghai, but its tail is slightly bigger and its level 6 cadre slightly smaller, while it does somewhat better at level 5. It may need to focus more at the very bottom, but also at the very top. Taiwan also has a problem with high-performing girls, second only to Shanghai as far as level 5 and levels 5 and 6 together are concerned. However, like Shanghai, it does comparatively better than the other ‘Asian Tigers’ in terms of girls at level 6. It also needs to consider the problem solving performance of its weaker performers in maths.
Canada is the closest western competitor to the ‘Asian Tigers’ in terms of the proportions of students at levels 1 and 5 – and it already outscores Shanghai and Taiwan at level 6. It needs to continue cutting down the tail without compromising achievement at the top end. Canada also has small but significant gender imbalances in favour of boys at the top end.
Australia by comparison is significantly worse than Canada at level 1, broadly comparable at level 5 and somewhat worse at level 6. It too needs to improve scores at the very bottom and the very top. Australia’s gender imbalance is more pronounced at level 6 than level 5.
Finland has the same mean score as Australia but a smaller tail (though not quite as small as Canada’s). It needs to improve across the piece but might benefit from concentrating rather more heavily on the top end. Finland has a slight gender imbalance in favour of girls at level 5, but boys are more in the ascendant at level 6 than in either England or the USA. As in Australia, this latter point needs addressing.
England has a profile similar to Australia’s, but is less effective at all three selected proficiency levels. It is further behind at the top than at the bottom of the distribution, and needs to work hard at both ends to catch up with the strongest western performers and maintain its advantage over the USA and Ireland. Gender imbalances are small but nonetheless significant.
The USA has a comparatively long tail of low achievement at proficiency level 1 and, with the exception of Ireland, the fewest high achievers. This profile is very close to the OECD average. As in England, the relatively small size of the gender imbalances in favour of boys does not mean that they can be ignored.
Ireland has the longest tail of low achievement and the smallest proportion of students at proficiency level 5, at level 6, and at levels 5 and 6 combined. It needs to raise performance at both ends of the achievement distribution. Ireland has a larger preponderance of boys at level 6 than its Western competitors and this needs addressing. The limited socio-economic evidence suggests that Ireland should also be targeting the offspring of parents with semi-skilled and elementary occupations.
So there is further scope for improvement in all eleven jurisdictions. Meanwhile the OECD could usefully provide a more in-depth analysis of high achievers on its assessments that features:
Proficiency level 6 performance across the board.
Socio-economic disparities in performance at proficiency levels 5 and 6.
‘All-rounder’ achievement at these levels across all four assessments and
Correlations between success at these levels and specific educational provision for high achievers including gifted education programmes.
This post discusses recent progress by the European Talent Centre towards a European Talent Network.
It is a curtain-raiser for an imminent conference on this topic and poses the critical questions I would like to see addressed at that event.
It should serve as a briefing document for prospective delegates and other interested parties, especially those who want to dig beneath the invariably positive publicity surrounding the initiative.
It continues the narrative strand of posts I have devoted to the Network, concentrating principally on developments since my last contribution in December 2012.
The post is organised part thematically and part chronologically and covers the following ground:
An updated description of the Hungarian model for talent support and its increasingly complex infrastructure.
The origins of the European Talent project and how its scope and objectives have changed since its inception.
The project’s advocacy effort within the European Commission and its impact to date.
Progress on the European Talent Map and promised annual European Talent Days and conferences.
The current scope and effectiveness of the network, its support structures and funding.
Key issues and obstacles that need to be addressed.
To improve readability I have divided the text into two sections of broadly equivalent length. Part One is dedicated largely to bullets one to three above, while Part Two deals with bullets three to six.
Previous posts in this series
If I am to do justice to this complex narrative, I must necessarily draw to some extent on material I have already published in earlier posts. I apologise for the repetition, which I have tried to keep to a minimum.
On re-reading those earlier posts and comparing them with this, it is clear that my overall assessment of the EU talent project has shifted markedly since 2010, becoming progressively more troubled and pessimistic.
This seems to me justified by an objective assessment of progress, based exclusively on evidence in the public domain – evidence that I have tried to draw together in these posts.
However, I feel obliged to disclose the influence of personal frustration at this slow progress, as well as an increasing sense of personal exclusion from proceedings – which seems completely at odds with the networking principles on which the project is founded.
I have done my best to control this subjective influence in the assessment below, confining myself as far as possible to an objective interpretation of the facts.
However I refer you to my earlier posts if you wish to understand how I reached this point.
In April 2011 I attended the inaugural conference in Budapest, publishing a report on the proceedings and an analysis of the Declaration produced, plus an assessment of the Hungarian approach to talent support as it then was and its potential scalability to Europe as a whole.
In December 2012 I described the initial stages of EU lobbying, an ill-fated 2012 conference in Poland, the earliest activities of the European Talent Centre and the evolving relationship between the project and ECHA, the European Council for High Ability.
I will not otherwise comment on my personal involvement, other than to say that I do not expect to attend the upcoming Conference, judging that the benefits of doing so would not outweigh the costs of attending.
This post conveys more thoroughly and more accurately the points I would have wanted to make during the proceedings, were suitable opportunities provided to do so.
A brief demographic aside
It is important to provide some elementary information about Hungary’s demographics, to set in context the discussion below of its talent support model and the prospects for Europe-wide scalability.
Hungary is a medium-sized central European country with an area roughly one-third of the UK’s and broadly similar to South Korea’s or Portugal’s.
It has a population of around 9.88 million (2013), about a sixth of the UK’s and similar in size to Portugal’s or Sweden’s.
Hungary is the 16th most populous European country, accounting for about 1.4% of the total European population and about 2% of the total population of the European Union (EU).
It is divided into 7 regions and 19 counties, plus the capital, Budapest, which has a population of 1.7 million in its own right.
Almost 84% of the population are ethnic Hungarians but there is a Roma minority estimated (some say underestimated) at 3.1% of the population.
GDP per capita (purchasing power parity) is $19,497 (source: IMF), slightly over half the comparable UK figure.
The Hungarian Talent Support Model
The Hungarian model has grown bewilderingly complex and there is an array of material describing it, often in slightly different terms.
Some of the English language material is not well translated and there are gaps that can be filled only with recourse to documents in Hungarian (which I can only access through online translation tools).
Much of this documentation is devoted to publicising the model as an example of best practice, so it can be somewhat economical with the truth.
The basic framework is helpfully illustrated by this diagram, which appeared in a presentation dating from October 2012.
It shows how the overall Hungarian National Talent Programme (NTP) comprises a series of time-limited projects paid for by the EU Social Fund, but also a parallel set of activities supported by a National Talent Fund which is fed mainly by the Hungarian taxpayer.
Secondly, the sections below describe the supporting infrastructure for the NTP as it exists today.
Thirdly, they outline the key features of the time-limited projects: The Hungarian Genius Programme (HGP) (2009-13) and the Talent Bridges Programme (TBP) (2012-14).
Finally, they try to make sense of the incomplete and sometimes conflicting information about the funding allocated to different elements of the NTP.
Throughout this treatment my principal purpose is to show how the European Talent project fits into the overall Hungarian plan, as a precursor to a closer analysis of the former in the second half of the post.
I also want to show how the direction of the NTP has shifted since its inception.
The National Talent Programme (NTP) (2008-2028)
The subsections below describe the NTP as envisaged in the original 2008 Parliamentary Resolution. This remains the most thorough exposition of the broader direction of travel that I could find.
The framework set out in the Resolution is built on ten general principles that I can best summarise as follows:
Talent support covers the period from early childhood to age 35, so extends well beyond compulsory education.
The NTP must preserve the traditions of existing successful talent support initiatives.
Talent is complex and so requires a diversity of provision – standardised support is a false economy.
There must be equality of access to talent support by geographical area, ethnic and socio-economic background.
Continuity is necessary to support individual talents as they change and develop over time; special attention is required at key transition points.
In early childhood one must provide opportunities for talent to emerge, but selection on the basis of commitment and motivation becomes increasingly significant and older participants increasingly self-select.
Differentiated support is needed to support different levels of talent; there must be opportunities to progress and to step off the programme without loss of esteem.
In return for talent support, the talented individual has a social responsibility to support talent development in others.
Those engaged in talent support – here called talent coaches – need time and support.
Wider social support for talent development is essential to success and sustainability.
Hence the Hungarians are focused on a system-wide effort to promote talent development that extends well beyond compulsory education, but only up to the age of 35. As noted above, if 0-4 year-olds are excluded, this represents an eligible population of about 3.5 million people.
The choice of this age 35 cut-off seems rather arbitrary. Having decided to push beyond compulsory education into adult provision, it is not clear why the principle of lifelong learning is then set aside – or exactly what happens when participants reach their 36th birthdays.
Otherwise the principles above seem laudable and broadly reflect one tradition of effective practice in the field.
The NTP’s goals are illustrated by this diagram.
The elements in the lower half of the diagram can be expanded thus:
Talent support traditions: support for existing provision; development of new provision to fill gaps; minimum standards and professional development for providers; applying models of best practice; co-operation with ethnic Hungarian programmes outside Hungary (‘cross border programmes’); and ‘systematic exploration and processing of the talent support experiences’ of EU and other countries which excel in this field.
Integrated programmes: compiling and updating a map of the talent support opportunities available in Hungary as well as ‘cross border programmes’; action to support access to the talent map; a ‘detailed survey of the international talent support practice’; networking between providers with cooperation and collaboration managed through a set of talent support councils; monitoring of engagement to secure continuity and minimise drop-out.
Social responsibility: promoting the self-organisation of talented youth; developing their innovation and management skills; securing counselling; piloting a ‘Talent Bonus – Talent Coin’ scheme to record in virtual units the monetary value of support received and provided, leading to consideration of a LETS-type scheme; support for ‘exceptionally talented youth’; improved social integration of talented youth and development of a talent-friendly society.
Equal opportunities: providing targeted information about talent support opportunities; targeted programming for disadvantaged, Roma and disabled people and wider emphasis on integration; supporting the development of Roma talent coaches; and action to secure ‘the desirable gender distribution’.
Enhanced recognition: improving financial support for talent coaches; reducing workload and providing counselling for coaches; improving recognition and celebrating the success of coaches and others engaged in talent support.
Talent-friendly society: awareness-raising activity for parents, family and friends of talented youth; periodic talent days to mobilise support and ‘promote the local utilisation of talent’; promoting talent in the media, as well as international communication about the programme and ‘introduction in both the EU and other countries by exploiting the opportunities provided by Hungary’s EU Presidency in 2011’; ‘preparation for the foreign adaptation of the successful talent support initiatives’ and organisation of EU talent days.
Hence the goals incorporate a process of learning from European and other international experience, but also one of feeding back to the international community information about the Hungarian talent support effort and extending the model into other European countries.
There is an obvious tension in these goals between preserving the traditions of existing successful initiatives and imposing a framework with minimum standards and built-in quality criteria. This applies equally to the European project discussed below.
The reference to a LETS-type scheme is intriguing but I could trace nothing about its subsequent development.
In 2008 the infrastructure proposed to undertake the NTP comprised:
A National Talent Co-ordination Board, chaired at Ministerial level, to oversee the programme and to allocate a National Talent Fund (see below).
A National Talent Support Circle [I’m not sure whether this should be ‘Council’] consisting of individuals from Hungary and abroad who would promote talent support through professional opportunities, financial contribution or ‘social capital opportunities’.
A National Talent Fund comprising a Government contribution and voluntary contributions from elsewhere. The former would include the proceeds of a 1% voluntary income tax levy (being one of the good causes towards which Hungarian taxpayers could direct this contribution). Additional financial support would come from ‘the talent support-related programmes of the New Hungary Development Plan’.
A system of Talent Support Councils to co-ordinate activity at regional and local level.
A national network of Talent Points – providers of talent support activity.
A biennial review of the programme presented to Parliament, the first being in 2011.
Presumably there have been two of these biennial reviews to date. They would make interesting reading, but I could find no material in English that describes the outcomes.
The NTP Infrastructure Today
The supporting infrastructure as described today has grown considerably more complex and bureaucratic than the basic model above.
The National Talent Co-ordination Board continues to oversee the programme as a whole. Its membership is set out here.
The National Talent Support Council was established in 2006 and devised the NTP as set out above. Its functions are more substantial than originally described (assuming this is the ‘Circle’ mentioned in the Resolution), although it now seems to be devolving some of these. Until recently at least, the Council: oversaw the national database of talent support initiatives and monitored coverage, matching demand – via an electronic mailing list – with the supply of opportunities; initiated and promoted regional talent days; supported the network of talent points and promoted the development of new ones; invited tenders for niche programmes of various kinds; collected and analysed evidence of best practice and the research literature; and promoted international links paying ‘special attention to the reinforcement of the EU contacts’. The Council has a Chair and six Vice Presidents as well as a Secretary and Secretariat. It operates nine committees: Higher Education, Support for Socially Disadvantaged Gifted People, Innovations, Public Education, Foreign Relations, Public and Media Relations, Theory of Giftedness, Training and Education, and Giftedness Network.
The National Talent Point has only recently been identified as an entity in its own right, distinct from the National Council. Its role is to maintain the Talent Map and manage the underpinning database. Essentially it seems to have acquired the Council’s responsibilities for delivery, leaving the Council to concentrate on policy. It recently acquired a new website.
The Association of Hungarian Talent Support Organizations (MATEHETZ) is also a new addition. Described as ‘a non-profit umbrella organization that legally represents its members and the National Talent Support Council’, it is funded by the National Council and through membership fees. The Articles of Association date from February 2010 and list 10 founding organisations. The Association provides ‘representation’ for the National Council (which I take to mean the membership). It manages the time-limited programmes (see below) as well as the National Talent Point and the European Talent Centre.
Talent Support Councils: Different numbers of these are reported. One source says 76; another 65, of which some 25% were newly-established through the programme. Their role seems broadly unchanged, involving local and regional co-ordination, support for professionals, assistance to develop new activities, helping match supply with demand and supporting the tracking of those with talent.
Talent Point Network: there were over 1,000 talent points by the end of 2013. (Assuming 3.5 million potential participants, that is a talent point for every 3,500 people.) Talent points are providers of talent support services – whether identification, provision or counselling. They are operated by education providers, the church and a range of other organisations and may have a local, regional or national reach. They join the network voluntarily but are accredited. In 2011 there were reportedly 400 talent points and 200 related initiatives, so there has been strong growth over the past two years.
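The implied coverage ratio and growth rate can be sanity-checked with a line or two of arithmetic (a rough sketch; the population estimate and talent point counts are the approximate figures quoted above):

```python
# Back-of-envelope check on Talent Point coverage and growth.
# All figures are the approximate ones reported in the text.

eligible_population = 3_500_000  # estimated 0-35 cohort, excluding 0-4 year-olds
talent_points_2011 = 400
talent_points_2013 = 1_000       # "over 1,000" by the end of 2013

people_per_point = eligible_population / talent_points_2013
growth_factor = talent_points_2013 / talent_points_2011

print(f"One talent point per {people_per_point:,.0f} eligible people")  # 3,500
print(f"The network grew {growth_factor:.1f}x between 2011 and 2013")   # 2.5x
```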
Ambassadors of Talent: Another new addition, introduced by the National Talent Support Council in 2011. There is a separate Ambassador Electing Council which appoints three new ambassadors per year. The current list has thirteen entries and is markedly eclectic.
Friends of Talent Club: described in 2011 as ‘a voluntary organisation that holds together those, who are able and willing to support talents voluntarily and serve the issue of talent support…Among them, there are mentors, counsellors and educators, who voluntarily help talented people develop in their professional life. The members of the club can be patrons and/or supporters. “Patrons” are those, who voluntarily support talents with a considerable amount of service. “Supporters” are those, who voluntarily support the movement of talent support with a lesser amount of voluntary work, by mobilizing their contacts or in any other way.’ This sounds similar to the originally envisioned ‘National Talent Support Circle’ [sic]. I could find little more about the activities of this branch of the structure.
The European Talent Centre: The National Talent Point says that this:
‘…supports and coordinates European actions in the field of talent support in order to find gifted people and develop their talent in the interest of Europe as a whole and the member states.’
Altogether this is a substantial endeavour, requiring large numbers of staff and volunteers and demanding a significant budgetary top-slice.
I could find no reliable estimate of the ratio of the running cost to the direct investment in talent support, but there must be cause to question the overall efficiency of the system.
My hunch is that this level of bureaucracy must consume a significant proportion of the overall budget.
Clearly the Hungarian talent support network is a long, long way from being financially self-sustaining, if indeed it ever could be.
Hungarian Parliament Building
The Hungarian Genius Programme (HGP) (2009-13)
Launched in June 2009, the HGP had two principal phases lasting from 2009 to 2011 and from 2011 to 2013. The fundamental purpose was to establish the framework and infrastructure set out in the National Talent Plan.
This English language brochure was published in 2011. It explains that the initial focus is on adults who support talents, establishing a professional network and training experts, as well as creating the network and map of providers.
It mentions that training courses lasting 10 to 30 hours have been developed and accredited in over 80 subjects to:
‘…bring concepts and methods of gifted and talented education into the mainstream and reinforce the professional talent support work… These involve the exchange of experience and knowledge expansion training, as well as programs for those who deal with talented people in developing communities, and awareness-raising courses aimed at the families and environment of young pupils, on the educational, emotional and social needs of children showing special interest and aptitude in one or more subject(s). The aims of the courses are not only the exchange of information but to produce and develop the professional methodology required for teaching talents.’
The brochure also describes an extensive talent survey undertaken in 2010, the publication of several good practice studies and the development of a Talent Loan modelled on the Hungarian student loan scheme.
It lists a seven-strong strategic management group including an expert adviser, project manager, programme co-ordinator and a finance manager. There are also five operational teams, each led by a named manager, one of which focused on ‘international relations: collecting and disseminating international best practices; international networking’.
The Talent Map was drawn and the Talent Network created (including 867 talent points and 76 talent councils).
23,500 young people took part in ‘subsidised talent support programmes’.
118 new ‘local educational talent programmes’ were established.
25 professional development publications were written and made freely available.
13,987 teachers (about 10% of the total in Hungary) took part in professional development.
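The final figure supports a quick inference: if 13,987 trained teachers represent about 10% of the profession, the implied total workforce is roughly 140,000. A back-of-envelope sketch using only the numbers quoted above:

```python
# Implied size of the Hungarian teaching workforce, inferred from the
# reported figures: 13,987 trained teachers, said to be "about 10%" of the total.
trained_teachers = 13_987
reported_share = 0.10

implied_workforce = trained_teachers / reported_share
print(f"~{implied_workforce:,.0f} teachers in total")  # ~139,870
```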
Evidence in English of rigorous independent evaluation is, however, limited:
‘The efficiency of the Programme has been confirmed by public opinion polls (increased social acceptance of talent support) and impact assessments (training events: expansion of specialised knowledge and of the methodological tool kit).’
The Talent Bridges Project (TBP) (2012-2014)
TBP began in November 2012 and is scheduled to last until ‘mid-2014’.
The TBP, which initially ran in parallel with the HGP, is mentioned in the 2011 brochure referenced above:
‘In the strategic plan of the Talent Bridges Program to begin in 2012, we have identified three key areas for action: bridging the gaps in the Talent Point network, encouraging talents in taking part in social responsibility issues and increasing media reach. In order to become sustainable, much attention should be payed [sic] to maintaining and expanding the support structure of this system, but the focus will significantly shift towards direct talent care work with the youth.’
Later on it says:
‘Within the framework of the Talent Bridges Program the main objectives are: to further improve the contact system between the different levels of talent support organisations; to develop talent peer communities based on the initiatives coming from young people themselves; to engage talents in taking an active role in social responsibility; to increase media reach in order to enhance the recognition and social support for both high achievers and talent support; and last, but not least, to arrange the preliminary steps of setting up an EU Institute of Talent Support in Budapest.’
A list of objectives published subsequently contains the following items:
Creating a national talent registration and tracking system
Developing programmes for 3,000 talented young people from disadvantaged backgrounds and with special educational needs
Supporting the development of ‘outstanding talents’ in 500 young people
Supporting 500 enrichment programmes
Supporting ‘the peer age groups of talented young people’
Introducing programmes to strengthen interaction between parents, teachers and talented youth benefiting 5,000 young people
Introducing ‘a Talent Marketplace’ to support ‘the direct social utilisation of talent’ involving ‘150 controlled co-operations’
Engaging 2,000 mentors in supporting talented young people and training 5,000 talent support facilitators and mentors
Launching a communication campaign to reach 100,000 young people and
‘Realise European-Union-wide communication (in addition to the current 10, to involve 10 more EU Member States into the Hungarian initiatives, in co-operation with the European Talent Centre in Budapest established in the summer of 2012).’
However, what appears to be the bid for TBP (in Hungarian) calls the final sub-project ‘an EU Communications Programme’ (p29), which appears to involve:
Raising international awareness of Hungary’s talent support activities
Strengthening Hungary’s position in the EU talent network
Providing a foreign exchange experience for talented young Hungarians
Influencing policy makers.
Later on (p52) this document refers to an international campaign, undertaken with support from the European Talent Centre, targeting international organisations and the majority of EU states.
Work to be covered includes the preparation of promotional publications in foreign languages, the operation of a ‘multilingual online platform’, participation in international conferences (such as those of ECHA, the World Council, IRATDE and ICIE); and ‘establishing new professional collaborations with at least 10 new EU countries or international organisations’.
It is not a straightforward matter to reconcile the diverse and sometimes conflicting sources of information about the budgets allocated to the National Talent Fund, HGP and the TBP, but this is my best effort, with all figures converted into pounds sterling.
Several sources say that the Talent Fund is set to increase in size over the period.
‘This fund has an annual 5 million EUR support from the national budget and an additional amount from tax donations of the citizens of a total sum of 1.5 million EUR in the first year doubled to 3 million EUR and 6 million EUR in the second and third years respectively.’ (Csermely 2012)
That would translate into a budget of £5.4m/£6.7m/£9.2m over the three years in question, but it is not quite clear which three years are included.
Even if we assume that the NTF budget remains the same in 2013 and 2014 as in 2012, the total investment over the period 2009-2014 amounts to approximately £60m.
That works out at about £17 per eligible Hungarian. Unfortunately I could find no reliable estimate of the total number of Hungarians that have benefited directly from the initiative to date.
On the basis of the figures I have seen, my guesstimate is that the total will be below 10% of the total eligible population – so under 350,000. But I must stress that there is no evidence to support this.
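The budget arithmetic above can be reproduced in a few lines. The EUR-to-GBP rate of 0.835 is my assumption, chosen because it reproduces the sterling figures quoted; actual exchange rates fluctuated over the period:

```python
# Rough reconstruction of the National Talent Fund budget figures.
# GBP_PER_EUR is an assumed conversion rate that reproduces the
# £5.4m / £6.7m / £9.2m figures quoted in the text.

GBP_PER_EUR = 0.835

state_contribution_eur = 5_000_000                     # annual, from the national budget
tax_donations_eur = [1_500_000, 3_000_000, 6_000_000]  # years 1-3 (Csermely 2012)

annual_gbp = [(state_contribution_eur + d) * GBP_PER_EUR for d in tax_donations_eur]
print([f"£{t / 1e6:.1f}m" for t in annual_gbp])  # ['£5.4m', '£6.7m', '£9.2m']

# Per-capita figure for the estimated 2009-2014 total investment
total_gbp = 60_000_000
eligible_population = 3_500_000
print(f"£{total_gbp / eligible_population:.0f} per eligible Hungarian")  # £17
```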
Whether the intention is to reach 100% of the population, or whether there is an in-built assumption that only a proportion is amenable to talent development, is a moot point. I found occasional references to a 25% assumption, but it was never clear whether this was official policy.
Even if this applies, there is clearly a significant scalability challenge even within Hungary’s national programme.
It is also evident that the Hungarians have received some £18m from the European Social Fund over the past five years and have invested at least twice as much of their own money. That is a very significant budget indeed for a country of this size.
Hungary’s reliance on EU funding is so heavy that it will find it very difficult to sustain the current effort if that largesse disappears.
One imagines that they will be seeking continued support from EU sources over the period 2014-2020. But, equally, one would expect the EU to demand robust evidence that continued heavy dependency on EU funding will not be required.
And of course a budget of this size also raises questions about scalability to Europe, in the conspicuous absence of any commensurate figure. There is zero prospect of equivalent funding being available to extend the model across the continent. The total bill would run into billions of pounds!
A ‘Hungarian-lite’ model would not be as expensive, but it would require a considerable budget.
However, it is clear from the table that the present level of expenditure on the European network has been tiny by comparison with the domestic investment – probably not much more than £100,000 per year.
Initially this came from the National Talent Fund budget but it seems as though the bulk is now provided through the ESF, until mid-2014 at least.
This shift seems to have removed the necessity for the European Talent Centre to receive its funding in biannual tranches through a perpetual retendering process, since the sums expended from the NTF budget are apparently tied to periods of six months or less.
The European Talent Centre website currently bears the legend:
‘Operation of the European Talent Centre – Budapest between 15th December 2012 and 30th June 2013 is realised with the support of Grant Scheme No. NTP-EUT-M-12 announced by the Institute for Educational Research and Development and the Human Resources Support Manager on commission of the Ministry of Human Resources “To support international experience exchange serving the objectives of the National Talent Programme, and to promote the operation and strategic further development of the European Talent Centre – Budapest”.’
But when I wrote my 2012 review it said:
‘The operation of the European Talent Centre — Budapest is supported from 1 July 2012 through 30 November 2012 by the grant of the National Talent Fund. The grant is realised under Grant Scheme No. NTP-EU-M-12 announced by the Hungarian Institute for Educational Research and Development and the Sándor Wekerle Fund Manager of the Ministry of Administration and Justice on commission of the Ministry of Human Resources, from the Training Fund Segment of the Labour Market Fund.’
A press release confirmed the funding for this period as HUF 30m.
Presumably it will now need to be amended to reflect the arrival of £21.3K under Grant Scheme No. NTP-EU-M-13 – and possibly to reflect income from the ESF-supported TBP too.
Danube Bend at Visegrad courtesy of Phillipp Weigell
Origins of the European Talent Project: Evolution to December 2012
Hungary identified talent support as a focus during its EU Presidency, in the first half of 2011, citing four objectives:
A talent support conference scheduled for April 2011
A first European Talent Day to coincide with the conference, initially ‘a Hungarian state initiative…expanding it into a public initiative by 2014’.
Talent support to feature in EU strategies and documents, as well as a Non-Legislative Act (NLA). It is not specified whether this should be a regulation, decision, recommendation or opinion. (Under EU law the latter two categories have no binding force.)
An OMC expert group on talent support – ie an international group run under the aegis of the Commission.
‘Call the European Commission and the European Parliament to make every effort to officially declare the 25th of March the European Day of the Talented and Gifted.’
‘Stress the importance of…benefits and best practices appearing in documents of the European Commission, the European Council and the European Parliament.’
‘Propose to establish a European Talent Resource and Support Centre in Budapest’ to ‘coordinate joint European actions in the field’.
‘Agree to invite stakeholders from every country of the European Union to convene annually to discuss the developments and current questions in talent support. Upon the invitation of the Government of Poland the next conference will take place in Warsaw.’
The possibility of siting a European Centre anywhere other than Budapest was not seriously debated.
Evolution of a Written Declaration to the EU
Following the Conference an outline Draft Resolution of the European Parliament was circulated for comment.
This proposed that:
‘A Europe-wide talent support network should be formed and supported with an on-line and physical presence to support information-sharing, partnership and collaborations. This network should be open for co-operation with all European talent support efforts, use the expertise and networking experiences of existing multinational bodies such as the European Council of High Ability and support both national and multinational efforts to help talents not duplicating existing efforts but providing an added European value.’
Moreover, ‘A European Talent Support Centre should be established…in Budapest’. This:
‘…should have an Advisory Board having the representatives of interested EU member states, all-European talent support-related institutions as well as key figures of European talent support.’
The Centre’s functions are five-fold:
‘Using the minimum bureaucracy and maximising its use of online solutions the European Talent Support Centre should:
facilitate the development and dissemination of best curricular and extra-curricular talent support practices;
coordinate the trans-national cooperation of Talent Points forming an EU Talent Point network;
help the spread of the know-how of successful organization of Talent Days;
organize annual EU talent support conferences in different EU member states overseeing the progress of cooperation in European talent support;
provide a continuously updated easy Internet access for all the above information.’
Note the references on the one hand to an inclusive approach, a substantial advisory group (though without the status of an EU-hosted OMC expert group) and a facilitating/co-ordinating role, but also – on the other hand – the direct organisation of annual EU-wide conferences and provision of a sophisticated supporting online environment.
MEPs were lined up to submit the Resolution in Autumn 2011 but, for whatever reason, this did not happen.
Instead a new draft Written Declaration was circulated in January 2012. This called on:
‘Member States to consider measures helping curricular and extracurricular forms of talent support including the training of educational professionals to recognize and help talent;
The Commission to consider talent support as a priority of future European strategies, such as the European Research Area and the European Social Fund;
Member States and the Commission to support the development of a Europe-wide talent support network, formed by talent support communities, Talent Points and European Talent Centres facilitating cooperation, development and dissemination of best talent support practices;
Member States and the Commission to celebrate the European Day of the Talented and Gifted.’
The focus has shifted from the Budapest-centric network to EU-led activity amongst member states collectively. Indeed, no specific role for Hungary is mentioned.
There is a new emphasis on professional development and – critically – a reference to ‘European talent centres’. All mention of NLAs and OMC expert groups has disappeared.
There are some subtle adjustments in the final version of WD 0034/2012. The second bullet point has become:
‘The Commission to consider talent support as part of ‘non-formal learning’ and a priority in future European strategies, such as the strategies guiding the European Research Area and the European Social Fund’.
While the third now says:
‘Member States and the Commission to support the development of a Europe-wide talent support network bringing together talent support communities, Talent Points and European Talent Centres in order to facilitate cooperation and the development and dissemination of the best talent support practices.’
And the fourth is revised to:
‘Member States and the Commission to celebrate the European Day of Highly Able People.’
The introduction of a phrase that distinguishes between education and talent support is curious.
CEDEFOP – which operates a European Inventory on Validation of Non-formal and Informal Learning – defines the latter as:
‘…learning resulting from daily work-related, family or leisure activities. It is not organised or structured (in terms of objectives, time or learning support). Informal learning is in most cases unintentional from the learner’s perspective. It typically does not lead to certification.’
One assumes that a distinction is being attempted between learning organised by a school or other formal education setting and that which takes place elsewhere – presumably because EU member states are so fiercely protective of their independence when it comes to compulsory education.
But surely talent support encompasses formal and informal learning alike?
Moreover, the adoption of this terminology appears to rule out any provision that is ‘organised or structured’, excluding huge swathes of activity (including much of that featured in the Hungarian programme). Surely this cannot have been intentional.
Such a distinction is increasingly anachronistic, especially in the case of gifted learners, who might be expected to access their learning from a far richer blend of sources than simply in-school classroom teaching.
Their schools are no longer the sole providers of gifted education, but facilitators and co-ordinators of diverse learning streams.
The ‘gifted and talented’ terminology has also disappeared, presumably on the grounds that it would risk frightening the EU horses.
Both of these adjustments seem to have been a temporary aberration. One wonders who exactly they were designed to accommodate and whether they were really necessary.
Establishment and early activity of the EU Talent Centre in Budapest
The Budapest centre was initially scheduled to launch in February 2012, but funding issues delayed this, first until May and then the end of June.
Its stated goal is:
‘…to contribute on the basis of the success of the Hungarian co-operation model to organising the European talent support actors into an open and flexible network overarching the countries of Europe.’
Its mission is to:
‘…offer the organisations and individuals active in an isolated, latent form or in a minor network a framework structure and an opportunity to work together to achieve the following:
to provide talent support an emphasis commensurate with its importance in every European country
to reduce talent loss to the minimum in Europe,
to give talent support a priority role in the transformation of the sector of education; to provide talented young persons access to the most adequate forms of education in every Member State,
to make Europe attractive for the talented youth,
to create talent-friendly societies in every European country.’
The text continues:
‘It is particularly important that network hubs setting targets similar to those of the European Talent Centre in Budapest should proliferate in the longer term.
The first six months represent the first phase of the work: we shall lay the bases [sic] for establishing the European Talent Support Network. The expected key result is to set up a team of voluntary experts from all over Europe who will contribute to that work and help draw the European talent map.’
But what exactly are these so-called network hubs? We had to wait some time for an explanation.
There was relatively little material on the website at this stage and this was also slow to change.
My December 2012 post summarised progress thus:
‘The Talent Map includes only a handful of links, none in the UK.
The page of useful links is extensive but basically just a very long list, hard to navigate and not very user-friendly. Conversely, ‘best practices’ contains only three resources, all of them produced in house.
The whole design is rather complex and cluttered, several of the pages are too text-heavy and occasionally the English leaves something to be desired.’
Here ends the first part of this post. Part Two explains the subsequent development of the ‘network hubs’ concept, charts the continuation of the advocacy effort and reviews progress in delivering the services for which the Budapest Centre is responsible.
It concludes with an overall assessment of the initiative highlighting some of its key weaknesses.
Across the Blogosphere and five of the most influential English-language social media platforms – Facebook, Google+, LinkedIn, Twitter and YouTube – and
Utilising four content curation tools particularly favoured by gifted educators, namely Paper.li, Pinterest, Scoop.it and Storify.
Considers the gap between current practice and the proposed quality criteria – and whether there has been an improvement in the application of social media across the five dimensions of gifted education identified in my previous post.
I should declare at the outset that I am a Trustee of Potential Plus UK and have been working with them to improve their online and social media presence. This post lies outside that project, but some of the underlying research is the same.
I have been this way before
This is my second excursion into this territory.
In September 2012 I published a two-part response to the question ‘Can Social Media Help Overcome the Problems We Face in Gifted Education?’
Part One outlined an analytical framework based on five dimensions of gifted education. Each dimension is stereotypically associated with a particular stakeholder group though, in reality, each group operates across more than one area. The dimensions (with their associated stakeholder groups in brackets) are: advocacy (parents); learning (learners); policy-making (policy makers); professional development (educators); and research (academics).
Part Two used this framework to review the challenges faced by gifted education, to what extent these were being addressed through social media and how social media could be applied more effectively to tackle them. It also outlined the limitations of a social media-driven approach and highlighted some barriers to progress.
The conclusions I reached might be summarised as follows:
Many of the problems associated with gifted education are longstanding and significant, but not insurmountable. Social media will not eradicate these problems but can make a valuable contribution towards that end by virtue of their unrivalled capacity to ‘only connect’.
Gifted education needs to adapt if it is to thrive in a globalised environment with an increasingly significant online dimension driven by a proliferation of social media. The transition from early adoption to mainstream practice has not yet been effected, but rapid acceleration is necessary otherwise gifted education will be left behind.
Gifted education is potentially well-placed to pioneer new developments in social media but there is limited awareness of this opportunity, or the benefits it could bring.
The post was intended to inform discussion at a Symposium at the ECHA Conference in Münster, Germany, in September 2012. I published the participants’ presentations and a report on proceedings (which is embedded within a review of the Conference as a whole).
I have not previously attempted to pin down what constitutes a high quality website or blog and effective social media usage, not least because so many have gone before me.
But, on reviewing their efforts, I could find none that embodied every dimension I considered important, while several appeared unduly restrictive.
It seems virtually impossible to reconcile these two conflicting pressures: defining quality with brevity but without compromising flexibility. Any effort to pin down quality risks reductionism, while also fettering innovation and wilfully obstructing the pioneering spirit.
I am a strong advocate of quality standards in gifted education but, in this context, it seemed beyond my capacity to find or generate the ideal ‘flexible framework’, offering clear guidance without compromising innovation and capacity to respond to widely varying needs and circumstances.
But the project for Potential Plus UK required us to consult stakeholders on their understanding of quality provision, so that we could reconcile any difference between their perceptions and our own.
And, in order to consult effectively, we needed to make a decent stab at the task ourselves.
So I prepared some draft success criteria, drawing on previous efforts I could find online as well as my own experience over the last four years.
I have reproduced the draft criteria below, with slight amendment to make them more universally applicable. The first set – for a blog or website – are generic, while those relating to wider online and social media presence are made specific to gifted education.
Draft Quality Criteria for a Blog or Website
1. The site is inviting to regular and new readers alike; its purpose is up front and explicit; as much content as possible is accessible to all.
2. Readers are encouraged to interact with the content through a variety of routes – and to contribute their own (moderated) content.
3. The structure is logical and as simple as possible, supported by clear signposting and search.
4. The design is contemporary, visually attractive but not obtrusive, incorporating consistent branding and a complementary colour scheme. There is no external advertising.
5. The layout makes generous and judicious use of space and images – and employs other media where appropriate.
6. Text is presented in small blocks and large fonts to ensure readability on both tablet and PC.
7. Content is substantial, diverse and includes material relevant to all the site’s key audiences.
8. New content is added weekly; older material is frequently archived (but remains accessible).
9. The site links consistently to – and is linked to consistently by – all other online and social media outlets maintained by the authors.
10. Readers can access site content by multiple routes, including other social media, RSS and email.
Draft quality criteria for wider online/social media activity
1. A body’s online and social media presence should be integral to its wider communications strategy which should, in turn, support its purpose, objectives and priorities.
2. It should:
a. Support existing users – whether they are learners, parents/carers, educators, policy-makers or academics – and help to attract new users;
b. Raise the entity’s profile and build its reputation – both nationally and internationally – as a first-rate provider in one or more of the five areas of gifted education;
c. Raise the profile of gifted education as an issue and support campaigning for stronger provision;
d. Help to generate income to support the pursuit of these objectives and the body’s continued existence.
3. It should aim to:
a. Provide a consistently higher quality and more compelling service than its main competitors, generating maximum benefit for minimum cost.
b. Use social media to strengthen interaction with and between users and provide more effective ‘bottom-up’ collaborative support.
c. Balance diversity and reach against manageability and effectiveness, prioritising media favoured by users but resisting pressure to diversify without justification and resource.
d. Keep the body’s online presence coherent and uncomplicated, with clear and consistent signposting so users can navigate quickly and easily between different online locations.
e. Integrate all elements of the body’s online presence, ensuring they are mutually supportive.
4. It should monitor carefully the preferences of users, as well as the development of online and social media services, adjusting the approach only when there is a proven business case for doing so.
Perth Pelicans by Gifted Phoenix
Applying the Criteria
These draft criteria reflect the compromise I outlined above. They are not the final word. I hope that you will help us to refine them as part of the consultation process now underway and I cannot emphasise too much that they are intended as guidelines, to be applied with some discretion.
I continue to maintain my inalienable right – as well as yours – to break any rules imposed by self-appointed arbiters of quality.
To give an example, readers will know that I am particularly exercised by any suggestion that good blog posts are, by definition, brief!
I also maintain your inalienable right to impose your own personal tastes and preferences alongside (or in place of) these criteria. But you might prefer to do so having reflected on the criteria – and having dismissed them for logical reasons.
There are also some fairly obvious limitations to these criteria.
For example, bloggers like me who use hosted platforms are constrained to some extent by the restrictions imposed by the host, as well as by our preparedness to pay for premium features.
Moreover, the elements of effective online and social media practice have been developed with a not-for-profit charity in mind and some in particular may not apply – or may not apply so rigorously – to other kinds of organisations, or to individuals engaged in similar activity.
In short, these are not templates to be followed slavishly, but rather a basis for reviewing existing provision and prompting discussion about how it might be further improved.
It would be forward of me to attempt a rigorous scrutiny against each of the criteria of the six key players mentioned above, or of any of the host of smaller players, including the 36 active gifted education blogs now listed on my blogroll.
I will confine myself instead to reporting factually all that I can find in the public domain about the activity of the six bodies, comparing and contrasting their approaches with broad reference to the criteria and arriving at an overall impressionistic judgement.
As for the blogs, I will be even more tactful, pointing out that my own quick and dirty self-review of this one – allocating a score out of ten for each of the ten items in the first set of criteria – generated a not very impressive 62%.
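For anyone minded to replicate this quick and dirty exercise, the arithmetic is trivial. A sketch in Python, using hypothetical scores (not my actual marks) that happen to reproduce the same 62% total:

```python
# Self-review scoring: one mark out of 10 per criterion, expressed as a percentage.
# The scores below are hypothetical placeholders, not my actual self-assessment.
criteria_scores = {
    "inviting_and_accessible": 7,
    "reader_interaction": 5,
    "logical_structure": 7,
    "contemporary_design": 6,
    "use_of_space_and_images": 6,
    "readable_text": 6,
    "substantial_content": 8,
    "fresh_content": 6,
    "consistent_linking": 5,
    "multiple_access_routes": 6,
}

total = sum(criteria_scores.values())
percentage = 100 * total / (10 * len(criteria_scores))
print(f"{total}/100 = {percentage:.0f}%")  # → 62/100 = 62%
```

A spreadsheet would serve just as well; the point is only that a crude percentage makes it easy to track whether successive revisions of a blog are moving in the right direction.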
Of course I am biased. I still think my blog is better than yours, but now I have some useful pointers to how I might make it even better!
Comparing six major players
I wanted to compare the social media profile of the most prominent international organisations, the most active national organisations based in the US (which remains the dominant country in gifted education and in supporting gifted education online) and the two major national organisations in the UK.
I could have widened my reach to include many similar organisations around the world, but that would have made this post even less accessible. It also struck me that I could evidence my key messages by analysing this small sample alone – and that my conclusions would be equally applicable to others in the field, wherever they are located geographically.
My analysis focuses on these organisations’:
Principal websites, including any information they contain about their wider online and social media activity;
Profile across the five selected social media platforms and use of blogs plus the four featured curational tools.
I have confined myself to universally accessible material, since several of these organisations have additional material available only to their memberships.
I have included only what I understand to be official channels, tied explicitly to the main organisation. I have included accounts that are linked to franchised operations – typically conferences – but have excluded personal accounts that belong to individual employees or trustees of the organisations in question.
Table 1 below shows which of the six organisations are using which social media. The table includes hyperlinks to the principal accounts and I have also repeated these in the commentary that follows.
Table 1: The social media used by the sample of six organisations
The table gives no information about the level or quality of activity on each account – that will be addressed in the commentary below – but it gives a broadly reliable indication of which organisations are comparatively active in social media and which are less so.
The analysis shows that Facebook and Twitter are somewhat more popular platforms than Google+, LinkedIn and YouTube, while Pinterest leads the way amongst the curational tools. This distribution of activity is broadly representative of the wider gifted education community.
The next section takes a closer look at this wider activity on each of the ten platforms and tools.
Comparing gifted-related activity on the ten selected platforms and tools
As far as I can establish, none of the six organisations currently maintains a blog. SENG does have what it describes as a Library of Articles, which is a blog to all intents and purposes – and Potential Plus UK is currently planning a blog.
Earlier this year I noticed that my blogroll was extremely out of date and that several of the blogs it contained were no longer active. I reviewed all the blogs I could find in the field and sought recommendations from others.
I imposed a rule to distinguish live blogs from those that are dead or dormant – they had to have published three or more relevant posts in the previous six months.
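That liveness rule is mechanical enough to sketch in code. A minimal illustration, using invented blogs and dates rather than my real blogroll data:

```python
from datetime import date, timedelta

def is_live(post_dates, today, min_posts=3, window_days=183):
    """Liveness rule: at least `min_posts` relevant posts in the last six months."""
    cutoff = today - timedelta(days=window_days)
    recent = [d for d in post_dates if d >= cutoff]
    return len(recent) >= min_posts

today = date(2014, 3, 1)  # hypothetical review date
# Invented blogs, each with the dates of its most recent posts
blogs = {
    "active_blog": [date(2014, 2, 20), date(2014, 1, 5), date(2013, 11, 30)],
    "dormant_blog": [date(2013, 6, 1), date(2012, 12, 25)],
}
blogroll = [name for name, dates in blogs.items() if is_live(dates, today)]
print(blogroll)  # → ['active_blog']
```

The second, more subjective sift (relevance beyond the author's immediate circle) resists automation of this kind, which is why it had to remain a judgement call.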
I also applied a slightly more subjective rule, in an effort to sift out those that had little relevance to anyone beyond the author (being cathartic diaries of sorts) and those that are entirely devoted to servicing a small local advocacy group.
I ended up with a long shortlist of 36 blogs, which now constitutes the revised blogroll in the right hand column. Most are written in English but I have also included a couple of particularly active blogs in other languages.
The overall number of active blogs is broadly comparable with what I remember in 2010 when I first began, but the number of posts has probably fallen.
I don’t know to what extent this reflects changes in the overall number of active blogs and posts, either generically or in the field of education. In England there has been a marked renaissance in edublogging over the last twelve months, yet only three bloggers venture regularly into the territory of gifted education.
Alongside Twitter, Facebook has the most active gifted education community.
There are dozens of Facebook Groups focused on giftedness and high ability. At the time of writing, the largest and most active are:
There is a Gifted Phoenix page, which is rigged up to my Twitter account so all my tweets are relayed there. Only those with a relevant hashtag – #gtchat or #gtvoice – will be relevant to gifted education.
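The sift that implies – keeping only the relayed tweets that carry a relevant hashtag – can be sketched as a simple filter. The tweets below are invented examples, not real feed content:

```python
# Keep only tweets tagged with a gifted education hashtag.
RELEVANT_TAGS = {"#gtchat", "#gtvoice"}

def is_gifted_related(tweet: str) -> bool:
    # Split into words, strip trailing punctuation, compare case-insensitively.
    words = {w.strip(".,!?").lower() for w in tweet.split()}
    return bool(words & RELEVANT_TAGS)

tweets = [
    "New post on high attainers and the pupil premium #gtchat",
    "England's PISA results, part two",
    "Responses invited to the consultation #gtvoice",
]
relevant = [t for t in tweets if is_gifted_related(t)]
print(len(relevant))  # → 2
```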
To date there is comparatively little activity on Google+, though many have established an initial foothold there.
Part of the problem is lack of familiarity with the platform, but another obstacle is the limited capacity to connect other parts of one’s social media footprint with one’s Google+ presence.
There is only one Google+ Community to speak of: ‘Gifted and Talented’, which currently has 134 members.
A search reveals a large number of people and pages ostensibly relevant to gifted education, but few are useful and many are dormant.
My own Google+ page is dormant. It should now be possible to have WordPress.com blogposts appear automatically on a Google+ page, but the service seems unreliable. There is no capacity to link Twitter and Google+ in this fashion. I am waiting on Google to improve the connectivity of their service.
LinkedIn is also comparatively little used by the gifted education community. There are several groups:
But none is particularly active, despite the rather impressive numbers above. Similarly, a handful of organisations have company pages on LinkedIn, but only one or two are active.
The search purports to include a staggering 98,360 people who mention ‘gifted’ in their profiles, but basic account holders can only see 100 results at a time.
My own LinkedIn page is registered under my real name rather than my social media pseudonym and is focused principally on my consultancy activity. I often forget it exists.
By comparison, Twitter is much more lively.
My brief January post mentioned my Twitter list containing every user I could find who mentions gifted education (or a similar term, whether in English or a selection of other languages) in their profile.
The list currently contains 1,263 feeds. You are welcome to subscribe to it. If you want to see it in action first, it is embedded in the right-hand column of this Blog, just beneath the blogroll.
The majority of the gifted-related activity on Twitter takes place under the #gtchat hashtag, which tends to be busier than even the most popular Facebook pages.
This hashtag also accommodates an hour-long real-time chat every Friday (at around midnight UK time) and, at least once a month, on Sundays at a time more conducive to European participants.
Other hashtags carrying information about gifted education include: #gtvoice (UK-relevant), #gtie (Ireland-relevant), #hoogbegaafd (Dutch-speaking); #altascapacidades (Spanish-speaking), #nagc and #gifteded.
Chats also take place on the #gtie and #nagc hashtags, though the latter may now be discontinued.
Several feeds provide gifted-relevant news and updates from around the world. Amongst the most followed are:
The most viewed video is called ‘Top 10 Myths in Gifted Education’, a dramatised presentation which was uploaded in March 2010 by the Gifted and Talented Association of Montgomery County. This has had almost 70,000 views.
Gifted Phoenix does not have a YouTube presence.
Paper.li describes itself as ‘a content curation service’ which ‘enables people to publish newspapers based on topics they like and treat their readers to fresh news, daily.’
It enables curators to draw on material from Facebook, Twitter, Google+, embeddable YouTube videos and websites via RSS feeds.
In September 2013 it reported 3.7m users each month.
I found six gifted-relevant ‘papers’ with over 1,000 subscriptions:
There is, as yet, no Gifted Phoenix presence on paper.li, though I have been minded for some months to give it a try.
Pinterest is built around a pinboard concept. Pins are illustrated bookmarks designating something found online or already on Pinterest, while Boards are used to organise a collection of pins. Users can follow each other and others’ boards.
Pinterest is said to have 70 million users, of whom 80% are female.
A search on ‘gifted education’ reveals hundreds of boards dedicated to the topic, but unfortunately there is no obvious way to rank them by number of followers or number of pins.
Since advanced search capability is conspicuous by its absence, the user apparently has little choice but to sift laboriously through each board. I have not undertaken this task so I can bring you no useful information about the most used and most popular boards.
Judging by the names attached to these boards, they are owned almost exclusively by women. It is interesting to hypothesise about what causes this gender imbalance – and whether Pinterest is actively pursuing female users at the expense of males.
There are, however, some organisations in the field making active use of Pinterest. A search of ‘pinners’ suggests that amongst the most popular are:
IAGC Gifted which has 26 boards, 734 pins and 400 followers.
Gifted Phoenix is male and does not have a presence on Pinterest…yet!
Scoop.it stores material on a page that sits somewhere between a paper.li-style newspaper and a Pinterest-style board. It is reported to have almost seven million unique visitors each month.
‘Scoopable’ material is drawn together via URLs, a programmable ‘suggestions engine’ and other social media, including all the ‘big four’. The free version, however, permits a user to link only two social media accounts, putting significant restrictions on Scoop.it’s curational capacity.
Scoop.it also has limited search capability. It is straightforward to conduct an elementary search like this one on ‘gifted’, which reveals 107 users.
There is no quick way of finding those pages that are most used or most followed, but one can hover over the search results for topics to find out which have most views:
Storify is a slightly different animal to the other three tools. It describes itself as:
‘the leading social storytelling platform, enabling users to easily collect tweets, photos, videos and media from across the web to create stories that can be embedded on any website. With Storify, anyone can curate stories from the social web to embed on their own site and share on the Storify platform.’
Estimates of user numbers vary but are typically from 850,000 to 1m.
Storify is a flexible tool whose free service permits one to collect material already located on the platform and from a range of other sources including Twitter, Facebook, YouTube, Flickr, Instagram, Google search, Tumblr – or via RSS or URL.
The downside is that there is no way to search within Storify for stories or users, so I cannot provide information about the level of activity, or suggest users that it might be helpful to follow.
However, a Google search reveals that users of Storify include:
These tiny numbers show that Storify has not really taken off as a curational platform in its own right, though it is an excellent supporting tool, particularly for recording transcripts of Twitter chats.
So, having reviewed wider gifted education-related activity on these ten social media platforms and tools, it is time to revisit the online and social media profile of the six selected organisations.
The WCGTC website was revised in 2012 and has a clear and contemporary design.
The Council’s Mission Statement has a strong networking feel to it and elsewhere the website emphasises the networking benefits associated with membership:
‘…But while we’re known for our biennial conference the spirit of sharing actually goes on year round among our membership.
By joining the World Council you can become part of this vital network and have access to hundreds of other peers while learning about the latest developments in the field of gifted children.’
The home page includes direct links to the organisation’s Facebook Page and Twitter feed. There is also an RSS feed symbol but it is not active.
Both Twitter and Facebook are of course available to members and non-members alike.
At the time of writing, the Facebook page has 1,616 ‘likes’ and is relatively current, with five posts in the last month, though there is relatively little comment on these.
The Twitter feed typically manages a daily Tweet. Hashtags are rarely if ever employed. At the time of writing the feed has 1,076 followers.
Almost all the Tweets are links to a daily paper.li production ‘WCGTC Daily’ which was first published in late July 2013, just before the last biennial conference. This has 376 subscribers at the present time, although the gifted education coverage is selective and limited.
As noted above, the World Council website provides links to two of its six strands of social media activity, but not the remaining four. It is not yet serving as an effective hub for the full range of this activity.
Some of the strands link together well – eg Twitter to paper.li – but there is considerable scope to improve the incidence and frequency of cross-referencing.
Of the six organisations in this sample, ECHA is comfortably the least active in social media with only a Facebook page available to supplement its website.
The site itself is rather old-fashioned and could do with a refresh. It includes a section ‘Introducing ECHA’ which emphasises the organisation’s networking role:
‘The major goal of ECHA is to act as a communications network to promote the exchange of information among people interested in high ability – educators, researchers, psychologists, parents and the highly able themselves. As the ECHA network grows, provision for highly able people improves and these improvements are beneficial to all members of society.’
There is no reference on the website to the Facebook group, which is closed but not confined solely to ECHA members. There are currently 191 members. The group is fairly active, but does not rival those with far more members listed above.
There’s not much evidence of cross-reference between the Facebook group and the website, but that may be because the website is infrequently updated.
As with the World Council, ECHA conferences have their own social media profile.
At the 2012 Conference in Münster this was left largely to the delegates. Several of us live-Tweeted the event.
I blogged about the Conference and my part in it, providing links to transcripts of the Twitter record. The post concluded with a series of learning points for this year’s ECHA Conference in Slovenia.
The Conference website explains that the theme of the 2014 event is ‘Rethinking Giftedness: Giftedness in the Digital Age’.
Six months ahead of the event, there is a Twitter feed with 29 followers that has been dormant for three months at the time of writing and a LinkedIn group with 47 members that has been quiet for five months.
A Forum was also established which has not been used for over a year. There is no information on the website about how the event will be supported by social media.
I sincerely hope that my low expectations will not be fulfilled!
SENG is far more active across social media. Its website carries a 2012 copyright notice and has a more contemporary feel than many of the others in this sample.
The bottom of the home page extends an invitation to ‘connect with the SENG community’ and carries links to Facebook, Twitter and LinkedIn (though not to Google+ or YouTube).
In addition, each page carries a set of buttons to support the sharing of this information across a wide range of social media.
The organisation’s Strategic Plan 2012-2017 makes only fleeting reference to social media, in relation to creating a ‘SENG Liaison Facebook page’ to facilitate inter-state and international support.
It does, however, devote one of its nine goals to the further development of its webinar programme (each webinar costs $40 to attend; non-participants can purchase a recording for the same price).
SENG offers online parent support groups but does not state which platform is used to host these. It has a Technology/Social Media Committee but its proceedings are not openly available.
Reference has already been made above to the principal Facebook Page which is popular, featuring posts on most days and a fair amount of interaction from readers.
The parallel group for SENG Liaisons is also in place, but is closed to outsiders, which rather seems to defeat the object.
The SENG Twitter feed is relatively well followed and active on most days. The LinkedIn page is somewhat less active but can boast 142 followers while Google+ is clearly a new addition to the fold.
The YouTube channel, however, has 257 subscribers and carries 16 videos, most of them featuring presentations by James Webb. Rather strangely, these don’t seem to feature in the media library carried by the website.
SENG is largely a voluntary organisation with little staff resource, but it is successfully using social media to extend its footprint and global influence. There is, however, scope to improve coherence and co-ordination.
National Association for Gifted Children
The NAGC’s website is also in need of a refresh. Its copyright notice dates from 2008, which was probably when it was designed.
There are no links to social media on the home page, but ‘NAGC at a glance’ carries a direct link to the Facebook group and a Twitter logo without a link, while the page listing NAGC staff has working links to both Facebook and Twitter.
In the past, NAGC has been more active in this field.
This post was filled by July 2013. The postholder seems to have been concentrating primarily on editing the magazine edition of Parenting High Potential, which is confined to members only (but also has a Facebook presence – see below).
NAGC’s website carries a document called ‘NAGC leadership initiatives 2013-14’ which suggests further developments in the next few months.
The initiatives include:
‘Leverage content to intentionally connect NAGC resources, products and programs to targeted audiences through an organization-wide social media strategy.’
‘Implement a new website and membership database that integrates with social media and provides a state-of-the-art user interface.’
One might expect NAGC to build on its current social media profile which features:
A Facebook Group which currently has 2,420 members and is reasonably active, though not markedly so. Relatively few posts generate significant comments.
There is additional activity associated with the Annual NAGC Convention. There was extensive live Tweeting from the 2013 Convention under the rival hashtags #NAGC2013 and #NAGC13. #NAGC14 looks the favourite for this year’s Convention, which has also established a Facebook presence.
NAGC also has its own networks. The website lists 15 of these but hardly any of their pages give details of their social media activity. A cursory review reveals that:
Overall, NAGC has a fairly impressive array of social media activity but demonstrates relatively little evidence of strategic coherence and co-ordination. This may be expected to improve in the next six months, however.
NACE is not quite the poorest performer in our sample but, like ECHA, it has so far made relatively little progress towards effective engagement with social media.
Its website dates from 2010 but looks older. Prominent links to Twitter and Facebook appear on the front page as well as – joy of joys – an RSS feed.
However, the Facebook link is not to a NACE-specific page or group and the RSS feed doesn’t work.
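Broken feed links of this kind are easy to detect automatically: a working RSS or Atom feed should at least parse as XML with the expected root element. A rough sketch of such a check, run here against invented sample responses rather than the live NACE site:

```python
import xml.etree.ElementTree as ET

def looks_like_feed(body: str) -> bool:
    """Rough check: does the response parse as XML with an RSS/Atom root element?"""
    try:
        root = ET.fromstring(body)
    except ET.ParseError:
        return False
    tag = root.tag.split("}")[-1]  # strip any XML namespace prefix
    return tag in {"rss", "feed"}

# Invented sample responses: a valid RSS document and an HTML error page
working = "<rss version='2.0'><channel><title>NACE news</title></channel></rss>"
broken = "<html><body>404 Not Found</body></html>"
print(looks_like_feed(working), looks_like_feed(broken))  # → True False
```

A periodic check along these lines would flag a dead feed icon long before a visitor stumbles across it.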
There are references on the website to the networking benefits of NACE membership, but not to any role for the organisation in wider networking activity via social media. Current efforts seem focused primarily on advertising NACE and its services to prospective members and purchasers.
The Twitter feed has a respectable 1,426 followers but Tweets tend to appear in blocks of three or four spaced a few days apart. Quality and relevance are variable.
Whereas the old Facebook page had reached 1,344 likes, the new one is currently at roughly half that level – 683 – but the level of activity is reasonably impressive.
There is a third Facebook page dedicated to the organisation’s ‘It’s Alright to Be Bright’ campaign, which is not quite dormant.
All website pages carry buttons supporting information-sharing via a wide range of social media outlets. But there is little reference in the website content to its wider social media activity.
The Twitter feed is fairly lively, boasting 1,093 followers. It currently has some 400 fewer followers than NACE but has published about 700 more Tweets. Both are publishing at about the same rate. Quality and relevance are similarly variable.
The LinkedIn page is little more than a marker and does not list the products offered.
The Google+ presence uses the former NAGC Britain name and is also no more than a marker.
But the level of activity on Pinterest is more significant. There are 14 boards, containing a total of 271 pins between them and attracting 26 followers. All this material has been uploaded during 2014.
I shall begin by reflecting on Gifted Phoenix’s profile across the ten elements included in this analysis:
He has what he believes is a reasonable Blog.
He is one of the leading authorities on gifted education on Twitter (if not the leading authority).
His Facebook profile consists almost exclusively of ‘repeats’ from his Twitter feed.
His LinkedIn page reflects a different identity and is not connected properly to the rest of his profile.
His Google+ presence is embryonic.
He has used Scoop.it and Storify to some extent, but not Paper.li or Pinterest.
GP currently has a rather small social media footprint, since he is concentrating on doing only two things – blogging and microblogging – effectively.
He might be advised to extend his sphere of influence by distributing the limited available human resource more equitably across the range of available media.
On the other hand he is an individual with no organisational objectives to satisfy. Fundamentally he can follow his own preferences and inclinations.
Maybe he should experiment with this post, publishing it as widely as possible and monitoring the impact via his blog analytics…
The Six Organisations
There is a strong correlation between the size of each organisation’s social media footprint and the effectiveness with which it uses social media.
There are no obvious examples – in this sample at least – of organisations that have a small footprint because of a deliberate choice to specialise in a narrow range of media.
If we were to rank the six in order of effectiveness, the World Council, NAGC and SENG would be vying for top place, while ECHA and NACE would be competing for bottom place and Potential Plus UK would be somewhere in the middle.
But none of the six organisations would achieve more than a moderate assessment against the two sets of quality criteria. All of them have huge scope for improvement.
Their priorities will vary, according to what is set out in their underlying social media strategies. (If they have no social media strategy, the obvious priority is to develop one, or to revise it if it is outdated.)
The Overall Picture across the Five Aspects of Gifted Education
This analysis has been based on the activities of a small sample of six generalist organisations in the gifted education field, as well as wider activity involving a cross-section of tools and platforms.
It has not considered providers who specialise in one of the five aspects – advocacy, learning, professional development, policy-making and research – or the use being made of specialist social media, such as MOOCs and research tools.
So the judgements that follow are necessarily approximate. But nothing I have seen across the wider spectrum of social media over the past 18 months would seriously call into question the conclusions reached below.
Advocacy via social media is slightly stronger than it was in 2012 but there is still much insularity and too little progress has been made towards a joined up global movement. The international organisations remain fundamentally inward-looking and have been unable to offer the leadership and sense of direction required. The grip of the old guard has been loosened and some of the cliquey atmosphere has dissipated, but academic research remains the dominant culture.
Learning via social media remains limited. There are still several niche providers but none has broken through in a global sense. The scope for fruitful partnership between gifted education interests and one or more of the emerging MOOC powerhouses remains unfulfilled. The potential for social media to support coherent and targeted blended learning solutions – and to support collaborative learning amongst gifted learners worldwide – is still largely unexploited.
Professional development via social media has been developed at a comparatively modest level by several providers, but the prevailing tendency seems to be to regard this as a ‘cash cow’ generating income to support other activities. There has been negligible progress towards securing the benefits that would accrue from systematic international collaboration.
Policy-making via social media is still the poor relation. The significance of policy-making (and of policy makers) within gifted education is little appreciated and little understood. What engagement there is seems focused disproportionately on lobbying politicians, rather than on developing, at working level, practical solutions to the policy problems that so many countries face in common.
Research via social media is negligible. The vast majority of academic researchers in the field are still caught in a 20th Century paradigm built around publication in paywalled journals and a perpetual round of face-to-face conferences. I have not seen any significant examples of collaboration between researchers. A few make a real effort to convey key research findings through social media but most do not. Some of NAGC’s networks are beginning to make progress and the 2013 World Conference went further than any of its predecessors in sharing proceedings with those who could not attend. Now the pressure is on the EU Talent Conference in Budapest and ECHA 2014 in Slovenia to push beyond this new standard.
Overall progress has been limited and rather disappointing. The three conclusions I drew in 2012 remain valid.
In September 2012 I concluded that ‘rapid acceleration is necessary otherwise gifted education will be left behind’. Eighteen months on, there are some indications of slowly gathering speed, but the gap between practice in gifted education and leading practice has widened meanwhile – and the chances of closing it seem increasingly remote.
Back in 2010 and 2011 several of my posts had an optimistic ring. It seemed then that there was an opportunity to ‘only connect’ globally, but also at European level via the EU Talent Centre and in the UK via GT Voice. But both those initiatives are faltering.
My 2012 post also finished on an optimistic note:
‘Moreover, social media can make a substantial and lasting contribution to the scope, value and quality of gifted education, to the benefit of all stakeholders, but ultimately for the collective good of gifted learners.
No, ‘can’ is too cautious, non-assertive, unambitious. Let’s go for WILL instead!’
Now in 2014 I am resigned to the fact that there will be no great leap forward. The very best we can hope for is disjointed incremental improvement achieved through competition rather than collaboration.
I will be doing my best for Potential Plus UK. Now what about you?
We discussed the issue of labelling gifted learners and the idea that such labels may not be permanent sifting devices, but temporary markers attached to such learners only while they need additional challenge and support.
This is not to deny that some gifted learners may warrant a permanent marker, but it does imply that many – probably most – will move in and out of scope as they develop in non-linear fashion and differentially to their peers.
Of course much depends on one’s understanding of giftedness and gifted education, a topic I have addressed frequently, starting with my inaugural post in May 2010.
Three-and-a-half years on, it seems to me that the default position has shifted somewhat further towards the Nurture, Equity and Personalisation polarities.
But the notion of giftedness as dynamic in both directions – with learners shifting in and out of scope as they develop – may be an exception to that broader direction of travel.
Of course there has been heavy emphasis on movement into scope (the broader notion of giftedness as learned behaviour, achievable through effort) but very little attention has been given to movement in the opposite direction.
It is easy to understand how this would be a red rag to several bulls in the gifted education field, while outward movement raises difficult questions for everybody – advocates for gifted education or not – about communication and the management of self-esteem.
But reform and provocation are often stalwart bedfellows. Feel free to vent your spleen in the comments section below.
I have been doing some groundwork for an impending analysis of the coverage of gifted education (and related issues) in social media – and reflecting on how that has changed in the four years I have been involved.
As a first step I revised my Blogroll (normally found in the right-hand margin, immediately below the Archives).
I decided to include only Blogs that have published three or more relevant posts in the last six months – and came up with the following list of 23, which I have placed in alphabetical order.
This is rather a short list, which might suggest a significant falling off of blogging activity since 2010. I had to delete the majority of the entries in the previous version of the Blogroll because they were dormant or dead.
But I might have missed some deserving blogs, particularly in other languages. Most on this list are written in English.
If you have other candidates for inclusion do please suggest them through the comments facility below, or pass them on via Twitter.
You may have views about the quantity and quality of blogging activity – and whether there is an issue here that needs to be addressed. Certainly the apparent decline in gifted education blogging comes at a time when edublogging in England has never been more popular. Perhaps you have ideas for stimulating more posts.
On the other hand, you might take the view that blogging is increasingly irrelevant, given the inexorable rise of microblogging – aka Twitter – and the continued popularity of Facebook, let alone the long list of alternatives.
Speaking of Twitter, I thought it might be an interesting exercise to compile a public list of every feed I could find that references gifted education (or an equivalent term, whether in English or another language) in its profile.
The list includes some leading academic authorities on the subject, but is dominated by gifted education teachers and the parents of gifted learners, probably in roughly equal measure.
The clear majority is based in the United States, but there is a particularly strong community in the Netherlands and reasonable representation in Australia, Canada, Spain and the UK. Several other countries are more sparsely represented.
(One authority – who shall remain nameless – has unaccountably blocked me, which prevents his inclusion in the list. But he has only produced eight tweets, the most recent over a year old, so I suppose he is no great loss.)
I cannot compare this with earlier lists, but it feels as though there has been a significant expansion of the gifted Twittersphere since I began in 2010.
That said I have no information yet about how many of the feeds are active – and just how active they are.
If I have inadvertently omitted you from the list, please Tweet to let me know. Please feel free to make use of the list as you wish, or to offer suggestions for how I might use it.
There will be further segmented lists in due course.
Postscript 13 January:
Many thanks for your really positive response. The blogroll now has 34 entries…and there’s always room for more.
If you’d like to subscribe to the Twitter list but are not sure how, here’s Twitter’s guide (see bottom of page).
If you’re not on the list but would like to be, please either follow me (making sure there’s a reference to gifted or similar in your profile) or send me a tweet requesting to be added.
You can follow or tweet me direct from this blog by going to the ‘Gifted Phoenix on Twitter’ embed in the right-hand column.