‘Poor but Bright’ v ‘Poor but Dim’

 

This post explores whether, in supporting learners from disadvantaged backgrounds, educators should prioritise low attainers over high attainers, or give them equal priority.


Introduction

Last week I took umbrage at a blog post and found myself engaged in a Twitter discussion with the author, one Mr Thomas.

 

 

Put crudely, the discussion hinged on the question whether the educational needs of ‘poor but dim’ learners should take precedence over those of the ‘poor but bright’. (This is Mr Thomas’s shorthand, not mine.)

He argued that the ‘poor but dim’ are the higher priority; I countered that all poor learners should have equal priority, regardless of their ability and prior attainment.

We began to explore the issue:

  • as a matter of educational policy and principle
  • with reference to inputs – the allocation of financial and human resources between these competing priorities and
  • in terms of outcomes – the comparative benefits to the economy and to society from investment at the top or the bottom of the attainment spectrum.

This post presents the discussion, adding more flesh and gloss from the Gifted Phoenix perspective.

It might or might not stimulate some interest in how this slightly different take on a rather hoary old chestnut plays out in England’s current educational landscape.

But I am particularly interested in how gifted advocates in different countries respond to these arguments. What is the consensus, if any, on the core issue?

Depending on the answer to this first question, how should gifted advocates frame the argument for educationalists and the wider public?

To help answer the first question I have included a poll at the end of the post.

Do please respond to that – and feel free to discuss the second question in the comments section below.

The structure of the post is fairly complex, comprising:

  • A (hopefully objective) summary of Mr Thomas’s original post.
  • An embedded version of the substance of our Twitter conversation. I have removed some Tweets – mostly those from third parties – and reordered a little to make this more accessible. I don’t believe I’ve done any significant damage to either case.
  • Some definition of terms, because there is otherwise much cause for confusion as we push further into the debate.
  • A digressive exploration of the evidence base, dealing with attainment data and budget allocations respectively. The former exposes what little we are told about how socio-economic gaps vary across the attainment spectrum; the latter is relevant to the discussion of inputs. Those pressed for time may wish to proceed directly to…
  • …A summing up, which expands in turn on the key points we exchanged on the point of principle, on inputs and on outcomes respectively.

I have reserved until close to the end a few personal observations about the encounter and how it made me feel.

And I conclude with the customary brief summary of key points and the aforementioned poll.

It is an ambitious piece and I am in two minds as to whether it hangs together properly, but you are ultimately the judges of that.

 

What Mr Thomas Blogged

The post was called ‘The Romance of the Poor but Bright’ and the substance of the argument (incorporating several key quotations) ran like this:

  • The ‘effort and resources, of schools but particularly of business and charitable enterprise, are directed disproportionately at those who are already high achieving – the poor but bright’.
  • Moreover ‘huge effort is expended on access to the top universities, with great sums being spent to make marginal improvements to a small set of students at the top of the disadvantaged spectrum. They cite the gap in entry, often to Oxbridge, as a significant problem that blights our society.’
  • This however is ‘the pretty face of the problem. The far uglier face is the gap in life outcomes for those who take least well to education.’
  • ‘Popular discourse is easily caught up in the romance of the poor but bright’ but ‘we end up ignoring the more pressing problem – of students for whom our efforts will determine whether they ever get a job or contribute to society’. For ‘when did you last hear someone advocate for the poor but dim?’
  • ‘The gap most damaging to society is in life outcomes for the children who perform least well at school.’ Three areas should be prioritised to improve their educational outcomes:

o   Improving alternative provision (AP) which ‘operates as a shadow school system, largely unknown and wholly unappreciated’ – ‘developing a national network of high-quality alternative provision…must be a priority if we are to close the gap at the bottom’.

o   Improving ‘consistency in SEN support’ because ‘schools are often ill equipped to cope with these, and often manage only because of the extraordinary effort of dedicated staff’. There is ‘inconsistency in funding and support between local authorities’.

o   Introducing clearer assessment of basic skills, ‘so that a student could not appear to be performing well unless they have mastered the basics’.

  • While ‘any student failing to meet their potential is a dreadful thing’, the educational successes of ‘students with incredibly challenging behaviour’ and ‘complex special needs…have the power to change the British economy, far more so than those of their brighter peers.’

A footnote adds ‘I do not believe in either bright or dim, only differences in epigenetic coding or accumulated lifetime practice, but that is a discussion for another day.’

Indeed it is.

 

Our ensuing Twitter discussion

The substance of our Twitter discussion is captured in the embedded version immediately below. (Scroll down to the bottom for the beginning and work your way back to the top.)

 

 

Defining Terms

 

Poor, Bright and Dim

I take poor to mean socio-economic disadvantage, as opposed to any disadvantage attributable to the behaviours, difficulties, needs, impairments or disabilities associated with AP and/or SEN.

I recognise of course that such a distinction is more theoretical than practical, because, when learners experience multiple causes of disadvantage, the educational response must be holistic rather than disaggregated.

Nevertheless, the meaning of ‘poor’ is clear – that term cannot be stretched to include these additional dimensions of disadvantage.

The available performance data foregrounds two measures of socio-economic disadvantage: current eligibility for and take up of free school meals (FSM) and qualification for the deprivation element of the Pupil Premium, determined by FSM eligibility at some point within the last 6 years (known as ‘ever-6’).

Both are used in this post. Distinctions are typically between disadvantaged and non-disadvantaged learners, though some of the supporting data compares outcomes for disadvantaged learners with outcomes for all learners, advantaged and disadvantaged alike.

The gaps that need closing are therefore:

  • between ‘poor and bright’ and other ‘bright’ learners (The Excellence Gap) and 
  • between ‘poor and dim’ and other ‘dim’ learners. I will christen this The Foundation Gap.

The core question is whether The Foundation Gap takes precedence over The Excellence Gap or vice versa, or whether they should have equal billing.

This involves immediate and overt recognition that classification as AP and/or SEN is not synonymous with the epithet ‘poor’, because there are many comparatively advantaged learners within these populations.

But such a distinction is not properly established in Mr Thomas’ blog, which applies the epithet ‘poor’ but then treats the AP and SEN populations as homogenous and somehow associated with it.

 

By ‘dim’ I take Mr Thomas to mean the lowest segment of the attainment distribution – one of his tweets specifically mentions ‘the bottom 20%’. The AP and/or SEN populations are likely to be disproportionately represented within these two deciles, but they are not synonymous with them either.

This distinction will not be lost on gifted advocates who are only too familiar with the very limited attention paid to twice exceptional learners.

Those from poor backgrounds within the AP and/or SEN populations are even more likely to be disproportionately represented in ‘the bottom 20%’ than their more advantaged peers, but even they will not constitute the entirety of ‘the bottom 20%’. A Venn diagram would likely show significant overlap, but that is all.

Hence disadvantaged AP/SEN are almost certainly a relatively poor proxy for the ‘poor but dim’.

That said I could find no data that quantifies these relationships.

The School Performance Tables distinguish a ‘low attainer’ cohort. (In the Secondary Tables the definition is determined by prior KS2 attainment and in the Primary Tables by prior KS1 attainment.)

These populations comprise some 15.7% of the total population in the Secondary Tables and about 18.0% in the Primary Tables. But neither set of Tables applies the distinction in their reporting of the attainment of those from disadvantaged backgrounds.

 

It follows from the definition of ‘dim’ that, by ‘bright’, Mr Thomas probably intends the two corresponding deciles at the top of the attainment distribution (even though he seems most exercised about the subset with the capacity to progress to competitive universities, particularly Oxford and Cambridge. This is a far more select group of exceptionally high attainers – and an even smaller group of exceptionally high attainers from disadvantaged backgrounds.)

A few AP and/or SEN students will likely fall within this wider group, fewer still within the subset of exceptionally high attainers. AP and/or SEN students from disadvantaged backgrounds will be fewer again, if indeed there are any at all.

The same issues with data apply. The School Performance Tables distinguish ‘high attainers’, who constitute over 32% of the secondary cohort and 25% of the primary cohort. As with low attainers, we cannot isolate the performance of those from disadvantaged backgrounds.

We are forced to rely on what limited data is made publicly available to distinguish the performance of disadvantaged low and high attainers.

At the top of the distribution there is a trickle of evidence about performance on specific high attainment measures and access to the most competitive universities. Still greater transparency is fervently to be desired.

At the bottom, I can find very little relevant data at all – we are driven inexorably towards analyses of the SEN population, because that is the only dataset differentiated by disadvantage, even though we have acknowledged that such a proxy is highly misleading. (Equivalent AP attainment data seems conspicuous by its absence.)

 

AP and SEN

Before exploring these datasets I ought to provide some description of the different programmes and support under discussion here, if only for the benefit of readers who are unfamiliar with the English education system.

 

Alternative Provision (AP) is intended to meet the needs of a variety of vulnerable learners:

‘They include pupils who have been excluded or who cannot attend mainstream school for other reasons: for example, children with behaviour issues, those who have short- or long-term illness, school phobics, teenage mothers, pregnant teenagers, or pupils without a school place.’

AP is provided in a variety of settings where learners engage in timetabled education activities away from their school and school staff.

Providers include further education colleges, charities, businesses, independent schools and the public sector. Pupil Referral Units (PRUs) are perhaps the best-known settings – there are some 400 nationally.

A review of AP was undertaken by Taylor in 2012 and the Government subsequently embarked on a substantive improvement programme. This rather gives the lie to Mr Thomas’ contention that AP is ‘largely unknown and wholly unappreciated’.

Taylor complains of a lack of reliable data about the number of learners in AP but notes that the DfE’s 2011 AP census recorded 14,050 pupils in PRUs and a further 23,020 in other settings on a mixture of full-time and part-time placements. This suggests a total of slightly over 37,000 learners, though the FTE figure is unknown.

He states that AP learners are:

‘…twice as likely as the average pupil to qualify for free school meals’

A supporting Equality Impact Assessment qualifies this somewhat:

‘In Jan 2011, 34.6% of pupils in PRUs and 13.8%* of pupils in other AP, were eligible for and claiming free school meals, compared with 14.6% of pupils in secondary schools. [*Note: in some AP settings, free school meals would not be available, so that figure is under-stated, but we cannot say by how much.]’

If the PRU population is typical of the wider AP population, approximately one third qualify under this FSM measure of disadvantage, meaning that the substantial majority are not ‘poor’ according to our definition above.

Taylor confirms that overall GCSE performance in AP is extremely low, pointing out that in 2011 just 1.4% achieved five or more GCSE grades A*-C including [GCSEs in] maths and English, compared to 53.4% of pupils in all schools.

By 2012/13 the comparable percentages were 1.7% and 61.7% respectively (the latter for all state-funded schools), suggesting an increasing gap in overall performance. This is a cause for concern but not directly relevant to the issue under consideration.

The huge disparity is at least partly explained by the facts that many AP students take alternative qualifications and that the national curriculum does not apply to PRUs.

Data is available showing the full range of qualifications pursued. Taylor recommended that all students in AP should continue to receive ‘appropriate and challenging English and Maths teaching’.

Interestingly, he also pointed out that:

‘In some PRUs and AP there is no provision for more able pupils who end up leaving without the GCSE grades they are capable of earning.’

However, he fails to offer a specific recommendation to address this point.

 

Special Educational Needs (SEN) are needs or disabilities that affect children’s ability to learn. These may include behavioural and social difficulties, learning difficulties or physical impairments.

This area has also been subject to a major Government reform programme now being implemented.

There is significant overlap between AP and SEN, with Taylor’s review of the former noting that the population in PRUs is 79% SEN.

We know from the 2013 SEN statistics that 12.6% of all pupils on roll at PRUs had SEN statements and 68.9% had SEN without statements. But these populations represent only a tiny proportion of the total SEN population in schools.

SEN learners also have higher than typical eligibility for FSM. In January 2013, 30.1% of pupils across all SEN categories in primary, secondary and special schools were FSM-eligible, roughly twice the rate for all pupils. However, this means that almost seven in ten are not caught by the definition of ‘poor’ provided above.

In 2012/13 23.4% of all SEN learners achieved five or more GCSEs at A*-C or equivalent, including GCSEs in English and maths, compared with 70.4% of those having no identified SEN – another significant overall gap, but not directly relevant to our comparison of the ‘poor but bright’ and the ‘poor but dim’.

 

Data on socio-economic attainment gaps across the attainment spectrum

Those interested in how socio-economic attainment gaps vary at different attainment levels cannot fail to be struck by how little material of this kind is published, particularly in the secondary sector, where such gaps tend to increase in size.

One cannot entirely escape the conviction that this reticence deliberately masks some inconvenient truths.

  • The ideal would be to have the established high/middle/low attainer distinctions mapped directly onto performance by advantaged/disadvantaged learners in the Performance Tables but, as we have indicated, this material is conspicuous by its absence. Perhaps it will appear in the Data Portal now under development.
  • Our next best option is to examine socio-economic attainment gaps on specific attainment measures that will serve as decent proxies for high/middle/low attainment. We can do this to some extent but the focus is disproportionately on the primary sector because the Secondary Tables do not include proper high attainment measures (such as measures based exclusively on GCSE performance at grades A*/A). Maybe the Portal will come to the rescue here as well. We can however supply some basic Oxbridge fair access data.
  • The least preferable option is to deploy our admittedly poor proxies for low attainers – SEN and AP. But there isn’t much information from this source either.

The analysis below looks consecutively at data for the primary and secondary sectors.

 

Primary

We know, from the 2013 Primary School Performance Tables, that the percentages of disadvantaged and other learners achieving different KS2 levels in reading, writing and maths combined, in 2013 and 2012 respectively, were as follows:

 

Table 1: Percentage of disadvantaged and all other learners achieving each national curriculum level at KS2 in 2013 in reading, writing and maths combined

           L3 or below      L4 or above      L4B or above     L5 or above
           Dis  Oth  Gap    Dis  Oth  Gap    Dis  Oth  Gap    Dis  Oth  Gap
2013        13    5   +8     63   81  -18     49   69  -20     10   26  -16
2012         x    x    x     61   80  -19      x    x    x      9   24  -15

 

This tells us relatively little, apart from the fact that disadvantaged learners are heavily over-represented at L3 and below and heavily under-represented at L5 and above.

The L5 gap is somewhat lower than the gaps at L4 and 4B respectively, but not markedly so. However, the L5 gap has widened slightly since 2012 while the reverse is true at L4.

This next table synthesises data from SFR51/13: ‘National curriculum assessments at key stage 2: 2012 to 2013’. It also shows gaps for disadvantage, as opposed to FSM gaps.

 

Table 2: Percentage of disadvantaged and all other learners achieving each national curriculum level, including differentiation by gender, in each 2013 end of KS2 test

                   L3           L4           L4B          L5           L6
                  D   O  Gap   D   O  Gap   D   O  Gap   D   O  Gap   D   O  Gap
Reading  All     12   6   +6  48  38  +10  63  80  -17  30  50  -20   0   1   -1
         B       13   7   +6  47  40   +7  59  77  -18  27  47  -20   0   0    0
         G       11   5   +6  48  37  +11  67  83  -16  33  54  -21   0   1   -1
GPS      All     28  17  +11  28  25   +3  52  70  -18  33  51  -18   1   2   -1
         B       30  20  +10  27  27    0  45  65  -20  28  46  -18   0   2   -2
         G       24  13  +11  28  24   +4  58  76  -18  39  57  -18   1   3   -2
Maths    All     16   9   +7  50  41   +9  62  78  -16  24  39  -15   2   8   -6
         B       15   8   +7  48  39   +9  63  79  -16  26  39  -13   3  10   -7
         G       17   9   +8  52  44   +8  61  78  -17  23  38  -15   2   7   -5

 

This tells a relatively consistent story across each test and for boys as well as girls.

We can see that, at Level 4 and below, learners from disadvantaged backgrounds are clearly over-represented, perhaps with the exception of L4 GPS. But at L4B and above they are very much under-represented.

Moreover, with the exception of L6 where low percentages across the board mask the true size of the gaps, disadvantaged learners tend to be significantly more under-represented at L4B and above than they are over-represented at L4 and below.

A different way of looking at this data is to compare the percentages of advantaged and disadvantaged learners respectively at L4 and L5 in each assessment.

  • Reading: amongst disadvantaged learners the proportion at L5 is 18 percentage points lower than the proportion at L4, whereas amongst advantaged learners it is 12 percentage points higher.
  • GPS: amongst disadvantaged learners the proportion at L5 is 5 percentage points higher than the proportion at L4, whereas amongst advantaged learners it is 26 percentage points higher.
  • Maths: amongst disadvantaged learners the proportion at L5 is 26 percentage points lower than the proportion at L4, whereas amongst advantaged learners it is only 2 percentage points lower. (These comparisons are reproduced in the sketch below.)
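
Since this arithmetic is easy to mangle, here is a minimal sketch (Python, purely illustrative) that reproduces these within-group comparisons from the ‘All’ rows of Table 2:

# Illustrative only: 'All' percentages from Table 2, as (L4, L5) pairs.
# 'dis' = disadvantaged learners, 'oth' = all other learners.
table2 = {
    "Reading": {"dis": (48, 30), "oth": (38, 50)},
    "GPS":     {"dis": (28, 33), "oth": (25, 51)},
    "Maths":   {"dis": (50, 24), "oth": (41, 39)},
}

for test, groups in table2.items():
    for group, (l4, l5) in groups.items():
        # Positive = a higher proportion at L5 than at L4 (percentage points).
        print(f"{test} ({group}): L5 minus L4 = {l5 - l4:+d} points")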

If we compare 2013 gaps with those for 2012 (with teacher assessment of writing included in place of the GPS test, which was only introduced in 2013) we can see there has been relatively little change across the board, with the exception of L5 maths, which has been affected by the increasing success of advantaged learners at L6.

 

Table 3: Percentage of disadvantaged and all other learners achieving national curriculum levels 3-6 in each of reading, writing and maths in 2012 and 2013 respectively

                   L3           L4           L5           L6
                  D   O  Gap   D   O  Gap   D   O  Gap   D   O  Gap
Reading  2012    11   6   +5  46  36  +10  33  54  -21   0   0    0
         2013    12   6   +6  48  38  +10  30  50  -20   0   1   -1
Writing  2012    22  11  +11  55  52   +3  15  32  -17   0   1   -1
         2013    19  10   +9  56  52   +4  17  34  -17   1   2   -1
Maths    2012    17   9   +8  50  41   +9  23  43  -20   1   4   -3
         2013    16   9   +7  50  41   +9  24  39  -15   2   8   -6

 

To summarise, as far as KS2 performance is concerned, there are significant imbalances at both the top and the bottom of the attainment distribution and these gaps have not changed significantly since 2012. There is some evidence to suggest that gaps at the top are larger than those at the bottom.

 

Secondary

Unfortunately there is a dearth of comparable data at secondary level, principally because of the absence of published measures of high attainment.

SFR05/2014 provides us with FSM gaps (as opposed to disadvantaged gaps) for a series of GCSE measures, none of which serve our purpose particularly well:

  • 5+ A*-C GCSE grades: gap = 16.0%
  • 5+ A*-C grades including English and maths GCSEs: gap = 26.7%
  • 5+ A*-G grades: gap = 7.6%
  • 5+ A*-G grades including English and maths GCSEs: gap = 9.9%
  • A*-C grades in English and maths GCSEs: gap = 26.6%
  • Achieving the English Baccalaureate: gap = 16.4%

Perhaps all we can deduce is that the gaps vary considerably in size, but tend to be smaller for the relatively less demanding and larger for the relatively more demanding measures.

For specific high attainment measures we are forced to rely principally on data snippets released in answer to occasional Parliamentary Questions.

For example:

  • In 2003, 1.0% of FSM-eligible learners achieved five or more GCSEs at A*/A including English and maths but excluding equivalents, compared with 6.8% of those not eligible, giving a gap of 5.8%. By 2009 the comparable percentages were 1.7% and 9.0% respectively, giving an increased gap of 7.3% (Col 568W)
  • In 2006/07, the percentage of FSM-eligible pupils securing A*/A grades at GCSE in different subjects, compared with the percentage of all pupils in maintained schools doing so were as shown in the table below (Col 808W)

 

Table 4: Percentage of FSM-eligible and all pupils achieving GCSE A*/A grades in different GCSE subjects in 2007

            FSM   All pupils   Gap
Maths       3.7      15.6     11.9
Eng lit     4.1      20.0     15.9
Eng lang    3.5      16.4     12.9
Physics     2.2      49.0     46.8
Chemistry   2.5      48.4     45.9
Biology     2.5      46.8     44.3
French      3.5      22.9     19.4
German      2.8      23.2     20.4

 

  • In 2008, 1% of FSM-eligible learners in maintained schools achieved A* in GCSE maths compared with 4% of all pupils in maintained schools. The comparable percentages for Grade A were 3% and 10% respectively, giving an A*/A gap of 10% (Col 488W)

There is much variation in the subject-specific outcomes at A*/A described above. But, when it comes to the overall 5+ GCSEs high attainment measure based on grades A*/A, the gap is much smaller than on the corresponding standard measure based on grades A*-C.

Further material of broadly the same vintage is available in a 2007 DfE statistical publication: ‘The Characteristics of High Attainers’.

There is a complex pattern in evidence here which is very hard to explain with the limited data available. More time series data of this nature – illustrating Excellence and Foundation Gaps alike – should be published annually so that we have a more complete and much more readily accessible dataset.

I could find no information at all about the comparative performance of disadvantaged learners in AP settings compared with those not from disadvantaged backgrounds.

Data is published showing the FSM gap for SEN learners on all the basic GCSE measures listed above. I have retained the generic FSM gaps in brackets for the sake of comparison:

  • 5+ A*-C GCSE grades: gap = 12.5% (16.0%)
  • 5+ A*-C grades including English and maths GCSEs: gap = 12.1% (26.7%)
  • 5+ A*-G grades: gap = 10.4% (7.6%)
  • 5+ A*-G grades including English and maths GCSEs: gap = 13.2% (9.9%)
  • A*-C grades in English and maths GCSEs: gap = 12.3% (26.6%)
  • Achieving the English Baccalaureate: gap = 3.5% (16.4%)

One can see that the FSM gaps for the more demanding measures are generally lower for SEN learners than they are for all learners. This may be interesting but, for the reasons given above, this is not a reliable proxy for the FSM gap amongst ‘dim’ learners.

 

When it comes to fair access to Oxbridge, I provided a close analysis of much relevant data in this post from November 2013.

The chart below shows the number of 15 year-olds eligible for and claiming FSM at age 15 who progressed to Oxford or Cambridge by age 19. The figures are rounded to the nearest five.

 

Chart 1: FSM-eligible learners admitted to Oxford and Cambridge 2005/06 to 2010/11

[Chart not reproduced here]

 

In sum, there has been no change in these numbers over the last six years for which data has been published. So while there may have been consistently significant expenditure on access agreements and multiple smaller mentoring programmes, it has had negligible impact on this measure at least.

My previous post set out a proposal for what to do about this sorry state of affairs.

 

Budgets

For the purposes of this discussion we need ideally to identify and compare total national budgets for the ‘poor but bright’ and the ‘poor but dim’. But that is simply not possible. 

Many funding streams cannot be disaggregated in this manner. As we have seen, some – including the AP and SEN budgets – may be aligned erroneously with the second of these groups, although they also support learners who are neither ‘poor’ nor ‘dim’ and have a broader purpose than raising attainment. 

There may be some debate, too, about which funding streams should be weighed in the balance.

On the ‘bright but poor’ side, do we include funding for grammar schools, even though the percentage of disadvantaged learners attending many of them is virtually negligible (despite recent suggestions that some are now prepared to do something about this)? Should the Music and Dance Scheme (MDS) be within scope of this calculation?

The best I can offer is a commentary that gives a broad sense of orders of magnitude, to illustrate in very approximate terms how the scales tend to tilt more towards the ‘poor but dim’ rather than the ‘poor but bright’, but also to weave in a few relevant asides about some of the funding streams in question.

 

Pupil Premium and the EEF

I begin with the Pupil Premium, which provides schools with additional funding to raise the attainment of disadvantaged learners.

The Premium is not attached to the learners who qualify for it, so schools are free to aggregate the funding and use it as they see fit. They are held accountable for these decisions through Ofsted inspection and the gap-narrowing measures in the Performance Tables.

Mr Thomas suggests in our Twitter discussion that AP students are not significant beneficiaries of such support, although provision in PRUs features prominently in the published evaluation of the Premium. It is for local authorities to determine how Pupil Premium funding is allocated in AP settings.

One might also make a case that ‘bright but poor’ learners are not a priority either, despite suggestions from the Pupil Premium Champion to the contrary.

 

 

As we have seen, the Performance Tables are not sharply enough focused on the excellence gaps at the top of the distribution and I have shown elsewhere that Ofsted’s increased focus on the most able does not yet extend to the impact on those attracting the Pupil Premium, even though there was a commitment that it would do so. 

If there is Pupil Premium funding heading towards high attainers from disadvantaged backgrounds, the limited data to which we have access does not yet suggest a significant impact on the size of Excellence Gaps. 

The ‘poor but bright’ are not a priority for the Education Endowment Foundation (EEF) either.

This 2011 paper explains that it is prioritising the performance of disadvantaged learners in schools below the floor targets. At one point it says:

‘Looking at the full range of GCSE results (as opposed to just the proportions who achieve the expected standards) shows that the challenge facing the EEF is complex – it is not simply a question of taking pupils from D to C (the expected level of attainment). Improving results across the spectrum of attainment will mean helping talented pupils to achieve top grades, while at the same time raising standards amongst pupils currently struggling to pass.’

But this is just after it has shown that the percentages of disadvantaged high attainers in its target schools are significantly lower than elsewhere. Other things being equal, the ‘poor but dim’ will be the prime beneficiaries.

It may now be time for the EEF to expand its focus to all schools. A diagram from this paper – reproduced below – demonstrates that, in 2010, the attainment gap between FSM and non-FSM was significantly larger in schools above the floor than in those below the floor that the EEF is prioritising. This is true in both the primary and secondary sectors.

It would be interesting to see whether this is still the case.

 

[EEF diagram not reproduced: FSM/non-FSM attainment gaps in schools above and below the floor]

 

AP and SEN

Given the disaggregation problems discussed above, this section is intended simply to give some basic sense of orders of magnitude – lending at least some evidence to counter Mr Thomas’ assertion that the ‘effort and resources, of schools… are directed disproportionately at those who are already high achieving – the poor but bright’.

It is surprisingly hard to get a grip on the overall national budget for AP. A PQ from early 2011 (Col 75W) supplies a net current expenditure figure for all English local authorities of £530m.

Taylor’s Review fails to offer a comparable figure but my rough estimates, based on the per pupil costs he supplies, suggest a revenue budget of at least £400m. (Taylor suggests average per pupil costs of £9,500 per year for full-time AP, although PRU places are said to cost between £12,000 and £18,000 per annum.)
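
For transparency, the back-of-envelope arithmetic behind that ‘at least £400m’ estimate can be sketched as follows. The pupil numbers and unit costs are Taylor’s; the assumptions that every placement is full-time and that PRU places cost the midpoint of his quoted range are mine, and both are crude:

# Rough estimate of the national AP revenue budget from Taylor's figures.
PRU_PUPILS = 14050        # DfE 2011 AP census
OTHER_AP_PUPILS = 23020   # DfE 2011 AP census
AVG_AP_COST = 9500        # Taylor: average full-time AP cost per pupil per year
PRU_COST = 15000          # assumption: midpoint of Taylor's £12,000-£18,000 PRU range

# Treating all placements as full-time (the FTE figure is unknown, so this overstates).
low = (PRU_PUPILS + OTHER_AP_PUPILS) * AVG_AP_COST            # ~£352m
mid = PRU_PUPILS * PRU_COST + OTHER_AP_PUPILS * AVG_AP_COST   # ~£429m

print(f"Low estimate: £{low / 1e6:.0f}m")
print(f"Mid estimate: £{mid / 1e6:.0f}m")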

I found online a consultation document from Kent – England’s largest local authority – stating its revenue costs at over £11m in FY2014-15. Some 454 pupils attended Kent’s AP/PRU provision in 2012-13.

There must also be a significant capital budget. There are around 400 PRUs, not to mention a growing cadre of specialist AP academies and free schools. The total capital cost of the first AP free school – Derby Pride Academy – was £2.147m for a 50-place setting.

In FY2011-12, total annual national expenditure on SEN was £5.77 billion (Col 391W). There will have been some cost-cutting as a consequence of the latest reforms, but the order of magnitude is clear.

The latest version of the SEN Code of Practice outlines the panoply of support available, including the compulsory requirement that each school has a designated teacher to be responsible for co-ordinating SEN provision (the SENCO).

In short, the national budget for AP is sizeable and the national budget for SEN is huge. Per capita expenditure is correspondingly high. If we could isolate the proportion of these budgets allocated to raising the attainment of the ‘poor but dim’, the total would be substantial.

 

Fair Access, especially to Oxbridge, and some related observations

Mr Thomas refers specifically to funding to support fair access to universities – especially Oxbridge – for those from disadvantaged backgrounds. This is another area in which it is hard to get a grasp on total expenditure, not least because of the many small-scale mentoring projects that exist.

Mr Thomas is quite correct to remark on the sheer number of these, although they are relatively small beer in budgetary terms. (One suspects that they would be much more efficient and effective if they could be linked together within some sort of overarching framework.)

The Office for Fair Access (OFFA) estimates University access agreement expenditure on outreach in 2014-15 at £111.9m and this has to be factored in, as does DfE’s own small contribution – the Future Scholar Awards.

Were any expenditure in this territory to be criticised, it would surely be the development and capital costs for new selective 16-19 academies and free schools that specifically give priority to disadvantaged students.

The sums are large, perhaps not outstandingly so compared with national expenditure on SEN for example, but they will almost certainly benefit only a tiny localised proportion of the ‘bright but poor’ population.

There are several such projects around the country. Some of the most prominent are located in London.

The London Academy of Excellence (capacity 420) is fairly typical. It cost an initial £4.7m to establish, plus a lease requiring a further £400K annually.

But this is dwarfed by the projected costs of the Harris Westminster Sixth Form, scheduled to open in September 2014. Housed in a former government building, it has a reputed capital cost of £45m for a 500-place institution.

There were reportedly disagreements within Government:

‘It is understood that the £45m cost was subject to a “significant difference of opinion” within the DfE where critics say that by concentrating large resources on the brightest children at a time when budgets are constrained means other children might miss out…

But a spokeswoman for the DfE robustly defended the plans tonight. “This is an inspirational collaboration between the country’s top academy chain and one of the best private schools in the country,” she said. “It will give hundreds of children from low income families across London the kind of top quality sixth-form previously reserved for the better off.”’

Here we have in microcosm the debate to which this post is dedicated.

One blogger – a London College Principal – pointed out that the real issue was not whether the brightest should benefit over others, but how few of the ‘poor but bright’ would do so:

‘£45m could have a transformative effect on thousands of 16-19 year olds across London… £45m could have funded at least 50 extra places in each college for over 10 years, helped build excellent new facilities for all students and created a city-wide network to support gifted and talented students in sixth forms across the capital working with our partner universities and employers.’

 

Summing Up

There are three main elements to the discussion: the point of principle, the inputs and the impact. The following sections deal with each of these in turn.

 

Principle

Put bluntly, should ‘poor but dim’ kids have higher priority for educators than ‘poor but bright’ kids (Mr Thomas’ position) or should all poor kids have equal priority and an equal right to the support they need to achieve their best (the Gifted Phoenix position)?

For Mr Thomas, it seems this priority is determined by whether – and how far – the learner is behind undefined ‘basic levels of attainment’ and/or mastery of ‘the basics’ (presumably literacy and numeracy).

Those below the basic attainment threshold have higher priority than those above it. He does not say so but this logic suggests that those furthest below the threshold are the highest priority and those furthest above are the lowest.

So, pursued to its logical conclusion, this would mean that the highest attainers would get next to no support, while those with the most profound difficulties would be the highest priority of all.

However, since Mr Thomas’ focus is on marginal benefit, it may be that those nearest the threshold would be first in the queue for scarce resources, because they would require the least effort and resources to lift above it.

This philosophy drives the emphasis on achievement of national benchmarks and predominant focus on borderline candidates that, until recently, dominated our assessment and accountability system.

For Gifted Phoenix, every socio-economically disadvantaged learner has equal priority to the support they need to improve their attainment, by virtue of that disadvantage.

There is no question of elevating some ahead of others in the pecking order because they are further behind on key educational measures since, in effect, that is penalising some disadvantaged learners on the grounds of their ability or, more accurately, their prior attainment.

This philosophy underpins the notion of personalised education and is driving the recent and welcome reforms of the assessment and accountability system, designed to ensure that schools are judged by how well they improve the attainment of all learners, rather than predominantly on the basis of the proportion achieving the standard national benchmarks.

I suggested that, in deriding ‘the romance of the poor but bright’, Mr Thomas ran the risk of falling into ‘the slough of anti-elitism’. He rejected that suggestion, while continuing to emphasise the need to ‘concentrate more’ on ‘those at risk of never being able to engage with society’.

I have made the assumption that Mr Thomas is interested primarily in KS2 and GCSE or equivalent qualifications at KS4, given his references to KS2 L4, basic skills and ‘paper qualifications needed to enter meaningful employment’.

But his additional references to ‘real qualifications’ (as opposed to paper ones) and engaging with society could well imply a wider range of personal, social and work-related skills for employability and adult life.

My preference for equal priority would apply regardless: there is no guarantee that high attainers from disadvantaged backgrounds will necessarily possess these vital skills.

But, as indicated in the definition above, there is an important distinction to be maintained between:

  • educational support to raise the attainment, learning and employability skills of socio-economically disadvantaged learners and prepare them for adult life and
  • support to manage a range of difficulties – whether behavioural problems, disability, physical or mental impairment – that impact on the broader life chances of the individuals concerned.

Such a distinction may well be masked in the everyday business of providing effective holistic support for learners facing such difficulties, but this debate requires it to be made and sustained given Mr Thomas’s definition of the problem in terms of the comparative treatment of the ‘poor but bright’ and the ‘poor but dim’.

Having made this distinction, it is not clear whether he himself sustains it consistently through to the end of his post. In the final paragraphs the term ‘poor but dim’ begins to morph into a broader notion encompassing all AP and SEN learners regardless of their socio-economic status.

Additional dimensions of disadvantage are potentially being brought into play. This is inconsistent and radically changes the nature of the argument.

 

Inputs

By inputs I mean the resources – financial and human – made available to support the education of ‘dim’ and ‘bright’ disadvantaged learners respectively.

Mr Thomas also shifts his ground as far as inputs are concerned.

His post opens with a statement that ‘the effort and resources’ of schools, charities and businesses are ‘directed disproportionately’ at the poor but bright – and he exemplifies this with reference to fair access to competitive universities, particularly Oxbridge.

When I point out the significant investment in AP compared with fair access, he changes tack – ‘I’m measuring outcomes not just inputs’.

Then later he says ‘But what some need is just more expensive’, to which I respond that ‘the bottom end already has the lion’s share of funding’.

At this point we have both fallen into the trap of treating the entirety of the AP and SEN budgets as focused on the ‘poor but dim’.

We are failing to recognise that they are poor proxies because the majority of AP and SEN learners are not ‘poor’, many are not ‘dim’, these budgets are focused on a wider range of needs and there is significant additional expenditure directed at ‘poor but dim’ learners elsewhere in the wider education budget.

Despite Mr Thomas’s opening claim, it should be reasonably evident from the preceding commentary that my ‘lion’s share’ point is factually correct. His suggestion that AP is ‘largely unknown and wholly unappreciated’ flies in the face of the Taylor Review and the Government’s subsequent work programme.

SEN may depend heavily on the ‘extraordinary effort of dedicated staff’, but at least there are such dedicated staff! There may be inconsistencies in local authority funding and support for SEN, but the global investment is colossal by comparison with the funding dedicated on the other side of the balance.

Gifted Phoenix’s position acknowledges that inputs are heavily loaded in favour of the SEN and AP budgets. This is as it should be since, as Mr Thomas rightly notes, many of the additional services they need are frequently more expensive to provide. These services are not simply dedicated to raising their attainment, but also to tackling more substantive problems associated with their status.

Whether the balance of expenditure on the ‘bright’ and ‘dim’ respectively is optimal is a somewhat different matter. Contrary to Mr Thomas’s position, gifted advocates are often convinced that too much largesse is focused on the latter at the expense of the former.

Turning to advocacy, Mr Thomas says ‘we end up ignoring the more pressing problem’ of the poor but dim. He argues in the Twitter discussion that too few people are advocating for these learners, adding that they are failed ‘because it’s not popular to talk about them’.

I could not resist countering that advocacy for gifted learners is equally unpopular, indeed ‘the word is literally taboo in many settings’. I cannot help thinking – from his footnote reference to ‘epigenetic coding’ – that Mr Thomas is amongst those who are distinctly uncomfortable with the term.

Where advocacy does survive it is focused exclusively on progression to competitive universities and, to some extent, high attainment as a route towards that outcome. The narrative has shifted away from concepts of high ability or giftedness, because of the very limited consensus about that condition (even amongst gifted advocates) and even considerable doubt in some quarters whether it exists at all.

 

Outcomes

Mr Thomas maintains in his post that the successes of his preferred target group ‘have the power to change the British economy, far more so than those of their brighter peers’. This is because ‘the gap most damaging to society is in life outcomes for the children who perform least well at school’.

As noted above, it is important to remember that we are discussing here the addition of educational and economic value by tackling underachievement amongst learners from disadvantaged backgrounds, rather than amongst all the children who perform least well.

We are also leaving to one side the addition of value through any wider engagement by health and social services to improve life chances.

It is quite reasonable to advance the argument that improving the outcomes of ‘the bottom 20%’ (the Tail) will have ‘a huge socio-economic impact’ and ‘make the biggest marginal difference to society’.

But one could equally make the case that society would derive similar or even higher returns from a decision to concentrate disproportionately on the highest attainers (the Smart Fraction).

Or, as Gifted Phoenix would prefer, one could reasonably propose that the optimal returns should be achieved by means of a balanced approach that raises both the floor and the ceiling, avoiding any arbitrary distinctions on the basis of prior attainment.

From the Gifted Phoenix perspective, one should balance the advantages of removing the drag on productivity of an educational underclass against those of developing the high-level human capital needed to drive economic growth and improve our chances of success in what Coalition ministers call the ‘global race’.

According to this perspective, by eliminating excellence gaps between disadvantaged and advantaged high attainers we will secure a stream of benefits broadly commensurate to that at the bottom end.

These will include substantial spillover benefits, achieved as a result of broadening the pool of successful leaders in political, social, educational and artistic fields, not to mention significant improvements in social mobility.

It is even possible to argue that, by creating a larger pool of more highly educated parents, we can also achieve a significant positive impact on the achievement of subsequent generations, thus significantly reducing the size of the tail.

And in the present generation we will create many more role models: young people from disadvantaged backgrounds who become educationally successful and who can influence the aspirations of younger disadvantaged learners.

This avoids the risk that low expectations will be reinforced and perpetuated through a ‘deficit model’ approach that places excessive emphasis on removing the drag from the tail by producing a larger number of ‘useful members of society’.

This line of argument is integral to the Gifted Phoenix Manifesto.

It seems to me entirely conceivable that economists might produce calculations to justify any of these different paths.

But it would be highly inequitable to put all our eggs in the ‘poor but bright’ basket, because that penalises some disadvantaged learners for their failure to achieve high attainment thresholds.

And it would be equally inequitable to focus exclusively on the ‘poor but dim’, because that penalises some disadvantaged learners for their success in becoming high attainers.

The more equitable solution must be to opt for a ‘balanced scorecard’ approach that generates a proportion of the top end benefits and a proportion of the bottom end benefits simultaneously.

There is a risk that this reduces the total flow of benefits, compared with one or other of the inequitable solutions, but there is a trade-off here between efficiency and a socially desirable outcome that balances the competing interests of the two groups.

 

The personal dimension

After we had finished our Twitter exchanges, I thought to research Mr Thomas online. Turns out he’s quite the Big-Cheese-in-Embryo. Provided he escapes the lure of filthy lucre, he’ll be a mover and shaker in education within the next decade.

I couldn’t help noticing his own educational experience – public school, a First in PPE from Oxford, leading light in the Oxford Union – then graduation from Teach First alongside internships with Deutsche Bank and McKinsey.

Now he’s serving his educational apprenticeship as joint curriculum lead for maths at a prominent London Academy. He’s also a trustee of ‘a university mentoring project for highly able 11-14 year old pupils from West London state schools’.

Lucky I didn’t check earlier. Such a glowing CV might have been enough to cow this grammar school Oxbridge reject, even if I did begin this line of work several years before he was born. Not that I have a chip on my shoulder…

The experience set me wondering about the dominant ideology amongst the Teach First cadre, and how it is tempered by extended exposure to teaching in a challenging environment.

There’s more than a hint of idealism about someone from this privileged background espousing the educational philosophy that Mr Thomas professes. But didn’t he wonder where all the disadvantaged people were during his own educational experience, and doesn’t he want to change that too?

His interest in mentoring highly able pupils would suggest that he does, but also seems directly to contradict the position he’s reached here. It would be a pity if the ‘poor but bright’ could not continue to rely on his support, equal in quantity and quality to the support he offers the ‘poor but dim’.

For he could make a huge difference at both ends of the attainment spectrum – and, with his undeniable talents, he should certainly be able to do so.

 

Conclusion

We are entertaining three possible answers to the question whether in principle to prioritise the needs of the ‘poor but bright’ or the ‘poor but dim’:

  • Concentrate principally – perhaps even exclusively – on closing the Excellence Gaps at the top
  • Concentrate principally – perhaps even exclusively – on closing the Foundation Gaps at the bottom
  • Concentrate equally across the attainment spectrum, at the top and bottom and all points in between.

Speaking as an advocate for those at the top, I favour the third option.

It seems to me incontrovertible – though hard to quantify – that, in the English education system, the lion’s share of resources goes towards closing the Foundation Gaps.

That is perhaps as it should be, although one could wish that the financial scales were not tipped so excessively in their direction, for ‘poor but bright’ learners do in my view have an equal right to challenge and support, and should not be penalised for their high attainment.

Our current efforts to understand the relative size of the Foundation and Excellence Gaps and how these are changing over time are seriously compromised by the limited data in the public domain.

There is a powerful economic case to be made for prioritising the Foundation Gaps as part of a deliberate strategy for shortening the tail – but an equally powerful case can be constructed for prioritising the Excellence Gaps, as part of a deliberate strategy for increasing the smart fraction.

Neither of these options is optimal from an equity perspective however. The stream of benefits might be compromised somewhat by not focusing exclusively on one or the other, but a balanced approach should otherwise be in our collective best interests.

You may or may not agree. Here is a poll so you can register your vote. Please use the comments facility to share your wider views on this post.

 

 

 

 

Epilogue

 

We must beware the romance of the poor but bright,

But equally beware

The romance of rescuing the helpless

Poor from their sorry plight.

We must ensure the Tail

Does not wag the disadvantaged dog!

 

GP

May 2014

One for The Echo Chamber

 

We must get better at educating clever kids


A Closer Look at Level 6

This post provides a data-driven analysis of Level 6 (L6) performance at Key Stage 2, so as to:


  • Marshal the published information and provide a commentary that properly reflects this bigger picture;
  • Establish which data is not yet published but ought to be in the public domain;
  • Provide a baseline against which to measure L6 performance in the 2014 SATs; and
  • Initiate discussion about the likely impact of new tests for the full attainment span on the assessment and performance of the highest attainers, both before and after those tests are introduced in 2016.

Following an initial section highlighting key performance data across the three L6 tests – reading; grammar, punctuation and spelling (GPS); and maths – the post undertakes a more detailed examination of L6 achievement in English, maths and science, taking in both teacher assessment and test outcomes.

It concludes with a summary of key findings reflecting the four purposes above.

Those who prefer not to read the substantive text can jump straight to the summary from here.

I apologise in advance for any transcription errors and statistical shortcomings in the analysis below.

Background

Relationship with previous posts

This discussion picks up themes explored in several previous posts.

In May 2013 I reviewed an Investigation of Level 6 Key Stage 2 Tests commissioned and published in February that year by the Department for Education.

My overall assessment of that report?

‘A curate’s egg really. Positive and useful in a small way, not least in reminding us that primary-secondary transition for gifted learners remains problematic, but also a missed opportunity to flag up some other critical issues – and of course heavily overshadowed by the primary assessment consultation on the immediate horizon.’

The performance of the highest primary attainers also featured strongly in an analysis of the outcomes of NAHT’s Commission on Assessment (February 2014) and this parallel piece on the response to the consultation on primary assessment and accountability (April 2014).

The former offered the Commission two particularly pertinent recommendations, namely that it should:

‘shift from its narrow and ‘mildly accelerative’ view of high attainment to accommodate a richer concept that combines enrichment (breadth), extension (depth) and acceleration (faster pace) according to learners’ individual needs.’

Additionally it should:

‘incorporate a fourth ‘far exceeded’ assessment judgement, since the ‘exceeded’ judgement covers too wide a span of attainment.’

The latter discussed plans to discontinue L6 tests by introducing from 2016 single tests for the full attainment span at the end of KS2, from the top of the P-scales to a level the initial consultation document described as ‘at least of the standard of’ the current L6.

It opined:

‘The task of designing an effective test for all levels of prior attainment at the end of key stage 2 is…fraught with difficulty…I have grave difficulty in understanding how such assessments can be optimal for high attainers and fear that this is bad assessment practice.’

Aspects of L6 performance also featured in a relatively brief review of High Attainment in 2013 Primary School Performance Tables (December 2013). This post expands significantly on the relevant data included in that one.

The new material is drawn from three principal sources:

The recent history of L6 tests

Level 6 tests have a rather complex history. The footnotes to SFR 51/2013 simplify this considerably, noting that:

  • L6 tests were initially available from 1995 to 2002
  • In 2010 there was a L6 test for mathematics only
  • Since 2012 there have been tests of reading and mathematics
  • The GPS test was introduced in 2013.

In fact, the 2010 maths test was the culmination of an earlier QCDA pilot of single level tests. In that year the results from the pilot were reported as statutory National Curriculum test results in pilot schools.

In 2011 optional L6 tests were piloted in reading, writing and maths. These were not externally marked and the results were not published.

The June 2011 Bew Report came out in favour:

‘We believe that the Government should continue to provide level 6 National Curriculum Tests for schools to use on an optional basis, whose results should be reported to parents and secondary schools.’

Externally marked L6 tests were offered in reading and maths in 2012, alongside L6 teacher assessment in writing. The GPS test was added to the portfolio in the following year.

In 2012, ministers were talking up the tests, describing them as:

‘…a central element in the Coalition’s drive to ensure that high ability children reach their potential. Nick Gibb, the schools minister, said: “Every child should be given the opportunity to achieve to the best of their abilities.

“These tests will ensure that the brightest pupils are stretched and standards are raised for all.”’

In 2012 the Primary Performance Tables used L6 results only in the calculation of ‘level 5+’, APS, value-added and progress measures, but this was not the case in 2013.

The Statement of Intent on the Tables said:

‘…the percentage of the number of children at the end of Key Stage 2 achieving level 6 in a school will also be shown in performance tables. The Department will not publish any information at school level about the numbers of children entered for the level 6 tests, or the percentage achieving level 6 of those entered for level 6.’

The nature of the tests is unchanged for 2014: they took place on 12, 13 and 15 May respectively. This post is timed to coincide with their administration.

The KS2 ARA booklet continues to explain that:

‘Children entered for level 6 tests are required to take the levels 3-5 tests. Headteachers should consider a child’s expected attainment before registering them for the level 6 tests as they should be demonstrating attainment above level 5. Schools may register children for the level 6 tests and subsequently withdraw them.

The child must achieve a level 5 in the levels 3-5 test and pass the corresponding level 6 test in the same year in order to be awarded an overall level 6 result. If the child does not pass the level 6 test they will be awarded the level achieved in the levels 3-5 test.’

Anticipated future developments

At the time of writing the Government has not published a Statement of Intent explaining whether there will be any change in the reporting of L6 results in the December 2014 Primary School Performance Tables.

An accompanying Data Warehouse (aka Portal) is also under development and early iterations are expected to appear before the next set of Tables. The Portal will make available a wider range of performance data, some of it addressing high attainment.

The discussion in this post of material not yet in the public domain is designed in part as a marker to influence consideration of material for inclusion in the Portal.

As noted above, the Government has published its response to the consultation on primary assessment and accountability arrangements, confirming that new single assessments for the full attainment span will be introduced in 2016.

At the time of writing, there is no published information about the number of entries for the 2014 tests. (In 2013 these details were released in the reply to a Parliamentary Question.)

Entries had to be confirmed by March 2014, so it may be that the decision to replace the L6 tests – not confirmed until that same month – did not depress demand in 2014. The effect on 2015 entries remains to be seen, but there is a real risk that these will be significantly depressed.

L6 tests are scheduled to be taken for the final time in May 2015. The reading and maths tests will have been in place for four consecutive years; the GPS test for three.

Under the new arrangements there will continue to be tests in reading, GPS and maths – plus a sampling test in science – as well as teacher assessment in reading, writing, maths and science.

KS2 test outcomes (but not teacher assessment) will be reported by means of a scaled score for each test, alongside three average scaled scores, for the school, the local area and nationally.

The original consultation document proposed that each scaled score would be built around a ‘secondary readiness standard’ loosely aligned with the current L4B, but converted into a score of 100.

The test development frameworks mention that:

‘at the extremes of the scaled score distribution, as is standard practice, the scores will be truncated such that above and below a certain point, all children will be awarded the same scaled score in order to minimise the effect for children at the ends of the distribution where the test is not measuring optimally.’
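
To make the truncation mechanics concrete, here is a minimal sketch. The conversion slope, floor and ceiling are invented for illustration only, since the real raw-to-scaled conversion has not been published.

```python
# Toy sketch of a raw-to-scaled score conversion with truncation at the
# extremes. All parameters are illustrative assumptions, not the real
# conversion, which the test developers have yet to publish.
def scaled_score(raw, expected_raw=50, points_per_raw=0.5,
                 floor=80, ceiling=120):
    """Map a raw mark to a scaled score centred on 100 (the proposed
    'secondary readiness standard'), truncating so that all children
    beyond the assumed floor or ceiling receive the same scaled score."""
    score = 100 + (raw - expected_raw) * points_per_raw
    return max(floor, min(ceiling, round(score)))

print(scaled_score(50))  # 100 - at the readiness standard
print(scaled_score(95))  # 120 - truncated at the assumed ceiling
```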

A full set of sample materials including tests and mark schemes for every test will be published by September 2015, the beginning of the academic year in which the new tests are first deployed.

The consultation document said these single tests would:

‘include challenging material (at least of the standard of the current level 6 test) which all pupils will have the opportunity to answer, without the need for a separate test’.

The development frameworks published on 31 March made it clear that the new tests should:

‘provide a suitable challenge for all children and give every child the opportunity to achieve as high a standard…as possible.’

Additionally:

‘In order to improve general accessibility for all children, where possible, questions will be placed in order of difficulty.’

These various and potentially conflicting statements informed the opinion I have already repeated.

The question then arises whether the Government’s U-turn on separate tests for the highest attainers is in those learners’ best interests. There cannot be a continuation of L6 tests per se, because the system of levels that underpins them will no longer exist, but separate tests could in principle continue.

Even if the new universal tests provide equally valid and reliable judgements of their attainment – which is currently open to question – one might reasonably argue that the U-turn itself may undermine continuity of provision and continued improvement in schools’ practice.

The fact that this practice needs substantive improvement is evidenced by Ofsted’s recent decision to strengthen the attention given to the attainment and progress of what they call ‘the most able’ in all school inspection reports.

L6 tests: Key Performance Data

Entry and success rates

As noted above, the information in the public domain about entry rates to L6 tests is incomplete.

The 2013 Investigation provides the number of pupils entered for each test in 2012. We do not have comparable data for 2013, but a PQ reply does supply the number of pupils registered for the tests in both 2012 and 2013. This can be supplemented by material in the 2013 SFR and the corresponding 2012 publication.

The available data is synthesised in this table showing for each year – and where available – the number registered for each test, the number entered, the total number of pupils achieving L6 and, of those, the number attending state-funded schools.

                     2012                               2013
          Reg       Ent       Pass      Pass SF    Reg       Ent    Pass      Pass SF
Reading   47,148    46,810    942       x          73,118    x      2,262     2,137
GPS       x         x         x         x          61,883    x      8,606     x
Maths     55,809    55,212    18,953    x          80,925    x      35,137    33,202

One can see that there are relatively small differences between the numbers of pupils registered and the number entered, so the former is a decent enough proxy for the latter. I shall use the former in the calculations immediately below.

It is also evident that the proportions of learners attending independent schools who achieve L6 are small though significant. But, given the incomplete data set for state-funded schools, I shall use the pass rate for all schools in the following calculations.

In sum then, in 2012, the pass rates per registered entry were:

  • Reading – 2.0%
  • Maths – 34.0%

And in 2013 they were:

  • Reading – 3.1%
  • GPS – 13.9%
  • Maths – 43.4%

The pass rates in 2013 have improved significantly in both reading and maths, the former from a very low base. However, the proportion of learners successful in the L6 reading test remains extremely small.
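
For the record, a minimal sketch of that arithmetic, using the 2013 registration and pass figures from the table above and treating registrations as a proxy for entries:

```python
# 2013 L6 pass rates per registered entry, treating registrations as a
# proxy for entries (figures from the table above).
registered = {"reading": 73_118, "GPS": 61_883, "maths": 80_925}
passed     = {"reading":  2_262, "GPS":  8_606, "maths": 35_137}

for test in registered:
    rate = 100 * passed[test] / registered[test]
    print(f"{test}: {rate:.1f}%")  # reading 3.1%, GPS 13.9%, maths 43.4%
```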

The 2013 Investigation asserted, on the basis of the 2012 results, that:

‘The cost of supporting and administering a test for such a small proportion of the school population appears to outweigh the benefits’

However it did not publish any information about that cost.

It went on to suggest that there is a case for reviewing whether the L6 test is the most appropriate means to ‘identify a range of higher performing pupils, for example the top 10%’. The Government chose not to act on this suggestion.

Gender, ethnic background and disadvantage

The 2013 results demonstrate some very significant gender disparities, as revealed in Chart 1 below.

Girls account for 62% of successful pupils in GPS and a whopping 74% in reading, while boys account for 61% of successful pupils in maths. These imbalances raise important questions about whether gender differences in high attainment are really this pronounced, or whether there is significant underachievement amongst the under-represented gender in each case.

Chart 1: Number of pupils successful in 2013 L6 tests by gender


There are equally significant disparities in performance by ethnic background. Chart 2 below illustrates how the performance of three selected ethnic groups – white, Asian and Chinese – varies by test and gender.

It shows that pupils from Chinese backgrounds have a marked ascendancy in all three tests, while Asian pupils are ahead of white pupils in GPS and maths but not reading. Within all three ethnic groups, girls lead in reading and GPS while boys lead in maths. Chinese girls comfortably out-perform white and Asian boys.

Chinese pupils are way ahead in maths, with 29% overall achieving L6 and an astonishing 35% of Chinese boys achieving this outcome.

The reasons for this vast disparity are not explained and raise equally awkward questions about the distribution of high attainment and the incidence of underachievement.

 

Chart 2: Percentages of pupils successful in 2013 L6 tests by gender and selected ethnic background


There are also significant excellence gaps on each of the tests, though these are hard to visualise when working solely with percentages (pupil numbers have not been published).

The percentage variations are shown in the table below. This sets out the FSM gap and the disadvantaged gap, the latter being based on the ever-6 FSM measure that underpins the Pupil Premium.

These figures suggest that, while learners eligible for the Pupil Premium are demonstrating success on the maths test (and, for girls at least, on the GPS test too), they are only about a third as likely to be successful as those from advantaged backgrounds. The impact of the Pupil Premium is therefore limited.

The gap between the two groups reaches as high as 7 percentage points for boys in maths. Although this is low by comparison with the corresponding gap at level 4, it is nonetheless significant. There is more about excellence gaps in maths below.

 

          Reading       GPS           Maths
          G     B       G     B       G     B
FSM       0     0       1     0       2     3
Non-FSM   1     0       2     1       6     9
Gap       1     0       1     1       4     6

Dis       0     0       1     0       2     3
Non-Dis   1     0       3     2       7     10
Gap       1     0       2     2       5     7
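
The gap arithmetic is simple subtraction; a minimal sketch using the rounded maths figures from the table above:

```python
# Excellence gaps in the 2013 L6 maths test (rounded percentages from
# the table above): gap = advantaged success rate minus disadvantaged.
fsm     = {"girls": 2, "boys": 3}
non_fsm = {"girls": 6, "boys": 9}

for group in fsm:
    gap = non_fsm[group] - fsm[group]
    print(f"maths FSM gap, {group}: {gap} percentage points")
# girls: 4, boys: 6
```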

Schools achieving L6 success

Finally in this opening section, a comparison of schools achieving L6 success in the 2013 Primary School Performance Tables reveals different patterns for each test.

The table below shows how many schools secured different percentages of pupils at L6. The number of schools achieving 11-20% at L6 in the GPS test is over twelve times the number that achieved that outcome in reading. But over eight times more schools secured this outcome in maths than managed it in GPS.

No schools made it beyond 20% at L6 in reading and none pushed beyond 40% at L6 in GPS, but the outliers in maths managed well over 60% and even 70% returns.

          11-20%   21-30%   31-40%   41-50%   51-60%   61-70%   71-80%   Total
Reading   24       –        –        –        –        –        –        24
GPS       298      22       2        –        –        –        –        322
Maths     2,521    531      106      25       0        1        2        3,186

There is also some evidence of schools being successful in more than one test.

Amongst the small sample of 28 schools that secured 41% or more L6s in maths, two also featured amongst the top 24 performers in reading and five amongst the top 24 performers in GPS.

The school with arguably the best record across all three tests is Christ Church Primary School in Hampstead, which secured 13% in reading, 21% in GPS and 46% in maths, from a KS2 cohort of 24. The FSM/Pupil Premium rates at the school are low but, nevertheless, this is an outstanding result.

The following sections look more closely at L6 test and teacher assessment results in each subject. Each section consists of a series of bullet points highlighting significant findings.

English

 

Reading Test

The evidence on performance on the L6 reading test is compromised to some extent by the tiny proportions of pupils that achieve it. However:

  • 9,605 schools registered pupils for the 2013 L6 reading test, up 48% from 6,469 in 2012, and the number of pupils registered increased from 47,148 in 2012 to 73,118 in 2013, an increase of 55%.
  • Of the 539,473 learners who undertook the 2013 KS2 reading tests, only 2,262 (about 0.42%) achieved L6. This figure includes some in independent schools; the comparable figure for state-funded schools only is 2,137, so 5.5% of L6s were secured in the independent sector.
  • Of this first total – ie including pupils from independent schools – 1,670 were girls (0.63% of all girls who undertook the KS2 reading tests) and 592 were boys (0.21% of all boys who undertook the KS2 reading tests).
  • These are significant improvements on the comparable 2012 figures which showed about 900 learners achieving L6, including 700 girls and 200 boys. (The figures were rounded in the SFR but the 2013 evaluation confirmed the actual number as 942). The overall percentage achieving L6 therefore increased by about 140% in 2013, compared with 2012. If we assume registration for L6 tests as a proxy for entry, this suggests that just over 3% of entrants passed in 2013.
  • In state-funded schools only, the percentage of learners from a Chinese background entered for KS2 reading tests who achieved L6 reaches 2%, compared with 1% for those of mixed background and 0% for learners from white, Asian and black backgrounds.
  • Amongst the defined sub-groups, learners of Irish, any other white, white and Asian and any other Asian backgrounds also make it to 1%. All the remainder are at 0%.
  • The same rounded percentages apply to EAL learners and native English speakers, and to FSM-eligible and disadvantaged learners, making worthwhile comparisons almost impossible.
  • The 2013 transition matrices show that 12% of learners who had achieved L4 at the end of KS1 went on to achieve L6, while 1% of those who had achieved L3 did so. Hence the vast majority of those at L4 in KS1 did not make two levels of progress.
  • Progression data in the SFR shows that, of the 2,137 learners achieving L6 in state funded schools, 2,047 were at L3 or above at KS1, 77 were at L2A, 10 were at L2B and 3 were at L2C. Of the total population at KS1 L3 or above, 1.8% progressed to L6.
  • Regional and local authority breakdowns are given only as percentages, of limited value for comparative purposes because they are so small. Only London and the South East record 1% at L6 overall, with all the remaining regions at 0%. Only one local authority – Richmond upon Thames – reaches 2%.
  • However 1% of girls reach L6 in all regions apart from Yorkshire and Humberside and a few more authorities record 2% of girls at L6: Camden, Hammersmith and Fulham, Kensington and Chelsea, Kingston, Richmond and Solihull.
  • The 2013 Primary School Performance Tables show that some 12,700 schools recorded no learners achieving L6.
  • At the other end of the spectrum, 36 schools recorded 10% or more of their KS2 cohort achieving L6. Four of these recorded 15% or higher:

Iford and Kingston C of E Primary School, East Sussex (19%; cohort of 21).

Emmanuel C of E Primary School, Camden (17%; cohort of 12).

Goosnargh Whitechapel Primary School, Lancashire (17%; cohort of 6).

High Beech C of E VC Primary School, Essex (15%; cohort of 13).

Reading TA

There is relatively little data about teacher assessment outcomes.

  • The total number of pupils in all schools achieving L6 in reading TA in 2013 is 15,864 from a cohort of 539,729 (2.94%). This is over seven times as many as achieved L6 in the comparable test (whereas in maths the figures are very similar). It would be useful to know how many pupils achieved L6 in TA, were entered for the test and did not succeed.
  • The number of successful girls is 10,166 (3.85% of females assessed) and the number of boys achieving L6 is 5,698 (2.06% of males assessed). Hence the gap between girls and boys is far narrower on TA than it is on the corresponding test.
  • Within the 2013 Performance Tables, eight schools recorded 50% or more of their pupils at L6, the top performer being Peppard Church of England Primary School, Oxfordshire, which reached 83% (five from a cohort of six).

 

Writing (including GPS)

 

GPS Test

The L6 Grammar, Punctuation and Spelling (GPS) test was newly introduced in 2013. This is what we know from the published data:

  • The number of schools that registered for the test was 7,870, almost 2,000 fewer than registered for the reading test. The number of pupil registrations was 61,883, over 11,000 fewer than for reading.
  • The total number of successful learners is 8,606, from a total of 539,438 learners assessed at KS2, including those in independent schools taking the tests, giving an actual percentage of 1.6%. As far as I can establish, a comparable figure for state-funded schools is not available.
  • As with reading, there are significant differences between boys and girls. There were 5,373 successful girls (2.04% of girls entered for KS2 GPS tests) and 3,233 successful boys (1.17% of boys entered for KS2 GPS). This imbalance in favour of girls is significant, but not nearly as pronounced as in the reading test.
  • The proportion of pupil registrations for the L6 GPS test resulting in L6 success is around one in seven (13.9%), well over four times as high as for reading.
  • The ethnic breakdown in state-funded schools shows that Chinese learners are again in the ascendancy. Overall, 7% of pupils from a Chinese background achieved L6, compared with 1% white, 2% mixed, 2% Asian and 1% black.
  • Chart 3 below shows how L6 achievement in GPS varies between ethnic sub-groups. Indian pupils reach 4% while white and Asian pupils score 3%, as do pupils from any other Asian background.

Chart 3: 2013 GPS L6 performance by ethnic sub-groups


  • When gender differences are taken into account, Chinese girls are at 8% (compared with boys at 7%), ahead of Indian girls at 5% (boys 3%), white and Asian girls at 4% (boys 3%) and any other Asian girls also at 4% (boys 3%). The ascendancy of Chinese girls over boys from any other ethnic background is particularly noteworthy and replicates the situation in maths (see below).
  • Interestingly, EAL learners and learners with English as a native language both record 2% at L6. Although these figures are rounded, it suggests that exceptional performance in this aspect of English does not correlate with being a native speaker.
  • FSM-eligible learners register 0%, compared with 2% for those not eligible. However, disadvantaged learners are at 1% and non-disadvantaged 2% (Disadvantaged boys are at 0% and non-disadvantaged girls at 3%). Without knowing the numbers involved we can draw few reliable conclusions from this data.
  • Chart 4 below illustrates the regional breakdown for boys, girls and both genders. At regional level, London reaches 3% success overall, with both the South East and Eastern regions at 2% and all other regions at 1%. Girls record 2% in every region apart from the North West and Yorkshire and Humberside. Only in London do boys reach 2%.

 

Chart 4: 2013 L6 GPS outcomes by gender and region


  • At local authority level the highest scoring are Richmond (7%); the Isles of Scilly (6%); Kingston and Sutton (5%); and Harrow, Hillingdon and Wokingham (4%).
  • The School Performance Tables reveal that some 10,200 schools posted no L6 results while, at the other extreme, 34 schools recorded 20% or more of their KS2 cohort at L6 and 463 schools managed 10% or above. The best records were achieved by:

St Joseph’s Catholic Primary School, Southwark (38%; cohort of 24).

The Vineyard School, Richmond (38%; cohort of 56).

Cartmel C of E Primary School (29%; cohort of 7) and

Greystoke School (29%; cohort of 7).

Writing TA

When it comes to teacher assessment:

  • 8,410 learners from both state and independent schools out of a total of 539,732 assessed (1.56%) were judged to be at L6 in writing. The total figure for state-funded schools is 7,877 pupils. This is very close to the number successful in the L6 GPS test, even though the focus is somewhat different.
  • Of these, 5,549 are girls (2.1% of all girls assessed) and 2,861 boys (1.04% of all boys assessed). Hence the imbalance in favour of girls is more pronounced in writing TA than in the GPS test, whereas the reverse is true for reading.
  • About 5% of learners from Chinese backgrounds achieve L6, as do 3% of white and Asian pupils and 3% of Irish pupils.
  • The 2013 transition matrices record progression in writing TA, rather than in the GPS test. They show that 61% of those assessed at L4 at KS1 go on to achieve L6, so only six out of ten are making the expected minimum two levels of progress. On the other hand, some 9% of those with KS1 L3 go on to achieve L6, as do 2% of those at L2A.
  • The SFR provides further progression data – again based on the TA outcomes – for state-funded schools only. It shows us that one pupil working towards L1 at KS1 went on to achieve L6 at KS2, as did 11 at L1, 54 at L2C, 393 at L2B, 1,724 at L2A and 5,694 at L3 or above. Hence some pupils are making five or more levels of progress.
  • The regional breakdown – this time including independent schools – gives the East Midlands, West Midlands, London and the South West at 2%, with all the rest at 1%. At local authority level, the best performers are: City of London at 10%; Greenwich, Kensington and Chelsea and Richmond at 5% and Windsor and Maidenhead at 4%.

English TA

There is additionally a little information about pupils achieving L6 across the subject:

  • The SFR confirms that 8,087 pupils (1.5%) were assessed at L6 in English, including 5,244 girls (1.99% of all girls entered) and 2,843 boys (1.03% of all boys entered). These figures are for all schools, including independent schools.
  • There is a regional breakdown showing the East and West Midlands, London and the South West at 2%, with all the remainder at 1%. Amongst local authorities, the strongest performers are City of London (10%); and Bristol, Greenwich, Hackney, Richmond, Windsor and Maidenhead (4%). The exceptional performance of Bristol, Greenwich and Hackney is noteworthy.
  • In the Performance Tables, 27 schools record 30% or more pupils at L6 across English, the top performer again being Newton Farm, at 60%.

Maths

L6 performance in maths is more common than in other tests and subjects and the higher percentages generated typically result in more meaningful comparisons.

  • The number of school registrations for L6 maths in 2013 was 11,369, up almost 40% from 8,130 in 2012. The number of pupil registrations was 80,925, up some 45% from 55,809 in 2012.
  • The number of successful pupils – in both independent and state schools – was 35,137 (6.51% of all entrants). The gender imbalance in reading and GPS is reversed, with 21,388 boys at this level (7.75% of males entered for the overall KS2 test) compared with 13,749 girls (5.22% of females entered for the test). The SFR gives a total for state-funded schools of 33,202 pupils, so some 5.5% of Level 6s were achieved in independent schools.
  • Compared with 2012, the number of successful pupils has increased from 18,953. This represents an increase of 85% – not as large as the rise in reading, but very substantial nevertheless.
  • The number of successful girls has risen by some 108% from 6,600 (rounded) and the number of successful boys by about 72%, from 12,400 (rounded), so the improvement in girls’ success is markedly larger than the corresponding improvement for boys.  
  • Assuming L6 test registration as a proxy for entry, the success rate in 2013 is around 43.4%, massively better than for reading (3%) and GPS (13.9%). The corresponding success rate in 2012 was around 34%. (Slightly different results would be obtained if one used actual entry rates and passes for state schools only, but we do not have these figures for both years.)
  • The breakdown in state-funded schools for the main ethnic groups by gender is illustrated by Chart 5 below. This shows how performance by boys and girls varies according to whether they are white (W), mixed (M), Asian (A), black (B) or Chinese (C). It also compares the outcomes in 2012 and 2013. The superior performance of Chinese learners is evident, with Chinese boys reaching a staggering 35% success rate in 2013. As things stand, Chinese boys are almost nine times more likely to achieve L6 than black girls.
  • Chart 5 also shows that none of the gender or ethnic patterns has changed between 2012 and 2013, but some groups are making faster progress, albeit from a low base. This is especially true of white girls, black boys and, to a slightly lesser extent, Asian girls.
  • Chinese girls and boys have improved at roughly the same rate and black boys have progressed faster than black girls but, in the remaining three groups, girls are improving at a faster rate than boys.

Chart 5: L6 Maths test by main ethnic groups and gender


  • Amongst sub-groups, not included in Chart 5, the highest performing are: any other Asian background 15%, Indian 14%, white and Asian 11% and Irish 10%. Figures for Gypsy/Roma and any other white background are suppressed, while travellers of Irish heritage are at 0%, black Caribbean at 2% and any other black background at 3%. In these latter cases, the differential with Chinese performance is huge.
  • EAL learners record a 7% success rate, compared with 6% for native English speakers, an improvement on the level pegging recorded for GPS. This gap widens to 2 percentage points for boys (9% versus 7% in favour of EAL), whereas for girls it is 1 percentage point (6% versus 5% in favour of EAL). The advantage enjoyed by EAL learners was also evident in 2012.
  • The table below shows the position for FSM and disadvantaged learners by gender, and how this has changed since 2012.
Boys     FSM   Non-FSM   Gap   Dis   Non-dis   Gap
2012     1%    5%        4%    1%    6%        5%
2013     3%    9%        6%    3%    10%       7%

Girls    FSM   Non-FSM   Gap   Dis   Non-dis   Gap
2012     1%    3%        2%    1%    3%        2%
2013     2%    6%        4%    2%    7%        5%

All      FSM   Non-FSM   Gap   Dis   Non-dis   Gap
2012     1%    4%        3%    1%    4%        3%
2013     2%    7%        5%    2%    8%        6%
  • This shows that the gap between FSM and non-FSM, and between disadvantaged and non-disadvantaged, grew between 2012 and 2013 – for boys, for girls and for the groups as a whole. All the gaps increased by 2 or 3 percentage points, with the larger increases in the disadvantaged gaps for girls and for the cohort as a whole.
  • The gaps are all between 2 and 7 percentage points, so not large compared with those lower down the attainment spectrum, but the fact that they are widening is a significant cause for concern, suggesting that Pupil Premium funding is not having an impact at L6 in maths.
  • The Transition Matrices show that 89% of learners assessed at L4 in KS1 went on to achieve L6, while 26% of those with L3 at KS1 did so, as did 4% of those with L2A and 1% of those with L2B. Hence a noticeable minority is making three or even four levels of progress.
  • The progression data in the SFR, relating to state-funded schools, show that one pupil made it from W at KS1 to L6, while 8 had L1, 82 had 2C, 751 had 2B, 4,983 had 2A and 27,377 had L3. Once again, a small minority of learners is making four or five levels of progress.
  • At regional level, the breakdown is: NE 6%, NW 6%, Y+H 5%, EM 6%, WM 6%, E 6%, London 9%, SE 7% and SW 6%. So London has a clear lead in respect of the proportion of its learners achieving L6.
  • The local authorities leading the rankings are: City of London 24%, Richmond 19%, Isles of Scilly 17%, Harrow and Kingston 15%, Trafford and Sutton 14%. No real surprises there!
  • The Performance Tables show 33 schools achieved 40% or higher on this measure. Eight schools were at 50% or above. The best performing schools were:

St Oswald’s C of E Aided Primary School, Cheshire West and Chester (75%; cohort 8)

St Joseph’s Roman Catholic Primary School, Hurst Green, Lancashire (71%; cohort 7)

Haselor School, Warwickshire (67%; cohort 6).

  • Some of the schools achieving 50% were significantly larger, notably Bowdon C of E Primary School, Trafford, which had a KS2 cohort of 60.

Maths TA

The data available on maths TA is more limited:

  • Including pupils at independent schools, a total of 33,668 were assessed at L6 in maths (6.24% of all KS2 candidates). This included 20,336 boys (7.37% of all male KS2 candidates) and 13,332 girls (5.06% of all female candidates). The number achieving L6 maths TA is slightly lower than the corresponding number achieving L6 in the test.
  • The regional breakdown was as follows: NE 5%; NW 5%; Y+H 5%; EM 5%, WM 6%; E 6%, London 8%; SE 7%, SW 6%, so London’s ascendancy is not as significant as in the test. 
  • The strongest local authority performers are: City of London 24%; Harrow and Richmond 15%; Sutton 14%; Trafford 13%; Solihull and Bromley 12%.
  • In the Performance Tables, 63 schools recorded 40% or higher on this measure, 15 of them at 50% or higher. The top performer was St Oswald’s C of E Aided Primary School (see above) with 88%.

Science

Science data is confined to teacher assessment outcomes.

  • A total of just 1,633 pupils achieved L6 in 2013, equivalent to 0.3% of the KS2 science cohort. Of these, 1,029 were boys (0.37%) and 604 were girls (0.23%), suggesting a gender imbalance broadly similar to that in maths.
  • No regions and only a handful of local authorities recorded a success rate of 1%.
  • In the Performance Tables, 31 schools managed 20% or higher and seven schools were above 30%. The best performing were:

Newton Farm (see above) (50%; cohort 30)

Hunsdon Junior Mixed and Infant School, Hertfordshire (40%; cohort 10)

Etchingham Church of England Primary School, East Sussex (38%; cohort 16)

St Benedict’s Roman Catholic Primary School Ampleforth, North Yorkshire (36%; cohort 14).

Conclusions

 

Key findings from this data analysis

I will not repeat all of the significant points highlighted above, but the following seem particularly worthy of attention and further analysis:

  • The huge variation in success rates for the three L6 tests. The proportion of learners achieving L6 in the reading test is improving at a faster rate than in maths, but from a very low base. It remains unacceptably low, is significantly out of kilter with the TA results for L6 reading and – unless there has been a major improvement in 2014 – is likely to stay depressed for the limited remaining lifetime of the test.
  • In the tests, 74% of those successful in reading are girls, 62% of those successful in GPS are girls and 61% of those successful in maths are boys. In reading there are also interesting disparities between gender distribution at L6 in the test and in teacher assessment. Can these differences be attributed solely to gender distinctions or is there significant gender-related underachievement at the top of the attainment distribution? If so, how can this be addressed? 
  • There are also big variations in performance by ethnic background. Chinese learners in particular are hugely successful, especially in maths. In 2013, Chinese girls significantly outscored boys from all other backgrounds, while an astonishing 35% of Chinese boys achieved L6. This raises important questions about the distribution of high attainment, the incidence of underachievement and how the interaction between gender and ethnic background impacts on these.
  • There are almost certainly significant excellence gaps in performance on all three tests (ie between advantaged and disadvantaged learners), though in reading and GPS these are masked by the absence of numerical data. In maths we can see that the gaps are not as large as those lower down the attainment spectrum, but they widened significantly in 2013 compared with 2012. This suggests that the impact of the Pupil Premium on the performance of the highest attainers from disadvantaged backgrounds is extremely limited. What can and should be done to address this issue?
  • EAL learners perform as well as their counterparts in the GPS test and even better in maths. This raises interesting questions about the relationship between language acquisition and mathematical performance and, even more intriguingly, the relationship between language acquisition and skill in manipulating language in its written form. Further analysis of why EAL learners are so successful may provide helpful clues that would improve L6 teaching for all learners.
  • Schools are recording very different success rates in each of the tests. Some schools that secure very high L6 success rates in one test fail to do so in the others, but a handful of schools are strong performers across all three tests. We should know more than we do about the characteristics and practices of these highly successful schools.

Significant gaps in the data

A data portal to underpin the School Performance Tables is under construction. There have been indications that it will contain material about high attainers’ performance. While levels continue to be used in the Tables, this should include comprehensive coverage of L6 performance, as well as addressing the achievement of high attainers as they are defined for Performance Table purposes (a much broader subset of learners).

Subject to the need to suppress small numbers for data protection purposes, the portal might reasonably include, in addition to the data currently available:

  • For each test and TA, numbers of registrations, entries and successful pupils from FSM and disadvantaged backgrounds respectively, including analysis by gender and ethnic background, both separately and combined. All the data below should also be available for these subsets of the population.
  • Registrations and entries for each L6 test, for every year in which the tests have been administered, showing separately rates for state-funded and all schools and rates for different types of state-funded school.
  • Cross-referencing of L6 test and TA performance, to show how many learners are successful in one, the other and both – as well as how many learners achieve L6 on more than one test and/or TA and different combinations of assessments.
  • Numbers of pupils successful in each test and TA by region and LA, as well as regional breakdowns of the data above and below.
  • Trends in this data across all the years in which the tests and TA have been administered.
  • The annual cost of developing and administering each of the L6 tests so we can make a judgement about value for money.

It would also be helpful to produce case studies of schools that are especially successful in maximising L6 performance, especially for under-represented groups.

 

The impact of the new tests pre- and post-2016

We do not yet know whether the announcement that L6 tests will disappear after 2015 has depressed registration, entry and success rates in 2014. This is more likely in 2015, since the 2014 registration deadline and the response to the primary assessment and accountability consultation were broadly co-terminous.

All the signs are that the accountability regime will continue to focus some attention on the performance of high attainers:

  • Ofsted is placing renewed emphasis on the attainment and progress of the ‘most able’ in school inspection, though they have a broad conceptualisation of that term and may not necessarily highlight L6 achievement.
  • From 2016, schools will be required to publish ‘the percentage of pupils who achieve a high score in all areas at the end of key stage 2.’ But we do not know whether this means publishing separately the percentage of pupils achieving high scores in each area, or only the percentage of pupils achieving high scores across all areas. Nor do we know what will count as a high score for these purposes.
  • The original primary assessment and accountability consultation document included commitments to measures in the Primary Performance Tables setting out:

‘How many of a school’s pupils are amongst the highest attaining nationally, by showing the percentage of pupils achieving a high scaled score in each subject.’

but these were not repeated in the consultation response.

In short, there are several unanswered questions and some cause to doubt the extent to which Level 6-equivalent performance will continue to be a priority. The removal of L6 tests could therefore reduce significantly the attention primary schools give to their highest attainers.

Moreover, questions remain over the suitability of the new tests for these highest attainers. Those doubts may yet be overcome, but there is considerable cause for concern.

It is quite conceivable that the test developers will not be able to accommodate effective assessment of L6 performance within single tests as planned.

If that is the case, the Government faces a choice between perpetuating separate tests, or the effective relegation of the assessment of the highest attainers to teacher assessment alone.

Such a decision would almost certainly need to be taken on this side of a General Election. But of course it need not be binding on the successor administration. Labour has made no commitments about support for high attainers, which suggests they will not be a priority for them should they form the next Government.

The recently published Assessment Principles are intended to underpin effective assessment systems within schools. They state that such systems:

‘Differentiate attainment between pupils of different abilities, giving early recognition of pupils who are falling behind and those who are excelling.’

This lends welcome support to the recommendations I offered to NAHT’s Commission on Assessment.

But the national system for assessment and accountability has an equally strong responsibility to differentiate throughout the attainment spectrum and to recognise the achievement of those who excel.

As things stand, there must be some doubt whether it will do so.

Postscript

On 19 May 2014 two newspapers helpfully provided the entry figures for the 2014 L6 tests. These are included in the chart below.

L6 postscript chart

It is clear that entries to all three tests held up well in 2014 and, as predicted, numbers have not yet been depressed as a consequence of the decision to drop L6 tests after 2015.

The corresponding figures for the numbers of schools entering learners for each test have not been released, so we do not know to what extent the increase is driven by new schools signing up, as opposed to schools with previous entries increasing the numbers they enter.

This additional information makes it easier to project approximate trends into 2015, so we shall be able to tell next year whether the change of assessment policy will cause entry rates to tail off.

  • Entries for the L6 reading test were 49% up in 2013 and 36% up in 2014. Assuming the rate of increase in 2015 falls to 23% (ie again 13 percentage points down on the previous year), there would be some 117,000 entries in 2015.
  • Entries for the L6 maths test were 41% up in 2013 and 36% up in 2014. Assuming the rate of increase in 2015 falls to 31% (ie again 5 percentage points down on the previous year), there would be around 139,000 entries in 2015.
  • GPS is more problematic because we have only two years on which to base the trend. If we assume that the rate of increase in entries will fall somewhere between the rate for maths and the rate for reading in 2014 (their second year of operation) there would be somewhere between 126,000 and 133,000 entries in 2015 – so approximately 130,000 entries. (The projection logic is sketched below.)
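
Here is a minimal sketch of that projection logic, which assumes the year-on-year growth rate falls again by the same number of percentage points as it fell between 2013 and 2014. The 2014 entry figures used are illustrative placeholders, since the actual numbers appear only in the chart above.

```python
# Project 2015 entries assuming the growth rate falls by the same
# number of percentage points as it fell between 2013 and 2014.
# The 2014 entry figures below are illustrative placeholders only.
def project_2015(entries_2014, growth_2013, growth_2014):
    growth_2015 = growth_2014 - (growth_2013 - growth_2014)
    return round(entries_2014 * (1 + growth_2015 / 100)), growth_2015

print(project_2015(95_000, 49, 36))   # reading: ~116,850 entries at +23%
print(project_2015(106_000, 41, 36))  # maths:   ~138,860 entries at +31%
```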

It is almost certainly a projection too far to estimate the 2014 pass rates on the basis of the 2014 entry rates, so I will resist the temptation. Nevertheless, we ought to expect continued improvement at broadly commensurate rates.

The press stories include a Government ‘line to take’ on the L6 tests.

In the Telegraph, this is:

‘[We] want to see every school stretching all their pupils and these figures show that primary schools have embraced the opportunity to stretch their brightest 11-year-olds.’

‘This is part of a package of measures – along with toughening up existing primary school tests, raising the bar and introducing higher floor standards – that will raise standards and help ensure all children arrive at secondary school ready to thrive.’

In the Mail it is:

‘We brought back these tests because we wanted to give teachers the chance to set high aspirations for pupils in literacy and numeracy.’

‘We want to see every school stretching all their pupils. These figures show that primary schools have embraced the opportunity to stretch their brightest 11-year-olds by teaching them more demanding new material, in line with the new curriculum, and by entering them for the Level 6 test.’

There is additionally confirmation in the Telegraph article that ‘challenging material currently seen in the level 6 exams would be incorporated into all SATs tests’ when the new universal assessments are introduced, but nothing about the test development difficulties that this presents.

But each piece attributed this welcome statement to Mr Gove:

‘It is plain wrong to set a ceiling on the talents of the very brightest pupils and let them drift in class.’

‘Letting teachers offer level 6 tests means that the most talented children will be fully stretched and start secondary school razor sharp.’

Can we read into that a commitment to ensure that the new system – including curriculum, assessment, qualifications, accountability and (critically) Pupil Premium support for the disadvantaged – is designed in a joined up fashion to meet the needs of ‘the very brightest pupils’?

I wonder if Mr Hunt feels able to follow suit.

GP

May 2014

Air on the ‘G’ String: Hoagies’ Bloghop, May 2014

 

As I see it, there are three sets of issues with the ‘G’ word:

  • Terminological – the term carries with it associations that make some advocates uncomfortable and predispose others to resist such advocacy.
  • Definitional – there are many different ways to define the term and the subset of the population to which it can be applied; there is much disagreement about this, even amongst advocates.
  • Labelling – the application of the term to individuals can have unintended negative consequences, for them and for others.

 

Terminological issues

We need shared terminology to communicate effectively about this topic. A huge range of alternatives is available: able, more able, highly able, most able, talented, asynchronous, high potential, high learning potential… and so on.

These terms – the ‘g’ word in particular – are often qualified by an adjective – profoundly, highly, exceptionally – which adds a further layer of complexity. Then there is the vexed question of dual and multiple exceptionality…

Those of us who are native English speakers conveniently forget that there are also numerous terms available in other languages: surdoué, Hochbegabung, Hochbegabte, altas capacidades, superdotados, altas habilidades, evnerik and many, many more!

Each of these terms has its own good and bad points, its positive and negative associations.

The ‘g’ word has a long history, is part of the lingua franca and is still most widely used. But its long ascendancy has garnered a richer mix of associations than some of the alternatives.

The negative associations can be unhelpful to those seeking to persuade others to respond positively and effectively to the needs of these children and young people. Some advocates feel uncomfortable using the term and this hampers effective communication, both within the community and outside it.

Some react negatively to its exclusive, elitist connotations; on the other hand, it can be used in a positive way to boost confidence and self-esteem.

But, ultimately, the term we use is less significant than the way in which we define it. There may be some vague generic distaste for the ‘g’ word, but logic should dictate that most reactions will depend predominantly on the meaning that is applied to the term.

 

Definitional issues

My very first blog post drew attention to the strikingly different ways in which this topic is approached around the world. I identified three key polarities:

  • Nature versus nurture – the perceived predominance of inherited disposition over effort and practice, or vice versa.
  • Excellence versus equity – whether priority is given to raising absolute standards and meritocracy or narrowing excellence gaps and social mobility.
  • Special needs versus personalisation – whether the condition or state defined by the term should be addressed educationally as a special need, or through mainstream provision via differentiation and tailored support.

These definitional positions may be associated with the perceived pitch or incidence of the ‘g’ condition. When those at the extreme of the distribution are under discussion, or the condition is perceived to be extremely rare, a nature-excellence-special needs perspective is more likely to predominate. A broader conceptualisation pushes one towards the nurture-equity-personalisation nexus.

Those with a more inclusive notion of ‘g’-ness – who do not distinguish between ‘bright’ and ‘g’, include all high attainers amongst the latter and are focused on the belief that ‘g’-ness is evenly distributed in the population by gender, ethnic and socio-economic background – are much more likely to hold the latter perspective, or at least tend towards it.

There are also differences according to whether the focus is the condition itself – ‘g’-ness – or schooling for the learners to whom the term is applied – ‘g’ education. In the first case, nature, excellence and special needs tend to predominate; in the second the reverse is true. This can compromise interaction between parents and educators.

In my experience, if the ‘g’ word is qualified by a careful definition that takes account of these three polarities, a mature discussion about needs and how best to meet them is much more likely to occur.

In the absence of a shared definition, the associations of the term will likely predominate unchecked. Effective communication will be impossible; common ground cannot be established; the needs that the advocate is pressing will remain unfulfilled. That is in no-one’s best interests, least of all those who are ‘g’.

 

Labelling Issues 

When the ‘g’ word is applied to an individual, it is likely to influence how that individual perceives himself and how others perceive him.

Labelling is normally regarded as negative, because it implies a fixed and immutable state and may subject the bearers of the label to impossibly high expectations, whether of behaviour or achievement, that they cannot always fulfil.

Those who do not carry the label may see themselves as second class citizens, become demotivated and much less likely to succeed.

But, as noted above, it is also possible to use the ‘g’ label to confer much-needed status and attention on those who do not possess the former or receive enough of the latter. This can boost confidence and self-esteem, making the owners of the label more likely to conform to the expectations that it carries.

This is particularly valuable for those who strive to promote equity and narrow excellence gaps between those from advantaged and disadvantaged backgrounds.

Moreover, much depends on whether the label is permanently applied or confers a temporary status.

I recently published a Twitter conversation explaining how the ‘g’ label can be used as a marker to identify those learners who for the time being need additional learning support to maximise their already high achievement.

This approach reflects the fact that children and young people do not develop through a consistent linear process, but experience periods of rapid development and comparative stasis.

The timing and duration of these periods will vary so, at any one time in any group of such individuals, some will be progressing rapidly and others will not. Over the longer term some will prove precocious; others late developers.

This is not to deny that a few learners at the extreme of the distribution will retain the marker throughout their education, because they are consistently far ahead of their peers and so need permanent additional support to maximise their achievement.

But, critically, the label is earned through evidence of high achievement rather than through a test of intelligence or cognitive ability that might have been administered once only and in the distant past. ‘G’-ness depends on educational success. It also forces educators to address underachievement at the top of the attainment spectrum.

If a label is more typically used as a temporary marker it must be deployed sensitively, in a way that is clearly understood by learners and their parents. They must appreciate that the removal of the marker is not a punishment or downgrading that leads to loss of self-esteem.

Because the ‘g’ label typically denotes a non-permanent state that defines need rather than expectation, most if not all of the negative connotations can be avoided.

Nevertheless, this may be anathema to those with a nature-excellence-special needs perspective!

 

Conclusion 

I have avoided using the ‘g’ word within this post, partly to see if it could be done and partly out of respect for those of you who dislike it so much.

But I have also advanced some provocative arguments using terminology that some of you will find equally disturbing. That is deliberate and designed to make you think!

The ‘g’ word has substantial downside, but this can be minimised through careful definition and the application of the label as a non-permanent marker.

It may be that the residual negative associations are such that an alternative is still preferable. The question then arises whether there is a better term with the same currency and none of the negative connotations.

As noted above there are many contenders – not all of them part of the English language – but none stands head-and-shoulders above its competitors.

And of course it is simply impossible to ban a word. Indeed, any attempt to do so would provoke many of us – me included – to use the ‘g’ word even more frequently and with much stronger conviction.

 

 


 

This blog is part of the Hoagies’ Gifted Education Page inaugural Blog Hop on The “G” Word (“Gifted”).  To read more blogs in this hop, visit this Blog Hop at www.hoagiesgifted.org/blog_hop_the_g_word.htm

 

 

GP

May 2014

 

 

 

 

 

photo credit: neurollero (http://www.flickr.com/photos/neurollero/17873944/) via photopin (http://photopin.com), licensed under CC BY-SA 2.0 (http://creativecommons.org/licenses/by-sa/2.0/)

How well is Ofsted reporting on the most able?

 

 

This post considers how Ofsted’s new emphasis on the attainment and progress of the most able learners is reflected in school inspection reports.

My analysis is based on the 87 Section 5 secondary school inspection reports published in the month of March 2014.

I shall not repeat here previous coverage of how Ofsted’s emphasis on the most able has been framed. Interested readers may wish to refer to my previous posts for details.

The more specific purpose of the post is to explore how consistently Ofsted inspectors are applying their guidance and, in particular, whether there is substance for some of the concerns I expressed in these earlier posts, drawn together in the next section.

The remainder of the post provides an analysis of the sample and a qualitative review of the material about the most able (and analogous terms) included in the sample of 87 inspection reports.

It concludes with a summary of the key points, a set of associated recommendations and an overall inspection grade for inspectors’ performance to date. Here is a link to this final section for those who prefer to skip the substance of the post.

 

Background

Before embarking on the real substance of this argument I need to restate briefly some of the key issues raised in those earlier posts:

  • Ofsted’s definition of ‘the most able’ in its 2013 survey report is idiosyncratically broad, including around half of all learners on the basis of their KS2 outcomes.
  • The evidence base for this survey report included material suggesting that the most able students are supported well or better in only 20% of lessons – and are not making the progress of which they are capable in about 40% of schools.
  • The survey report’s recommendations included three commitments on Ofsted’s part. It would:

 o   ‘focus more closely in its inspections on the teaching and progress of the most able students, the curriculum available to them, and the information, advice and guidance provided to the most able students’;

o   ‘consider in more detail during inspection how well the pupil premium is used to support the most able students from disadvantaged backgrounds’ and

o   ‘report its inspection findings about this group of students more clearly in school inspection, sixth form and college reports.’

  • Subsequently the school inspection guidance was revised somewhat haphazardly, resulting in the parallel use of several undefined terms (‘able pupils’, ‘most able’, ‘high attaining’, ‘highest attaining’), the underplaying of the attainment and progress of the most able learners attracting the Pupil Premium and very limited reference to appropriate curriculum and IAG.
  • Within the inspection guidance, emphasis was placed primarily on learning and progress. I edited together the two relevant sets of level descriptors in the guidance to provide this summary for the four different inspection categories:

In outstanding schools the most able pupils’ learning is consistently good or better and they are making rapid and sustained progress.

In good schools the most able pupils’ learning is generally good, they make good progress and achieve well over time.

In schools requiring improvement the teaching of the most able pupils and their achievement are not good.

In inadequate schools the most able pupils are underachieving and making inadequate progress.

  • No published advice has been made available to inspectors on the interpretation of these amendments to the inspection guidance. In October 2013 I wrote:

‘Unfortunately, there is a real risk that the questionable clarity of the Handbook and Subsidiary Guidance will result in some inconsistency in the application of the Framework, even though the fundamental purpose of such material is surely to achieve the opposite.’

  • Analysis of a very small sample of reports for schools reporting poor results for high attainers in the school performance tables suggested inconsistency both before and after the amendments were introduced into the guidance. I commented:

‘One might expect that, unconsciously or otherwise, inspectors are less ready to single out the performance of the most able when a school is inadequate across the board, but the small sample above does not support this hypothesis. Some of the most substantive comments relate to inadequate schools.

It therefore seems more likely that the variance is attributable to the differing capacity of inspection teams to respond to the new emphases in their inspection guidance. This would support the case made in my previous post for inspectors to receive additional guidance on how they should interpret the new requirement.’

The material below considers the impact of these revisions on a more substantial sample of reports and whether this justifies some of the concerns expressed above.

It is important to add that, in January 2014, Ofsted revised its guidance document ‘Writing the report for school inspections’ to include the statement that:

‘Inspectors must always report in detail on the progress of the most able pupils and how effectively teaching engages them with work that is challenging enough.’ (p8)

This serves to reinforce the changes to the inspection guidance and clearly indicates that coverage of this issue – at least in these terms – is a non-negotiable: we should expect to see appropriate reference in every single section 5 report.

 

The Sample

The sample comprises 87 secondary schools whose Section 5 inspection reports were published by Ofsted in the month of March 2014.

The inspections were conducted between 26 November 2013 and 11 March 2014, so the inspectors will have had time to become familiar with the revised guidance.

However up to 20 of the inspections took place before Ofsted felt it necessary to emphasise that coverage of the progress and teaching of the most able is compulsory.

The sample happens to include several institutions inspected as part of wider-ranging reviews of schools in Birmingham and schools operated by the E-ACT academy chain. It also incorporates several middle-deemed secondary schools.

Chart 1 shows the regional breakdown of the sample, adopting the regions Ofsted uses to categorise reports, as opposed to its own regional structure (ie with the North East identified separately from Yorkshire and Humberside).

It contains a disproportionately large number of schools from the West Midlands while the South-West is significantly under-represented. All the remaining regions supply between 5 and 13 schools. A total of 57 local authority areas are represented.

 

Chart 1: Schools within the sample by region


 

Chart 2 shows the different statuses of schools within the sample. Over 40% are community schools, while almost 30% are sponsored academies. There are no academy converters but sponsored academies, free schools and studio schools together account for some 37% of the sample.

 

Chart 2: Schools within the sample by status


 

The vast majority of schools in the sample are 11-16 or 11-18 institutions, but four are all-through schools, five provide for learners aged 13 or 14 upwards and ten are middle schools. There are four single sex schools.

Chart 3 shows the variation in school size. Some of the studio schools, free schools and middle schools are very small by secondary standards, while the largest secondary school in the sample has some 1,600 pupils. A significant proportion of schools have between 600 and 1,000 pupils.

 

Chart 3: Schools within the sample by number on roll


The distribution of overall inspection grades between the sample schools is illustrated by Chart 4 below. Eight of the sample were rated outstanding, 28 good, 35 requiring improvement and 16 inadequate.

Of those rated inadequate, 12 were subject to special measures and four had serious weaknesses.

 

Chart 4: Schools within the sample by overall inspection grade


The eight schools rated outstanding include:

  • A mixed 11-18 sponsored academy
  • A mixed 14-19 studio school
  • A mixed 11-18 free school
  • A mixed 11-16 VA comprehensive
  • A girls’ 11-18 VA comprehensive
  • A boys’ 11-18 VA selective school
  • A girls’ 11-18 community comprehensive and
  • A mixed 11-18 community comprehensive

The sixteen schools rated inadequate include:

  • Eight mixed 11-18 sponsored academies
  • Two mixed 11-16 sponsored academies
  • A mixed all-through sponsored academy
  • A mixed 11-16 free school
  • Two mixed 11-16 community comprehensives
  • A mixed 11-18 community comprehensive and
  • A mixed 13-19 community comprehensive

 

Coverage of the most able in main findings and recommendations

 

Terminology 

Where they were mentioned, such learners were most often described as ‘most able’, but a wide range of other terminology was deployed, including ‘most-able’, ‘the more able’, ‘more-able’, ‘higher attaining’, ‘high-ability’, ‘higher-ability’ and ‘able students’.

The idiosyncratic adoption of redundant hyphenation is an unresolved mystery.

It is not unusual for two or more of these terms to be used in the same report. In the absence of any glossary, this makes some reports rather less straightforward to interpret accurately.

It is also more difficult to compare and contrast reports. Helpful services like Watchsted’s word search facility become less useful.
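As an aside for anyone attempting this kind of text-mining themselves: the variant terms have to be normalised before any word search is reliable. Here is a minimal Python sketch, assuming the reports are available as plain text; the pattern and function name are my own illustration, not part of any Ofsted or Watchsted tooling.

import re

# Illustrative pattern covering the variant terms observed in this sample;
# not an official or exhaustive list.
MOST_ABLE_VARIANTS = re.compile(
    r"\b(?:most[- ]able|more[- ]able|high(?:er)?[- ](?:attaining|ability)"
    r"|able (?:students|pupils))\b",
    re.IGNORECASE,
)

def mentions_most_able(report_text: str) -> bool:
    # True if the report uses any of the variant terms for the most able.
    return bool(MOST_ABLE_VARIANTS.search(report_text))

A published glossary would, in effect, fix the canonical list that any such search currently has to guess at.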

 

Incidence of commentary in the main findings and recommendations

Thirty of the 87 inspection reports (34%) explicitly addressed the school’s most able learners (or applied a similar term) in both the main findings and the recommendations.

The analysis showed that 28% of reports on academies (including studios and free schools) met this criterion, whereas 38% of reports on non-academy schools did so.

Chart 5 shows how the incidence of reference in both main findings and recommendations varies according to the overall inspection grade awarded.

One can see that this level of attention is most prevalent in schools requiring improvement, followed by those with inadequate grades. It was less common in schools rated good and less common still in outstanding schools. The gap between good and outstanding schools is perhaps smaller than expected.

The slight lead for schools requiring improvement over inadequate schools may be attributable to a view that the latter face more pressing priorities, or it may have something to do with the varying proportions of high attainers in such schools, or both of these factors could be in play, amongst others.

 

Chart 5: Most able covered in both main findings and recommendations by overall inspection rating (percentage)


A further eleven reports (13%) addressed the most able learners in the recommendations but not the main findings.

Only one report managed to feature the most able in the main findings but not in the recommendations and this was because the former recorded that ‘the most able students do well’.

Consequently, a total of 45 reports (52%) did not mention the most able in either the main findings or the recommendations.

This applied to some 56% of reports on academies (including free schools and studio schools) and 49% of reports on other state-funded schools.

So, according to these proxy measures, the most able in academies appear to receive comparatively less attention from inspectors than those in non-academy schools. It is not clear why. (The samples are almost certainly too small to support reliable comparison of academies and non-academies with different inspection ratings.)
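For transparency, the classification behind these percentages reduces to a simple cross-tabulation. A minimal sketch, assuming each report has been hand-coded into a (sector, main findings, recommendations) record – the field names and the two placeholder records are hypothetical:

from collections import Counter

# One record per report; the two shown are placeholders for the 87
# hand-coded results in the sample.
reports = [
    ("academy", True, True),
    ("non-academy", False, False),
]

tally = Counter()
for sector, in_findings, in_recs in reports:
    if in_findings and in_recs:
        tally[sector, "both"] += 1
    elif not in_findings and not in_recs:
        tally[sector, "neither"] += 1
    else:
        tally[sector, "one only"] += 1

for (sector, category), count in sorted(tally.items()):
    print(sector, category, count)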

Chart 6 below shows the inspection ratings for this subset of reports.

 

Chart 6: Most able covered in neither main findings nor recommendations by overall inspection rating (percentage)


Here is further evidence that the significant majority of outstanding schools are regarded as having no significant problems in respect of provision for the most able.

On the other hand, this is far from being universally true, since it is an issue for one in four of them. This ratio of 3:1 does not lend complete support to the oft-encountered truism that outstanding schools invariably provide outstandingly for the most able – and vice versa.

At the other end of the spectrum, and perhaps even more surprisingly, over 30% of inadequate schools are assumed not to have issues significant enough to warrant reference in these sections. Sometimes this may be because they are equally poor at providing for all their learners, so the most able are not separately singled out.

Chart 7 below shows differences by school size, giving the percentage of reports mentioning the most able in both main findings and recommendations and in neither.

It divides schools into three categories: small (24 schools with a number on roll (NOR) of 599 or lower), medium (35 schools with a NOR of 600-999) and large (28 schools with a NOR of 1,000 or higher).
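Expressed as code, the banding is simply the following (thresholds as above; the function name is my own):

def size_band(number_on_roll: int) -> str:
    # Band a school by number on roll, using the thresholds above.
    if number_on_roll <= 599:
        return "small"
    if number_on_roll <= 999:
        return "medium"
    return "large"

assert size_band(1600) == "large"  # the largest school in the sample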

 

Chart 7: Reports mentioning the most able in main findings and recommendations by school size 


It is evident that ‘neither’ exceeds ‘both’ in all three categories. Small and large schools record very similar profiles.

But there is a much more significant difference for medium-sized schools. They demonstrate a much smaller percentage of ‘both’ reports and comfortably the largest percentage of ‘neither’ reports.

This pattern – suggesting that inspectors are markedly less likely to emphasise provision for the most able in medium-sized schools – is worthy of further investigation.

It would be particularly interesting to explore further the relationship between school size, the proportion of high attainers in a school and their achievement.

 

Typical references in the main findings and recommendations

I could detect no obvious and consistent variations in these references by school status or size, but it was possible to detect a noticeably different emphasis between schools rated outstanding and those rated inadequate.

Where the most able featured in reports on outstanding schools, these included recommendations such as:

‘Further increase the proportion of outstanding teaching in order to raise attainment even higher, especially for the most able students.’ (11-16 VA comprehensive).

‘Ensure an even higher proportion of students, including the most able, make outstanding progress across all subjects’ (11-18 sponsored academy).

These statements suggest that such schools have made good progress in eradicating underachievement amongst the most able but still have further room for improvement.

But where the most able featured in recommendations for inadequate schools, they were typically of this nature:

‘Improve teaching so that it is consistently good or better across all subjects, but especially in mathematics, by: raising teachers’ expectations of the quality and amount of work students of all abilities can do, especially the most and least able.’  (11-16 sponsored academy).

‘Improve the quality of teaching in order to speed up the progress students make by setting tasks that are at the right level to get the best out of students, especially the most able.’ (11-18 sponsored academy).

‘Rapidly improve the quality of teaching, especially in mathematics, by ensuring that teachers: have much higher expectations of what students can achieve, especially the most able…’ (11-16 community school).

These make clear that poor and inconsistent teaching quality is causing significant underachievement at the top end (and ‘especially’ suggests that this top end underachievement is particularly pronounced compared with other sections of the attainment spectrum in such schools).

Recommendations for schools requiring improvement are akin to those for inadequate schools but typically more specific, pinpointing particular dimensions of good quality teaching that are absent, so limiting effective provision for the most able. It is as if these schools have some of the pieces in place but not yet the whole jigsaw.

By comparison, recommendations for good schools can seem rather more impressionistic and/or formulaic, focusing more generally on ‘increasing the proportion of outstanding teaching’. In such cases the assessment is less about missing elements and more about the consistent application of all of them across the school.

One gets the distinct impression that inspectors have a clearer grasp of the ‘fit’ between provision for the most able and the other three inspection outcomes, at least as far as the distinction between ‘good’ and ‘outstanding’ is concerned.

But it would be misleading to suggest that these lines of demarcation are invariably clear. The boundary between ‘good’ and ‘requires improvement’ seems comparatively distinct, but there was more evidence of overlap at the intersections between the other grades.

 

Coverage of the most able in the main body of reports 

References to the most able rarely turn up in the sections dealing with behaviour and safety and with leadership and management. I counted no examples of the former and no more than one or two of the latter.

I could find no examples where the information, advice and guidance (IAG) available to the most able are separately and explicitly discussed, and little specific reference to the appropriateness of the curriculum for the most able. Both are less prominent than the recommendations in the June 2013 survey report led us to expect.

Within this sample, the vast majority of reports include some description of the attainment and/or progress of the most able in the section about pupils’ achievement, while roughly half pick up the issue in relation to the quality of teaching.

The extent of the coverage of most able learners varied enormously. Some devoted a single sentence to the topic while others referred to it separately in main findings, recommendations, pupils’ achievement and quality of teaching. In a handful of cases reports seemed to give disproportionate attention to the topic.

 

Attainment and progress

Analyses of attainment and progress are sometimes entirely generic, as in:

‘The most able students make good progress’ (inadequate 11-18 community school).

‘The school has correctly identified a small number of the most able who could make even more progress’ (outstanding 11-16 RC VA school).

‘The most able students do not always secure the highest grades’ (11-16 community school requiring improvement).

‘The most able students make largely expected rates of progress. Not enough yet go on to attain the highest GCSE grades in all subjects.’ (Good 11-18 sponsored academy).

Sometimes such statements can be damning:

‘The most-able students in the academy are underachieving in almost every subject. This is even the case in most of those subjects where other students are doing well. It is an academy-wide issue.’ (Inadequate 11-18 sponsored academy).

These do not in my view constitute reporting ‘in detail on the progress of the most able pupils’ and so probably fall foul of Ofsted’s guidance to inspectors on writing reports.

More specific comments on attainment typically refer explicitly to the achievement of A*/A grades at GCSE and ideally to specific subjects, for example:

‘In 2013, standards in science, design and technology, religious studies, French and Spanish were also below average. Very few students achieved the highest A* and A grades.’ (Inadequate 11-18 sponsored academy)

‘Higher-ability students do particularly well in a range of subjects, including mathematics, religious education, drama, art and graphics. They do as well as other students nationally in history and geography.’ (13-18 community school  requiring improvement)

More specific comments on progress include:

‘The progress of the most able students in English is significantly better than that in other schools nationally, and above national figures in mathematics. However, the progress of this group is less secure in science and humanities.’  (Outstanding 11-18 sponsored academy)

‘In 2013, when compared to similar students nationally, more-able students made less progress than less-able students in English. In mathematics, where progress is less than in English, students of all abilities made similar progress.’ (11-18 sponsored academy requiring improvement).

Statements about progress rarely extend beyond English and maths (the first example above is exceptional) but, when attainment is the focus, some reports take a narrow view based exclusively on the core subjects, while others are far wider-ranging.

Despite the reference in Ofsted’s survey report, and subsequently the revised subsidiary guidance, to coverage of high attaining learners in receipt of the Pupil Premium, this is hardly ever addressed.

I could find only two examples amongst the 87 reports:

‘The gap between the achievement in English and mathematics of students for whom the school receives additional pupil premium funding and that of their classmates widened in 2013… During the inspection, it was clear that the performance of this group is a focus in all lessons and those of highest ability were observed to be achieving equally as well as their peers.’ (11-16 foundation school requiring improvement)

‘Students eligible for the pupil premium make less progress than others do and are consequently behind their peers by approximately one GCSE grade in English and mathematics. These gaps reduced from 2012 to 2013, although narrowing of the gaps in progress has not been consistent over time. More-able students in this group make relatively less progress.’ (11-16 sponsored academy requiring improvement)

More often than not it seems that the most able and those in receipt of the Pupil Premium are assumed to be mutually exclusive groups.

 

Quality of teaching 

There was little variation in the issues raised under teaching quality. Most inspectors select two or three options from a standard menu:

‘Where teaching is best, teachers provide suitably challenging materials and through highly effective questioning enable the most able students to be appropriately challenged and stretched…. Where teaching is less effective, teachers are not planning work at the right level of difficulty. Some work is too easy for the more able students in the class.’ (Good 11-16 community school)

 ‘In teaching observed during the inspection, the pace of learning for the most able students was too slow because the activities they were given were too easy. Although planning identified different activities for the most able students, this was often vague and not reflected in practice.  Work lacks challenge for the most able students.’ (Inadequate 11-16 community school)

‘In lessons where teaching requires improvement, teachers do not plan work at the right level to ensure that students of differing abilities build on what they already know. As a result, there is a lack of challenge in these lessons, particularly for the more able students, and the pace of learning is slow. In these lessons teachers do not have high enough expectations of what students can achieve.’ (11-18 community school requiring improvement)

‘Tasks set by teachers are sometimes too easy and repetitive for pupils, particularly the most able. In mathematics, pupils are sometimes not moved on quickly enough to new and more challenging tasks when they have mastered their current work.’ (9-13 community middle school requiring improvement)

‘Targets which are set for students are not demanding enough, and this particularly affects the progress of the most able because teachers across the year groups and subjects do not always set them work which is challenging. As a result, the most able students are not stretched in lessons and do not achieve as well as they should.’ (11-16 sponsored academy rated inadequate)

All the familiar themes are present – assessment informing planning, careful differentiation, pace and challenge, appropriate questioning, the application of subject knowledge, the quality of homework, high expectations and extending effective practice between subject departments.

 

Negligible coverage of the most able

Only one of the 87 reports failed to make any mention of the most able whatsoever. This is the report on North Birmingham Academy, an 11-19 mixed school requiring improvement.

This clearly does not meet the injunction to:

‘…report in detail on the progress of the most able pupils and how effectively teaching engages them with work that is challenging enough’.

It ought not to have passed through Ofsted’s quality assurance processes unscathed. The inspection was conducted in February 2014, after this guidance was issued, so there is no excuse.

Several other inspections make only cursory references to the most able in the main body of the report, for example:

‘Where teaching is not so good, it was often because teachers failed to check students’ understanding or else to anticipate when to intervene to support students’ learning, especially higher attaining students in the class.’ (Good 11-18 VA comprehensive).

‘… the teachers’ judgements matched those of the examiners for a small group of more-able students who entered early for GCSE in November 2013.’ (Inadequate 11-18 sponsored academy).

‘More-able students are increasingly well catered for as part of the academy’s focus on raising levels of challenge.’ (Good 11-18 sponsored academy).

‘The most able students do not always pursue their work to the best of their capability.’ (11-16 free school requiring improvement).

These would also fall well short of the report writing guidance. At least 6% of my sample falls into this category.

Some reports note explicitly that the most able learners are not making sufficient progress, but fail to capture this in the main findings or recommendations, for example:

‘The achievement of more able students is uneven across subjects. More able students said to inspectors that they did not feel they were challenged or stretched in many of their lessons. Inspectors agreed with this view through evidence gathered in lesson observations…lessons do not fully challenge all students, especially the more able, to achieve the grades of which they are capable.’ (11-19 sponsored academy requiring improvement).

‘The 2013 results of more-able students show they made slower progress than is typical nationally, especially in mathematics.  Progress is improving this year, but they are still not always sufficiently challenged in lessons.’ (11-18 VC CofE school requiring improvement).

‘There is only a small proportion of more-able students in the academy. In 2013 they made less progress in English and mathematics than similar students nationally. Across all of their subjects, teaching is not sufficiently challenging for more-able students and they leave the academy with standards below where they should be.’ (Inadequate 11-18 sponsored academy).

‘The proportion of students achieving grades A* and A was well below average, demonstrating that the achievement of the most able also requires improvement.’  (11-18 sponsored academy requiring improvement).

Something approaching 10% of the sample fell into this category. It was not always clear why this issue was not deemed significant enough to feature amongst schools’ priorities for improvement. This state of affairs was more typical of schools requiring improvement than inadequate schools, so one could not so readily argue that the schools concerned were overwhelmed with the need to rectify more basic shortcomings.

That said, the example from an inadequate academy above may be significant. It is almost as if the small number of more able students is the reason why this shortcoming is not taken more seriously.

Inspectors must carry in their heads a somewhat subjective hierarchy of issues that schools are expected to tackle. Some inspectors appear to feature the most able at a relatively high position in this hierarchy; others push it further down the list. Some appear more flexible in the application of this hierarchy to different settings than others.

 

Formulaic and idiosyncratic references 

There is clear evidence of formulaic responses, especially in the recommendations for how schools can improve their practice.

Many reports adopt the strategy of recommending a series of actions featuring the most able, either in the target group:

‘Improve the quality of teaching to at least good so that students, including the most able, achieve higher standards, by ensuring that: [followed by a list of actions]’ (9-13 community middle school requiring improvement)

Or in the list of actions:

‘Improve the quality of teaching in order to raise the achievement of students by ensuring that teachers:…use assessment information to plan their work so that all groups of students, including those supported by the pupil premium and the most-able students, make good progress.’ (11-16 community school requiring improvement)

It was rare indeed to come across a report that referred explicitly to interesting or different practice in the school, or approached the topic in a more individualistic manner, but here are a few examples:

‘More-able pupils are catered for well and make good progress. Pupils enjoy the regular, extra challenges set for them in many lessons and, where this happens, it enhances their progress. They enjoy that extra element which often tests them and gets them thinking about their work in more depth. Most pupils are keen to explore problems which will take them to the next level or extend their skills.’  (Good 9-13 community middle school)

‘Although the vast majority of groups of students make excellent progress, the school has correctly identified a small number of the most able who could make even more progress. It has already started an impressive programme of support targeting the 50 most able students called ‘Students Targeted A grade Results’ (STAR). This programme offers individualised mentoring using high-quality teachers to give direct intervention and support. This is coupled with the involvement of local universities. The school believes this will give further aspiration to these students to do their very best and attend prestigious universities.’  (Outstanding 11-16 VA school)

I particularly liked:

‘Policies to promote equality of opportunity are ineffective because of the underachievement of several groups of students, including those eligible for the pupil premium and the more-able students.’ (Inadequate 11-18 academy) 

 

Conclusion

 

Main Findings

The principal findings from this survey, admittedly based on a rather small and not entirely representative sample, are that:

  • Inspectors are terminologically challenged in addressing this issue, because there are too many synonyms or near-synonyms in use.
  • Approximately one-third of inspection reports address provision for the most able in both main findings and recommendations. This is less common in academies than in community, controlled and aided schools. It is most prevalent in schools with an overall ‘requires improvement’ rating, followed by those rated inadequate. It is least prevalent in outstanding schools, although one in four outstanding schools is dealt with in this way.
  • Slightly over half of inspection reports address provision for the most able in neither the main findings nor the recommendations. This is relatively more common in the academies sector and in outstanding schools. It is least prevalent in schools rated inadequate, though almost one-third of inadequate schools fall into this category. Sometimes this is the case even though provision for the most able is identified as a significant issue in the main body of the report.
  • There is an unexplained tendency for reports on medium-sized schools to be significantly less likely to feature the most able in both main findings and recommendations and significantly more likely to feature it in neither. This warrants further investigation.
  • Overall coverage of the topic varies excessively between reports. One ignored it entirely, while several provided only cursory coverage and a few covered it to excess. The scope and quality of the coverage does not necessarily correlate with the significance of the issue for the school.
  • Coverage of the attainment and progress of the most able learners is variable. Some reports offer only generic descriptions of attainment and progress combined, some are focused exclusively on attainment in the core subjects while others take a wider curricular perspective. Outside the middle school sector, desirable attainment outcomes for the most able are almost invariably defined exclusively in terms of A* and A grade GCSEs.
  • Hardly any reports consider the attainment and/or progress of the most able learners in receipt of the Pupil Premium.
  • None of these reports make specific and explicit reference to IAG for the most able. It is rarely stated whether the school’s curriculum satisfies the needs of the most able.
  • Too many reports adopt formulaic approaches, especially in the recommendations they offer the school. Too few include reference to interesting or different practice.

In my judgement, too much current inspection reporting falls short of the commitments contained in the original Ofsted survey report and of the more recent requirement to:

‘always report in detail on the progress of the most able pupils and how effectively teaching engages them with work that is challenging enough.’

 

Recommendations

  • Ofsted should publish a glossary defining clearly all the terms for the most able that it employs, so that both inspectors and schools understand exactly what is intended when a particular term is deployed and which learners should be in scope when the most able are discussed.
  • Ofsted should co-ordinate the development of supplementary guidance clarifying its expectations of schools in respect of provision for the most able. This should set out in more detail what expectations would apply for such provision to be rated outstanding, good, requiring improvement and inadequate respectively. This should include the most able in receipt of the Pupil Premium, the suitability of the curriculum and the provision of IAG.
  • Ofsted should provide supplementary guidance for inspectors outlining and exemplifying the full range of evidence they might interrogate concerning the attainment and progress of the most able learners, including those in receipt of the Pupil Premium.
  • This guidance should specify the essential minimum coverage expected in reports and the ‘triggers’ that would warrant it being referenced in the main findings and/or recommendations for action.
  • This guidance should discourage inspectors from adopting formulaic descriptors and recommendations and specifically encourage them to identify unusual or innovative examples of effective practice.
  • The school inspection handbook and subsidiary guidance should be amended to reflect the supplementary guidance.
  • The School Data Dashboard should be expanded to include key data highlighting the attainment and progress of the most able.
  • These actions should also be undertaken for inspection of the primary and 16-19 sectors respectively.

 

Overall assessment: Requires Improvement.

 

GP

May 2014