Why Can’t We Have National Consensus on Educating High Attainers?

 

 

This post proposes a statement of core principles to provoke debate and ultimately build consensus about the education of high attaining learners.

It incorporates an Aunt Sally – admittedly imperfect, provocative and prolix – to illustrate the concept and stimulate initial thinking about what such a statement might contain.

The principles are designed to underpin effective provision. They are intended to apply at every level of the education system (whether national, regional or local) and to every learning setting and age group, from entry to Reception to admission to higher education (or equivalent) and all points in between.

Alongside the draft core principles – which should have more or less global application – I offer a complementary set of ‘reform principles’ which are specific to the English context and describe how our national education reform programme might be harnessed and applied more consistently to support high attainers.

This is expressed in system-wide terms, but could be translated fairly straightforwardly into something more meaningful for schools and colleges.

 

Justification

As education reforms continue to be developed and implemented at a rapid pace, it is essential that they fit together coherently. The various reforms must mesh smoothly, like interlocking cogs, so that the whole is greater than the sum of the parts.

Coherence must be achieved across three dimensions:

  • Horizontally, across the span of education policy.
  • Vertically, across the age range, taking in the primary, secondary and tertiary sectors.
  • Laterally, across each and every learning setting to which the reforms apply.

There is a risk that such co-ordination becomes more approximate as capacity is stretched by the sheer weight of reform, especially if the central resource traditionally devoted to this task is contracting simultaneously.

In an increasingly bottom-up system, some of the responsibility for ensuring the ‘fit’ across the span of education reforms can be devolved from the centre, initially to a range of intermediary bodies and ultimately to learning settings themselves.

Regardless of where the responsibility lies, there can be a tendency to cut corners, by making these judgements with reference to some notional average learner. But this ignores the needs and circumstances of atypical constituencies including high attainers.

High attainers may even find themselves at the bottom of the pecking order amongst these atypical constituencies, typically as a consequence of the misguided view that they are more or less self-sufficient, educationally speaking.

A framework of sorts is necessary to support this process, to protect against the risk that high attainers may otherwise be short-changed and also to ensure flexibility of provision within broad but common parameters.

The Government has recently set a precedent by publishing a set of Assessment Principles ‘to underpin effective assessment systems within schools’.

This post applies that precedent to support the education of high attainers, providing a flexible framework, capable of adoption (with adaptation where necessary) by all the different bodies and settings engaged in this process.

 

The English policy context

I have sought to incorporate in the second set of ‘reform’ principles the full range of areas explored by this blog, which began life at roughly the same time as the present Government began its education reform programme.

They are designed to capture the reform agenda now, as we draw to the close of the 2013/14 academic year. They highlight aspects of reform that are likely to be dominant over the next three academic years, subject of course to any adjustments to the reform programme in the light of the 2015 General Election.

These include:

  • Introduction of a new national curriculum incorporating both greater challenge and greater flexibility, together with full exemption for academies.
  • Introduction of new assessment arrangements, including schools’ own internal assessment following the withdrawal of national curriculum levels, and revised external assessment arrangements, particularly at the end of KS2.
  • Introduction of revised GCSE and A level qualifications, including a new recalibrated grading system for GCSE.
  • Radical changes to the accountability system, including the reporting of learners’ achievement and the inspection of provision in different learning settings. 
  • Ensuring that the Pupil Premium drives accelerated progress in closing attainment gaps between disadvantaged and advantaged learners.
  • Ensuring accelerated progress against updated social mobility indicators, including improvements in fair access to selective universities.
  • Strengthening system-wide collaboration, ensuring that new types of institution play a significant role in this process, developing subject-specific support networks (especially in STEM) and building the capacity and reach of teaching school alliances.

 

Process

The Aunt Sally might be used as a starting point by a small group charged with generating a viable draft set of principles, either stand-alone or supported by any additional scaffolding deemed necessary.

The preparation of the draft core principles would itself be a consensus-establishing exercise, helping to distinguish areas of agreement and critical sticking points requiring negotiation to resolve.

This draft might be issued for consultation for a fixed period. Responses would be sought directly from a range of key national organisations, all of which would subsequently be invited to endorse formally the final version, revised in the light of consultation.

This stage might entail some further extended negotiation, but the process itself would help to raise the profile of the issue.

Out in the wider system, educators might be encouraged to interact with the final version of the principles, to discuss and record how they might be adjusted or qualified to fit their own particular settings.

There might be an online repository and forum (using a free online platform) enabling educators to discuss their response to the principles, suggest localised adjustments and variants to fit their unique contexts, provide exemplification and share supporting resources, materials and links.

Some of the key national organisations might be encouraged to develop programmes and resources within their own purlieus which would link explicitly with the core principles.

Costs would be limited to the human resource necessary to co-ordinate the initial task and subsequently curate the online repository.

 

Provisos

The focus on high attainment (as a subset of high achievement) has been selected in preference to any categorisation of high ability, talent or giftedness because there are fewer definitional difficulties, the terminology is less problematic and there should be a correspondingly stronger chance of reaching consensus.

I have not at this stage included a definition of high attainers. Potentially one could adopt the definition used in the Primary and Secondary Performance Tables, or an alternative derived from Ofsted’s ‘most able’ concept.

The PISA high achievement benchmarks could be incorporated, so permitting England to compare its progress with other countries.

But, since we are working towards new attainment measures at the end of KS2 and KS4 alike, it may be more appropriate to develop a working definition based on what we know of those measures, adapting the definition as necessary once the measures are themselves more fully defined.

In the two sections following I have set out the two parts of my Aunt Sally:

  • A set of ten core principles, designed to embody a shared philosophy underpinning the education of high attainers and
  • A parallel set of ten reform principles, designed to show how England’s education reform agenda might be adapted and applied to support the education of high attainers.

As noted above, I have cast the latter in system-wide terms, hopefully as a precursor to developing a version that will apply (with some customisation) to every learning setting. I have chosen deliberately to set out the big picture from which these smaller versions might be derived.

My Aunt Sally is imbued with a personal belief in the middle way between a bottom-up, school-driven and market-based system on one hand and a rigid, top-down and centrally prescribed system on the other. The disadvantages of the latter still live in the memory, while those of the former are writ large in the current crisis.

Some of this flavour will be obvious below, especially in the last two reform principles, which embody what I call ‘flexible framework thinking’. You will need to make some allowances if you are of a different persuasion.

I have also been deliberately a little contentious in places, so as to stimulate reaction in readers. The final version will need to be more felicitously worded, but it should still be sharp enough to have real meaning and impact.

For there is no point in generating an anodyne ‘motherhood and apple pie’ statement that has no prospect of shifting opinion and behaviour in the direction required.

Finally, the current text is too long-winded, but I judged it necessary to include some broader context and signposting for those coming to this afresh. I am hopeful that, when this is shorn away, the slimmed-down version will be closer to its fighting weight.

 

Ten Core Principles

This section sets out ten essential principles that all parts of the education system should follow in providing for high achievers.

 

  1. Raising achievement – within the education system as a whole and for each and every learner – is one of the principal aims of education. It does not conflict with other aims, or with our duty to promote learners’ personal and social development, or their health, welfare and well-being.

 

  2. Securing high achievement – increasing the proportion of high achievers and raising the achievement of existing high achievers – is integral to this aim.

 

  3. Both existing and potential high achievers have a right, equal to that of all other learners, to the blend of challenge and support they need to improve further – to become the best that they can be. No learner should be discriminated against educationally on the basis of their prior achievement, whether high or low or somewhere in between.

 

  4. We must pursue simultaneously the twin priorities of raising standards and closing gaps. We must give higher priority to all disadvantaged learners, regardless of their prior achievement. Standards should continue to rise amongst all high achievers, but they should rise faster amongst disadvantaged high achievers. This makes a valuable contribution to social mobility.

 

  5. Securing high attainment is integral to securing high achievement. The route to high attainment may involve any or all of greater breadth, increased depth and a faster pace of learning. These elements should be prioritised and combined appropriately to meet each learner’s needs; a one-size-fits-all solution should not be imposed, nor should any of these elements be ruled out automatically.

 

  6. There must be no artificial ceilings or boundaries restricting high attainment, whether imposed by chronological age or by the expertise available in the principal learning setting; equally, there must be no ‘hot-housing’, resulting from an imbalance between challenge and support and an associated failure to respond with sensitivity to the learner’s wider needs.

 

  7. High attainers are an extremely diverse and disparate population. Some are much higher attainers than others. Some may be ‘all-rounders’ while others have particular strengths and areas for development. All need the right blend of challenge and support to improve in areas of strength and in any areas of comparative weakness alike.

 

  8. Amongst the high-attaining population there is significant over-representation of some learner characteristics. But there is also significant diversity, resulting from the interaction between gender, special needs, ethnic and socio-economic background (and several other characteristics besides). This diversity can and should increase as excellence gaps are closed.

 

  9. Educators must guard against the false assumption that high attainment is a corollary of advantage. Equally, they must accept that, while effective education can make a significant difference, external factors beyond their control will also impact upon high attainment. The debate about the relative strength of genetic and environmental influences is irrelevant, except insofar as it obstructs universally high expectations and the instilling of a positive ‘growth mindset’ in all learners.

 

  10. High attainers cannot meet their own educational needs without the support of educators. Nor is it true that they have no such needs by virtue of their prior attainment. Investment in their continued improvement is valuable to them as individuals, but also to the country as a whole, economically, socially and culturally.

 

Ten Reform Principles

This section describes how different elements of educational reform might be harnessed to ensure a coherent, consistent and mutually supportive strategy for increasing high attainment.

The elements below are described in national system-wide terms, as they apply to the primary and secondary school sectors, but each should be capable of adjustment so it is directly relevant at any level of the system and to every learning setting.

 

  1. Revised national curriculum arrangements offer greater flexibility to design school curricula to meet high attainers’ needs. ‘Top down’ curriculum design, embodying the highest expectations of all learners, is preferable to a ‘deficit model’ approach derived from lowest common denominator thresholds. Exemplary models should be developed and disseminated to support schools in developing their own.

 

  2. The assessment system must enable high attainers to show what they know, understand and can do. Their needs should not be overlooked in the pursuit of universally applicable assessment processes. Formative assessment must provide accurate, constructive feedback and sustain high expectations, regardless of the starting point. Internal and external assessment alike must be free of undesirable ceiling effects.

 

  3. Regardless of their school, all high attainers should have access to opportunities to demonstrate excellence through national assessments and public examinations, including Level 6 assessment (while it exists) and early entry (where it is in their best interests). Progression across transition points – eg primary to secondary – should not require unnecessary repetition and reinforcement. It should be pre-planned, monitored and kept under review.

 

  4. High attainment measures should feature prominently when results are reported, especially in national School and College Performance Tables, but also on school websites and in the national data portal. Reporting should reveal clearly the extent of excellence gaps between the performance of advantaged and disadvantaged high attainers respectively.

 

  5. Ofsted’s inspection framework now focuses on the attainment and progress of ‘the most able’ in every school. Inspectors should adopt a consistent approach to judging all settings’ provision for high attainers, including explicit focus on disadvantaged high attainers. Inspectors and settings alike would benefit from succinct guidance on effective practice.

 

  6. The impact of the Pupil Premium on closing excellence gaps should be monitored closely. Effective practice should be captured and shared. The Education Endowment Foundation should ensure that impact on excellence gaps is mainstreamed within all its funded programmes and should also stimulate and support programmes dedicated to closing excellence gaps.

 

  7. The closing of excellence gaps should improve progression for disadvantaged high attainers, including to selective secondary, tertiary and higher education. Destination indicators should enable comparison of institutional success in this regard. Disadvantaged high attainers need access to tailored information, advice and guidance (IAG) to support fair access at every level. Targeted outreach to support effective transition is also essential at each transition point (typically 11, 16 and 18). Universities should be involved from KS2 onwards. The relevant social mobility measures should align with Pupil Premium ‘eligibility’. Concerted corrective action is required to improve progress whenever and wherever it stalls.

 

  8. System-wide collaboration is required to drive improvement. It must include all geographical areas, educational sectors and institutional types, including independent and selective schools. All silos – whether associated with localities, academy chains, teaching school alliances, subject specialism or any other subset of provision – must be broken down. This requires joint action by educational settings, voluntary sector organisations and private sector providers alike. Organisations active in the field must stop protecting their fiefdoms and work together for the common good.

 

  9. To minimise fragmentation and patchiness of provision, high attaining learners should have guaranteed access to a menu of opportunities organised within a coherent but flexible framework. Their schools, as lead providers, should facilitate and co-ordinate on their behalf. A similar approach is required to support educators with relevant school improvement, initial training, professional development and research. To support this parallel framework, both theoretical and practical knowledge of the ‘pedagogy of high attainment’ should be collected, organised and shared.

 

  10. All providers should be invited to position their services within these frameworks, using intelligence about the balance between demand and supply to inform the development of new products and services. Responsibility for overseeing the frameworks and for monitoring and reporting progress should be allocated to an independent entity within this national community. As far as possible this should be a self-funding and self-sustaining system.

 

Next Steps

I have already had some welcome interest in developing a set of core principles to support the education of high attaining learners.

This may be a vehicle to stimulate a series of useful partnerships, but it would be premature to publicise these preliminary discussions in case they do not reach fruition.

This post is intended to stimulate others to consider the potential benefits of such an approach – and I am at your service should you wish to discuss the idea further.

But if I have only caused you to reflect more deeply about your personal contribution to the education of high attainers, this effort will have been worthwhile.

GP

May 2014

 

 

‘Poor but Bright’ v ‘Poor but Dim’

 

This post explores whether, in supporting learners from disadvantaged backgrounds, educators should prioritise low attainers over high attainers, or give them equal priority.

 

 

 

Introduction

Last week I took umbrage at a blog post and found myself engaged in a Twitter discussion with the author, one Mr Thomas.

 

 

Put crudely, the discussion hinged on the question whether the educational needs of ‘poor but dim’ learners should take precedence over those of the ‘poor but bright’. (This is Mr Thomas’s shorthand, not mine.)

He argued that the ‘poor but dim’ are the higher priority; I countered that all poor learners should have equal priority, regardless of their ability and prior attainment.

We began to explore the issue:

  • as a matter of educational policy and principle
  • with reference to inputs – the allocation of financial and human resources between these competing priorities and
  • in terms of outcomes – the comparative benefits to the economy and to society from investment at the top or the bottom of the attainment spectrum.

This post presents the discussion, adding more flesh and gloss from the Gifted Phoenix perspective.

It might or might not stimulate some interest in how this slightly different take on a rather hoary old chestnut plays out in England’s current educational landscape.

But I am particularly interested in how gifted advocates in different countries respond to these arguments. What is the consensus, if any, on the core issue?

Depending on the answer to this first question, how should gifted advocates frame the argument for educationalists and the wider public?

To help answer the first question I have included a poll at the end of the post.

Do please respond to that – and feel free to discuss the second question in the comments section below.

The structure of the post is fairly complex, comprising:

  • A (hopefully objective) summary of Mr Thomas’s original post.
  • An embedded version of the substance of our Twitter conversation. I have removed some Tweets – mostly those from third parties – and reordered a little to make this more accessible. I don’t believe I’ve done any significant damage to either case.
  • Some definition of terms, because there is otherwise much cause for confusion as we push further into the debate.
  • A digressionary exploration of the evidence base, dealing with attainment data and budget allocations respectively. The former exposes what little we are told about how socio-economic gaps vary across the attainment spectrum; the latter is relevant to the discussion of inputs. Those pressed for time may wish to proceed directly to…
  • …A summing up, which expands in turn the key points we exchanged on the point of principle, on inputs and on outcomes respectively.

I have reserved until close to the end a few personal observations about the encounter and how it made me feel.

And I conclude with the customary brief summary of key points and the aforementioned poll.

It is an ambitious piece and I am in two minds as to whether it hangs together properly, but you are ultimately the judges of that.

 

What Mr Thomas Blogged

The post was called ‘The Romance of the Poor but Bright’ and the substance of the argument (incorporating several key quotations) ran like this:

  • The ‘effort and resources, of schools but particularly of business and charitable enterprise, are directed disproportionately at those who are already high achieving – the poor but bright’.
  • Moreover ‘huge effort is expended on access to the top universities, with great sums being spent to make marginal improvements to a small set of students at the top of the disadvantaged spectrum. They cite the gap in entry, often to Oxbridge, as a significant problem that blights our society.’
  • This however is ‘the pretty face of the problem. The far uglier face is the gap in life outcomes for those who take least well to education.’
  • Popular discourse is easily caught up in ‘the romance of the poor but bright’ but ‘we end up ignoring the more pressing problem – of students for whom our efforts will determine whether they ever get a job or contribute to society’. For ‘when did you last hear someone advocate for the poor but dim?’
  • ‘The gap most damaging to society is in life outcomes for the children who perform least well at school.’ Three areas should be prioritised to improve their educational outcomes:

    • Improving alternative provision (AP), which ‘operates as a shadow school system, largely unknown and wholly unappreciated’ – ‘developing a national network of high-quality alternative provision…must be a priority if we are to close the gap at the bottom’.

    • Improving ‘consistency in SEN support’, because ‘schools are often ill equipped to cope with these, and often manage only because of the extraordinary effort of dedicated staff’. There is ‘inconsistency in funding and support between local authorities’.

    • Introducing clearer assessment of basic skills, ‘so that a student could not appear to be performing well unless they have mastered the basics’.

  • While ‘any student failing to meet their potential is a dreadful thing’, the educational successes of ‘students with incredibly challenging behaviour’ and ‘complex special needs…have the power to change the British economy, far more so than those of their brighter peers.’

A footnote adds ‘I do not believe in either bright or dim, only differences in epigenetic coding or accumulated lifetime practice, but that is a discussion for another day.’

Indeed it is.

 

Our ensuing Twitter discussion

The substance of our Twitter discussion is captured in the embedded version immediately below. (Scroll down to the bottom for the beginning and work your way back to the top.)

 

 

Defining Terms

 

Poor, Bright and Dim

I take ‘poor’ to mean socio-economic disadvantage, as opposed to any disadvantage attributable to the behaviours, difficulties, needs, impairments or disabilities associated with AP and/or SEN.

I recognise of course that such a distinction is more theoretical than practical, because, when learners experience multiple causes of disadvantage, the educational response must be holistic rather than disaggregated.

Nevertheless, the meaning of ‘poor’ is clear – that term cannot be stretched to include these additional dimensions of disadvantage.

The available performance data foregrounds two measures of socio-economic disadvantage: current eligibility for and take-up of free school meals (FSM), and qualification for the deprivation element of the Pupil Premium, determined by FSM eligibility at some point within the last six years (known as ‘ever-6’).
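
To make the ‘ever-6’ distinction concrete, here is a minimal sketch – illustrative only; the function names are mine and the yearly true/false history is a simplification of how eligibility is actually recorded:

```python
def fsm_now(fsm_by_year: list[bool]) -> bool:
    """Current FSM measure: eligible for and taking up free school meals this year."""
    return fsm_by_year[-1]

def pupil_premium_ever6(fsm_by_year: list[bool]) -> bool:
    """Deprivation element of the Pupil Premium: FSM-eligible at any point
    in the last six years, including the current one ('ever-6')."""
    return any(fsm_by_year[-6:])

# A pupil eligible two years ago but not since counts as 'ever-6', not current FSM.
history = [False, True, False, False]                   # oldest to newest
print(fsm_now(history), pupil_premium_ever6(history))   # False True
```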

Both are used in this post. Distinctions are typically drawn between disadvantaged and non-disadvantaged learners, though some of the supporting data compares outcomes for disadvantaged learners with outcomes for all learners, advantaged and disadvantaged alike.

The gaps that need closing are therefore:

  • between ‘poor but bright’ and other ‘bright’ learners (The Excellence Gap) and
  • between ‘poor but dim’ and other ‘dim’ learners (which I will christen The Foundation Gap).

The core question is whether The Foundation Gap takes precedence over The Excellence Gap or vice versa, or whether they should have equal billing.

This involves immediate and overt recognition that classification as AP and/or SEN is not synonymous with the epithet ‘poor’, because there are many comparatively advantaged learners within these populations.

But such a distinction is not properly established in Mr Thomas’s blog, which applies the epithet ‘poor’ but then treats the AP and SEN populations as homogeneous and somehow associated with it.

 

By ‘dim’ I take Mr Thomas to mean the lowest segment of the attainment distribution – one of his tweets specifically mentions ‘the bottom 20%’. The AP and/or SEN populations are likely to be disproportionately represented within these two deciles, but they are not synonymous with it either.

This distinction will not be lost on gifted advocates who are only too familiar with the very limited attention paid to twice exceptional learners.

Those from poor backgrounds within the AP and/or SEN populations are even more likely to be disproportionately represented in ‘the bottom 20%’ than their more advantaged peers, but even they will not constitute the entirety of ‘the bottom 20%’. A Venn diagram would likely show significant overlap, but that is all.

Hence the disadvantaged AP/SEN population is almost certainly a weak proxy for the ‘poor but dim’.

That said, I could find no data that quantifies these relationships.

The School Performance Tables distinguish a ‘low attainer’ cohort. (In the Secondary Tables the definition is determined by prior KS2 attainment and in the Primary Tables by prior KS1 attainment.)

These populations comprise some 15.7% of the total population in the Secondary Tables and about 18.0% in the Primary Tables. But neither set of Tables applies the distinction in its reporting of the attainment of those from disadvantaged backgrounds.

 

It follows from the definition of ‘dim’ that, by ‘bright’, Mr Thomas probably intends the two corresponding deciles at the top of the attainment distribution (even though he seems most exercised about the subset with the capacity to progress to competitive universities, particularly Oxford and Cambridge – a far more select group of exceptionally high attainers, and an even smaller group of exceptionally high attainers from disadvantaged backgrounds).

A few AP and/or SEN students will likely fall within this wider group, fewer still within the subset of exceptionally high attainers. AP and/or SEN students from disadvantaged backgrounds will be fewer again, if indeed there are any at all.

The same issues with data apply. The School Performance Tables distinguish ‘high attainers’, who constitute over 32% of the secondary cohort and 25% of the primary cohort. As with low attainers, we cannot isolate the performance of those from disadvantaged backgrounds.

We are forced to rely on what limited data is made publicly available to distinguish the performance of disadvantaged low and high attainers.

At the top of the distribution there is a trickle of evidence about performance on specific high attainment measures and access to the most competitive universities. Still greater transparency is fervently to be desired.

At the bottom, I can find very little relevant data at all – we are driven inexorably towards analyses of the SEN population, because that is the only dataset differentiated by disadvantage, even though we have acknowledged that such a proxy is highly misleading. (Equivalent AP attainment data seems conspicuous by its absence.)

 

AP and SEN

Before exploring these datasets I ought to provide some description of the different programmes and support under discussion here, if only for the benefit of readers who are unfamiliar with the English education system.

 

Alternative Provision (AP) is intended to meet the needs of a variety of vulnerable learners:

‘They include pupils who have been excluded or who cannot attend mainstream school for other reasons: for example, children with behaviour issues, those who have short- or long-term illness, school phobics, teenage mothers, pregnant teenagers, or pupils without a school place.’

AP is provided in a variety of settings where learners engage in timetabled education activities away from their school and school staff.

Providers include further education colleges, charities, businesses, independent schools and the public sector. Pupil Referral Units (PRUs) are perhaps the best-known settings – there are some 400 nationally.

A review of AP was undertaken by Taylor in 2012 and the Government subsequently embarked on a substantive improvement programme. This rather gives the lie to Mr Thomas’s contention that AP is ‘largely unknown and wholly unappreciated’.

Taylor complains of a lack of reliable data about the number of learners in AP but notes that the DfE’s 2011 AP census recorded 14,050 pupils in PRUs and a further 23,020 in other settings on a mixture of full-time and part-time placements. This suggests a total of slightly over 37,000 learners, though the FTE figure is unknown.

He states that AP learners are:

‘…twice as likely as the average pupil to qualify for free school meals’

A supporting Equality Impact Assessment qualifies this somewhat:

‘In Jan 2011, 34.6% of pupils in PRUs and 13.8%* of pupils in other AP, were eligible for and claiming free school meals, compared with 14.6% of pupils in secondary schools. [*Note: in some AP settings, free school meals would not be available, so that figure is under-stated, but we cannot say by how much.]’

If the PRU population is typical of the wider AP population, approximately one third qualify under this FSM measure of disadvantage, meaning that the substantial majority are not ‘poor’ according to our definition above.

Taylor confirms that overall GCSE performance in AP is extremely low, pointing out that in 2011 just 1.4% achieved five or more GCSE grades A*-C including [GCSEs in] maths and English, compared to 53.4% of pupils in all schools.

By 2012/13 the comparable percentages were 1.7% and 61.7% respectively (the latter for all state-funded schools), suggesting an increasing gap in overall performance. This is a cause for concern but not directly relevant to the issue under consideration.

The huge disparity is at least partly explained by the facts that many AP students take alternative qualifications and that the national curriculum does not apply to PRUs.

Data is available showing the full range of qualifications pursued. Taylor recommended that all students in AP should continue to receive ‘appropriate and challenging English and Maths teaching’.

Interestingly, he also pointed out that:

‘In some PRUs and AP there is no provision for more able pupils who end up leaving without the GCSE grades they are capable of earning.’

However, he fails to offer a specific recommendation to address this point.

 

Special Educational Needs (SEN) are needs or disabilities that affect children’s ability to learn. These may include behavioural and social difficulties, learning difficulties or physical impairments.

This area has also been subject to a major Government reform programme now being implemented.

There is significant overlap between AP and SEN, with Taylor’s review of the former noting that the population in PRUs is 79% SEN.

We know from the 2013 SEN statistics that 12.6% of all pupils on roll at PRUs had SEN statements and 68.9% had SEN without statements. But these populations represent only a tiny proportion of the total SEN population in schools.

SEN learners also have higher than typical eligibility for FSM. In January 2013, 30.1% of all SEN categories across all primary, secondary and special schools were FSM-eligible, roughly twice the rate for all pupils. However, this means that almost seven in ten are not caught by the definition of ‘poor’ provided above.

In 2012/13 23.4% of all SEN learners achieved five or more GCSEs at A*-C or equivalent, including GCSEs in English and maths, compared with 70.4% of those having no identified SEN – another significant overall gap, but not directly relevant to our comparison of the ‘poor but bright’ and the ‘poor but dim’.

 

Data on socio-economic attainment gaps across the attainment spectrum

Those interested in how socio-economic attainment gaps vary at different attainment levels cannot fail to be struck by how little material of this kind is published, particularly in the secondary sector, where such gaps tend to increase in size.

One cannot entirely escape the conviction that this reticence deliberately masks some inconvenient truths.

  • The ideal would be to have the established high/middle/low attainer distinctions mapped directly onto performance by advantaged/disadvantaged learners in the Performance Tables but, as we have indicated, this material is conspicuous by its absence. Perhaps it will appear in the Data Portal now under development.
  • Our next best option is to examine socio-economic attainment gaps on specific attainment measures that will serve as decent proxies for high/middle/low attainment. We can do this to some extent but the focus is disproportionately on the primary sector because the Secondary Tables do not include proper high attainment measures (such as measures based exclusively on GCSE performance at grades A*/A). Maybe the Portal will come to the rescue here as well. We can however supply some basic Oxbridge fair access data.
  • The least preferable option is to deploy our admittedly poor proxies for low attainers – SEN and AP. But there isn’t much information from this source either.

The analysis below looks consecutively at data for the primary and secondary sectors.

 

Primary

We know, from the 2013 Primary School Performance Tables, that the percentages of disadvantaged and other learners achieving different KS2 levels in reading, writing and maths combined, in 2013 and 2012 respectively, were as follows:

 

Table 1: Percentage of disadvantaged and all other learners achieving each national curriculum level at KS2 in 2013 in reading, writing and maths combined

              L3 or below      L4 or above      L4B or above     L5 or above
              Dis  Oth  Gap    Dis  Oth  Gap    Dis  Oth  Gap    Dis  Oth  Gap
2013          13   5    +8     63   81   -18    49   69   -20    10   26   -16
2012          x    x    x      61   80   -19    x    x    x      9    24   -15

 

This tells us relatively little, apart from the fact that disadvantaged learners are heavily over-represented at L3 and below and heavily under-represented at L5 and above.

The L5 gap is somewhat lower than the gaps at L4 and 4B respectively, but not markedly so. However, the L5 gap has widened slightly since 2012 while the reverse is true at L4.

This next table synthesises data from SFR51/13: ‘National curriculum assessments at key stage 2: 2012 to 2013’. It also shows gaps for disadvantage, as opposed to FSM gaps.

 

Table 2: Percentage of disadvantaged and all other learners achieving each national curriculum level, including differentiation by gender, in each 2013 end of KS2 test

                L3              L4              L4B             L5              L6
                D   O   Gap     D   O   Gap     D   O   Gap     D   O   Gap     D   O   Gap
Reading  All    12  6   +6      48  38  +10     63  80  -17     30  50  -20     0   1   -1
         B      13  7   +6      47  40  +7      59  77  -18     27  47  -20     0   0   0
         G      11  5   +6      48  37  +11     67  83  -16     33  54  -21     0   1   -1
GPS      All    28  17  +11     28  25  +3      52  70  -18     33  51  -18     1   2   -1
         B      30  20  +10     27  27  0       45  65  -20     28  46  -18     0   2   -2
         G      24  13  +11     28  24  +4      58  76  -18     39  57  -18     1   3   -2
Maths    All    16  9   +7      50  41  +9      62  78  -16     24  39  -15     2   8   -6
         B      15  8   +7      48  39  +9      63  79  -16     26  39  -13     3   10  -7
         G      17  9   +8      52  44  +8      61  78  -17     23  38  -15     2   7   -5

 

This tells a relatively consistent story across each test and for boys as well as girls.

We can see that, at Level 4 and below, learners from disadvantaged backgrounds are over-represented, perhaps with the exception of L4 GPS. But at L4B and above they are very much under-represented.

Moreover, with the exception of L6 where low percentages across the board mask the true size of the gaps, disadvantaged learners tend to be significantly more under-represented at L4B and above than they are over-represented at L4 and below.

A different way of looking at this data is to compare the percentages of advantaged and disadvantaged learners respectively at L4 and L5 in each assessment.

  • Reading: Amongst disadvantaged learners the proportion at L5 is 18 percentage points lower than the proportion at L4, but amongst advantaged learners the proportion at L5 is 12 percentage points higher than at L4.
  • GPS: Amongst disadvantaged learners the proportion at L5 is 5 percentage points higher than the proportion at L4, but amongst advantaged learners the proportion at L5 is 26 percentage points higher than at L4.
  • Maths: Amongst disadvantaged learners the proportion at L5 is 26 percentage points lower than the proportion at L4, but amongst advantaged learners the proportion at L5 is only 2 percentage points lower than at L4.
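
For readers who want to check the arithmetic, a minimal Python sketch reproducing these within-group differences from the Table 2 figures (the numbers come from the table above; the code itself is purely illustrative):

```python
# Percentages at L4 and L5 in the 2013 KS2 tests (all pupils), from Table 2:
# subject -> ((disadvantaged L4, L5), (others L4, L5))
table2 = {
    "Reading": ((48, 30), (38, 50)),
    "GPS":     ((28, 33), (25, 51)),
    "Maths":   ((50, 24), (41, 39)),
}

for subject, (dis, oth) in table2.items():
    dis_diff = dis[1] - dis[0]  # L5 minus L4 within the disadvantaged group
    oth_diff = oth[1] - oth[0]  # L5 minus L4 within the other group
    print(f"{subject}: disadvantaged {dis_diff:+d}pp, others {oth_diff:+d}pp")

# Reading: disadvantaged -18pp, others +12pp
# GPS: disadvantaged +5pp, others +26pp
# Maths: disadvantaged -26pp, others -2pp
```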

If we look at 2013 gaps compared with 2012 (with teacher assessment of writing included in place of the GPS test introduced in 2013) we can see there has been relatively little change across the board, with the exception of L5 maths, which has been affected by the increasing success of advantaged learners at L6.

 

Table 3: Percentage of disadvantaged and all other learners achieving national curriculum levels 3-6 in each of reading, writing and maths in 2012 and 2013 respectively

                L3              L4              L5              L6
                D   O   Gap     D   O   Gap     D   O   Gap     D   O   Gap
Reading  2012   11  6   +5      46  36  +10     33  54  -21     0   0   0
         2013   12  6   +6      48  38  +10     30  50  -20     0   1   -1
Writing  2012   22  11  +11     55  52  +3      15  32  -17     0   1   -1
         2013   19  10  +9      56  52  +4      17  34  -17     1   2   -1
Maths    2012   17  9   +8      50  41  +9      23  43  -20     1   4   -3
         2013   16  9   +7      50  41  +9      24  39  -15     2   8   -6

 

To summarise, as far as KS2 performance is concerned, there are significant imbalances at both the top and the bottom of the attainment distribution and these gaps have not changed significantly since 2012. There is some evidence to suggest that gaps at the top are larger than those at the bottom.

 

Secondary

Unfortunately there is a dearth of comparable data at secondary level, principally because of the absence of published measures of high attainment.

SFR05/2014 provides us with FSM gaps (as opposed to disadvantaged gaps) for a series of GCSE measures, none of which serve our purpose particularly well:

  • 5+ A*-C GCSE grades: gap = 16.0%
  • 5+ A*-C grades including English and maths GCSEs: gap = 26.7%
  • 5+ A*-G grades: gap = 7.6%
  • 5+ A*-G grades including English and maths GCSEs: gap = 9.9%
  • A*-C grades in English and maths GCSEs: gap = 26.6%
  • Achieving the English Baccalaureate: gap = 16.4%

Perhaps all we can deduce is that the gaps vary considerably in size, but tend to be smaller for the relatively less demanding and larger for the relatively more demanding measures.

For specific high attainment measures we are forced to rely principally on data snippets released in answer to occasional Parliamentary Questions.

For example:

  • In 2003, 1.0% of FSM-eligible learners achieved five or more GCSEs at A*/A including English and maths but excluding equivalents, compared with 6.8% of those not eligible, giving a gap of 5.8%. By 2009 the comparable percentages were 1.7% and 9.0% respectively, giving an increased gap of 7.3% (Col 568W)
  • In 2006/07, the percentage of FSM-eligible pupils securing A*/A grades at GCSE in different subjects, compared with the percentage of all pupils in maintained schools doing so were as shown in the table below (Col 808W)

 

Table 4: Percentage of FSM-eligible and all pupils achieving GCSE A*/A grades in different GCSE subjects in 2007

            FSM   All pupils   Gap
Maths       3.7   15.6         11.9
Eng lit     4.1   20.0         15.9
Eng lang    3.5   16.4         12.9
Physics     2.2   49.0         46.8
Chemistry   2.5   48.4         45.9
Biology     2.5   46.8         44.3
French      3.5   22.9         19.4
German      2.8   23.2         20.4

 

  • In 2008, 1% of FSM-eligible learners in maintained schools achieved A* in GCSE maths compared with 4% of all pupils in maintained schools. The comparable percentages for Grade A were 3% and 10% respectively, giving an A*/A gap of 10% (Col 488W)

There is much variation in the subject-specific outcomes at A*/A described above. But, when it comes to the overall 5+ GCSEs high attainment measure based on grades A*/A, the gap is much smaller than on the corresponding standard measure based on grades A*-C.

Further material of broadly the same vintage is available in a 2007 DfE statistical publication: ‘The Characteristics of High Attainers’.

There is a complex pattern in evidence here which is very hard to explain with the limited data available. More time series data of this nature – illustrating Excellence and Foundation Gaps alike – should be published annually so that we have a more complete and much more readily accessible dataset.

I could find no information at all about the comparative performance of disadvantaged learners in AP settings compared with those not from disadvantaged backgrounds.

Data is published showing the FSM gap for SEN learners on all the basic GCSE measures listed above. I have retained the generic FSM gaps in brackets for the sake of comparison:

  • 5+ A*-C GCSE grades: gap = 12.5% (16.0%)
  • 5+ A*-C grades including English and maths GCSEs: gap = 12.1% (26.7%)
  • 5+ A*-G grades: gap = 10.4% (7.6%)
  • 5+ A*-G grades including English and maths GCSEs: gap = 13.2% (9.9%)
  • A*-C grades in English and maths GCSEs: gap = 12.3% (26.6%)
  • Achieving the English Baccalaureate: gap = 3.5% (16.4%)

One can see that the FSM gaps for the more demanding measures are generally lower for SEN learners than they are for all learners. This may be interesting but, for the reasons given above, this is not a reliable proxy for the FSM gap amongst ‘dim’ learners.

 

When it comes to fair access to Oxbridge, I provided a close analysis of much relevant data in this post from November 2013.

The chart below shows the number of 15 year-olds eligible for and claiming FSM at age 15 who progressed to Oxford or Cambridge by age 19. The figures are rounded to the nearest five.

 

Chart 1: FSM-eligible learners admitted to Oxford and Cambridge 2005/06 to 2010/11


 

In sum, there has been no change in these numbers over the last six years for which data has been published. So while there may have been consistently significant expenditure on access agreements and multiple smaller mentoring programmes, it has had negligible impact on this measure at least.

My previous post set out a proposal for what to do about this sorry state of affairs.

 

Budgets

For the purposes of this discussion we need ideally to identify and compare total national budgets for the ‘poor but bright’ and the ‘poor but dim’. But that is simply not possible. 

Many funding streams cannot be disaggregated in this manner. As we have seen, some – including the AP and SEN budgets – may be aligned erroneously with the second of these groups, although they also support learners who are neither ‘poor’ nor ‘dim’ and have a broader purpose than raising attainment. 

There may be some debate, too, about which funding streams should be weighed in the balance.

On the ‘bright but poor’ side, do we include funding for grammar schools, even though the percentage of disadvantaged learners attending many of them is virtually negligible (despite recent suggestions that some are now prepared to do something about this)? Should the Music and Dance Scheme (MDS) be within scope of this calculation?

The best I can offer is a commentary that gives a broad sense of orders of magnitude, to illustrate in very approximate terms how the scales tend to tilt more towards the ‘poor but dim’ rather than the ‘poor but bright’, but also to weave in a few relevant asides about some of the funding streams in question.

 

Pupil Premium and the EEF

I begin with the Pupil Premium – providing schools with additional funding to raise the attainment of disadvantaged learners.

The Premium is not attached to the learners who qualify for it, so schools are free to aggregate the funding and use it as they see fit. They are held accountable for these decisions through Ofsted inspection and the gap-narrowing measures in the Performance Tables.

Mr Thomas suggests in our Twitter discussion that AP students are not significant beneficiaries of such support, although provision in PRUs features prominently in the published evaluation of the Premium. It is for local authorities to determine how Pupil Premium funding is allocated in AP settings.

One might also make a case that ‘bright but poor’ learners are not a priority either, despite suggestions from the Pupil Premium Champion to the contrary.

 

 

As we have seen, the Performance Tables are not sharply enough focused on the excellence gaps at the top of the distribution and I have shown elsewhere that Ofsted’s increased focus on the most able does not yet extend to the impact on those attracting the Pupil Premium, even though there was a commitment that it would do so. 

If there is Pupil Premium funding heading towards high attainers from disadvantaged backgrounds, the limited data to which we have access does not yet suggest a significant impact on the size of Excellence Gaps. 

The ‘poor but bright’ are not a priority for the Education Endowment Foundation (EEF) either.

This 2011 paper explains that it is prioritising the performance of disadvantaged learners in schools below the floor targets. At one point it says:

‘Looking at the full range of GCSE results (as opposed to just the proportions who achieve the expected standards) shows that the challenge facing the EEF is complex – it is not simply a question of taking pupils from D to C (the expected level of attainment). Improving results across the spectrum of attainment will mean helping talented pupils to achieve top grades, while at the same time raising standards amongst pupils currently struggling to pass.’

But this is just after it has shown that the percentages of disadvantaged high attainers in its target schools are significantly lower than elsewhere. Other things being equal, the ‘poor but dim’ will be the prime beneficiaries.

It may now be time for the EEF to expand its focus to all schools. A diagram from this paper – reproduced below – demonstrates that, in 2010, the attainment gap between FSM and non-FSM was significantly larger in schools above the floor than in those below the floor that the EEF is prioritising. This is true in both the primary and secondary sectors.

It would be interesting to see whether this is still the case.

 

[Diagram from the EEF paper: FSM/non-FSM attainment gaps in schools above and below the floor targets, primary and secondary]

 

AP and SEN

Given the disaggregation problems discussed above, this section is intended simply to give some basic sense of orders of magnitude – lending at least some evidence to counter Mr Thomas’s assertion that the ‘effort and resources, of schools… are directed disproportionately at those who are already high achieving – the poor but bright’.

It is surprisingly hard to get a grip on the overall national budget for AP. A PQ from early 2011 (Col 75W) supplies a net current expenditure figure for all English local authorities of £530m.

Taylor’s Review fails to offer a comparable figure but my rough estimate, based on the per-pupil costs he supplies, suggests a revenue budget of at least £400m. (Taylor suggests average per-pupil costs of £9,500 per year for full-time AP, although PRU places are said to cost between £12,000 and £18,000 per annum.)
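
For what it is worth, one way of reaching a figure of that order is sketched below. The headcounts are the 2011 AP census figures quoted earlier; the assumptions that all placements are full-time and that PRU places cost the mid-point of the quoted £12,000-£18,000 range are mine, so treat the result as a rough upper bound:

```python
# 2011 AP census headcounts (quoted above) and Taylor's per-pupil costs
pru_pupils      = 14_050   # pupils in PRUs
other_ap_pupils = 23_020   # pupils in other AP settings
pru_cost        = 15_000   # assumed: mid-point of the quoted £12k-£18k PRU range
other_cost      = 9_500    # Taylor's average for full-time AP

estimate = pru_pupils * pru_cost + other_ap_pupils * other_cost
print(f"£{estimate / 1e6:.0f}m")  # about £429m, consistent with 'at least £400m'
```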

I found online a consultation document from Kent – England’s largest local authority – stating its revenue costs at over £11m in FY2014-15. Approximately 454 pupils attended Kent’s AP/PRU provision in 2012-13.

There must also be a significant capital budget. There are around 400 PRUs, not to mention a growing cadre of specialist AP academies and free schools. The total capital cost of the first AP free school – Derby Pride Academy – was £2.147m for a 50 place setting.

In FY2011-12, total annual national expenditure on SEN was £5.77 billion (Col 391W). There will have been some cost-cutting as a consequence of the latest reforms, but the order of magnitude is clear.

The latest version of the SEN Code of Practice outlines the panoply of support available, including the compulsory requirement that each school has a designated teacher to be responsible for co-ordinating SEN provision (the SENCO).

In short, the national budget for AP is sizeable and the national budget for SEN is huge. Per capita expenditure is correspondingly high. If we could isolate the proportion of these budgets allocated to raising the attainment of the ‘poor but dim’, the total would be substantial.

 

Fair Access, especially to Oxbridge, and some related observations

Mr Thomas refers specifically to funding to support fair access to universities – especially Oxbridge – for those from disadvantaged backgrounds. This is another area in which it is hard to get a grasp on total expenditure, not least because of the many small-scale mentoring projects that exist.

Mr Thomas is quite correct to remark on the sheer number of these, although they are relatively small beer in budgetary terms. (One suspects that they would be much more efficient and effective if they could be linked together within some sort of overarching framework.)

The Office for Fair Access (OFFA) estimates University access agreement expenditure on outreach in 2014-15 at £111.9m and this has to be factored in, as does DfE’s own small contribution – the Future Scholar Awards.

Were any expenditure in this territory to be criticised, it would surely be the development and capital costs for new selective 16-19 academies and free schools that specifically give priority to disadvantaged students.

The sums are large, perhaps not outstandingly so compared with national expenditure on SEN for example, but they will almost certainly benefit only a tiny localised proportion of the ‘bright but poor’ population.

There are several such projects around the country. Some of the most prominent are located in London.

The London Academy of Excellence (capacity 420) is fairly typical. It cost an initial £4.7m to establish, plus a lease requiring a further £400K annually.

But this is dwarfed by the projected costs of the Harris Westminster Sixth Form, scheduled to open in September 2014. Housed in a former government building, the capital cost is reputed to be £45m for a 500-place institution.

There were reportedly disagreements within Government:

‘It is understood that the £45m cost was subject to a “significant difference of opinion” within the DfE where critics say that by concentrating large resources on the brightest children at a time when budgets are constrained means other children might miss out…

But a spokeswoman for the DfE robustly defended the plans tonight. “This is an inspirational collaboration between the country’s top academy chain and one of the best private schools in the country,” she said. “It will give hundreds of children from low income families across London the kind of top quality sixth-form previously reserved for the better off.”’

Here we have in microcosm the debate to which this post is dedicated.

One blogger – a London College Principal – pointed out that the real issue was not whether the brightest should benefit over others, but how few of the ‘poor but bright’ would do so:

‘£45m could have a transformative effect on thousands of 16-19 year olds across London… £45m could have funded at least 50 extra places in each college for over 10 years, helped build excellent new facilities for all students and created a city-wide network to support gifted and talented students in sixth forms across the capital working with our partner universities and employers.’

 

Summing Up

There are three main elements to the discussion: the point of principle, the inputs and the impact. The following sections deal with each of these in turn.

 

Principle

Put bluntly, should ‘poor but dim’ kids have higher priority for educators than ‘poor but bright’ kids (Mr Thomas’s position), or should all poor kids have equal priority and an equal right to the support they need to achieve their best (the Gifted Phoenix position)?

For Mr Thomas, it seems this priority is determined by whether – and how far – the learner is behind undefined ‘basic levels of attainment’ and/or mastery of ‘the basics’ (presumably literacy and numeracy).

Those below the basic attainment threshold have higher priority than those above it. He does not say so but this logic suggests that those furthest below the threshold are the highest priority and those furthest above are the lowest.

So, pursued to its logical conclusion, this would mean that the highest attainers would get next to no support while a human vegetable would be the highest priority of all.

However, since Mr Thomas’ focus is on marginal benefit, it may be that those nearest the threshold would be first in the queue for scarce resources, because they would require the least effort and resources to lift above it.

This philosophy drives the emphasis on achievement of national benchmarks and predominant focus on borderline candidates that, until recently, dominated our assessment and accountability system.

For Gifted Phoenix, every socio-economically disadvantaged learner has an equal claim to the support they need to improve their attainment, by virtue of that disadvantage.

There is no question of elevating some ahead of others in the pecking order because they are further behind on key educational measures since, in effect, that is penalising some disadvantaged learners on the grounds of their ability or, more accurately, their prior attainment.

This philosophy underpins the notion of personalised education and is driving the recent and welcome reforms of the assessment and accountability system, designed to ensure that schools are judged by how well they improve the attainment of all learners, rather than predominantly on the basis of the proportion achieving the standard national benchmarks.

I suggested that, in deriding ‘the romance of the poor but bright’, Mr Thomas ran the risk of falling into ‘the slough of anti-elitism’. He rejected that suggestion, while continuing to emphasise the need to ‘concentrate more’ on ‘those at risk of never being able to engage with society’.

I have made the assumption that Mr Thomas is interested primarily in KS2, and in GCSE or equivalent qualifications at KS4, given his references to KS2 L4, basic skills and ‘paper qualifications needed to enter meaningful employment’.

But his additional references to ‘real qualifications’ (as opposed to paper ones) and engaging with society could well imply a wider range of personal, social and work-related skills for employability and adult life.

My preference for equal priority would apply regardless: there is no guarantee that high attainers from disadvantaged backgrounds will necessarily possess these vital skills.

But, as indicated in the definition above, there is an important distinction to be maintained between:

  • educational support to raise the attainment, learning and employability skills of socio-economically disadvantaged learners and prepare them for adult life and
  • support to manage a range of difficulties – whether behavioural problems, disability, physical or mental impairment – that impact on the broader life chances of the individuals concerned.

Such a distinction may well be masked in the everyday business of providing effective holistic support for learners facing such difficulties, but this debate requires it to be made and sustained given Mr Thomas’s definition of the problem in terms of the comparative treatment of the ‘poor but bright’ and the ‘poor but dim’.

Having made this distinction, it is not clear whether he himself sustains it consistently through to the end of his post. In the final paragraphs the term ‘poor but dim’ begins to morph into a broader notion encompassing all alternative provision (AP) and special educational needs (SEN) learners regardless of their socio-economic status.

Additional dimensions of disadvantage are potentially being brought into play. This is inconsistent and radically changes the nature of the argument.

 

Inputs

By inputs I mean the resources – financial and human – made available to support the education of ‘dim’ and ‘bright’ disadvantaged learners respectively.

Mr Thomas also shifts his ground as far as inputs are concerned.

His post opens with a statement that ‘the effort and resources’ of schools, charities and businesses are ‘directed disproportionately’ at the poor but bright – and he exemplifies this with reference to fair access to competitive universities, particularly Oxbridge.

When I point out the significant investment in AP compared with fair access, he changes tack – ‘I’m measuring outcomes not just inputs’.

Then later he says ‘But what some need is just more expensive’, to which I respond that ‘the bottom end already has the lion’s share of funding’.

At this point we have both fallen into the trap of treating the entirety of the AP and SEN budgets as focused on the ‘poor but dim’.

We are failing to recognise that these are poor proxies: the majority of AP and SEN learners are not ‘poor’; many are not ‘dim’; the budgets address a much wider range of needs; and there is significant additional expenditure directed at ‘poor but dim’ learners elsewhere in the wider education budget.

Despite Mr Thomas’s opening claim, it should be reasonably evident from the preceding commentary that my ‘lion’s share’ point is factually correct. His suggestion that AP is ‘largely unknown and wholly unappreciated’ flies in the face of the Taylor Review and the Government’s subsequent work programme.

SEN may depend heavily on the ‘extraordinary effort of dedicated staff’, but at least there are such dedicated staff! There may be inconsistencies in local authority funding and support for SEN, but the global investment is colossal compared with the funding committed on the other side of the balance.

Gifted Phoenix’s position acknowledges that inputs are heavily loaded in favour of the SEN and AP budgets. This is as it should be since, as Mr Thomas rightly notes, the additional services these learners need are frequently more expensive to provide. Those services are not simply dedicated to raising attainment, but also to tackling more substantive problems associated with their status.

Whether the balance of expenditure on the ‘bright’ and ‘dim’ respectively is optimal is a somewhat different matter. Contrary to Mr Thomas’s position, gifted advocates are often convinced that too much largesse is focused on the latter at the expense of the former.

Turning to advocacy, Mr Thomas says ‘we end up ignoring the more pressing problem’ of the poor but dim. He argues in the Twitter discussion that too few people are advocating for these learners, adding that they are failed ‘because it’s not popular to talk about them’.

I could not resist countering that advocacy for gifted learners is equally unpopular, indeed ‘the word is literally taboo in many settings’. I cannot help thinking – from his footnote reference to ‘epigenetic coding’ – that Mr Thomas is amongst those who are distinctly uncomfortable with the term.

Where advocacy does survive it is focused exclusively on progression to competitive universities and, to some extent, high attainment as a route towards that outcome. The narrative has shifted away from concepts of high ability or giftedness, because of the very limited consensus about that condition (even amongst gifted advocates) and even considerable doubt in some quarters whether it exists at all.

 

Outcomes

Mr Thomas maintains in his post that the successes of his preferred target group ‘have the power to change the British economy, far more so than those of their brighter peers’. This is because ‘the gap most damaging to society is in life outcomes for the children that perform least well at school’.

As noted above, it is important to remember that we are discussing here the addition of educational and economic value by tackling underachievement amongst learners from disadvantaged backgrounds, rather than amongst all the children that perform least well.

We are also leaving to one side the addition of value through any wider engagement by health and social services to improve life chances.

It is quite reasonable to advance the argument that improving the outcomes of ‘the bottom 20%’ (the Tail) will have ‘a huge socio-economic impact’ and ‘make the biggest marginal difference to society’.

But one could equally make the case that society would derive similar or even higher returns from a decision to concentrate disproportionately on the highest attainers (the Smart Fraction).

Or, as Gifted Phoenix would prefer, one could reasonably propose that the optimal returns should be achieved by means of a balanced approach that raises both the floor and the ceiling, avoiding any arbitrary distinctions on the basis of prior attainment.

From the Gifted Phoenix perspective, one should balance the advantages of removing the drag on productivity of an educational underclass against those of developing the high-level human capital needed to drive economic growth and improve our chances of success in what Coalition ministers call the ‘global race’.

According to this perspective, by eliminating excellence gaps between disadvantaged and advantaged high attainers we will secure a stream of benefits broadly commensurate with that at the bottom end.

These will include substantial spillover benefits, achieved as a result of broadening the pool of successful leaders in political, social, educational and artistic fields, not to mention significant improvements in social mobility.

It is even possible to argue that, by creating a larger pool of more highly educated parents, we can also achieve a significant positive impact on the achievement of subsequent generations, thus significantly reducing the size of the tail.

And in the present generation we will create many more role models: young people from disadvantaged backgrounds who become educationally successful and who can influence the aspirations of younger disadvantaged learners.

This avoids the risk that low expectations will be reinforced and perpetuated through a ‘deficit model’ approach that places excessive emphasis on removing the drag from the tail by producing a larger number of ‘useful members of society’.

This line of argument is integral to the Gifted Phoenix Manifesto.

It seems to me entirely conceivable that economists might produce calculations to justify any of these different paths.

But it would be highly inequitable to put all our eggs in the ‘poor but bright’ basket, because that penalises some disadvantaged learners for their failure to achieve high attainment thresholds.

And it would be equally inequitable to focus exclusively on the ‘poor but dim’, because that penalises some disadvantaged learners for their success in becoming high attainers.

The more equitable solution must be to opt for a ‘balanced scorecard’ approach that generates a proportion of the top end benefits and a proportion of the bottom end benefits simultaneously.

There is a risk that this reduces the total flow of benefits, compared with one or other of the inequitable solutions, but there is a trade-off here between efficiency and a socially desirable outcome that balances the competing interests of the two groups.

 

The personal dimension

After we had finished our Twitter exchanges, I thought to research Mr Thomas online. Turns out he’s quite the Big-Cheese-in-Embryo. Provided he escapes the lure of filthy lucre, he’ll be a mover and shaker in education within the next decade.

I couldn’t help noticing his own educational experience – public school, a First in PPE from Oxford, leading light in the Oxford Union – then graduation from Teach First alongside internships with Deutsche Bank and McKinsey.

Now he’s serving his educational apprenticeship as joint curriculum lead for maths at a prominent London Academy. He’s also a trustee of ‘a university mentoring project for highly able 11-14 year old pupils from West London state schools’.

Lucky I didn’t check earlier. Such a glowing CV might have been enough to cow this grammar school Oxbridge reject, even if I did begin this line of work several years before he was born. Not that I have a chip on my shoulder…

The experience set me wondering about the dominant ideology amongst the Teach First cadre, and how it is tempered by extended exposure to teaching in a challenging environment.

There’s more than a hint of idealism about someone from this privileged background espousing the educational philosophy that Mr Thomas professes. But didn’t he wonder where all the disadvantaged people were during his own educational experience, and doesn’t he want to change that too?

His interest in mentoring highly able pupils would suggest that he does, but also seems directly to contradict the position he’s reached here. It would be a pity if the ‘poor but bright’ could not continue to rely on his support, equal in quantity and quality to the support he offers the ‘poor but dim’.

For he could make a huge difference at both ends of the attainment spectrum – and, with his undeniable talents, he should certainly be able to do so.

 

Conclusion

We are entertaining three possible answers to the question of whether, in principle, to prioritise the needs of the ‘poor but bright’ or the ‘poor but dim’:

  • Concentrate principally – perhaps even exclusively – on closing the Excellence Gaps at the top
  • Concentrate principally – perhaps even exclusively – on closing the Foundation Gaps at the bottom
  • Concentrate equally across the attainment spectrum, at the top and bottom and all points in between.

Speaking as an advocate for those at the top, I favour the third option.

It seems to me incontrovertible – though hard to quantify – that, in the English education system, the lion’s share of resources goes towards closing the Foundation Gaps.

That is perhaps as it should be, although one could wish that the financial scales were not tipped quite so heavily in that direction, for ‘poor but bright’ learners have, in my view, an equal right to challenge and support, and should not be penalised for their high attainment.

Our current efforts to understand the relative size of the Foundation and Excellence Gaps and how these are changing over time are seriously compromised by the limited data in the public domain.

There is a powerful economic case to be made for prioritising the Foundation Gaps as part of a deliberate strategy for shortening the tail – but an equally powerful case can be constructed for prioritising the Excellence Gaps, as part of a deliberate strategy for increasing the smart fraction.

Neither of these options is optimal from an equity perspective however. The stream of benefits might be compromised somewhat by not focusing exclusively on one or the other, but a balanced approach should otherwise be in our collective best interests.

You may or may not agree. Here is a poll so you can register your vote. Please use the comments facility to share your wider views on this post.

Epilogue

 

We must beware the romance of the poor but bright,

But equally beware

The romance of rescuing the helpless

Poor from their sorry plight.

We must ensure the Tail

Does not wag the disadvantaged dog!

 

GP

May 2014

How well is Ofsted reporting on the most able?

 

 

This post considers how Ofsted’s new emphasis on the attainment and progress of the most able learners is reflected in school inspection reports.

My analysis is based on the 87 Section 5 secondary school inspection reports published in the month of March 2014.

I shall not repeat here previous coverage of how Ofsted’s emphasis on the most able has been framed; interested readers may wish to refer to my previous posts for the details.

The more specific purpose of the post is to explore how consistently Ofsted inspectors are applying their guidance and, in particular, whether there is substance for some of the concerns I expressed in these earlier posts, drawn together in the next section.

The remainder of the post provides an analysis of the sample and a qualitative review of the material about the most able (and analogous terms) included in the sample of 87 inspection reports.

It concludes with a summary of the key points, a set of associated recommendations and an overall inspection grade for inspectors’ performance to date. Here is a link to this final section for those who prefer to skip the substance of the post.

 

Background

Before embarking on the real substance of this argument I need to restate briefly some of the key issues raised in those earlier posts:

  • Ofsted’s definition of ‘the most able’ in its 2013 survey report is idiosyncratically broad, including around half of all learners on the basis of their KS2 outcomes.
  • The evidence base for this survey report included material suggesting that the most able students are supported well or better in only 20% of lessons – and are not making the progress of which they are capable in about 40% of schools.
  • The survey report’s recommendations included three commitments on Ofsted’s part. It would:

      • ‘focus more closely in its inspections on the teaching and progress of the most able students, the curriculum available to them, and the information, advice and guidance provided to the most able students’;

      • ‘consider in more detail during inspection how well the pupil premium is used to support the most able students from disadvantaged backgrounds’ and

      • ‘report its inspection findings about this group of students more clearly in school inspection, sixth form and college reports.’

  • Subsequently the school inspection guidance was revised somewhat haphazardly, resulting in the parallel use of several undefined terms (‘able pupils’, ‘most able’, ‘high attaining’, ‘highest attaining’), the underplaying of the attainment and progress of the most able learners attracting the Pupil Premium, and very limited reference to appropriate curriculum and IAG.
  • Within the inspection guidance, emphasis was placed primarily on learning and progress. I edited together the two relevant sets of level descriptors in the guidance to provide this summary for the four different inspection categories:

In outstanding schools the most able pupils’ learning is consistently good or better and they are making rapid and sustained progress.

In good schools the most able pupils’ learning is generally good, they make good progress and achieve well over time.

In schools requiring improvement the teaching of the most able pupils and their achievement are not good.

In inadequate schools the most able pupils are underachieving and making inadequate progress.

  • No published advice has been made available to inspectors on the interpretation of these amendments to the inspection guidance. In October 2013 I wrote:

‘Unfortunately, there is a real risk that the questionable clarity of the Handbook and Subsidiary Guidance will result in some inconsistency in the application of the Framework, even though the fundamental purpose of such material is surely to achieve the opposite.’

  • Analysis of a very small sample of reports for schools reporting poor results for high attainers in the school performance tables suggested inconsistency both before and after the amendments were introduced into the guidance. I commented:

‘One might expect that, unconsciously or otherwise, inspectors are less ready to single out the performance of the most able when a school is inadequate across the board, but the small sample above does not support this hypothesis. Some of the most substantive comments relate to inadequate schools.

It therefore seems more likely that the variance is attributable to the differing capacity of inspection teams to respond to the new emphases in their inspection guidance. This would support the case made in my previous post for inspectors to receive additional guidance on how they should interpret the new requirement.’

The material below considers the impact of these revisions on a more substantial sample of reports and whether this justifies some of the concerns expressed above.

It is important to add that, in January 2014, Ofsted revised its guidance document ‘Writing the report for school inspections’ to include the statement that:

‘Inspectors must always report in detail on the progress of the most able pupils and how effectively teaching engages them with work that is challenging enough.’ (p8)

This serves to reinforce the changes to the inspection guidance and clearly indicates that coverage of this issue – at least in these terms – is a non-negotiable: we should expect to see appropriate reference in every single section 5 report.

 

The Sample

The sample comprises 87 secondary schools whose Section 5 inspection reports were published by Ofsted in the month of March 2014.

The inspections were conducted between 26 November 2013 and 11 March 2014, so the inspectors will have had time to become familiar with the revised guidance.

However, up to 20 of the inspections took place before Ofsted felt it necessary to emphasise that coverage of the progress and teaching of the most able is compulsory.

The sample happens to include several institutions inspected as part of wider-ranging reviews of schools in Birmingham and schools operated by the E-ACT academy chain. It also incorporates several middle-deemed secondary schools.

Chart 1 shows the regional breakdown of the sample, adopting the regions Ofsted uses to categorise reports, as opposed to its own regional structure (ie with the North East identified separately from Yorkshire and Humberside).

It contains a disproportionately large number of schools from the West Midlands while the South-West is significantly under-represented. All the remaining regions supply between 5 and 13 schools. A total of 57 local authority areas are represented.

 

Chart 1: Schools within the sample by region


 

Chart 2 shows the different statuses of schools within the sample. Over 40% are community schools, while almost 30% are sponsored academies. There are no academy converters but sponsored academies, free schools and studio schools together account for some 37% of the sample.

 

Chart 2: Schools within the sample by status


 

The vast majority of schools in the sample are 11-16 or 11-18 institutions, but four are all-through schools, five provide for learners aged 13 or 14 upwards and 10 are middle schools. There are four single sex schools.

Chart 3 shows the variation in school size. Some of the studio schools, free schools and middle schools are very small by secondary standards, while the largest secondary school in the sample has some 1,600 pupils. A significant proportion of schools have between 600 and 1,000 pupils.

 

Chart 3: Schools within the sample by number on roll


The distribution of overall inspection grades between the sample schools is illustrated by Chart 4 below. Eight of the sample were rated outstanding, 28 good, 35 requiring improvement and 16 inadequate.

Of those rated inadequate, 12 were subject to special measures and four had serious weaknesses.

 

Chart 4: Schools within the sample by overall inspection grade


The eight schools rated outstanding comprise:

  • A mixed 11-18 sponsored academy
  • A mixed 14-19 studio school
  • A mixed 11-18 free school
  • A mixed 11-16 VA comprehensive
  • A girls’ 11-18 VA comprehensive
  • A boys’ 11-18 VA selective school
  • A girls’ 11-18 community comprehensive and
  • A mixed 11-18 community comprehensive

The sixteen schools rated inadequate comprise:

  • Eight mixed 11-18 sponsored academies
  • Two mixed 11-16 sponsored academies
  • A mixed all-through sponsored academy
  • A mixed 11-16 free school
  • Two mixed 11-16 community comprehensives
  • A mixed 11-18 community comprehensive and
  • A mixed 13-19 community comprehensive

 

Coverage of the most able in main findings and recommendations

 

Terminology 

Where they were mentioned, such learners were most often described as ‘most able’, but a wide range of other terminology is deployed, including ‘most-able’, ‘the more able’, ‘more-able’, ‘higher attaining’, ‘high-ability’, ‘higher-ability’ and ‘able students’.

The idiosyncratic adoption of redundant hyphenation is an unresolved mystery.

It is not unusual for two or more of these terms to be used in the same report. In the absence of any glossary, this makes some reports rather less straightforward to interpret accurately.

It is also more difficult to compare and contrast reports. Helpful services like Watchsted’s word search facility become less useful.

 

Incidence of commentary in the main findings and recommendations

Thirty of the 87 inspection reports (34%) addressed the school’s most able learners explicitly (or applied a similar term) in the sections setting out the report’s main findings and the recommendations respectively.

The analysis showed that 28% of reports on academies (including studios and free schools) met this criterion, whereas 38% of reports on non-academy schools did so.
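
The arithmetic behind these percentages is no more than a tally. Here is a minimal Python sketch, assuming each report has been hand-coded with two boolean flags (the records and field layout are hypothetical, standing in for my working spreadsheet); it produces the four-way classification used here and in the remainder of this section:

```python
from collections import Counter

# Hypothetical hand-coded records, one per report:
# (mentioned in main findings, mentioned in recommendations, is an academy)
reports = [
    (True, True, False),
    (False, True, True),
    (False, False, True),
    # ... 87 tuples in all
]

def category(in_findings: bool, in_recs: bool) -> str:
    """Four-way classification applied to each inspection report."""
    if in_findings and in_recs:
        return "both"
    if in_recs:
        return "recommendations only"
    if in_findings:
        return "main findings only"
    return "neither"

counts = Counter(category(f, r) for f, r, _academy in reports)
for label, n in counts.most_common():
    print(f"{label}: {n} ({100 * n / len(reports):.0f}%)")
```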

Chart 5 shows how the incidence of reference in both main findings and recommendations varies according to the overall inspection grade awarded.

One can see that this level of attention is most prevalent in schools requiring improvement, followed by those with inadequate grades. It was less common in schools rated good and less common still in outstanding schools. The gap between these two categories is perhaps smaller than expected.

The slight lead for schools requiring improvement over inadequate schools may be attributable to a view that the latter face more pressing priorities, or it may have something to do with the varying proportions of high attainers in such schools, or both of these factors could be in play, amongst others.

 

Chart 5: Most able covered in both main findings and recommendations by overall inspection rating (percentage)


A further eleven reports (13%) addressed the most able learners in the recommendations but not the main findings.

Only one report managed to feature the most able in the main findings but not in the recommendations and this was because the former recorded that ‘the most able students do well’.

Consequently, a total of 45 reports (52%) did not mention the most able in either the main findings or the recommendations.

This applied to some 56% of reports on academies (including free schools and studio schools) and 49% of reports on other state-funded schools.

So, according to these proxy measures, the most able in academies appear to receive comparatively less attention from inspectors than those in non-academy schools. It is not clear why. (The samples are almost certainly too small to support reliable comparison of academies and non-academies with different inspection ratings.)

Chart 6 below shows the inspection ratings for this subset of reports.

 

Chart 6: Most able covered in neither main findings nor recommendations by overall inspection rating (percentage)


Here is further evidence that the significant majority of outstanding schools are regarded as having no significant problems in respect of provision for the most able.

On the other hand, this is far from being universally true, since it is an issue for one in four of them. This ratio of 3:1 does not lend complete support to the oft-encountered truism that outstanding schools invariably provide outstandingly for the most able – and vice versa.

At the other end of the spectrum, and perhaps even more surprisingly, over 30% of inadequate schools are assumed not to have issues significant enough to warrant reference in these sections. Sometimes this may be because they are equally poor at providing for all their learners, so the most able are not separately singled out.

Chart 7 below shows differences by school size, giving the percentage of reports mentioning the most able in both main findings and recommendations and in neither.

It divides schools into three categories: small (24 schools with a NOR of 599 or lower), medium (35 schools with a NOR of 600-999) and large (28 schools with a NOR of 1,000 or higher).
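
The banding is simple enough to state precisely. A short sketch of the cut-offs just described (the function name is mine, purely for illustration):

```python
def size_band(number_on_roll: int) -> str:
    """Band a school by number on roll (NOR), using the cut-offs above."""
    if number_on_roll <= 599:
        return "small"   # 24 schools in this sample
    if number_on_roll <= 999:
        return "medium"  # 35 schools
    return "large"       # 28 schools

assert size_band(598) == "small"
assert size_band(999) == "medium"
assert size_band(1600) == "large"
```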

 

Chart 7: Reports mentioning the most able in main findings and recommendations by school size 


It is evident that ‘neither’ exceeds ‘both’ for all three categories. Small and large schools record very similar percentages on both measures.

But there is a much more significant difference for medium-sized schools. They demonstrate a much smaller percentage of ‘both’ reports and comfortably the largest percentage of ‘neither’ reports.

This pattern – suggesting that inspectors are markedly less likely to emphasise provision for the most able in medium-sized schools – is worthy of further investigation.

It would be particularly interesting to explore further the relationship between school size, the proportion of high attainers in a school and their achievement.

 

Typical references in the main findings and recommendations

I could detect no obvious and consistent variations in these references by school status or size, but it was possible to detect a noticeably different emphasis between schools rated outstanding and those rated inadequate.

Where the most able featured in reports on outstanding schools, these included recommendations such as:

‘Further increase the proportion of outstanding teaching in order to raise attainment even higher, especially for the most able students.’ (11-16 VA comprehensive).

‘Ensure an even higher proportion of students, including the most able, make outstanding progress across all subjects’ (11-18 sponsored academy).

These statements suggest that such schools have made good progress in eradicating underachievement amongst the most able but still have further room for improvement.

But where the most able featured in recommendations for inadequate schools, they were typically of this nature:

‘Improve teaching so that it is consistently good or better across all subjects, but especially in mathematics, by: raising teachers’ expectations of the quality and amount of work students of all abilities can do, especially the most and least able.’  (11-16 sponsored academy).

‘Improve the quality of teaching in order to speed up the progress students make by setting tasks that are at the right level to get the best out of students, especially the most able.’ (11-18 sponsored academy).

‘Rapidly improve the quality of teaching, especially in mathematics, by ensuring that teachers: have much higher expectations of what students can achieve, especially the most able…’ (11-16 community school).

These make clear that poor and inconsistent teaching quality is causing significant underachievement at the top end (and ‘especially’ suggests that this top end underachievement is particularly pronounced compared with other sections of the attainment spectrum in such schools).

Recommendations for schools requiring improvement are akin to those for inadequate schools but typically more specific, pinpointing particular dimensions of good quality teaching that are absent, so limiting effective provision for the most able. It is as if these schools have some of the pieces in place but not yet the whole jigsaw.

By comparison, recommendations for good schools can seem rather more impressionistic and/or formulaic, focusing more generally on ‘increasing the proportion of outstanding teaching’. In such cases the assessment is less about missing elements and more about the consistent application of all of them across the school.

One gets the distinct impression that inspectors have a clearer grasp of the ‘fit’ between provision for the most able and the other three inspection outcomes, at least as far as the distinction between ‘good’ and ‘outstanding’ is concerned.

But it would be misleading to suggest that these lines of demarcation are invariably clear. The boundary between ‘good’ and ‘requires improvement’ seems comparatively distinct, but there was more evidence of overlap at the intersections between the other grades.

 

Coverage of the most able in the main body of reports 

References to the most able rarely turn up in the sections dealing with behaviour and safety and leadership and management. I counted no examples of the former and no more than one or two of the latter.

I could find no examples where information, advice and guidance available to the most able are separately and explicitly discussed and little specific reference to the appropriateness of the curriculum for the most able. Both are less prominent than the recommendations in the June 2013 survey report led us to expect.

Within this sample, the vast majority of reports include some description of the attainment and/or progress of the most able in the section about pupils’ achievement, while roughly half pick up the issue in relation to the quality of teaching.

The extent of the coverage of most able learners varied enormously. Some devoted a single sentence to the topic while others referred to it separately in main findings, recommendations, pupils’ achievement and quality of teaching. In a handful of cases reports seemed to give disproportionate attention to the topic.

 

Attainment and progress

Analyses of attainment and progress are sometimes entirely generic, as in:

‘The most able students make good progress’ (inadequate 11-18 community school).

‘The school has correctly identified a small number of the most able who could make even more progress’ (outstanding 11-16 RC VA school).

‘The most able students do not always secure the highest grades’ (11-16 community school requiring improvement).

‘The most able students make largely expected rates of progress. Not enough yet go on to attain the highest GCSE grades in all subjects.’ (Good 11-18 sponsored academy).

Sometimes such statements can be damning:

‘The most-able students in the academy are underachieving in almost every subject. This is even the case in most of those subjects where other students are doing well. It is an academy-wide issue.’ (Inadequate 11-18 sponsored academy).

These do not in my view constitute reporting ‘in detail on the progress of the most able pupils’ and so probably fall foul of Ofsted’s guidance to inspectors on writing reports.

More specific comments on attainment typically refer explicitly to the achievement of A*/A grades at GCSE and ideally to specific subjects, for example:

‘In 2013, standards in science, design and technology, religious studies, French and Spanish were also below average. Very few students achieved the highest A* and A grades.’ (Inadequate 11-18 sponsored academy)

‘Higher-ability students do particularly well in a range of subjects, including mathematics, religious education, drama, art and graphics. They do as well as other students nationally in history and geography.’ (13-18 community school  requiring improvement)

More specific comments on progress include:

‘The progress of the most able students in English is significantly better than that in other schools nationally, and above national figures in mathematics. However, the progress of this group is less secure in science and humanities.’  (Outstanding 11-18 sponsored academy)

‘In 2013, when compared to similar students nationally, more-able students made less progress than less-able students in English. In mathematics, where progress is less than in English, students of all abilities made similar progress.’ (11-18 sponsored academy requiring improvement).

Statements about progress rarely extend beyond English and maths (the first example above is exceptional) but, when attainment is the focus, some reports take a narrow view based exclusively on the core subjects, while others are far wider-ranging.

Despite the reference in Ofsted’s survey report, and subsequently the revised subsidiary guidance, to coverage of high attaining learners in receipt of the Pupil Premium, this is hardly ever addressed.

I could find only two examples amongst the 87 reports:

‘The gap between the achievement in English and mathematics of students for whom the school receives additional pupil premium funding and that of their classmates widened in 2013… During the inspection, it was clear that the performance of this group is a focus in all lessons and those of highest ability were observed to be achieving equally as well as their peers.’ (11-16 foundation school requiring improvement)

‘Students eligible for the pupil premium make less progress than others do and are consequently behind their peers by approximately one GCSE grade in English and mathematics. These gaps reduced from 2012 to 2013, although narrowing of the gaps in progress has not been consistent over time. More-able students in this group make relatively less progress.’ (11-16 sponsored academy requiring improvement)

More often than not it seems that the most able and those in receipt of the Pupil Premium are assumed to be mutually exclusive groups.

 

Quality of teaching 

There was little variation in the issues raised under teaching quality. Most inspectors select two or three options from a standard menu:

‘Where teaching is best, teachers provide suitably challenging materials and through highly effective questioning enable the most able students to be appropriately challenged and stretched…. Where teaching is less effective, teachers are not planning work at the right level of difficulty. Some work is too easy for the more able students in the class. (Good 11-16 community school)

 ‘In teaching observed during the inspection, the pace of learning for the most able students was too slow because the activities they were given were too easy. Although planning identified different activities for the most able students, this was often vague and not reflected in practice.  Work lacks challenge for the most able students.’ (Inadequate 11-16 community school)

‘In lessons where teaching requires improvement, teachers do not plan work at the right level to ensure that students of differing abilities build on what they already know. As a result, there is a lack of challenge in these lessons, particularly for the more able students, and the pace of learning is slow. In these lessons teachers do not have high enough expectations of what students can achieve.’ (11-18 community school requiring improvement)

‘Tasks set by teachers are sometimes too easy and repetitive for pupils, particularly the most able. In mathematics, pupils are sometimes not moved on quickly enough to new and more challenging tasks when they have mastered their current work.’ (9-13 community middle school requiring improvement)

‘Targets which are set for students are not demanding enough, and this particularly affects the progress of the most able because teachers across the year groups and subjects do not always set them work which is challenging. As a result, the most able students are not stretched in lessons and do not achieve as well as they should.’ (11-16 sponsored academy rated inadequate)

All the familiar themes are present – assessment informing planning, careful differentiation, pace and challenge, appropriate questioning, the application of subject knowledge, the quality of homework, high expectations and extending effective practice between subject departments.

 

Negligible coverage of the most able

Only one of the 87 reports failed to make any mention of the most able whatsoever. This is the report on North Birmingham Academy, an 11-19 mixed school requiring improvement.

This clearly does not meet the injunction to:

‘…report in detail on the progress of the most able pupils and how effectively teaching engages them with work that is challenging enough’.

It ought not to have passed through Ofsted’s quality assurance processes unscathed. The inspection was conducted in February 2014, after this guidance was issued, so there is no excuse.

Several other inspections make only cursory references to the most able in the main body of the report, for example:

‘Where teaching is not so good, it was often because teachers failed to check students’ understanding or else to anticipate when to intervene to support students’ learning, especially higher attaining students in the class.’ (Good 11-18 VA comprehensive).

‘… the teachers’ judgements matched those of the examiners for a small group of more-able students who entered early for GCSE in November 2013.’ (Inadequate 11-18 sponsored academy).

‘More-able students are increasingly well catered for as part of the academy’s focus on raising levels of challenge.’ (Good 11-18 sponsored academy).

‘The most able students do not always pursue their work to the best of their capability.’ (11-16 free school requiring improvement).

These would also fall well short of the report writing guidance. At least 6% of my sample falls into this category.

Some reports note explicitly that the most able learners are not making sufficient progress, but fail to capture this in the main findings or recommendations, for example:

‘The achievement of more able students is uneven across subjects. More able students said to inspectors that they did not feel they were challenged or stretched in many of their lessons. Inspectors agreed with this view through evidence gathered in lesson observations…lessons do not fully challenge all students, especially the more able, to achieve the grades of which they are capable.’ (11-19 sponsored academy requiring improvement).

‘The 2013 results of more-able students show they made slower progress than is typical nationally, especially in mathematics.  Progress is improving this year, but they are still not always sufficiently challenged in lessons.’ (11-18 VC CofE school requiring improvement).

‘There is only a small proportion of more-able students in the academy. In 2013 they made less progress in English and mathematics than similar students nationally. Across all of their subjects, teaching is not sufficiently challenging for more-able students and they leave the academy with standards below where they should be.’ (Inadequate 11-18 sponsored academy).

‘The proportion of students achieving grades A* and A was well below average, demonstrating that the achievement of the most able also requires improvement.’  (11-18 sponsored academy requiring improvement).

Something approaching 10% of the sample fell into this category. It was not always clear why this issue was not deemed significant enough to feature amongst schools’ priorities for improvement. This state of affairs was more typical of schools requiring improvement than inadequate schools, so one could not so readily argue that the schools concerned were overwhelmed with the need to rectify more basic shortcomings.

That said, the example from an inadequate academy above may be significant. It is almost as if the small number of more able students is the reason why this shortcoming is not taken more seriously.

Inspectors must carry in their heads a somewhat subjective hierarchy of issues that schools are expected to tackle. Some inspectors appear to feature the most able at a relatively high position in this hierarchy; others push it further down the list. Some appear more flexible in the application of this hierarchy to different settings than others.

 

Formulaic and idiosyncratic references 

There is clear evidence of formulaic responses, especially in the recommendations for how schools can improve their practice.

Many reports adopt the strategy of recommending a series of actions featuring the most able, either in the target group:

‘Improve the quality of teaching to at least good so that students, including the most able, achieve higher standards, by ensuring that: [followed by a list of actions]’ (9-13 community middle school requiring improvement)

Or in the list of actions:

‘Improve the quality of teaching in order to raise the achievement of students by ensuring that teachers:…use assessment information to plan their work so that all groups of students, including those supported by the pupil premium and the most-able students, make good progress.’ (11-16 community school requiring improvement)

It was rare indeed to come across a report that referred explicitly to interesting or different practice in the school, or approached the topic in a more individualistic manner, but here are a few examples:

‘More-able pupils are catered for well and make good progress. Pupils enjoy the regular, extra challenges set for them in many lessons and, where this happens, it enhances their progress. They enjoy that extra element which often tests them and gets them thinking about their work in more depth. Most pupils are keen to explore problems which will take them to the next level or extend their skills.’  (Good 9-13 community middle school)

‘Although the vast majority of groups of students make excellent progress, the school has correctly identified a small number of the most able who could make even more progress. It has already started an impressive programme of support targeting the 50 most able students called ‘Students Targeted A grade Results’ (STAR). This programme offers individualised mentoring using high-quality teachers to give direct intervention and support. This is coupled with the involvement of local universities. The school believes this will give further aspiration to these students to do their very best and attend prestigious universities.’  (Outstanding 11-16 VA school)

I particularly liked:

‘Policies to promote equality of opportunity are ineffective because of the underachievement of several groups of students, including those eligible for the pupil premium and the more-able students.’ (Inadequate 11-18 academy) 

 

Conclusion

 

Main Findings

The principal findings from this survey, admittedly based on a rather small and not entirely representative sample, are that:

  • Inspectors are terminologically challenged in addressing this issue, because there are too many synonyms or near-synonyms in use.
  • Approximately one-third of inspection reports address provision for the most able in both main findings and recommendations. This is less common in academies than in community, controlled and aided schools. It is most prevalent in schools with an overall ‘requires improvement’ rating, followed by those rated inadequate. It is least prevalent in outstanding schools, although one in four outstanding schools is dealt with in this way.
  • Slightly over half of inspection reports address provision for the most able in neither the main findings nor the recommendations. This is relatively more common in the academies sector and in outstanding schools. It is least prevalent in schools rated inadequate, though almost one-third of inadequate schools fall into this category. Sometimes this is the case even though provision for the most able is identified as a significant issue in the main body of the report.
  • There is an unexplained tendency for reports on medium-sized schools to be significantly less likely to feature the most able in both main findings and recommendations and significantly more likely to feature it in neither. This warrants further investigation.
  • Overall coverage of the topic varies excessively between reports. One ignored it entirely, while several provided only cursory coverage and a few covered it to excess. The scope and quality of the coverage does not necessarily correlate with the significance of the issue for the school.
  • Coverage of the attainment and progress of the most able learners is variable. Some reports offer only generic descriptions of attainment and progress combined, some are focused exclusively on attainment in the core subjects while others take a wider curricular perspective. Outside the middle school sector, desirable attainment outcomes for the most able are almost invariably defined exclusively in terms of A* and A grade GCSEs.
  • Hardly any reports consider the attainment and/or progress of the most able learners in receipt of the Pupil Premium.
  • None of these reports make specific and explicit reference to IAG for the most able. It is rarely stated whether the school’s curriculum satisfies the needs of the most able.
  • Too many reports adopt formulaic approaches, especially in the recommendations they offer the school. Too few include reference to interesting or different practice.

In my judgement, too much current inspection reporting falls short of the commitments contained in the original Ofsted survey report and of the more recent requirement to:

‘always report in detail on the progress of the most able pupils and how effectively teaching engages them with work that is challenging enough.’

 

Recommendations

  • Ofsted should publish a glossary defining clearly all the terms for the most able that it employs, so that both inspectors and schools understand exactly what is intended when a particular term is deployed and which learners should be in scope when the most able are discussed.
  • Ofsted should co-ordinate the development of supplementary guidance clarifying its expectations of schools in respect of provision for the most able. This should set out in more detail what would be expected for such provision to be rated outstanding, good, requiring improvement and inadequate respectively, and should cover the most able in receipt of the Pupil Premium, the suitability of the curriculum and the provision of IAG.
  • Ofsted should provide supplementary guidance for inspectors outlining and exemplifying the full range of evidence they might interrogate concerning the attainment and progress of the most able learners, including those in receipt of the Pupil Premium.
  • This guidance should specify the essential minimum coverage expected in reports and the ‘triggers’ that would warrant it being referenced in the main findings and/or recommendations for action.
  • This guidance should discourage inspectors from adopting formulaic descriptors and recommendations and specifically encourage them to identify unusual or innovative examples of effective practice.
  • The school inspection handbook and subsidiary guidance should be amended to reflect the supplementary guidance.
  • The School Data Dashboard should be expanded to include key data highlighting the attainment and progress of the most able.
  • These actions should also be undertaken for inspection of the primary and 16-19 sectors respectively.

 

Overall assessment: Requires Improvement.

 

GP

May 2014

PISA 2012 Creative Problem Solving: International Comparison of High Achievers’ Performance

This post compares the performance of high achievers from selected jurisdictions on the PISA 2012 creative problem solving test.

It draws principally on the material in the OECD Report ‘PISA 2012 Results: Creative Problem Solving’ published on 1 April 2014.

The sample of jurisdictions includes England, other English-speaking countries (Australia, Canada, Ireland and the USA) and those that typically top the PISA rankings (Finland, Hong Kong, South Korea, Shanghai, Singapore and Taiwan).

With the exception of New Zealand, which did not take part in the problem solving assessment, this is deliberately identical to the sample I selected for a parallel post reviewing comparable results in the PISA 2012 assessments of reading, mathematics and science: ‘PISA 2012: International Comparisons of High Achievers’ Performance’ (December 2013).

These eleven jurisdictions account for nine of the top twelve performers ranked by mean overall performance in the problem solving assessment. (The USA and Ireland lie outside the top twelve, while Japan, Macao and Estonia are the three jurisdictions that are in the top twelve but outside my sample.)

The post is divided into seven sections:

  • Background to the problem solving assessment: How PISA defines problem solving competence; how it defines performance at each of the six levels of proficiency; how it defines high achievement; the nature of the assessment and who undertook it.
  • Average performance, the performance of high achievers and the performance of low achievers (proficiency level 1) on the problem solving assessment. This comparison includes my own sample and all the other jurisdictions that score above the OECD average on the first of these measures.
  • Gender and socio-economic differences amongst high achievers on the problem solving assessment in my sample of eleven jurisdictions.
  • The relative strengths and weaknesses of jurisdictions in this sample on different aspects of the problem solving assessment. (This treatment is generic rather than specific to high achievers.)
  • What proportion of high achievers on the problem-solving assessment in my sample of jurisdictions are also high achievers in reading, maths and science respectively.
  • What proportion of students in my sample of jurisdictions achieves highly in one or more of the four PISA 2012 assessments – and against the ‘all-rounder’ measure, which is based on high achievement in all of reading, maths and science (but not problem solving).
  • Implications for education policy makers seeking to improve problem solving performance in each of the sample jurisdictions.

Background to the Problem Solving Assessment


Definition of problem solving

PISA’s definition of problem-solving competence is:

‘…an individual’s capacity to engage in cognitive processing to understand and resolve problem situations where a method of solution is not immediately obvious. It includes the willingness to engage with such situations in order to achieve one’s potential as a constructive and reflective citizen.’

The commentary on this definition points out that:

  • Problem solving requires identification of the problem(s) to be solved, planning and applying a solution, and monitoring and evaluating progress.
  • A problem is ‘a situation in which the goal cannot be achieved by merely applying learned procedures’, so the problems encountered must be non-routine for 15 year-olds, although ‘knowledge of general strategies’ may be useful in solving them.
  • Motivational and affective factors are also in play.

The Report is rather coy about the role of creativity in problem solving, and hence the justification for the inclusion of this term in its title.

Perhaps the nearest it gets to an exposition is when commenting on the implications of its findings:

‘In some countries and economies, such as Finland, Shanghai-China and Sweden, students master the skills needed to solve static, analytical problems similar to those that textbooks and exam sheets typically contain as well or better than 15-year-olds, on average, across OECD countries. But the same 15-year-olds are less successful when not all information that is needed to solve the problem is disclosed, and the information provided must be completed by interacting with the problem situation. A specific difficulty with items that require students to be open to novelty, tolerate doubt and uncertainty, and dare to use intuitions (“hunches and feelings”) to initiate a solution suggests that opportunities to develop and exercise these traits, which are related to curiosity, perseverance and creativity, need to be prioritised.’


Assessment framework

PISA’s framework for assessing problem solving competence is set out in the following diagram.

 

[Diagram: PISA problem solving assessment framework]

 

In solving a particular problem it may not be necessary to apply all these steps, or to apply them in this order.

Proficiency levels

The proficiency scale was designed to have a mean score across OECD countries of 500. The six levels of proficiency applied in the assessment each have their own profile.

The lowest, level 1 proficiency is described thus:

‘At Level 1, students can explore a problem scenario only in a limited way, but tend to do so only when they have encountered very similar situations before. Based on their observations of familiar scenarios, these students are able only to partially describe the behaviour of a simple, everyday device. In general, students at Level 1 can solve straightforward problems provided there is a simple condition to be satisfied and there are only one or two steps to be performed to reach the goal. Level 1 students tend not to be able to plan ahead or set sub-goals.’

This level equates to a range of scores from 358 to 423. Across the OECD sample, 91.8% of participants are able to perform tasks at this level.

By comparison, level 5 proficiency is described in this manner:

‘At Level 5, students can systematically explore a complex problem scenario to gain an understanding of how relevant information is structured. When faced with unfamiliar, moderately complex devices, such as vending machines or home appliances, they respond quickly to feedback in order to control the device. In order to reach a solution, Level 5 problem solvers think ahead to find the best strategy that addresses all the given constraints. They can immediately adjust their plans or backtrack when they detect unexpected difficulties or when they make mistakes that take them off course.’

The associated range of scores is from 618 to 683 and 11.4% of all OECD students achieve at this level.

Finally, level 6 proficiency is described in this way:

‘At Level 6, students can develop complete, coherent mental models of diverse problem scenarios, enabling them to solve complex problems efficiently. They can explore a scenario in a highly strategic manner to understand all information pertaining to the problem. The information may be presented in different formats, requiring interpretation and integration of related parts. When confronted with very complex devices, such as home appliances that work in an unusual or unexpected manner, they quickly learn how to control the devices to achieve a goal in an optimal way. Level 6 problem solvers can set up general hypotheses about a system and thoroughly test them. They can follow a premise through to a logical conclusion or recognise when there is not enough information available to reach one. In order to reach a solution, these highly proficient problem solvers can create complex, flexible, multi-step plans that they continually monitor during execution. Where necessary, they modify their strategies, taking all constraints into account, both explicit and implicit.’

The range of level 6 scores is from 683 points upwards and 2.5% of all OECD participants score at this level.

PISA defines high achieving students as those securing proficiency level 5 or higher, so proficiency levels 5 and 6 together. The bulk of the analysis it supplies relates to this cohort, while relatively little attention is paid to the more exclusive group achieving proficiency level 6, even though almost 10% of students in Singapore reach this standard in problem solving.
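
Only the Level 1, 5 and 6 boundaries are quoted above, but they are enough to make the banding and the ‘high achiever’ definition concrete. A minimal sketch (the function names are mine; the intermediate levels are deliberately aggregated rather than guessed at):

```python
def proficiency_band(score: float) -> str:
    """Map a PISA problem solving score to the bands quoted above."""
    if score < 358:
        return "below Level 1"
    if score < 423:
        return "Level 1"      # 358 up to 423
    if score < 618:
        return "Levels 2-4"   # boundaries not quoted in this post
    if score < 683:
        return "Level 5"      # 618 up to 683
    return "Level 6"          # 683 points upwards

def is_high_achiever(score: float) -> bool:
    """PISA's definition: proficiency Level 5 or higher."""
    return score >= 618

assert proficiency_band(640) == "Level 5" and is_high_achiever(640)
```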


The sample

Sixty-five jurisdictions took part in PISA 2012, including all 34 OECD countries and 31 partners. But only 44 jurisdictions took part in the problem solving assessment, including 28 OECD countries and 16 partners. As noted above, that included all my original sample of twelve jurisdictions, with the exception of New Zealand.

I could find no stated reason why New Zealand chose not to take part. Press reports initially suggested that England would do likewise, but it was subsequently reported that this decision had been reversed.

The assessment was computer-based and comprised 16 units divided into 42 items. The units were organised into four clusters, each designed to take 20 minutes to complete. Participants completed one or two clusters, depending on whether they were also undertaking computer-based assessments of reading and maths.

In each jurisdiction a random sample of those who took part in the paper-based maths assessment was selected to undertake the problem solving assessment. About 85,000 students took part in all. The unweighted sample sizes in my selected jurisdictions are set out in Table 1 below, together with the total population of 15-year-olds in each jurisdiction.

 

Table 1: Sample sizes undertaking PISA 2012 problem solving assessment in selected jurisdictions

| Country | Unweighted sample | Total 15-year-olds |
|---|---|---|
| Australia | 5,612 | 291,976 |
| Canada | 4,601 | 417,873 |
| Finland | 3,531 | 62,523 |
| Hong Kong | 1,325 | 84,200 |
| Ireland | 1,190 | 59,296 |
| Shanghai | 1,203 | 108,056 |
| Singapore | 1,394 | 53,637 |
| South Korea | 1,336 | 687,104 |
| Taiwan | 1,484 | 328,356 |
| UK (England) | 1,458 | 738,066 |
| USA | 1,273 | 3,985,714 |

Those taking the assessment were aged between 15 years and three months and 16 years and two months at the time of the assessment. All were enrolled at school and had completed at least six years of formal schooling.

Average performance compared with the performance of high and low achievers

The overall table of mean scores on the problem solving assessment is shown below:

[Table: PISA 2012 problem solving mean scores by jurisdiction]


There are some familiar names at the top of the table, especially Singapore and South Korea, the two countries that comfortably lead the rankings. Japan is some ten points behind in third place, but in turn has a twelve-point lead over a cluster of four other Asian competitors: Macao, Hong Kong, Shanghai and Taiwan.

A slightly different picture emerges if we compare average performance with the proportion of learners who achieve the bottom proficiency level and the top two proficiency levels. Table 2 below compares these groups.

This table includes all the jurisdictions that exceeded the OECD average score. I have marked out in bold the countries in my sample of eleven, which includes Ireland, the only one of them that did not exceed the OECD average.

Table 2: PISA Problem Solving 2012: Comparing Average Performance with Performance at Key Proficiency Levels

 

| Jurisdiction | Mean score | Level 1 (%) | Level 5 (%) | Level 6 (%) | Levels 5+6 (%) |
|---|---|---|---|---|---|
| **Singapore** | 562 | 6.0 | 19.7 | 9.6 | 29.3 |
| **South Korea** | 561 | 4.8 | 20.0 | 7.6 | 27.6 |
| Japan | 552 | 5.3 | 16.9 | 5.3 | 22.2 |
| Macao | 540 | 6.0 | 13.8 | 2.8 | 16.6 |
| **Hong Kong** | 540 | 7.1 | 14.2 | 5.1 | 19.3 |
| **Shanghai** | 536 | 7.5 | 14.1 | 4.1 | 18.2 |
| **Taiwan** | 534 | 8.2 | 14.6 | 3.8 | 18.4 |
| **Canada** | 526 | 9.6 | 12.4 | 5.1 | 17.5 |
| **Australia** | 523 | 10.5 | 12.3 | 4.4 | 16.7 |
| **Finland** | 523 | 9.9 | 11.4 | 3.6 | 15.0 |
| **England (UK)** | 517 | 10.8 | 10.9 | 3.3 | 14.2 |
| Estonia | 515 | 11.1 | 9.5 | 2.2 | 11.7 |
| France | 511 | 9.8 | 9.9 | 2.1 | 12.0 |
| Netherlands | 511 | 11.2 | 10.9 | 2.7 | 13.6 |
| Italy | 510 | 11.2 | 8.9 | 1.8 | 10.7 |
| Czech Republic | 509 | 11.9 | 9.5 | 2.4 | 11.9 |
| Germany | 509 | 11.8 | 10.1 | 2.7 | 12.8 |
| **USA** | 508 | 12.5 | 8.9 | 2.7 | 11.6 |
| Belgium | 508 | 11.6 | 11.4 | 3.0 | 14.4 |
| Austria | 506 | 11.9 | 9.0 | 2.0 | 11.0 |
| Norway | 503 | 13.2 | 9.7 | 3.4 | 13.1 |
| **Ireland** | 498 | 13.3 | 7.3 | 2.1 | 9.4 |
| OECD Average | 500 | 13.2 | 8.9 | 2.5 | 11.4 |


The jurisdictions at the top of the table also have a familiar profile, with a small ‘tail’ of low performance combined with high levels of performance at the top end.

Nine of the top ten have fewer than 10% of learners at proficiency level 1, though only South Korea pushes below 5%.

Five of the top ten have 5% or more of their learners at proficiency level 6, but only Singapore and South Korea have a higher percentage at level 6 than level 1 (with Japan managing the same percentage at both levels).

The top three performers – Singapore, South Korea and Japan – are the only three jurisdictions that have over 20% of their learners at proficiency levels 5 and 6 together.

South Korea slightly outscores Singapore at level 5 (20.0% against 19.7%). Japan is in third place, followed by Taiwan, Hong Kong and Shanghai.

But at level 6, Singapore has a clear lead, followed by South Korea and Japan, with Hong Kong and Canada tied just behind.

England’s overall place in the table is relatively consistent on each of these measures, but the gaps between England and the top performers vary considerably.

The best have fewer than half England’s proportion of learners at proficiency level 1, almost twice as many learners at proficiency level 5 and more than twice as many at proficiency levels 5 and 6 together. But at proficiency level 6 they have almost three times as many learners as England.
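These multiples can be checked directly against Table 2. A minimal sketch (the ‘best’ figures are South Korea’s at levels 1 and 5 and Singapore’s at level 6 and at levels 5 and 6 together):

```python
# England versus the best performer on each Table 2 measure (percentages).
england = {"Level 1": 10.8, "Level 5": 10.9, "Level 6": 3.3, "Levels 5+6": 14.2}
best = {"Level 1": 4.8, "Level 5": 20.0, "Level 6": 9.6, "Levels 5+6": 29.3}

for measure in england:
    if measure == "Level 1":  # a smaller tail is better
        ratio = england[measure] / best[measure]
        print(f"{measure}: England has {ratio:.1f}x the best performer's tail")
    else:
        ratio = best[measure] / england[measure]
        print(f"{measure}: the best performer has {ratio:.1f}x England's share")
# -> roughly 2.3x, 1.8x, 2.9x and 2.1x respectively
```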

Chart 1 below compares performance on these four measures across my sample of eleven jurisdictions.

All but Ireland are comfortably below the OECD average for the percentage of learners at proficiency level 1. The USA and Ireland are atypical in having a bigger tail (proficiency level 1) than their cadres of high achievers (levels 5 and 6 together).

At level 5 all but Ireland and the USA are above the OECD average, but the USA leapfrogs the OECD average at level 6.

There is a fairly strong correlation between the proportions of learners achieving the highest proficiency thresholds and average performance in each jurisdiction. However, Canada stands out by having an atypically high proportion of students at level 6.
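That association can be checked with a quick pure-Python calculation over the Table 2 figures (a sketch only; no claim that it reproduces PISA’s own method):

```python
# Pearson correlation between mean score and the % at levels 5+6 (Table 2).
from math import sqrt

mean_scores = [562, 561, 552, 540, 540, 536, 534, 526, 523, 523, 517,
               515, 511, 511, 510, 509, 509, 508, 508, 506, 503, 498]
top_pcts = [29.3, 27.6, 22.2, 16.6, 19.3, 18.2, 18.4, 17.5, 16.7, 15.0, 14.2,
            11.7, 12.0, 13.6, 10.7, 11.9, 12.8, 11.6, 14.4, 11.0, 13.1, 9.4]

def pearson(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

print(f"r = {pearson(mean_scores, top_pcts):.2f}")  # strongly positive
```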

Chart 1: PISA 2012 Problem-solving: Comparing performance at specified proficiency levels

[Chart 1]

PISA’s Report discusses the variation in problem-solving performance within different jurisdictions. However, it does so without reference to the proficiency levels, so we do not know to what extent these findings apply equally to high achievers.

Amongst those above the OECD average, those with least variation are Macao, Japan, Estonia, Shanghai, Taiwan, Korea, Hong Kong, USA, Finland, Ireland, Austria, Singapore and the Czech Republic, in that order.

Perhaps surprisingly, the degree of variation in Finland is identical to that in the USA and Ireland, while Estonia has less variation than many of the Asian jurisdictions. Singapore, while top of the performance table, is only just above the OECD average in terms of variation.

The countries below the OECD average on this measure – listed in order of increasing variation – include England, Australia and Canada, though all three are relatively close to the OECD average. So these three countries and Singapore are all relatively close together.

Gender and socio-economic differences amongst high achievers


Gender differences

On average across OECD jurisdictions, boys score seven points higher than girls on the problem solving assessment. There is also more variation amongst boys than girls.

Across the OECD participants, 3.1% of boys achieved proficiency level 6 but only 1.8% of girls did so. This imbalance was repeated at proficiency level 5, achieved by 10% of boys and 7.7% of girls.

The table and chart below show the variations within my sample of eleven countries. The performance of boys exceeds that of girls in all cases, except in Finland at proficiency level 5, and in that instance the gap in favour of girls is relatively small (0.4%).


Table 3: PISA Problem-solving: Gender variation at top proficiency levels

| Jurisdiction | Level 5: Boys (%) | Girls (%) | Diff | Level 6: Boys (%) | Girls (%) | Diff | Levels 5+6: Boys (%) | Girls (%) | Diff |
|---|---|---|---|---|---|---|---|---|---|
| Singapore | 20.4 | 19.0 | +1.4 | 12.0 | 7.1 | +4.9 | 32.4 | 26.1 | +6.3 |
| South Korea | 21.5 | 18.3 | +3.2 | 9.4 | 5.5 | +3.9 | 30.9 | 23.8 | +7.1 |
| Hong Kong | 15.7 | 12.4 | +3.3 | 6.1 | 3.9 | +2.2 | 21.8 | 16.3 | +5.5 |
| Shanghai | 17.0 | 11.4 | +5.6 | 5.7 | 2.6 | +3.1 | 22.7 | 14.0 | +8.7 |
| Taiwan | 17.3 | 12.0 | +5.3 | 5.0 | 2.5 | +2.5 | 22.3 | 14.5 | +7.8 |
| Canada | 13.1 | 11.8 | +1.3 | 5.9 | 4.3 | +1.6 | 19.0 | 16.1 | +2.9 |
| Australia | 12.6 | 12.0 | +0.6 | 5.1 | 3.7 | +1.4 | 17.7 | 15.7 | +2.0 |
| Finland | 11.2 | 11.6 | -0.4 | 4.1 | 3.0 | +1.1 | 15.3 | 14.6 | +0.7 |
| England (UK) | 12.1 | 9.9 | +2.2 | 3.6 | 3.0 | +0.6 | 15.7 | 12.9 | +2.8 |
| USA | 9.8 | 7.9 | +1.9 | 3.2 | 2.3 | +0.9 | 13.0 | 10.2 | +2.8 |
| Ireland | 8.0 | 6.6 | +1.4 | 3.0 | 1.1 | +1.9 | 11.0 | 7.7 | +3.3 |
| OECD Average | 10.0 | 7.7 | +2.3 | 3.1 | 1.8 | +1.3 | 13.1 | 9.5 | +3.6 |

There is no consistent pattern in whether boys are more heavily over-represented at proficiency level 5 than proficiency level 6, or vice versa.

There is a bigger difference at level 6 than at level 5 in Singapore, South Korea, Canada, Australia, Finland and Ireland, but the reverse is true in the five remaining jurisdictions.

At level 5, boys are in the greatest ascendancy in Shanghai and Taiwan while, at level 6, this is true of Singapore and South Korea.

When proficiency levels 5 and 6 are combined, all five of the ‘Asian Tigers’ show a difference in favour of males of 5.5% or higher, significantly in advance of the six ‘Western’ countries in the sample and significantly ahead of the OECD average.

Amongst the six ‘Western’ representatives, boys have the biggest advantage at proficiency level 5 in England, while at level 6 boys in Ireland have the biggest advantage.

Within this group of jurisdictions, the gap between boys and girls at level 6 is comfortably the smallest in England. But at proficiency levels 5 and 6 together, Finland has the smallest gap.


Chart 2: PISA Problem-solving: Gender variation at top proficiency levels

[Chart 2]

The Report includes a generic analysis of gender differences in problem-solving performance for boys and girls with similar levels of performance in reading, maths and science.

It concludes that girls perform above their expected level in both England and Australia (though the difference is statistically significant only in the latter).

The Report comments:

‘It is not clear whether one should expect there to be a gender gap in problem solving. On the one hand, the questions posed in the PISA problem-solving assessment were not grounded in content knowledge, so boys’ or girls’ advantage in having mastered a particular subject area should not have influenced results. On the other hand… performance in problem solving is more closely related to performance in mathematics than to performance in reading. One could therefore expect the gender difference in performance to be closer to that observed in mathematics – a modest advantage for boys, in most countries – than to that observed in reading – a large advantage for girls.’


Socio-economic differences

The Report considers variations in performance against PISA’s index of economic, social and cultural status (ESCS), finding them weaker overall than for reading, maths and science.

It calculates that the overall percentage variation in performance attributable to these factors is about 10.6% (compared with 14.9% in maths, 14.0% in science and 13.2% in reading).

Amongst the eleven jurisdictions in my sample, the weakest correlations were found in Canada (4%), followed by Hong Kong (4.9%), South Korea (5.4%), Finland (6.5%), England (7.8%), Australia (8.5%), Taiwan (9.4%), the USA (10.1%) and Ireland (10.2%) in that order. All those jurisdictions had correlations below the OECD average.

Perhaps surprisingly, there were above average correlations in Shanghai (14.1%) and, to a lesser extent (and less surprisingly) in Singapore (11.1%).
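A note on interpretation: these ‘percentage of variation’ figures read as variance explained (R squared) from regressing scores on the index. That is my assumption about the Report’s method; if it holds, the equivalent correlation coefficient is simply the square root, as the sketch below shows.

```python
# Converting 'variance explained' (assumed to be R squared) to a correlation.
from math import sqrt

variance_explained = {"problem solving": 0.106, "maths": 0.149,
                      "science": 0.140, "reading": 0.132}

for domain, r2 in variance_explained.items():
    print(f"{domain}: R^2 = {r2:.3f} -> |r| = {sqrt(r2):.2f}")
# problem solving works out at about 0.33, the other domains at 0.36-0.39
```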

The report suggests that students with parents working in semi-skilled and elementary occupations tend to perform above their expected level in problem-solving in Taiwan, England, Canada, the USA, Finland and Australia (in that order – with Australia closest to the OECD average).

The jurisdictions where these students tend to underperform their expected level are – in order of severity – Ireland, Shanghai, Singapore, Hong Kong and South Korea.

A parallel presentation accompanying the Report provides some additional data about the performance in different countries of what the OECD calls ‘resilient’ students – those in the bottom quartile of the ESCS index but in the top quartile by performance, after accounting for socio-economic status.

It supplies the graph below, which shows all the Asian countries in my sample clustered at the top, but also with significant gaps between them. Canada is the highest-performing of the remainder in my sample, followed by Finland, Australia, England and the USA respectively. Ireland is some way below the OECD average.

[Chart: performance of ‘resilient’ students by jurisdiction]

Unfortunately, I can find no analysis of how performance varies according to socio-economic variables at each proficiency level. It would be useful to see which jurisdictions have the smallest ‘excellence gaps’ at levels 5 and 6 respectively.


How different jurisdictions perform on different aspects of problem-solving

The Report’s analysis of comparative strengths and weaknesses in different elements of problem-solving does not take account of variations at different proficiency levels.

It explains that aspects of the assessment were found easier by students in different jurisdictions, employing a four-part distinction between:

‘Exploring and understanding. The objective is to build mental representations of each of the pieces of information presented in the problem. This involves:

  • exploring the problem situation: observing it, interacting with it, searching for information and finding limitations or obstacles; and
  • understanding given information and, in interactive problems, information discovered while interacting with the problem situation; and demonstrating understanding of relevant concepts.

Representing and formulating. The objective is to build a coherent mental representation of the problem situation (i.e. a situation model or a problem model). To do this, relevant information must be selected, mentally organised and integrated with relevant prior knowledge. This may involve:

  • representing the problem by constructing tabular, graphic, symbolic or verbal representations, and shifting between representational formats; and
  • formulating hypotheses by identifying the relevant factors in the problem and their inter-relationships; and organising and critically evaluating information.

Planning and executing. The objective is to use one’s knowledge about the problem situation to devise a plan and execute it. Tasks where “planning and executing” is the main cognitive demand do not require any substantial prior understanding or representation of the problem situation, either because the situation is straightforward or because these aspects were previously solved. “Planning and executing” includes:

  • planning, which consists of goal setting, including clarifying the overall goal, and setting subgoals, where necessary; and devising a plan or strategy to reach the goal state, including the steps to be undertaken; and
  • executing, which consists of carrying out a plan.

Monitoring and reflecting. The objective is to regulate the distinct processes involved in problem solving, and to critically evaluate the solution, the information provided with the problem, or the strategy adopted. This includes:

  • monitoring progress towards the goal at each stage, including checking intermediate and final results, detecting unexpected events, and taking remedial action when required; and
  • reflecting on solutions from different perspectives, critically evaluating assumptions and alternative solutions, identifying the need for additional information or clarification and communicating progress in a suitable manner.’

Amongst my sample of eleven jurisdictions:

  • ‘Exploring and understanding’ items were found easier by students in Singapore, Hong Kong, South Korea, Australia, Taiwan and Finland. 
  • ‘Representing and formulating’ items were found easier in Taiwan, Shanghai, South Korea, Singapore, Hong Kong, Canada and Australia. 
  • ‘Planning and executing’ items were found easier in Finland only. 
  • ‘Monitoring and reflecting’ items were found easier in Ireland, Singapore, the USA and England.

The Report concludes:

‘This analysis shows that, in general, what differentiates high-performing systems, and particularly East Asian education systems, such as those in Hong Kong-China, Japan, Korea [South Korea], Macao-China, Shanghai -China, Singapore and Chinese Taipei [Taiwan], from lower-performing ones, is their students’ high level of proficiency on “exploring and understanding” and “representing and formulating” tasks.’

It also distinguishes those jurisdictions that perform best on interactive problems, requiring students to discover some of the information required to solve the problem, rather than being presented with all the necessary information. This seems to be the nearest equivalent to a measure of creativity in problem solving.

Comparative strengths and weaknesses in respect of interactive tasks are captured in the following diagram.

[Diagram: comparative strengths and weaknesses on interactive and knowledge-acquisition tasks]

One can see that several of my sample – Ireland, the USA, Canada, Australia, South Korea and Singapore – are placed in the top right-hand quarter of the diagram, indicating stronger than expected performance on both interactive and knowledge acquisition tasks.

England is stronger than expected on the former but not on the latter.

Jurisdictions that are weaker than expected on interactive tasks only include Hong Kong, Taiwan and Shanghai, while Finland is weaker than expected on both.

We have no information about whether these distinctions were maintained at different proficiency levels.


Comparing jurisdictions’ performance at higher proficiency levels

Table 4 and Charts 3 and 4 below show variations in the performance of countries in my sample across the four different assessments at level 6, the highest proficiency level.

The charts in particular emphasise how far ahead the Asian Tigers are in maths at this level, compared with the cross-jurisdictional variation in the other three assessments.

In all five cases, each ‘Asian Tiger’s’ level 6 performance in maths also vastly exceeds its level 6 performance in the other three assessments. The proportion of students achieving level 6 proficiency in problem solving lags far behind, even though there is a fairly strong correlation between these two assessments (see below).

In contrast, all the ‘Western’ jurisdictions in the sample – with the sole exception of Ireland – achieve a higher percentage at proficiency level 6 in problem solving than they do in maths, although the difference is always less than a full percentage point. (Even in Ireland the difference is only 0.1 of a percentage point in favour of maths.)
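A quick sketch of that comparison, using the figures from Table 4 below for the six ‘Western’ members of the sample:

```python
# Level 6 problem solving minus level 6 maths, in percentage points (Table 4).
western = {"Canada": (5.1, 4.3), "Australia": (4.4, 4.3), "Finland": (3.6, 3.5),
           "England (UK)": (3.3, 3.1), "USA": (2.7, 2.2), "Ireland": (2.1, 2.2)}

for country, (ps_l6, maths_l6) in western.items():
    print(f"{country}: {ps_l6 - maths_l6:+.1f}")
# Positive everywhere except Ireland (-0.1), and always under a full point.
```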

Shanghai is the only jurisdiction in the sample which has more students achieving proficiency level 6 in science than in problem solving. It also has the narrowest gap between level 6 performance in problem solving and in reading.

Meanwhile, England, the USA, Finland and Australia all have broadly similar profiles across the four assessments, with the largest percentage of level 6 performers in problem solving, followed by maths, science and reading respectively.

The proximity of the lines marking level 6 performance in reading and science is also particularly evident in the second chart below.


Table 4: Percentage achieving proficiency Level 6 in each domain

| Jurisdiction | PS L6 (%) | Maths L6 (%) | Science L6 (%) | Reading L6 (%) |
|---|---|---|---|---|
| Singapore | 9.6 | 19.0 | 5.8 | 5.0 |
| South Korea | 7.6 | 12.1 | 1.1 | 1.6 |
| Hong Kong | 5.1 | 12.3 | 1.8 | 1.9 |
| Shanghai | 4.1 | 30.8 | 4.2 | 3.8 |
| Taiwan | 3.8 | 18.0 | 0.6 | 1.4 |
| Canada | 5.1 | 4.3 | 1.8 | 2.1 |
| Australia | 4.4 | 4.3 | 2.6 | 1.9 |
| Finland | 3.6 | 3.5 | 3.2 | 2.2 |
| England (UK) | 3.3 | 3.1 | 1.9 | 1.3 |
| USA | 2.7 | 2.2 | 1.1 | 1.0 |
| Ireland | 2.1 | 2.2 | 1.5 | 1.3 |
| OECD Average | 2.5 | 3.3 | 1.2 | 1.1 |

 Charts 3 and 4: Percentage achieving proficiency level 6 in each domain

[Chart 3]

[Chart 4]

The pattern is materially different at proficiency levels 5 and above, as the table and chart below illustrate. These also include the proportion of all-rounders, who achieved proficiency level 5 or above in each of maths, science and reading (problem solving is not included in this measure).

The lead enjoyed by the ‘Asian Tigers’ in maths is somewhat less pronounced. The gap between performance within these jurisdictions on the different assessments also tends to be less marked, although maths accounts for comfortably the largest proportion of level 5+ performance in all five cases.

Conversely, level 5+ performance on the different assessments is typically much closer in the ‘Western’ countries. Problem solving leads the way in Australia, Canada, England and the USA, but in Finland science is in the ascendant and reading is strongest in Ireland.

Some jurisdictions have a far ‘spikier’ profile than others. Ireland is closest to achieving equilibrium across all four assessments. Australia and England share very similar profiles, though Australia outscores England in each assessment.

The second chart in particular shows how Shanghai’s ‘spike’ applies in the other three assessments but not in problem solving.

Table 5: Percentage achieving proficiency level 5 and above in each domain

| Jurisdiction | PS L5+ (%) | Maths L5+ (%) | Science L5+ (%) | Reading L5+ (%) | Maths + Science + Reading L5+ (%) |
|---|---|---|---|---|---|
| Singapore | 29.3 | 40.0 | 22.7 | 21.2 | 16.4 |
| South Korea | 27.6 | 30.9 | 11.7 | 14.2 | 8.1 |
| Hong Kong | 19.3 | 33.4 | 16.7 | 16.8 | 10.9 |
| Shanghai | 18.2 | 55.4 | 27.2 | 25.1 | 19.6 |
| Taiwan | 18.4 | 37.2 | 8.4 | 11.8 | 6.1 |
| Canada | 17.5 | 16.4 | 11.3 | 12.9 | 6.5 |
| Australia | 16.7 | 14.8 | 13.5 | 11.7 | 7.6 |
| Finland | 15.0 | 15.2 | 17.1 | 13.5 | 7.4 |
| England (UK) | 14.2 | 12.4 | 11.7 | 9.1 | 5.7 (all UK) |
| USA | 11.6 | 9.0 | 7.4 | 7.9 | 4.7 |
| Ireland | 9.4 | 10.7 | 10.8 | 11.4 | 5.7 |
| OECD Average | 11.4 | 12.6 | 8.4 | 8.4 | 4.4 |


Charts 5 and 6: Percentage achieving proficiency level 5 and above in each domain

[Charts 5 and 6]

How high-achieving problem solvers perform in other assessments


Correlations between performance in different assessments

The Report provides an analysis of the proportion of students achieving proficiency levels 5 and 6 on problem solving who also achieved that outcome on one of the other three assessments: reading, maths and science.

It argues that problem solving is a distinct and separate domain. However:

‘On average, about 68% of the problem-solving score reflects skills that are also measured in one of the three regular assessment domains. The remaining 32% reflects skills that are uniquely captured by the assessment of problem solving. Of the 68% of variation that problem-solving performance shares with other domains, the overwhelming part is shared with all three regular assessment domains (62% of the total variation); about 5% is uniquely shared between problem solving and mathematics only; and about 1% of the variation in problem solving performance hinges on skills that are specifically measured in the assessments of reading or science.’

It discusses the correlation between these different assessments:

‘A key distinction between the PISA 2012 assessment of problem solving and the regular assessments of mathematics, reading and science is that the problem-solving assessment does not measure domain-specific knowledge; rather, it focuses as much as possible on the cognitive processes fundamental to problem solving. However, these processes can also be used and taught in the other subjects assessed. For this reason, problem-solving tasks are also included among the test units for mathematics, reading and science, where their solution requires expert knowledge specific to these domains, in addition to general problem-solving skills.

It is therefore expected that student performance in problem solving is positively correlated with student performance in mathematics, reading and science. This correlation hinges mostly on generic skills, and should thus be about the same magnitude as between any two regular assessment subjects.’

These overall correlations are set out in the table below, which shows that maths has a higher correlation with problem solving than either science or reading, but that this correlation is lower than those between the three subject-related assessments.

The correlation between maths and science (0.90) is comfortably the strongest (despite the relationship between reading and science at the top end of the distribution noted above).

[Table: correlations between performance in problem solving, mathematics, reading and science]

Correlations are broadly similar across jurisdictions, but the Report notes that the association is comparatively weak in some of these, including Hong Kong. Students here are more likely to perform poorly on problem solving and well on other assessments, or vice versa.

There is also broad consistency at different performance levels, but the Report identifies those jurisdictions whose students exceed expectations in problem solving, given their performance in the other assessments. These include South Korea, the USA, England, Australia, Singapore and – to a lesser extent – Canada.

Those with lower than expected performance include Shanghai, Ireland, Hong Kong, Taiwan and Finland.

The Report notes:

‘In Shanghai-China, 86% of students perform below the expected level in problem solving, given their performance in mathematics, reading and science. Students in these countries/economies struggle to use all the skills that they demonstrate in the other domains when asked to perform problem-solving tasks.’

However, there is variation according to students’ maths proficiency:

  • Jurisdictions whose high scores on problem solving are mainly attributable to strong performers in maths include Australia, England and the USA. 
  • Jurisdictions whose high scores on problem solving are more attributable to weaker performers in maths include Ireland. 
  • Jurisdictions whose lower scores in problem solving are more attributable to weakness among strong performers in maths include Korea. 
  • Jurisdictions whose lower scores in problem solving are more attributable to weakness among weak performers in maths include Hong Kong and Taiwan. 
  • Jurisdictions whose weakness in problem solving is fairly consistent regardless of performance in maths include Shanghai and Singapore.

The Report adds:

‘In Italy, Japan and Korea, the good performance in problem solving is, to a large extent, due to the fact that lower performing students score beyond expectations in the problem-solving assessment….This may indicate that some of these students perform below their potential in mathematics; it may also indicate, more positively, that students at the bottom of the class who struggle with some subjects in school are remarkably resilient when it comes to confronting real-life challenges in non-curricular contexts…

In contrast, in Australia, England (United Kingdom) and the United States, the best students in mathematics also have excellent problem-solving skills. These countries’ good performance in problem solving is mainly due to strong performers in mathematics. This may suggest that in these countries, high performers in mathematics have access to – and take advantage of – the kinds of learning opportunities that are also useful for improving their problem-solving skills.’

What proportion of high performers in problem solving are also high performers in one of the other assessments?

The percentages of high achieving students (proficiency level 5 and above) in my sample of eleven jurisdictions who perform equally highly in each of the three domain-specific assessments are shown in Table 6 and Chart 7 below.

These show that Shanghai leads the way in each case, with 98.0% of all students who achieve proficiency level 5+ in problem solving also achieving the same outcome in maths. For science and reading the comparable figures are 75.1% and 71.7% respectively.

Taiwan is the nearest competitor in respect of problem solving plus maths, Finland in the case of problem solving plus science and Ireland in the case of problem solving plus reading.

South Korea, Taiwan and Canada are atypical of the rest in recording a higher proportion of problem solving plus reading at this level than problem solving plus science.

Singapore, Shanghai and Ireland are the only three jurisdictions that score above 50% on all three of these combinations. However, the only jurisdictions that exceed the OECD averages in all three cases are Singapore, Hong Kong, Shanghai and Finland.

Table 6: PISA problem-solving: Percentage of students achieving proficiency level 5+ in domain-specific assessments

| Jurisdiction | PS + Maths (%) | PS + Science (%) | PS + Reading (%) |
|---|---|---|---|
| Singapore | 84.1 | 57.0 | 50.2 |
| South Korea | 73.5 | 34.1 | 40.3 |
| Hong Kong | 79.8 | 49.4 | 48.9 |
| Shanghai | 98.0 | 75.1 | 71.7 |
| Taiwan | 93.0 | 35.3 | 43.7 |
| Canada | 57.7 | 43.9 | 44.5 |
| Australia | 61.3 | 54.9 | 47.1 |
| Finland | 66.1 | 65.4 | 49.5 |
| England (UK) | 59.0 | 52.8 | 41.7 |
| USA | 54.6 | 46.9 | 45.1 |
| Ireland | 59.0 | 57.2 | 52.0 |
| OECD Average | 63.5 | 45.7 | 41.0 |

Chart 7: PISA Problem-solving: Percentage of students achieving proficiency level 5+ in domain-specific assessments

[Chart 7]

What proportion of students achieve highly in one or more assessments?

Table 7 and Chart 8 below show how many students in each of my sample achieved proficiency level 5 or higher in problem solving only; in problem solving and one or more other assessments; in one or more assessments but not problem solving; and in at least one assessment (i.e. the total of the three preceding columns).

I have also repeated in the final column the percentage achieving this proficiency level in each of maths, science and reading. (PISA has not released information about the proportion of students who achieved this feat across all four assessments.)

These reveal that the percentages of students who achieve proficiency level 5+ only in problem solving are very small, ranging from 0.3% in Shanghai to 6.7% in South Korea.

Conversely, the percentages of students achieving proficiency level 5+ in any one of the other assessments but not in problem solving are typically significantly higher, ranging from 4.5% in the USA to 38.1% in Shanghai.
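Incidentally, the first four columns of Table 7 below hang together arithmetically: the first three are mutually exclusive and should sum to the fourth. A minimal sketch verifying this for a few rows:

```python
# Columns: PS only, PS + 1 or more, 1+ but not PS, L5+ in at least one.
rows = {
    "Singapore": (4.3, 25.0, 16.5, 45.8),
    "Shanghai": (0.3, 17.9, 38.1, 56.3),
    "England (UK)": (4.4, 9.8, 6.8, 21.0),
    "OECD Average": (3.1, 8.2, 8.5, 19.8),
}

for name, (ps_only, ps_plus_other, other_only, at_least_one) in rows.items():
    total = ps_only + ps_plus_other + other_only
    assert abs(total - at_least_one) < 0.05, name  # matches to rounding
    print(f"{name}: {total:.1f} == {at_least_one}")
```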

There is quite a bit of variation in terms of whether jurisdictions score more highly on ‘problem solving and at least one other’ (second column) or ‘at least one other excluding problem solving’ (third column).

More importantly, the fourth column shows that the jurisdiction with the most students achieving proficiency level 5 or higher in at least one assessment is clearly Shanghai, followed by Singapore, Hong Kong, South Korea and Taiwan in that order.

The proportion of students achieving this outcome in Shanghai is close to three times the OECD average, comfortably more than twice the rate achieved in any of the ‘Western’ countries and three times the rate achieved in the USA.

The same is true of the proportion of students achieving this level in the three domain-specific assessments.

On this measure, South Korea and Taiwan fall significantly behind their Asian competitors, and the latter is overtaken by Australia, Finland and Canada.


Table 7: Percentage achieving proficiency level 5+ in different combinations of PISA assessments

| Jurisdiction | PS only (%) | PS + 1 or more (%) | 1+ but not PS (%) | L5+ in at least one (%) | L5+ in Maths + Science + Reading (%) |
|---|---|---|---|---|---|
| Singapore | 4.3 | 25.0 | 16.5 | 45.8 | 16.4 |
| South Korea | 6.7 | 20.9 | 11.3 | 38.9 | 8.1 |
| Hong Kong | 3.4 | 15.9 | 20.5 | 39.8 | 10.9 |
| Shanghai | 0.3 | 17.9 | 38.1 | 56.3 | 19.6 |
| Taiwan | 1.2 | 17.1 | 20.4 | 38.7 | 6.1 |
| Canada | 5.5 | 12.0 | 9.9 | 27.4 | 6.5 |
| Australia | 4.7 | 12.0 | 7.7 | 24.4 | 7.6 |
| Finland | 3.0 | 12.0 | 11.9 | 26.9 | 7.4 |
| England (UK) | 4.4 | 9.8 | 6.8 | 21.0 | 5.7 (all UK) |
| USA | 4.1 | 7.5 | 4.5 | 16.1 | 4.7 |
| Ireland | 2.6 | 6.8 | 10.1 | 19.5 | 5.7 |
| OECD Average | 3.1 | 8.2 | 8.5 | 19.8 | 4.4 |

Chart 8: Percentage achieving proficiency level 5+ in different combinations of PISA assessments

[Chart 8]

The Report comments:

‘The proportion of students who reach the highest levels of proficiency in at least one domain (problem solving, mathematics, reading or science) can be considered a measure of the breadth of a country’s/economy’s pool of top performers. By this measure, the largest pool of top performers is found in Shanghai-China, where more than half of all students (56%) perform at the highest levels in at least one domain, followed by Singapore (46%), Hong Kong-China (40%), Korea and Chinese Taipei (39%)…Only one OECD country, Korea, is found among the five countries/economies with the largest proportion of top performers. On average across OECD countries, 20% of students are top performers in at least one assessment domain.

The proportion of students performing at the top in problem solving and in either mathematics, reading or science, too, can be considered a measure of the depth of this pool. These are top performers who combine the mastery of a specific domain of knowledge with the ability to apply their unique skills flexibly, in a variety of contexts. By this measure, the deepest pools of top performers can be found in Singapore (25% of students), Korea (21%), Shanghai-China (18%) and Chinese Taipei (17%). On average across OECD countries, only 8% of students are top performers in both a core subject and in problem solving.’

There is no explanation of why proficiency level 5 should be equated by PISA with the breadth of a jurisdiction’s ‘pool of top performers’. The distinction between proficiency levels 5 and 6 in this respect requires further discussion.

In addition to updated ‘all-rounder’ data showing what proportion of students achieved this outcome across all four assessments, it would be really interesting to see the proportion of students achieving at proficiency level 6 across different combinations of these four assessments – and to see what proportion of students achieving that outcome in different jurisdictions are direct beneficiaries of targeted support, such as a gifted education programme.

In the light of this analysis, what are jurisdictions’ priorities for improving problem solving performance?

Leaving aside strengths and weaknesses in different elements of problem solving discussed above, this analysis suggests that the eleven jurisdictions in my sample should address the following priorities:

Singapore has a clear lead at proficiency level 6, but falls behind South Korea at level 5 (though Singapore re-establishes its ascendancy when levels 5 and 6 are considered together). It also has more level 1 performers than South Korea. It should perhaps focus on reducing the size of this tail and pushing through more of its mid-range performers to level 5. There is a pronounced imbalance in favour of boys at level 6, so enabling more girls to achieve the highest level of performance is a clear priority. There may also be a case for prioritising the children of semi-skilled workers.

South Korea needs to focus on getting a larger proportion of its level 5 performers to level 6. This effort should be focused disproportionately on girls, who are significantly under-represented at both levels 5 and 6. South Korea has a very small tail to worry about – and may even be getting close to minimising this. It needs to concentrate on improving the problem solving skills of its stronger performers in maths.

Hong Kong has a slightly bigger tail than Singapore’s but is significantly behind at both proficiency levels 5 and 6. In the case of level 6 it is equalled by Canada. Hong Kong needs to focus simultaneously on reducing the tail and lifting performance across the top end, where girls and weaker performers in maths are a clear priority.

Shanghai has a similar profile to Hong Kong’s in all respects, though with somewhat fewer level 6 performers. It also needs to focus effort simultaneously at the top and the bottom of the distribution. Amongst this sample, Shanghai has the worst under-representation of girls at level 5 and at levels 5 and 6 together, so addressing that imbalance is an obvious priority. It also demonstrated the largest variation in performance against PISA’s ESCS index, which suggests that it should target young people from disadvantaged backgrounds, as well as the children of semi-skilled workers.

Taiwan is rather similar to Hong Kong and Shanghai, but its tail is slightly bigger and its level 6 cadre slightly smaller, while it does somewhat better at level 5. It may need to focus more at the very bottom, but also at the very top. Taiwan also has a problem with high-performing girls, second only to Shanghai as far as level 5 and levels 5 and 6 together are concerned. However, like Shanghai, it does comparatively better than the other ‘Asian Tigers’ in terms of girls at level 6. It also needs to consider the problem solving performance of its weaker performers in maths.

Canada is the closest western competitor to the ‘Asian Tigers’ in terms of the proportions of students at levels 1 and 5 – and it already outscores Shanghai and Taiwan at level 6. It needs to continue cutting down the tail without compromising achievement at the top end. Canada also has small but significant gender imbalances in favour of boys at the top end.

Australia by comparison is significantly worse than Canada at level 1, broadly comparable at level 5 and somewhat worse at level 6. It too needs to improve scores at the very bottom and the very top. Australia’s gender imbalance is more pronounced at level 6 than level 5.

Finland has the same mean score as Australia but a smaller tail (though not quite as small as Canada’s). It needs to improve across the piece but might benefit from concentrating rather more heavily at the top end. Finland has a slight gender imbalance in favour of girls at level 5, but boys are more in the ascendancy at level 6 than in either England or the USA. As in Australia, this latter point needs addressing.

England has a profile similar to Australia’s, but is less effective at all three selected proficiency levels. It is further behind at the top than at the bottom of the distribution, but needs to work hard at both ends to catch up with the strongest western performers and maintain its advantage over the USA and Ireland. Gender imbalances are small but nonetheless significant.

USA has a comparatively long tail of low achievement at proficiency level 1 and, with the exception of Ireland, the fewest high achievers. This profile is very close to the OECD average. As in England, the relatively small size of gender imbalances in favour of boys does not mean that these can be ignored.

Ireland has the longest tail of low achievement and the smallest proportion of students at proficiency level 5, at level 6, and at levels 5 and 6 combined. It needs to improve performance at both ends of the achievement distribution. Ireland has a larger preponderance of boys at level 6 than its Western competitors and this needs addressing. The limited socio-economic evidence suggests that Ireland should also be targeting the offspring of parents with semi-skilled and elementary occupations.

So there is further scope for improvement in all eleven jurisdictions. Meanwhile the OECD could usefully provide a more in-depth analysis of high achievers on its assessments that features:

  • Proficiency level 6 performance across the board.
  • Socio-economic disparities in performance at proficiency levels 5 and 6.
  • ‘All-rounder’ achievement at these levels across all four assessments and
  • Correlations between success at these levels and specific educational provision for high achievers including gifted education programmes.


GP

April 2014

Unpacking the Primary Assessment and Accountability Reforms

This post examines the Government response to consultation on primary assessment and accountability.

It sets out exactly what is planned, what further steps will be necessary to make these plans viable, and the implementation timetable.

It is part of a sequence of posts I have devoted to this topic.

Earlier posts in the series include The Removal of National Curriculum Levels and the Implications for Able Pupils’ Progression (June 2012) and Whither National Curriculum Assessment Without Levels? (February 2013).

The consultation response contrives to be both minimal and dense. It is necessary to unpick each element carefully, to consider its implications for the package as a whole and to reflect on how that package fits in the context of wider education reform.

I have organised the post so that it considers sequentially:

  • The case for change, including the aims and core principles, to establish the policy frame for the planned reforms.
  • The impact on the assessment experience of children aged 2-11 and how that is likely to change.
  • The introduction of baseline assessment in Year R.
  • The future shape of end of KS1 and end of KS2 assessment respectively.
  • How the new assessment outcomes will be derived, reported and published.
  • The impact on floor standards.

Towards the end of the post I have also provided a composite ‘to do’ list containing all the declared further steps necessary to make the plan viable, with a suggested deadline for each.

And the post concludes with an overall judgement on the plans, in the form of a summary of key issues and unanswered questions arising from the earlier commentary. Impatient readers may wish to jump straight to that section.

I am indebted to Warwick Mansell for his previous post on this topic. I shall try hard not to parrot the important points he has already made, though there is inevitably some overlap.

Readers should also look to Michael Tidd for more information about the shape and content of the new tests.

What has been published?

The original consultation document ‘Primary assessment and accountability under the new national curriculum’ was published on 17 July 2013 with a deadline for response of 17 October 2013. At that stage the Government’s response was due ‘in autumn 2013’.

The response was finally published on 27 March, some four months later than planned and only five months prior to the introduction of the revised national curriculum which these arrangements are designed to support.

It is likely that the Government will have decided that 31 March was the latest feasible date to issue the response, so they were right up against the wire.

It was accompanied by:

  • A press release which focused on the full range of assessment reforms – for primary, secondary and post-16.

Shortly before the response was published, the reply to a Parliamentary question asked on 17 March explained that test frameworks were expected to be included within it:

‘Guidance on the nature of the revised key stage 1 and key stage 2 tests, including mathematics, will be published by the Standards and Testing Agency in the form of test framework documents. The frameworks are due to be released as part of the Government’s response to the primary assessment and accountability consultation. In addition, some example test questions will be made available to schools this summer and a full sample test will be made available in the summer of 2015.’ (Col 383W)


In the event, these documents – seven in all – did not appear until 31 March and there was no reference to any of the three commitments above in what appeared on 27 March.

Finally, the Standards and Testing Agency published on 3 April a guidance page on national curriculum tests from 2016. At present it contains very little information but further material will be added as and when it is published.

Partly because the initial consultation document was extremely ‘drafty’, the reaction of many key external respondents to the consultation was largely negative. One imagines that much of the period since 17 October has been devoted to finding the common ground.

Policy makers will have had to do most of their work after the consultation document was issued, because they were not ready beforehand.

But the length of the delay in issuing the response would suggest that they also encountered significant dissent amongst internal stakeholders – and that the eventual outcome is likely to be a compromise of sorts between these competing interests.

Such compromises tend to have observable weaknesses and/or put off problematic issues for another day.

A brief summary of consultation responses is included within the Government’s response. I will refer to this at relevant points during the discussion below.


The Case for Change


Aims

The consultation response begins – as did the original consultation document – with a section setting out the case for reform.

It provides a framework of aims and principles intended to underpin the changes that are being set in place.

The aims are:

  • The most important outcome of primary education is to ‘give as many pupils as possible the knowledge and skills to flourish in the later phases of education’. This is a broader restatement of the ‘secondary ready’ concept adopted in the original consultation document.
  • The primary national curriculum and accountability reforms ‘set high expectations so that all children can reach their potential and are well prepared for secondary school’. Here the ‘secondary ready’ hurdle is more baldly stated. The parallel notion is that all children should do as well as they can – and that they may well achieve different levels of performance. (‘Reach their potential’ is disliked by some because it is considered to imply a fixed ceiling for each child and fixed mindset thinking.)
  • To raise current threshold expectations. These are set too low, since too few learners (47%) with KS2 level 4C in both English and maths go on to achieve five or more GCSE grades A*-C including English and maths, while 72% of those with KS2 level 4B do so. So the new KS2 bar will be set at this higher level, but with the expectation that 85% of learners per school will jump it, 13% more than the current national figure. Meanwhile the KS4 outcome will also change, to achievement across eight GCSEs rather than five, quite probably at a more demanding level than the present C grade. In the true sense, this is a moving target.
  • ‘No child should be allowed to fall behind’. This is a reference to the notion of ‘mastery’ in its crudest sense, though the model proposed will not deliver this outcome. We have noted already a reference to ‘as many children as possible’ and the school-level target – initially at least – will be set at 85%. In reality, a significant minority of learners will progress more slowly and will fall short of the threshold at the end of KS2.
  • The new system ‘will set a higher bar’ but ‘almost all pupils should leave primary school well-placed to succeed in the next phase of their education’. Another nuanced version of ‘secondary ready’ is introduced. This marks a recognition that some learners will not jump over the higher bar. In the light of subsequent references to 85%, ‘almost all’ is rather over-optimistic.
  • ‘We also want to celebrate the progress that pupils make in schools with more challenging intakes’. Getting ‘nearly all pupils to meet this standard…’ (the standard of secondary readiness?) ‘…is very demanding, at least in the short term’. There will therefore be recognition of progress ‘from a low starting point’ – even though these learners have, by definition, been allowed to fall behind and will continue to do so.

So there is something of a muddle here, no doubt engendered by a spirit of compromise.

The black and white distinction of ‘secondary-readiness’ has been replaced by various verbal approximations, but the bottom line is that there will be a defined threshold denoting preparedness that is pitched higher than the current threshold.

And the proportion likely to fall short is downplayed – there is apparent unwillingness at this stage to acknowledge the norm that up to 15% of learners in each school will undershoot the threshold – substantially more in schools with ‘challenging intakes’.

What this boils down to is a desire that all will achieve the new higher hurdle – and that all will be encouraged to exceed it if they can – tempered by recognition that this is presently impossible. No child should be allowed to fall behind but many inevitably will do so.

It might have been better to express these aims in the form of future aspirations – and our collective efforts to bridge the gap between present reality and those ambitious aspirations.

Principles

The section concludes with a new set of principles governing pedagogy, assessment and accountability:

  • ‘Ongoing, teacher-led assessment is a crucial part of effective teaching;
  • Schools should have the freedom to decide how to teach their curriculum and how to track the progress that pupils make;
  • Both summative teacher assessment and external testing are important;
  • Accountability is key to a successful school system, and therefore must be fair and transparent;
  • Measures of both progress and attainment are important for understanding school performance; and
  • A broad range of information should be published to help parents and the wider public know how well schools are performing.’

These are generic ‘motherhood and apple pie’ statements and so largely uncontroversial. I might have added a seventh – that schools’ in-house assessment and reporting systems must complement summative assessment and testing, including by predicting for parents the anticipated outcomes of the latter.

Perhaps interestingly, there is no repetition of the defence for the removal of national curriculum levels. Instead, the response concentrates on the support available to schools.

It mentions discussion with an ‘expert group on assessment’ about ‘how to support schools to make best use of the new assessment freedoms’. We are not told the membership of this group (which, as far as I know, has not been made public) or the nature of its remit.

There is also a link to information about the Assessment Innovation Fund, which will provide up to 10 grants of up to £10,000 which schools and organisations can use to develop packages that share their innovative practice with others.

 

Children’s experience of assessment up to the end of KS2

The response mentions the full range of national assessments that will impact on children between the ages of two and 11:

  • The statutory progress check at two years of age.
  • A new baseline assessment undertaken within a few weeks of the start of Year R, introduced from September 2015.
  • An Early Years Foundation Stage Profile undertaken in the final term of the year in which children reach the age of five. A revised profile was introduced from September 2012. It is currently compulsory but will be optional from September 2016. The original consultation document said that the profile would no longer be moderated and data would no longer be collected. Neither of those commitments is repeated here.
  • The Phonics Screening Check, normally undertaken in Year 1. The possibility of making these assessments non-statutory for all-through primary schools, suggested in the consultation document, has not been pursued: 53% of respondents opposed this idea, whereas 32% supported it.
  • End of KS1 assessment and
  • End of KS2 assessment.

So a total of six assessments are in place between the ages of two and 11. At least four – and possibly five – will be undertaken between ages two and seven.

It is likely that early years professionals will baulk at this amount of assessment, no matter how sensitively it is designed. But the cost and inefficiency of the model are also open to criticism.

The Reception Baseline

Approach

The original consultation document asked whether:

  • KS1 assessment should be retained as a baseline – 45% supported this and 41% were opposed.
  • A baseline check should be introduced at the start of Reception – 51% supported this and 34% were opposed.
  • Such a baseline check should be optional – 68% agreed and 19% disagreed.
  • Schools should be allowed to choose from a range of commercially available materials for this baseline check – 73% said no and only 15% said yes.

So, whereas views were mixed on where the baseline should be set, there were substantial majorities in favour of any Year R baseline check being optional and following a single, standard national format.

The response argues that Year R is the most sensible point at which to position the baseline since that is:

‘…the earliest point that nearly all children are in school’.

What happens in respect of children who are not in school at this point is not discussed.

There is no explanation of why the Government has disregarded the clear majority of respondents by choosing to permit a range of assessment approaches, so this decision must be ideologically motivated.

The response says ‘most’ are likely to be administered by teaching staff, leaving open the possibility that some options will be administered externally.

Design

Such assessments will need to be:

‘…strong predictors of key stage 1 and key stage 2 attainment, whilst reflecting the age and abilities of children in Reception’.

Presumably this means predictors of attainment in each of the three core subjects – English, maths and science – rather than any broader notion of attainment. The challenge inherent in securing a reasonable predictor of attainment across these domains seven years further on in a child’s development should not be under-estimated.

The response points out that such assessment tools are already available for use in Year R, some are used widely and some schools have long experience of using them. But there is no information about how many of these are deemed to meet already the description above.

In any case, new criteria need to be devised which all such assessments must meet. Some degree of modification will be necessary for all existing products and new products will be launched to compete in the market.

There is an opportunity to use this process to ratchet up the Year R Baseline beyond current expectations, so matching the corresponding process at the end of KS2. The consultation response says nothing about whether this is on the cards.

Interestingly, in his subsequent ‘Unsure start’ speech about early years inspection, HMCI refers to:

‘…the government’s announcement last week that they will be introducing a readiness-for-school test at age four. This is an ideal opportunity to improve accountability. But I think it should go further.

I hope that the published outcomes of these tests will be detailed enough to show parents how their own child has performed. I fear that an overall school grade will fail to illuminate the progress of poor children. I ask government to think again about this issue.’

The terminology – ‘readiness for school’ – is markedly blunter than the references to a reception baseline in the consultation response. There is nothing in the response about the outcomes of these tests being published, nor anything about ‘an overall school grade’.

Does this suggest that decisions have already been made that were not communicated in the consultation response?


Timeline, options, questions

Several pieces of further work are required in short order to inform schools and providers about what will be required – and to enable both to prepare for introduction of the assessments from September 2015. All these should feature in the ‘to do’ list below.

One might reasonably have hoped that – especially given the long delay – some attempt might have been made to publish suggested draft criteria for the baseline alongside the consultation response. The fact that even preliminary research into existing practice has not been undertaken is a cause for concern.

Although the baseline will be introduced from September 2015, there is a one-year interim measure which can only apply to all-through primary schools:

  • They can opt out of the Year R baseline measure entirely, relying instead on KS1 outcomes as their baseline; or
  • They can use an approved Year R baseline assessment and have this cohort’s progress measured at the end of KS2 (which will be in 2022) by either the Year R or the KS1 baseline, whichever demonstrates the most progress.

In the period up to and including 2021, progress will continue to be measured from the end of KS1. So learners who complete KS2 in 2021 for example will be assessed on progress since their KS1 tests in 2017.

Junior and middle schools will also continue to use a KS1 baseline.

Arrangements for infant and first schools are still to be determined, another rather worrying omission at this stage in proceedings.

It is also clear that all-through primary schools (and infant/first schools?) will continue to be able to opt out from the Year R baseline from September 2016 onwards, since the response says:

‘Schools that choose not to use an approved baseline assessment from 2016 will be judged on an attainment floor standard alone’.

Hence the Year R baseline check is entirely optional and a majority of schools could choose not to undertake it.

However, they would need to be confident of meeting the demanding 85% attainment threshold in the floor standard.

They might be wise to postpone that decision until the pitch of the progress expectation is determined. For neither the Year R baseline nor the amount of progress that learners are expected to make from their starting point in Year R is yet defined.

This latter point applies at the average school level (for the purposes of the floor standard) and in respect of the individual learner. For example, if a four year-old is particularly precocious in, say, maths, what scaled scores must they register seven years later to be judged to have made sufficient progress?

There are several associated questions that follow on from this.

Will it be in schools’ interests to acknowledge that they have precocious four year-olds at all? Will the Year R baseline reinforce the tendency to use Reception to bring all children to the same starting point in readiness for Year 1, regardless of their precocity?

Will the moderation arrangements be hard-edged enough to stop all-through primary schools gaming the system by artificially depressing their baseline outcomes?

Who will undertake this moderation and how much will it cost? Will not the decision to permit schools to choose from a range of measures unnecessarily complicate the moderation process and add to the expense?

The consultation response neither poses these questions nor supplies answers.

The future shape of end KS1 and end KS2 assessment

.

What assessment will take place?

At KS1 learners will be assessed in:

  • Reading – test plus teacher assessment
  • Writing – test (of grammar, punctuation and spelling) plus teacher assessment
  • Speaking and listening – teacher assessment
  • Maths – test plus teacher assessment
  • Science – teacher assessment

The new test of grammar, punctuation and spelling did not feature in the original consultation and has presumably been introduced to strengthen the marker of progress to which four year-olds should aspire at age seven.

The draft test specifications for the KS1 tests in reading, GPS and maths outline the requirements placed on the test developers, so it is straightforward to compare the specifications for reading and maths with the current tests.

The GPS test will include a 20 minute written grammar and punctuation task; a 20 minute test comprising short grammar, punctuation and vocabulary questions; and a 15 minute spelling task.

There is a passing reference to further work on KS1 moderation which is included in the ‘to do’ list below.

At KS2 learners will be assessed in:

  • Reading – test plus teacher assessment
  • Writing – test (of grammar, punctuation and spelling) plus teacher assessment
  • Maths – test plus teacher assessment
  • Science – teacher assessment plus a science sampling test.

Once again, the draft test specifications – reading, GPS, maths and science sampling – describe the shape of each test and the content they are expected to assess.

I will leave it to experts to comment on the content of the tests.

 .

Academies and free schools

It is important to note that the framing of this content – by means of detailed ‘performance descriptors’ – means that the freedom academies and free schools enjoy in departing from the national curriculum will be largely illusory.

I raised this issue back in February 2013:

  • ‘We know that there will be a new grading system in the core subjects at the end of KS2. If this were to be based on the ATs as drafted, it could only reflect whether or not learners can demonstrate that they know, can apply and understand ‘the matters, skills and processes specified’ in the PoS as a whole. Since there is no provision for ATs that reflect sub-elements of the PoS – such as reading, writing, spelling – grades will have to be awarded on the basis of separate syllabuses for end of KS2 tests associated with these sub-elements.
  • This grading system must anyway be applied universally if it is to inform the publication of performance tables. Since some schools are exempt from National Curriculum requirements, it follows that grading cannot be derived directly from the ATs and/or the PoS, but must be independent of them. So this once more points to end of KS2 tests based on entirely separate syllabuses which nevertheless reflect the relevant part of the draft PoS. The KS2 arrangements are therefore very similar to those planned at KS4.’

I have more to say about the ‘performance descriptors’ below.

 .

Single tests for all learners

A critical point I want to emphasise at this juncture – not mentioned at all in the consultation document or the response – is the test development challenge inherent in producing single papers suitable for all learners, regardless of their attainment.

We know from the response that the P-scales will be retained for those who are unable to access the end of key stage tests. (Incidentally, the content of the P-scales will remain unchanged so they will not be aligned with the revised national curriculum, as suggested in the consultation document.)

There will also be provision for pupils who are working ‘above the P-scales but below the level of the test’.

Now the P-scales are for learners working below level 1 (in old currency). This is the first indication I have seen that the tests may not cater for the full range from Level 1-equivalent to Level 6-equivalent and above. But no further information is provided.

It may be that this is a reference to learners who are working towards level 1 (in old currency) but do not have SEN.

The 2014 KS2 ARA booklet notes:

‘Children working towards level 1 of the national curriculum who do not have a special educational need should be reported to STA as ‘W’ (Working below the level). This includes children who are working towards level 1 solely because they have English as an additional language. Schools should use the code ‘NOTSEN’ to explain why a child working towards level 1 does not have P scales reported. ‘NOTSEN’ replaces the code ‘EAL’ that was used in previous years.’

The consultation document said:

‘We do not propose to develop an equivalent to the current level 6 tests, which are used to challenge the highest-attaining pupils. Key stage 2 national curriculum tests will include challenging material (at least of the standard of the current level 6 test) which all pupils will have the opportunity to answer, without the need for a separate test’.

The draft test specifications make it clear that the tests should:

‘provide a suitable challenge for all children and give every child the opportunity to achieve as high a standard…as possible.’

Moreover:

‘In order to improve general accessibility for all children, where possible, questions will be placed in order of difficulty.’

The development of single tests covering this span of attainment – from level 1 to above level 6 – tests in which the questions are posed in order of difficulty and even the highest attainers must answer all questions – seems to me a very tall order, especially in maths.

More than that, I urgently need persuading that this is not a waste of high attainers’ time and poor assessment practice.

 .

How assessment outcomes will be derived, reported and published

Deriving assessment outcomes

One of the reasons cited for replacing national curriculum levels was the complexity of the system and the difficulty parents experienced in understanding it.

The Ministerial response to the original report from the National Curriculum Expert Panel said:

‘As you rightly identified, the current system is confusing for parents and restrictive for teachers. I agree with your recommendation that there should be a direct relationship between what children are taught and what is assessed. We will therefore describe subject content in a way which makes clear both what should be taught and what pupils should know and be able to do as a result.’

The consultation document glossed the same point thus:

‘Schools will be able to focus their teaching, assessment and reporting not on a set of opaque level descriptions, but on the essential knowledge that all pupils should learn.’

However, the consultation response introduces for the first time the concept of a ‘performance descriptor’.

This term is defined in the glossaries at the end of each draft test specification:

‘Description of the typical characteristics of children working at a particular standard. For these tests, the performance descriptor will characterise the minimum performance required to be working at the appropriate standard for the end of the key stage.’

Essentially this is a collective term for something very similar to old-style level descriptions.

Except that, in the case of the tests, they are all describing the same level of performance.

They have been rendered necessary by the odd decision to provide only a single generic attainment target for each programme of study. But, as noted back in February 2013, the test developers need a more sophisticated framework on which to base their assessments.

According to the draft test specifications they will also be used:

‘By a panel of teachers to set the standards on the new tests following their first administration in May 2016’.

When it comes to teacher assessment, the consultation response says:

‘New performance descriptors will be introduced to inform the statutory teacher assessments at the end of key stage one [and]…key stage two.’

But there are two models in play simultaneously.

In four cases – science at KS1 and reading, maths and science at KS2 – there will be ‘a single performance descriptor of the new expected standard’, in the same way as there are in the test specifications.

But in five cases – reading, writing, speaking and listening and maths at KS1; and writing at KS2:

‘teachers will assess pupils as meeting one of several performance descriptors’.

These are old-style level descriptors by another name. They perform exactly the same function.

The response says that the KS1 teacher assessment performance descriptors will be drafted by an expert group for introduction in autumn 2014. It does not mention whether KS2 teacher assessment performance descriptors will be devised in the same way and to the same timetable.

 .

Reporting assessment outcomes to parents

When it comes to reporting to parents, there will be three different arrangements in play at both KS1 and KS2:

  • Test results will be reported by means of scaled scores (of which more in a moment).
  • One set of teacher assessments will be reported by selecting from a set of differentiated performance descriptors.
  • A second set of teacher assessments will be reported according to whether learners have achieved a single threshold performance descriptor.

This is already significantly more complex than the previous system, which applied the same framework of national curriculum levels across the piece.

It seems that KS1 test outcomes will be reported as straightforward scaled scores (though this is only mentioned on page 8 of the main text of the response and not in Annex B, which compares the new arrangements with those currently in place).

But, in the case of KS2:

‘Parents will be provided with their child’s score alongside the average for their school, the local area and nationally. In the light of the consultation responses, we will not give parents a decile ranking for their child due to concerns about whether decile rankings are meaningful and their reliability at individual pupil level.’

The consultation document proposed a tripartite reporting system comprising:

  • A scaled score for each KS2 test, derived from raw test marks and built around a ‘secondary readiness standard’. This standard would be set at a scaled score of 100, which would remain unchanged. It was suggested for illustrative purposes that a scale based on the current national curriculum tests might run from 80 to 130.
  • An average scaled score in each test for other pupils nationally with the same prior attainment at the baseline. Comparison of a learner’s scaled score with the average scaled score would show whether they had made more or less progress than the national average.
  • A national ranking in each test – expressed in terms of deciles – showing how a learner’s scaled score compared with the range of performance nationally.

The latter has been dispensed with, given that 35% of consultation respondents disagreed with it, but there were clearly technical reservations too.

In its place, the ‘value added’ progress measure has been expanded so that there is a comparison with other pupils in the learner’s own school and the ‘local area’ (which presumably means local authority). This beefs up the progression element in reporting at the expense of information about the attainment level achieved.
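To make the proposed comparison concrete, here is a minimal sketch of how a child’s scaled score might be set against the three averages. Everything here is hypothetical: the response specifies neither the scale nor how the school, local area and national averages will be calculated.

```python
# Hypothetical sketch: the response specifies neither the scale nor how the
# school, local area and national averages will be calculated.

from statistics import mean

def progress_comparison(pupil_score, peer_scores):
    """Compare a pupil's scaled score with the average for each
    comparator group (school, local area, national)."""
    result = {}
    for group, scores in peer_scores.items():
        avg = mean(scores)
        # A positive difference means the pupil is above that average.
        result[group] = {"average": round(avg, 1),
                         "difference": round(pupil_score - avg, 1)}
    return result

# Invented example: a pupil scoring 104 in the KS2 reading test.
comparators = {
    "school": [96, 100, 104, 107, 111],
    "local area": [95, 99, 103, 106],
    "national": [92, 100, 103, 109, 114],
}
print(progress_comparison(104, comparators))
```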

So at the end of KS2 parents will receive a scaled score and three average scaled scores for each of reading, writing and maths – twelve scores in all – plus four performance descriptors, of which three will be singleton threshold descriptors (reading, maths and science) and one will be selected from a differentiated series (writing). That makes sixteen assessment outcomes altogether, provided in four different formats.

The consultation response tells us nothing more about the range of the scale that will be used to provide scaled scores. We do not even know if it will be the same for each test.

The draft test specifications say that:

‘The exact scale for the scaled scores will be determined following further analysis of trialling data. This will include a full review of the reporting of confidence intervals for scaled scores.’

But they also contain this worrying statement:

‘The provision of a scaled score will aid in the interpretation of children’s performance over time as the scaled score which represents the expected standard will be the same year on year. However, at the extremes of the scaled score distribution, as is standard practice, the scores will be truncated such that above and below a certain point, all children will be awarded the same scaled score in order to minimise the effect for children at the ends of the distribution where the test is not measuring optimally.’

This appears to suggest that scaled scores will not accurately describe performance at the extremes of the distribution, because the tests will not accurately measure such performance. This might be describing a statistical truism, but it again begs the question whether the highest attainers are being short-changed by the selected approach.
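To see what truncation means for reporting, here is a minimal sketch. The cut points are invented: the specifications say only that truncation will happen at the extremes, not where.

```python
# Invented cut points: the draft specifications do not say where the scale
# will be truncated, only that it will happen at the extremes.

def reported_scaled_score(score, floor=80, ceiling=120):
    """Clamp a scaled score to the reporting range, so that all children
    beyond either cut point receive the same reported score."""
    return max(floor, min(ceiling, score))

# Two quite different high attainers become indistinguishable:
print(reported_scaled_score(121))  # -> 120
print(reported_scaled_score(135))  # -> 120
```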

.

Publication of assessment outcomes

The response introduces the idea that ‘a suite of indicators’ will be published on each school’s own website in a standard format. These are:

  • The average progress made by pupils in reading, writing and maths. (This is presumably relevant to both KS1 and KS2 and to both tests and teacher assessment.)
  • The percentage of pupils reaching the expected standard in reading, writing and mathematics at the end of key stage 2. (This is presumably relevant to both tests and teacher assessment.)
  • The average score of pupils in their end of key stage 2 assessments. (The final word suggests teacher assessment as well as tests, even though there will not be a score from the former.)
  • The percentage of pupils who achieve a high score in all areas at the end of key stage 2. (Does ‘all areas’ imply something more than statutory tests and teacher assessments? Does it mean treating each area separately, or providing details only of those who have achieved high scores across all areas?)

The latter is the only reference to high attainers in the entire response. It does not give any indication of what will count as a high score for these purposes. Will it be designed to catch the top third of attainers or something more demanding, perhaps equivalent to the top decile?

A decision has been taken not to report the outcomes of assessment against the P-scales because the need to contextualise such information is perceived to be relatively greater.

And, as noted above, HMCI let slip the fact that the outcomes of reception baselines would also be published, but apparently in the form of a single overall grade.

We are not told when these requirements will be introduced, but presumably they must be in place to report the outcomes of assessments undertaken in spring 2016.

Additionally:

‘So that parents can make comparisons between schools, we would like to show each school’s position in the country on these measures and present these results in a manner that is clear for all audiences to understand. We will discuss how best to do so with stakeholders, to ensure that the presentation of the data is clear, fair and statistically robust.’

This suggests inclusion in the 2016 School Performance Tables, but this is not stated explicitly.

Indeed, apart from references to the publication of progress measures in the 2022 Performance Tables, there is no explicit coverage of their contribution in the response, nor any reference to the planned supporting data portal, or how data will be distributed between the Tables and the portal.

The original consultation document gave several commitments on the future content of performance tables. They included:

  • How many of a school’s pupils are amongst the highest attaining nationally, by showing the percentage of pupils achieving a high scaled score in each subject.
  • Measures to show the attainment and progress of learners attracting the Pupil Premium.
  • Comparison of each school’s performance with that of schools with similar intakes.

None are mentioned here, nor are any of the suggestions advanced by respondents taken up.

Floor standards

Changes are proposed to the floor standards with effect from September 2016.

This section of the response begins by committing to:

‘…a new floor standard that holds schools to account both on the progress that they make and on how well their pupils achieve.’

But the plans set out subsequently do not meet this description.

The progress element of the current floor standard relates to any of reading, writing or mathematics but, under the new floor standard, it will relate to all three of these together.

An all-through primary school must demonstrate that:

‘…pupils make sufficient progress at key stage 2 from their starting point…’

As we have noted above, all-through primaries can opt to use the KS1 baseline or the Year R baseline in 2015. Moreover, from 2016 they can choose not to use the Year R baseline and be assessed solely on the attainment measure in the floor standards (see below).

Junior and middle schools obviously apply the KS1 baseline, while arrangements for infant and first schools have yet to be finalised.

What constitutes ‘sufficient progress’ is not defined. Annex C of the response says:

‘For 2016 we will set the precise extent of progress required once key stage 2 tests have been sat for the first time.’

Presumably this will be progress from KS1 to KS2, since progress from the Year R baseline will not be introduced until 2023.

The attainment element of the new floor standards is for schools to have 85% or more of pupils meeting the new, higher threshold standard at the end of KS2 in all of reading, writing and maths. The text says explicitly that this threshold is ‘similar to a level 4b under the current system’.

Annex C clarifies that this will be judged by the achievement of a scaled score of 100 or more in each of the reading and maths tests, plus teacher assessment that learners have reached the expected standard in writing (so the GPS test does not count in the same way, simply informing the teacher assessment).
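On that description, the attainment element reduces to a simple calculation. A minimal sketch, assuming (as Annex C indicates) scaled scores of 100 or more in reading and maths plus teacher assessment of the expected standard in writing; all pupil data below is invented:

```python
# Sketch of the attainment element as described in Annex C. All pupil data
# below is invented for illustration.

def meets_attainment_floor(pupils, threshold=0.85, expected=100):
    """pupils: list of dicts with 'reading' and 'maths' scaled scores and a
    'writing_ta' flag for the teacher assessment of the expected standard."""
    passing = [p for p in pupils
               if p["reading"] >= expected
               and p["maths"] >= expected
               and p["writing_ta"]]
    return len(passing) / len(pupils) >= threshold

cohort = [
    {"reading": 104, "maths": 101, "writing_ta": True},
    {"reading": 99,  "maths": 103, "writing_ta": True},  # falls short in reading
    {"reading": 108, "maths": 112, "writing_ta": True},
]
print(meets_attainment_floor(cohort))  # 2 of 3 (67%) -> False
```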

As noted above, this is a far bigger ask than the current requirement for 65% of learners to meet the expected (and lower 4c) standard. The summary at the beginning of the response refers to it as ‘a challenging aspiration’:

‘Over time we expect more and more schools to achieve this standard.’

The statement in the first paragraph of this section of the response led us to believe that these two requirements – for progress and attainment respectively – would be combined, so that schools would be held to account for both (unless, presumably, they exercised their right to opt out of the Year R baseline assessment).

But this is not the case. Schools need only achieve one or the other.

It follows that schools with a very high performing intake may exceed the floor standards on the basis of all-round high attainment alone, regardless of the progress made by their learners.

The reason for this provision is unclear, though one suspects that schools with an extremely high attaining intake, whether at Reception or Year 3, will be harder pressed to achieve sufficient progress, presumably because some ceiling effects come into play at the end of KS2.

This in turn might suggest that the planned tests do not have sufficient headroom for the highest attainers, even though they are supposed to provide similar challenge to level 6 and potentially extend beyond it.

Meanwhile, schools with less than stellar attainment results will be obliged to follow the progress route to jump the floor standard. This too will be demanding because all three domains will be in play.

Some internal modelling will have been undertaken to judge how many schools would be likely to fall short of the floor standards under these arrangements, and it would be very useful to know these estimates, however unreliable they prove to be.

In their absence, one suspects that the majority of schools will be below the floor standards, at least initially. That of course materially changes the nature and purpose of the standards.

To Do List

The response and the draft specifications together contain a long list of work to be carried out over the next two years or so. I have included below my best guess as to the latest possible date for each decision to be completed and communicated:

  • Decide how progress will be measured for infant and first schools between the Year R baseline and the end of KS1 (April 2014)
  • Make available to schools a ‘small number’ of sample test questions for each key stage and subject (Summer 2014)
  • Work with experts to establish the criteria for the Year R baseline (September 2014)
  • KS1 [and KS2?] teacher assessment performance descriptors to be drafted by an expert group (September 2014)
  • Complete and report outcomes of a study with schools that already use Year R baseline assessments (December 2014)
  • Decide how Year R baseline assessments will be moderated (December 2014)
  • Publish a list of assessments that meet the Year R baseline criteria (March 2015)
  • Decide how Year R baseline results will be communicated to parents and to Ofsted (March 2015)
  • Make available to schools a full set of sample materials including tests and mark schemes for all KS1 and KS2 tests (September 2015)
  • Complete work with Ofsted and teachers to improve KS1 moderation (September 2015)
  • Provide further information to enable teachers to assess pupils at the end of KS1 and KS2 who are ‘working above the P-scales but below the level of the test’ (September 2015)
  • Decide whether to move to external moderation of P-scale teacher assessment (September 2015)
  • Agree with stakeholders how to compare schools’ performance on a suite of assessment outcomes published in a standard format (September 2015)
  • Publish all final test frameworks (Autumn 2015)
  • Introduce new requirements for schools to publish a suite of assessment outcomes in a standard format (Spring 2016)
  • Panels of teachers use performance descriptors to set the standards on the new tests following their first administration in May 2016 (Summer 2016)
  • Define what counts as sufficient progress from the Year R baseline to end KS1 and end KS2 respectively (Summer 2016)

Conclusion

Overall the response is rather more cogent and coherent than the original consultation document, though there are several inconsistencies and many sins of omission.

Drawing together the key issues emerging from the commentary above, I would highlight twelve key points:

  • The declared aims express the policy direction clumsily and without conviction. The ultimate aspirations are universal ‘secondary readiness’ (though expressed in broader terms), ‘no child left behind’ and ‘every child fulfilling their potential’ but there is no real effort to reconcile these potentially conflicting notions into a consensual vision of what primary education is for. Moreover, an inconvenient truth lurks behind these statements. By raising expectations so significantly – 4b equivalent rather than 4c; 85% over the attainment threshold rather than 65%; ‘sufficient progress’ rather than median progress and across three domains rather than one – there will be much more failure in the short to medium term. More learners will fall behind and fall short of the thresholds; many more schools are likely to undershoot the floor standards. It may also prove harder for some learners to demonstrate their potential. It might have been better to acknowledge this reality and to frame the vision in terms of creating the conditions necessary for subsequent progress towards the ultimate aspirations.
  • Younger children are increasingly caught in the crossbeam from the twin searchlights of assessment and accountability. HMCI’s subsequent intervention has raised the stakes still further. This creates obvious tensions in the sector which can be traced back to disagreements over the respective purposes of early years and primary provision and how they relate to each other. (HMCI’s notion of ‘school readiness’ is no doubt as narrow to early years practitioners as ‘secondary readiness’ is to primary educators.) But this is not just a theoretical point. Additional demands for focused inspection, moderation and publication of outcomes all carry a significant price tag. It must be open to question whether the sheer weight of assessment activity is optimal and delivers value for money. Should a radical future Government – probably with a cost-cutting remit – have rationalisation in mind?
  • Giving schools the freedom to choose from a range of Year R baseline assessment tools also seems inherently inefficient and flies in the face of the clear majority of consultation responses. We are told nothing of the perceived quality of existing services, none of which can – by definition – satisfy these new expectations without significant adjustment. It will not be straightforward to construct a universal and child-friendly instrument that is a sufficiently strong predictor of Level 4b-equivalent performance in KS2 reading, writing and maths assessments undertaken seven years later. Moreover, there will be a strong temptation for the Government to pitch the baseline higher than current expectations, so matching the  realignment at the other end of the process. Making the Reception baseline assessment optional – albeit with strings attached – seems rather half-hearted, almost an insurance against failure. Effective (and expensive) moderation may protect against widespread gaming, but the risk remains that Reception teachers will be even more predisposed to prioritise universal school readiness over stretching their more precocious four year-olds.
  • The task of designing an effective test for all levels of prior attainment at the end of key stage 2 is equally fraught with difficulty. The P-scales will be retained (in their existing format, unaligned with the revised national curriculum) for learners with special needs working below the equivalent of what is currently level 1. There will also be undefined provision ‘for those working above the level of the P-scales but below the level of the test’, even though the draft test development frameworks say:

‘All eligible children who are registered at maintained schools, special schools, or academies (including free schools) in England and are at the end of key stage 2 will be required to take the…test, unless they have taken it in the past.’

And this applies to all learners other than those in the exempted categories set out in the ARA booklets. The draft specifications add that test questions will be placed in order of difficulty. I struggle to understand how such assessments can be optimal for high attainers and fear that this is bad assessment practice.

  • On top of this there is the worrying statement in the test development frameworks that scaled scores will be ‘truncated’ at the extremes of the distribution. This does not fill one with confidence that the highest and lowest attainers will have their test performance properly recognised and reported.
  • The necessary invention of ‘performance descriptors’ removes any lingering illusion that academies and free schools have significant freedom to depart from the national curriculum, at least as far as the core subjects are concerned. It is hard to understand why these descriptors could not have been published alongside the programmes of study within the national curriculum.
  • The ‘performance descriptors’ in the draft test specifications carry all sorts of health warnings that they are inappropriate for teacher assessment because they cover only material that can be assessed in a written test. But there will be significant overlap between the test and teacher assessment versions, particularly in those that describe threshold performance at the equivalent of level 4b. For we know now that there will also be hierarchies of performance descriptors – aka level descriptors – for KS1 teacher assessment in reading, writing, speaking and listening and maths, as well as for KS2 teacher assessment in writing. Levels were so problematic that it has been necessary to reinvent them!
  • What with scaled scores, average scaled scores, threshold performance descriptors and ‘levelled’ performance descriptors, schools face an uphill battle in convincing parents that the reporting of test outcomes under this system will be simpler and more understandable. At the end of KS2 parents will receive 16 different assessments in four different formats. (Remember that parents will also need to cope with schools’ approaches to internal assessment, which may or may not align with these arrangements.)
  • We are told about new requirements to be placed on schools to publish assessment outcomes, but the description is infuriatingly vague. We do not know whether certain requirements apply to both KS1 and 2, and/or to both tests and teacher assessment. The reference to ‘the percentage of pupils who achieve a high score in all areas at the end of key stage 2’ is additionally vague because it is unclear whether it applies to performance in each assessment, or across all assessments combined. Nor is the pitch of the high score explained. This is the only reference to high attainers in the entire response and it raises more questions than it answers.
  • We also have negligible information about what will appear in the school performance tables and what will be relegated to the accompanying data portal. We know there is an intention to compare schools’ performance on the measures they are required to publish and that is all. Much of the further detail in the original consultation document may or may not have fallen by the wayside.
  • The new floor standards have all the characteristics of a last-minute compromise hastily stitched together. The consultation document was explicit that floor standards would:

‘…focus on threshold attainment measures and value-added progress measures’

It anticipated that the progress measure would require average scaled scores of between 98.5 and 99.0, adding:

‘Our modelling suggests that a progress measure set at this level, combined with the 85% threshold attainment measure, would result in a similar number of schools falling below the floor as at present.’

But the analysis of responses fails to report at all on the question ‘Do you have any comments about these proposals for the Department’s floor standards?’ It does include the response to a subsequent question about including an average point score attainment measure in the floor standards (39% of respondents in favour, 31% against), but the main text does not discuss this option at all. It begins by stating that both an attainment and a progress dimension are in play, but then describes a system in which schools can choose one or the other. There is no attempt to quantify ‘sufficient progress’ and no revised modelling of the impact of standards set at this level. We are left with the suspicion that a very significant proportion of schools will not exceed the floor. There is also a potential perverse incentive for schools with very high attaining intakes not to bother about progress at all.

  • Finally, the ‘to do’ list is substantial. Several of the items with the tightest deadlines ought really to have been completed ahead of the consultation response, especially given the significant delay. There is nothing about the interaction between this work programme and that proposed by NAHT’s Commission on Assessment. Much of this work would need to take place on the other side of a General Election, while the lead time for assessing KS2 progress against a Year R baseline is a full nine years. This makes the project as a whole particularly vulnerable to the whims of future governments.

I’m struggling to find the right description for the overall package. I don’t think it’s quite substantial or messy enough to count as a dog’s breakfast. But, like a poorly airbrushed portrait, it flatters to deceive. Seen from a distance it appears convincing but, on closer inspection, there are too many wrinkles that have not been properly smoothed out.

GP

April 2014

 

 

Challenging NAHT’s Commission on Assessment

.

This post reviews the Report of the NAHT’s National Commission on Assessment, published on 13 February 2014.

Since I previously subjected the Government’s consultation document on primary assessment and accountability to a forensic examination, I thought it only fair that I should apply the same high standards to this document.

I conclude that the Report is broadly helpful, but there are several internal inconsistencies and a few serious flaws.

Impatient readers may wish to skip the detailed analysis and jump straight to the summary at the end of the post which sets out my reservations in the form of 23 recommendations addressed to the Commission and the NAHT.

.

Other perspectives

Immediate reaction to the Report was almost entirely positive.

The TES included a brief Ministerial statement in its coverage, attributed to Michael Gove:

‘The NAHT’s report gives practical, helpful ideas to schools preparing for the removal of levels. It also encourages them to make the most of the freedom they now have to develop innovative approaches to assessment that meet the needs of pupils and give far more useful information to parents.’

ASCL and ATL both welcomed the Report, as did the National Governors’ Association, though there was no substantive comment from NASUWT or NUT.

The Blogosphere exhibited relatively little interest, although a smattering of posts began to expose some issues:

  • LKMco supported the key recommendations, but wondered whether the Commission might not be guilty of reinventing National Curriculum levels;
  • Mr Thomas Maths was more critical, identifying three key shortcomings, one being the proposed approach to differentiation within assessment;
  • Warwick Mansell, probably because he blogs for NAHT, confined himself largely to summarising the Report, which he found ‘impressive’, though he did raise two key points – the cost of implementing these proposals and how the recommendations relate to the as yet uncertain position of teacher assessment in the Government’s primary assessment and accountability reforms.

All of these points – and others – are fleshed out in the critique below.

.

Background

.

Remit, Membership and Evidence Base

The Commission was first announced in July 2013, when it was described as:

‘a commission of practitioners to shape the future of assessment in a system without levels.’

By September, Lord Sutherland had agreed to Chair the body and its broad remit had been established:

‘To:

  • establish a set of principles to underpin national approaches to assessment and create consistency;
  • identify and highlight examples of good practice; and
  • build confidence in the assessment system by securing the trust and support of officials and inspectors.’

Written evidence was requested by 16 October.

The first meeting took place on 21 October and five more were scheduled before the end of November.

Members’ names were not included at this stage (beyond the fact that NAHT’s President – a Staffordshire primary head – was involved) though membership was now described as ‘drawn from across education’.

Several members had in fact been named in an early October blog post from NAHT and a November press release from the Chartered Institute of Educational Assessors (CIEA) named all but one – NAHT’s Director of Education. This list was confirmed in the published Report.

The Commission had 14 members but only six of them – four primary heads, one primary deputy and one secondary deputy – could be described as practitioners.

The others included two NAHT officials in addition to the secretariat, one being General Secretary Russell Hobby, and one from ASCL; John Dunford, a consultant with several other strings to his bow, one of them being Chairmanship of the CIEA; Gordon Stobart, an academic specialist in assessment with a long pedigree in the field; Hilary Emery, the outgoing Chief Executive of the National Children’s Bureau; and Sam Freedman of Teach First.

There were also unnamed observers from DfE, Ofqual and Ofsted.

The Report says the Commission took oral evidence from a wide range of sources. A list of 25 sources is provided but it does not indicate how much of their evidence was written and how much oral.

Three of these sources are bodies represented on the Commission, two of them schools. Overall seven are from schools. One source is Tim Oates, the former Chair of the National Curriculum Review Expert Panel.

The written evidence is not published and I could find only a handful of responses online.

Overall one has to say that the response to the call for evidence was rather limited. Nevertheless, it would be helpful for NAHT to publish all the evidence it received, and it might also consult formally on key provisions in its Report.

 .

Structure of the Report and Further Stages Proposed

The main body of the Report is sandwiched between a foreword by the Chair and a series of Annexes containing case studies and historical and international background. This analysis concentrates almost entirely on the main body.

The 21 Recommendations are presented twice, first as a list within the Executive Summary and subsequently interspersed within a thematic commentary that summarises the evidence received and also conveys the Commission’s views.

The Executive Summary also sets out a series of Underpinning Principles for Assessment and a Design Checklist for assessment in schools, the latter accompanied by a set of five explanatory notes.

It offers a slightly different version of the Commission’s Remit:

‘In carrying out its task, the Commission was asked to achieve three distinct elements:

  • A set of agreed principles for good assessment
  • Examples of current best practice in assessment that meet these principles
  • Buy-in to the principles by those who hold schools to account.’

These are markedly less ambitious than their predecessors, having dropped the reference to ‘national approaches’ and any aspiration to secure support from officials and inspectors for anything beyond the Principles.

Significantly, the Report is presented as only the first stage in a longer process, an urgent response to schools’ need for guidance in the short term.

It recommends that further work should comprise:

  • ‘A set of model assessment criteria based on the new National Curriculum.’ (NAHT is called upon to develop and promote these. The text says that a model document is being commissioned but doesn’t reveal the timescale or who is preparing it);
  • ‘A full model assessment policy and procedures, backed by appropriate professional development’ that would expand upon the Principles and Design Checklist. (NAHT is called upon to take the lead in this, but there is no indication that they plan to do so. No timescale is attached);
  • ‘A system-wide review of assessment’ covering ages 2-19. It is not explicitly stated, but one assumes that this recommendation is directed towards the Government. Again no timescale is attached.

The analysis below looks first at the assessment Principles, then the Design Checklist and finally the recommendations plus associated commentary. It concludes with an overall assessment of the Report as a whole.

.

Assessment Principles

As noted above, it seems that national level commitment is only sought in respect of these Principles, but there is no indication in the Report – or elsewhere for that matter – that DfE, Ofsted and Ofqual have indeed signed up to them.

Certainly the Ministerial statement quoted above stops well short of doing so.

The consultation document on primary assessment and accountability also sought comments on a set of core principles to underpin schools’ curriculum and assessment frameworks. It remains to be seen whether the version set out in the consultation response will match those advanced by the Commission.

The Report recommends that schools should review their own assessment practice against the Principles and Checklist together, and that all schools should have their own clear assessment principles, presumably derived or adjusted in the light of this process.

Many of the principles are unexceptionable, but there are a few interesting features that are directly relevant to the commentary below.

For it is of course critical to the internal coherence of the Report that the Design Checklist and recommendations are entirely consistent with these Principles.

I want to highlight three in particular:

  • ‘Assessment is inclusive of all abilities…Assessment embodies, through objective criteria, a pathway of progress and development for every child…Assessment objectives set high expectations for learners’.

One assumes that ‘abilities’ is intended to stand proxy for both attainment and potential, so that there should be ‘high expectations’ and a ‘pathway of progress and development’ for the lowest and highest attainers alike.

  • ‘Assessment places achievement in context against nationally standardised criteria and expected standards’.

This begs the question whether the ‘model document’ containing assessment criteria commissioned by NAHT will be ‘nationally standardised’ and, if so, what standardisation process will be applied.

  • ‘Assessment is consistent…The results are readily understandable by third parties…A school’s results are capable of comparison with other schools, both locally and nationally’.

The implication behind these statements must be that results of assessment in each school are transparent and comparable through the accountability regime, presumably by means of the performance tables (and the data portal that we expect to be introduced to support them).

This cannot be taken as confined to statutory tests, since the text later points out that:

‘The remit did not extend to KS2 tests, floor standards and other related issues of formal accountability.’

It isn’t clear, from the Principles at least, whether the Commission believes that teacher assessment outcomes should also be comparable. Here, as elsewhere, the Report does a poor job of distinguishing between statutory teacher assessment and assessment internal to the school.

.

Design Checklist

 

Approach to Assessment and Use of Assessment

The Design Checklist is described as:

‘an evaluation checklist for schools seeking to develop or acquire an assessment system. They could also form the seed of a revised assessment policy.’

It is addressed explicitly to schools and comprises three sections covering, respectively, a school’s approach to assessment, method of assessment and use of assessment.

The middle section is by far the most significant and also the most complex, requiring five explanatory notes.

I deal with the more straightforward first and third sections before turning to the second.

‘Our approach to assessment’ simply makes the point that assessment is integral to teaching and learning, while also setting expectations for regular, universal professional development and ‘a senior leader who is responsible for assessment’.

It is not clear whether this individual is the same as, or additional to, the ‘trained assessment lead’ mentioned in the Report’s recommendations.

I can find no justification in the Report for the requirement that this person must be a senior leader.

A more flexible approach would be preferable, in which the functions to be undertaken are outlined and schools are given flexibility over how those are distributed between staff. There is more on this below.

The final section ‘Our use of assessment’ refers to staff:

  • Summarising and analysing attainment and progress;
  • Planning pupils’ learning to ensure every pupil meets or exceeds expectations (Either this is a counsel of perfection, or expectations for some learners are pitched below the level required to satisfy the assessment criteria for the subject and year in question. The latter is much more likely, but this is confusing since satisfying the assessment criteria is also described in the Checklist in terms of ‘meeting…expectations’.)
  • Analysing data across the school to ensure all pupils are stretched while the vulnerable and those at risk make appropriate progress (‘appropriate’ is not defined within the Checklist itself but an explanatory note appended to the central section  – see below – glosses this phrase);
  • Communicating assessment information each term to pupils and parents through ‘a structured conversation’ and the provision of ‘rich, qualitative profiles of what has been achieved and indications of what they [ie parents as well as pupils] need to do next’; and
  • Celebrating a broad range of achievements, extending across the full school curriculum and encompassing social, emotional and behavioural development.

.

Method of Assessment: Purposes

‘Our method of assessment’ is by far the longest section, containing 11 separate bullet points. It could be further subdivided for clarity’s sake.

The first three bullets are devoted principally to some purposes of assessment. Some of this material might be placed more logically in the ‘Our Use of Assessment’ section, so that the central section is shortened and restricted to methodology.

The main purpose is stipulated as ‘to help teachers, parents and pupils plan their next steps in learning’.

So the phrasing suggests that assessment should help to drive forward the learning of parents and teachers, as well as the learning of pupils. I’m not sure whether this is deliberate or accidental.

Two subsidiary purposes are mentioned: providing a check on teaching standards and support for their improvement; and providing a comparator with other schools via collaboration and the use of ‘external tests and assessments’.

It is not clear why these three purposes are singled out. There is some overlap with the Principles but also a degree of inconsistency between the two pieces of documentation. It might have been better to cross-reference them more carefully.

In short, the internal logic of the Checklist and its relationship with the Principles could both do with some attention.

The real meat of the section is incorporated in the eight remaining bullet points. The first four are about what pupils are assessed against and when that assessment takes place. The last four explain how assessment judgements are differentiated, evidenced and moderated.

.

Method of Assessment: What Learners Are Assessed Against – and When

The next four bullets specify that learners are to be assessed against ‘assessment criteria which are short, discrete, qualitative and concrete descriptions of what a pupil is expected to know and be able to do.’

These are derived from the school curriculum, ‘which is composed of the National Curriculum and our own local design’. (Of course that is not strictly the position in academies, as another section of the Report subsequently points out.)

The criteria ‘for periodic assessment are arranged into a hierarchy setting out what children are normally expected to have mastered by the end of each year’.

Each learner’s achievement ‘is assessed against all the relevant criteria at appropriate times of the school year’.

.

The Span of the Assessment Criteria

The first explanatory note (A) clarifies that the assessment criteria are ‘discrete, tangible descriptive statements of attainment’ derived from ‘the National Curriculum (and any school curricula)’.

There is no repetition of the provision in the Principles that they should be ‘nationally standardised’ but ‘there is little room for meaningful variety’, even though academies are not obliged to follow the National Curriculum and schools have complete flexibility over the remainder of the school curriculum.

The Recommendations have a different emphasis, saying that NAHT’s model criteria should be ‘based on the new National Curriculum’ (Recommendation 6), but the clear impression here is that they will encompass the National Curriculum ‘and any school curricula’ alike.

This inconsistency needs to be resolved. NAHT might be better off confining its model criteria to the National Curriculum only – and making it clear that even these may not be relevant to academies.

.

The Hierarchy of Assessment Criteria

The second explanatory note (B) relates to the arrangement of the assessment criteria

‘…into a hierarchy, setting out what children are normally expected to have mastered by the end of each year’.

This note is rather muddled.

It begins by suggesting that a hierarchy divided chronologically by school year is the most natural choice, because:

‘The curriculum is usually organised into years and terms for planned delivery’

That may be true, but only the Programmes of Study for the three core subjects are organised by year, and each clearly states that:

‘Schools are…only required to teach the relevant programme of study by the end of the key stage. Within each key stage, schools therefore have the flexibility to introduce content earlier or later than set out in the programme of study. In addition, schools can introduce key stage content during an earlier key stage if appropriate.’

All schools – academies and non-academies alike – therefore enjoy considerable flexibility over the distribution of the Programmes of Study between academic years.

(Later in the Report – in the commentary preceding the first six recommendations – the text mistakenly suggests that the entirety of ‘the revised curriculum is presented in a model of year-by-year progress’ (page 14). It does not mention the provision above.)

The note goes on to suggest that the Commission has chosen a different route, not because of this flexibility, but because ‘children’s progress may not fit neatly into school years’:

‘…we have chosen the language of a hierarchy of expectations to avoid misunderstandings. Children may be working above or below their school year…’

But this is not an absolute hierarchy of expectations – in the sense that learners are free to progress entirely according to ability (or, more accurately, their prior attainment) rather than in age-related lock steps.

In a true hierarchy of expectations, learners would be able to progress as fast or as slowly as they are able to, within the boundaries set by:

  • On one hand, high expectations, commensurate challenge and progression;
  • On the other hand, protection against excessive pressure and hot-housing and a judicious blending of faster pace with more breadth and depth (of which more below).

This is no more than a hierarchy by school year with some limited flexibility at the margins.

.

The timing of assessment against the criteria

The third explanatory note (C) confirms the Commission’s assumption that formal assessments will be conducted at least termly – and possibly more frequently than that.

It adds:

‘It will take time before schools develop a sense of how many criteria from each year’s expectations are normally met in the autumn, spring and summer terms, and this will also vary by subject’.

This is again unclear. It could mean that a future aspiration is to judge progress termly, by breaking down the assessment criteria still further – so that a learner who met the assessment criteria for, say, the autumn term is deemed to be meeting the criteria for the year as a whole at that point.

Without this additional layer of lock-stepping, presumably the default position for the assessments conducted in the autumn and spring terms is that learners will still be working towards the assessment criteria for the year in question.

The note also mentions in passing that:

‘For some years to come, it will be hard to make predictions from outcomes of these assessments to the results in KS2 tests. Such data may emerge over time, although there are question marks over how reliable predictions may be if schools are using incompatible approaches and applying differing standards of performance and therefore cannot pool data to form large samples.’

This is one of very few places where the Report picks up on the problems that are likely to emerge from the dissonance between internal and external statutory assessment.

But it avoids the central issue: the approach to internal assessment it advocates may not be entirely compatible with predicting future achievement in the KS2 tests. If so, its value is seriously diminished for parents and teachers, let alone the learners themselves. This issue also reappears below.

.

Method of Assessment: How Assessment Judgements are Differentiated, Evidenced and Moderated

The four final bullet points in this section of the Design Checklist explain that all learners will be assessed as either ‘developing’, ‘meeting’ or ‘exceeding’ each relevant criterion for that year.

Learners deemed to be exceeding the relevant criteria in a subject for a given year ‘will also be assessed against the criteria in that subject for the next year.’
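On one reading of that rule – that a learner must exceed every relevant criterion in the subject before being assessed against the following year’s – a minimal sketch looks like this, with all criteria and judgements invented for illustration:

```python
# Illustrative only: criteria names and judgements are invented, and the
# escalation rule is one possible reading of the Design Checklist.

def assessment_scope(judgements, year):
    """judgements: criterion -> 'developing' | 'meeting' | 'exceeding' for
    the learner's chronological year in one subject. Returns the year(s)
    whose criteria the learner should be assessed against."""
    if judgements and all(j == "exceeding" for j in judgements.values()):
        return [year, year + 1]  # also assessed against next year's criteria
    return [year]

year4_maths = {
    "place value in four-digit numbers": "exceeding",
    "formal written multiplication": "exceeding",
}
print(assessment_scope(year4_maths, 4))  # -> [4, 5]
```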

Assessment judgements are supported by evidence comprising observations, records of work and test outcomes and are subject to moderation by teachers in the same school and in other schools to ensure they are fair, reliable and valid.

I will set moderation to one side until later in the post, since that too lies outside the scope of methodology.

.

Differentiation against the hierarchy of assessment criteria

The fourth explanatory note (D) addresses the vexed question of differentiation.

As readers may recall, the Report by the National Curriculum Review Expert Panel failed abjectly to explain how stretch and challenge would be provided in a system that focused exclusively on universal mastery and ‘readiness to progress’, saying only that further work was required to address the issue.

Paragraph 8.21 implied that they favoured what might be termed an ‘enrichment and extension’ model:

‘There are issues regarding ‘stretch and challenge’ for those pupils who, for a particular body of content, grasp material more swiftly than others. There are different responses to this in different national settings, but frequently there is a focus on additional activities that allow greater application and practice, additional topic study within the same area of content, and engagement in demonstration and discussion with others…These systems achieve comparatively low spread at the end of primary education, a factor vital in a high proportion of pupils being well positioned to make good use of more intensive subject-based provision in secondary schooling.’

Meanwhile, something akin to the P Scales might come into play for those children with learning difficulties.

On this latter point, the primary assessment and accountability consultation document said DfE would:

‘…explore whether P-scales should be reviewed so that they align with the revised national curriculum and provide a clear route to progress to higher attainment.’

We do not yet know whether this will happen, but Explanatory Note B to the Design Checklist conveys the clear message that the P-Scales need to be retained:

‘…must ensure we value the progress of children with special needs as much as any other group. The use of P scales here is important to ensure appropriate challenge and progression for pupils with SEN.’

By contrast, for high attainers, the Commission favours what might be called a ‘mildly accelerative’ model whereby learners who ‘exceed’ the assessment criteria applying to a subject for their year group may be given work that enables them to demonstrate progress against the criteria for the year above.

I describe it as mildly accelerative because there is no provision for learners to be assessed more than one year ahead of their chronological year group. This is a fairly low ceiling to impose on such accelerative progress.

It is also unclear whether the NAHT’s model assessment criteria will cover Year 7, the first year of the KS3 Programmes of Study, to enable this provision to extend into Year 6.

The optimal approach for high attainers would combine the ‘enrichment and extension’ approach apparently favoured by the Expert Panel with an accelerative approach that provides a higher ceiling, to accommodate those learners furthest ahead of their peers.

High attaining learners could then access a customised blend of enrichment (more breadth), extension (greater depth) and acceleration (faster pace) according to their needs.

This is good curricular practice and it should be reflected in assessment practice too, otherwise the risk is that a mildly accelerative assessment process will have an undesirable wash-back effect on teaching and learning.

Elsewhere, the Report advocates the important principle that curriculum, assessment and pedagogy should be developed in parallel, otherwise there is a risk that one – typically assessment – has an undesirable effect on the others. This would be an excellent exemplar of that statement.

The judgement whether a learner is exceeding the assessment criteria for their chronological year would be evidenced by enrichment and extension activity as well as by pre-empting the assessment criteria for the year ahead. Exceeding the criteria in terms of greater breadth or more depth should be equally valued.

This more rounded approach, incorporating a higher ceiling, should also be supported by the addition of a fourth ‘far exceeded’ judgement, otherwise the ‘exceeded’ judgement has to cover far too wide a span of attainment, from those who are marginally beyond their peers to those who are streets ahead.

These concerns urgently need to be addressed before NAHT gets much further with its model criteria.

.

The aggregation of criteria

In order to make the overall judgement for each subject, learners’ performance against individual assessment criteria has to be combined to give an aggregate measure.

The note says:

‘The criteria themselves can be combined to provide the qualitative statement of a pupil’s achievements, although teachers and schools may need a quantitative summary. Few schools appear to favour a pure binary approach of yes/no. The most popular choice seems to be a three phase judgement of working towards (or emerging, developing), meeting (or mastered, confident, secure, expected) and exceeded. Where a student has exceeded a criterion, it may make sense to assess them also against the criteria for the next year.’

This, too, raises some questions. The statement above is consistent with one of the Report’s central recommendations:

‘Pupil progress and achievement should be communicated in terms of descriptive profiles rather than condensed to numerical summaries (although schools may wish to use numerical data for internal purposes).’

Frankly it seems unlikely that such ‘condensed numerical summaries’ can be kept hidden from parents. Indeed, one might argue that they have a reasonable right to know them.

These aggregations – whether qualitative or quantitative – will be differentiated at three levels, according to whether the learner best fits a ‘working towards’, ‘meeting’ or ‘exceeding’ judgement for the criteria relating to the appropriate year in each programme of study.

I have just recommended that there needs to be an additional level at the top end, to remove undesirable ceiling effects that lower expectations and are inconsistent with the Principles set out in the Report. I leave it to others to judge whether, if this were accepted, a fifth level would also be required at the lower end to preserve the symmetry of the scale.

There is also a ‘chicken and egg’ issue here. It is not clear whether a learner must already be meeting some of the criteria for the succeeding year in order to show they are exceeding the criteria for their own year – or whether assessment against the criteria for the succeeding year is one potential consequence of a judgement that they are exceeding the criteria for their own year.

This confusion is reinforced by a difference of emphasis between the checklist – which says clearly that learners will be assessed against the criteria for the succeeding year if they exceeded the criteria for their own – and the explanatory note, which says only that this may happen.

Moreover, the note suggests that this applies criterion by criterion – ‘where a student has exceeded a criterion’ – rather than after the criteria have been aggregated, which is the logical assumption from the wording in the checklist – ‘exceeded the relevant criteria’.

This too needs clarifying.
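
To make the two readings concrete, here is a minimal sketch in code. The function name, the best-fit aggregation rule and the data are all my own illustrative assumptions – the Report specifies no aggregation rule at all:

```python
# Contrasting the two possible readings of 'exceeding'. Everything here is
# an illustrative assumption; the Report does not define an aggregation rule.

def best_fit(criterion_results):
    """Aggregate per-criterion judgements into one overall judgement by
    taking the most common outcome (one possible rule among many)."""
    return max(set(criterion_results), key=criterion_results.count)

results = ["meeting", "exceeding", "exceeding", "meeting", "exceeding"]

# Reading 1 (explanatory note): next-year assessment is triggered criterion
# by criterion, wherever an individual criterion is judged 'exceeding'.
next_year_criteria = [i for i, r in enumerate(results) if r == "exceeding"]
print(next_year_criteria)                 # [1, 2, 4]: those criteria only

# Reading 2 (checklist): aggregate first; assess against the whole of next
# year's criteria only if the overall best-fit judgement is 'exceeding'.
print(best_fit(results) == "exceeding")   # True: all next-year criteria apply
```

The two readings produce different workloads for teachers and different experiences for learners, which is why the distinction matters.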

.

.

Recommendations and Commentary

I will try not to repeat in this section material already covered above.

I found that the recommendations did not always sit logically with the preceding commentary, so I have departed from the subsections used in the Report, grouping the material into four broad sections: further methodological issues; in-school and school-to-school support; national support; and phased implementation.

Each section leads with the relevant Recommendations and folds in additional relevant material from different sections of the commentary. I have repeated recommendations where they are relevant to more than one section.

.

Further methodological issues

Recommendation 4: Pupils should be assessed against objective criteria rather than ranked against each other

Recommendation 5: Pupil progress and achievements should be communicated in terms of descriptive profiles rather than condensed to numerical summaries (although schools may wish to use numerical data for internal purposes).

Recommendation 6: In respect of the National Curriculum, we believe it is valuable – to aid communication and comparison – for schools to be using consistent criteria for assessment. To this end, we call upon NAHT to develop and promote a set of model assessment criteria based on the new National Curriculum.

The commentary discusses the evolution of National Curriculum levels, including the use of sub-levels and their application to progress as well as achievement. In doing so, it summarises the arguments for and against the retention of levels.

In favour of retention:

  • The system of levels provides a common language used by schools to summarise attainment and progress;
  • It is argued (by some professionals) that parents have grown up with levels and have an adequate grasp of what they mean;
  • The numerical basis of levels was useful to schools in analysing and tracking the performance of large numbers of pupils;
  • The decision to remove levels was unexpected and caused concern within the profession, especially as it was also announced that being ‘secondary ready’ was to be associated with the achievement of Level 4B;
  • If levels are removed, they must be replaced by a different common language, or at least ‘an element of compatibility or common understanding’ should several different assessment systems emerge.

In favour of removal:

  • It is argued (by the Government) that levels are not understood by parents and other stakeholders;
  • The numerical basis of levels does not have the richness of a more rounded description of achievement. The important narrative behind the headline number is often lost through over-simplification.
  • There are adverse effects from labelling learners with levels.

The Commission is also clear that the Government places too great a reliance on tests, particularly for accountability purposes. This has narrowed the curriculum and resulted in ‘teaching to the test’.

It also creates other perverse incentives, including the inflation of assessment outcomes for performance management purposes or, conversely, the deflation of assessment outcomes to increase the rate of progress during the subsequent key stage.

Moreover, curriculum, assessment and pedagogy must be mutually supportive. Even if the Government has not allowed the assessment tail to wag the curricular dog, the Report insists that:

‘…curriculum and assessment should be developed in tandem.’

Self-evidently, this has not happened, since the National Curriculum was finalised way ahead of the associated assessment arrangements which, in the primary sector, are still unconfirmed.

There is a strong argument that such assessment criteria should have been developed by the Government and made integral to the National Curriculum.

Indeed, in Chapter 7 of its Report on ‘The Framework for the National Curriculum’, the National Curriculum Expert Panel proposed that attainment targets should be retained, not in the form of level descriptors but as ‘statements of specific learning outcomes related to essential knowledge’ that would be ‘both detailed and precise’. They might be presented alongside the Programmes of Study.

The Government ignored this, opting for a very broad single, standard attainment target in each programme of study:

‘By the end of each key stage, pupils are expected to know, apply and understand the matters, skills and processes specified in the relevant programme of study.’

As I pointed out in a previous post, one particularly glaring omission from the Consultation Document on Primary Assessment and Accountability was any explanation of how Key Stage Two tests and statutory teacher assessments would be developed from these singleton ‘lowest common denominator’ attainment targets, especially in a context where academies, while not obliged to follow the National Curriculum, would undertake the associated tests.

We must await the long-delayed response to the consultation to see if it throws any light on this matter.

Will it commit the Government to producing a framework, at least for statutory tests in the core subjects, or will it throw its weight behind the NAHT’s model criteria instead?

I have summarised this section of the Report in some detail as it is the nearest it gets to providing a rational justification for the approach set out in the recommendations above.

The model criteria appear confined to the National Curriculum at this point, though we have already noted that is not the case elsewhere in the Report.

I have also discussed briefly the inconsistency in permitting the translation of descriptive profiles into numerical data ‘for internal purposes’, but undertook to develop that further, for there is a wider case that the Report does not entertain.

We know that there will be scores attached to KS2 tests, since those are needed to inform parents and for accountability purposes.

The Primary Assessment and Accountability consultation document proposed a tripartite approach:

  • Scaled scores to show attainment, built around a new ‘secondary-ready’ standard, broadly comparable with the current Level 4B;
  • Allocation to a decile within the range of scaled scores achieved nationally, showing attainment compared with one’s peers; and
  • Comparison with the average scaled score of those nationally with the same prior attainment at the baseline, to show relative progress.

Crudely speaking, the first of these measures is criterion-referenced while the second and third are norm-referenced.
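
To illustrate the distinction, here is a minimal sketch. It assumes deciles would be simple percentile-rank bands, that relative progress is the gap between a pupil’s scaled score and the national average for pupils with the same baseline, and a secondary-ready standard of 100 – all assumptions of mine, since the consultation document confirms none of these details:

```python
import bisect

SECONDARY_READY = 100  # hypothetical scaled-score standard (cf. Level 4B)

def decile(score, national_scores):
    """Decile 1 (lowest) to 10 (highest), by rank within the national cohort
    -- an assumed rule; the consultation document does not define one."""
    rank = bisect.bisect_left(sorted(national_scores), score)
    return min(10, rank * 10 // len(national_scores) + 1)

def relative_progress(score, avg_same_baseline):
    """Gap between a pupil's score and the national average score of pupils
    with the same prior attainment at the baseline."""
    return score - avg_same_baseline

national = list(range(80, 130))        # toy national distribution of scores
print(110 >= SECONDARY_READY)          # attainment: criterion-referenced
print(decile(110, national))           # 7 -- norm-referenced
print(relative_progress(110, 104.0))   # +6.0 -- norm-referenced
```

The first measure compares the pupil with a fixed standard; the other two can only be calculated once the whole national distribution is known, which is the crux of the criterion- versus norm-referencing distinction.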

We do not yet know whether these proposals will proceed – there has been some suggestion that deciles at least will be dropped – but parents will undoubtedly want schools to be able to tell them what scaled scores their children are on target to achieve, and how those compare with the average for those with similar prior attainment.

It will be exceptionally difficult for schools to convey that information within the descriptive profiles, insofar as they relate to English and maths, without adopting the same numerical measures.

It might be more helpful to schools if the NAHT’s recommendations recognised that fact. For the brutal truth is that, if schools’ internal assessment processes do not respond to this need, they will have to set up parallel processes that do so.

In order to derive descriptive profiles, there must be objective assessment criteria that supply the building blocks, hence the first part of Recommendation 4. But I can find nothing in the Report that explains explicitly why pupils cannot also be ranked against each other. This can only be a veiled and unsubstantiated objection to deciles.

Of course it would be quite possible to rank pupils at school level and, in effect, that is what schools will do when they condense the descriptive profiles into numerical summaries.

The real position here is that such rankings would exist, but would not be communicated to parents, for fear of ‘labelling’. But the labelling has already occurred, so the resistance is attributable solely to communicating these numerical outcomes to parents. That is not a sustainable position.

.

In-school and school-to-school support

Recommendation 1: Schools should review their assessment practice against the principles and checklist set out in this report. Staff should be involved in the evaluation of existing practice and the development of a new, rigorous assessment system and procedures to enable the school to promote high quality teaching and learning.

Recommendation 2: All schools should have clear assessment principles and practices to which all staff are committed and which are implemented. These principles should be supported by school governors and accessible to parents, other stakeholders and the wider school community.

Recommendation 3: Assessment should be part of all school development plans and should be reviewed regularly. This review process should involve every school identifying its own learning and development needs for assessment. Schools should allocate specific time and resources for professional development in this area and should monitor how the identified needs are being met.

Recommendation 7 (part): Schools should work in collaboration, for example in clusters, to ensure a consistent approach to assessment. Furthermore, excellent practice in assessment should be identified and publicised…

Recommendation 9: Schools should identify a trained assessment lead, who will work with other local leads and nationally accredited assessment experts on moderation activities.

Recommendation 16: All those responsible for children’s learning should undertake rigorous training in formative, diagnostic and summative assessment, which covers how assessment can be used to support teaching and learning for all pupils, including those with special educational needs. The government should provide support and resources for accredited training for school assessment leads and schools should make assessment training a priority.

Recommendation 20: Schools should be asked to publish their principles of assessment from September 2014, rather than being required to publish a detailed assessment framework, which instead should be published by 2016. The development of the full framework should be outlined in the school development plan with appropriate milestones that allow the school sufficient time to develop an effective model.

All these recommendations are perfectly reasonable in themselves, but it is worth reflecting for a while on the likely cost and workload implications, particularly for smaller primary schools:

Each school must have a ‘trained assessment lead’ who may or may not be the same as the ‘senior leader who is responsible for assessment’ mentioned in the Design Checklist. There is no list of responsibilities for that person, but it would presumably include:

  • Leading the review of assessment practice and developing a new assessment system;
  • Leading the definition of the school’s assessment principles and practices and communicating these to governors, parents, stakeholders and the wider community;
  • Lead responsibility for the coverage of assessment within the school’s development plan and the regular review of that coverage;
  • Leading the identification and monitoring of the school’s learning and development needs for assessment;
  • Ensuring that all staff receive appropriate professional development – including ‘rigorous training in formative, diagnostic and summative assessment’;
  • Leading the provision of in-school and school-to-school professional development relating to assessment;
  • Allocating time and resources for all assessment-related professional development and monitoring its impact;
  • Leading collaborative work with other schools to ensure a consistent approach to assessment;
  • Dissemination of effective practice;
  • Working with other local assessment leads and external assessment experts on moderation activities.

And, on top of this, there is a range of unspecified additional responsibilities associated with the statutory tests.

It is highly unlikely that a single person could undertake this range of responsibilities effectively in less than half a day a week, as a bare minimum. There will also be periods of more intense pressure when a substantially larger time allocation will be essential.

The corresponding salary cost for a ‘senior leader’ might be £3,000-£4,000 per year, not to mention the cost of undertaking the other responsibilities displaced.

There will also need to be a sizeable school budget and time allocation for staff to undertake reviews, professional development and moderation activities.

Moderation itself will carry a significant cost. Internal moderation has the larger opportunity cost; external moderation will be more expensive in direct terms.

Explanatory note (E), attached to the Design Checklist, says:

‘The exact form of moderation will vary from school to school and from subject to subject. The majority of moderation (in schools large enough to support it) will be internal but all schools should undertake a proportion of external moderation each year, working with partner schools and local agencies.’

Hence the cost of external moderation will fall disproportionately on smaller schools with smaller budgets.

It would be wrong to suggest that this workload is completely new. To some extent these various responsibilities will be undertaken already, but the Commission’s recommendations are effectively a ratcheting up of the demand on schools.

Rather than insisting on these responsibilities being allocated to a single individual with other senior management responsibilities, it might be preferable to set out the responsibilities in more detail and give schools greater flexibility over how they should be distributed between staff.

Some of these tasks might require senior management input, but others could be handled by other staff, including paraprofessionals.

.

National support

Recommendation 7 (part): Furthermore, excellent practice in assessment should be identified and publicised, with the Department for Education responsible for ensuring that this is undertaken.

Recommendation 8 (part): Schools should be prepared to submit their assessment to external moderators, who should have the right to provide a written report to the head teacher and governors setting out a judgement on the quality and reliability of assessment in the school, on which the school should act. The Commission is of the view that at least some external moderation should be undertaken by moderators with no vested interest in the outcomes of the school’s assessment. This will avoid any conflicts of interest and provide objective scrutiny and broader alignment of standards across schools.

Recommendation 9: Schools should identify a trained assessment lead, who will work with other local leads and nationally accredited assessment experts on moderation activities.

Recommendation 11: The Ofsted school inspection framework should explore whether schools have effective assessment systems in place and consider how effectively schools are using pupil assessment information and data to improve learning in the classroom and at key points of transition between key stages and schools.

Recommendation 14: Further work should be undertaken to improve training for assessment within initial teacher training (ITT), the newly qualified teacher (NQT) induction year and on-going professional development. This will help to build assessment capacity and support a process of continual strengthening of practice within the school system.

Recommendation 15: The Universities’ Council for the Education of Teachers (UCET) should build provision in initial teacher training for delivery of the essential assessment knowledge.

Recommendation 16: All those responsible for children’s learning should undertake rigorous training in formative, diagnostic and summative assessment, which covers how assessment can be used to support teaching and learning for all pupils, including those with special educational needs. The government should provide support and resources for accredited training for school assessment leads and schools should make assessment training a priority.

Recommendation 17: A number of pilot studies should be undertaken to look at the use of information technology (IT) to support and broaden understanding and application of assessment practice.

Recommendation 19: To assist schools in developing a robust framework and language for assessment, we call upon the NAHT to take the lead in expanding the principles and design checklist contained in this report into a full model assessment policy and procedures, backed by appropriate professional development.

There are also several additional proposals in the commentary that do not make it into the formal recommendations:

  • Schools should be held accountable for the quality of their assessment practice as well as their assessment results, with headteachers also appraising teachers on their use of assessment. (The first part of this formulation appears in Recommendation 11 but not the second.) (p17);
  • It could be useful for the teaching standards to reflect further assessment knowledge, skills and understanding (p17);
  • A national standard in assessment practice for teachers would be a useful addition (p18);
  • The Commission also favoured the approach of having a lead assessor to work with each school or possibly a group of schools, helping to embed good practice across the profession (p18).

We need to take stock of the sheer scale of the infrastructure that is being proposed and its likely cost.

In respect of moderation alone, the Report is calling for sufficient external moderators, ‘nationally accredited assessment experts’ and possibly lead assessors to service some 17,000 primary schools.

Even if we assume that these roles are combined in the same person and that each person can service, say, 25 schools, that still demands something approaching a cadre of 700 people who also need to be supported, managed and trained.

If they are serving teachers there is an obvious opportunity cost. Providing a service of this scale would cost tens of millions of pounds a year.
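
The underlying arithmetic is easily set out. In the sketch below, the per-person cost range is entirely my own assumption, included only to show how quickly the total mounts:

```python
# Back-of-envelope costing for the proposed moderation cadre.

primary_schools = 17_000
schools_per_person = 25

cadre = -(-primary_schools // schools_per_person)  # ceiling division
print(cadre)                             # 680: "something approaching 700"

# Assumed full annual cost per person (salary, training, management,
# travel) of 40,000-60,000 pounds -- my figure, not the Report's.
low, high = cadre * 40_000, cadre * 60_000
print(f"{low / 1e6:.0f}m-{high / 1e6:.0f}m pounds a year")  # 27m-41m
```

Even on these deliberately conservative assumptions, moderation alone reaches tens of millions of pounds a year.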

Turning to training and professional development, the Commission is proposing:

  • Accredited training for some 17,000 school assessment leads (with an ongoing requirement to train new appointees and refresh the training of those who undertook it too far in the past);
  • ‘Rigorous training in formative, diagnostic and summative assessment, which covers how assessment can be used to support teaching and learning for all pupils, including those with special educational needs’ for everyone deemed responsible for children’s learning, so not just teachers. This will include hundreds of thousands of people in the primary sector alone.
  • Revitalised coverage of assessment in ITE and induction, on top of the requisite professional development package.

The Report says nothing of the cost of developing, providing and managing this huge training programme, which would itself run to further tens of millions of pounds a year.

I am plucking a figure out of the air, but it would be reasonable to suggest that moderation and training costs combined might require an annual budget of some £50 million – and quite possibly double that. 

Unless one argues that the testing regime should be replaced by a national sampling process – and while the Report says some of the Commission’s members supported that, it stops short of recommending it – there are no obvious offsetting savings.

It is disappointing that the Commission made no effort at all to quantify the cost of its proposals.

These recommendations provide an excellent marketing opportunity for some of the bodies represented on the Commission.

For example, the CIEA press release welcoming the Report says:

‘One of the challenges, and one that schools will need to meet, is in working together, and with local and national assessment experts, to moderate their judgements and ensure they are working to common standards across the country. The CIEA has an important role to play in training these experts.’

Responsibility for undertaking pilot studies on the role of IT in assessment is not allocated, but one assumes it would be overseen by central government and also funded by the taxpayer.

Any rollout from the pilots would have additional costs attached and would more than likely create additional demand for professional development.

The reference to DfE taking responsibility for sharing excellent practice is already a commitment in the consultation document:

‘…we will provide examples of good practice which schools may wish to follow. We will work with professional associations, subject experts, education publishers and external test developers to signpost schools to a range of potential approaches.’ (paragraph 3.8).

Revision of the School Inspection Framework will require schools to give due priority to the quality of their assessment practice, though Ofsted might reasonably argue that it is already there.

Paragraph 116 of the School Inspection Handbook says:

‘Evidence gathered by inspectors during the course of the inspection should include… the quality and rigour of assessment, particularly in nursery, reception and Key Stage 1.’

We do not yet know whether NAHT will respond positively to the recommendation that it should go beyond the model assessment criteria it has already commissioned by leading work to expand the Principles and Design Checklist into ‘a full model assessment policy and procedures backed by appropriate professional development’.

There was no reference to such plans in the press release accompanying the Report.

Maybe the decision could not be ratified in time by the Association’s decision-making machinery – but this did not prevent the immediate commissioning of the model criteria.

.

Phased Implementation

Recommendation 10: Ofsted should articulate clearly how inspectors will take account of assessment practice in making judgements and ensure both guidance and training for inspectors is consistent with this.

Recommendation 12: The Department for Education should make a clear and unambiguous statement on the teacher assessment data that schools will be required to report to parents and submit to the Department for Education. Local authorities and other employers should provide similar clarity about requirements in their area of accountability.

Recommendation 13: The education system is entering a period of significant change in curriculum and assessment, where schools will be creating, testing and revising their policies and procedures. The government should make clear how they will take this into consideration when reviewing the way they hold schools accountable as new national assessment arrangements are introduced during 2014/15. Conclusions about trends in performance may not be robust.

Recommendation 18: The use by schools of suitably modified National Curriculum levels as an interim measure in 2014 should be supported by the government. However, schools need to be clear that any use of levels in relation to the new curriculum can only be a temporary arrangement to enable them to develop, implement and embed a robust new framework for assessment. Schools need to be conscious that the new curriculum is not in alignment with the old National Curriculum levels.

Recommendation 20: Schools should be asked to publish their principles of assessment from September 2014, rather than being required to publish a detailed assessment framework, which instead should be published by 2016. The development of the full framework should be outlined in the school development plan with appropriate milestones that allow the school sufficient time to develop an effective model.

Recommendation 21: A system wide review of assessment should be undertaken. This would help to repair the disjointed nature of assessment through all ages, 2-19.

The Commission quite rightly identifies a number of issues caused by the implementation timetable, combined with continuing uncertainty over aspects of the Government’s plans.

At the time of writing, the response to the consultation document has still not been published (though it was due in autumn 2013) yet schools will be implementing the new National Curriculum from this September.

The Report says:

‘There was strong concern expressed about the requirement for schools to publish their detailed curriculum and assessment framework in September 2014.’

This is repeated in Recommendation 20, together with the suggestion that this timeline should be amended so that only a school’s principles for assessment need be published by this September.

I have been trying to pin down the source of this requirement.

Schedule 4 of The School Information (England) (Amendment) Regulations 2012 does not require the publication of a detailed assessment framework, referring only to:

‘The following information about the school curriculum—

(a)  in relation to each academic year, the content of the curriculum followed by the school for each subject and details as to how additional information relating to the curriculum may be obtained;

(b)  in relation to key stage 1, the names of any phonics or reading schemes in operation; and

(c)  in relation to key stage 4—

(i)            a list of the courses provided which lead to a GCSE qualification,

(ii)          a list of other courses offered at key stage 4 and the qualifications that may be acquired.’

I could find no Government guidance stating unequivocally that this requires schools to carve up all the National Curriculum programmes of study into year-by-year chunks.  (Though there is no additional burden attached to publication if they have already undertaken this task for planning purposes.)

There are references to the publication of Key Stage 2 results (which will presumably need updating to reflect the removal of levels), but nothing on the assessment framework.

Moreover, the DfE mandatory timeline says that from the Spring Term of 2014:

‘All schools must publish their school curriculum by subject and academic year, including their provision of personal, social, health and economic education (PSHE).’

(The hyperlink returns one to the Regulations quoted above.)

There is no requirement for publication of further information in September.

I wonder therefore if this is a misunderstanding. I stand to be corrected if readers can point me to the source.

It may arise from the primary assessment and accountability consultation document, which discusses publication of curricular details and then proceeds immediately to discuss the relationship between curriculum and assessment:

‘Schools are required to publish this curriculum on their website…In turn schools will be free to design their approaches to assessment, to support pupil attainment and progression. The assessment framework must be built into the school curriculum, so that schools can check what pupils have learned and whether they are on track to meet expectations at the end of the key stage, and so that they can report regularly to parents.’ (paras 3.4-3.5)

But this conflation isn’t supported by the evidence above and, anyway, these are merely proposals.

That said, it must be assumed that the Commission consulted its DfE observer on this point before basing recommendations on this interpretation.

If the observer’s response was consistent with the Commission’s interpretation, then it is apparently inconsistent with all the material so far published by the Department!

It may be necessary for NAHT to obtain clarification of this point given the evidence cited above.

That aside, there are issues associated with the transition from the current system to the future system.

The DfE’s January 2014 ‘myths and facts’ publication says:

‘As part of our reforms to the national curriculum, the current system of “levels” used to report children’s attainment and progress will be removed from September 2014. Levels are not being banned, but will not be updated to reflect the new national curriculum and will not be used to report the results of national curriculum tests. Key Stage 1 and Key Stage KS2 [sic] tests taken in the 2014 to 2015 academic year will be against the previous national curriculum, and will continue to use levels for reporting purposes.

Schools will be expected to have in place approaches to formative assessment that support pupil attainment and progression. The assessment framework should be built into the school curriculum, so that schools can check what pupils have learned and whether they are on track to meet expectations at the end of the key stage, and so that they can report regularly to parents. Schools will have the flexibility to use approaches that work for their pupils and circumstances, without being constrained by a single national approach.’

The reference here to having approaches in place – rather than the publication of a ‘detailed curriculum and assessment framework’ – would not seem wildly inconsistent with the Commission’s idea that schools should establish their principles by September 2014, and develop their detailed assessment frameworks iteratively over the two succeeding years. However, the Government needs to clarify the position.

Since Key Stage 2 tests will not dispense with levels until May 2016 (and they will be published in the December 2015 Performance Tables), there will be an extended interregnum in which National Curriculum Levels will continue to have official currency.

Moreover, levels may still be used in schools – they are not being banned – though they will not be aligned to the new National Curriculum.

The Report says:

‘…it is important to recognise that, even if schools decide to continue with some form of levels, the new National Curriculum does not align to the existing levels and level descriptors and this alignment is a piece of work that needs to be undertaken now.’ (p19).

However, the undertaking of this work does not feature in the Recommendations, unless it is implicit in the production by NAHT of ‘a full model assessment policy and procedures’, which seems unlikely.

One suspects that the Government would be unwilling to endorse such a process, even as a temporary arrangement, since what is to stop schools from continuing to use this new improved levels structure more permanently?

The Commission would appear to be on stronger ground in asking Ofsted to make allowances during the interregnum (which is what I think Recommendation 10 is about) especially given that, as Recommendation 13 points out, evidence of ‘trends in performance may not be robust’.

The point about clarity over teacher assessment is well made – and one hopes it will form part of the response to the primary assessment and accountability consultation document when that is eventually published.

The Report itself could have made progress in this direction by establishing and maintaining a clearer distinction between statutory and internal teacher assessment.

The consultation document itself made clear that KS2 writing would continue to be assessed via teacher assessment rather than a test, and, moreover:

‘At the end of each key stage schools are required to report teacher assessment judgements in all national curriculum subjects to parents. Teachers will judge whether each pupil has met the expectations set out in the new national curriculum. We propose to continue publishing this teacher assessment in English, mathematics and science, as Lord Bew recommended.’ (para 3.9)

But what it does not say is what requirements will be imposed to ensure consistency across this data. Aside from KS2 writing, will these teacher assessments also be subject to the new scaled scores, and potentially deciles too?

Until schools have answers to that question, they cannot consider the overall shape of their assessment processes.

The final recommendation, for a system-wide review of assessment from 2 to 19, is whistling in the wind, especially given the level of disruption already caused by the decision to remove levels.

Neither this Government nor the next is likely to act upon it.

 

Conclusion

The Commission’s Report moves us forward in broadly the right direction.

The Principles, Design Checklist and wider recommendations help to fill some of the void created by the decision to remove National Curriculum levels, the limited nature of the primary assessment and accountability consultation document and the inordinate delay in the Government’s response to that consultation.

We are in a significantly better place as a consequence of this work being undertaken.

But there are some worrying inconsistencies in the Report as well as some significant shortcomings to the proposals it contains. There are also several unanswered questions.

Not to be outdone, I have bound these up into a series of recommendations directed at NAHT and its Commission. There are 23 in all and I have given mine letters rather than numerals, to distinguish them from the Commission’s own recommendations.

  • Recommendation A: The Commission should publish all the written evidence it received.
  • Recommendation B: The Commission should consult on key provisions within the Report, seeking explicit commitment to the Principles from DfE, Ofqual and Ofsted.
  •  Recommendation C: The Commission should ensure that its Design Checklist is fully consistent with the Principles in all respects. It should also revisit the internal logic of the Design Checklist.
  • Recommendation D: So far as possible, ahead of the primary assessment and accountability consultation response, the Commission should distinguish clearly how its proposals relate to statutory teacher assessment, alongside schools’ internal assessment processes.
  • Recommendation E: NAHT should confirm who it has commissioned to produce model assessment criteria and to what timetable. It should also explain how these criteria will be ‘nationally standardised’.
  • Recommendation F: The Commission should clarify whether the trained assessment lead mentioned in Recommendation 9 is the same or different to the ‘senior leader who is responsible for assessment’ mentioned in the Design Checklist.
  • Recommendation G: The Commission should set out more fully the responsibilities allocated to this role or roles and clarify that schools have flexibility over how they distribute those responsibilities between staff.
  • Recommendation H:  NAHT should clarify how the model criteria under development apply – if at all – to the wider school curriculum in all schools and to academies not following the National Curriculum.
  • Recommendation I: NAHT should clarify how the model criteria under development will allow for the fact that in all subjects all schools enjoy flexibility over the positioning of content in different years within the same key stage – and can also anticipate parts of the subsequent key stage.
  • Recommendation J: NAHT should clarify whether the intention is that the model criteria should reflect the allocation of content to specific terms as well as to specific years.
  • Recommendation K: The Commission should explain how its approach to internal assessment will help predict future performance in end of Key Stage tests.
  • Recommendation L: The Commission should shift from its narrow and ‘mildly accelerative’ view of high attainment to accommodate a richer concept that combines enrichment (breadth), extension (depth) and acceleration (faster pace) according to learners’ individual needs.
  • Recommendation M: The Commission should incorporate a fourth ‘far exceeded’ assessment judgement, since the ‘exceeded’ judgement covers too wide a span of attainment.
  • Recommendation N: NAHT should clarify whether its model criteria will extend into KS3, to accommodate assessment against the criteria for at least year 7, and ideally beyond.
  • Recommendation O: The Commission should clarify whether anticipating criteria for a subsequent year is a cause or a consequence of being judged to be ‘exceeding’ expectations in the learner’s own chronological year.
  • Recommendation P: The Commission should confirm that numerical summaries of assessment criteria – as well as any associated ranking positions – should be made available to parents who request them.
  • Recommendation Q: The Commission should explain why schools should be forbidden from ranking learners against each other (or allocating them to deciles).
  • Recommendation R: The Commission should assess the financial impact of its proposals on schools of different sizes.
  • Recommendation S: The Commission should cost its proposals for training and moderation, identifying the burden on the taxpayer and any offsetting savings.
  • Recommendation T: NAHT should clarify its response to Recommendation 19, that it should lead the development of a full model assessment policy and procedures.
  • Recommendation U: The Commission should clarify with DfE its understanding that schools are required to publish a detailed curriculum and assessment framework by September 2014.
  • Recommendation V: The Commission should clarify with DfE the expectation that schools should have in place ‘approaches to formative assessment’ and whether the proposed assessment principles satisfy this requirement.
  • Recommendation W: The Commission should clarify whether it is proposing that work be undertaken to align National Curriculum levels with the new National Curriculum and, if so, who it proposes should undertake this.

So – good overall – subject to these 23 reservations!

Some are more significant than others. Given my area of specialism, I feel particularly strongly about those that relate directly to high attainers, especially L and M above.

Those are the two I would nail to the door of 1 Heath Square.

.

GP

March 2014

What Becomes of Schools That Fail Their High Attainers?*

.

This post reviews the performance and subsequent history of schools with particularly poor results for high attainers in the Secondary School Performance Tables over the last three years.


Seahorse in Perth Aquarium by Gifted Phoenix

It establishes a high attainer ‘floor target’ so as to draw a manageable sample of poor performers and, having done so:

  • Analyses the characteristics of this sample;
  • Explores whether these schools typically record poor performance in subsequent years or manage to rectify matters;
  • Examines the impact of various interventions, including falling below the official floor targets, being placed in special measures or deemed to have serious weaknesses following inspection, becoming an academy and receiving a pre-warning and/or warning notice;
  • Considers whether the most recent Ofsted reports on these schools do full justice to this issue, including those undertaken after September 2013 when new emphasis was placed on the performance of the ‘most able’.

The post builds on my previous analysis of high attainment in the 2013 School Performance Tables (January 2014). It applies the broad definition of high attainers used in the Tables, which I discussed in that post and have not repeated here.

I must emphasise at the outset that factors other than poor performance may partially explain particularly low scores in the Tables.

There may be several extenuating circumstances that are not reflected in the results. Sometimes these may surface in Ofsted inspection reports, but the accountability and school improvement regime typically imposes a degree of rough justice, and I have followed its lead.

It is also worth noting that the Performance Tables do not provide data for schools where the number of high attainers is five or fewer, because of the risk that individuals may be identifiable even though the data is anonymised.

This is unfortunate since the chances are that schools with very few high attainers will find it more difficult than others to address their needs. We may never know, but there is more on the impact of cohort size below.

Finally please accept my customary apology for any transcription errors. Do let me know if you notice any and I will correct them.

.

Drawing the Sample

The obvious solution would be to apply the existing floor targets to high attainers.

So it would include all schools recording:

  • Fewer than 35% (2011) or 40% (2012 and 2013) of high attainers achieving five or more GCSEs at grades A*-C or equivalent including GCSEs in English and mathematics and
  • Below median scores for the percentage of high attainers making at least the expected three levels of progress between Key Stages 2 and 4 in English and maths respectively.

But the first element is far too undemanding a threshold to apply for high attaining learners and the overall target generates a tiny sample.

The only school failing to achieve it in 2013 was Ark Kings Academy in Birmingham, which recorded just six high attainers, forming 9% of the cohort (so only just above the level at which results would have been suppressed).

In 2012 two schools were in the same boat:

  • The Rushden Community College in Northamptonshire, with 35 high attainers (26% of the cohort), which became a sponsored academy with the same name on 1 December 2012; and
  • Culverhay School in Bath and North East Somerset, with 10 high attainers (19% of the cohort), which became Bath Community Academy on 1 September 2012.

No schools at all performed at this level in 2011.

A sample of just three schools is far too small to be representative, so it is necessary to set a more demanding benchmark which combines the same threshold and progress elements.

The tiny sample is not the fault of the progress measure: far too many schools fall below the median level of performance – which stands at around 70% each year in both English and maths – even with their cadres of high attainers. Hence I need to lower the pitch of this element to create a manageable sample.

I plumped for 60% or fewer high attainers making at least the expected progress between KS2 and KS4 in both English and maths. This captured 22 state-funded schools in 2013, 31 in 2012 and 38 in 2011. (It also enabled Ark Kings Academy to escape, by virtue of the fact that 67% of its high attainers achieved the requisite progress in English.)

For the threshold element I opted for 70% or fewer high attainers achieving five or more GCSEs at grades A*-C or equivalent including GCSEs in English and maths. This captured 19 state-funded schools in 2013, 29 in 2012 and 13 in 2011.
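
In code, the sample-drawing logic amounts to two filters and their intersection. This is a sketch only – the field names are hypothetical, and the data would be drawn from the published Performance Tables:

```python
# Two illustrative floor-target filters and their intersection.

def below_progress_floor(school):
    """60% or fewer high attainers making expected progress in BOTH subjects."""
    return school["lop_en"] <= 60 and school["lop_ma"] <= 60

def below_threshold_floor(school):
    """70% or fewer high attainers with 5+ A*-C GCSEs incl English and maths."""
    return school["five_ac_em"] <= 70

schools_2013 = [
    {"name": "Gloucester Academy", "five_ac_em": 44, "lop_en": 28, "lop_ma": 50},
    # ... the remaining state-funded schools for that year
]

sample = [s for s in schools_2013
          if below_progress_floor(s) and below_threshold_floor(s)]
# For 2013 this yields 22 schools below the progress element, 19 below the
# threshold element and 7 below both -- the central overlap in the Venn
# diagram below.
```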

.

[Venn diagram: schools below the progress element only, the threshold element only, and both, 2011 to 2013]

The numbers of state-funded schools that met both criteria were seven in 2013, eight  in 2012 and five in 2011, so 20 in all.

I decided to feature this small group of schools in the present post while also keeping in mind the schools occupying each side of the Venn Diagram. I particularly wanted to see whether schools which emerged from the central sample in subsequent years continued to fall short on one or other of the constituent elements.

The 20 schools in the main sample are set out in Table 1 below, which provides more detail about each of them.

.

Table 1: Schools Falling Below Illustrative High Attainer Floor Targets 2011-2013

Name | Type | LA | Status/Sponsor | Subsequent History

2011
Carter Community School | 12-16 mixed modern | Poole | Community | Sponsored academy (ULT) 1/4/13
Hadden Park High School | 11-16 mixed comp | Nottingham | Foundation | Sponsored academy (Bluecoat School) 1/1/14
Merchants Academy | 11-18 mixed comp | Bristol | Sponsored academy (Merchant Venturers/University of Bristol) | –
The Robert Napier School | 11-18 mixed modern | Medway | Foundation | Sponsored academy (Fort Pitt Grammar School) 1/9/12
Bishop of Rochester Academy | 11-18 mixed comp | Kent | Sponsored academy (Medway Council/Canterbury Christ Church University/Diocese of Rochester) | –

2012
The Rushden Community College | 11-18 mixed comp | Northants | Community | Sponsored academy (The Education Fellowship) 12/12
Culverhay School | 11-18 boys comp | Bath and NE Somerset | Community | Bath Community Academy – mixed (Cabot Learning) 1/9/12
Raincliffe School | 11-16 mixed comp | N Yorks | Community | Closed 8/12 (merged with Graham School)
The Coseley School | 11-16 mixed comp | Dudley | Foundation | –
Fleetwood High School | 11-18 mixed comp | Lancs | Foundation | –
John Spendluffe Foundation Technology College | 11-16 mixed modern | Lincs | Academy converter | –
Parklands High School | 11-18 mixed | Liverpool | Foundation | Discussing academy sponsorship (Bright Tribe)
Frank F Harrison Engineering College | 11-18 mixed comp | Walsall | Foundation | Mirus Academy (sponsored by Walsall College) 1/1/12

2013
Gloucester Academy | 11-19 mixed comp | Glos | Sponsored academy (Prospect Education/Gloucestershire College) | –
Christ the King Catholic and Church of England VA School | 11-16 mixed comp | Knowsley | VA | Closed 31/8/13
Aireville School | 11-16 mixed modern | N Yorks | Community | –
Manchester Creative and Media Academy for Boys | 11-19 boys comp | Manchester | Sponsored academy (Manchester College/Manchester Council/Microsoft) | –
Fearns Community Sports College | 11-16 mixed comp | Lancs | Community | –
Unity College Blackpool | 5-16 mixed comp | Blackpool | Community | Unity Academy Blackpool (sponsored by Fylde Coast Academies)
The Mirus Academy | 3-19 mixed comp | Walsall | Sponsored academy (Walsall College) | –

 .

Only one school appears twice over the three-year period albeit in two separate guises – Frank F Harrison/Mirus.

Of the 20 in the sample, seven were recorded in the relevant year’s Performance Tables as community schools, six as foundation schools, one was VA, one was an academy converter and the five remaining were sponsored academies.

Of the 14 that were not originally academies, seven have since become sponsored academies and one is discussing the prospect. Two more have closed, so just five – 25% of the sample – remain outside the academies sector.

All but two of the schools are mixed (the other two are boys’ schools). Four are modern schools and the remainder comprehensive.

Geographically they are concentrated in the Midlands and the North, with a few in the South-West and the extreme South-East. There are no representatives from London, the East or the North-East.

.

Performance of the Core Sample

Table 2 below looks at key Performance Table results for these schools. I have retained the separation by year and the order in which the schools appear, which reflects their performance on the GCSE threshold measure, with the poorest performing at the top of each section.

.

Table 2: Performance of schools falling below proposed high attainer floor targets 2011-2013

Name | No of HA | % HA | 5+ A*-C incl E+M | 3+ LoP En | 3+ LoP Ma | APS (GCSE)

2011
Carter Community School | 9 | 13 | 56 | 56 | 44 | 304.9
Hadden Park High School | 15 | 13 | 60 | 40 | 20 | 144.3
Merchants Academy | 19 | 19 | 68 | 58 | 42 | 251.6
The Robert Napier School | 28 | 12 | 68 | 39 | 46 | 292.8
Bishop of Rochester Academy | 10 | 5 | 70 | 50 | 60 | 298.8

2012
The Rushden Community College | 35 | 26 | 3 | 0 | 54 | 326.5
Culverhay School | 10 | 19 | 30 | 40 | 20 | 199.3
Raincliffe School | 6 | 11 | 50 | 50 | 33 | 211.5
The Coseley School | 35 | 20 | 60 | 51 | 60 | 262.7
Fleetwood High School | 34 | 22 | 62 | 38 | 24 | 272.9
John Spendluffe Foundation Technology College | 14 | 12 | 64 | 50 | 43 | 283.6
Parklands High School | 13 | 18 | 69 | 23 | 8 | 143.7
Frank F Harrison Engineering College | 20 | 12 | 70 | 35 | 60 | 188.3

2013
Gloucester Academy | 18 | 13 | 44 | 28 | 50 | 226.8
Christ the King Catholic and Church of England VA School | 22 | 22 | 55 | 32 | 41 | 256.5
Aireville School | 23 | 23 | 61 | 35 | 57 | 267.9
Manchester Creative and Media Academy for Boys | 16 | 19 | 63 | 50 | 50 | 244.9
Fearns Community Sports College | 22 | 13 | 64 | 36 | 59 | 306.0
Unity College Blackpool | 21 | 18 | 67 | 57 | 52 | 277.1
The Mirus Academy | 23 | 13 | 70 | 57 | 52 | 201.4

.

The size of the high attainer population in these schools varies between 6 (the minimum for which statistics are published) and 35, with an average of just under 20.

The percentage of high attainers within each school’s cohort ranges from 5% to 26% with an average of slightly over 16%.

This compares with a national average in 2013 for all state-funded schools of 32.4%, almost twice the size of the average cohort in this sample. All 20 schools here record a high attainer population significantly below this national average.

This correlation may be significant – tending to support the case that high attainers are more likely to struggle in schools where they are less strongly concentrated – but it does not prove the relationship.

Achievement against the GCSE threshold measure falls as low as 3% (Rushden in 2012) but this was reportedly attributable to the school selecting ineligible English specifications.

Otherwise the poorest result is 30% at Culverhay, also in 2012, followed by Gloucester Academy (44% in 2013) and Raincliffe (50% in 2012). Only these four schools have recorded performance at or below 50%.

Indeed there is a very wide span of performance even amongst these small samples, especially in 2012 when it reaches an amazing 67 percentage points (40 percentage points excluding Rushden). In 2013 there was a span of 26 percentage points and in 2011 a span of 14 percentage points.

The overall average amongst the 20 schools is almost 58%. This varies by year. In 2011 it was 64%, in 2012 it was significantly lower at 51% (but rose to 58% if Rushden is excluded) and in 2013 it was 61%.
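
For transparency, these figures can be re-derived directly from the threshold column of Table 2, as the sketch below shows:

```python
# Re-deriving the summary statistics for the 5+ A*-C threshold measure,
# using the percentages in Table 2 above.

by_year = {
    2011: [56, 60, 68, 68, 70],
    2012: [3, 30, 50, 60, 62, 64, 69, 70],   # 3 = Rushden
    2013: [44, 55, 61, 63, 64, 67, 70],
}

for year, scores in by_year.items():
    avg = sum(scores) / len(scores)
    span = max(scores) - min(scores)
    print(year, round(avg), span)   # 2011: 64, 14 | 2012: 51, 67 | 2013: 61, 26

all_scores = [s for scores in by_year.values() for s in scores]
print(round(sum(all_scores) / len(all_scores)))   # 58 overall
# Note: the overall figure weights each of the 20 schools equally
# (1154 / 20 = 57.7), rather than averaging the three yearly averages.
```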

This compares with a national average for high attainers in state-funded schools of 94.7% in 2013. The extent to which some of these outlier schools are undershooting the national average is truly eye-watering.

Turning to the progress measures, one might expect even greater variance, given that so many more schools fail to clear this element of the official floor targets with their high attainers.

The overall average across these 20 schools is 41% in English and 44% in maths, suggesting that performance is slightly stronger in maths than English.

But in 2011 the averages were 49% in English and 42% in maths, reversing this general pattern and producing a much wider gap in favour of English.

In 2012 they were 36% in English and 38% in maths, but the English average improves to 41% if Rushden’s result is excluded. This again bucks the overall trend.

The overall average is cemented by the 2013 figures when the average for maths stood at 53% compared with 42% for English.

Hence, over the three years, we can see that the sharp drop in English in 2012 – most probably attributable to the notorious marking issue – was barely recovered in 2013. Conversely, a drop in maths in 2012 was followed by a sharp recovery in 2013.

The small sample size calls into question the significance of these patterns, but they are interesting nevertheless.

The comparable national averages among all state-funded schools in 2013 were 86.2% in English and 87.8% in maths. So the schools in this sample are typically operating at around half the national average levels. This is indeed worse than the comparable record on the threshold measure.

That said, the variation in these results is again huge – 35 percentage points in English (excluding Rushden) and as much as 52 percentage points in maths.

There is no obvious pattern in these schools’ comparative performance in English and maths. Ten schools scored more highly in English and nine in maths, with one school recording equally in both. English was in the ascendancy in 2011 and 2012, but maths supplanted it in 2013.

The final column in Table 2 shows the average point score (APS) for high attainers’ best eight GCSE results. There is once more a very big range, from 144.3 to 326.5 – over 180 points – compared with a 2013 national average for high attainers in state-funded schools of 377.6.

The schools at the bottom of the distribution are almost certainly relying heavily on GCSE-equivalent qualifications, rather than pushing their high attainers towards GCSEs.

Those schools that record relatively high APS alongside relatively low progress scores are most probably taking their high attaining learners with L5 at KS2 to GCSE grade C, but no further.
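To put those APS figures in context, here is a minimal sketch of how a capped best-eight score is built. The grade values assume the pre-2014 QCA tariff (G = 16, each grade above adding six points); equivalent qualifications also carry point values under that tariff, which is how the low-progress, high-APS pattern can arise.

    # Pre-2014 QCA tariff for GCSE grades: G = 16, each grade above adds 6.
    GCSE_POINTS = {"A*": 58, "A": 52, "B": 46, "C": 40,
                   "D": 34, "E": 28, "F": 22, "G": 16}

    def best_eight_aps(grades):
        """Capped points score: the sum of a pupil's best eight grade values."""
        points = sorted((GCSE_POINTS[g] for g in grades), reverse=True)
        return sum(points[:8])

    # Eight grade Cs give 320 points - near the top of the 144.3-326.5 range
    # in this sample, yet well below the 377.6 national average for high attainers.
    print(best_eight_aps(["C"] * 8))  # 320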

.

Changes in Performance from 2011 to 2013

Table 3, below, shows how the performance of the 2011 sample changed in 2012 and 2013, while Table 4 shows how the 2012 sample performed in 2013.

The numbers in green show improvements compared with the schools’ 2011 baselines and those in bold are above my illustrative high attainer floor target. The numbers in red are those which are lower than the schools’ 2011 baselines.

.

Table 3: Performance of the 2011 Sample in 2012 and 2013

Name | % HA (11/12/13) | 5+ A*-C incl E+M (11/12/13) | 3+ LOP E (11/12/13) | 3+ LOP M (11/12/13)
Carter Community School | 13 14 13 | 56 100 92 | 56 80 75 | 44 80 33
Hadden Park High School | 13 15 8 | 60 87 75 | 40 80 75 | 20 53 50
Merchants Academy | 19 16 20 | 68 79 96 | 58 79 88 | 42 47 71
The Robert Napier School | 12 12 11 | 68 83 96 | 39 59 92 | 46 62 80
Bishop of Rochester Academy | 5 7 8 | 70 83 73 | 50 67 47 | 60 75 53

.

All but one of the five schools showed little variation in the relative size of their high attainer populations over the three years in question.

More importantly, all five schools made radical improvements in 2012.

Indeed, all five exceeded the 5+ GCSE threshold element of my illustrative floor target in both 2012 and 2013 though, more worryingly, three of the five fell back somewhat in 2013 compared with 2012, which might suggest that short term improvement is not being fully sustained.

Four of the five exceeded the English progress element of the illustrative floor target in 2012 while the fifth – Robert Napier – missed by only 1%.

Four of the five also exceeded the floor in 2013, including Robert Napier which made a 43 percentage point improvement compared with 2012. On this occasion, Bishop of Rochester was the exception, having fallen back even below its 2011 level.

In the maths progress element, all five schools made an improvement in 2012, three of the five exceeding the floor target, the exceptions being Hadden Park and Merchants Academy.

But by 2013, only three schools remained above their 2011 baseline and only two – Merchants and Robert Napier – remained above the floor target.

None of the five schools would have remained below my floor target in either 2012 or 2013, by virtue of their improved performance on the 5+ GCSE threshold element, but there was significantly greater insecurity in the progress elements, especially in maths.
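To make the logic of that last paragraph explicit, here is a minimal sketch of the overall test as I have applied it. The element values below are placeholders, not the actual thresholds defined earlier in the post, and treating either subject’s progress shortfall as a breach is my own reading rather than a quoted rule.

    # Placeholder element values - the actual illustrative thresholds are set
    # out earlier in this post; these numbers are assumptions for illustration.
    THRESHOLD_FLOOR = 65  # % of high attainers achieving 5+ A*-C incl E+M
    PROGRESS_FLOOR = 70   # % of high attainers making 3+ levels of progress

    def below_overall_floor(threshold_pct, lop_english_pct, lop_maths_pct):
        """A school breaches the overall illustrative target only if it
        undershoots both the threshold element and the progress element."""
        threshold_breach = threshold_pct < THRESHOLD_FLOOR
        progress_breach = (lop_english_pct < PROGRESS_FLOOR
                           or lop_maths_pct < PROGRESS_FLOOR)
        return threshold_breach and progress_breach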

There is also evidence of huge swings in performance on the progress measures. Hadden Park improved progression in English by 40 percentage points between 2011 and 2012. Carter Community School almost matched this in maths, improving by 36 percentage points, only to fall back by a huge 47 percentage points in the following year.

Overall this would appear to suggest that this small sample of schools made every effort to improve against the threshold and progress measures in 2012 but, while most were able to sustain improvement – or at least control their decline – on the threshold measure into 2013, this was not always possible with the progress elements.

There is more than a hint of two markedly different trajectories, with one group of schools managing to sustain initial improvements from a very low base and the other group falling back after an initial drive.

Is the same pattern emerging amongst the group of schools that fell below my high attainer floor target in 2012?

.

Table 4: Performance of the 2012 Sample in 2013

Name | % HA (12/13) | 5+ A*-C incl E+M (12/13) | 3+ LOP E (12/13) | 3+ LOP M (12/13)
The Rushden Community College | 26 23 | 3 90 | 0 74 | 54 87
Culverhay School | 19 12 | 30 67 | 40 67 | 20 67
Raincliffe School | 11 - | 50 - | 50 - | 33 -
The Coseley School | 20 26 | 60 88 | 51 82 | 60 78
Fleetwood High School | 22 24 | 62 84 | 38 36 | 24 67
John Spendluffe Foundation Technology College | 12 15 | 64 100 | 50 61 | 43 83
Parklands High School | 18 11 | 69 78 | 23 56 | 8 56
Frank F Harrison Engineering College | 12 13 | 70 70 | 35 57 | 60 52

.

We must rule out Raincliffe, which closed, leaving seven schools under consideration.

Some of these schools experienced slightly more fluctuation in the size of their high attainer populations – and over the shorter period of two years rather than three.

Six of the seven managed significant improvements in the 5+ GCSE threshold with the remaining school – Frank F Harrison – maintaining its 2012 performance.

Two schools – Frank F Harrison and Culverhay – did not exceed the illustrative floor on this element. Meanwhile John Spendluffe achieved a highly creditable perfect score, comfortably exceeding the national average for state-funded schools. Rushden was not too far behind.

There was greater variability with the progress measures. In English, three schools remained below the illustrative floor in 2013 with one – Fleetwood High – falling back compared with its 2012 performance.

Conversely, Coseley improved by 31 percentage points to not far below the national average for state-funded schools.

In maths two schools failed to make it over the floor. Parklands made a 48 percentage point improvement but still fell short, while Frank F Harrison fell back eight percentage points compared with its 2012 performance.

On the other hand, Rushden and John Spendluffe are closing in on national average performance for state-funded schools. Both have made improvements of over 30 percentage points.

Of the seven, only Frank F Harrison would remain below my overall illustrative floor target on the basis of its 2013 performance.

Taking the two samples together, the good news is that many struggling schools are capable of making radical improvements in their performance with high attainers.

But question marks remain over the capacity of some schools to sustain initial improvements over subsequent years.

 .

What Interventions Have Impacted on these Schools?

Table 5 below reveals how different accountability and school improvement interventions have been brought to bear on this sample of 20 schools since 2011.

.

Table 5: Interventions Impacting on Sample Schools 2011-2014

Name | Floor targets | Most recent inspection | Ofsted rating | (Pre-)warning notice | Academised

2011
Carter Community School | FT 2011, FT 2013 | 29/11/12 (NYI as academy) | 2 | – | Sponsored
Hadden Park High School | FT 2011, FT 2012, FT 2013 | 13/11/13 (NYI as academy) | SM | – | Sponsored
Merchants Academy | FT 2011, FT 2012 | 9/6/11 | 2 | – | –
The Robert Napier School | FT 2011, FT 2012 | 17/09/09 (NYI as academy) | 3 | – | Sponsored
Bishop of Rochester Academy | FT 2011, FT 2013 | 28/6/13 | 3 | PWN 3/1/12 | –

2012
The Rushden Community College | FT 2012 | 10/11/10 (NYI as academy) | 3 | – | Sponsored
Culverhay School | FT 2011, FT 2012, (FT 2013) | 11/1/12 (NYI as academy) | SM | – | Sponsored
Raincliffe School | FT 2012 | 19/10/10 | 3 | – | Closed
The Coseley School | FT 2012 | 13/9/12 | SM | – | –
Fleetwood High School | FT 2012, FT 2013 | 20/3/13 | SWK | – | –
John Spendluffe Foundation Technology College | FT 2012 | 3/3/10; as academy 18/9/13 | 1; 2 | – | Academy converter 9/11
Parklands High School | FT 2011, FT 2012, FT 2013 | 5/12/13 | SM | – | Discussing sponsorship
Frank F Harrison Engineering College | FT 2011, FT 2012, (FT 2013) | 5/7/11 (see Mirus Academy below) | 3 | – | Now Mirus Academy (see below)

2013
Gloucester Academy | FT 2011, FT 2012, FT 2013 | 4/10/12 | SWK | PWN 16/9/13; WN 16/12/13 | –
Christ the King RC and CofE VA School | FT 2011, FT 2012, FT 2013 | 18/9/12 | SM | – | Closed
Aireville School | FT 2012, FT 2013 | 15/5/13 | SM | – | –
Manchester Creative and Media Academy for Boys | FT 2011, FT 2012, FT 2013 | 13/6/13 | SWK | PWN 3/1/12 | –
Fearns Community Sports College | FT 2011, FT 2013 | 28/6/12 | 3 | – | –
Unity College Blackpool | FT 2011, FT 2012, FT 2013 | 9/11/11 (NYI as academy) | 3 | – | Sponsored
The Mirus Academy | FT 2013 | 7/11/13 | SM | – | –

Key: FT = below the official floor targets in that year; NYI = not yet inspected; SM = special measures; SWK = serious weaknesses; PWN = pre-warning notice; WN = warning notice; – = not applicable.

 .

Floor Targets

The first and obvious point to note is that every single school in this list fell below the official floor targets in the year in which it also undershot my illustrative high attainers’ targets.

It is extremely reassuring that none of the schools returning particularly poor outcomes with high attainers are deemed acceptable performers in generic terms. I had feared that a few schools at least would achieve this feat.

In fact, three-quarters of these schools have fallen below the floor targets in at least two of the three years in question, while eight have done so in all three years, two having changed their status by becoming academies in the final year (which, strictly speaking, prevents them from scoring the hat-trick). One has since closed.

Some schools appear to have been spared intervention by receiving a relatively positive Ofsted inspection grade despite their floor target records. For example, Carter Community School had a ‘good’ rating sandwiched between two floor target appearances, while Merchants Academy presumably received its good rating before subsequently dropping below the floor.

John Spendluffe managed an outstanding rating two years before it dropped below the floor target and was rated good – in its new guise as an academy – a year afterwards.

The consequences of falling below the floor targets are surprisingly unclear, as indeed are the complex rules governing the wider business of intervention in underperforming schools.

DfE press notices typically say something like:

‘Schools below the floor and with a history of underperformance face being taken over by a sponsor with a track record of improving weak schools.’

But of course that can only apply to schools that are not already academies.

Moreover, LA-maintained schools may appeal to Ofsted against standards and performance warning notices issued by their local authorities; and schools and LAs may also challenge forced academisation in the courts, arguing that they have sufficient capacity to drive improvement.

As far as I can establish, it is nowhere clearly explained what exactly constitutes a ‘history of underperformance’, so there is inevitably a degree of subjectivity in the application of this criterion.

Advice elsewhere suggests that a school’s inspection outcomes and ‘the local authority’s position in terms of securing improvement as a maintained school’ should also be taken into account alongside achievement against the floor targets.

We do not know what weighting is given to these different sources of evidence, nor can we rule out the possibility that other factors – tangible or intangible – are also weighed in the balance.

Some might argue that this gives politicians the necessary flexibility to decide each case on its merits, taking careful account of the unique circumstances that apply rather than imposing a standard set of cookie-cutter judgements.

Others might counter that the absence of standard criteria, imposed rigorously but with flexibility to take additional special circumstances into account, lays such decisions unnecessarily open to dispute and is likely to generate costly and time-consuming legal challenge.

.

Academy Warning Notices

When it comes to academies:

‘In cases of sustained poor academic performance at an academy, ministers may issue a pre-warning notice to the relevant trust, demanding urgent action to bring about substantial improvements, or they will receive a warning notice. If improvement does not follow after that, further action – which could ultimately lead to a change of sponsor – can be taken. In cases where there are concerns about the performance of a number of a trust’s schools, the trust has been stopped from taking on new projects.’

‘Sustained poor academic performance’ may or may not be different from a ‘history of underperformance’ and it too escapes definition.

One cannot but conclude that it would be very helpful indeed to have some authoritative guidance, so that there is much greater transparency in the processes through which these various provisions are being applied, to academies and LA-maintained schools alike.

In the absence of such guidance, it seems rather surprising that only three of the academies in this sample – Bishop of Rochester, Gloucester and Manchester Creative and Media – have received pre-warning letters to date, while only Gloucester’s has been superseded by a full-blown warning notice. None of these mention specifically the underperformance of high attainers.

  • Bishop of Rochester received its notice in January 2012, but subsequently fell below the floor targets in both 2012 and 2013 and – in the interim – received an Ofsted inspection rating of 3 (‘requires improvement’).
  • Manchester Creative and Media also received its pre-warning notice in January 2012. It too has been below the floor targets in both 2012 and 2013 and was deemed to have serious weaknesses in a June 2013 inspection.
  • Gloucester received its pre-warning notice much more recently, in September 2013, followed by a full warning notice just three months later.

These pre-warning letters invite the relevant Trusts to set out within 15 days what action they will take to improve matters, whereas the warning notices demand a series of specific improvements with a tight deadline. (In the case of Gloucester Academy, the notice issued on 16 December 2013 imposed a deadline of 15 January 2014. We do not yet know the outcome.)

Other schools in my sample have presumably been spared a pre-warning letter because of their relatively recent acquisition of academy status, although several other 2012 openers have already received them. One anticipates that more will attract such attention in due course.

 .

Ofsted Inspection

The relevant columns of Table 5 reveal that, of the 12 schools that are now academies (taking care to count Harrison/Mirus as one rather than two), half have not yet been inspected in their new guise.

As noted above, it is strictly the case that, when schools become academies – whether sponsored or via conversion – they are formally closed and replaced by successor schools, so the old inspection reports no longer apply to the new school.

However, this does not prevent many academies from referring to such reports on their websites – and they do have a certain currency when one wishes to see whether or not a recently converted academy has been making progress.

But, if we accept the orthodox position, there are only six academies with bona fide inspection reports: Merchants, Bishop of Rochester, John Spendluffe, Gloucester, Manchester Creative and Media and Mirus.

All five of the LA-maintained schools still open have been inspected fairly recently: Coseley, Fleetwood, Parklands, Aireville and Fearns.

This gives us a sample of 11 schools with valid inspection reports:

  • Two academies are rated ‘good’ (2) – Merchants and John Spendluffe;
  • One academy – Bishop of Rochester – and one LA-maintained school – Fearns – ‘require improvement’ (3);
  • Two academies – Gloucester and Manchester – and one LA-maintained school – Fleetwood – are inadequate (4), having serious weaknesses; and
  • One academy – Mirus – and three LA-maintained schools – Parklands, Coseley and Aireville – are inadequate (4) and in special measures.

The School Inspection Handbook explains the distinction between these two variants of ‘inadequate’:

‘A school is judged to require significant improvement where it has serious weaknesses because one or more of the key areas is ‘inadequate’ (grade 4) and/or there are important weaknesses in the provision for pupils’ spiritual, moral, social and cultural development. However, leaders, managers and governors have been assessed as having the capacity to secure improvement.

…A school requires special measures if:

  • it is failing to give its pupils an acceptable standard of education and
  • the persons responsible for leading, managing or governing are not demonstrating the capacity to secure the necessary improvement in the school.’

Schools in each of these categories are subject to more frequent monitoring reports. Those with serious weaknesses are typically re-inspected within 18 months, while, for those in special measures, the timing of re-inspection depends on the school’s rate of improvement.

It may be a surprise to some that only seven of the 11 are currently deemed inadequate given the weight of evidence stacked against them.

There is some support for the contention that Ofsted inspection ratings, floor target assessments and pre-warning notices do not always link together as seamlessly as one might imagine, although apparent inconsistencies may sometimes arise from the chronological sequence of these different judgements.

But what do these 11 reports say, if anything, about the performance of high attainers? Is there substantive evidence of a stronger focus on ‘the most able’ in those reports that have been issued since September 2013?

.

The Content of Ofsted Inspection Reports

Table 6, below, sets out what each report contains on this topic, presenting the schools in the order of their most recent inspection.

One might therefore expect the judgements to be more specific and explicit in the three reports at the foot of the table, which should reflect the new guidance introduced last September. I discussed that guidance at length in this October 2013 post.

.

Table 6: Specific references to high attainers/more able/most able in inspection reports

Name | Date | Outcome | Comments

Merchants Academy | 29/6/11 | Good (2) | In Year 9… an impressive proportion of higher-attaining students… have been entered early for the GCSE examinations in mathematics and science. Given their exceptionally low starting points on entry into the academy, this indicates that these students are making outstanding progress in their learning and their achievement is exceptional. More-able students are fast-tracked to early GCSE entry and prepared well to follow the International Baccalaureate route.

Fearns Community Sports College | 28/6/12 | Requires improvement (3) | Setting has been introduced across all year groups to ensure that students are appropriately challenged and supported, especially more-able students. This is now beginning to increase the number of students achieving higher levels earlier in Key Stage 3.

The Coseley School | 13/9/12 | Special measures (4) | Teaching is inadequate because it does not always extend students, particularly the more able. What does the school need to do to improve further? Raise achievement, particularly for the most able, by ensuring that:

  • work consistently challenges and engages all students so that they make good progress in lessons
  • challenging targets are set as a minimum expectation
  • students do not end studies in English language and mathematics early without having the chance to achieve the best possible grade
  • GCSE results in all subjects are at least in line with national expectations.

Target setting is not challenging enough for all ability groups, particularly for the more-able students who do not make sufficient progress by the end of Key Stage 4.

Gloucester Academy | 4/10/12 | Serious weaknesses (4) | No specific reference

Fleetwood High School | 20/3/13 | Serious weaknesses (4) | No specific reference

Aireville School | 15/5/13 | Special measures (4) | Teachers tend to give the same task to all students despite a wide range of ability within the class. Consequently, many students will complete their work and wait politely until the teacher has ensured the weaker students complete at least part of the task. This limits the achievement of the more-able students and undermines the confidence of the least-able. There is now a good range of subjects and qualifications that meet the diverse needs and aspirations of the students, particularly the more-able students.

Manchester Creative and Media Academy for Boys | 13/6/13 | Serious weaknesses (4) | The most-able boys are not consistently challenged to attain at the highest levels. In some lessons they work independently and make rapid progress, whereas on other occasions their work is undemanding. What does the academy need to do to improve further? Improve the quality of teaching in Key Stages 3 and 4 so that it is at least good, leading to rapid progress and raised attainment for all groups of boys, especially in English, mathematics and science, by… ensuring that tasks are engaging and challenge all students, including the most-able. The most-able boys receive insufficient challenge to enable them to excel. Too many lessons do not require them to solve problems or link their learning to real-life contexts. In some lessons teachers’ planning indicates that they intend different students to achieve different outcomes, but they provide them all with the same tasks and do not adjust the pace or nature of work for higher- or lower-attaining students. This results in a slow pace of learning and some boys becoming frustrated.

Bishop of Rochester Academy | 28/6/13 | Requires improvement (3) | No specific reference

John Spendluffe Foundation Technology College | 18/9/13 | Good (2) | Not enough lessons are outstanding in providing a strong pace, challenge and opportunities for independent learning, particularly for the most able. The 2013 results show a leap forward in attainment and progress, although the most able could still make better progress. Leadership and management are not outstanding because the achievement of pupils, though improving quickly, has not been maintained at a high level over a period of time, and a small number of more-able students are still not achieving their full potential.

The Mirus Academy | 7/11/13 | Special measures (4) | The academy’s early entry policy for GCSE has made no discernible difference to pupils’ achievement, including that of more able pupils.

Parklands High School | 5/12/13 | Special measures (4) | The achievement of students supported by the pupil premium generally lags behind that of their classmates. All groups, including the most able students and those who have special educational needs, achieve poorly. Students who join the school having achieved Level 5 in national Key Stage 2 tests in primary school fare less well than middle attainers, in part due to early GCSE entry. They did a little better in 2013 than in 2012.

.

There is inconsistency within both parts of the sample – the first eight reports that pre-date the new guidance and the three produced subsequently.

Three of the eleven reports make no specific reference to high attainers/most able learners, all of them undertaken before the new guidance came into effect.

In three more cases the references are confined to early entry or setting, one of those published since September 2013.

Only four of the eleven make what I judge to be substantive comments:

  • The Coseley School (special measures) – where the needs of the most able are explicitly marked out as an area requiring improvement;
  • The Manchester Creative and Media Academy for Boys (serious weaknesses) – where attention is paid to the most able throughout the report;
  • John Spendluffe Foundation Technology College (good) – which includes some commentary on the performance of the most able; and
  • Parklands High School (special measures) – which also provides little more than the essential minimum coverage.

The first two predate the new emphasis on the most able, but they are comfortably the most thorough. It is worrying that not all reports published since September are taking the needs of the most able as seriously as they might.

One might expect that, unconsciously or otherwise, inspectors are less ready to single out the performance of the most able when a school is inadequate across the board, but the small sample above does not support this hypothesis. Some of the most substantive comments relate to inadequate schools.

It therefore seems more likely that the variance is attributable to the differing capacity of inspection teams to respond to the new emphases in their inspection guidance. This would support the case made in my previous post for inspectors to receive additional guidance on how they should interpret the new requirement.

.

Conclusion

This post established an illustrative floor target to identify a small sample of 20 schools that have demonstrated particularly poor performance with high attainers in the Performance Tables for 2011, 2012 or 2013.

It:

  • Compared the performance of these schools in the year in which they fell below the floor, noting significant variance by year and between institutions, but also highlighting the fact that the proportion of high attainers attending these schools is significantly lower than the national average for state-funded schools.
  • Examined the subsequent performance of schools below the illustrative floor in 2011 and 2012, finding that almost all made significant improvements in the year immediately following, but that some of the 2011 cohort experienced difficulty in sustaining this improvement across all elements into a second year. It seems that progress in English, maths or both is more vulnerable to slippage than the 5+ A*-C GCSE threshold measure.
  • Confirmed – most reassuringly – that every school in the sample fell below the official, generic floor targets in the year in which they also undershot my illustrative high attainer floor targets.
  • Reviewed the combination of assessments and interventions applied to the sample of schools since 2011, specifically the interaction between academisation, floor targets, Ofsted inspection and (pre)warning notices for academies. These do not always point in the same direction, although chronology can be an extenuating factor. New guidance about how these and other provisions apply and interact would radically improve transparency in a complex and politically charged field.
  • Analysed the coverage of high attainers/most able students in recent inspection reports on 11 schools from amongst the sample of 20, including three published after September 2013 when new emphasis on the most able came into effect. This exposed grave inconsistency in the scope and quality of the coverage, both before and after September 2013, which did not correlate with the grade of the inspection. Inspectors would benefit from succinct additional guidance.

In the process of determining which schools fell below my high attainers floor target, I also identified the schools that undershot one or other of the elements but not both. This wider group included 46 schools in 2011, 52 schools in 2012 and 34 schools in 2013.

Several of these schools reappear in two or more of the three years, either in their existing form or following conversion to academy status.

Together they constitute a ‘watch list’ of more than 100 institutions, the substantial majority of which remain vulnerable to continued underperformance with their high attainers for the duration of the current accountability regime.

The chances are that many will also continue to struggle following the introduction of the new ‘progress 8’ floor measure from 2015.

Perhaps unsurprisingly, the significant majority are now sponsored academies.

I plan to monitor their progress.

.

*Apologies for this rather tabloid title!

.

GP

February 2014

The 2013 Transition Matrices and High Attainers’ Performance

.

Data Overload courtesy of opensourceway

Since last year’s post on the Secondary Transition Matrices attracted considerable interest, I thought I’d provide a short commentary on what the 2013 Matrices – primary and secondary – tell us about the national performance of high attainers.

This note is a postscript to my recent offerings on:

and completes the set of benchmarking resources that I planned to make available.

I am using the national matrices rather than the interactive matrices (which, at the time of writing, are not yet available for 2013 results). I have included a few figures from the 2012 national matrices for comparative purposes.

According to Raise Online, the national matrices are derived from the data for ‘maintained mainstream and maintained and non-maintained special schools’.

They utilise KS2 fine points scores as set out below.

Sub-level | Points | Fine points range
6 | 39 | 36-41.99
5A | – | 34-35.99
5B | 33 | 32-33.99
5C | – | 30-31.99
4A | – | 28-29.99
4B | 27 | 26-27.99
4C | – | 24-25.99
3A | – | 22-23.99
3B | 21 | 20-21.99
3C | – | 18-19.99
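The lookup the matrices imply can be sketched directly from the table above, treating each upper bound as exclusive (matching the .99 notation):

    # (sub-level, lower bound, exclusive upper bound) from the table above.
    FINE_POINT_BANDS = [
        ("6", 36.0, 42.0), ("5A", 34.0, 36.0), ("5B", 32.0, 34.0),
        ("5C", 30.0, 32.0), ("4A", 28.0, 30.0), ("4B", 26.0, 28.0),
        ("4C", 24.0, 26.0), ("3A", 22.0, 24.0), ("3B", 20.0, 22.0),
        ("3C", 18.0, 20.0),
    ]

    def sub_level(fine_points):
        """Map a KS2 fine points score onto its sub-level."""
        for label, low, high in FINE_POINT_BANDS:
            if low <= fine_points < high:
                return label
        return None  # outside the tabulated range

    print(sub_level(31.5))  # '5C'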

.

2013 Primary Transition Matrices

The Primary Matrices track back the KS1 performance of learners completing KS2 tests in 2013.

.

Reading

.

2013 primary reading TM.

This table shows that:

  • 12% of KS1 learners with L4 reading made the minimum expected 2 levels of progress by achieving L6 at the end of KS2. It is not possible for such learners to make more than 2 levels of progress. Almost all the remaining 88% of Level 4 learners made a single level of progress to Level 5.
  • By comparison, just 1% of learners achieving Level 3 in KS1 made 3 levels of progress to Level 6 (the same percentage as in 2012).
  • 87% of KS1 learners achieving L3 in reading secured the expected 2 or more levels of progress, 85% of them making 2 levels of progress to L5. However, some 13% made only 1 level of progress to L4. (In 2012, 89% of those with L3 at KS1 secured L5 and 10% reached L4.)
  • The proportion of learners with L3 in reading at KS1 who made the expected 2 levels of progress was lower than the proportions of learners with L2 overall, L2A, or L2B doing so. The proportion exceeding 2 levels of progress was far higher for every other level of KS1 achievement. (This was also true in 2012.)

.

Writing

.

2013 primary writing TM.

This table shows that:

  • 61% of learners achieving L4 in writing at KS1 made the requisite 2 levels of progress to L6 at KS2. Such learners are unable to make more than 2 levels of progress. The remaining 39% of L4 learners made a single level of progress to L5.
  • This compares with 9% of learners with L3 at KS1 who made 3 levels of progress to L6 (up from 6% in 2012). A further 2% of learners with L2A made 4 levels of progress to L6.
  • 89% of learners with L3 in KS1 writing made the expected 2 or more levels of progress, 80% of them making 2 levels of progress to L5. But 11% made only a single level of progress to L4. (In 2012, 79% of those with L3 at KS1 reached L5 and 15% made only L4.)
  • The proportion of learners with L3 at KS1 in writing achieving the expected 2 levels of progress was lower than the proportions of learners with L2 overall, L2A or L2B, or even L1 doing so. The proportion exceeding 2 levels of progress was far higher for every other level of KS1 achievement with the exception of L2C. (A similar pattern was evident in 2012.)

.

Maths

.

2013 primary maths TM.

This table shows that:

  • 89% of those achieving L4 in maths at KS1 made the requisite 2 levels of progress to L6 in KS2. These learners are unable to make more than 2 levels of progress. But the remaining 11% of those with L4 at KS1 made only a single level of progress to KS2 L5.
  • This compares with 26% of learners at L3 in KS1 maths who made 3 levels of progress to KS2 L6 (up significantly from 14% in 2012). In addition, 4% of those at KS1 L2A and 1% of those at 2B also managed 4 levels of progress to KS2 L6.
  • 90% of learners with L3 in KS1 maths made the expected 2 or more levels of progress, 64% making 2 levels of progress to L5. But a further 10% made only a single level of progress to KS2 L4. (In 2012, 74% of those with L3 at KS1 made it to KS2 L5 and 11% secured L4.)
  • The proportion of learners with L3 at KS1 in maths who achieved the expected 2 levels of progress was lower than the proportions of those with KS1 L2A or L2B doing so. The proportion of learners exceeding 2 levels of progress was significantly higher for those with KS1 L2 overall, those with L2A, and even those with L1, but it was lower for those with L2B and especially L2C. (In 2012 the pattern was similar, but the gap between the proportions with L2B and L3 exceeding 2 levels of progress has narrowed significantly.)

.

Key Challenges

The key challenges in respect of high attainers in the primary sector are to:

  • Enable a higher proportion of learners with L4 at KS1 to make the expected 2 levels of progress to KS2 L6. There is a particular problem in reading where 88% of these learners are making a single level of progress.
  • Enable a higher proportion of learners with L3 at KS1 to make 3 levels of progress to KS2 L6. Reading is again the least advanced, but there is huge scope for improvement across the board. Efforts should be made to close the gaps between L2A and L3 making three levels of progress, which currently stand at 55 percentage points (reading), 49 percentage points (writing) and 30 percentage points (maths). While the KS2 L6 tests remain in existence, increasing take-up should secure further improvement.
  • Increase the proportions of learners with L3 at KS1 making 2 levels of progress so they are comparable with what is achieved by those with L2A and L2B at KS1. There are currently gaps of 11 percentage points (reading), 10 percentage points (writing) and 9 percentage points (maths) between those with L3 and those with L2A. The gaps between those with L3 and those with L2B are 5 percentage points (reading), 8 percentage points (writing) and 1 percentage point (maths).
  • Ensure that far fewer learners with L3 at KS1 manage only a single level of progress across KS2. The current levels – 13% in reading, 11% in writing and 10% in maths – are unacceptable. (The sketch after this list shows how these splits fall out of a matrix row.)
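The splits quoted throughout this section fall out of each matrix row with the same simple arithmetic. A minimal sketch, using an invented row of the same shape as the published ones (not the actual 2013 figures):

    # An illustrative matrix row for KS1 L3: the % of those pupils reaching
    # each KS2 level. The values are invented for the example.
    row_ks1_l3 = {"L4": 13, "L5": 85, "L6": 2}

    def progress_split(row, ks1_level, expected=2):
        """Share of a row making fewer than / exactly / more than the
        expected levels of progress (KS2 level minus KS1 level)."""
        below = exact = above = 0
        for ks2_label, pct in row.items():
            lop = int(ks2_label[1]) - ks1_level
            if lop < expected:
                below += pct
            elif lop == expected:
                exact += pct
            else:
                above += pct
        return below, exact, above

    print(progress_split(row_ks1_l3, ks1_level=3))  # (13, 85, 2)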

.

Secondary

The Secondary Matrices track back the KS2 performance of learners completing GCSEs in 2013.

.

English

.

2013 secondary English sublevels TM.

The table shows:

  • 97% of KS2 learners achieving 5A in English secured at least 3 levels of progress from KS2 to KS4 in 2013. This compares with 92% of learners achieving 5B and 74% of learners achieving 5C. (The comparable figures in 2012 were 98%, 92% and 70% respectively.)
  • 89% of KS2 learners achieving 5A in English achieved 4 or more levels of progress from KS2 to KS4 in 2013, so achieving an A* or A grade at GCSE, compared with 66% of those achieving 5B and 33% of those achieving 5C. (The comparable figures in 2012 were 87%, 64% and 29% respectively.)
  • The percentages of learners with 4A in English at KS2 who completed 3 and 4 or more levels of progress – 87% and 46% respectively – were significantly higher than the comparable percentages for learners achieving 5C.
  • 53% of KS2 learners achieving 5A in English made 5 levels of progress by achieving A* at GCSE, compared with 23% of those achieving 5B and 6% of those achieving 5C. (These are significantly higher than the comparable figures for 2012, which were 47%, 20% and 4% respectively).
  • 1% of KS2 learners achieving 5A at KS2 made only two levels of progress to GCSE grade C, compared with 6% of those with 5B and 22% of those with 5C. (These percentages have fallen significantly compared with 2012, when they were 3%, 13% and 30% respectively.)

.

Maths

.

2013 secondary maths sublevels TM.

This table shows:

  • 97% of those achieving 5A in maths secured at least 3 levels of progress from KS2 to KS4, whereas 88% of learners achieving 5B did so and 70% of learners at 5C. (The comparable 2012 figures were 96%, 86% and 67% respectively.)
  • 85% of KS2 learners achieving 5A in maths made 4 or more levels of progress in 2013 to GCSE A* or A grades, compared with 59% of those at 5B and 31% of those at 5C. (The comparable 2012 figures were 84%, 57% and 30%.)
  • The percentages of learners achieving 4A in maths at KS2 who completed 3 and 4 or more levels of progress – 91% and 43% respectively – were significantly higher than the percentages of those with 5C who did so.
  • 53% of KS2 learners with 5A in maths made 5 levels of progress to achieve an A* grade in maths, compared with 22% of those with 5B and 6% of those with 5C. (The comparable figures for 2012 were 50%, 20% and 6% respectively).
  • 3% of learners with 5A at KS2 made only two levels of progress to GCSE grade C, compared with 11% of those with 5B and 27% of those with 5C. (These percentages were 3%, 13% and 30% in 2012.)

.

Key challenges

The key challenges in respect of high attainers in the secondary sector are to:

  • Ensure that, so far as possible, all learners with L5 at KS2 make at least 3 levels of progress to at least GCSE grade B. Currently more than 1 in 5 students with Level 5C fail to achieve this in English and more than 1 in 4 fail to do so in maths. Moreover, more than 1 in 10 of those with 5B at KS2 fall short of 3 levels of progress in maths. This is disappointing.
  • Ensure that a higher proportion of learners with L5 at KS2 make 4 and 5 levels of progress. The default expectation for those with L5A at KS2 should be an A* grade at GCSE (5 levels of progress), while the default for those with L5B at KS2 should be at least grade A at GCSE (4 levels of progress). Currently 47% of those with L5A are falling short of A* in both English and maths, while 34% of those with L5B are falling short of A*/A in English and 41% are doing so in maths. (The grade ladder behind these expectations is sketched after this list.)
  • Narrow the gaps between the performance of those with L5C at KS2 and those with L4A. Currently there are 13 percentage point gaps between the proportions making the expected 3 levels of progress and between the proportions exceeding 3 levels of progress in English, while in maths there are gaps of 21 percentage points between those making 3 levels of progress and of 12 percentage points between those exceeding 3 levels of progress.
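The expectations above rest on treating GCSE grades as rungs on the same ladder as national curriculum levels. The rung numbers below follow the common convention (grade C sitting three levels above KS2 L4); they are assumed here rather than quoted from official guidance.

    # GCSE grades as notional levels: C = 7, rising one level per grade.
    GRADE_LADDER = {"C": 7, "B": 8, "A": 9, "A*": 10}

    def levels_of_progress(ks2_level, gcse_grade):
        """Levels of progress from KS2 to KS4: grade rung minus KS2 level."""
        return GRADE_LADDER[gcse_grade] - ks2_level

    print(levels_of_progress(5, "B"))   # 3 - the expected minimum from L5
    print(levels_of_progress(5, "A*"))  # 5 - the default aspiration for a 5A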

.

GP

January 2014

High Attainment in the 2013 Secondary and 16-18 Performance Tables

.

.

This post reviews high attainment and high attaining student data in the 2013 Secondary and 16-18 Performance Tables, relating to GCSE and A level respectively. It compares key outcomes with those reported in last year’s tables.

Data Overload courtesy of opensourceway

It also draws extensively on two accompanying statistical publications:

and derives three year trends, from these and the comparable 2011 and 2012 publications, focused primarily on variations by sector and school admission arrangements.

This post complements a briefer analysis of High Attainment in the 2013 Primary School Performance Tables published on 12 December 2013 and updates last year’s High Attaining Students in the 2012 Secondary School Performance Tables (January 2013).

This year’s secondary/post-16 analysis is presented in a somewhat different format, organised into sections relating to key measures, beginning with GCSE and moving on to A level.

A few preliminaries:

There are sometimes discrepancies between the figures given in the Tables and those in the supporting statistical publications that I cannot explain.

The commentary highlights results – some extraordinarily good, others correspondingly poor – from specific institutions identified in the Tables. This adds some richness and colour to what might otherwise have been a rather dry post.

But there may of course be extenuating circumstances to justify particularly poor results which are not allowed for in the Tables. Equally, strong results may not always be solely attributable to the quality of education provided in the institution that secures them.

As always, I apologise in advance for any transcription errors and urge you to report them through the comments facility provided.

Those who prefer not to read the full post will find a headline summary immediately below. The main text provides additional detail but is intended primarily for reference purposes.

.

Headlines

.

Media Coverage

There has been relatively little media coverage of what the Performance Tables reveal about the achievement of high attainers, though one article appeared in the Daily Telegraph.

It said that the high attainer population comprised some 175,800 students, of whom:

  • about 9,300 ‘failed to gain five good GCSEs at grades A* to C, including English and maths’;
  • around 48% (over 84,200) did not pass the EBacc and 35% [approaching 62,000] did not enter all the necessary subjects;
  • almost 14% [24,000] ‘effectively went backwards in English by gaining lower scores at GCSE level than comparable tests taken at 11, while 12% [21,000] did so in maths’.

The figures in square brackets are my own, derived from the percentages provided in the article.
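Those derivations are simple products of the article’s rounded percentages and the 175,800 population, which also explains the small discrepancies against the figures quoted above:

    population = 175_800  # high attainer cohort reported by the article

    for label, share in [("did not pass the EBacc", 0.48),
                         ("did not enter all EBacc subjects", 0.35),
                         ("below expected progress in English", 0.14),
                         ("below expected progress in maths", 0.12)]:
        print(f"{label}: ~{round(share * population):,}")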

The final point suggests that sizeable minorities of high attainers achieved the equivalent of Level 4 in English and maths GCSEs, but this is incorrect.

These figures relate to the proportion of high attainers who did not make at least three levels of progress from KS2 to KS4 in English and maths (see below) – quite a different matter.

.

Headlines from this analysis

The following extended bullet points summarise the key findings from my own analysis:

  • The high attainer population constitutes almost exactly one third of the population of mainstream state-funded schools. A gender gap that had almost closed in 2012 has widened again in favour of girls. There are significant variations between school types – for example just over 20% of students attending sponsored academies are high attainers compared with just under 40% in academy converters. The population in free schools, UTCs and studio schools has fallen by 11.5% since 2012, presumably as a consequence of the sector’s rapid expansion. Only 90% of the selective school cohort constitutes high attainers, which suggests 10% of their intake are middle attainers who perform well on ability-based 11+ assessments. The selective school high attainer population has fallen by 1.4% since 2011. Ten selective schools record that their cohort consists entirely of high attainers, but some selective schools register a cohort in which two-thirds or fewer students are high attainers. This is comfortably lower than some non-selective schools, raising awkward questions about the nature of selective education. Although there are no schools with no high attainers, two schools recorded 3% and 99 have fewer than 10% (down from 110 in 2012). Academies in coastal towns are well-represented. The schools containing these small high attaining groups demonstrate huge variations in high attainer performance. This warrants further investigation.
  • 60.6% of all students in state-funded schools achieved 5 or more GCSEs at grades A*-C (or equivalent) including GCSEs in English and maths, a 1.8% improvement compared with 2012. But the success rate for high attainers improved by only 0.7% to 94.7%. This is better than the 0.2% fall in the success rate amongst low attainers but falls well short of the 2.3% improvement for middle attainers. One in every twenty high attainers continues to miss this essential benchmark. But 50 more schools recorded 100% returns than in 2012 – and 19 fewer schools were at 75% or below. Apart from selective schools falling foul of the ineligibility of some IGCSEs, Ark King’s Academy in Birmingham was the lowest performer at 33%. Trends vary considerably according to school type. Free schools, UTCs and studio schools have improved by 4.2% since 2012, which must be partly a consequence of the growth of that sector. Meanwhile high attainers in selective schools have fallen back by 2.0% (and selective schools overall by 2.3%) since 2011. It is unlikely that idiosyncratic IGCSE choices are solely responsible. The profiles of sponsored and converter academies are still markedly different, though the gap between their high attainers’ performance has halved since 2011, from 5.3 percentage points to 2.7 percentage points.
  • There were big increases in the percentage of all students entered for all EBacc subjects in state-funded schools – up 12.4% to 35.5% – and the percentage successful – up 6.6% to 22.8%. The comparable entry and success rates for high attainers were 65.0% and 52.1% respectively. The entry rate for 2012 was 46.3%, so that has improved by almost 19 percentage points, a much faster rate of improvement than the headline figure. The success rate has improved from 38.5% last year, so by 13.6 percentage points, more than double the improvement in the headline figure. The EBacc is clearly catching on for high attainers following a relatively slow start. That said, one could make a case that the high attainer success rate in particular remains rather disappointing, since something like one in five high attainers entered for the EBacc fail to convert entry into achievement. Forty-seven schools entered all of their high attainers but only four recorded 100% success, two selective (Chelmsford County High for Girls and Queen Elizabeth’s Barnet) and two comprehensive (St Ursula’s Convent School in Greenwich and The Urswick School in Hackney). Only 55 schools entered no high attainers for the EBacc, compared with 186 in 2012. Seventy-nine schools recorded 0% of high attainers achieving the EBacc, also down significantly, from 235 in 2012. Seven of these were grammar schools, presumably all falling foul of IGCSE restrictions.
  • 70.4% of all students in state-funded schools made at least the expected three levels of progress in English and 70.7% did so in maths. These constitute improvements of 2.4% and 2.0% respectively. High attainers registered 86.2% success in English and 87.8% in maths. Their rates of improvement were broadly comparable with the headline figures, though slightly stronger in English. It remains disturbing that one in seven high attainers fail to make the expected progress in English and 1 in 8 fail to do so in maths. More schools achieved 100% success amongst their high attainers on each measure than in 2012 – 108 in English and 120 in maths. Forty-four schools were at or below 50% on this measure in English, some IGCSE-favouring grammar schools amongst them. Apart from those, the worst performer was Gloucester Academy at 28%. In maths 31 schools were at or below this 50% benchmark and the worst performer was Stafford Sports College at 29%. Six schools managed 50% or below in both English and maths, several of them academies. Amongst those at 50% or below in English, 11 had better rates of performance for both their middle and their low attainers than for their high attainers. Amongst those at 50% or below in maths, only one school achieved this feat – St Peter’s Catholic College of Maths and Computing (!) in Redcar and Cleveland. It is a cause for concern that high attainers in English attending selective schools continue to fall back on this measure and that one in five high attainers in sponsored academies, free schools, UTCs and studios is failing to make three levels of progress in English, while the same is true of maths in sponsored academies.
  • 7.5% of students in state-funded schools and colleges achieved grades of AAB or higher at A level with all three in facilitating subjects, an improvement of 0.1% compared with 2012. But the comparable percentage for students who achieved these grades with at least two in facilitating subjects shot up to 12.1%, an improvement of 4.3% on 2012. There are big variations between sectors, with the percentage achieving the former measure ranging from 3.5% (FE colleges) to 10.4% (converter academies). The figure for selective schools is 21.1%. Turning to the latter measure, percentages vary from 5.4% in mainstream sponsored academies to 16.4% in mainstream converter academies, while selective schools stand at 32.4%. Across all sectors, more students achieve grades AAA or higher in any A level subjects than achieve AAB or higher in three facilitating subjects. The proportion of students achieving AAA or higher in any A levels is falling in most sectors and institutional types, except in free schools, UTCs and studios and in FE colleges. The proportion achieving AAB or higher in any subjects is falling except in sponsored academies and FE colleges. Conversely there are improvements for AAB or higher with all three in facilitating subjects in LA-maintained mainstream schools, sponsored academies, sixth form colleges and FE colleges (and also across all comprehensive schools). Across all state-funded mainstream schools, the percentage of A level A* grades has fallen back by 0.5% since 2011 while the percentage of A*/A grades has declined by 0.1%.
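On the EBacc bullet above in particular, the ‘one in five’ conversion shortfall is just the ratio of the high attainer success rate to the entry rate, using the figures already quoted:

    entry_rate, success_rate = 65.0, 52.1  # high attainers, 2013 (% of cohort)

    conversion = success_rate / entry_rate
    print(f"entries converted to passes: {conversion:.1%}")  # ~80.2%
    print(f"failing to convert: {1 - conversion:.1%}")       # ~19.8%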

The full commentary below names 22 schools which perform particularly badly on one or more GCSE high attainer measures (leaving aside selective schools that have adopted ineligible GCSEs).

Of those 22, only nine are below the floor targets and, of those nine, only four are not already academies. Hence the floor targets regime leaves the vast majority of these schools untouched.

The only hope is that these schools will be caught by Ofsted’s renewed emphasis on the attainment and progress of the ‘most able’ learners (though that provision could do with further clarification as this previous post explained).

 

Definitions

The analysis of GCSE performance is focused primarily on high attainers, while the A level analysis is confined to high attainment.

This is a consequence of the way the two sets of performance tables are constructed (such distinctions were brought out more fully in this October 2013 post).

There is no coverage of A*/A performance at GCSE within the Secondary Tables so we must necessarily rely on performance against standard measures, such as 5+ GCSEs at A*-C including English and maths and the English Baccalaureate (EBacc).

The Government response to the consultation on secondary accountability reform suggests that this will remain the case, with material about achievement of top GCSE grades confined to the supporting Data Portal. It remains to be seen whether this arrangement will give high attainment the prominence it needs and deserves.

The current definition of high attainers is based on prior performance at the end of KS2. Most learners will have taken these KS2 tests five years previously, in 2008:

  • High attainers are those who achieved above Level 4 in KS2 tests – ie their average point score (APS) in English, maths and science tests was 30 or higher.
  • Middle attainers are those who achieved at the expected Level 4 in KS2 tests – ie their APS in these tests was between 24 and 29.99 – and
  • Low attainers are those who achieved below Level 4 in KS2 tests – ie their APS in these tests was under 24.

Since high attainers are determined on the basis of APS across three subjects, the definition will include all-rounders who achieve good (if not outstanding) results across all three tests, as well as some with a relatively spiky achievement profile who compensate for middling performance in one area through very high attainment in another.

Conversely, learners who are exceptionally strong in one subject but relatively poor in the other two are unlikely to pass the APS 30 threshold.
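Expressed as a rule, the banding is a simple threshold test on KS2 APS. A minimal sketch, using the standard whole-level point values (L4 = 27, L5 = 33) for the worked examples:

    def attainer_band(ks2_aps):
        """Band a pupil by average point score across the KS2 English,
        maths and science tests, per the definitions above."""
        if ks2_aps >= 30:
            return "high"
        if ks2_aps >= 24:
            return "middle"
        return "low"

    # A spiky profile can still clear the bar: two L5s (33 points each)
    # offsetting one L4 (27 points).
    print(attainer_band((33 + 33 + 27) / 3))  # 'high'   (APS 31.0)
    print(attainer_band((33 + 27 + 27) / 3))  # 'middle' (APS 29.0)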

Both the Secondary Tables and the associated statistical publications remain bereft of data about the performance of high attainers from disadvantaged backgrounds and how that compares with the performance of their more advantaged high attaining peers.

This is unfortunate, since schools that are bucking the trend in this respect – achieving a negligible ‘excellence gap’ between their high attainers from advantaged and disadvantaged backgrounds – richly deserve to be celebrated and emulated.

At A level a variety of high attainment measures are reported in the statistical publications, but the Performance Tables focus on the achievement of AAB+ grades in the so-called ‘facilitating subjects’.

The Statement of Intent for last year’s Tables confirmed the intention to introduce:

‘Percentages of students achieving three A levels at grades AAB or higher in facilitating subjects, reflecting the subjects and grades most commonly required by Russell Group and other top universities.’

These subjects are listed as ‘biology, chemistry, physics, mathematics, geography, history, English literature, modern and classical languages.’

Such measures have been widely criticised for their narrowness, the Russell Group itself asserting that:

‘It would be wrong to use this simple indicator as a measure of the number of pupils in a school who are qualified to apply successfully to a Russell Group university.’

Nevertheless, they support one of the Government’s preferred Social Mobility Indicators which compares the percentage of students attending state and independent schools who achieve this measure. (In 2012 the gap was 15.1%, a full percentage point smaller than in 2011.)

There is nothing in the 16-18 Tables about high attainers, although the consultation document on 16-19 accountability reform includes a commitment to:

‘Consider how we can report the results of low, middle and high attainers similarly [to KS4] in the expanded 16-19 performance tables’.

At the time of writing, the response to this consultation has not been published.

.

GCSE Achievement

 .

The High Attainer Population

Before examining the performance data it is important to review the size of the high attaining population and how this varies between genders, sectors and types of school.

Tables 1A, B and C below show that the population has remained relatively stable since 2011. It accounts consistently for almost exactly one third of students in state-funded mainstream schools.

The gender gap amongst high attainers has changed slightly since 2011. The percentage of high attaining girls has fallen back, slightly but consistently, while the percentage of high attaining boys increased in 2012, only to fall back again in 2013.

A gender gap that had almost been eliminated in 2012 has now widened again to a full percentage point. The percentages of high attaining learners of both genders are the lowest they have been over the three year period.

There are significant variations according to sector and school type, since the high attainer population in converter academies is almost double that in sponsored academies, where it constitutes barely a fifth of the student body. This is a strikingly similar proportion to that found in modern schools.

The percentage of high attainers in comprehensive schools is only very slightly lower than the overall figure.

At the other end of the spectrum, the high attaining cohort constitutes around 90% of the selective school population, which raises interesting questions about the nature of the other 10% and possible discrepancies between KS2 results and ability-focused 11+ assessment.

It cannot be the case that the majority of the missing 10% attended independent preparatory schools and did not take KS2 tests, since those without test results are excluded from the calculations.

The underlying trend is downward in all types of school. There has been a huge 11.5% fall in the proportion of high attainers in free schools, UTCs and studio schools. This is presumably consequent upon the expansion of that sector and brings it much more into line with the figures for all maintained mainstream and other comprehensive schools.

Otherwise the most substantial reduction has been in converter academies. The percentage in selective schools has fallen by 1.4% since 2011, twice the rate of decline in comprehensive schools.

.

Table 1A: Percentage of high attainers by sector 2011, 2012 and 2013

Year | All maintained mainstream | LA maintained mainstream | Sponsored academies | Converter academies | Free schools, UTCs and studio schools
2013 | 32.8 | 30.6 | 20.5 | 39.3 | 31.4
2012 | 33.6 | 32.0 | 20.9 | 42.5 | 42.9
2011 | 33.5 | – | 20.6 | 47.5 | –

.

Table 1B: Percentage of high attainers by admissions practice 2011, 2012 and 2013

Year | Selective | Comprehensive | Modern
2013 | 88.9 | 30.9 | 20.5
2012 | 89.8 | 31.7 | 20.9
2011 | 90.3 | 31.6 | 20.4

.

Table 1C: Percentage of high attainers by gender, all state-funded mainstream schools 2011, 2012, 2013

Year | Boys | Girls
2013 | 32.3 | 33.3
2012 | 33.4 | 33.8
2011 | 32.6 | 34.4

 

The 2013 Performance Tables list 10 schools where 100% of pupils are deemed high attainers, all of which are selective. Thirteen selective schools were in this position in 2012.

But there is also a fairly wide spread amongst selective schools, with some recording as few as 70% high attainers, broadly comparable with some prominent non-selective schools.

For example, Dame Alice Owen’s School and The Cardinal Vaughan Memorial RC School – both comprehensive – have high attainer populations of 79% and 77% respectively, while Fort Pitt Grammar School in Chatham, Kent and Skegness Grammar School are at 65% and 66% respectively.

This raises intriguing questions about the nature of selective education and the dividing line between selective and non-selective schools.

At the other extreme there are no schools recording zero high attainers, but 99 record 10% or fewer, several of them prominent academies. This is an improvement on 2012 when 110 schools fell into this category.

The two schools with the fewest high attainers (3%) are Barnfield Business and Enterprise Studio (which opened in 2013) and St Aldhelm’s Academy in Poole.

Academies based in coastal towns are well represented amongst the 99.

It is interesting to speculate whether very small high attainer cohorts generally perform better than slightly larger cohorts that perhaps constitute a ‘critical mass’.

Certainly there are huge variations in performance on the key measures amongst those schools with few high attainers (where results have not been suppressed). This is particularly true of EBacc entry and success rates.

For example, amongst the 50 schools with the fewest high attainers:

  • The EBacc success rate varies from 73% at Aston Manor Academy to zero, returned by 12 of the 50.
  • The percentage of high attaining pupils making the expected progress in English varies from 50% to 100%, while the corresponding range in maths is from 47% to 100%.

Many of these figures are derived from very small cohorts (all number between 1 and 20 pupils), but the point stands nevertheless.

.

The Performance of High Attainers

As noted above, there are no true high attainment measures relating to the achievement of GCSE A*/A grades within the Secondary Tables, so this section is necessarily reliant on the universal measures they contain.

.

5+ GCSEs at Grades A*-C including English and maths

The 2013 Secondary Performance Tables reveal that:

  • 53.6% of students at state-funded schools achieved 5+ GCSEs at Grades A*-C including English and maths, up 1.7 percentage points from 51.9% in 2012.
  • But the Tables pay more attention to the percentage achieving 5+ GCSEs at Grades A*-C (or equivalent) including GCSEs in English and maths: 60.6% of students attending state-funded schools achieved that measure in 2013, up 1.8 percentage points from 58.8% in 2012.
  • 94.7% of high attainers in state-funded schools secured this outcome, up from 94.0% in 2012. The comparable figures for middle attainers and low attainers (with 2012 figures in brackets) are 57.4% (55.1%) and 6.9% (7.1%) respectively. Hence the overall increase of 1.8 percentage points masks a slight fall amongst low attainers and a much smaller increase amongst high attainers. Although there has been improvement, one in every 20 high attainers continues to fall short.
  • But it is notable that around 530 schools achieved 100% amongst their high attainers on this measure, compared with some 480 in 2012. Moreover, only 14 schools are at or below 67%, compared with 19 in 2012, and 47 are at or below 75%, compared with 66 in 2012. This is positive news and suggests that the inclusion of this high attainer breakdown within the Tables is beginning to bear fruit.

Tables 2A and 2B below show there has been an increase of 3.5 percentage points on this measure since 2011 across all pupils in state-funded mainstream schools. Meanwhile the proportion of high attainers securing this outcome has fallen by 0.4 points, having dipped more sharply in 2012 before partially recovering.

It may well be harder for schools to eradicate the last vestiges of underachievement at the top end than to strengthen performance amongst middle attainers, where there is significantly more scope for improvement. But some may also be concentrating disproportionately on those middle attainers.

This overall picture masks very different trends in different types of school.

In sponsored academies an overall improvement of 4.4 percentage points since 2011 coincides with a slight 0.1 point fall amongst high attainers, who have nevertheless recovered following a substantial dip in 2012.

But in converter academies the overall success rate has fallen by almost 9 percentage points since 2011, while the rate for high attainers has fallen by only 2.7 points.

And in free schools, UTCs and studios a slight overall fall since 2012 (there are no figures for 2011) is accompanied by an improvement for high attainers of over 4 percentage points.

Comprehensive schools have improved by 2.6 percentage points overall since 2011, yet their high attainers have fallen back by 0.3 points. In selective schools the overall rate has fallen back by 2.3 points while the high attainer rate has dropped by a similar 2.0 points. This is concerning.

It is not straightforward to work out what is happening here, though the changing size of different sectors must be having a significant impact – a stylised illustration follows below. The 2012 GCSE results in English will certainly have influenced the dip in last year’s figures.

High attainers in free schools, UTCs and studios still have some ground to make up on other sectors and it will be interesting to see whether their improving trend continues in 2014.
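To illustrate the composition point – this is a stylised sketch with invented figures, not real data – sector averages can move substantially when schools are reclassified between sectors, even if no individual school’s results change at all:

```python
# Invented figures: how a sector average can fall with no school declining.
# Suppose two newer, weaker schools convert to academy status between years.

def sector_average(schools):
    """Cohort-weighted mean success rate for a list of (pupils, rate) pairs."""
    total_pupils = sum(n for n, _ in schools)
    return sum(n * rate for n, rate in schools) / total_pupils

# Year 1: two long-standing converter academies with strong results
converters_y1 = [(100, 80.0), (100, 78.0)]

# Year 2: the same two schools plus two newer, weaker converters
converters_y2 = converters_y1 + [(100, 60.0), (100, 62.0)]

print(sector_average(converters_y1))  # 79.0
print(sector_average(converters_y2))  # 70.0 - the average falls, yet no school's results changed
```

This is consistent with the pattern in Table 2A: the converter academy average has fallen sharply as the sector has expanded, without that necessarily telling us anything about individual schools.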

.

Table 2A: Percentage achieving 5+ A*-C grades (or equivalent) including English and maths by sector

All state-funded mainstream LA maintained mainstream Sponsored academies Converter academies Free schools, UTCs and studio schools
All HA All HA All HA All HA All HA
2013 61.7 94.7 59.2 94.1 51.2 93.0 68.2 95.7 54.6 91.7
2012 59.8 94.0 58.2 93.5 49.3 91.5 68.4 95.5 55.7 87.5
2011 58.2 95.1 N/A N/A 46.8 93.1 77.1 98.4 N/A N/A

.

Table 2B: Percentage achieving 5+ A*-C grades (or equivalent) including English and maths by admissions practice

Selective Comprehensive Modern
All HA All HA All HA
2013 96.4 97.3 60.4 94.5 55.3 92.5
2012 97.4 98.2 58.5 93.5 53.1 92.2
2011 98.7 99.3 57.8 94.8 50.8 91.8

.

A*-C grades in GCSE English and maths

According to the 2013 Secondary Tables:

  • 61.3% of all students in state-funded schools achieved GCSE grades A*-C in English and maths, compared with 59.5% in 2012, an improvement of 1.8 percentage points.
  • However, 95.1% of high attainers in state-funded schools achieved this measure compared with 94.3% in 2012, an increase of only 0.8 points. The comparable figures for middle and low attainers (with 2012 figures in brackets) were 58.5% (55.8%) and 7.1% (7.3%) respectively. The pattern is therefore similar to the 5+ A*-C measure, with limited improvement at the top, significant improvement in the middle and a slight decline at the bottom.
  • Some 610 state-funded schools had 100% of their high attainers achieve this outcome, a significant improvement on the 530 recorded in 2012. There were 12 schools where the percentage was 67% or lower, compared with 18 in 2012, and 38 where the percentage was 75% or lower, compared with almost 60 in 2012.
  • These latter figures include Pate’s, King Edward VI Camp Hill and Bishop Wordsworth’s, all presumably tripped up again by their choice of IGCSE. Other poor performers included Gloucester Academy (again) at 44% and St Antony’s Catholic College in Trafford (59%). That said, the worst performers in 2013 were stronger than their 2012 counterparts.

The trend data derived from the associated statistical publications shows that the overall figure for high attainers in state-funded schools has increased by 0.9 percentage points compared with 2012, recovering most of the 1.2 point dip that year compared with 2011.

Sponsored academies have improved significantly, with their high attainers back to 93.5% (their 2011 percentage) following a 1.7 point dip in 2012. On the other hand, high attainers in converter academies have made little improvement compared with 2012, while free schools, studio schools and UTCs have improved by 3.9 points.

Once again these patterns are probably influenced strongly by changes in the size of some sectors – as illustrated in the sketch above – and by the impact of the 2012 GCSE English results.

Interestingly though, selective school high attainers – having managed 99.5% on this measure in 2011 – are continuing to fall back, recording 98.3% in 2012 and now 97.5%. This may have something to do with the increasing attraction of IGCSE.

 .

Entry to and achievement of the EBacc

The 2013 Secondary Tables show that:

  • 35.5% of all students at state-funded schools were entered for all English Baccalaureate subjects, compared with 23.1% in 2012, and 22.8% achieved the EBacc, up 6.6 percentage points from 16.2% in 2012.
  • Both entry (65.0%) and success (52.1%) rates continue to be much higher for high attainers than for middle attainers (27.8% entered and 11.8% successful) and low attainers (3.4% entered and 0.5% successful).
  • In 2012, the entry rate for high attainers was 46.3%, so there has been a substantial improvement of almost 19 percentage points. The 2012 success rate was 38.5%, so that has improved by 13.6 points.
  • One could reasonably argue that a 52.1% success rate is lower than might be expected and relatively disappointing, given that almost two-thirds of high attainers now enter all EBacc subjects (see the quick calculation after this list). But, compared with the two previous measures, schools are much further away from the 100% ceiling with the EBacc, so further significant improvement amongst high attainers is likely over the next few years. However, the forthcoming shift to the ‘Progress 8’ measure is likely to have a significant impact.
  • 55 schools entered no high attainers for the EBacc, down considerably from 186 in 2012. Zero high attainers achieved the EBacc at 79 schools, compared with 235 in 2012. Several grammar schools were among them.
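The quick calculation promised above – converting the high attainer entry and success rates into a pass rate amongst entrants:

```python
# 2013 high attainer EBacc figures from the bullet list above
entry, success = 65.0, 52.1
print(f"{success / entry:.1%} of entrants achieved the EBacc")
# 80.2% of entrants achieved the EBacc - so roughly one in five entrants
# fell short of the necessary grades
```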

Tables 3A and B below indicate that, despite rapid improvement since 2012, only a third of high attainers in sponsored academies achieve the EBacc, compared with almost 6 in 10 attending converter academies.

The success rate for high attainers at free schools, UTCs and studios is only slightly higher than that for sponsored academies and both are improving at a similar rate.

Almost exactly half of high attainers at comprehensive schools are successful, as are almost exactly three quarters of high attainers at selective schools, but the rate of improvement is much faster in comprehensive schools – and indeed in modern schools too.

.

Table 3A: Percentages achieving the EBacc by sector

All state-funded mainstream LA maintained mainstream Sponsored academies Converter academies Free schools, UTCs and studio schools
All HA All HA All HA All HA All HA
2013 23.2 52.1 20.9 49.1 11.0 34.7 30.1 58.1 16.1 35.6
2012 16.4 38.5 14.5 35.0 6.3 21.1 25.7 49.1 12.2 23.6
2011 15.6 37.2 - - 5.2 17.7 31.5 55.4 - -

.

Table 3B: Percentages achieving the EBacc by admissions practice

Selective Comprehensive Modern
All HA All HA All HA
2013 71.6 74.6 21.5 49.9 12.1 33.3
2012 68.2 70.7 14.5 35.0 7.2 20.7
2011 68.1 70.5 13.7 33.6 6.7 20.3

.

Three Levels of Progress in English and maths

The Tables inform us that:

  • 70.4% of all pupils in state-funded secondary schools made at least three levels of progress in English (up 2.4 percentage points from 68.0% in 2012) and 70.7% did so in maths (up 2.0 points from 68.7% in 2012).
  • In both subjects more high attainers made the requisite progress than middle and low attainers: 86.2% in English (up 2.8 points on 2012) and 87.8% in maths (up 2.0 points). Despite these improvements, it remains the case that approximately one in seven high attainers fail to make the expected progress in English and one in eight fail to do so in maths (see the quick check after this list). This is extremely disappointing.
  • There were 108 schools in which every high attainer made the requisite progress in English, up from 93 in 2012. In maths, 120 schools ensured every high attainer made the expected progress, compared with 100 in 2012. A total of 36 schools managed this feat in both English and maths, whereas only 26 did so in 2012.
  • At the three grammar schools we have already encountered, no high attainers made the expected progress in English. Forty-four schools were at or below 50% on this measure, down markedly from 75 in 2012. The worst performer apart from the grammar schools was Gloucester Academy at 28%.
  • Thirty-one schools had 50% or fewer high attainers making the expected progress in maths, an improvement on the 46 registering this result last year. The poorest performer was Stafford Sports College at 29%.
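The quick check promised above, turning the success percentages into the ‘one in N’ failure rates quoted:

```python
# Success rates for high attainers from the bullet list above
for subject, success in [("English", 86.2), ("maths", 87.8)]:
    failing = 100 - success
    print(f"{subject}: {failing:.1f}% failing, roughly 1 in {100 / failing:.0f}")
# English: 13.8% failing, roughly 1 in 7
# maths: 12.2% failing, roughly 1 in 8
```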

Tables 4A and B below contain the trend data for the achievement of three levels of progress in English while Tables 5A and B cover maths.

The figures within these tables are not strictly comparable, since the statistics unaccountably define high attainment slightly differently for the two populations. In the case of the ‘all’ column, they use achievement of Level 5 in the relevant KS2 test (i.e. English or maths), rather than above-Level 4 achievement across all three core subjects, while the definition for the ‘high attainers’ column is the customary one set out above.

Nevertheless, one can see that, overall, the percentage of high attainers meeting this benchmark in English is recovering following a significant fall last year. Free schools, UTCs and studios have just overtaken sponsored academies while converter academies are 3.5 percentage points ahead of LA maintained mainstream schools.

In maths converter academies have an even more substantial 4.5 percentage point lead over LA maintained mainstream schools. Sponsored academies are a full 10 percentage points behind converters and five percentage points behind free schools, UTCs and studios, but the latter category is recording a downward trend while everyone else is moving in the opposite direction.

The fact that one in five high attainers in sponsored academies, free schools, UTCs and studios is failing to make three levels of progress in English is serious cause for concern.  Worse still, the same is true of maths in sponsored academies. This state of affairs requires urgent attention.

It is noticeable that the general recovery in performance amongst high attainers in English does not extend to selective schools, which have fallen back still further since 2012 and are now a full 3.5 percentage points behind their 2011 level. Regardless of causality – and early entry policy as well as the increasing popularity of IGCSE may be involved – this too is a matter for concern. The situation is more positive in maths however.

.

Table 4A: Percentages achieving three levels of progress in English by sector

All state-funded mainstream LA maintained mainstream Sponsored academies Converter academies Free schools, UTCs and studio schools
All HA All HA All HA All HA All HA
2013 79.7 86.2 - 85.0 - 80.7 - 88.5 - 81.0
2012 76.9 83.4 - 82.5 - 76.0 - 86.7 - 68.1
2011 69.0 87.2 - - - 79.6 - 94.5 - -

.

Table 4B: Percentages achieving three levels of progress in English by admissions practice

Selective Comprehensive Modern
All HA All HA All HA
2013 - 93.0 - 85.5 - 81.0
2012 - 93.4 - 82.3 - 77.1
2011 - 96.5 - 86.2 - 81.1

.

Table 5A: Percentages achieving three levels of progress in maths by sector

All state-funded mainstream LA maintained mainstream Sponsored academies Converter academies Free schools, UTCs and studio schools
All HA All HA All HA All HA All HA
2013 81.7 87.8 - 86.2 - 80.8 - 90.7 - 85.6
2012 79.7 85.8 - 84.4 - 77.8 - 90.2 - 87.5
2011 76.8 85.2 - - - 75.6 - 93.2 - -

.

Table 5B: Percentages achieving three levels of progress in maths by admissions practice

Selective Comprehensive Modern
All HA All HA All HA
2013 - 96.6 - 86.9 - 84.4
2012 - 95.9 - 84.7 - 80.5
2011 - 96.6 - 83.9 - 79.3

.

Other measures

The Performance Tables show:

  • The average point score (APS) per pupil for the best eight subjects (GCSE only) across all state-funded schools was 280.1, up from 276.7 in 2012. Amongst high attainers the corresponding figure was 377.6, up from 375.4 in 2012 (a rough points-to-grade conversion follows this list). Only five schools – all selective – achieved an APS above 450 for their high attainers (eight schools managed this in 2012). The top performer was Colyton Grammar School in Devon. At the other extreme, four schools were at 200 or lower (much reduced from 16 in 2012). These were Hadden Park High School in Nottingham (160.1), Pent Valley Technology College in Kent (188.5), Aylesford School in Kent (195.7) and Bolton St Catherine’s Academy (198.2).
  • According to the value added (best 8) measure, the best results for high attainers were achieved by four schools that scored over 1050 (seven schools managed this in 2012). These were Tauheedul Islam Girls High School, Harris Girls’ Academy, East Dulwich, Sheffield Park Academy and Lordswood Girls’ School and Sixth Form Centre. Conversely there were three schools where the score was 900 or below. The lowest VA scores were recorded by Ark Kings Academy; Hadden Park High School; and Manchester Creative and Media Academy for Boys.
  • The Tables also provide an average grade per GCSE per high attainer (uncapped) but, at the time of writing, the relevant column in the Tables refuses to sort in ascending/descending order. This press article draws on another measure – average grade per pupil per qualification (capped at best 8) to identify Colyton Grammar School as the only state-funded school to achieve an average A* on this measure. It is highly likely that Colyton Grammar will top the rankings for the uncapped high attainer measure too. The article adds that a total of 195 schools (state and independent presumably) achieved an average of either A*, A*-, A+, A or A- on the capped measure, noting that the Hull Studio School posted G- (though with only seven pupils) while two further schools were at E+ and a further 82 schools averaged D grades.
  •  The average number of GCSE entries for high attainers in state-funded schools was 9.9, up slightly from 9.7 in 2012. The highest level of GCSE entries per high attainer was 15.5 at Colyton Grammar School, repeating its 2012 achievement in this respect. At three schools – Hadden Park, Aylesford and Bolton St Catherine’s – high attainers were entered for fewer than five GCSEs (15 schools were in this category last year). One school – Ormesby in Middlesbrough – entered its high attainers for 22 qualifications, which seems a little excessive.
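The rough conversion promised above. This sketch assumes the pre-2014 performance table point scale for GCSEs – G = 16, rising in steps of six to A* = 58 – which is my assumption rather than something stated in the Tables themselves:

```python
# Approximate average grade from a best-eight APS figure, assuming the
# old point scale (G = 16 to A* = 58 in steps of 6) - an assumption
GRADES = ["G", "F", "E", "D", "C", "B", "A", "A*"]

def approx_average_grade(aps_best8):
    per_subject = aps_best8 / 8             # best-eight measure covers 8 subjects
    index = round((per_subject - 16) / 6)   # nearest grade on the assumed scale
    return GRADES[max(0, min(index, 7))], round(per_subject, 1)

print(approx_average_grade(377.6))  # high attainers: ('B', 47.2)
print(approx_average_grade(280.1))  # all pupils: ('D', 35.0)
```

On that assumed scale, the high attainer average of 377.6 equates to just above a B per subject, while the all-pupil average sits between D and C.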

 

A Level Achievement

.

Percentage achieving 3+ A levels at AAB+ in facilitating subjects

According to the 16-18 Performance Tables:

  • 7.5% of A level students in all state-funded schools and colleges achieved three A levels, all in facilitating subjects, at AAB or higher, up slightly from 7.4% in 2012. This is another column in the Tables that – at the time of writing – will not sort results into ascending/descending order. In 2012 a handful of state-funded institutions achieved 60% on this measure and that is likely to have been repeated. There were also 574 schools and colleges that recorded zero in 2012, and there may have been a slight improvement on that this year.
  • The same problem arises with the parallel measure showing the percentage of students achieving AAB+ with at least two in facilitating subjects. We know that 12.1% of A level students in state-funded schools and colleges achieved this, up very significantly from 7.8% in 2012, but there is no information about the performance of individual schools. In 2012 a handful of institutions achieved over 80% on this measure, with Queen Elizabeth’s Barnet topping the state schools at 88%. At the other extreme, there were about 440 schools and colleges which recorded zero in 2012.

Tables 6A and B below show that the success rate on the first of these measures is creeping up in LA-maintained mainstream schools, sponsored academies, sixth form colleges and FE colleges. The same is true of comprehensive schools.

On the other hand, the success rate is falling somewhat in converter academies, free schools, UTCs and studios – and also in selective and modern schools.

It is noticeable how badly sponsored academies fare on this measure, achieving exactly half the rate of LA-maintained mainstream schools.

.

Table 6A: Percentages of students achieving 3+ A levels at AAB+ in facilitating subjects by sector

All state-funded mainstream LA maintained mainstream Sponsored academies Converter academies Free schools, UTCs and studio schools Sixth form colleges FE
2013 8.7 7.4 3.7 10.4 5.1 6.0 3.5
2012 8.6 7.2 3.4 11.4 7.5 5.8 3.3
2011 - - - - - - -

.

Table 6B: Percentages of students achieving 3+ A levels at AAB+ in facilitating subjects in schools by admissions practice

Selective Comprehensive Modern
2013 21.1 6.8 1.0
2012 21.5 6.6 1.6
2011 - - -

.

The 2013 statistics also contain breakdowns for the ‘AAB+ with two in facilitating subjects’ measure, as shown in Table 6C below.

.

Table 6C: Percentages of students achieving 3+ A levels at AAB+ with two in facilitating subjects by sector and admissions practice, 2013 only.

State-funded mainstream schools 13.6
LA-funded mainstream schools 11.4
Sponsored academies (mainstream) 5.4
Converter academies (mainstream) 16.4
Mainstream free schools, UTCs and studios 11.3
Sixth Form Colleges 10.4
FE Colleges 5.8
Selective schools 32.4
Comprehensive schools 10.7
Modern schools 2.0

.

While sponsored academies are achieving unspectacular results – they are even further behind LA-funded schools on this measure and even below FE colleges – selective schools are managing to get almost one third of their students to this level.

.

Percentage achieving 3+ A levels at A*/A

The Performance Tables do not include this measure, but it is included in the statistical reports. Tables 7A and B below show the trends – downwards in all sectors and types of school except FE colleges and free schools, UTCs and studios.

It is unclear why their performance should be improving on this measure but declining on the AAB+ in three facilitating subjects measure, though some UTCs and studios may simply be less inclined to enter their students for facilitating subjects.

.

Table 7A: Percentages of all students achieving 3+ A levels at A*/A by sector

All state-funded mainstream LA maintained mainstream Sponsored academies Converter academies Free schools, UTCs and studio schools Sixth form colleges FE
2013 10.7 8.7 4.1 13.1 7.9 9.3 5.1
2012 10.9 9.1 4.2 14.8 6.0 9.7 5.0
2011 11.4 - - - - 10.2 4.9

.

Table 7B: Percentages of students achieving 3+ A levels at A*/A in schools by admissions practice

Selective Comprehensive Modern
2013 27.0 8.1 1.7
2012 27.7 8.3 1.9
2011 27.7 8.4 2.3

.

Percentage achieving 3+ A levels at AAB+

Once again, this measure is not in the Tables but is in the statistical bulletin. Tables 8A and B below compare trends. The broad trend is again downwards, although this is being bucked (just) by FE colleges and (more significantly) by sponsored academies. So while sponsored academies’ results are falling slightly at AAA+, they are improving at AAB+.

 .

Table 8A: Percentages of students achieving 3+ A levels at AAB+ by sector

All state-funded mainstream LA maintained mainstream Sponsored academies Converter academies Free schools, UTCs and studio schools Sixth form colleges FE
2013 17.9 15.1 7.9 21.4 13.0 16.4 9.5
2012 17.9 15.4 7.5 23.4 16.4 16.8 9.4
2011 - - - - - - -

.

Table 8B: Percentages of students achieving 3+ A levels at AAB+ in schools by admissions practice

Selective Comprehensive Modern
2013 40.0 14.5 4.4
2012 40.6 14.5 4.7
2011 40.9 14.8 5.4

It is interesting to compare 2013 performance across these different high attainment measures and Table 9 below does this, enabling one to see more clearly the differentiated response to facilitating subjects amongst high attainers.

In most parts of the schools sector, the success rate for AAB+ in any subjects is roughly twice that of AAB+ in facilitating subjects, but this is not true of FE and sixth form colleges, nor of free schools, UTCs and studios.

At the same time, every sector and school type shows a higher rate at AAA+ than at AAB+ in three facilitating subjects.

.

Table 9: Percentages of students by sector/school type achieving different A level high attainment measures in 2013

AAB+ in 3 FS AAB+ with 2 in FS AAB+ AAA+
All state-funded mainstream 8.7 13.6 17.9 10.7
LA-funded mainstream 7.4 11.4 15.1 8.7
Sponsored academies 3.7 5.4 7.9 4.1
Converter academies 10.4 16.3 21.4 13.1
Free schools UTCs and studios 5.1 11.3 13.0 7.9
Sixth form colleges 6.0 10.4 16.4 9.3
FE colleges 3.5 5.8 9.5 5.1
Selective schools 21.1 32.4 40.0 27.0
Comprehensive schools 6.8 10.7 14.5 8.1
Modern schools 1.0 2.0 4.4 1.7
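The ‘roughly twice’ relationship noted above can be computed directly from Table 9 – a minimal sketch using the 2013 figures:

```python
# Ratio of the broad AAB+ rate to the stricter 'AAB+ in three facilitating
# subjects' rate, using 2013 figures from Table 9 above
table9 = {
    "All state-funded mainstream": (8.7, 17.9),
    "Sponsored academies": (3.7, 7.9),
    "Free schools, UTCs and studios": (5.1, 13.0),
    "Sixth form colleges": (6.0, 16.4),
    "FE colleges": (3.5, 9.5),
}
for sector, (aab_3fs, aab_any) in table9.items():
    print(f"{sector}: {aab_any / aab_3fs:.1f}x")
# Roughly 2.1x for the mainstream school averages, but nearer 2.5-2.7x for
# the colleges and for free schools, UTCs and studios
```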

.

Other Measures

Returning to the Performance Tables:

  • The APS per A level student across all state-funded institutions is 782.3, up significantly from 736.2 in 2012. The highest APS was 1650.0, recorded by Dartford Grammar School. At the other end of the spectrum, Hartsdown Technology College in Kent recorded an APS of 252.6.
  • The APS per A level entry across all state-funded institutions was 211.3, compared with 210.2 in 2012 (a rough points-to-grade conversion follows this list). The strongest performer in the maintained sector was Queen Elizabeth’s School, Barnet, which achieved 271.4. The lowest score in the maintained sector was 97.7, at Appleton Academy in Bradford.
  • A new average point score per A level pupil expressed as a grade is dominated by independent schools, but the top state-funded performers – both achieving an average A grade – are Henrietta Barnet and Queen Elizabeth’s Barnet. A handful of schools record U on this measure: Appleton Academy, The Gateway Academy in Thurrock, Hartsdown Technology College and The Mirus Academy in Walsall.
  • A new A level value added measure has also been introduced for the first time. It shows Ripon Grammar School as the top performer scoring 0.61. The lowest score generated on this measure is -1.03 at Appleton Academy, which comes in comfortably below any of its competitors.
  • The Statistical Bulletin also tells us what percentage of A level entries were awarded A* and A grades. Tables 10A and B below record this data and show the trend since 2011. A* performance is falling back slightly in almost every context, the exceptions being FE colleges (a very slight improvement) and free schools, UTCs and studios. A*/A performance is generally holding up better, other than in converter academies.
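The conversion promised above. As with the GCSE sketch, this assumes the old performance table scale for A levels – E = 150, rising in steps of 30 to A* = 300 – again my assumption rather than anything stated in the Tables:

```python
# Approximate grade from an APS-per-entry figure, assuming the old A level
# point scale (E = 150 to A* = 300 in steps of 30) - an assumption
A_GRADES = ["E", "D", "C", "B", "A", "A*"]

def approx_a_level_grade(aps_per_entry):
    index = round((aps_per_entry - 150) / 30)  # nearest grade on the assumed scale
    return A_GRADES[max(0, min(index, 5))]

print(approx_a_level_grade(211.3))  # national average per entry: 'C'
print(approx_a_level_grade(271.4))  # Queen Elizabeth's, Barnet: 'A'
```

On that basis the national average entry sits at around a C grade, while Queen Elizabeth’s averages an A.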

 .

Table 10A: Percentage of A* and A*/A grades by sector

All state-funded mainstream LA maintained mainstream Sponsored academies Converter academies Free schools, UTCs and studio schools Sixth form colleges FE
A* A*/A A* A*/A A* A*/A A* A*/A A* A*/A A* A*/A A* A*/A
2013 6.8 24.4 5.9 21.4 3.7 15.1 7.9 27.5 4.7 20.0 5.7 21.6 3.9 15.4
2012 7.2 24.3 6.3 21.8 4.0 14.4 8.9 29.1 4.3 20.3 5.8 21.8 3.8 15.6
2011 7.3 24.5 - - - - - - - - 6.3 22.4 3.9 16.0

.

Table 10B: Percentage of A* and A*/A grades in schools by admissions practice

Selective Comprehensive Modern
A* A*/A A* A*/A A* A*/A
2013 12.7 40.6 5.6 21.1 2.5 10.6
2012 13.4 41.2 5.9 20.9 2.6 11.1
2011 13.4 41.0 6.0 21.1 3.0 11.6

.

Conclusion

What are we to make of this analysis overall?

The good news is that high attainers have registered improvements since 2012 across all the most important GCSE measures:

  • 5+ GCSEs or equivalent including English and maths GCSEs (up 0.7 percentage points)
  • GCSEs in English and maths (up 0.8 points)
  • 3 levels of progress in English (up 2.8 points)
  • 3 levels of progress in maths (up 2.0 points)
  • EBacc entry (up almost 19 points) and EBacc achievement (up 13.6 points).

The last of these is particularly impressive.

At A level, the underlying trend for high attainment per se is slightly downward, but significantly upward for AAB+ grades with two in facilitating subjects.

The less good news is that some of these improvements have been made from a relatively low base, so the overall levels of performance still fall short of what is acceptable.

It is no cause for congratulation that one in seven high attainers still fail to make the expected progress in English, while one in eight still fail to do so in maths. Nor is it encouraging that one in twenty high attainers still fail to secure five or more GCSEs (or equivalent) including GCSEs in English and maths.

The significant improvement in the EBacc masks the fact that one fifth of the high attainers who enter exams in all the requisite subjects still fail to secure the necessary grades.

Moreover, some schools are demonstrating very limited capacity to secure high achievement and – in particular – sufficient progress from their high attainers.

The fact that several schools achieve better progression in English for their middle and low attainers than for their high attainers is particularly scandalous and needs urgent attention.

As noted above, the floor targets regime is too blunt an instrument to address the shortcomings of the substantial majority of the schools highlighted in this study for poor performance on one or more high attainer measures.

The combined impact of the secondary accountability reforms planned for 2016 is as yet unclear. For the time being at least, Ofsted inspection is the only game in town.

The school inspection guidance now demands more attention to the performance and progress of the ‘most able students’. But will inspection bring about the requisite improvements amongst the poorest performers highlighted here, and with sufficient rapidity?

I have it in mind to monitor progress in this small sample of twenty-two schools – and also to look back on what has happened to a parallel group of the poorest performers in 2012.

.

GP

January 2014

Gifted Phoenix’s 2013 Review and Retrospective

.

This final post of 2013 takes a reflective look back at this year’s activity.


One purpose is straightforward self-congratulation – a self-administered pat on the back for all my hard work!

This is also an opportunity to review the bigger picture, to reflect on the achievements and disappointments of the year now ending and to consider the prospects for 2014 and beyond.

Perhaps I can also get one or two things off my chest…

…So, by way of an aside, let me mention here that I provide this information to you entirely free of charge, partly because I believe that global progress in (gifted) education is obstructed by the rationing of knowledge, partly to encourage those who construct and shelter behind paywalls to reflect on the negative consequences of their behaviour.

I try my best to offer you a factual, balanced and objective assessment, to flag up weaknesses as well as strengths. In short, I tell it like it is. I have no interest in self-aggrandisement, in reputation or the trappings of academia. You will search in vain for those trappings in my CV, but I speak and write with commensurate authority, based on extended experience as a national policy maker and student of the field …

Another purpose is to provide an annotated list of my posts, so that readers can catch up with anything they missed.

I make this my 35th post of 2013, five fewer than I managed in 2012. I took an extended break during August and September this year, half of it spent on tour in Western Australia and the remainder engaged on other projects.

During the course of the year I’ve made a conscious effort simultaneously to narrow and diversify my focus.

I’ve devoted around two-thirds of my posts to educational reform here in England, while the remainder continued to address global issues.

Some of the Anglocentric posts were intended to draw out the wider implications of these reforms, rather than confining themselves exclusively to gifted education and the impact on gifted learners.

I wanted to paint on a broader canvas. It is all too easy to exist in a gifted education ghetto, forgetting that it must be integral to our national educational systems as well as a global endeavour in its own right.

 .

Global Gifted Education

During 2013 I published two feature-length posts about the performance of high achievers in international comparison studies:

Like it or not, these international tests are becoming increasingly influential in most countries around the world. Those involved in gifted education ignore them at their peril.

Many of the countries that top the rankings already invest significantly in gifted education – and some of those that do not (invest significantly and/or top the rankings) ought seriously to consider this as a potential route to further improvement.

Other posts with a global gifted focus include:

My best effort at a personal credo, derived from the experience of writing this Blog. Colleagues were very flattering.

.

I supplemented the post with a vision for delivery, primarily to inform UK-based discussion within GT Voice, but also relevant to Europe (the EU Talent Centre) and globally (the World Council).

I took a second look at this nascent field, exploring developments since I first blogged about it in 2010. I like to flatter myself that I invented the term.

The post tells of the passing interest exhibited by IRATDE and notes the reference in the July 2012 World Council Newsletter to a special issue of Gifted and Talented International (GTI) that will be devoted to the topic.

I heard in May that an unnamed specialist had been invited to prepare a ‘target paper’, but nothing has materialised to date. The wheels of academic publishing turn parlous slow.

I concluded the post with a tongue-in-cheek contribution of my own – the Gifted Phoenix Equation!

Minimising the Excellence Gap and Optimising the Smart Fraction maximises impact on Economic Growth (Min EG + Optimal SF = Max EG)

This post opened with a self-confessed rant about the ‘closed shop’ operated by academics in the field, defended by research paywalls and conference keynote monopolies.

But I set aside my prejudices to review the nine leading academic journals in gifted education, examine the rights the publishers offer their authors and offer a constructive set of proposals for improving the accessibility of research.

There were also a handful of new national studies:

the last of which is strictly a transatlantic study of support for low income high ability students, developed from analysis of the US NAGC publication of the same name.

.

Gifted Education in England

Two posts examined material within England’s national school performance tables relating to high attainment and high attainers.

The latter is the second such analysis I have provided, following one on the 2012 Tables published last December. The former will be supplanted by a new version when the Secondary Tables are published in January.

I also offered a detailed treatment of the underlying accountability issues in:

These posts explored the rather haphazard treatment now afforded ‘the most able students’ in documents supporting the School Inspection Framework, as well as the different definitions deployed in the Performance Tables and how these might change as a consequence of the trio of accountability consultations launched this year.

.

During the Spring I wrote:

Despite the Government’s reported intention to establish a national network of up to twelve of these, still only two have been announced – sponsored by King’s College London and Exeter University respectively.

I might devote a 2014 post to updating my progress report.

There was also a special mini-series, corralled under the speculatively optimistic title: ‘A Summer of Love for Gifted Education?’

This is fundamentally a trilogy:

The original conceit had been to build each episode around a key publication expected during the year. Episodes One and Two fitted this description, but the third – an ‘Investigation of school- and college-level strategies to raise the aspirations of high-achieving disadvantaged pupils to pursue higher education’ – was (and is still) overdue, so I had to adjust the focus.

Episode Two was a particularly rigorous examination of the Ofsted report that led to the changes to the inspection documentation.

.

In Episode Three, I took the opportunity to expose some questionable use of statistics on the part of selective universities and their representative bodies, setting out a 10-point plan to strengthen the representation of disadvantaged students at Oxford and Cambridge. This was accompanied by a flying pig.

.

There were also some supplementary posts associated with the Summer of Love:

And some material I produced at the time that Ofsted published ‘The Most Able Students’:

Did it turn out to be a ‘Summer of Love’? Looking back now, I have mixed feelings. Significant attention was paid to meeting the needs of high attaining learners, and those needs are likely to be better recognised and responded to as a consequence.

But the response, such as it is, relies almost exclusively on the accountability system. There is still a desperate need for authoritative updated national framework guidance. Ideally this should be developed by the national gifted education community, working collaboratively with government seed funding.

But the community shows little sign of readiness to take on that responsibility. Collaboration is virtually non-existent:  GT Voice has failed thus far to make any impact (justifying my decision to stand down from the board in protest at frustratingly slow progress).

Meanwhile, several players are pursuing their own diverse agendas. Most are prioritising income generation, either to survive or simply for commercial gain. Everyone is protecting their corner. Too many scores are being settled. Quality suffers.

For completeness, I should also mention a couple of shorter posts:

a piece I wrote for another publisher about how free schools might be rolled into this national collaborative effort, and

which was my best effort to summarise the ‘current state’ on the other side of Ofsted’s Report, as well as an alternative future vision, avoiding the Scylla of top-down centralised prescription and the Charybdis of bottom-up diffused autonomy.

 

Wider English Educational Reform

Almost all the posts I have written within this category are associated with emerging national policy on curriculum and assessment:

.

There was even

which I still expect to see in a manifesto come 2015!

As things stand, there are still many unanswered questions, not least where Labour stands on these issues.

Only one of three accountability consultations has so far received a Government response. The response to the primary consultation – comfortably the least persuasive of the three – was due in ‘the autumn’ but hadn’t appeared by Christmas.

The decision to remove National Curriculum levels looks set to have several unintended negative consequences, not least HMCI Wilshaw’s recent call for the reintroduction of national testing at KS1 and KS3.

I am still to be persuaded that this decision is in the best interest of high attainers.

 

Social Media

This year I have spent more time tweeting and less time producing round-ups of my Twitter activity.

At the time of writing, my follower count has reached 4,660 and I have published something approaching 18,700 Tweets on educational topics.

I try to inform my readers about wider developments in UK (especially English) education policy, keeping a particularly close eye on material published by the Government and by Parliament.

I continue to use #gtchat (global) and #gtvoice (UK) to hashtag material on gifted education and related issues. I look out particularly for news about developments worldwide. I publish material that seems interesting or relevant, even though I might disagree with it. I try to avoid promotional material or anything that is trying to sell you something.

I began 2013 intending to produce round-ups on ‘a quarterly-cum-termly basis’ but have managed only two editions:

The next volume is already overdue but I simply can’t face the grinding effort involved in the compilation process. I may not continue with this sequence in 2014.

I was also invited to answer the question:

ResearchED was a conference organised via Twitter which took place in September.

The post argued for a national network of UK education bloggers. This hasn’t materialised, although the status and profile of edublogging has improved dramatically during 2013, partly as a consequence of the interest taken by Michael Gove.

There are many more blogs and posts than a year ago, several co-ordinated through Blogsync and/or reblogged via The Echo Chamber.

Precious few bloggers enter the field of gifted education, though honourable mentions must go to Distilling G&T Ideas and Headguruteacher.

Elsewhere in the world, not too many gifted education bloggers are still generating a constant flow of material.

Exceptions include Lisa Conrad, who maintains two blogs in the US: Gifted Parenting Support and Global #GT Chat Powered by TAGT. Others are Kari Kolberg, who produces Krummelurebloggen (in Norwegian), and Javier Touron, who writes Talento y Educacion (in Spanish).

I need urgently to revisit my Blogroll. I might also write a post about the general state of global gifted education blogging in the early part of 2014.

 

Reference

I have made only limited progress this year with the reference pages on this Blog:

  • Who’s Who? remains embryonic. I had plans to force myself to produce a handful of entries each day, but managed only two days in succession! There isn’t a great deal of intellectual challenge in this process – life may be too short!
  • Key Documents is a mixed bag. The UK pages are fully stocked. You should be able to find every significant national publication since 2000. The Rest of the World section is still largely empty.

Rightly or wrongly, the production of blog posts is taking priority.

 

Analytics

Compared with 2012, the number of page views has increased by over 30%, although the number of posts is down by 12.5%. I’m happy with that.

Some 40% of views originate in the UK. Other countries displaying significant interest include the US, Singapore, Australia, India, Hong Kong, Saudi Arabia, New Zealand, Canada and Spain. Altogether there have been visits from 169 countries.

The most popular posts published this year are, in order of popularity:

  • Whither National Curriculum Assessment Without Levels?
  • What the KS2/KS4 Transition Matrices Show About High Attainers’ Performance
  • High Attaining Students in the 2012 Secondary School Performance Tables
  • Analysis of the Primary Assessment and Accountability Consultation Document and
  • A Summer of Love for English Gifted Education Episode 2: Ofsted’s ‘The Most Able Students’

.

Visuals

I have changed the theme of my Blog twice this year – initially to Zoren and more recently to Highwind. I wanted a clearer, spacier look and a bigger font.

During the course of the year I have alternated between using my photographs within posts and producing work that is largely free of illustration. I have mixed feelings about this.

It seems somehow incongruous to intersperse unrelated photographs within a post about educational matters, but the stock of education-relevant non-copyrighted illustration is severely limited. Then again, screeds of unbroken text can be rather dreary to the eye.

So readers can expect some more views of Western Australia (especially) during 2014! Here’s one to whet your appetite.

.

Flora 2 by Gifted Phoenix

 

The Future

I close 2013 in a pessimistic mood. Despite the more favourable domestic policy climate, I am markedly less optimistic about the future of gifted education than I was at the start of the year.

Disillusion is setting in, reinforced by negligible progress towards the objectives I hold most dear.

The ‘closed shop’ is as determinedly closed as ever; vested interests are shored up; governance is weak. There is fragmentation and vacuum where there should be inclusive collaboration for the benefit of learners. Too many are on the outside, looking in. Too many on the inside are superannuated and devoid of fresh ideas.

Every so often I witness dispiriting egotism, duplicity or even vengefulness. Disagreements fester because one or both of the parties is unwilling to work towards resolution.

The world of gifted education is often not a happy place – and while it remains that way there is no real prospect of achieving significant improvements in the education and life chances of gifted learners.

To mix some metaphors, it may soon be time to cut my losses, stop flogging this moribund horse and do something else instead.

Happy New Year!

.

GP

December 2013