This post is part of a bigger one on the relationship between curriculum, assessment and accountability reforms. Given the inordinate length of that piece and the complexity of the proposals for primary assessment and accountability, I have published my analysis of those proposals separately here.
The post sets out what has been published, ruminates on the purpose of the Pupil Premium, undertakes a section-by-section analysis of the consultation document and draws together the issues of greatest concern.
It attempts an overall scaled score assessment of the document and finds it seriously wanting. There are major fault lines running through the proposals and little clarity over several key issues.
These proposals are far from ‘implementation-ready’ and ultimately disappointing, both in terms of the threshold and progress achieved. If there were a floor standard for consultation documents, this would fall significantly short.
Those with little time are recommended to go straight to the later section – ‘Primary Assessment and Accountability: Issues and Omissions’ – which can be found about two-thirds of the way through the post.
What has been published?
17 July saw the publication of three documents in the following order:
- A press release which appeared shortly after a midnight embargo;
- The consultation document ‘Primary assessment and accountability’ published mid-morning; and
- A Ministerial statement (Col 1099) made, published and briefly debated at lunchtime.
There was no response to the parallel ‘Secondary school accountability’ consultation launched on 7 February and completed on 1 May, despite the connectivity between the two sets of proposals – and no firm indication of when that response would be published.
A third consultation, on post-16 assessment and accountability, was not mentioned either.
The staged publication of the primary material meant that initial analysis and questioning of Ministers was based largely on the headlines in the press release rather than on the substance of the proposals.
Initial media appearances appeared to generate a groundswell of hostility that Ministers could not readily counter. The answers to some reasonable questions on the detail were not yet in the public domain.
It was particularly noteworthy that the announcement had a second one integrated within it, concerning the size of Pupil Premium allocations in 2014-15. This was clearly intended to sugar the pill, though the coating is rather thin and there are also potentially wider ramifications (see below).
The Pupil Premium announcement must have been the justification for presentation by Lib Dem Deputy Prime Minister Clegg and Minister of State Laws, rather than by Tory Secretary of State Gove.
He (Gove) must have been delighted at avoiding this particularly poisoned chalice, already delayed into the dog days of summer – often a deliberate strategy for downplaying a particularly contentious announcement.
The consultation has a deadline of 11 October, allowing a total of 12 weeks and two days for responses, including the entirety of the school summer holidays, so the majority of the consultation period occurs while most schools are closed. This may also serve to mute opposition to the proposals contained in the document.
There is a commitment to publish the outcomes of consultation, together with a response ‘in autumn 2013’, which is a very quick turnaround assuming that autumn means November rather than December. If there is any degree of contention, this might well edge close to Christmas.
An Aside: The Pupil Premium
The assessment and accountability announcement was sugar-coated by confirmation of the size of Pupil Premium allocations in 2014-15.
But close scrutiny of the coating reveals it as rather a thin veneer.
It was already known that the total Pupil Premium funding envelope would increase by £625m, from £1.875bn in 2013-14 to £2.5bn in 2014-15, so the overall budget was not in itself newsworthy. There is a degree of economy with the truth at play if the funding is claimed to be ‘new money’.
But the apparent decision to weight this towards primary schools was new. Ministers made much of the 44% increase for primary schools, from £900 to £1,300 per pupil, while conspicuously omitting to confirm the same uprating for secondary schools.
Newly released data for the 2013-14 Premium suggests that it might be possible to afford the same uprating for secondary-age pupils, assuming numbers eligible do not increase between January 2013 and January 2014, but the silence on this point betrays some uncertainty, most probably driven partly by numbers and partly by the early impact of Universal Credit on eligibility.
We do know, from the Spending Review, that the total budget for the Premium will be protected in real terms in 2015-16 but will not be further increased.
It remains to be seen whether any new weighting in favour of the primary sector will be retained, but that seems highly likely given the level of disruption that would be caused by frequent recalibration.
One influential commentator – Institute of Education Director Chris Husbands – has suggested that the bracketing of the two announcements marks a significant adjustment:
‘This is a further twist in the evolving purpose of the pupil premium – once intended as an incentive to primary schools to admit more disadvantaged children, then a compensatory payment for the additional costs involved in meeting the needs of disadvantaged children, it is now more clearly a fund to secure threshold levels of attainment.’
This argument runs like a leitmotif through the analysis below.
But it also runs counter to the Government’s official position that the Premium is designed to support all disadvantaged pupils and close the attainment gap between them and their peers, a position reinforced by the fact that the Government has delineated separate ‘catch-up premium support’ exclusively for those below the thresholds.
Nothing in recent announcements about strengthening the accountability underpinning Pupil Premium support suggests any such change. Husbands’ argument also runs against the tenor of Ofsted’s publications about effective use of the Premium and the latest Unseen Children report, published following deliberations by an expert panel on which Husbands served.
The source appears to be a recent IPPR publication ‘Excellence and Equity: Tackling Educational Disadvantage in England’s Secondary Schools’, Chapter 4 of which asserts (without supporting evidence) that:
‘Policymakers talk interchangeably about the pupil premium being used to support pupils who are falling behind, and it being used to support those who are on free school meals.’
This despite the fact that:
‘The overlap between these two categories is not as large as many people suppose. Last year, only 23 per cent of low-attaining pupils at the end of primary school were eligible for free school meals, and only 26 per cent of pupils eligible for free school meals were low attaining. This puts schools in the difficult position of having to decide whether to spend their pupil premium resources on pupils who have a learning need, even though many of them will not be eligible for free school meals, or whether they should focus them on FSM pupils, even though many of them will be performing at the expected level.’
The notion that pupils who are performing at the expected levels do not, by definition, have a ‘learning need’ is highly contentious, but let that pass.
The substantive argument is that, because ‘tackling the long tail of low achievement is the biggest challenge facing England’s school system’ and because the Premium ‘provides insufficient funds targeted at the right age range’:
‘In order to have maximum impact, the pupil premium should be explicitly targeted towards raising low achievement in primary and early secondary school… The Department for Education should therefore focus the additional funding at this age range. It should… create a higher level of pupil premium in primary schools, and… increase the ‘catch-up premium’ (for year 7 pupils) in secondary schools; the pupil premium in secondary schools would be held at its current level. This would provide primary schools with sufficient resources to fund targeted interventions, such as Reading Recovery, for all children who are at risk of falling behind. It would also compensate secondary schools that have large numbers of pupils starting school below the expected level of literacy and numeracy.
…Secondary schools are currently given a catch-up premium for every pupil who enters below level 4 in English and maths. However, there is no mechanism to guarantee that these pupils benefit from the money. The ‘catch-up premium’ should therefore be replaced with a ‘catch-up entitlement’. Every pupil that falls into this category would be entitled to have the money spent specifically on helping to raise his or her attainment. Schools would be required to write a letter to these pupils and their families explaining how the resources are being spent.’
The Government has potentially front-loaded the Pupil Premium into the primary sector, but not – as far as we are aware – the early years of secondary school. Nor has it increased the catch-up premium, unless by some relatively small amount yet to be announced, or made it an individual entitlement.
Husbands’ initial argument – that the linking of Premium and assessment necessarily means a closer link being forged with tackling below-threshold attainment – depends on his assertion that:
‘The core message of the consultation is that the concern is with absolute attainment – secondary readiness – rather than the progress made by primary schools.’
The analysis below examines the case for that assertion.
What the Primary Assessment Consultation Says
The commentary below follows the sections in the consultation document.
The case for change
The second paragraph of ‘The case for change’ says:
‘We believe that it is right that the government should set out in detail what pupils should be taught…’
a somewhat different slant to that adopted in the National Curriculum proposals (and which of course applies only to the core subjects in state-maintained schools).
The next section works towards a definition of the term ‘secondary ready’, described as ‘the single most important outcome that any primary school should strive to achieve’.
It is discussed exclusively in terms of achievement in KS2 English and maths tests, at a level sufficient to generate five GCSE Grades A*-C including English and maths five years later.
This despite the fact that the secondary accountability consultation proposes two quite different headline measures: good GCSE grades in both English and maths and Average Points Score in eight subjects from a three-category menu (neither of which is yet defined against the proposed new 8 to 1 GCSE grading scale).
No other criteria are introduced into the definition, rendering it distinctly narrow. This might arguably be the most important outcome of primary education, but it is not the sole outcome by any stretch.
The Government states an ‘ambition’ that all pupils should achieve this benchmark, excepting a proportion ‘with particular learning needs’.
There is no quantification of this proportion, though it is later used to identify a floor target assumption that 85% of the cohort should achieve the benchmark, so the group with ‘particular learning needs’ must be something less than 15% of all learners.
The introduction of a second and parallel floor target, relating to progression, is justified here on the grounds that ‘some schools have particularly demanding intakes’ so ‘will find it challenging to reach the ambitious [attainment] threshold…’. This will also help to identify coasting schools.
This approach to progression, as a fall back in circumstances where the threshold measure is problematic, lends some weight to Husbands’ contention that absolute attainment is now paramount.
Note that the wording in this section does not make clear whether the new floor target comprises both of these measures – secondary readiness and progression – or only one or the other. This issue resurfaces later in the document.
There is nothing here about the importance of applying measures that do not have in-built perverse incentives to focus on the threshold boundary, but this too will resurface later.
There is early confirmation that:
‘We will continue to prescribe statutory assessment arrangements in English, mathematics and science.’
The ‘core principles’ mentioned in the Assessment Without Levels text appear at this stage to be those proposed in the June 2011 Bew Report rather than any new formulation. Note the second bullet point, which pushes in directly the opposite direction to Husbands’ assertion:
- ongoing assessment is a crucial part of effective teaching, but it should be left to schools. The government should only prescribe how statutory end of key stage assessment is conducted;
- external school-level accountability is important, but must be fair. In particular, measures of progress should be given at least as much weight as attainment;
- a wide range of school performance information should be published to help parents and others to hold schools to account in a fair, rounded way; and
- both summative teacher assessment and external testing are important forms of statutory assessment and both should be published.
Already there are mixed messages.
The next section justifies the removal of National Curriculum levels:
‘Imposing a single system for ongoing assessment, in the way that national curriculum levels are built into the current curriculum and prescribe a detailed sequence for what pupils should be taught, is incompatible with this curriculum freedom. How schools teach their curriculum and track the progress pupils make against it will be for them to decide. Schools will be able to focus their teaching, assessment and reporting not on a set of opaque level descriptions, but on the essential knowledge that all pupils should learn. There will be a clear separation between ongoing, formative assessment (wholly owned by schools) and the statutory summative assessment which the government will prescribe to provide robust external accountability and national benchmarking. Ofsted will expect to see evidence of pupils’ progress, with inspections informed by the school’s chosen pupil tracking data.’
Paraphrasing this statement, one derives the following rather questionable logic:
- We want to give schools freedom to determine their own approaches to in-school assessment
- The current system of levels has come to be applied to both statutory and in-school assessment
- So we are removing levels from both statutory and in-school assessment.
The only justification for this must lie in recognition that the retention of levels in statutory assessment will inevitably have a ‘backwash effect’ on in-school assessment.
Yet this backwash effect is not acknowledged in respect of the proposed new arrangements for end of key stage statutory assessment. There is a fundamental issue here.
Schools will still be required to report to parents at the end of each year and key stage. No particular system will be imposed for doing so but, as we have already recognised, parents will more readily understand a system that is fully consistent with that applied for end of key stage assessment, rather than a substantively different approach.
The next segment begins to explore the case for shifting the baseline assessment – on which to build measures of progression in primary schools – back to Year R. This will ‘reinforce the importance of early intervention’. The EYFS profile will be retained but might become non-statutory.
The introduction of new summative assessments at end KS1 and end KS2 is confirmed for 2016, with interim arrangements as confirmed in the National Curriculum documentation. The accountability reforms also take effect at this point, so changes will be introduced in the December 2016/January 2017 Performance Tables.
There is also confirmation that academies’ funding agreements require compliance ‘with statutory assessment arrangements as they apply to maintained schools’. This is as close as we get to an explanation of how statutory assessments that apply to all schools will be derived from the National Curriculum programmes of study and single ‘lowest common denominator’ attainment targets.
Teacher assessment and reporting to parents
This section begins with a second justification for the removal of levels. Some anecdotal evidence is cited to support the argument:
‘Teachers have told us that the use of levels for assessment has become burdensome and encouraged crude ‘best fit’ judgements to differentiate pupil progress and attainment.’
This marks the beginning of the justification for a more sophisticated (and hence more complex) approach.
Schools are free to design their assessment systems, though these must be integrated with the school curriculum. There is a hint that these systems might be different for different subjects (adding still further complexity for parents) though ‘groups of schools may wish to use a common approach’.
Paragraph 3.7 is a confusing complement to the Bew-based core principles that appeared earlier:
‘We expect schools to have a curriculum and assessment framework that meets a set of core principles and:
- sets out steps so that pupils reach or exceed the end of key stage expectations in the new national curriculum;
- enables them to measure whether pupils are on track to meet end of key stage expectations;
- enables them to pinpoint the aspects of the curriculum in which pupils are falling behind, and recognise exceptional performance;
- supports teaching planning for all pupils; and
- enables them to report regularly to parents and, where pupils move to other schools, to provide clear information about each pupil’s strengths, weaknesses and progress towards the end of key stage expectations.
Question 1: Will these principles underpin an effective curriculum and assessment system?’
The ‘and’ in the opening sentence suggests that this isn’t part of the set of core principles, but the question at the end suggests these are the principles we should be considering, rather than those derived from Bew.
So we have two competing sets of core principles, the second set relating to schools’ own curriculum and assessment frameworks, but not to accountability.
The references here – to steps relative to end of KS expectations, measuring progress towards those expectations, identifying areas where learners are ahead and behind, supporting planning and reporting to parents – are entirely familiar. They really describe the functions of assessment rather than any principles that govern its application.
There is a commitment that the Government will ‘provide examples of good practice’ and:
‘Work with professional associations, subject experts, education publishers and external test developers to signpost schools to a range of potential approaches. Outstanding schools and teaching schools have an opportunity to take the lead in developing and sharing curriculum and assessment systems which meet the needs of their pupils…Commercial providers and subject organisations may offer curriculum schemes of work with inbuilt assessment, including class exercises, homework and summative tests.’
The second consultation question asks respondents to identify additional support and ‘other good examples of effective practice’.
The final section on reporting confirms that the Government plans to continue to publish teacher assessment outcomes in the core subjects, in line with Bew’s recommendation. It is not clear whether this is or is not subject to new scoring arrangements.
There is a brief allusion, almost an afterthought, to schools providing information on transfer and transition. There is no acknowledgement that this process becomes more complex when schools are following different curricula and pursuing different in-house assessment systems.
National Curriculum tests in English, maths and science
This section begins with a further set of Bewisms, this time on the uses of data derived from statutory assessment. They are the justification for the continuation of externally-marked National Curriculum tests.
The proposal is that these should continue in maths and in English reading and grammar, spelling and punctuation. Writing will continue to be assessed through externally moderated teacher assessment (suggesting it will be scale scored), while national science sampling will also continue at the end of KS2. The Year 1 Phonics Screening Check will also continue, with results available in Raise Online but not in Performance Tables.
The timetable, including phasing, is rehearsed again, before the critically important tripartite approach to reporting is introduced.
- A ‘scaled score’;
- Decile-based ranking within the ‘national cohort’; and
- Progression from the baseline.
The scaled score is the threshold marker of whether the learner is ‘secondary-ready’. We knew from previous announcements that this standard would be raised from level 4c equivalent to 4b equivalent.
It is also necessary to standardise the scale – and to know by how much any given learner has undershot or overshot this threshold:
‘Because it is not possible to create tests of precisely the same difficulty every year, the number of marks needed to meet the secondary readiness standard will fluctuate slightly from one year to another. To ensure that results are comparable over time, we propose to convert raw test marks into a scaled score, where the secondary readiness standard will remain the same from year to year.
Scaled scores are used in all international surveys and ensure that test outcomes are comparable over time. The Standards and Testing Agency will develop this scale. If, as an example, we developed scaled scores based on the current national curriculum tests, we might employ a scale from 80 to 130. We propose to use a scaled score of 100 as the secondary ready standard.’
The notion of a scaled score, with current Level 4b benchmarked at 100 and a scale sufficiently long to accommodate all levels of attainment above and below, is familiar from PISA and other international comparisons studies.
If the scale has 50 points, as this example does, then there are 50 potential levels of achievement in each assessment – about three times as many as there are currently.
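The mechanics of pinning the secondary-readiness standard to 100 while raw thresholds fluctuate can be sketched in a few lines. The consultation does not specify the actual conversion method, so the mapping below is purely illustrative: it assumes a piecewise-linear conversion between the year-specific raw threshold and the fixed points of the example 80–130 scale.

```python
def scaled_score(raw, raw_threshold, raw_max,
                 scale_min=80, scale_max=130, scale_threshold=100):
    """Map a raw test mark onto the illustrative 80-130 scale.

    `raw_threshold` is the year-specific raw mark judged equivalent to
    the secondary-readiness standard; it is pinned to a scaled score of
    100, so the standard stays fixed even though the raw mark needed to
    meet it fluctuates from year to year. A piecewise-linear mapping is
    assumed here purely for illustration.
    """
    if raw <= raw_threshold:
        frac = raw / raw_threshold
        return round(scale_min + frac * (scale_threshold - scale_min))
    frac = (raw - raw_threshold) / (raw_max - raw_threshold)
    return round(scale_threshold + frac * (scale_max - scale_threshold))

# A year in which 55 of 100 raw marks meets the standard:
scaled_score(55, raw_threshold=55, raw_max=100)   # 100 - on the standard
scaled_score(100, raw_threshold=55, raw_max=100)  # 130 - top of the scale
# A harder test the following year, needing only 48 raw marks:
scaled_score(48, raw_threshold=48, raw_max=100)   # still 100
```

Whatever conversion is actually adopted, the design intent is the same: two pupils who just meet the standard in different years receive the same scaled score of 100, even though their raw marks differ.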
But the score will also be accompanied by a norm-referenced decile, showing how each learner’s performance compares with their peers.
And an average scaled score is generated for learners with the same prior attainment at the baseline, which might or might not move to Year R, so enabling parents to compare their child’s scaled score with this average.
This material would not be used to generate simpler ‘proxy’ grades but would be provided in this tripartite format.
Assuming the illustrative elements above are adopted:
- The highest possible KS2 performer would receive a scaled score of 130, confirmation that he is within the top decile of his peers and a comparative average scaled score. If this is less than 130, he has made better progress than those with the same prior baseline attainment. If it is 130 he has made the same progress. By definition his progress cannot be worse than theirs.
- A lowest possible KS2 performer would have a scaled score of 80, confirmation that he is within the bottom decile of the cohort and a comparative average scaled score which could be as low as 80 (all peers with the same prior attainment have made the same limited progress as he) but no lower since that is the extreme of the scale;
- A median KS2 performer would obtain a scaled score of 100, confirmation that he is within the fifth decile and a correspondingly variable average scaled score.
No illustrative modelling is supplied, but one assumes that average scaled scores for those with similar prior attainment will typically cluster, such that most learners will see relatively little difference, while some outliers might get to +15 or -15. It also seems likely that the ‘progression score’ will eventually be expressed in this manner.
The progress measure is based exclusively on comparison with how other learners are progressing, rather than any objective standard of the progression required.
The document claims that:
‘Reporting a scaled score and decile ranking from national curriculum tests will make it easy to identify the highest attainers for example using the highest scaled scores and the top percentiles of pupils. We do not propose to develop an equivalent to the current level 6 tests, which are used to challenge the highest attaining pupils. Key stage 2 national curriculum tests will include challenging material (at least of the standard of the current level 6 test) which all pupils will have the opportunity to answer, without the need for a separate test.’
But, while parents of high attainers who score close to the maximum might reasonably assume that their offspring have performed in the top one or two percentiles, they will be told only that they are within the top decile. This is rather less differentiated than securing a Level 6 under current arrangements.
Moreover, the preparation of single tests covering the full span of attainment will be a tall order, particularly in maths.
This DfES publication from 2004 notes:
‘It is well known that individual differences in arithmetical performance are very marked in both children and adults. For example, Cockcroft (1982) reported that an average British class of eleven-year-olds is likely to contain the equivalent of a seven-year range in arithmetical ability. Despite many changes in UK education since then, including the introduction of a standard National Curriculum and a National Numeracy Strategy, almost identical results were obtained by Brown, Askew, Rhodes et al (2002). They found that the gap between the 5th and 95th percentiles on standardized mathematics tests by children in Year 6 (10 to 11-year-olds) corresponded to a gap of about 7 chronological years in ‘mathematics ages’.’
There is no reference to the test development difficulties that this creates, including the risk that high-attaining learners have to undertake pointless ramping of easy questions, unnecessarily extending the length of their tests.
The text claims that the opposite risk – that ceilings are set too low – will be avoided, with at least Level 6-equivalent questions included, but what will their impact be on low attainers undertaking the tests? This is the KS4 tiering debate rewritten for KS2.
It is possible that statutory teacher assessment in the core subjects – other than KS2 writing – could be reported in whatever format schools prefer, rather than in the same manner as test outcomes are reported but, like much else, this is not made clear in the document.
By implication there will be no reporting from the national sampling tests in science.
Baselines to measure progress
The section on baselines is particularly confusing because of the range of choices it offers consultees.
It begins by stating bluntly that, with the removal of levels, KS1:
‘Teacher assessment of whether a pupil has met the expectations of the programme of study will not provide sufficient information to act as a baseline’.
This is because teacher assessment ‘will not provide differentiated outcomes to allow us to measure progress’, maybe because it won’t attract a scaled score. But the document says later that KS1 data collected under the existing system might be used as an interim baseline measure.
Two core options are offered:
- Retaining a baseline at the end of KS1, through new English and maths tests that would be marked by teachers but externally moderated. These would be introduced in ‘summer 2016’. Views are sought over whether these test results should be published, given that publication might reduce the tendency for schools to ‘under-report pupils’ outcomes in the interest of showing the progress pupils have made in the most positive light’.
- Introducing a new baseline at the start of the reception year, from September 2015, an option that gives credit for progress achieved up to the end of Year 2 and removes a perverse incentive to prioritise early intervention. This is described as ‘a simple check…administered by a teacher within two to six weeks of each pupil entering reception…subject to external monitoring’. It would either be developed in-house or procured from a third party. The existing EYFS Profile would remain in place but become non-statutory, so schools would not have to undertake it and the data would not be moderated or collected.
An array of additional options is set out:
- Allowing schools to choose their preferred baseline check (presumably always undertaken in Reception, though the document is not clear on this point).
- Making the baseline check optional, with schools choosing not to use it being ‘judged by attainment alone in performance tables and floor standards’. In other words, the progress measure itself becomes optional, which would appear to run counter to one of Bew’s principles articulated at the beginning of the document and to support Husbands’ line.
- Assuming a Reception baseline check, making end of KS1 tests non-statutory for primary schools, while retaining statutory tests for infant schools because of their need for such an accountability measure and to provide a baseline for junior schools. KS1 tests would still be available for primary schools to use on an optional basis.
Much of the criticism of the document has focused on the Reception baseline proposal, especially concern that the check will be too demanding for the young children undertaking it. On the face of it, this seems rather unreasonable, but the document is at fault by not specifying more clearly what exactly such a check would entail.
The penultimate section addresses performance tables and floor standards. It begins with the usual PISA-referenced arguments for a high autonomy, high accountability system, mentions again the planned data portal and offers continuing commitments to performance tables and floor standards alike.
It includes the statement that:
‘In recent years, we have made the floor both more challenging and fairer, by including a progress element’
even though the text has only just suggested making the progress element optional!
The section on floor standards begins with the exhortation that:
‘All primary schools should ensure that as many pupils as possible leave secondary ready.’
It repeats the intention to raise expectations by increasing the height of the hurdle:
‘We therefore propose a new requirement that 85% of pupils should meet the secondary readiness standard in all the floor standard measures (including writing teacher assessment). This 85% attainment requirement will form part of the floor standard. This standard challenges the assumption that some pupils cannot be secondary ready after seven years of primary school. At the same time it allows some flexibility to recognise that a small number of pupils may not meet the expectations in the curriculum because of their particular needs, and also that some pupils may not perform at their best on any given test day.’
So the 85% threshold is increased from 60% and the standard itself will be calibrated on the current Level 4b rather than 4c. This represents a hefty increase in expectations.
The text above appears to suggest that all pupils should be capable of becoming ‘secondary-ready’, regardless of their baseline – whether in Year R or Year 2 – apart from the group with particular unspecified needs. But, this time round, there is also allowance for a second group who might underperform on the day of the test.
Once again, the justification for a parallel progress measure is not to ensure consistency with the Bew principles, but to offer schools with ‘particularly challenging intakes’ a second string to their bows in the form of a progress measure. The precise wording is:
‘We therefore propose that schools would also be above floor standards if they have good progress results.’
Does this mean that schools only have to satisfy one of the two measures, or both? This is not absolutely clear, but the sentence construction is perhaps more consistent with the former than with the latter.
If we are right, this is substantively different to the requirements in place for 2013 and announced for 2014:
‘In key stage 2 tests in 2014, primary schools will be below the floor standard if:
- fewer than 65% of its pupils achieve Level 4 or above in reading, writing and maths, and
- it is below the England median for progression by two levels in reading, in writing, and in maths.
*Results in the new grammar, punctuation and spelling test are likely to be part of the floor standard in 2014.
For tests taken this year, primary schools will be below the floor standard if:
- fewer than 60% of its pupils achieve Level 4 or above in reading, writing and maths, and
- it is below the England median for progression by two levels in reading, in writing, and in maths.
*Results in the new grammar, punctuation and spelling test will not be part of the floor standard this year.’
It is also substantively different to the new arrangements proposed for secondary schools.
Slightly later on, the text explains that schools which exceed the floor target on the basis of progression, while falling below the 85% secondary-ready threshold, will be more likely to be inspected by Ofsted than those exceeding this threshold.
However, Ofsted will also look at progress measures, and:
‘Schools in which low, middle and high attaining pupils all make better than average progress will be much less likely to be inspected.’
The text argues that:
‘Progress measures mean that the improvements made by every pupil count – there is no perverse incentive to focus exclusively on pupils near the borderline of an attainment threshold.’
But, assuming the progression target only comes into play for schools with ‘particularly challenging intakes’, the large majority will have no protection against this perverse incentive unless an optional APS measure is also introduced (see below).
As already stated, the progress measure will be derived from comparison with the average scaled scores of those with similar prior attainment at the baseline – in essence the aggregation of the third element in reporting to parents. Exactly how this aggregation will be calculated is not explained.
Of course, an average measure like this does not preclude schools from giving disproportionately greater attention to learners at different points on the attainment spectrum and comparatively neglecting others.
Unless the performance tables distinguish progress by high attainers, they are likely to lose out, as are those never likely to achieve the ‘secondary-ready’ attainment threshold. More on this below.
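To make the ambiguity concrete, here is a minimal sketch of how such an aggregated progress measure might be computed. The grouping by a single baseline score, the use of a simple mean and the re-centring on 100 are all my assumptions for illustration; the document specifies none of them:

```python
from collections import defaultdict

def school_progress_measure(national, school_pupils):
    """Illustrative only: each pupil's KS2 scaled score is compared with
    the national average KS2 scaled score of pupils who had the same
    baseline (prior attainment) score; the school's measure is the mean
    of those pupil-level differences, re-centred on 100."""
    # National average KS2 scaled score for each baseline score
    totals = defaultdict(lambda: [0.0, 0])
    for baseline, ks2_score in national:
        totals[baseline][0] += ks2_score
        totals[baseline][1] += 1
    expected = {b: total / count for b, (total, count) in totals.items()}

    # Pupil-level progress = actual score minus expected score for that baseline
    diffs = [ks2 - expected[baseline] for baseline, ks2 in school_pupils]
    return 100 + sum(diffs) / len(diffs)
```

On this construction a school whose pupils exactly match national expectations would score 100, which at least sits comfortably with the document's suggestion that the floor might be set ‘between 98.5 and 99’, i.e. slightly below average progress.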
The precise scaled score for the progress element of the floor standard is yet to be determined, but is expected ‘to be between 98.5 and 99’:
‘Our modelling suggests that a progress measure set at this level, combined with the 85% threshold attainment measure, would result in a similar number of schools falling below the floor as at present. Over time we will consider whether schools should make at least average progress as part of floor standards.’
So the progress element of the standard will be set slightly below average progress to begin with, perhaps to compensate for the much higher attainment threshold. This may support the argument that progress plays second fiddle to attainment.
Finally, the idea of incorporating an ‘average point score attainment measure’ in floor targets is floated:
‘Schools would be required to achieve either the progress measure or both the threshold and average point score attainment measure to be above the floor. This would prevent schools being above floor standards by focusing on pupils close to the expected standard, and would encourage schools to maximise the achievement of all their pupils. Alternatively we could publish the average point score to inform inspections and parents’ choices, but not include the measure in hard accountability.’
The first part of this paragraph reinforces the interpretation that the floor standard is now to be based either on the attainment threshold or the progress measure, but not both. But, under this option, the threshold measure could have an additional APS component to protect against gaming the threshold.
That goes some way towards levelling the playing field in terms of attainment, but of course it does nothing to support a balanced approach to progression in the vast majority of schools.
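For clarity, the competing readings can be pinned down in boolean terms. The sketch below encodes the ‘either/or’ interpretation, with the optional APS component bolted on; the numeric values for the progress and APS thresholds are illustrative assumptions, not figures from the document:

```python
# Illustrative thresholds: the 85% figure is from the consultation document;
# the progress floor is taken from its 'between 98.5 and 99' range; the APS
# figure is invented purely for illustration.
ATTAINMENT_THRESHOLD = 0.85   # 85% of pupils 'secondary ready'
PROGRESS_THRESHOLD = 98.5     # lower end of the suggested range
APS_THRESHOLD = 100.0         # hypothetical average point score floor

def above_floor(pct_secondary_ready, progress_score, aps, with_aps_option=False):
    """'Either/or' reading: a school is above the floor if it clears the
    progress measure, OR the attainment threshold (plus, under the APS
    option, the average point score measure as well)."""
    if progress_score >= PROGRESS_THRESHOLD:
        return True
    attainment_ok = pct_secondary_ready >= ATTAINMENT_THRESHOLD
    if with_aps_option:
        return attainment_ok and aps >= APS_THRESHOLD
    return attainment_ok
```

Under the secondary proposals, by contrast, a school must clear both the threshold and the progress measure, which is precisely the cross-phase inconsistency at issue.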
The treatment of performance tables begins with a further reference to the supporting ‘data portal’ that will include material about ‘the attainment of certain pupil groups’. This is designed to reduce pressure to overload the tables with information, but may also mean the relegation of data about the comparative performance of those different groups.
The description of ‘headline measures’ to be retained in the tables includes, presumably for each test:
- the percentage of learners who meet ‘the secondary readiness standard’;
- the school’s average scaled score, comparing it with the average score for the national cohort;
- the rate of progress of pupils in the school.
There will also be a ‘high attainer’ measure:
‘We will also identify how many of the school’s pupils are among the highest-attaining nationally, by including a measure showing the percentage of pupils attaining a high scaled score in each subject.’
The pitch of this high scaled score is not mentioned. It could be set low – broadly the top third, as in the current ‘high attainer’ measure – or at a somewhat more demanding level. This is a significant omission and clarification is required.
Statutory teacher assessment outcomes will also be published (though some at least may follow schools’ chosen assessment systems rather than utilise scaled scores – see above).
All annual results will also be accompanied by three year rolling averages, to improve the identification of trends and protect small schools in particular from year-on-year fluctuation related to the quality of intake. There is an intention to extend rolling averages to floor targets once the data is available.
All these measures will be shown separately for those eligible for the Pupil Premium. This means that, for the first time, high attainers amongst this group will be distinguished, so it will be possible to see the size of any ‘excellence gap’. This is an important and significant change.
There will also be a continuation of the ‘family of schools’ approach – comparing schools with others that have a similar intake – recently integrated into the current Performance Tables.
The Pupil Premium will be increased:
‘To close the attainment gap between disadvantaged pupils and their peers and to help them achieve these higher standards…Schools have the flexibility to spend this money in the best way possible to support each individual child to reach his or her potential.’
So, despite the rider in the second sentence, the purpose of the Premium is now two-fold.
In practice this is likely to mean that schools at risk of being below the standard will focus the Premium disproportionately on those learners that are not deemed ‘secondary-ready’, which further supports the Husbands theory.
Recognising the attainment and progress of all pupils
Rather disappointingly, this final short section is actually exclusively about low attainers and those with SEN – presumably amongst those who will not be able to demonstrate that they are ‘secondary ready’.
It tells us that access arrangements are likely to be unchanged. Although the new KS2 tests will be based on the entire PoS:
‘Even if pupils have not met the expectations for the end of the key stage, most should be able to take the tests and therefore most will have their attainment and progress acknowledged’.
There will also be ‘a small minority’ currently assessed via the P-scales. There is a commitment to explore whether the P-scales should be adjusted to ‘align with the revised national curriculum’.
There is an intention to publish data about the progress of pupils with very low prior attainment, though floor standards will not be applied to special schools. The document invites suggestions for what data should be published for accountability purposes.
Primary Assessment and Accountability: Issues and Omissions
The extended analysis above reveals a plethora of issues with the various measures proposed within the consultation document.
Equally, it ignores some important questions raised by material already published, especially the parallel secondary consultation document.
So we have a rather distorted picture with several missing pieces.
The longer first section below draws together the shortcomings in the argument constructed by the consultation document. I have organised these thematically rather than present them in order of magnitude – too many are first order issues. I have also included Labour’s response to the document.
The shorter second section presents the most outstanding unanswered questions arising from the relationship between this document and the materials published earlier.
Issues arising from the consultation document
The multiple issues of concern include:
- The core purpose of the Pupil Premium in primary schools: Is it to narrow attainment gaps between advantaged and disadvantaged learners, or to push the maximum number of schools over more demanding floor targets by delivering more ‘secondary ready’ pupils, regardless of disadvantage? There is much evidence to support the Husbands argument that the Premium ‘is now more clearly a fund to secure threshold levels of attainment.’ There is some overlap between the two objectives – though not as much as we commonly think, as the IPPR report quoted above points out. Chasing both simultaneously will surely reduce the chances of success on each count. That does not bode well for the Government’s KPIs.
- The definition of ‘secondary ready’: This is based exclusively on an attainment measure derived from scores achieved in once-only tests in maths and aspects of English, plus teacher assessment in writing. It is narrow in a curricular sense, but also in the sense that it defines readiness entirely in terms of attainment, even though the document admits that this is ‘the single most important outcome’ rather than the only outcome.
- The pitch of the new attainment threshold for the floor target: The level of demand has been ratcheted up significantly, by increasing the height of the hurdle from Level 4c to Level 4b-equivalent and increasing the percentage of pupils required to reach this level by 25 percentage points, from 60% to 85%. The consultation document says unpublished modelling suggests combining this with fixing the proposed progress measure a percentage or two below the average ‘would result in a similar number of schools falling below the floor as at present’. It would be helpful to see hard evidence that this is indeed the case. Given that the vast majority of schools will be judged against the floor standard solely on the attainment measure (see below), there are grounds for contesting the assertion.
- Whether the proposed floor target consists of two measures or one of two measures: There is considerable ambiguity within the consultation document on this point, but the weight of evidence suggests that the latter applies, and that progression is only to be brought into the equation when schools ‘have particularly challenging intakes’. This again supports the Husbands line. It is a significant change from current arrangements in the primary sector and is also materially different to proposed arrangements for the secondary sector. It ought to be far more explicit as a consequence.
- The risk of perverse incentives in the floor targets: The consultation document points out that inclusion of a progress measure reduces a perverse incentive to focus exclusively or disproportionately on learners near the borderline of the attainment threshold. But if the progress measure is only to apply to a small (but unquantified) minority of schools with the most demanding intakes, the perverse incentive remains in place for most. In any case, a measure that focuses on average progress across the cohort does not necessarily militate against disproportionate attention to those at the borderline.
- Which principles are the core principles? We were promised a set of such principles in the piece quoted above on ‘Assessment without levels’. Instead we seem to have a set of ‘key principles’ on which ‘the proposals in this consultation are based’, these being derived from Bew (paragraph 1.5) and some additional points that the main text concedes do not themselves qualify as core principles (paragraph 3.7). Yet the consultation question about core principles follows directly beneath the latter and, moreover, calls them principles! This is confusing, to say the least.
- Are the core principles consistently followed? This depends of course on what counts as a core principle. But if one of those principles is Bew’s insistence that ‘measures of progress should be given at least as much weight as attainment’, that does not seem to apply to the treatment of floor targets in the document, where the attainment threshold trumps the progress measure. If one of the core proposals runs counter to the proposed principles, that is clearly a fundamental flaw.
- Implications of a choice of in-house assessment schemes: Schools will be able to develop their own schemes or else draw on commercially available products. One possibility is that the market will become increasingly dominated by a few commercial providers who profit excessively from this arrangement. Another is that hundreds of alternative schemes will be generated and there will be very little consistency between those in use in different schools. This will render primary-secondary transition and in-phase transfer much more complex, especially for ‘outlier’ learners. It seems that this downside of a market-driven curriculum and assessment model has not been properly quantified or acknowledged.
- Whether scaled scores apply to statutory teacher assessment: We know that the results of teacher assessment in writing will feature in the new floor target, alongside the outcomes of tests which attract a new-style scaled score. But does this imply that all statutory teacher assessment will attract similar scaled scores, or will it be treated as ‘ongoing assessment’? I might have missed it, but I cannot find an authoritative answer to this point in the document.
- Whether the proposed tripartite report to parents is easier to understand than existing arrangements: This is a particularly significant issue. The argument that the system of National Curriculum levels was not properly understood is arguably a fault of poor communication rather than inherent to the system itself. It is also more than arguable that the alternative now proposed – comprising a scaled score, decile and comparative scaled score in each test – is at least as hard for parents to comprehend. There is no interest in converting this data into a simple set of proxy grades with an attainment and a progression dimension, as I have proposed. The complexity is compounded because schools’ internal assessment systems may well be completely different. Parents are currently able to understand progress within a single coherent framework. In future they will need to relate one system for in-school assessment to another for end of key stage assessment. This is a major shortcoming that is not properly exposed in the document.
- Whether decile-based differentiation is sufficient: Parents arguably have a right to know in which percentile their children’s performance falls, rather than just the relevant decile. At the top of the attainment spectrum, Level 6 achievement is more differentiated than a top decile measure, in that those who pass the test are a much more selective group than the top ten percent. The use of comparatively vague deciles may be driven by concern about labelling (and perhaps also some recognition of the unreliability of more specific outcomes from this assessment process). The document insists that only parents will be informed about deciles, but it does not require a soothsayer to predict that learners will come to know them, just as they know their levels. (The secondary consultation document sees virtue in older learners knowing and using their ‘APS8 score’ so what is different?) In practice it is hard to imagine a scenario where those in possession of percentile rankings could withhold this data if a parent demanded it.
- Norm versus criterion-referencing: Some commentators appear relatively untroubled by a measure of progress that rests entirely on comparison between a learner and his peers. They suppose that most parents are most concerned whether their child is keeping up with their peers, rather than whether their rate of progress is consistent with some abstract measure. That may be true – and it may be also too difficult to design a new progress measure that applies consistently to the non-linear development of every learner, regardless of their prior attainment. On the other hand, it does not seem impossible to contemplate a measure of progress associated with the concept of ‘mastery’ that is now presumed to underpin the National Curriculum, since its proponents are clear that ‘mastery’ does not hold back those who are capable of progressing further and faster.
- Development of tests to suit all abilities and the risk of ceiling effects: There must be some degree of doubt whether universal tests are the optimal approach to assessment for the full attainment spectrum, especially for those at either end, particularly in maths where the span of the spectrum is huge. The document contains an assurance that the new tests will be at least as demanding as existing Level 6 tests, so single tests will aim to accommodate six levels of attainment in old money. Is that feasible? Despite the assurance, the risk of undesirable ceiling effects is real and of particular concern for the highest attainers.
- Where to pitch the baseline: The arguments in favour of a Year R baseline – and the difficulties associated with implementing one – have attracted the lion’s share of the criticism directed at the paper, which has rather served to obscure some of its other shortcomings. The obvious worry is that the baseline check will be either disproportionate or unreliable – and quite possibly both. Most of the focus is on the overall burden of testing: the document floats a variety of ideas that would add another layer of fragmentation and complexity, such as making the check optional, making KS1 tests optional and providing different routes for stand-alone infant/junior schools and all-through primaries.
- The nature of the baseline check: Conversely, the consultation document is unhelpfully coy about the nature of the check required. If it had made a better fist of describing the likely parameters of the check, exaggerated concerns about its negative impact on young children might have been allayed. Instead, the focus on the overall testing burden leads one to assume that the Year R check will be comparatively onerous.
- How high attainers will be defined in the performance tables: There are welcome commitments to a ‘high attainer’ measure for each test, based on scaled scores, and the separate publication of this measure for those in receipt of the Pupil Premium. But we are given no idea where the measure will be pitched, nor whether it will address progress as well as attainment. One obvious approach would be to use the top decile, but that runs against an earlier commitment not to incorporate the deciles in performance tables, despite there being no obvious reason why this should be problematic, assuming that anonymity can be preserved (which may not be possible in smaller cohorts). It would be particularly disappointing if high attainers continue to be defined as around one third of the cohort – say the top three deciles, but that may be the path of least resistance.
There are also more technical assessment issues – principally associated with the construction of the scaled score – which I leave it to assessment experts to analyse.
Labour’s response to the consultation document picks up some of the wider concerns above. Their initial statement focused on the disappearance of ‘national statements of learning outcomes’, how a norm-referenced approach would protect standards over time and the narrowness of the ‘secondary-ready’ concept.
A subsequent Twigg article begins with the latter point, bemoaning the Government’s:
‘Backward looking vision, premised on rote-learning and a failure to value the importance of the skills and aptitudes that young people need to succeed’.
It moves on to oppose the removal of level descriptors:
‘There might be a case to look at reforming level descriptors to ensure sufficient challenge but scrapping them outright is completely misguided and will undermine standards in primary schools’
and the adoption of norm-referenced ranking into deciles:
‘By ranking pupils against others in their year – rather than against set, year-on-year standards – this will lead to distortions from one year to another. There is not a sound policy case for this.’
But it offers support for changing the baseline:
‘I have been clear that I want to work constructively on the idea of setting baseline assessments at 5. There is a progressive case for doing this. All-too-often it is the case that the prior attainment of children from socially-deprived backgrounds is much lower than for the rest. It is indeed important that schools are able to identify a baseline of pupil attainment so that teachers can monitor learning and challenge all children to reach their potential.’
Unfortunately, this stops short of a clear articulation of Labour policy on any of these three points, though it does suggest that several aspects of these reforms are highly vulnerable should the 2015 General Election go in Labour’s favour.
There are several outstanding questions within the section above, but also a shorter list of issues relating to the interface between the primary assessment and accountability consultation document, its secondary counterpart and the National Curriculum proposals. Key amongst them are:
- Consistency between the primary and secondary floor targets: The secondary consultation is clear ‘that schools should have to meet a set standard on both the threshold and progress measure to be above the floor’. There is no obvious justification for adopting an alternative threshold-heavy approach in the primary sector. Indeed, it is arguable that the principle of a floor relies on broad consistency of application across phases. Progression across the attainment spectrum in the primary phase should not be sacrificed on the altar of a single, narrow ‘secondary ready’ attainment threshold.
- How the KS2 to KS4 progress measure will be calculated: While the baseline-KS2 progress measure may be second order for the purposes of the primary floor, the KS2-KS4 progression measure is central to the proposals in the secondary consultation document. We now know that this will be based on the relationship between the KS2 scaled score and the APS8 measure. But there is no information about how these two different currencies will be linked together. Will the scaled score be extended into KS3 and KS4 so that GCSE grades are ‘translated’ into higher points on the same scale? Further information is needed before we can judge the appropriateness of the proposed primary scaled scores as a baseline.
- How tests will be developed from singleton attainment targets: The process by which tests will be developed in the absence of a framework of level descriptions and given single ‘lowest common denominator’ attainment targets for each programme of study remains shrouded in mystery. This is not simply a dry technical issue, because it informs our understanding of the nature of the tests proposed. It also raises important questions about the relationship academies will need to have with programmes of study that – ostensibly at least – they are not required to follow. One might have hoped that the primary document would throw some light on this matter.
Because there has been no effort to link together the proposals in the primary and secondary consultation documents (and we still await a promised post-16 document) there are significant outstanding questions about cross-phase consistency and, especially, the construction of the KS2-KS4 progress measure.
I have identified no fewer than sixteen significant issues with the proposals in the primary consultation document. Several of these are attributable to a lack of clarity within the text, not least over the core principles that should be applied across the piece to ensure policy coherence and internal consistency between different elements of the package. This is a major shortcoming.
The muddle and obfuscation over the nature of the floor target is an obvious concern, together with the decision to hitch the Pupil Premium to the achievement of the floor, as well as to narrowing achievement gaps. There is a fundamental tension here that needs to be unpacked and addressed.
The negative impact of the removal of the underpinning framework ensuring consistency between statutory end of key stage assessment and end-year assessment in schools has been underplayed. There is significant downside to balance against any advantages from greater freedom and autonomy, but this has not been spelled out.
The case for the removal of levels has been asserted repeatedly, despite a significant groundswell of professional opinion against it, stretching back to the original response to consultation on the recommendations of the Expert Panel. There may be reason to believe that Labour would reverse this decision.
While there is apparently cross-party consensus on the wisdom of shifting the KS1 baseline to Year R, big questions remain about the nature of the ‘baseline check’ required.
Despite some positive commitments to make the assessment and accountability regime ‘high attainer friendly’ there are also significant reservations about how high attainment will be defined and reported.
On a scaled score from 80 to 130, I would rate the Government at 85 and, with some benefit of the doubt, put the Opposition at 100.
In a nutshell…
We have perhaps two-thirds of the bigger picture in place, though some parts are distinctly fuzzy.
The secondary proposals are much more coherent than those for the primary sector and these two do not fit together well.
The primary proposals betray an incoherent vision and vain efforts to reconcile irreconcilably divergent views. It is no surprise that they were extensively delayed, only to be published in the last few days of the summer term.
Has this original June 2012 commitment been met?
‘In terms of statutory assessment, however, I believe that it is critical that we both recognise the achievements of all pupils, and provide for a focus on progress. Some form of grading of pupil attainment in mathematics, science and English will therefore be required, so that we can recognise and reward the highest achievers as well as identifying those that are falling below national expectations.’
We have scores rather than grading and they don’t extend to science. High achievers will receive attention but we don’t know whether they will be the highest achievers or a much broader group.
Regrettably then, the answer is no.