This post tracks progress towards implementation of the primary assessment and accountability reforms introduced by England’s Coalition Government.
It considers the possibility of delay as a consequence of the May 2015 General Election and the potential impact of a new government with a different political complexion.
An introductory section outlines the timeline for reform. This is followed by seven thematic sections dealing with:
There are page jumps from each of the bullets above, should readers wish to refer to these specific sections.
Each section summarises briefly the changes and commitments set out in the consultation response (and in the original consultation document where these appear not to have been superseded).
Each then reviews in more detail the progress made to date, itemising the tasks that remain outstanding.
I have included deadlines for all outstanding tasks. Where these are unknown I have made a ‘best guess’ (indicated by a question mark after the date).
I have done my best to steer a consistent path through the variety of material associated with these reforms, pointing out apparent conflicts between sources wherever these exist.
A final section considers progress across the reform programme as a whole – and how much remains to be done.
It discusses the likely impact of Election Purdah and the prospects for changes in direction consequent upon the outcome of the Election.
I have devoted previous posts to ‘Analysis of the Primary Assessment and Accountability Consultation Document’ (July 2013) and to the response in ‘Unpacking the Primary Assessment and Accountability Reforms’ (April 2014) so there is inevitably some repetition here, for which I apologise.
This is a long and complex post, even by my standards. I have tried to construct the big picture from a variety of different sources, to itemise all the jigsaw pieces already in place and all those that are still missing.
If you spot any errors or omissions, do let me know and I will do my best to correct them.
[Postscript: Please note that I have added several further postscripts to this document since the original date of publication. If you are revisiting, do pause at the new emboldened paragraphs below.]
Timeline for Reform
The consultation document ‘Primary assessment and accountability under the new national curriculum’ was published on 7 July 2013.
It contained a commitment to publish a response in ‘autumn 2013’, but ‘Reforming assessment and accountability for primary schools’ did not appear until March 2014.
The implementation timetable has to be inferred from a variety of sources but seems to be as shown in the table below. (I have set aside interim milestones until the thematic sections below.)
| Date | Milestone |
|------|-----------|
| Sept 2014 | Schools no longer expected to use levels for non-statutory assessment |
| May 2015 | End of KS1 and KS2 national curriculum tests and statutory teacher assessment reported through levels for the final time |
| Summer term 2015 | Final 2016 KS1 and KS2 test frameworks, sample materials and mark schemes published; guidance published on reporting of test results |
| Sept 2015 | Schools can use approved reception baseline assessments (or a KS1 baseline) |
| Sept/Autumn term 2015 | New performance descriptors for statutory teacher assessment published |
| Dec 2015 | Primary Performance Tables use levels for the final time |
| May 2016 | New KS1 and KS2 tests introduced, reported through new attainment and progress measures |
| June 2016 | Statutory teacher assessment reported through new performance descriptors |
| Sept 2016 | Reception baseline assessment the only baseline option for all-through primaries; schools must publish new headline measures on their websites; new floor standards come into effect (with progress element still derived from KS1 baseline) |
| Dec 2016 | New attainment and performance measures published in Primary Performance Tables |
The General Election takes place on 7 May 2015, but pre-Election Purdah will commence on 30 March, almost exactly a year on from publication of the consultation response.
At the time of writing, some 40 weeks have elapsed since the response was published – and there are some 10 weeks before Purdah descends.
Assuming that the next Government is formed within a week of the Election (which might be optimistic), there is a second working period of roughly 10 weeks between that and the end of the AY 2014/15 summer term.
The convention is that all significant assessment and accountability reforms are notified to schools a full academic year before implementation, giving them sufficient time to plan.
A full year’s lead time is no longer sacrosanct (and has already been set aside in some instances below) but any shorter notification period may have significant implications for teacher workload – something that the Government is committed to tackling.
[Postscript: On 6 February the Government published its response to the Workload Challenge, which contained a commitment to introduce, from ‘Spring 2015’, a:
‘DfE Protocol setting out minimum lead-in times for significant curriculum, qualifications and accountability changes…’
Elsewhere the text says that the minimum lead time will be a year, thus reinforcing the convention described above.
The term ‘significant’ allows some wriggle room, but one might reasonably expect it to be applied to some of the outstanding actions below.
The Protocol was published on 23 March. The first numbered paragraph implicitly defines a significant change as one having ‘a significant workload impact on schools’, though what constitutes significance (and who determines it) is left unanswered.
There is provision for override ‘in cases where change is urgently required’ but criteria for introducing an override are not supplied.]
We now know that a minimum lead time will not be applied to the introduction of new performance descriptors for statutory teacher assessment (see below). The original timescale allowed less than a year’s notice, and it has not been adjusted in the light of consultation.]
Announcements made during the long summer holiday are much disliked by schools, so the end of summer term 2015 becomes the de facto target for any reforms requiring implementation from September 2016.
One might therefore conclude that:
- We are about two-thirds of the way through the main implementation period.
- There is a period of some 100 working days in which to complete the reforms expected to be notified to schools before the end of the AY2014/15 summer term. This is divided into two windows of some 50 working days on either side of Purdah.
- There is some scope to extend more deadlines into the summer break and autumn 2015, but the costs of doing so – including loss of professional goodwill – might outweigh the benefits.
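The arithmetic behind those two windows can be checked with a short script. The dates below are my own assumptions (a mid-January writing date, Government formed a week after the 7 May Election, term ending in late July), and bank holidays are ignored, so treat the figures as rough:

```python
from datetime import date, timedelta

def working_days(start: date, end: date) -> int:
    """Count Mon-Fri days in [start, end), ignoring bank holidays."""
    days = 0
    d = start
    while d < end:
        if d.weekday() < 5:  # 0-4 are Monday to Friday
            days += 1
        d += timedelta(days=1)
    return days

# Window 1: assumed time of writing to the start of Purdah (30 March 2015)
window_1 = working_days(date(2015, 1, 19), date(2015, 3, 30))
# Window 2: a week after the Election to an assumed end of summer term
window_2 = working_days(date(2015, 5, 14), date(2015, 7, 22))
print(window_1, window_2)  # two windows of roughly 50 working days each
```

Shifting the assumed dates by a week or two either way still leaves two windows of about 50 working days, which is the point of the estimate.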
Purdah will act as a brake on progress across the piece. It will delay announcements that might otherwise have been made in April and early May, such as those related to new tests scheduled for May 2016.
The implications of Purdah are discussed further in the final section of this post.
Reception Baseline Assessment
A new Reception Baseline will be introduced from September 2015. This will be undertaken by children within their first few weeks of school (so not necessarily during the first half of the autumn term).
Teachers will be able to select from a range of assessments ‘but most are likely to be administered by the reception teaching staff’. Assessments will be ‘short’ and ‘sit within teachers’ broader assessments of children’s development’.
They will be:
‘…strong predictors of key stage 1 and key stage 2 attainment whilst reflecting the age and abilities of children in reception’
Schools that use an approved baseline assessment ‘in September 2015’ (and presumably later during the 2015/16 academic year) will have their progress measured in 2022 against that or a KS1 baseline, whichever gives the best result.
However, only the reception baseline will be available from September 2016 and, from this point, the Early Years Foundation Stage (EYFS) profile will no longer be compulsory.
The reception baseline will not be compulsory either, since:
‘Schools that choose not to use an approved baseline assessment from 2016 will be judged on an attainment floor standard alone.’
But, since the attainment floor standard is so demanding (see below), this apparent choice may prove illusory for most schools.
Further work includes:
- Engaging experts to develop criteria for the baselines.
- A study in autumn 2014 of schools that already use such assessments, to inform decisions on moderation and the reporting of results to parents.
- Communicating those decisions about moderation and reporting results – to Ofsted as well as to parents – ensuring they are ‘contextualised by teachers’ broader assessments’.
- Publishing a list of assessments that meet the prescribed criteria.
Developments to date
Baseline criteria were published by the STA in May 2014.
The purpose of the assessments is described thus:
‘…to support the accountability framework and help assess school effectiveness by providing a score for each child at the start of reception which reflects their attainment against a pre-determined content domain and which will be used as the basis for an accountability measure of the relative progress of a cohort of children through primary school.’
This emphasis on the relevance of the baseline to floor targets is in marked contrast with the emphasis on reporting progress to parents in the original consultation document.
Towards the end of the document here is a request for ‘supporting information in addition to the criteria’:
‘What guidance will suppliers provide to schools in order to enable them to interpret the results and report them to parents in a contextualised way, for example alongside teacher observation?’
This seems to refer to the immediate reporting of baseline outcomes rather than of subsequent progress measures. Suitability for this purpose does not appear within the criteria themselves.
Interestingly, the criteria specify that the content domain:
‘…must demonstrate a clear progression towards the key stage 1 national curriculum in English and mathematics’,
but there is no reference to progression to KS2, and nothing about assessments being ‘strong predictors’ of future attainment, whether at KS1 or KS2.
Have expectations been lowered, perhaps because of concerns about the predictive validity of the assessments currently available?
A research study was commissioned in June 2014 (so earlier than anticipated) with broader parameters than originally envisaged.
The Government awarded a 9-month contract to NFER worth £49.7K, to undertake surveys of teachers’, school leaders’ and parents’ views on baseline assessment.
The documentation reveals that CEM is also involved in a parallel quantitative study which will ‘simulate an accountability environment’ for a group of schools, to judge changes in their behaviour.
Both of these organisations are also in the running for concession contracts to deliver the assessments from September 2015 (see below).
The aims of the project are to identify:
- The impact of the introduction of baseline assessments in an accountability context.
- Challenges to the smooth introduction of baseline assessments as a means to constructing an accountability measure.
- Potential needs for monitoring and moderation approaches.
- What reporting mechanisms and formats stakeholders find most useful.
Objectives are set out for an accountability strand and a reporting strand respectively. The former refer explicitly to identification of ‘gaming’ and the exploration of ‘perverse incentives’.
It is not entirely clear from the latter whether researchers are focused solely on initial contexualised reporting of reception baseline outcomes, or are also exploring the subsequent reporting of progress.
The full objectives are reproduced below.
The final ‘publishable’ report is to be delivered by March 2015. It will be touch and go whether this can be released before Purdah descends. Confirmation of policy decisions based on the research will likely be delayed until after the Election.
The process has begun to identify and publish a list of assessments that meet the criteria.
A tender appeared on Contracts Finder in September 2014 and has been updated several times subsequently, the most recent version appearing in early December.
The purpose is to award several concession contracts, giving holders the right to compete with each other to deliver baseline assessments.
Contracts were scheduled to be awarded on 26 January 2015, but there was no announcement. Each will last 19 months (to August 2016), with an option to extend for a further year. The total value of the contracts, including extensions, is calculated at £4.2m.
There is no limit to the number of concessions to be awarded, but providers must meet specified (and complex) school recruitment and delivery targets which essentially translate into a 10% sample of all eligible schools.
Under-recruiting providers can be included if fewer than four meet the 10% target, as long as they have recruited at least 1,000 eligible schools.
‘The minimum volume requirement may be waived if the number of schools choosing to administer the reception baseline is fewer than 8,887 [50% of the total number of schools with a reception class].’
Hence the number of suppliers in the market is likely to be limited to 10 or so: there will be some choice, but not too much.
My online researches unearthed four obvious candidates:
And suggestions that this might constitute the entire field.
The initial deadline for recruiting the target number of schools is 30 April 2015, slap-bang in the middle of Purdah. This may prove problematic.
[Postscript: The award of six concession contracts was quietly confirmed on Wednesday 4 February, via new guidance on DfE’s website. The two contractors missing from the list above are Early Excellence and Hodder Education.
The guidance confirms that schools must sign up with their preferred supplier. They can do so after the initial deadline of 30 April but, on 3 June, schools will be told if they have chosen a provider that has been suspended for failing to recruit sufficient schools. They will then need to choose an alternative provider.
It adds that, in AY2015/16, LA-maintained schools, academies and free schools will be reimbursed for the ‘basic cost’ of approved reception baselines. Thereafter, school budgets will include the necessary funding.
In the event, the Government has barely contributed to publicity for the assessment, leaving it to suppliers to make the running. The initial low-key approach (including links to the contractors’ home pages rather than to details of their baseline offers) has been maintained.
The only addition to the guidance has been the inclusion, from 20 March, of the criteria used to evaluate the original bids. This seems unlikely to help schools select their preferred solution since, by definition, all the successful bids must have satisfied these criteria!
Purdah will now prevent any further Government publicity.]
It seems likely that the decision to allow a range of baseline assessments – as opposed to a single national measure – will create significant comparability issues.
One of the ‘clarification questions’ posed by potential suppliers is:
‘We can find no reference to providing a comparability score between provider assessments. Therefore, can we assume that each battery of assessments will be independent, stand-alone and with no need to cross reference to other suppliers?’
The answer given is:
‘The assumption is correct at this stage. However, STA will be conducting a comparability study with successful suppliers in September 2015 to determine whether concordance tables can be constructed between assessments.’
This implies that progress measures will need to be calculated separately for users of each baseline assessment – and that these will be comparable only through additional ‘concordance tables’, should these prove feasible.
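To make the implication concrete, a ‘concordance table’ would be a mapping between suppliers’ scales. The sketch below is purely illustrative: no such tables exist yet, and every value here is invented; the STA comparability study would have to derive the real mappings (if they prove feasible at all):

```python
# Invented concordance table: a score on (hypothetical) supplier A's
# baseline expressed on (hypothetical) supplier B's scale. In practice
# the STA study would derive these values empirically.
CONCORDANCE_A_TO_B = {10: 12, 11: 14, 12: 15, 13: 17}

def on_supplier_b_scale(score_on_a: int) -> int:
    """Express a supplier-A baseline score on supplier B's scale."""
    return CONCORDANCE_A_TO_B[score_on_a]
```

Note that even this simple sketch assumes a clean one-to-one mapping; if the assessments measure somewhat different constructs, no such table may be defensible.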
There are associated administrative and workload issues for schools, particularly those with high mobility rates, which may find themselves needing to engage with several different baseline assessment products.
One answer to a supplier’s question reveals that:
‘As currently, children will be included in performance measures for the school in which they take their final assessment (i.e. key stage 2 tests) regardless of which school they were at for the input measure (i.e. reception baseline on key stage 1). We are currently reviewing how long a child needs to have attended a school in order for their progress outcome to be included in the measure.’
The issue of comparability also raises questions about the aggregation of progress measures for floor target purposes. Will targets based on several different baseline assessments be comparable with those based on only one? Will schools with high mobility rates be disadvantaged?
Schools will pay for the assessments. The supporting documentation says that:
‘The amount of funding that schools will be provided with is still to be determined. This will not be determined until after bids have been submitted to avoid accusations of price fixing.’
One of the answers to a clarification question says:
‘The funding will be available to schools from October 2015 to cover the reception baseline for the academic year 2015/16.’
Another says this funding is unlikely to be ringfenced.
There is some confusion over the payment mechanism. One answer says:
‘…the mechanism for this is still to be determined. In the longer term, money will be provided to schools through the Dedicated Schools Grant (DSG) to purchase the reception baseline. However, the Department is still considering options for the first year and may pay suppliers directly depending on the amount of data provided.’
But yet another is confident that:
‘Suppliers will be paid directly by schools. The Department will reimburse schools separately.’
The documentation also reveals that there has as yet been no decision on how to measure progress between the baseline and the end of KS2:
‘The Department is still considering how to measure this and is keen for suppliers to provide their thoughts.’
The ‘Statement of requirements’ once again foregrounds the use of the baseline for floor targets rather than reporting individual learners’ progress.
‘On 27 March 2014, the Department for Education (DfE) announced plans to introduce a new floor standard from September 2016. This will be based on the progress made by pupils from reception to the end of primary school. The DfE will use a new Reception Baseline Assessment to capture the starting point from which the progress that schools make with their pupils will be measured. The content of the Reception Baseline will reflect the knowledge and understanding of children at the start of reception, and will be clearly linked to the learning and development requirements of the Early Years Foundation Stage and key stage 1 national curriculum in English and mathematics. The Reception Baseline will be administered within the first half term of a pupil’s entry to a reception class.’
In relation to reporting to parents, one of the answers to suppliers’ questions states:
‘Some parents will be aware of the reception baseline from the national media coverage of the policy announcement. We anticipate that awareness of the reception baseline will develop over time. As with other assessments carried out by a school, we would expect schools to share information with parents if asked, though there will be no requirement to report the outcome of the reception baseline to parents.’
So it appears that, regardless of the outcomes of the research above, initial short term reporting of reception baseline outcomes will be optional.
[Postscript: This position is still more vigorously stated in a letter dated November 2014 from Ministers to a primary group formed by two maths associations. It says (my emphasis):
‘Let me be clear that we do not intend the baseline assessment to be used to monitor the progress of individual children. You rightly point out that any assessment that was designed to be reliable at individual child level would need to take into account the different ages at which children start reception and be sufficiently detailed to account for the variation in performance one expects from young children day-to-day. Rather, the baseline assessment is about capturing the starting point for the cohort which can then be used to assess the progress of that cohort at the end of primary school.’
This distinction has not been made sufficiently explicit in material published elsewhere.]
The overall picture is of a process in which procurement is running in parallel with research and development work intended to help resolve several significant and outstanding issues. This is a consequence of the September 2015 deadline for introduction, which seems increasingly problematic.
Particularly so given that many professionals are yet to be convinced of the case for reception baseline assessment, expressing reservations on several fundamental grounds, extending well beyond the issues highlighted above.
A January 2015 Report from the Centre Forum – Progress matters in Primary too – defends the plan against its detractors, citing six key points of concern. Some of the counter-arguments summarised below are rather more convincing than others:
- Validity: The contention that reception level assessments are accurate predictors of attainment at the end of KS2 is justified by reference to CEM’s PIPS assessment, which was judged in 2001 to give a correlation of 0.7. But of course KS2 tests were very different in those days.
- Reliability: The notion that attainment can be reliably determined in reception is again justified with reference to PIPS data from 2001 (showing a 0.98 correlation on retesting). The authors argue that the potentially negative effects of test conditions on young children and the risks of bias should be ‘mitigated’ (but not eliminated) through the development and selection process.
- Contextualisation: The risk of over-simplification through reporting a single numerical score, independent of factors such as age, needs to be set against the arguments in favour of a relatively simple and transparent methodology. Schools are free to add such context when communicating with parents.
- Labelling: The argument that baseline outcomes will tend to undermine universally high expectations is countered by the view that assessment may actually challenge labelling attributable to other causes, and can in any case be managed in reporting to parents by providing additional contextual information.
- Pupil mobility: Concern that the assessment will be unfair on schools with high levels of mobility is met by reference to planned guidance on ‘how long a pupil needs to have attended a school in order to be included in the progress measure’. However, the broader problems associated with a choice of assessments are acknowledged.
- Gaming: The risk that schools will artificially depress baseline outcomes will be managed through effective moderation and monitoring.
The overall conclusion is that:
‘…the legitimate concerns raised by stakeholders around the reliability and fairness of a baseline assessment do not present fundamental impediments to implementing the progress measure. Overall, a well-designed assessment and appropriate moderation could address these concerns to the extent that a baseline assessment could provide a reasonable basis for constructing a progress measure.
That said, the Department for Education and baseline assessment providers need to address, and, where indicated, mitigate the concerns. However, in principle, there is nothing to prevent a well-designed baseline test being used to create a progress-based accountability measure.’
The report adds:
‘However, this argument still needs to be won and teachers’ concerns assuaged…
…Since the majority of schools will be reliant on the progress measure under the new system, they need to be better informed about the validity, reliability and purpose of the baseline assessment. To win the support of school leaders and teachers, the Department for Education must release clear, defensible evidence that the baseline assessment is indeed valid, fair and reliable.’
[Postscript: On 25 March the STA tendered for a supplier to ‘determine appropriate models for assuring the national data from the reception baseline’. The notice continues:
‘Once models have been determined, STA will agree up to three approaches to be implemented by the supplier in small scale pilots during September/October 2015. The supplier will also be responsible for evaluating the approaches using evidence from the pilots with the aim of recommending an approach to be implemented from September 2016.’
The need for quality assurance is compounded by the fact that there are six different assessment models. The documentation makes clear that monitoring, moderation and other quality assurance methods will be considered.
The contract runs from 1 July 2015 to 31 January 2016 with the possibility of extension for a further 12 months. It will be let by 19 June.]
- Publish list of contracts for approved baseline assessments (26 January 2015) COMPLETED
- Explain funding arrangements for baseline assessments and how FY2015-16 funding will be distributed (January 2015?) COMPLETED
- Publish research on baseline assessment (March/April 2015)
- Confirm monitoring and moderation arrangements (March/April 2015?)
- Deadline for contractors recruiting schools for initial baseline assessments (30 April 2015)
- Publish guidance on the reporting of baseline assessment results (May 2015?)
- Award quality assurance tender (June 2015)
- Undertake comparability study with successful suppliers to determine whether concordance tables can be constructed (Autumn 2015)
- Determine funding required for AY2015/16 assessment and distribute to schools (or suppliers?) (October 2015?)
- Pilot quality assurance models (October 2015)
KS1 and KS2 tests
The new tests will comprise:
- At KS1 – externally set and internally marked tests of maths and reading and an externally set test of grammar, punctuation and spelling (GPS). It is unclear from the text whether the GPS test will be externally marked.
- At KS2 – externally set and externally marked tests of maths, reading and science, plus a sampling test in science.
Outcomes of both KS1 and KS2 tests (other than the science sampling test) will be expressed as scaled scores. A footnote makes it clear that, in both cases, a score of ‘100 will represent the new expected standard for that stage’
The consultation document says of the scaled scores:
‘Because it is not possible to create tests of precisely the same difficulty every year, the number of marks needed to meet the secondary readiness standard will fluctuate slightly from one year to another. To ensure that results are comparable over time, we propose to convert raw test marks into a scaled score, where the secondary readiness standard will remain the same from year to year. Scaled scores are used in all international surveys and ensure that test outcomes are comparable over time.’
It adds that the Standards and Testing Agency (STA) will develop the scale.
Otherwise very little detail is provided about next steps. The consultation response is silent on the issue. The original consultation document says only that:
‘The Standards and Testing Agency will develop new national curriculum tests, to reflect the new national curriculum programmes of study.’
Adding, in relation to the science sampling test:
‘We will continue with national sample tests in science, designed to monitor national standards over time. A nationally-representative sample of pupils will sit a range of tests, designed to produce detailed information on the cohort’s performance across the whole science curriculum. The design of the tests will mean that results cannot be used to hold individual schools or pupils accountable.’
Developments to date
On 31 March 2014, the STA published draft test frameworks for the seven KS1 and KS2 tests to be introduced from 2016:
- KS1 GPS: a short written task (20 mins); short answer questions (20 mins) and a spelling task (15 mins)
- KS1 reading: two reading tests, one with texts and questions together, the other with a separate answer booklet (2 x 20 mins)
- KS1 maths: an arithmetic test (15 mins) and a test of fluency, problem-solving and reasoning (35 mins)
- KS2 GPS: a grammar and punctuation test (45 mins) and a spelling task (15 mins)
- KS2 reading: a single test (60 mins)
- KS2 maths: an arithmetic test (30 mins) and two tests of fluency, problem-solving and reasoning (2 x 40 mins)
- KS2 science (sampling): tests in physics, chemistry and biology contexts (3 x 25 mins).
Each test will be designed for the full range of prior attainment and questions will typically be posed in order of difficulty.
Each framework explains that all eligible children at state-funded schools will be required to take the tests, but some learners will be exempt.
For further details of which learners will be exempted, readers are referred to the current Assessment and Reporting Arrangements (ARA) booklets.
According to these, the KS1 tests should be taken by all learners working at level 1 or above and the KS2 tests by all learners working at level 3 and above. Teacher assessment data must be submitted for pupils working below the level of the tests.
But of course levels will no longer exist – and we have no equivalent in the form of scaled scores – so the draft frameworks do not define clearly the lower parameter of the range of prior attainment the tests are intended to accommodate.
It will not be straightforward to design workable tests for such broad spans of prior attainment.
Each framework has a common section on the derivation of scaled scores:
‘The raw score on the test…will be converted into a scaled score. Translating raw scores into scaled scores ensures performance can be reported on a consistent scale for all children. Scaled scores retain the same meaning from one year to the next. Therefore, a particular scaled score reflects the same level of attainment in one year as in the previous year, having been adjusted for any differences in difficulty of the test.
Additionally, each child will receive an overall result indicating whether or not he or she has achieved the required standard on the test. A standard-setting exercise will be conducted on the first live test in 2016 in order to determine the scaled score needed for a child to be considered to have met the standard. This process will be facilitated by the performance descriptor… which defines the performance level required to meet the standard. In subsequent years, the standard will be maintained using appropriate statistical methods to translate raw scores on a new test into scaled scores with an additional judgemental exercise at the expected standard. The scaled score required to achieve the expected level on the test will always remain the same.
The exact scale for the scaled scores will be determined following further analysis of trialling data. This will include a full review of the reporting of confidence intervals for scaled scores.’
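The mechanics described above can be illustrated with a toy conversion table. Everything here is invented for illustration: the real scale, mark ranges and thresholds were still to be determined by the STA at the time of writing. The point is only that the raw mark needed for the expected standard can fluctuate with test difficulty while the scaled score (100) stays fixed:

```python
def scale(raw_mark: int, conversion: dict[int, int]) -> int:
    """Look up the scaled score for a raw mark in a year's conversion table."""
    return conversion[raw_mark]

# Invented conversion-table fragments for two years of a hypothetical test.
# 2017's test is assumed slightly easier, so more raw marks are needed
# to reach the same (fixed) expected standard of 100.
conversion_2016 = {54: 99, 55: 100, 56: 101}  # 55 raw marks meets the standard
conversion_2017 = {57: 99, 58: 100, 59: 101}  # 58 raw marks meets the standard

print(scale(55, conversion_2016), scale(58, conversion_2017))  # both 100
```

A real conversion table would of course cover every raw mark on the test, and would be derived from trialling data and standard-setting rather than invented.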
In July 2014 STA also published sample questions, mark schemes and associated commentaries for each test.
I have been unable to trace any details of the timetable for test development and trialling.
As far as I can establish, STA has not published an equivalent to QCDA’s ‘Test development, level setting and maintaining standards’ (March 2010) which describes in some detail the different stages of the test development process.
This old QCA web-page describes a 22-month cycle, from the initial stages of test development to the administration of the tests.
This aligns reasonably well with the 25-month period between publication of the draft test frameworks on 31 March 2014 and the administration of the tests in early May 2016.
Applying the same timetable to the 2016 tests – using publication of the draft frameworks as the starting point – suggests that:
- The first pre-test should have been completed by November 2014
- The second pre-test should take place by February 2015
- Mark schemes and tests should be finalised by July 2015
STA commits to publishing the final test frameworks and a full set of sample tests and mark schemes for each of the national curriculum tests at key stages 1 and 2 ‘during the 2015 summer term’.
Given Purdah, these seem most likely to appear towards the end of the summer term rather than a full year ahead of the tests.
In relation to the test frameworks, STA says:
‘We may make small changes as a result of this work; however, we do not expect the main elements of the frameworks to change.’
They will also produce, to the same deadline, guidance on how the results of national curriculum tests will be reported, including an explanation of scaled scores.
So we have three further outstanding tasks:
- Publishing the final test frameworks (summer term 2015)
- Finalising the scale to be used for the tests (summer term 2015)
- Publishing guidance explaining the use and reporting of scaled scores (summer term 2015)
[Postscript: Since publishing this post, I have found on Contracts Finder various STA contracts, as follows:
- KS1 technical pre-test (NFER): Undertaken April-May 2015; Report in August 2015
- KS1 item validation trial (NFER): Undertaken June-July 2015; Report in September 2015
- KS2 technical pre-test (NFER): Undertaken April-June 2015; Report in August 2015
- KS2 item validation trial (SQA): Undertaken June 2015; Report in October 2015.
How these square with the timetable above is, as yet, unclear. If the final test frameworks cannot be published until autumn 2015, the Workload Challenge Protocol may well bite here too.]
Statutory teacher assessment
- KS1 maths, reading, writing, speaking and listening, and science
- KS2 maths, reading, writing and science.
There are to be performance descriptors for each statutory teacher assessment:
- a single descriptor for KS1 science and KS2 science, reading and maths
- several descriptors for KS1 maths, reading, writing and speaking and listening, and also for KS2 writing.
There is a commitment to improve KS1 moderation, given concerns expressed by Ofsted and the NAHT Commission.
In respect of low attaining pupils the response says:
‘All pupils who are not able to access the relevant end of key stage test will continue to have their attainment assessed by teachers. We will retain P-scales for reporting teachers’ judgements. The content of the P-scales will remain unchanged. Where pupils are working above the P-scales but below the level of the test, we will provide further information to enable teachers to assess attainment at the end of the relevant key stage in the context of the new national curriculum.’
And there is to be further consideration of whether to move to external moderation of P-scale teacher assessment.
So, to summarise, the further work involves:
- Developing new performance descriptors – to be drafted by an expert group. According to the response, the KS1 descriptors would be introduced in ‘autumn 2014’. No date is given for the KS2 descriptors.
- Improving moderation of KS1 teacher assessment, working closely with schools and Ofsted.
- Providing guidance to support teacher assessment of those working above the P-scales but below the level of the tests.
- Deciding whether to move to external moderation of P-scale teacher assessment.
Developments to date
Updated statutory guidance on the P-Scale attainment targets for pupils with SEN was released in July 2014, but neither it nor the existing guidance on when to use the P-Scales relates them to the new scaled scores, or discusses the issue of moderation.
In September 2014, a guidance note ‘National curriculum and assessment from September 2014: Information for schools’ revised the timeline for the development of performance descriptors:
‘New performance descriptors will be published (in draft) in autumn 2014 which will inform statutory teacher assessment at the end of key stage 1 and 2 in summer 2016. Final versions will be published by September 2015.’
A consultation document on performance descriptors: ‘Performance descriptors for use in key stage 1 and 2 statutory teacher assessment for 2015 to 2016’ was published on 23 October 2014.
The descriptors were:
‘… drafted with experts, including teachers, representatives from Local Authorities, curriculum and subject experts. Also Ofsted and Ofqual have observed and supported the drafting process’
A further FoI has been submitted requesting details of their remit but, at the time of writing, this has not been answered.
[Postscript: The FoI response setting out the remit was published on 5 February.]
The consultation document revealed for the first time the complex structure of the performance descriptor framework.
It prescribes four descriptors for KS1 reading, writing and maths but five for KS2 writing.
The singleton descriptors reflect ‘working at the national standard’.
Where four descriptors are required these are termed (from the top down): ‘mastery’, ‘national’, ‘working towards national’ and ‘below national’ standard.
In the case of KS2 writing ‘above national standard’ is sandwiched between ‘mastery’ and ‘national’.
The document explains how these different levels cross-reference to the assessment of learners exempted from the tests.
In the case of assessments with only a single descriptor, it becomes clear that a further distinction is needed:
‘In subjects with only one performance descriptor, all pupils not assessed against the P-scales will be marked in the same way – meeting, or not meeting, the ‘national standard’.’
So ‘not meeting the national standard’ should also be included in the table above. The relation between ‘not meeting’ and ‘below’ national standard is not explained.
But still further complexity is added since:
‘There will be some pupils who are not assessed against the P-scales (because they are working above P8 or because they do not have special educational needs), but who have not yet achieved the contents of the ‘below national standard’ performance descriptor (in subjects with several descriptors). In such cases, pupils will be given a code (which will be determined) to ensure that their attainment is still captured.’
This produces a hierarchy as follows (from the bottom up):
- P Scales
- In cases of assessments with several descriptors, an attainment code yet to be determined
- In the case of assessments with single descriptors, an undeclared ‘not meeting the national standard’ descriptor
- The single descriptor or four/five descriptors listed above.
However, the document says:
‘The performance descriptors do not include any aspects of performance from the programme of study for the following key stage. Any pupils considered to have attained the ‘Mastery standard’ are expected to explore the curriculum in greater depth and build on the breadth of their knowledge and skills within that key stage.’
This places an inappropriate brake on the progress of the highest attainers because the assessment ceiling is pitched too low to accommodate them.
The document acknowledges that some high attainers will be performing above the level of the highest descriptors but, regardless of whether or not they move into the programme for the next key stage, there is no mechanism to record their performance.
This raises the further question whether the mastery standard is pitched at the equivalent of level 6, or below it. It will be interesting to see whether this is addressed in the consultation response.
The consultation document says that the draft descriptors will be trialled during summer term 2015 in a representative sample of schools.
These trials and the consultation feedback will together inform the development of the final descriptors, but also:
- ‘statutory arrangements for teacher assessment using the performance descriptors;
- final guidance for schools (and those responsible for external moderation arrangements) on how the performance descriptors should be used;
- an updated national model for the external moderation of teacher assessment; and
- nationally developed exemplification of the work of pupils for each performance descriptor at the end of each key stage.’
Published comments on the draft descriptors have been almost entirely negative, which might suggest that the response could be delayed. The consultation document said it should appear ‘around 26 February 2015’.
According to the document, the final descriptors will be published either ‘in September 2015’ or ‘in the autumn term 2015’, depending on whether you rely on the section headed ‘Purpose’ or the one called ‘Next Steps’. The latter option would allow them to appear as late as December 2015.
A recent newspaper report suggested that the negative reception had resulted in an ‘amber/red’ assessment of primary assessment reform as a whole. The leaked commentary said that any decision to review the approach would increase the risk that the descriptors could not be finalised ‘by September as planned’.
However, the story concludes:
‘The DfE says: “We do not comment on leaks,” but there are indications from the department that the guidance will be finalised by September. Perhaps ministers chose, in the end, not to “review their approach”, despite the concerns.’
Hence it would appear that delay until after the beginning of AY2015/16 will not be countenanced.
Note that the descriptors are for use in academic year 2015/16, so even publication in September is problematic, since teachers will begin the year not knowing which descriptors to apply.
The consultation document refers only to descriptors for AY2015/16, which might imply that they will be further refined for subsequent years. Essentially therefore, the arrangements proposed here would be an imperfect interim solution.
[Postscript: On 26 February 2015 the Consultation Response was published – so on the date committed to in the consultation document.
As expected, it revealed significant opposition to the original proposals:
- 74% of respondents were concerned about nomenclature
- 76% considered that the descriptors were not spaced effectively across the range of pupils’ performance
- 69% of respondents considered them not clear or easy to understand
The response acknowledges that the issues raised:
‘….amount to a request for greater simplicity, clarity and consistency to support teachers in applying performance descriptors and to help parents understand their meaning.’
But goes on to allege that:
‘…there are some stakeholders who valued the levels system and would like performance descriptors to function in a similar way across the key stages, which is not their intention.’
Even so, although the Descriptors are not intended to inform formative assessment, respondents have raised concerns that they could be applied in this manner.
There is also the issue of comparability between formative and summative assessment measures, but this is not addressed.
The response does not entirely acknowledge that opposition to the original proposals is sending it back to the drawing board but:
‘As a result of some of the conflicting responses to the consultation, we will work with relevant experts to determine the most appropriate course of action to address the concerns raised and will inform schools of the agreed approach according to the timetable set out in the consultation document – i.e. by September 2015.’
The new assessment commission (see below) will have an as yet undefined role in this process:
‘In the meantime, and to help with this [ie determining the most appropriate course of action] the Government is establishing a Commission on Assessment Without Levels….’
Unfortunately, this role has not been clarified in the Commission’s Statement of Intended Outputs.
There is no reference to the trials in schools, which may or may not continue. A DfE Memorandum to the Education Select Committee on its 2014-15 Supplementary Estimates reveals that £0.3m has been reallocated to pay for them, but this is no guarantee that they will take place.
Implementation will not be delayed by a year, despite the commitment to allow a full year’s notice for significant reforms announced in the response to the Workload Challenge.
This part of the timetable is now seriously concertina’d and there must be serious doubt whether the timescale is feasible, especially if proper trialling is to be accommodated.]
- Publish response to performance descriptors consultation document (26 February 2015) COMPLETED
- Trial (revised?) draft performance descriptors (summer term 2015)
- Publish adjusted descriptors, revised in the light of consultation with experts and input from the commission (summer term 2015)
- Experts and commission on assessment produce response to concerns raised and inform schools of outcomes (September 2015)
- Confirm statutory arrangements for use of the performance descriptors (September/autumn term 2015)
- Publish final performance descriptors for AY2015/16 (September/autumn term 2015)
- Publish final guidance on the use of performance descriptors (September/autumn term 2015)
- Publish exemplification of each performance descriptor at each key stage (September/autumn term 2015)
- Publish an updated model for the external moderation of teacher assessment (September/autumn term 2015?)
- Confirm plans for the moderation of KS1 teacher assessment and use of the P-scales (September/autumn term 2015?)
- Publish guidance on assessment of those working above the P-scales but below the level of the tests (September/autumn term 2015?)
- Decide whether performance descriptors require adjustment for AY2016/17 onwards (summer term 2016)
Schools’ internal assessment and tracking systems
‘Schools will be able to focus their teaching, assessment and reporting not on a set of opaque level descriptions, but on the essential knowledge that all pupils should learn’
may be somewhat called into question by the preceding discussion of performance descriptors.
The consultation document continues:
‘There will be a clear separation between ongoing, formative assessment (wholly owned by schools) and the statutory summative assessment which the government will prescribe to provide robust external accountability and national benchmarking. Ofsted will expect to see evidence of pupils’ progress, with inspections informed by the school’s chosen pupil tracking data.’
A subsequent section adds:
‘We will not prescribe a national system for schools’ ongoing assessment….
…. We expect schools to have a curriculum and assessment framework that meets a set of core principles…
… Although schools will be free to devise their own curriculum and assessment system, we will provide examples of good practice which schools may wish to follow. We will work with professional associations, subject experts, education publishers and external test developers to signpost schools to a range of potential approaches.’
The consultation response does not cover this familiar territory again, saying only:
‘Since we launched the consultation, we have had conversations with our expert group on assessment about how to support schools to make best use of the new assessment freedoms. We have launched an Assessment Innovation Fund to enable assessment methods developed by schools and expert organisations to be scaled up into easy-to-use packages for other schools to use.’
Further work is therefore confined to the promulgation of core principles, the application of the Assessment Innovation Fund and possibly further work to ‘signpost schools to a range of potential approaches’.
Developments to date
The Assessment Innovation Fund was originally announced in December 2013.
A factsheet released at that time explains that many schools are developing new curriculum and assessment systems and that the Fund is intended to enable schools to share these.
Funding of up to £10K per school was made available to help up to 10 schools prepare simple, easy-to-use packages that could be made freely available to other schools.
They must commit to:
‘…make their approach available on an open licence basis. This means that anyone who wishes to use the package (and any trade-marked name) must be granted a non-revocable, perpetual, royalty-free licence to do so with the right to sub-licence. The intellectual property rights to the system will remain with the school/group which devised it.’
Successful applicants were to be confirmed ‘in the week commencing 21 April 2014’.
In the event, nine successful applications were announced on 1 May, although one subsequently withdrew, apparently over the licensing terms.
The packages developed with this funding are stored – in a rather user-unfriendly fashion – on this TES Community Blog, along with other material supportive of the decision to dispense with levels.
Much other useful material has been published online which has not been collected into this repository and it is not clear to what extent it will develop beyond its present limits, since the most recent addition was in early November 2014.
A recent survey by Capita Sims (itself a provider of assessment support) conducted between June and September 2014, suggested that:
- 25% of primary and secondary schools were unprepared for replacing levels and 53% had not yet finalised plans for doing so.
- 28% were planning to keep the existing system of levels, 21% intended to introduce a new system and 28% had not yet made a decision.
- 50% of those introducing an alternative expected to do so by September 2015, while 23% intended to do so by September 2016.
- Schools’ biggest concern (53% of respondents) is measuring progress and setting targets for learners.
Although the survey is four months old and has clear limitations (there were only 126 respondents), it suggests that further support may be necessary, ideally targeted towards the least confident schools.
In April 2014 the Government published a set of Assessment Principles, building on earlier material in the primary consultation document. These had been developed by an ‘independent expert panel’.
It is not entirely clear whether the principles apply solely to primary schools and to schools’ own assessment processes (as opposed to statutory assessment).
The introductory statement says:
‘The principles are designed to help all schools as they implement arrangements for assessing pupils’ progress against their school curriculum; Government will not impose a single system for ongoing assessment.
Schools will be expected to demonstrate (with evidence) their assessment of pupils’ progress, to keep parents informed, to enable governors to make judgements about the school’s effectiveness, and to inform Ofsted inspections.’
This might suggest they are not intended to cover statutory assessment and testing but are relevant to secondary schools.
There are nine principles in all, divided into three groups.
The last of these seems particularly demanding.
In July 2014, Ofsted published guidance in the form of a ‘Note for inspectors: use of assessment information during inspections in 2014/15’. This says that:
‘In 2014/15, most schools, academies and free schools will have historic performance data expressed in national curriculum levels, except for those pupils in Year 1. Inspectors may find that schools are tracking attainment and progress using a mixture of measures for some, or all, year groups and subjects.
As now, inspectors will use a range of evidence to make judgements, including by looking at test results, pupils’ work and pupils’ own perceptions of their learning. Inspectors will not expect to see a particular assessment system in place and will recognise that schools are still working towards full implementation of their preferred approach.’
It goes on to itemise the ways in which inspectors will check that these systems are effective, without judging the systems themselves, but by gathering evidence of effective implementation through leadership and management, the accuracy of assessment, effectiveness in securing progress and quality of reporting to parents.
In September 2014, NCTL published a research report ‘Beyond Levels: alternative assessment approaches developed by teaching schools.’
The report summarises the outcomes of small-scale research conducted in 34 teaching school alliances. It offers six rather prolix recommendations for schools and DfE to consider, which can be summarised as follows:
- A culture shift is necessary in recognition of the new opportunities provided by the new national curriculum and the removal of levels.
- Schools need access to conferences and seminars to help develop their assessment expertise.
- Schools would benefit from access to peer reviewed commercial tracking systems relating to the new national curriculum. Clarification is needed about what data will be collected centrally.
- Teaching school alliances and schools need financial support to further develop assessment practice, especially practical classroom tools, which should be made freely available online.
- Financial support is needed for teachers to undertake postgraduate research and courses in this field.
- It is essential to develop professional knowledge about emerging effective assessment practice.
I can find no government response to these recommendations and so have not addressed them in the list of outstanding tasks below.
[Postscript: On 25 February 2015, the Government announced the establishment of a ‘Commission on Assessment Without Levels’:
‘To help schools as they develop effective and valuable assessment schemes, and to help us to identify model approaches we are today announcing the formation of a commission on assessment without levels. This commission will continue the evidence-based approach to assessment which we have put in place, and will support primary and secondary schools with the transition to assessment without levels, identifying and sharing good practice in assessment.’
This appears to suggest belated recognition that the steps outlined above have provided schools with insufficient support for the transition to levels-free internal assessment. It is also a response to the possibility that Labour might revisit the decision to remove them (see below).
The Consultation Response on Performance Descriptors released on 26 February (see above) says that the Commission will help to determine the most appropriate response to concerns raised about the Descriptors, while also suggesting that this task will not be devolved exclusively to them.
It adds that the Commission will:
‘…collate, quality assure, publish and share best practice in assessment with schools across the country…and will help to foster innovation and success in assessment practice more widely.’
The membership of the Commission was announced on 9 March.
The Commission met on 10 March and 23 March 2015 and will meet four more times – in April, May, June and July.
It seemed that the Commission, together with the further consultation of experts, supplied a convenient mechanism for ‘parking’ some difficult issues until the other side of the Election.
However, neither the terms of reference nor the statement of outputs mentions the Performance Descriptors, so the Commission’s role in relation to them remains shrouded in mystery.
The authors of the Statement of Outputs feel it necessary to mention in passing that it:
‘…supports the decision to remove levels, but appreciates that the reasons for removing levels are not widely understood’.
It sets out a 10-point list of outputs comprising:
- Another statement of the purposes of assessment and another set of principles to support schools in developing effective assessment systems, presumably different to those published by the previous expert group in April 2014. (It will be interesting to compare the two sets of principles, to establish whether Government policy on what constitutes effective assessment has changed over the last 12 months. It will also be worthwhile monitoring the gap between the principles and the views of Alison Peacock, one of the Commission’s members. She also sat on the expert panel that developed the original principles, some of which seem rather at odds with her own practice and preferences. Meanwhile, another member – Sam Freedman – has stated
- An explanation of ‘how assessment without levels can better serve the needs of pupils and teachers’.
- Guidance to ‘help schools create assessment policies which reflect the principles of effective assessment without levels’.
- Clear information about ‘the legal and regulatory assessment requirements’, intended to clarify what they are now, how they will change and when. (The fact that the Commission concludes that such information is not already available is a searing indictment of the Government’s communications efforts to date.)
- Clarification with Ofsted of ‘the role that assessment without levels will play in the inspection process’ so schools can demonstrate effectiveness without adding to teacher workload. (So again they must believe that Ofsted has not sufficiently clarified this already.)
- Dissemination of good practice, obtained through engagement with ‘a wide group of stakeholders including schools, local authorities, teachers and teaching unions’. (This is tacit admission that the strategy described above is not working.)
- Advice to the Government on how ITT and CPD can support assessment without levels and guidance to schools on the use of CPD for this purpose. (There is no reference to the resource implications of introducing additional training and development.)
- Advice to the Government on ensuring ‘appropriate provision is made for pupils with SEN in the development of assessment policy’. (Their judgement that this is not yet accounted for is a worrying indictment of Government policy to date. They see this as not simply a lapse of communication but a lacuna in the policy-making process.)
- ‘Careful consideration’ of commitments to tackling teacher workload – which they expect to alleviate by providing information, advice and support. (There is no hint that the introduction of Performance Descriptors will be delayed in line with the Workload Challenge.)
- A final report before the end of the summer term, though it may publish some outputs sooner. (It will not be able to do so until the outcome of the Election is decided.)
Although there is some implicit criticism of Government policy and communications to date, the failure to make any reference to the Performance Descriptors is unlikely to instil confidence in the capacity of the Commission to provide the necessary challenge to the original proposals, or support to the profession in identifying a workable alternative.]
- Further dissemination of good practice through the existing mechanisms (ongoing)
- Further ‘work with professional associations, subject experts, education publishers and external test developers to signpost schools to a range of potential approaches.’ (ongoing)
- Additional work (via the commission) to ‘collate, quality assure, publish and share’ best practice (Report by July 2015 with other outputs possible from May 2015)
Reporting to parents
- A scaled score
- The learner’s position in the national cohort, expressed as a decile
- The rate of progress from a baseline, derived by comparing a learner’s scaled score with that of other learners with the same level of prior attainment.
Deciles did not survive the consultation.
The consultation response confirms that, for each test, parents will receive:
- Their own child’s scaled score; and
- The average scaled score for the school, ‘the local area’ (presumably the geographical area covered by the authority in which the school is situated) and the country as a whole.
They must also receive information about progress, but the response only discusses how this might be published on school websites and for the purposes of the floor targets (see sections below), rather than how it should be reported directly to parents.
We have addressed already the available information about the calculation of the scaled scores.
The original consultation document also outlined the broad methodology underpinning the progress measures:
‘In order to report pupils’ progress through the primary curriculum, the scaled score for each pupil at key stage 2 would be compared to the scores of other pupils with the same prior attainment. This will identify whether an individual made more or less progress than pupils with similar prior attainment…
…. Using this approach, a school might report pupils’ national curriculum test results to parents as follows:
In the end of key stage 2 reading test, Sally received a scaled score of 126 (the secondary ready standard is 100), placing her in the top 10% of pupils nationally. The average scaled score for pupils with the same prior attainment was 114, so she has made more progress in reading than pupils with a similar starting-point.’
Developments to date
On this web page, first published in April 2014, STA commits to publishing guidance during summer term 2015 on how the results of national curriculum tests will be reported, including an explanation of scaled scores.
In September 2014, a further guidance note ‘National curriculum and assessment from September 2014: Information for schools’ shed a little further light on the calculation of the progress measures:
‘Pupil progress will be determined in relation to the average progress made by pupils with the same baseline (i.e. the same KS1 average point score). For example, if a pupil had an APS of 19 at KS1, we will calculate the average scaled score in the KS2 tests for all pupils with an APS of 19 and see whether the pupil in question achieved a higher or lower scaled score than that average. The exact methodology of how this will be reported is still to be determined.’
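The calculation sketched in this guidance is simple to express. Here is a hedged illustration in Python: the mini-cohort data and the function names are invented for the purpose, since the department had not finalised the methodology at the time of writing.

```python
from collections import defaultdict
from statistics import mean

# Hypothetical cohort: (KS1 average point score, KS2 scaled score) pairs.
cohort = [
    (19, 102), (19, 108), (19, 110), (19, 96),
    (21, 110), (21, 118), (21, 112),
]

def baseline_averages(pupils):
    """Average KS2 scaled score for each KS1 APS baseline group."""
    groups = defaultdict(list)
    for aps, scaled in pupils:
        groups[aps].append(scaled)
    return {aps: mean(scores) for aps, scores in groups.items()}

def progress(pupil_aps, pupil_scaled, pupils=cohort):
    """Signed difference between a pupil's scaled score and the average
    for pupils with the same prior attainment (positive = more progress)."""
    return pupil_scaled - baseline_averages(pupils)[pupil_aps]
```

So, in this invented cohort, a pupil with a KS1 APS of 19 who goes on to score 110 sits 6 points above the average for that baseline group, and would be reported as having made more progress than pupils with a similar starting point.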
It is hard to get a clear sense of the full range of assessment information that parents will receive.
I have been unable to find any comprehensive description, which would suggest that this is being held back until the methodology for calculating the various measures is finalised.
The various sections above suggest that they will receive details of:
- Reception baseline assessment outcomes.
- Attainment in end of KS1 and end of KS2 tests, now expressed as scaled scores (or via teacher assessment, code or P-scales if working below the level of the tests). This will be supplemented by a series of average scaled scores for each test.
- Progress between the baseline assessment (reception baseline from 2022; KS1 baseline beforehand) and end of KS2 tests, relative to learners with similar prior attainment at the baseline.
- Attainment in statutory teacher assessments, normally expressed through performance descriptors, but with different arrangements for low attainers.
- Attainment and progress between reception baseline, KS1 and KS2 tests, provided through schools’ own internal assessment and tracking systems.
We have seen that reporting mechanisms for the first and fourth are not yet finalised.
The fifth is now for schools to determine, taking account of Ofsted’s guidance and, if they wish, the Assessment Principles.
The scales necessary to report the second are not yet published, and these also form the basis of the remaining progress measures.
Parents will be receiving this information in a variety of different formats: scaled scores, average scaled scores, baseline scores, performance descriptors, progress scores and internal tracking measures.
Moreover, the performance descriptor scales will vary according to the assessment and internal tracking will vary from school to school.
This is certainly much more complex than the current unified system of reporting based on levels. Parents will require extensive support to understand what they are receiving.
Previous sections have already referenced expected guidance on reporting baseline assessments, scaled scores and the use of performance descriptors (which presumably includes parental reporting).
One assumes that there will also need to be unified guidance on all aspects of reporting to parents, intended for parental consumption.
So, avoiding duplication of previous sections, the remaining outstanding tasks are to:
- Finalise the methodology for reporting on pupil progress (summer term 2015)
- Provide comprehensive guidance to parents on all aspects of reporting (summer term 2015?)
Publication of outcomes
The initial consultation document has much to say about the Performance Tables, while the consultation response barely mentions them, focusing almost exclusively on school websites.
The original document suggests that the Performance Tables will include a variety of measures, including:
- The percentage of pupils meeting the secondary readiness standard
- The average scaled score
- Where the school’s pupils fit in the national cohort
- Pupils’ rate of progress
- How many of the school’s pupils are among the highest-attaining nationally, through a measure showing the percentage of pupils attaining a high scaled score in each subject.
- Teacher assessment outcomes in English, maths and science
- Comparisons of each school’s performance with that of schools with similar intake
- Data about the progress of those with very low prior attainment.
All the headline measures will be published separately for pupils in receipt of the pupil premium.
All measures will be published as three year rolling averages in addition to annual results.
There is also a commitment to publish a wide range of test and teacher assessment data, relating to both attainment and progress, through a Data Portal:
‘The department is currently procuring a new data portal or “data warehouse” to store the school performance data that we hold and provide access to it in the most flexible way. This will allow schools, governors and parents to find and analyse the data about schools in which they are most interested, for example focusing on the progress of low attainers in mathematics in different schools or the attainment of certain pupil groups.’
The consultation response acknowledges as a guiding principle:
‘…a broad range of information should be published to help parents and the wider public know how well schools are performing.’
The accountability system will:
‘…require schools to publish information on their websites so that parents can understand both the progress pupils make and the standards they achieve.’
Data on low attainers’ attainment and progress will not be published since the diversity of this group demands extensive contextual information.
But when it comes to Performance Tables, the consultation response says only:
‘As now, performance tables will present a wide range of information about primary school performance.’
By implication, they will include progress measures since the text adds:
‘In 2022 performance tables, we will judge schools on whichever is better: their progress from the reception baseline to key stage 2; or their progress from key stage 1 to key stage 2.’
However, schools will be required to publish a suite of indicators in standard format on their websites, including:
- The average progress made by pupils in reading, writing and maths
- The percentage of pupils achieving the expected standard at the end of KS2 in reading, writing and maths
- The average score of pupils in their end of KS2 assessments and
- The ‘percentage of pupils who achieve a high score in all areas’ at the end of KS2.
The precise form of the last of these indicators is not explained. This is not quite the same as the ‘measure showing the percentage of pupils attaining a high scaled score in each subject’ mentioned in the original consultation document.
Does ‘all areas’ mean reading, writing and maths? Must learners achieve a minimum score in each assessment, or a single aggregate score above a certain threshold?
The consultation response adds:
‘So that parents can make comparisons between schools, we would like to show each school’s position in the country on these measures and present these results in a manner that is clear for all audiences to understand. We will discuss how best to do so with stakeholders, to ensure that the presentation of the data is clear, fair and statistically robust.’
Developments to date
In June 2014, a consultation document was issued ‘Accountability: publishing headline performance measures on school and college websites’. This was accompanied by a press release.
The consultation document explains the intended relationship between the Performance Tables, Data Portal and material published on schools’ websites:
‘Performance tables will continue to provide information about individual schools and colleges and be the central source of school and college performance information.’
‘Future changes to the website, through the school and college performance data portal, will improve accessibility to a wide range of information, including the headline performance measures. It will enable interested parents, students, schools, colleges and researchers to interrogate educational data held by the Department for Education to best meet their requirements.’
‘Nevertheless, the first place many parents and students look for information about a school or college is the institution’s own website’
Schools are already required to publish such information, but there is inconsistency in where and how it is presented. The document expresses the intention that consistent information should be placed ‘on the front page of every school and college website’.
The content proposed for primary schools’ websites covers the four headline measures set out in the consultation response.
A footnote says:
‘These measures will apply to all-through primary, junior and middle schools. Variants of these measures will apply for infant and first schools.’
But the variants are not set out.
There is no reference to the plan to show ‘each school’s position in the country on these measures’ as mentioned in the consultation response.
The consultation proposes a standard visual presentation which, for primary schools, looks like this:
The response to this consultation ‘Publishing performance measures on school and college websites’ appeared in December 2014 (the consultation document had said ‘Autumn 2014’).
The summary of responses says:
‘The majority of respondents to the consultation welcomed the proposals to present headline performance measures in a standard format. There was also strong backing for the proposed visual presentation of data to aid understanding of performance. However, many respondents suggested that without some sense of scale or spread to provide some context to the visual presentation, the data could be misleading. Others said that the language used alongside the charts should be clearer…’
…Whilst most respondents favoured a data application tool that would remove the burden of annually updating performance data on school and college websites, they also highlighted the difficulties of developing a data application that would be compatible with a wide range of school and college websites.’
It is clear that some respondents had questioned why school websites should not simply carry a link on their homepage to the School Performance Tables.
In the light of this reaction, further research will be undertaken to:
- develop a clear and simple visual representation of the data, but with added contextual information.
- establish how performance tables data can be presented ‘in a way that reaches more parents’.
The timeline suggests that this will result in ‘proposals for redevelopment of performance tables’ by May 2015, so we can no longer assume that the Tables will cover the list of material suggested in the original consultation document.
The timeline indicates that if initial user research concludes that a data application is required, that will be developed and tested between June and October 2015, for roll out between September 2016 and January 2017.
Schools will be informed by autumn 2015 whether they should carry a link to the Tables, download a data application or pursue a third option.
‘All schools and colleges, including academies, free schools and university technical colleges, will be required to publish the new headline performance measures in a consistent, standard format on their websites from 2016.’
So, if an application is not introduced, it seems that schools will still have to publish the measures on their websites: they will not be able to rely solely on a link to the Performance Tables.
Middle schools will only be required to publish the primary measures. No mention is made of infant or first schools.
There is no further reference to the data portal, since this project was quietly shelved in September 2014, following unexplained delays in delivery.
There has been no subsequent explanation of the implications of this decision. Will the material intended for inclusion in the Portal be included in the Performance Tables, or published by another route, or will it no longer be published?
Finally, some limited information has emerged about accountability arrangements for infant schools.
This appears on a web page – New accountability arrangements for infant schools from 2016 – published in June 2014.
It explains that the reception baseline will permit the measurement of progress alongside attainment. The progress of infant school pupils will be published for the first time in the 2019 Performance Tables.
This might mean a further addition to the list of information reported to parents set out in the previous section.
There is also a passing reference to moderation:
‘To help increase confidence and consistency in our moderation of infant schools, we will be increasing the proportion of schools where KS1 assessments are moderated externally. From summer 2015, half of all infant schools will have their KS1 assessments externally moderated.’
But no further information is forthcoming about the nature of other headline measures and how they will be reported.
The remaining outstanding tasks are to:
- Complete user research and publish proposals for redevelopment of Performance Tables (May 2015)
- Confirm what data will be published in the 2016 Performance Tables (summer Term 2015?)
- Confirm how material originally intended for inclusion in Data Portal will be published (summer term 2015?)
- Confirm the format and publication route for data showing each school’s position in the country on the headline measures (summer term 2015?)
- Confirm headline performance measures for infant and first schools (summer term 2015?)
- If necessary, further develop and test a prototype data application for schools’ websites (October 2015)
- Inform schools whether a data application will be introduced (autumn 2015)
- Amend School Information Regulations to require publication of headline measures in standard format (April 2016)
- If proceeding, complete development and testing of a data application (May 2016)
- If proceeding, complete roll out of data application (February 2017)
Floor standards
Minimum expectations of schools will continue to be embodied in floor standards. Schools falling below the floor will attract ‘additional scrutiny through inspection’ and ‘intervention may be required’.
Although the new standard ‘holds schools to account both on the progress they make and on how well their pupils achieve’, in practice schools are able to choose between one or the other.
An all-through primary school will be above the floor standards if:
- Pupils make sufficient progress between the reception baseline and the end of KS2 in all of reading, writing and maths or
- 85% or more of pupils meet the new expected standard at the end of KS2 (similar to Level 4b under the current system).
A junior or middle school will be above the floor standard if:
- pupils make sufficient progress at key stage 2 from their starting point at key stage 1; or
- 85% or more of pupils meet the new expected standard at the end of key stage 2
At this stage arrangements for measuring the progress of pupils in infant or first schools are still to be considered.
Since the reception baseline will be introduced in 2015, progress in all-through primary schools will continue to be measured from the end of KS1 until 2022.
This should mean that, prior to 2022, the standard would be achieved by ensuring that the progress made by pupils in a school – in reading, writing and maths – equals or exceeds the national average progress made by pupils with similar prior attainment at the end of KS1.
Exactly how individual progress will be aggregated to create a whole school measure is not yet clear. The original consultation document holds out the possibility that slightly below average progress will be acceptable:
‘…we expect the value-added score required to be above the floor to be between 98.5 and 99 (a value-added score of 100 represents average progress).’
The consultation response says the amount of progress required will be determined in 2016:
‘The proposed progress measure will be based on value-added in each of reading, writing and mathematics. Each pupil’s scaled scores in each area at key stage 2 will be compared with the scores of pupils who had the same results in their assessments at key stage 1.
For a school to be above the progress floor, pupils will have to make sufficient progress in all of reading, writing and mathematics. For 2016, we will set the precise extent of progress required once key stage 2 tests have been sat for the first time. Once pupils take a reception baseline, progress will continue to be measured using a similar value added methodology.’
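Drawing the quoted passages together, the floor-standard test might be sketched as follows. The 98.5 value-added floor is an assumption taken from the range quoted in the original consultation document, `above_floor` and its parameters are hypothetical names, and the precise extent of progress required will not be set until 2016:

```python
def above_floor(va_reading, va_writing, va_maths, pct_expected,
                va_floor=98.5, attainment_floor=85.0):
    """A school is above the floor if pupils make sufficient
    progress in ALL of reading, writing and maths (a value-added
    score at or above the floor, where 100 represents average
    progress), OR if at least 85% of pupils meet the expected
    standard. The default thresholds here are illustrative."""
    progress_ok = all(va >= va_floor
                      for va in (va_reading, va_writing, va_maths))
    attainment_ok = pct_expected >= attainment_floor
    return progress_ok or attainment_ok

# Slightly below-average progress in all three subjects (99 < 100)
# still clears an assumed 98.5 value-added floor:
print(above_floor(99.0, 99.0, 99.0, pct_expected=60.0))  # True
# Strong maths progress cannot compensate for weak writing
# progress unless attainment reaches the 85% threshold:
print(above_floor(101.0, 97.0, 103.0, pct_expected=80.0))  # False
```

The sketch makes the ‘all of reading, writing and mathematics’ condition explicit: a shortfall in any one subject fails the progress element, leaving only the attainment route above the floor.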
In 2022 schools will be assessed against either the reception or KS1 baseline, whichever gives the best result. From 2023 only the reception baseline will be in play.
The attainment standard will be based on achievement of ‘a scaled score of 100 or more’ in each of the reading and maths tests and achievement, via teacher assessment, of the new expected standard in writing (presumably the middle of the five described above).
The attainment standard is significantly more demanding: the present floor requires 65% of learners to meet the expected standard, whereas the new floor requires 85%, and the standard itself will now be pitched higher, at the equivalent of Level 4B.
The original consultation document says:
‘Our modelling suggests that a progress measure set at this level, combined with the 85% threshold attainment measure, would result in a similar number of schools falling below the floor as at present. Over time we will consider whether schools should make at least average progress as part of floor standards.’
The consultation response does not confirm this judgement.
The only significant development since the publication of the consultation response is the detail provided on the June 2014 webpage New accountability arrangements for infant schools from 2016.
In addition to the points in the previous section, this also confirms that:
‘…there will not be a floor standard for infant schools’
But this statement has been called into question, since the table from the performance descriptors consultation, reproduced above, appears to suggest that KS1 teacher assessments in reading, writing and maths do contribute to a floor standard – whether for infant or all-through primary schools is unclear.
The aforementioned Centre Forum Report ‘Progress matters in Primary too’ (January 2015) also appears to call into question the results of the modelling reported in the initial consultation document.
‘…the likelihood is that, based on current performance, progress will be the measure used for the vast majority of schools, at least in the short to medium term. Even those schools which achieve the attainment floor target will only do so by ensuring at least average progress is made by their pupils. As a result, progress will in practice be the dominant accountability metric.’
It undertakes modelling based on 2013 attainment data – i.e. simulating the effect of the new standards had they been in place in 2013, using selected learning areas within the EYFSP as a proxy for the reception baseline – which suggests that just 10% of schools in 2013 would have met the new attainment floor.
It concludes that:
‘For the vast majority of schools, progress will be their only option for avoiding intervention when the reforms come into effect.’
Unfortunately though, it does not provide an estimate of the proportion of schools likely to achieve the progress floor standard, with either the current KS1 baseline or its proxy for a reception baseline.
The remaining outstanding tasks are to:
- Confirm the detailed methodology for deriving both the attainment and progress elements of the floor standards, in relation to both the new reception baseline and the interim KS1 baseline (summer 2015?)
- Set the amount of progress required to achieve the progress element of the floor standards (summer 2016)
- (In the consultation document) Consider whether schools should make at least average progress as part of floor standards and ‘move to three year rolling averages for floor standard measures’ (long term)
Overall progress, Purdah and General Election outcomes
Progress to date and actions outstanding
The lists of outstanding actions above record some 40 tasks necessary to the successful implementation of the primary assessment and accountability reforms.
If the ‘advance notice’ conventions are observed, roughly half of these require completion by the end of the summer term in July 2015, within the two windows of 50 working days on either side of Purdah.
These conventions have already been set aside in some cases, most obviously in respect of reception baseline assessment and the performance descriptors for statutory teacher assessment.
Unsurprisingly, the commentary above suggests that these two strands of the reform programme are the most complex and potentially the most problematic.
The sheer number of outstanding tasks and the limited time in which to complete them could pose problems.
It is important to remember that there are similar reforms in the secondary and post-16 sectors that need to be managed in parallel.
The leaked amber/red rating was attributed solely to the negative reaction to the draft performance descriptors, but it could also reflect a wider concern that all the necessary steps may not be completed in time to give schools the optimal period for planning and preparation.
Schools may be able to cope with shorter notice in a few instances, where the stakes are relatively low, but if too substantial a proportion of the overall reform programme is delayed into next academic year, they will find the cumulative impact much harder to manage.
In a worst case scenario, implementation of some elements might need to be delayed by a year, although the corollary would be an extended transition period for schools that would be less than ideal. It may also be difficult to disentangle the different strands given the degree of interdependency between them.
Given the proximity of a General Election, it may not be politic to confirm such delays before Purdah intervenes: the path of least resistance is probably to postpone any difficult decisions for consideration by the incoming government.
The implications of Purdah
As noted above, if the General Election result is clear-cut, Purdah will last some five-and-a-half weeks and will occur at a critical point in the implementation timetable.
The impact of Purdah should not be under-estimated.
From the point at which Parliament is dissolved on Monday 30 March, the Government must abstain from major policy decisions and announcements.
The Election is typically announced a few days before the dissolution of Parliament; this ‘wash up’ period between announcement and dissolution is used to complete essential unfinished business.
The Cabinet Office issues guidance on conduct during Purdah shortly before it begins.
The 2015 guidance has not yet been issued, so the 2010 guidance is the best source of information about what to expect.
[Postscript: 2015 Guidance was posted on 30 March 2015 and is substantively the same as the 2010 edition.]
Key points include:
- ‘Decisions on matters of policy on which a new Government might be expected to want the opportunity to take a different view from the present Government should be postponed until after the Election, provided that such postponement would not be detrimental to the national interest or wasteful of public money.’
- ‘Officials should not… be asked to devise new policies or arguments…’
- ‘Departmental communications staff may…properly continue to discharge during the Election period their normal function only to the extent of providing factual explanation of current Government policy, statements and decisions.’
- ‘There would normally be no objection to issuing routine factual publications, for example, health and safety advice but these will have to be decided on a case by case basis taking account of the subject matter and the intended audience.’
- ‘Regular statistical releases and research reports (e.g. press notices, bulletins, publications or electronic releases) will continue to be issued and published on dates which have been pre-announced. Ad hoc statistical releases or research reports should be released only where a precise release date has been published prior to the Election period. Where a pre-announcement has specified that the information would be released during a specified period (e.g. a week, or longer time period), but did not specify a precise day, releases should not be published within the Election period.’
- ‘Research: Fieldwork involving interviews with the public or sections of it will be postponed or abandoned although regular, continuous and on-going statistical surveys may continue.’
- ‘Official websites…the release of new online services and publication of reworked content should not occur until after the General Election… Content may be updated for factual accuracy but no substantial revisions should be made and distributed.’
- The general principles and conventions set out in this guidance apply to NDPBs and similar public bodies.
Assuming similar provisions in 2015, most if not all of the assessment and accountability work programme would grind to a halt.
To take an example, it is conceivable that those awarded baseline assessment contracts would be able to recruit schools after 30 March, but they will receive little or no help from the DfE during the Purdah period. Given that the recruitment deadline is 30 April, this may be expected to depress recruitment significantly.
The impact of different General Election outcomes
Forming a Government in the case of a Hung Parliament may also take some time, further delaying the process.
The six days taken in 2010 may not be a guide to what will happen in 2015.
The Cabinet Manual (2011) says:
‘Where an election does not result in an overall majority for a single party, the incumbent government remains in office unless and until the Prime Minister tenders his or her resignation and the Government’s resignation to the Sovereign. An incumbent government is entitled to wait until the new Parliament has met to see if it can command the confidence of the House of Commons, but is expected to resign if it becomes clear that it is unlikely to be able to command that confidence and there is a clear alternative…
…The nature of the government formed will be dependent on discussions between political parties and any resulting agreement. Where there is no overall majority, there are essentially three broad types of government that could be formed:
- single-party, minority government, where the party may (although not necessarily) be supported by a series of ad hoc agreements based on common interests;
- formal inter-party agreement, for example the Liberal–Labour pact from 1977 to 1978; or
- formal coalition government, which generally consists of ministers from more than one political party, and typically commands a majority in the House of Commons’.
If one or more of the parties forming the next government has a different policy on assessment and accountability, this could result in pressure to amend or withdraw parts of the reform programme.
If a single party is involved, pre-Election contact with civil servants may have clarified its intentions, enabling work to resume as soon as the new government is in place but, if more than one party is involved, it may take longer to agree the preferred way forward.
Under a worst case scenario, planners might need to allow for Purdah and post-Election negotiations to consume eight weeks or longer.
The impact of the Election on the shape and scope of the primary assessment and accountability reforms will also depend on which party or parties enter government.
If the same Coalition partners are returned, one might expect uninterrupted implementation, unless the minority Lib Dems seek to negotiate different arrangements, which seems unlikely.
But if a different party or a differently constituted Coalition forms the Government, one might expect decisions to abandon or delay some aspects of the programme.
If Labour forms the Government, or is the major party in a Coalition, some unravelling will be necessary.
They are broadly committed to the status quo:
‘Yet when it comes to many of the technical day-to-day aspects of school leadership – child protection, curriculum reform, assessment and accountability – we believe that a period of stability could prove beneficial for raising pupil achievement. This may not be an exciting rallying cry, but it is crucial that the incoming government takes account of the classroom realities.’
Hunt has also declared:
‘Do not mistake me: I am a zealot for minimum standards, rigorous assessment and intelligent accountability.
But if we choose to focus upon exam results and league tables to the detriment of everything else, then we are simply not preparing our young people for the demands of the 21st century.’
And, thus far, Labour has made few specific commitments in this territory.
- They support reception baseline assessment but whether that extends to sustaining a market of providers is unknown. Might they be inclined to replace this with a single national assessment?
- There are mixed messages about the removal of levels. In June 2014 they expressed unhappiness, but efforts have been made to change their minds. More recently there have been references to schools needing proactive support to develop alternatives and to ‘a full, open and frank discussion’ about levels which might also extend to the teacher assessment performance descriptors. Another record of this same meeting references a commitment to ‘bring back a national assessment system’.
- There is very little about floor targets – a Labour invention – although the Blunkett Review appears to suggest that Directors of School Standards will enjoy some discretion in respect of their enforcement.
Reading between the lines, it seems likely that they would delay some of the strands described above – and potentially simplify others.
The primary assessment reform programme is both extensive and highly complex, comprising several strands and many interdependencies.
Progress to date can best be described as halting.
There are still many steps to be taken and difficult issues to resolve, about half of which should be completed by the end of this academic year. Pre-Election Purdah will cut significantly into the time available.
More announcements may be delayed into the summer holidays or the following autumn term, but this reduces the planning and preparation time available to schools and has potentially significant workload implications.
Alternatively, implementation of some elements or strands may be delayed by a year, but this extends the transition period between old and new arrangements. Any such rationalisation seems likely to be delayed until after the Election and decisions will be influenced by its outcome.
[Postscript: The commitment in the Government’s Workload Challenge response to a one-year lead time, now encapsulated in the Protocol published on 23 March, has not resulted in any specific commitments to delay ahead of the descent of Purdah.
At the onset of Purdah on 30 March some 18 actions appear to be outstanding and requiring completion by the end of the summer term. This will be a tall order for a new Government, especially one of a different complexion.]
If Labour is the dominant party, they may be more inclined to simplify some strands, especially baseline assessment and statutory teacher assessment, while also providing much more intensive support for schools wrestling with the removal of levels.
Given the evidence set out above, ‘amber/red’ seems an appropriate rating for the programme as a whole.
It seems increasingly likely that some significant adjustments will be essential, regardless of the Election outcome.