A Primary Assessment Progress Report


This post tracks progress towards the introduction of the primary assessment and accountability reforms introduced by England’s Coalition Government.

It reviews developments since the Government’s consultation response was published, as well as the further action required to ensure full and timely implementation.

It considers the possibility of delay as a consequence of the May 2015 General Election and the potential impact of a new government with a different political complexion.

An introductory section outlines the timeline for reform. This is followed by seven thematic sections.

There are page jumps to each of these sections, should readers wish to refer to them.

Each section summarises briefly the changes and commitments set out in the consultation response (and in the original consultation document where these appear not to have been superseded).

Each then reviews in more detail the progress made to date, itemising the tasks that remain outstanding.

I have included deadlines for all outstanding tasks. Where these are unknown I have made a ‘best guess’ (indicated by a question mark after the date).

I have done my best to steer a consistent path through the variety of material associated with these reforms, pointing out apparent conflicts between sources wherever these exist.

A final section considers progress across the reform programme as a whole – and how much remains to be done.

It discusses the likely impact of Election Purdah and the prospects for changes in direction consequent upon the outcome of the Election.

I have devoted previous posts to ‘Analysis of the Primary Assessment and Accountability Consultation Document’ (July 2013) and to the response in ‘Unpacking the Primary Assessment and Accountability Reforms’ (April 2014) so there is inevitably some repetition here, for which I apologise.

This is a long and complex post, even by my standards. I have tried to construct the big picture from a variety of different sources, to itemise all the jigsaw pieces already in place and all those that are still missing.

If you spot any errors or omissions, do let me know and I will do my best to correct them.


[Postscript: Please note that I have added several further postscripts to this document since the original date of publication. If you are revisiting, do pause at the new emboldened paragraphs below.]

Timeline for Reform

The consultation document ‘Primary assessment and accountability under the new national curriculum’ was published on 7 July 2013.

It contained a commitment to publish a response in ‘autumn 2013’, but ‘Reforming assessment and accountability for primary schools’ did not appear until March 2014.

The implementation timetable has to be inferred from a variety of sources but seems to be as shown in the table below. (I have set aside interim milestones until the thematic sections below.)

Month/year: Action

Sept 2014: Schools no longer expected to use levels for non-statutory assessment.
May 2015: KS1 and KS2 national curriculum tests and statutory teacher assessment reported through levels for the final time.
Summer term 2015: Final 2016 KS1 and KS2 test frameworks, sample materials and mark schemes published; guidance published on reporting of test results.
Sept 2015: Schools can use approved reception baseline assessments (or a KS1 baseline).
Sept/Autumn term 2015: New performance descriptors for statutory teacher assessment published.
Dec 2015: Primary Performance Tables use levels for the final time.
May 2016: New KS1 and KS2 tests introduced, reported through new attainment and progress measures.
June 2016: Statutory teacher assessment reported through new performance descriptors.
Sept 2016: Reception baseline assessment the only baseline option for all-through primaries; schools must publish new headline measures on their websites; new floor standards come into effect (with the progress element still derived from the KS1 baseline).
Dec 2016: New attainment and progress measures published in Primary Performance Tables.

The General Election takes place on 7 May 2015, but pre-Election Purdah will commence on 30 March, almost exactly a year on from publication of the consultation response.

At the time of writing, some 40 weeks have elapsed since the response was published – and there are some 10 weeks before Purdah descends.

Assuming that the next Government is formed within a week of the Election (which might be optimistic), there is a second working period of roughly 10 weeks between that and the end of the AY 2014/15 summer term.
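As a rough check on this arithmetic – and on the working-day figures drawn from it below – here is a minimal sketch of the calculation; the precise start and end dates are my assumptions, and public holidays are ignored.

```python
from datetime import date, timedelta

def working_days(start, end):
    """Count Mon-Fri days in [start, end), ignoring public holidays."""
    count, day = 0, start
    while day < end:
        if day.weekday() < 5:  # Monday=0 ... Friday=4
            count += 1
        day += timedelta(days=1)
    return count

# Window 1: time of writing (assumed mid-January 2015) to the start of Purdah.
print(working_days(date(2015, 1, 19), date(2015, 3, 30)))  # 50
# Window 2: a week after the Election to an assumed end of summer term.
print(working_days(date(2015, 5, 14), date(2015, 7, 22)))  # 49
```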

The convention is that all significant assessment and accountability reforms are notified to schools a full academic year before implementation, so allowing them sufficient time to plan for implementation.

A full year’s lead time is no longer sacrosanct (and has already been set aside in some instances below) but any shorter notification period may have significant implications for teacher workload – something that the Government is committed to tackling.


[Postscript: On 6 February the Government published its response to the Workload Challenge, which contained a commitment to introduce, from ‘Spring 2015’, a:

‘DfE Protocol setting out minimum lead-in times for significant curriculum, qualifications and accountability changes…’

Elsewhere the text says that the minimum lead time will be a year, thus reinforcing the convention described above.

The term ‘significant’ allows some wriggle room, but one might reasonably expect it to be applied to some of the outstanding actions below.

The Protocol was published on 23 March. The first numbered paragraph implicitly defines a significant change as one having ‘a significant workload impact on schools’, though what constitutes significance (and who determines it) is left unanswered.

There is provision for override ‘in cases where change is urgently required’ but criteria for introducing an override are not supplied.]


[Postscript: We now know that a minimum lead time will not be applied to the introduction of new performance descriptors for statutory teacher assessment (see below). The original timescale did not allow a full year’s notice, and it has not been adjusted in the light of consultation.]


Announcements made during the long summer holiday are much disliked by schools, so the end of summer term 2015 becomes the de facto target for any reforms requiring implementation from September 2016.

One might therefore conclude that:

  • We are about two-thirds of the way through the main implementation period.
  • There is a period of some 100 working days in which to complete the reforms expected to be notified to schools before the end of the AY2014/15 summer term. This is divided into two windows of some 50 working days on either side of Purdah.
  • There is some scope to extend more deadlines into the summer break and autumn 2015, but the costs of doing so – including loss of professional goodwill – might outweigh the benefits.

Purdah will act as a brake on progress across the piece. It will delay announcements that might otherwise have been made in April and early May, such as those related to new tests scheduled for May 2016.

The implications of Purdah are discussed further in the final section of this post.


Reception Baseline Assessment

Consultation response

A new Reception Baseline will be introduced from September 2015. This will be undertaken by children within their first few weeks of school (so not necessarily during the first half of the autumn term, since children join reception at different points in the year).

Teachers will be able to select from a range of assessments ‘but most are likely to be administered by the reception teaching staff’.  Assessments will be ‘short’ and ‘sit within teachers’ broader assessments of children’s development’.

They will be:

‘…strong predictors of key stage 1 and key stage 2 attainment whilst reflecting the age and abilities of children in reception’

Schools that use an approved baseline assessment ‘in September 2015’ (and presumably later during the 2015/16 academic year) will have their progress measured in 2022 against that or a KS1 baseline, whichever gives the best result.

However, only the reception baseline will be available from September 2016 and, from this point, the Early Years Foundation Stage (EYFS) profile will no longer be compulsory.

The reception baseline will not be compulsory either, since:

‘Schools that choose not to use an approved baseline assessment from 2016 will be judged on an attainment floor standard alone.’

But, since the attainment floor standard is so demanding (see below), this apparent choice may prove illusory for most schools.

Further work includes:

  • Engaging experts to develop criteria for the baselines.
  • A study in autumn 2014 of schools that already use such assessments, to inform decisions on moderation and the reporting of results to parents.
  • Communicating those decisions about moderation and reporting results – to Ofsted as well as to parents – ensuring they are ‘contextualised by teachers’ broader assessments’.
  • Publishing a list of assessments that meet the prescribed criteria.


Developments to date

Baseline criteria were published by the STA in May 2014.

The purpose of the assessments is described thus:

‘…to support the accountability framework and help assess school effectiveness by providing a score for each child at the start of reception which reflects their attainment against a pre-determined content domain and which will be used as the basis for an accountability measure of the relative progress of a cohort of children through primary school.’

This emphasis on the relevance of the baseline to floor targets is in marked contrast with the emphasis on reporting progress to parents in the original consultation document.

Towards the end of the document there is a request for ‘supporting information in addition to the criteria’:

‘What guidance will suppliers provide to schools in order to enable them to interpret the results and report them to parents in a contextualised way, for example alongside teacher observation?’

This seems to refer to the immediate reporting of baseline outcomes rather than of subsequent progress measures. Suitability for this purpose does not appear within the criteria themselves.

Interestingly, the criteria specify that the content domain:

‘…must demonstrate a clear progression towards the key stage 1 national curriculum in English and mathematics’,

but there is no reference to progression to KS2, and nothing about assessments being ‘strong predictors’ of future attainment, whether at KS1 or KS2.

Have expectations been lowered, perhaps because of concerns about the predictive validity of the assessments currently available?

A research study was commissioned in June 2014 (so earlier than anticipated) with broader parameters than originally envisaged.

The Government awarded a 9-month contract to NFER worth £49.7K, to undertake surveys of teachers’, school leaders’ and parents’ views on baseline assessment.

The documentation reveals that CEM is also involved in a parallel quantitative study which will ‘simulate an accountability environment’ for a group of schools, to judge changes in their behaviour.

Both of these organisations are also in the running for concession contracts to deliver the assessments from September 2015 (see below).

The aims of the project are to identify:

  • The impact of the introduction of baseline assessments in an accountability context.
  • Challenges to the smooth introduction of baseline assessments as a means to constructing an accountability measure.
  • Potential needs for monitoring and moderation approaches.
  • What reporting mechanisms and formats stakeholders find most useful.

Objectives are set out for an accountability strand and a reporting strand respectively. The former refer explicitly to identification of ‘gaming’ and the exploration of ‘perverse incentives’.

It is not entirely clear from the latter whether researchers are focused solely on initial contextualised reporting of reception baseline outcomes, or are also exploring the subsequent reporting of progress.

The full objectives are reproduced below.

[Image: full research objectives for the accountability and reporting strands]

The final ‘publishable’ report is to be delivered by March 2015. It will be touch and go whether this can be released before Purdah descends. Confirmation of policy decisions based on the research will likely be delayed until after the Election.


The process has begun to identify and publish a list of assessments that meet the criteria.

A tender appeared on Contracts Finder in September 2014 and has been updated several times subsequently, the most recent version appearing in early December.

The purpose is to award several concession contracts, giving holders the right to compete with each other to deliver baseline assessments.

Contracts were scheduled to be awarded on 26 January 2015, but there was no announcement. Each will last 19 months (to August 2016), with an option to extend for a further year. The total value of the contracts, including extensions, is calculated at £4.2m.

There is no limit to the number of concessions to be awarded, but providers must meet specified (and complex) school recruitment and delivery targets which essentially translate into a 10% sample of all eligible schools.

Under-recruiting providers can be included if fewer than four meet the 10% target, as long as they have recruited at least 1,000 eligible schools.

Moreover:

‘The minimum volume requirement may be waived if the number of schools choosing to administer the reception baseline is fewer than 8,887 [50% of the total number of schools with a reception class].’

Since 8,887 schools represent half of those with a reception class, the 10% recruitment target equates to roughly 1,777 schools per supplier. Hence the number of suppliers in the market is likely to be limited to 10 or so: there will be some choice, but not too much.

My online researches unearthed four obvious candidates – CEM and NFER among them – together with suggestions that this might constitute the entire field.

The initial deadline for recruiting the target number of schools is 30 April 2015, slap-bang in the middle of Purdah. This may prove problematic.

.

[Postscript: The award of six concession contracts was quietly confirmed on Wednesday 4 February, via new guidance on DfE’s website. The two contractors missing from the list above are Early Excellence and Hodder Education.

The guidance confirms that schools must sign up with their preferred supplier. They can do so after the initial deadline of 30 April but, on 3 June, schools will be told if they have chosen a provider that has been suspended for failing to recruit sufficient schools.  They will then need to choose an alternative provider.

It adds that, in AY2015/16, LA-maintained schools, academies and free schools will be reimbursed for the ‘basic cost’ of approved reception baselines. Thereafter, school budgets will include the necessary funding.

In the event, the Government has barely contributed to publicity for the assessment, leaving it to suppliers to make the running. The initial low-key approach (including links to the contractors’ home pages rather than to details of their baseline offers) has been maintained.

The only addition to the guidance has been the inclusion, from 20 March, of the criteria used to evaluate the original bids. This seems unlikely to help schools select their preferred solution since, by definition, all the successful bids must have satisfied these criteria!

Purdah will now prevent any further Government publicity.]


It seems likely that the decision to allow a range of baseline assessments – as opposed to a single national measure – will create significant comparability issues.

One of the ‘clarification questions’ posed by potential suppliers is:

‘We can find no reference to providing a comparability score between provider assessments. Therefore, can we assume that each battery of assessments will be independent, stand-alone and with no need to cross reference to other suppliers?’

The answer given is:

‘The assumption is correct at this stage. However, STA will be conducting a comparability study with successful suppliers in September 2015 to determine whether concordance tables can be constructed between assessments.’

This implies that progress measures will need to be calculated separately for users of each baseline assessment – and that these will be comparable only through additional ‘concordance tables’, should these prove feasible.
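To illustrate what a ‘concordance table’ involves: equipercentile linking is one standard technique for relating scores on two different assessments, matching scores that sit at the same percentile rank in their respective distributions. The sketch below is hypothetical – the STA study had not reported at the time of writing – and the data are simulated.

```python
import numpy as np

def concordance_table(scores_a, scores_b, points_a):
    """For each score on assessment A, return the score on assessment B
    sitting at the same percentile rank of its own distribution."""
    percentiles = np.array([np.mean(scores_a <= p) * 100 for p in points_a])
    return np.percentile(scores_b, percentiles)

# Simulated national samples from two (hypothetical) suppliers whose
# baselines use different scales.
rng = np.random.default_rng(1)
supplier_a = rng.normal(50, 10, size=5000)
supplier_b = rng.normal(60, 15, size=5000)

# Supplier A scores of 40/50/60 map to roughly 45/60/75 on supplier B's scale.
print(concordance_table(supplier_a, supplier_b, np.array([40.0, 50.0, 60.0])))
```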

There are associated administrative and workload issues for schools, particularly those with high mobility rates, which may find themselves needing to engage with several different baseline assessment products.

One answer to a supplier’s question reveals that:

‘As currently, children will be included in performance measures for the school in which they take their final assessment (i.e. key stage 2 tests) regardless of which school they were at for the input measure (i.e. reception baseline on key stage 1). We are currently reviewing how long a child needs to have attended a school in order for their progress outcome to be included in the measure.’

The issue of comparability also raises questions about the aggregation of progress measures for floor target purposes. Will targets based on several different baseline assessments be comparable with those based on only one? Will schools with high mobility rates be disadvantaged?

Schools will pay for the assessments. The supporting documentation says that:

‘The amount of funding that schools will be provided with is still to be determined. This will not be determined until after bids have been submitted to avoid accusations of price fixing.’

One of the answers to a clarification question says:

‘The funding will be available to schools from October 2015 to cover the reception baseline for the academic year 2015/16.’

Another says this funding is unlikely to be ringfenced.

There is some confusion over the payment mechanism. One answer says:

‘…the mechanism for this is still to be determined. In the longer term, money will be provided to schools through the Dedicated Schools Grant (DSG) to purchase the reception baseline. However, the Department is still considering options for the first year and may pay suppliers directly depending on the amount of data provided.’

But yet another is confident that:

‘Suppliers will be paid directly by schools. The Department will reimburse schools separately.’

The documentation also reveals that there has as yet been no decision on how to measure progress between the baseline and the end of KS2:

‘The Department is still considering how to measure this and is keen for suppliers to provide their thoughts.’
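For what it is worth, one plausible shape for such a measure is a simple value-added model: compare each pupil’s KS2 outcome with the national average outcome for pupils with the same baseline score, then average the differences across the cohort. The sketch below is purely illustrative – the Department had made no decision, so every detail here is an assumption.

```python
import numpy as np

def cohort_progress(baseline, ks2, national_baseline, national_ks2):
    """Average gap between each pupil's KS2 score and the national
    expectation for pupils with the same baseline score, estimated here
    with a simple straight-line fit."""
    slope, intercept = np.polyfit(national_baseline, national_ks2, 1)
    expected = slope * baseline + intercept
    return float(np.mean(ks2 - expected))

# Simulated national relationship plus one school whose pupils run
# roughly two points ahead of expectation.
rng = np.random.default_rng(2)
nat_base = rng.normal(50, 10, 10000)
nat_ks2 = nat_base + rng.normal(0, 5, 10000)
school_base = rng.normal(50, 10, 60)
school_ks2 = school_base + 2 + rng.normal(0, 5, 60)
print(cohort_progress(school_base, school_ks2, nat_base, nat_ks2))  # ~ +2
```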

The ‘Statement of requirements’ once again foregrounds the use of the baseline for floor targets rather than reporting individual learners’ progress.

‘On 27 March 2014, the Department for Education (DfE) announced plans to introduce a new floor standard from September 2016. This will be based on the progress made by pupils from reception to the end of primary school.  The DfE will use a new Reception Baseline Assessment to capture the starting point from which the progress that schools make with their pupils will be measured.  The content of the Reception Baseline will reflect the knowledge and understanding of children at the start of reception, and will be clearly linked to the learning and development requirements of the Early Years Foundation Stage and key stage 1 national curriculum in English and mathematics.  The Reception Baseline will be administered within the first half term of a pupil’s entry to a reception class.’

In relation to reporting to parents, one of the answers to suppliers’ questions states:

‘Some parents will be aware of the reception baseline from the national media coverage of the policy announcement. We anticipate that awareness of the reception baseline will develop over time. As with other assessments carried out by a school, we would expect schools to share information with parents if asked, though there will be no requirement to report the outcome of the reception baseline to parents.’

So it appears that, regardless of the outcomes of the research above, initial short term reporting of reception baseline outcomes will be optional.


[Postscript: This position is stated still more vigorously in a letter dated November 2014 from Ministers to a primary group formed by two maths associations. It says:

‘Let me be clear that we do not intend the baseline assessment to be used to monitor the progress of individual children. You rightly point out that any assessment that was designed to be reliable at individual child level would need to take into account the different ages at which children start reception and be sufficiently detailed to account for the variation in performance one expects from young children day-to-day. Rather, the baseline assessment is about capturing the starting point for the cohort which can then be used to assess the progress of that cohort at the end of primary school.’

This distinction has not been made sufficiently explicit in material published elsewhere.]


The overall picture is of a process in which procurement is running in parallel with research and development work intended to help resolve several significant and outstanding issues. This is a consequence of the September 2015 deadline for introduction, which seems increasingly problematic.

Particularly so given that many professionals are yet to be convinced of the case for reception baseline assessment, expressing reservations on several fundamental grounds, extending well beyond the issues highlighted above.

A January 2015 Report from the Centre Forum – Progress matters in Primary too – defends the plan against its detractors, citing six key points of concern. Some of the counter-arguments summarised below are rather more convincing than others:

  • Validity: The contention that reception level assessments are accurate predictors of attainment at the end of KS2 is justified by reference to CEM’s PIPS assessment, which was judged in 2001 to give a correlation of 0.7. But of course KS2 tests were very different in those days.
  • Reliability: The notion that attainment can be reliably determined in reception is again justified with reference to PIPS data from 2001 (showing a 0.98 correlation on retesting). The authors argue that the potentially negative effects of test conditions on young children and the risks of bias should be ‘mitigated’ (but not eliminated) through the development and selection process.
  • Contextualisation: The risk of over-simplification through reporting a single numerical score, independent of factors such as age, needs to be set against the arguments in favour of a relatively simple and transparent methodology. Schools are free to add such context when communicating with parents.
  • Labelling: The argument that baseline outcomes will tend to undermine universally high expectations is countered by the view that assessment may actually challenge labelling attributable to other causes, and can in any case be managed in reporting to parents by providing additional contextual information.
  • Pupil mobility: Concern that the assessment will be unfair on schools with high levels of mobility is met by reference to planned guidance on ‘how long a pupil needs to have attended a school in order to be included in the progress measure’. However, the broader problems associated with a choice of assessments are acknowledged.
  • Gaming: The risk that schools will artificially depress baseline outcomes will be managed through effective moderation and monitoring.

The overall conclusion is that:

‘…the legitimate concerns raised by stakeholders around the reliability and fairness of a baseline assessment do not present fundamental impediments to implementing the progress measure. Overall, a well-designed assessment and appropriate moderation could address these concerns to the extent that a baseline assessment could provide a reasonable basis for constructing a progress measure.

That said, the Department for Education and baseline assessment providers need to address, and, where indicated, mitigate the concerns. However, in principle, there is nothing to prevent a well-designed baseline test being used to create a progress-based accountability measure.’

The report adds:

‘However, this argument still needs to be won and teachers’ concerns assuaged….

.. Since the majority of schools will be reliant on the progress measure under the new system, they need to be better informed about the validity, reliability and purpose of the baseline assessment. To win the support of school leaders and teachers, the Department for Education must release clear, defensible evidence that the baseline assessment is indeed valid, fair and reliable.’


[Postscript: On 25 March the STA tendered for a supplier to ‘determine appropriate models for assuring the national data from the reception baseline’. The notice continues:

‘Once models have been determined, STA will agree up to three approaches to be implemented by the supplier in small scale pilots during September/October 2015. The supplier will also be responsible for evaluating the approaches using evidence from the pilots with the aim of recommending an approach to be implemented from September 2016.’

The need for quality assurance is compounded by the fact that there are six different assessment models. The documentation makes clear that monitoring, moderation and other quality assurance methods will be considered.

The contract runs from 1 July 2015 to 31 January 2016 with the possibility of extension for a further 12 months. It will be let by 19 June.]


Outstanding tasks

  • Publish list of contracts for approved baseline assessments (26 January 2015) COMPLETED
  • Explain funding arrangements for baseline assessments and how FY2015-16 funding will be distributed (January 2015?) COMPLETED
  • Publish research on baseline assessment (March/April 2015) 
  • Confirm monitoring and moderation arrangements (March/April 2015?) 
  • Deadline for contractors recruiting schools for initial baseline assessments (30 April 2015) 
  • Publish guidance on the reporting of baseline assessment results (May 2015?) 
  • Award quality assurance tender (June 2015)
  • Undertake comparability study with successful suppliers to determine whether concordance tables can be constructed (Autumn 2015) 
  • Determine funding required for AY2015/16 assessment and distribute to schools (or suppliers?) (October 2015?)
  • Pilot quality assurance models (October 2015)

KS1 and KS2 tests


Consultation response

The new tests will comprise:

  • At KS1 – externally set and internally marked tests of maths and reading and an externally set test of grammar, punctuation and spelling (GPS). It is unclear from the text whether the GPS test will be externally marked.
  • At KS2 – externally set and externally marked tests of maths, reading and GPS, plus a sampling test in science.

Outcomes of both KS1 and KS2 tests (other than the science sampling test) will be expressed as scaled scores. A footnote makes it clear that, in both cases, a score of 100 ‘will represent the new expected standard for that stage’.

The consultation document says of the scaled scores:

‘Because it is not possible to create tests of precisely the same difficulty every year, the number of marks needed to meet the secondary readiness standard will fluctuate slightly from one year to another. To ensure that results are comparable over time, we propose to convert raw test marks into a scaled score, where the secondary readiness standard will remain the same from year to year. Scaled scores are used in all international surveys and ensure that test outcomes are comparable over time.’

It adds that the Standards and Testing Agency (STA) will develop the scale.

Otherwise very little detail is provided about next steps. The consultation response is silent on the issue. The original consultation document says only that:

‘The Standards and Testing Agency will develop new national curriculum tests, to reflect the new national curriculum programmes of study.’

Adding, in relation to the science sampling test:

‘We will continue with national sample tests in science, designed to monitor national standards over time. A nationally-representative sample of pupils will sit a range of tests, designed to produce detailed information on the cohort’s performance across the whole science curriculum. The design of the tests will mean that results cannot be used to hold individual schools or pupils accountable.’


Developments to date

On 31 March 2014, the STA published draft test frameworks for the seven KS1 and KS2 tests to be introduced from 2016:

  • KS1 GPS: a short written task (20 mins); short answer questions (20 mins) and a spelling task (15 mins)
  • KS1 reading: two reading tests, one with texts and questions together, the other with a separate answer booklet (2 x 20 mins)
  • KS1 maths: an arithmetic test (15 mins) and a test of fluency, problem-solving and reasoning (35 mins)
  • KS2 GPS: a grammar and punctuation test (45 mins) and a spelling task (15 mins)
  • KS2 reading: a single test (60 mins)
  • KS2 maths: an arithmetic test (30 mins) and two tests of fluency, problem-solving and reasoning (2 x 40 mins)
  • KS2 science (sampling): tests in physics, chemistry and biology contexts (3 x 25 mins).

Each test will be designed for the full range of prior attainment and questions will typically be posed in order of difficulty.

Each framework explains that all eligible children at state-funded schools will be required to take the tests, but some learners will be exempt.

For further details of which learners will be exempted, readers are referred to the current Assessment and Reporting Arrangements (ARA) booklets.

According to these, the KS1 tests should be taken by all learners working at level 1 or above and the KS2 tests by all learners working at level 3 and above. Teacher assessment data must be submitted for pupils working below the level of the tests.

But of course levels will no longer exist – and we have no equivalent in the form of scaled scores – so the draft frameworks do not define clearly the lower parameter of the range of prior attainment the tests are intended to accommodate.

It will not be straightforward to design workable tests for such broad spans of prior attainment.

Each framework has a common section on the derivation of scaled scores:

‘The raw score on the test…will be converted into a scaled score. Translating raw scores into scaled scores ensures performance can be reported on a consistent scale for all children. Scaled scores retain the same meaning from one year to the next. Therefore, a particular scaled score reflects the same level of attainment in one year as in the previous year, having been adjusted for any differences in difficulty of the test.

Additionally, each child will receive an overall result indicating whether or not he or she has achieved the required standard on the test. A standard-setting exercise will be conducted on the first live test in 2016 in order to determine the scaled score needed for a child to be considered to have met the standard. This process will be facilitated by the performance descriptor… which defines the performance level required to meet the standard. In subsequent years, the standard will be maintained using appropriate statistical methods to translate raw scores on a new test into scaled scores with an additional judgemental exercise at the expected standard. The scaled score required to achieve the expected level on the test will always remain the same.

The exact scale for the scaled scores will be determined following further analysis of trialling data. This will include a full review of the reporting of confidence intervals for scaled scores.’
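To make the anchoring mechanism concrete, here is a toy illustration of how a raw-mark threshold that fluctuates from year to year can be pinned to a fixed scaled score of 100. The scale range, mark totals and simple linear interpolation are all my assumptions – the STA had not determined the actual scale at the time of writing, and real conversions rest on statistical equating of trialling data.

```python
def scaled_score_table(threshold_raw, max_raw, scale_min=80, scale_max=120):
    """Map raw marks to scaled scores by linear interpolation, pinning the
    expected-standard threshold to a scaled score of 100."""
    table = {}
    for raw in range(max_raw + 1):
        if raw <= threshold_raw:
            scaled = scale_min + (100 - scale_min) * raw / threshold_raw
        else:
            scaled = 100 + (scale_max - 100) * (raw - threshold_raw) / (max_raw - threshold_raw)
        table[raw] = round(scaled)
    return table

# A harder paper (58 marks needed out of 110) and an easier one (61 needed)
# both pin the expected standard to the same scaled score.
print(scaled_score_table(58, 110)[58])  # 100
print(scaled_score_table(61, 110)[61])  # 100
```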

In July 2014 STA also published sample questions, mark schemes and associated commentaries for each test.


Outstanding tasks

I have been unable to trace any details of the timetable for test development and trialling.

As far as I can establish, STA has not published an equivalent to QCDA’s ‘Test development, level setting and maintaining standards’ (March 2010) which describes in some detail the different stages of the test development process.

This old QCA web-page describes a 22-month cycle, from the initial stages of test development to the administration of the tests.

This aligns reasonably well with the 25-month period between publication of the draft test frameworks on 31 March 2014 and the administration of the tests in early May 2016.

Applying the same timetable to the 2016 tests – using publication of the draft frameworks as the starting point – suggests that:

  • The first pre-test should have been completed by November 2014
  • The second pre-test should take place by February 2015 
  • Mark schemes and tests should be finalised by July 2015

STA commits to publishing the final test frameworks and a full set of sample tests and mark schemes for each of the national curriculum tests at key stages 1 and 2 ‘during the 2015 summer term’.

Given Purdah, these seem most likely to appear towards the end of the summer term rather than a full year ahead of the tests.

In relation to the test frameworks, STA says:

‘We may make small changes as a result of this work; however, we do not expect the main elements of the frameworks to change.’

They will also produce, to the same deadline, guidance on how the results of national curriculum tests will be reported, including an explanation of scaled scores.

So we have three further outstanding tasks:

  • Publishing the final test frameworks (summer term 2015) 
  • Finalising the scale to be used for the tests (summer term 2015) 
  • Publishing guidance explaining the use and reporting of scaled scores (summer term 2015)


[Postscript: Since publishing this post, I have found on Contracts Finder various STA contracts relating to the development of the new tests.

How these square with the timetable above is, as yet, unclear. If the final test frameworks cannot be finalised until autumn 2015, the Workload Challenge Protocol may well bite here too.]


Statutory teacher assessment


Consultation response

The response confirms statutory teacher assessment of:

  • KS1 maths, reading, writing, speaking and listening and science
  • KS2 maths, reading, writing and science.

There are to be performance descriptors for each statutory teacher assessment:

  • a single descriptor for KS1 science and KS2 science, reading and maths
  • several descriptors for KS1 maths, reading, writing and speaking and listening, and also for KS2 writing.

There is a commitment to improve KS1 moderation, given concerns expressed by Ofsted and the NAHT Commission.

In respect of low attaining pupils the response says:

‘All pupils who are not able to access the relevant end of key stage test will continue to have their attainment assessed by teachers. We will retain P-scales for reporting teachers’ judgements. The content of the P-scales will remain unchanged. Where pupils are working above the P-scales but below the level of the test, we will provide further information to enable teachers to assess attainment at the end of the relevant key stage in the context of the new national curriculum.’

And there is to be further consideration of whether to move to external moderation of P-scale teacher assessment.

So, to summarise, the further work involves:

  • Developing new performance descriptors – to be drafted by an expert group. According to the response, the KS1 descriptors would be introduced in ‘autumn 2014’. No date is given for the KS2 descriptors.
  • Improving moderation of KS1 teacher assessment, working closely with schools and Ofsted.
  • Providing guidance to support teacher assessment of those working above the P-scales but below the level of the tests.
  • Deciding whether to move to external moderation of P-scale teacher assessment.


Developments to date

Updated statutory guidance on the P-scale attainment targets for pupils with SEN was released in July 2014, but neither it nor the existing guidance on when to use the P-scales relates them to the new scaled scores, or discusses the issue of moderation.


In September 2014, a guidance note ‘National curriculum and assessment from September 2014: Information for schools’ revised the timeline for the development of performance descriptors:

‘New performance descriptors will be published (in draft) in autumn 2014 which will inform statutory teacher assessment at the end of key stage 1 and 2 in summer 2016. Final versions will be published by September 2015.’


A consultation document on performance descriptors: ‘Performance descriptors for use in key stage 1 and 2 statutory teacher assessment for 2015 to 2016’ was published on 23 October 2014.

The descriptors were:

‘… drafted with experts, including teachers, representatives from Local Authorities, curriculum and subject experts. Also Ofsted and Ofqual have observed and supported the drafting process’

A November 2014 FoI response revealed the names of the experts involved and brief biographies were provided in the media.

A further FoI has been submitted requesting details of their remit but, at the time of writing, this has not been answered.


[Postscript: The FoI response setting out the remit was published on 5 February.]


The consultation document revealed for the first time the complex structure of the performance descriptor framework.

It prescribes four descriptors for KS1 reading, writing and maths but five for KS2 writing.

The singleton descriptors reflect ‘working at the national standard’.

Where four descriptors are required these are termed (from the top down): ‘mastery’, ‘national’, ‘working towards national’ and ‘below national’ standard.

In the case of KS2 writing ‘above national standard’ is sandwiched between ‘mastery’ and ‘national’.

[Images: the structure of the performance descriptor framework]

The document explains how these different levels cross-reference to the assessment of learners exempted from the tests.

In the case of assessments with only a single descriptor, it becomes clear that a further distinction is needed:

‘In subjects with only one performance descriptor, all pupils not assessed against the P-scales will be marked in the same way – meeting, or not meeting, the “national standard”.’

So ‘not meeting the national standard’ should also be included in the table above. The relation between ‘not meeting’ and ‘below’ national standard is not explained.

But still further complexity is added since:

‘There will be some pupils who are not assessed against the P-scales (because they are working above P8 or because they do not have special educational needs), but who have not yet achieved the contents of the ‘below national standard’ performance descriptor (in subjects with several descriptors). In such cases, pupils will be given a code (which will be determined) to ensure that their attainment is still captured.’

This produces a hierarchy as follows (from the bottom up):

  • P-scales
  • For assessments with several descriptors, an attainment code yet to be determined
  • For assessments with a single descriptor, an undeclared ‘not meeting the national standard’ descriptor
  • The single descriptor or the four/five descriptors listed above.

However, the document says:

‘The performance descriptors do not include any aspects of performance from the programme of study for the following key stage. Any pupils considered to have attained the ‘Mastery standard’ are expected to explore the curriculum in greater depth and build on the breadth of their knowledge and skills within that key stage.’

This places an inappropriate brake on the progress of the highest attainers because the assessment ceiling is pitched too low to accommodate them.

The document acknowledges that some high attainers will be performing above the level of the highest descriptors but, regardless of whether or not they move into the programme for the next key stage, there is no mechanism to record their performance.

This raises the further question of whether the mastery standard is pitched at the equivalent of level 6, or below it. It will be interesting to see whether this is addressed in the consultation response.

The consultation document says that the draft descriptors will be trialled during summer term 2015 in a representative sample of schools.

These trials and the consultation feedback will together inform the development of the final descriptors, but also:

  • ‘statutory arrangements for teacher assessment using the performance descriptors;
  • final guidance for schools (and those responsible for external moderation arrangements) on how the performance descriptors should be used;
  • an updated national model for the external moderation of teacher assessment; and
  • nationally developed exemplification of the work of pupils for each performance descriptor at the end of each key stage.’

Published comments on the draft descriptors have been almost entirely negative, which might suggest that the response could be delayed. The consultation document said it should appear ‘around 26 February 2015’.

According to the document, the final descriptors will be published either ‘in September 2015’ or ‘in the autumn term 2015’, depending on whether you rely on the section headed ‘Purpose’ or the one called ‘Next Steps’. The latter option would allow them to appear as late as December 2015.

A recent newspaper report suggested that the negative reception had resulted in an ‘amber/red’ assessment of primary assessment reform as a whole. The leaked commentary said that any decision to review the approach would increase the risk that the descriptors could not be finalised ‘by September as planned’.

However, the story concludes:

‘The DfE says: “We do not comment on leaks,” but there are indications from the department that the guidance will be finalised by September. Perhaps ministers chose, in the end, not to “review their approach”, despite the concerns.’

Hence it would appear that delay until after the beginning of AY2015/16 will not be countenanced.

Note that the descriptors are for use in academic year 2015/16, so even publication in September is problematic, since teachers will begin the year not knowing which descriptors to apply.

The consultation document refers only to descriptors for AY2015/16, which might imply that they will be further refined for subsequent years. Essentially therefore, the arrangements proposed here would be an imperfect interim solution.


[Postscript: On 26 February 2015 the Consultation Response was published – so on the date committed to in the consultation document.

As expected, it revealed significant opposition to the original proposals:

  • 74% of respondents were concerned about nomenclature
  • 76% considered that the descriptors were not spaced effectively across the range of pupils’ performance
  • 69% of respondents considered them not clear or easy to understand

The response acknowledges that the issues raised:

‘….amount to a request for greater simplicity, clarity and consistency to support teachers in applying performance descriptors and to help parents understand their meaning.’

But goes on to allege that: 

‘…there are some stakeholders who valued the levels system and would like performance descriptors to function in a similar way across the key stages, which is not their intention.’

Even so, although the Descriptors are not intended to inform formative assessment, respondents have raised concerns that they could be applied in this manner.

There is also the issue of comparability between formative and summative assessment measures, but this is not addressed.

The response does not entirely acknowledge that opposition to the original proposals is sending it back to the drawing board but:

‘As a result of some of the conflicting responses to the consultation, we will work with relevant experts to determine the most appropriate course of action to address the concerns raised and will inform schools of the agreed approach according to the timetable set out in the consultation document – i.e. by September 2015.’

The new assessment commission (see below) will have an as yet undefined role in this process:

‘In the meantime, and to help with this [ie determining the most appropriate course of action] the Government is establishing a Commission on Assessment Without Levels….’

Unfortunately, this role has not been clarified in the Commission’s Statement of Intended Outputs.

There is no reference to the trials in schools, which may or may not continue. A DfE Memorandum to the Education Select Committee on its 2014-15 Supplementary Estimates reveals that £0.3m has been reallocated to pay for them, but this is no guarantee that they will take place.

Implementation will not be delayed by a year, despite the commitment to allow a full year’s notice for significant reforms announced in the response to the Workload Challenge.

This part of the timetable is now seriously concertina’d and there must be serious doubt whether the timescale is feasible, especially if proper trialling is to be accommodated.]


Outstanding tasks 

  • Publish response to performance descriptors consultation document (26 February 2015) COMPLETED
  • Trial (revised?) draft performance descriptors (summer term 2015) 
  • Publish adjusted descriptors, revised in the light of consultation with experts and input from the commission (summer term 2015)
  • Experts and commission on assessment produce response to concerns raised and inform schools of outcomes (September 2015)
  • Confirm statutory arrangements for use of the performance descriptors (September/autumn term 2015) 
  • Publish final performance descriptors for AY2015/16 (September/autumn term 2015) 
  • Publish final guidance on the use of performance descriptors (September/autumn term 2015) 
  • Publish exemplification of each performance descriptor at each key stage (September/autumn term 2015)
  • Publish an updated model for the external moderation of teacher assessment (September/autumn term 2015?) 
  • Confirm plans for the moderation of KS1 teacher assessment and use of the P-scales (September/autumn term 2015?) 
  • Publish guidance on assessment of those working above the P-scales but below the level of the tests (September/autumn term 2015?) 
  • Decide whether performance descriptors require adjustment for AY2016/17 onwards (summer term 2016)


Schools’ internal assessment and tracking systems

.

Consultation response

The consultation document outlined some of the Government’s justification for the removal of national curriculum levels. The statement that:

‘Schools will be able to focus their teaching, assessment and reporting not on a set of opaque level descriptions, but on the essential knowledge that all pupils should learn’

may be somewhat called into question by the preceding discussion of performance descriptors.

The consultation document continues:

‘There will be a clear separation between ongoing, formative assessment (wholly owned by schools) and the statutory summative assessment which the government will prescribe to provide robust external accountability and national benchmarking. Ofsted will expect to see evidence of pupils’ progress, with inspections informed by the school’s chosen pupil tracking data.’

A subsequent section adds:

‘We will not prescribe a national system for schools’ ongoing assessment….

…. We expect schools to have a curriculum and assessment framework that meets a set of core principles…

 … Although schools will be free to devise their own curriculum and assessment system, we will provide examples of good practice which schools may wish to follow. We will work with professional associations, subject experts, education publishers and external test developers to signpost schools to a range of potential approaches.’

The consultation response does not cover this familiar territory again, saying only:

‘Since we launched the consultation, we have had conversations with our expert group on assessment about how to support schools to make best use of the new assessment freedoms. We have launched an Assessment Innovation Fund to enable assessment methods developed by schools and expert organisations to be scaled up into easy-to-use packages for other schools to use.’

Further work is therefore confined to the promulgation of core principles, the application of the Assessment Innovation Fund and possibly further work to ‘signpost schools to a range of potential approaches’.


Developments to date

The Assessment Innovation Fund was originally announced in December 2013.

A factsheet released at that time explains that many schools are developing new curriculum and assessment systems and that the Fund is intended to enable schools to share these.

Funding of up to £10K per school is made available to help up to 10 schools to prepare simple, easy-to-use packages that can be made freely available to other schools.

They must commit to:

‘…make their approach available on an open licence basis. This means that anyone who wishes to use the package (and any trade-marked name) must be granted a non-revocable, perpetual, royalty-free licence to do so with the right to sub-licence. The intellectual property rights to the system will remain with the school/group which devised it.’

Successful applicants were to be confirmed ‘in the week commencing 21 April 2014’.

In the event, nine successful applications were announced on 1 May, although one subsequently withdrew, apparently over the licensing terms.

The packages developed with this funding are stored – in a rather user-unfriendly fashion – on this TES Community Blog, along with other material supportive of the decision to dispense with levels.

Much other useful material has been published online which has not been collected into this repository and it is not clear to what extent it will develop beyond its present limits, since the most recent addition was in early November 2014.

A recent survey by Capita Sims (itself a provider of assessment support) conducted between June and September 2014, suggested that:

  • 25% of primary and secondary schools were unprepared for the removal of levels and 53% had not yet finalised plans for replacing them.
  • 28% were planning to keep the existing system of levels, 21% intended to introduce a new system and 28% had not yet made a decision.
  • 50% of those introducing an alternative expected to do so by September 2015, while 23% intended to do so by September 2016.
  • Schools’ biggest concern (cited by 53% of respondents) was measuring progress and setting targets for learners.

Although the survey is four months old and has clear limitations (there were only 126 respondents), it would suggest that further support may be necessary, ideally targeted towards the least confident schools.


In April 2014 the Government published a set of Assessment Principles, building on earlier material in the primary consultation document. These had been developed by an ‘independent expert panel’.

It is not entirely clear whether the principles apply solely to primary schools and to schools’ own assessment processes (as opposed to statutory assessment).

The introductory statement says:

‘The principles are designed to help all schools as they implement arrangements for assessing pupils’ progress against their school curriculum; Government will not impose a single system for ongoing assessment.

Schools will be expected to demonstrate (with evidence) their assessment of pupils’ progress, to keep parents informed, to enable governors to make judgements about the school’s effectiveness, and to inform Ofsted inspections.’

This might suggest they are not intended to cover statutory assessment and testing but are relevant to secondary schools.

There are nine principles in all, divided into three groups:

[Image: the nine assessment principles]

The last of these seems particularly demanding.


In July 2014, Ofsted published guidance in the form of a ‘Note for inspectors: use of assessment information during inspections in 2014/15’. This says that:

‘In 2014/15, most schools, academies and free schools will have historic performance data expressed in national curriculum levels, except for those pupils in Year 1. Inspectors may find that schools are tracking attainment and progress using a mixture of measures for some, or all, year groups and subjects.

As now, inspectors will use a range of evidence to make judgements, including by looking at test results, pupils’ work and pupils’ own perceptions of their learning. Inspectors will not expect to see a particular assessment system in place and will recognise that schools are still working towards full implementation of their preferred approach.’

It goes on to itemise the ways in which inspectors will check that these systems are effective – not by judging the systems themselves, but by gathering evidence of effective implementation through leadership and management, the accuracy of assessment, effectiveness in securing progress and the quality of reporting to parents.


In September 2014, NCTL published a research report ‘Beyond Levels: alternative assessment approaches developed by teaching schools’.

The report summarises the outcomes of small-scale research conducted in 34 teaching school alliances. It offers six rather prolix recommendations for schools and DfE to consider, which can be summarised as follows:

  • A culture shift is necessary in recognition of the new opportunities provided by the new national curriculum and the removal of levels.
  • Schools need access to conferences and seminars to help develop their assessment expertise.
  • Schools would benefit from access to peer reviewed commercial tracking systems relating to the new national curriculum. Clarification is needed about what data will be collected centrally.
  • Teaching school alliances and schools need financial support to further develop assessment practice, especially practical classroom tools, which should be made freely available online.
  • Financial support is needed for teachers to undertake postgraduate research and courses in this field.
  • It is essential to develop professional knowledge about emerging effective assessment practice.

I can find no government response to these recommendations and so have not addressed them in the list of outstanding tasks below.


[Postscript: On 25 February 2015, the Government announced the establishment of a ‘Commission on Assessment Without Levels’:

‘To help schools as they develop effective and valuable assessment schemes, and to help us to identify model approaches we are today announcing the formation of a commission on assessment without levels. This commission will continue the evidence-based approach to assessment which we have put in place, and will support primary and secondary schools with the transition to assessment without levels, identifying and sharing good practice in assessment.’

This appears to suggest belated recognition that the steps outlined above have provided schools with insufficient support for the transition to levels-free internal assessment. It is also a response to the possibility that Labour might revisit the decision to remove them (see below).

The Consultation Response on Performance Descriptors released on 26 February (see above) says that the Commission will help to determine the most appropriate response to concerns raised about the Descriptors, while also suggesting that this task will not be devolved exclusively to them.

It adds that the Commission will:

‘…collate, quality assure, publish and share best practice in assessment with schools across the country…and will help to foster innovation and success in assessment practice more widely.’

The membership of the Commission was announced on 9 March.


The Commission met on 10 March and 23 March 2015 and will meet four more times – in April, May, June and July.

Its Terms of Reference have been published. The Statement of Intended Outputs mentioned in the consultation response on Performance Descriptors appeared without any publicity on 27 March.

It seemed that the Commission, together with the further consultation of experts, supplied a convenient mechanism for ‘parking’ some difficult issues until the other side of the Election.

However, neither the terms of reference nor the statement of outputs mentions the Performance Descriptors, so the Commission’s role in relation to them remains shrouded in mystery.

.

The authors of the Statement of Outputs feel it necessary to mention in passing that the Commission:

‘…supports the decision to remove levels, but appreciates that the reasons for removing levels are not widely understood’.

It sets out a 10-point list of outputs comprising:

  • Another statement of the purposes of assessment and another set of principles to support schools in developing effective assessment systems, presumably different to those published by the previous expert group in April 2014. (It will be interesting to compare the two sets of principles, to establish whether Government policy on what constitutes effective assessment has changed over the last 12 months. It will also be worthwhile monitoring the gap between the principles and the views of Alison Peacock, one of the Commission’s members: she also sat on the expert panel that developed the original principles, some of which seem rather at odds with her own practice and preferences.) Meanwhile, another member – Sam Freedman – has stated:

.

.

  • An explanation of ‘how assessment without levels can better serve the needs of pupils and teachers’.
  • Guidance to ‘help schools create assessment policies which reflect the principles of effective assessment without levels’.
  • Clear information about ‘the legal and regulatory assessment requirements’, intended to clarify what they are now, how they will change and when. (The fact that the Commission concludes that such information is not already available is a searing indictment of the Government’s communications efforts to date.)
  • Clarification with Ofsted of ‘the role that assessment without levels will play in the inspection process’ so schools can demonstrate effectiveness without adding to teacher workload. (So again they must believe that Ofsted has not sufficiently clarified this already.)
  • Dissemination of good practice, obtained through engagement with ‘a wide group of stakeholders including schools, local authorities, teachers and teaching unions’. (This is tacit admission that the strategy described above is not working.)
  • Advice to the Government on how ITT and CPD can support assessment without levels and guidance to schools on the use of CPD for this purpose. (There is no reference to the resource implications of introducing additional training and development.)
  • Advice to the Government on ensuring ‘appropriate provision is made for pupils with SEN in the development of assessment policy’. (Their judgement that this is not yet accounted for is a worrying indictment of Government policy to date. They see this as not simply a lapse of communication but a lacuna in the policy-making process.)
  • ‘Careful consideration’ of commitments to tackling teacher workload – which they expect to alleviate by providing information, advice and support. (There is no hint that the introduction of Performance Descriptors will be delayed in line with the Workload Challenge.)
  • A final report before the end of the summer term, though it may publish some outputs sooner. (It will not be able to do so until the outcome of the Election is decided.)

Although there is some implicit criticism of Government policy and communications to date, the failure to make any reference to the Performance Descriptors is unlikely to instil confidence in the capacity of the Commission to provide the necessary challenge to the original proposals, or support to the profession in identifying a workable alternative.]

.

Outstanding tasks

  • Further dissemination of good practice through the existing mechanisms (ongoing) 
  • Further ‘work with professional associations, subject experts, education publishers and external test developers to signpost schools to a range of potential approaches.’ (ongoing)
  • Additional work (via the commission) to ‘collate, quality assure, publish and share’ best practice (Report by July 2015 with other outputs possible from May 2015)

Reporting to parents

.

Consultation response

The consultation document envisaged three outcomes for each test:

  • A scaled score
  • The learner’s position in the national cohort, expressed as a decile
  • The rate of progress from a baseline, derived by comparing a learner’s scaled score with that of other learners with the same level of prior attainment.

Deciles did not survive the consultation.

The consultation response confirms that, for each test, parents will receive:

  • Their own child’s scaled score; and
  • The average scaled score for the school, ‘the local area’ (presumably the geographical area covered by the authority in which the school is situated) and the country as a whole.

They must also receive information about progress, but the response only discusses how this might be published on school websites and for the purposes of the floor targets (see sections below), rather than how it should be reported directly to parents.

We have already addressed the available information about the calculation of the scaled scores.

The original consultation document also outlined the broad methodology underpinning the progress measures:

‘In order to report pupils’ progress through the primary curriculum, the scaled score for each pupil at key stage 2 would be compared to the scores of other pupils with the same prior attainment. This will identify whether an individual made more or less progress than pupils with similar prior attainment…

…. Using this approach, a school might report pupils’ national curriculum test results to parents as follows:

In the end of key stage 2 reading test, Sally received a scaled score of 126 (the secondary ready standard is 100), placing her in the top 10% of pupils nationally. The average scaled score for pupils with the same prior attainment was 114, so she has made more progress in reading than pupils with a similar starting-point.’

.

Developments to date

On this web page, first published in April 2014, STA commits to publishing guidance during summer term 2015 on how the results of national curriculum tests will be reported, including an explanation of scaled scores.

In September 2014, a further guidance note ‘National curriculum and assessment from September 2014: Information for schools’ shed a little further light on the calculation of the progress measures:

‘Pupil progress will be determined in relation to the average progress made by pupils with the same baseline (i.e. the same KS1 average point score). For example, if a pupil had an APS of 19 at KS1, we will calculate the average scaled score in the KS2 tests for all pupils with an APS of 19 and see whether the pupil in question achieved a higher or lower scaled score than that average. The exact methodology of how this will be reported is still to be determined.’
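By way of illustration only, here is a minimal sketch of the calculation this describes, assuming a small table of KS1 average point scores (APS) and KS2 scaled scores. All pupils and figures are hypothetical, and the final methodology has not been published:

```python
from collections import defaultdict

# Hypothetical records: (pupil, KS1 average point score, KS2 scaled score).
pupils = [
    ("A", 19, 104),
    ("B", 19, 99),
    ("C", 19, 101),
    ("D", 21, 110),
    ("E", 21, 106),
]

# Average KS2 scaled score for each KS1 APS prior-attainment group.
totals = defaultdict(lambda: [0.0, 0])  # APS -> [sum of scores, count]
for _, aps, score in pupils:
    totals[aps][0] += score
    totals[aps][1] += 1
group_average = {aps: total / n for aps, (total, n) in totals.items()}

# A pupil's progress is their scaled score relative to that group average.
for pupil, aps, score in pupils:
    diff = score - group_average[aps]
    print(f"Pupil {pupil}: score {score}, APS-{aps} peer average "
          f"{group_average[aps]:.1f}, relative progress {diff:+.1f}")
```

On this model, pupil A (scaled score 104 against a peer-group average of about 101.3) would be reported as having made more progress than pupils with similar prior attainment – the same form of words used in the Sally example quoted earlier.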

It is hard to get a clear sense of the full range of assessment information that parents will receive.

I have been unable to find any comprehensive description, which would suggest that this is being held back until the methodology for calculating the various measures is finalised.

The various sections above suggest that they will receive details of:

  • Reception baseline assessment outcomes.
  • Attainment in end of KS1 and end of KS2 tests, now expressed as scaled scores (or via teacher assessment, code or P-scales if working below the level of the tests). This will be supplemented by a series of average scaled scores for each test.
  • Progress between the baseline assessment (reception baseline from 2022; KS1 baseline beforehand) and end of KS2 tests, relative to learners with similar prior attainment at the baseline.
  • Attainment in statutory teacher assessments, normally expressed through performance descriptors, but with different arrangements for low attainers.
  • Attainment and progress between reception baseline, KS1 and KS2 tests, provided through schools’ own internal assessment and tracking systems.

We have seen that reporting mechanisms for the first and fourth are not yet finalised.

The fifth is now for schools to determine, taking account of Ofsted’s guidance and, if they wish, the Assessment Principles.

The scales necessary to report the second are not yet published, and these also form the basis of the remaining progress measures.

Parents will be receiving this information in a variety of different formats: scaled scores, average scaled scores, baseline scores, performance descriptors, progress scores and internal tracking measures.

Moreover, the performance descriptor scales will vary according to the assessment and internal tracking will vary from school to school.

This is certainly much more complex than the current unified system of reporting based on levels. Parents will require extensive support to understand what they are receiving.

Outstanding tasks

Previous sections have already referenced expected guidance on reporting baseline assessments, scaled scores and the use of performance descriptors (which presumably includes parental reporting).

One assumes that there will also need to be unified guidance on all aspects of reporting to parents, intended for parental consumption.

So, avoiding duplication of previous sections, the remaining outstanding tasks are to:

  • Finalise the methodology for reporting on pupil progress (summer term 2015) 
  • Provide comprehensive guidance to parents on all aspects of reporting (summer term 2015?)

Publication of outcomes

.

Consultation response

This section covers publication of material for public consumption, within and alongside the Primary School Performance Tables and on schools’ websites.

The initial consultation document has much to say about the first of these, while the consultation response barely mentions the Tables, focusing almost exclusively on school websites.

The original document suggests that the Performance Tables will include a variety of measures, including:

  • The percentage of pupils meeting the secondary readiness standard
  • The average scaled score
  • Where the school’s pupils fit in the national cohort
  • Pupils’ rate of progress
  • How many of the school’s pupils are among the highest-attaining nationally, through a measure showing the percentage of pupils attaining a high scaled score in each subject.
  • Teacher assessment outcomes in English, maths and science
  • Comparisons of each school’s performance with that of schools with similar intake
  • Data about the progress of those with very low prior attainment.

All the headline measures will be published separately for pupils in receipt of the pupil premium.

All measures will be published as three year rolling averages in addition to annual results.

There is also a commitment to publish a wide range of test and teacher assessment data, relating to both attainment and progress, through a Data Portal:

‘The department is currently procuring a new data portal or “data warehouse” to store the school performance data that we hold and provide access to it in the most flexible way. This will allow schools, governors and parents to find and analyse the data about schools in which they are most interested, for example focusing on the progress of low attainers in mathematics in different schools or the attainment of certain pupil groups.’

The consultation response acknowledges as a guiding principle:

‘…a broad range of information should be published to help parents and the wider public know how well schools are performing.’

The accountability system will:

‘…require schools to publish information on their websites so that parents can understand both the progress pupils make and the standards they achieve.’

Data on low attainers’ attainment and progress will not be published since the diversity of this group demands extensive contextual information.

But when it comes to Performance Tables, the consultation response says only:

‘As now, performance tables will present a wide range of information about primary school performance.’

By implication, they will include progress measures since the text adds:

‘In 2022 performance tables, we will judge schools on whichever is better: their progress from the reception baseline to key stage 2; or their progress from key stage 1 to key stage 2.’

However, schools will be required to publish a suite of indicators in standard format on their websites, including:

  • The average progress made by pupils in reading, writing and maths
  • The percentage of pupils achieving the expected standard at the end of KS2 in reading, writing and maths
  • The average score of pupils in their end of KS2 assessments and
  • The ‘percentage of pupils who achieve a high score in all areas’ at the end of KS2.

The precise form of the last of these indicators is not explained. This is not quite the same as the ‘measure showing the percentage of pupils attaining a high scaled score in each subject’ mentioned in the original consultation document.

Does ‘all areas’ mean reading, writing and maths? Must learners achieve a minimum score in each assessment, or a single aggregate score above a certain threshold?
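The two interpretations can produce different answers for the same pupil, as this toy comparison illustrates (both thresholds and all scores are invented purely for illustration):

```python
# Hypothetical scaled scores for one pupil in reading, writing and maths.
scores = {"reading": 108, "writing": 97, "maths": 112}

HIGH_PER_SUBJECT = 105  # invented per-subject 'high score' threshold
HIGH_AGGREGATE = 315    # invented aggregate threshold (3 x 105)

# Interpretation 1: a minimum high score in every assessment.
high_in_each = all(score >= HIGH_PER_SUBJECT for score in scores.values())

# Interpretation 2: a single aggregate score above a threshold.
high_aggregate = sum(scores.values()) >= HIGH_AGGREGATE

print(f"High score in every subject? {high_in_each}")  # False (writing is 97)
print(f"Aggregate above threshold? {high_aggregate}")  # True (317 >= 315)
```

A pupil like this one would count towards the indicator under the aggregate reading but not under the per-subject reading, so the choice matters for schools near the boundary.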

In addition:

‘So that parents can make comparisons between schools, we would like to show each school’s position in the country on these measures and present these results in a manner that is clear for all audiences to understand. We will discuss how best to do so with stakeholders, to ensure that the presentation of the data is clear, fair and statistically robust.’

.

Developments to date

In June 2014, a consultation document was issued ‘Accountability: publishing headline performance measures on school and college websites’. This was accompanied by a press release.

The consultation document explains the intended relationship between the Performance Tables, Data Portal and material published on schools’ websites:

‘Performance tables will continue to provide information about individual schools and colleges and be the central source of school and college performance information.’

Moreover:

‘Future changes to the website, through the school and college performance data portal, will improve accessibility to a wide range of information, including the headline performance measures. It will enable interested parents, students, schools, colleges and researchers to interrogate educational data held by the Department for Education to best meet their requirements.’

But:

‘Nevertheless, the first place many parents and students look for information about a school or college is the institution’s own website’

Schools are already required to publish such information, but there is inconsistency in where and how it is presented. The document expresses the intention that consistent information should be placed ‘on the front page of every school and college website’.

The content proposed for primary schools’ websites covers the four headline measures set out in the consultation response.

A footnote says:

‘These measures will apply to all-through primary, junior and middle schools. Variants of these measures will apply for infant and first schools.’

But the variants are not set out.

There is no reference to the plan to show ‘each school’s position in the country on these measures’ as mentioned in the consultation response.

The consultation proposes a standard visual presentation which, for primary schools, looks like this:

.

[Image: proposed standard presentation of the headline measures for a primary school website]

.

The response to this consultation ‘Publishing performance measures on school and college websites’ appeared in December 2014 (the consultation document had said ‘Autumn 2014’).

The summary of responses says:

‘The majority of respondents to the consultation welcomed the proposals to present headline performance measures in a standard format. There was also strong backing for the proposed visual presentation of data to aid understanding of performance. However, many respondents suggested that without some sense of scale or spread to provide some context to the visual presentation, the data could be misleading. Others said that the language used alongside the charts should be clearer…

…Whilst most respondents favoured a data application tool that would remove the burden of annually updating performance data on school and college websites, they also highlighted the difficulties of developing a data application that would be compatible with a wide range of school and college websites.’

It is clear that some respondents had questioned why school websites should not simply carry a link on their homepage to the School Performance Tables.

In the light of this reaction, further research will be undertaken to:

  • develop a clear and simple visual representation of the data, but with added contextual information.
  • establish how performance tables data can be presented ‘in a way that reaches more parents’.

The timeline suggests that this will result in ‘proposals for redevelopment of performance tables’ by May 2015, so we can no longer assume that the Tables will cover the list of material suggested in the original consultation document.

The timeline indicates that if initial user research concludes that a data application is required, that will be developed and tested between June and October 2015, for roll out between September 2016 and January 2017.

Schools will be informed by autumn 2015 whether they should carry a link to the Tables, download a data application or pursue a third option.

But, nevertheless:

‘All schools and colleges, including academies, free schools and university technical colleges, will be required to publish the new headline performance measures in a consistent, standard format on their websites from 2016.’

So, if an application is not introduced, it seems that schools will still have to publish the measures on their websites: they will not be able to rely solely on a link to the Performance Tables.

Middle schools will only be required to publish the primary measures. No mention is made of infant or first schools.

.

There is no further reference to the data portal, since this project was quietly shelved in September 2014, following unexplained delays in delivery.

.

There has been no subsequent explanation of the implications of this decision. Will the material intended for inclusion in the Portal be included in the Performance Tables, or published by another route, or will it no longer be published?

.

Finally, some limited information has emerged about accountability arrangements for infant schools.

This appears on a web page – ‘New accountability arrangements for infant schools from 2016’ – published in June 2014.

It explains that the reception baseline will permit the measurement of progress alongside attainment. The progress of infant school pupils will be published for the first time in the 2019 Performance Tables.

This might mean a further addition to the list of information reported to parents set out in the previous section.

There is also a passing reference to moderation:

‘To help increase confidence and consistency in our moderation of infant schools, we will be increasing the proportion of schools where KS1 assessments are moderated externally. From summer 2015, half of all infant schools will have their KS1 assessments externally moderated.’

But no further information is forthcoming about the nature of other headline measures and how they will be reported.

.

Outstanding tasks

  • Complete user research and publish proposals for redevelopment of Performance Tables (May 2015) 
  • Confirm what data will be published in the 2016 Performance Tables (summer term 2015?)
  • Confirm how material originally intended for inclusion in Data Portal will be published (summer term 2015?)
  • Confirm the format and publication route for data showing each school’s position in the country on the headline measures (summer term 2015?) 
  • Confirm headline performance measures for infant and first schools (summer term 2015?) 
  • If necessary, further develop and test a prototype data application for schools’ websites (October 2015) 
  • Inform schools whether a data application will be introduced (autumn 2015) 
  • Amend School Information Regulations to require publication of headline measures in standard format (April 2016) 
  • If proceeding, complete development and testing of a data application (May 2016) 
  • If proceeding, complete roll out of data application (February 2017)

.

Floor standards

.

Consultation response

Minimum expectations of schools will continue to be embodied in floor standards. Schools falling below the floor will attract ‘additional scrutiny through inspection’ and ‘intervention may be required’.

Although the new standard:

‘holds schools to account both on the progress they make and on how well their pupils achieve.’

in practice schools need only satisfy one or the other.

An all-through primary school will be above the floor standards if:

  • Pupils make sufficient progress between the reception baseline and the end of KS2 in all of reading, writing and maths or
  • 85% or more of pupils meet the new expected standard at the end of KS2 (similar to Level 4b under the current system).

A junior or middle school will be above the floor standard if:

  • Pupils make sufficient progress at key stage 2 from their starting point at key stage 1; or
  • 85% or more of pupils meet the new expected standard at the end of key stage 2.

At this stage arrangements for measuring the progress of pupils in infant or first schools are still to be considered.

Since the reception baseline will be introduced in 2015, progress in all-through primary schools will continue to be measured from the end of KS1 until 2022.

This should mean that, prior to 2022, the standard would be achieved by ensuring that the progress made by pupils in a school – in reading, writing and maths – equals or exceeds the national average progress made by pupils with similar prior attainment at the end of KS1.

Exactly how individual progress will be aggregated to create a whole school measure is not yet clear. The original consultation document holds out the possibility that slightly below average progress will be acceptable:

‘…we expect the value-added score required to be above the floor to be between 98.5 and 99 (a value-added score of 100 represents average progress).’

The consultation response says the amount of progress required will be determined in 2016:

‘The proposed progress measure will be based on value-added in each of reading, writing and mathematics. Each pupil’s scaled scores in each area at key stage 2 will be compared with the scores of pupils who had the same results in their assessments at key stage 1.

For a school to be above the progress floor, pupils will have to make sufficient progress in all of reading, writing and mathematics. For 2016, we will set the precise extent of progress required once key stage 2 tests have been sat for the first time. Once pupils take a reception baseline, progress will continue to be measured using a similar value added methodology.’
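Pending the official methodology, the sketch below shows one plausible reading of such a value-added measure, in which 100 represents average progress. The 98.5 floor is the illustrative figure from the original consultation document; the function name, data and simple mean aggregation are my own assumptions:

```python
def school_value_added(pupils, national_averages):
    """Mean difference between each pupil's KS2 scaled score and the
    national average score for pupils in the same prior-attainment
    group, re-centred so that 100 represents exactly average progress."""
    diffs = [score - national_averages[group] for group, score in pupils]
    return 100 + sum(diffs) / len(diffs)

# Hypothetical data: (prior-attainment group, KS2 scaled score) per pupil.
national = {"low": 96.0, "middle": 103.0, "high": 112.0}
school = [("low", 95), ("middle", 104), ("middle", 101), ("high", 113)]

va = school_value_added(school, national)
floor = 98.5  # illustrative threshold from the original consultation document
print(f"Value-added score: {va:.2f} "
      f"({'above' if va >= floor else 'below'} the assumed floor)")
```

If this reading is right, the calculation would be run separately for reading, writing and mathematics, with a school needing sufficient progress in all three to clear the progress element of the floor.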

In 2022 schools will be assessed against either the reception or KS1 baseline, whichever gives the best result. From 2023 only the reception baseline will be in play.

The attainment standard will be based on achievement of ‘a scaled score of 100 or more’ in each of the reading and maths tests and achievement, via teacher assessment, of the new expected standard in writing (presumably the middle of the five described above).

The attainment standard is significantly more demanding: the present requirement is for 65% of learners to meet the expected standard, whereas the new threshold is 85% – and the standard itself will now be pitched higher, at the equivalent of Level 4B.
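On that description, the attainment element reduces to a simple percentage check, sketched here on hypothetical data. The 100-score line and the 85% threshold come from the consultation response, while the record structure and figures are invented:

```python
# Hypothetical records: (reading scaled score, maths scaled score,
# expected standard met in writing via teacher assessment?).
pupils = [
    (103, 101, True),
    (99, 105, True),
    (102, 100, False),
    (110, 108, True),
]

def meets_expected_standard(reading, maths, writing_ta_met):
    # A scaled score of 100 or more in each test, plus the expected
    # standard in writing via teacher assessment.
    return reading >= 100 and maths >= 100 and writing_ta_met

share = sum(meets_expected_standard(*p) for p in pupils) / len(pupils)
print(f"{share:.0%} meet the expected standard "
      f"({'above' if share >= 0.85 else 'below'} the 85% attainment floor)")
```

Note how demanding the combined requirement is: a pupil falling short on any one of the three elements drops out of the numerator.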

The original consultation document says:

‘Our modelling suggests that a progress measure set at this level, combined with the 85% threshold attainment measure, would result in a similar number of schools falling below the floor as at present. Over time we will consider whether schools should make at least average progress as part of floor standards.’

The consultation response does not confirm this judgement.

.

Developments

The only significant development since the publication of the consultation response is the detail provided on the June 2014 webpage ‘New accountability arrangements for infant schools from 2016’.

In addition to the points in the previous section, this also confirms that:

‘…there will not be a floor standard for infant schools’

But this statement has been called into question, since the table from the performance descriptors consultation, reproduced above, appears to suggest that KS1 teacher assessments in reading, writing and maths do contribute to a floor standard – whether for infant or all-through primary schools is unclear.

.

The aforementioned CentreForum report ‘Progress matters in Primary too’ (January 2015) also appears to call into question the results of the modelling reported in the initial consultation document.

It says:

‘…the likelihood is that, based on current performance, progress will be the measure used for the vast majority of schools, at least in the short to medium term. Even those schools which achieve the attainment floor target will only do so by ensuring at least average progress is made by their pupils. As a result, progress will in practice be the dominant accountability metric.’

It undertakes modelling based on 2013 attainment data – ie simulating the effect of the new standards had they been in place in 2013, using selected learning areas within the EYFSP as a proxy for the reception baseline – which suggests that just 10% of schools in 2013 would have met the new attainment floor.

It concludes that:

‘For the vast majority of schools, progress will be their only option for avoiding intervention when the reforms come into effect.’

Unfortunately though, it does not provide an estimate of the proportion of schools likely to achieve the progress floor standard, with either the current KS1 baseline or its proxy for a reception baseline.

Outstanding tasks

  • Confirm the detailed methodology for deriving both the attainment and progress elements of the floor standards, in relation to both the new reception baseline and the interim KS1 baseline (summer 2015?)
  • Set the amount of progress required to achieve the progress element of the floor standards (summer 2016)
  • (In the consultation document) Consider whether schools should make at least average progress as part of floor standards and ‘move to three year rolling averages for floor standard measures’ (long term)

.

Overall progress, Purdah and General Election outcomes

Progress to date and actions outstanding

The lists of outstanding actions above record some 40 tasks necessary to the successful implementation of the primary assessment and accountability reforms.

If the ‘advance notice’ conventions are observed, roughly half of these require completion by the end of the summer term in July 2015, within the two windows of 50 working days on either side of Purdah.

These conventions have already been set aside in some cases, most obviously in respect of reception baseline assessment and the performance descriptors for statutory teacher assessment.

Unsurprisingly, the commentary above suggests that these two strands of the reform programme are the most complex and potentially the most problematic.

The sheer number of outstanding tasks and the limited time in which to complete them could pose problems.

It is important to remember that there are similar reforms in the secondary and post-16 sectors that need to be managed in parallel.

The leaked amber/red rating was attributed solely to the negative reaction to the draft performance descriptors, but it could also reflect a wider concern that all the necessary steps may not be completed in time to give schools the optimal period for planning and preparation.

Schools may be able to cope with shorter notice in a few instances, where the stakes are relatively low, but if too substantial a proportion of the overall reform programme is delayed into next academic year, they will find the cumulative impact much harder to manage.

In a worst case scenario, implementation of some elements might need to be delayed by a year, although the corollary would be an extended transition period for schools that would be less than ideal. It may also be difficult to disentangle the different strands given the degree of interdependency between them.

Given the proximity of a General Election, it may not be politic to confirm such delays before Purdah intervenes: the path of least resistance is probably to postpone any difficult decisions for consideration by the incoming government.

.

The implications of Purdah

As noted above, if the General Election result is clear-cut, Purdah will last some five-and-a-half weeks and will occur at a critical point in the implementation timetable.

The impact of Purdah should not be under-estimated.

From the point at which Parliament is dissolved on Monday 30 March, the Government must abstain from major policy decisions and announcements.

The Election is typically announced a few days before the dissolution of Parliament. This ‘wash up’ period between announcement and dissolution is typically used to complete essential unfinished business.

The Cabinet Office issues guidance on conduct during Purdah shortly before it begins.

The 2015 guidance has not yet been issued, so the 2010 guidance is the best source of information about what to expect.

.

[Postscript: 2015 Guidance was posted on 30 March 2015 and is substantively the same as the 2010 edition.]

.

Key points include:

  • ‘Decisions on matters of policy on which a new Government might be expected to want the opportunity to take a different view from the present Government should be postponed until after the Election, provided that such postponement would not be detrimental to the national interest or wasteful of public money.’
  • ‘Officials should not… be asked to devise new policies or arguments…’
  • ‘Departmental communications staff may…properly continue to discharge during the Election period their normal function only to the extent of providing factual explanation of current Government policy, statements and decisions.’
  • ‘There would normally be no objection to issuing routine factual publications, for example, health and safety advice but these will have to be decided on a case by case basis taking account of the subject matter and the intended audience.’
  • ‘Regular statistical releases and research reports (e.g. press notices, bulletins, publications or electronic releases) will continue to be issued and published on dates which have been pre-announced. Ad hoc statistical releases or research reports should be released only where a precise release date has been published prior to the Election period. Where a pre-announcement has specified that the information would be released during a specified period (e.g. a week, or longer time period), but did not specify a precise day, releases should not be published within the Election period.’
  • ‘Research: Fieldwork involving interviews with the public or sections of it will be postponed or abandoned although regular, continuous and on-going statistical surveys may continue.’
  • ‘Official websites…the release of new online services and publication of reworked content should not occur until after the General Election… Content may be updated for factual accuracy but no substantial revisions should be made and distributed.’
  • The general principles and conventions set out in this guidance apply to NDPBs and similar public bodies.

Assuming similar provisions in 2015, most if not all of the assessment and accountability work programme would grind to a halt.

To take an example, it is conceivable that those awarded baseline assessment contracts would be able to recruit schools after 30 March, but they would receive little or no help from the DfE during the Purdah period. Given that the recruitment deadline is 30 April, this may be expected to depress recruitment significantly.

.

The impact of different General Election outcomes

Forming a Government in the case of a Hung Parliament may also take some time, further delaying the process.

The five days taken in 2010 may not be a guide to what will happen in 2015.

The Cabinet Manual (2011) says:

‘Where an election does not result in an overall majority for a single party, the incumbent government remains in office unless and until the Prime Minister tenders his or her resignation and the Government’s resignation to the Sovereign. An incumbent government is entitled to wait until the new Parliament has met to see if it can command the confidence of the House of Commons, but is expected to resign if it becomes clear that it is unlikely to be able to command that confidence and there is a clear alternative…

…The nature of the government formed will be dependent on discussions between political parties and any resulting agreement. Where there is no overall majority, there are essentially three broad types of government that could be formed:

  • single-party, minority government, where the party may (although not necessarily) be supported by a series of ad hoc agreements based on common interests;
  • formal inter-party agreement, for example the Liberal–Labour pact from 1977 to 1978; or
  • formal coalition government, which generally consists of ministers from more than one political party, and typically commands a majority in the House of Commons’.

If one or more of the parties forming the next government has a different policy on assessment and accountability, this could result in pressure to amend or withdraw parts of the reform programme.

If a single party is involved, pre-Election contact with civil servants may have clarified its intentions, enabling work to resume as soon as the new government is in place but, if more than one party is involved, it may take longer to agree the preferred way forward.

Under a worst case scenario, planners might need to allow for Purdah and post-Election negotiations to consume eight weeks or longer.

The impact of the Election on the shape and scope of the primary assessment and accountability reforms will also depend on which party or parties enter government.

If the same Coalition partners are returned, one might expect uninterrupted implementation, unless the minority Lib Dems seek to negotiate different arrangements, which seems unlikely.

But if a different party or a differently constituted Coalition forms the Government, one might expect decisions to abandon or delay some aspects of the programme.

If Labour forms the Government, or is the major party in a Coalition, some unravelling will be necessary.

They are broadly committed to the status quo:

‘Yet when it comes to many of the technical day-to-day aspects of school leadership – child protection, curriculum reform, assessment and accountability – we believe that a period of stability could prove beneficial for raising pupil achievement. This may not be an exciting rallying cry, but it is crucial that the incoming government takes account of the classroom realities.’

Hunt has also declared:

‘Do not mistake me: I am a zealot for minimum standards, rigorous assessment and intelligent accountability.

But if we choose to focus upon exam results and league tables to the detriment of everything else, then we are simply not preparing our young people for the demands of the 21st century.’

And, thus far, Labour has made few specific commitments in this territory:

  • They support reception baseline assessment but whether that extends to sustaining a market of providers is unknown. Might they be inclined to replace this with a single national assessment?
  • There is very little about floor targets – a Labour invention – although the Blunkett Review appears to suggest that Directors of School Standards will enjoy some discretion in respect of their enforcement.

Reading between the lines, it seems likely that they would delay some of the strands described above – and potentially simplify others.

.

Conclusion

The primary assessment reform programme is both extensive and highly complex, comprising several strands and many interdependencies.

Progress to date can best be described as halting.

There are still many steps to be taken and difficult issues to resolve, about half of which should be completed by the end of this academic year. Pre-Election Purdah will cut significantly into the time available.

More announcements may be delayed into the summer holidays or the following autumn term, but this reduces the planning and preparation time available to schools and has potentially significant workload implications.

Alternatively, implementation of some elements or strands may be delayed by a year, but this extends the transition period between old and new arrangements. Any such rationalisation seems likely to be delayed until after the Election and decisions will be influenced by its outcome.

.

[Postscript: The commitment in the Government’s Workload Challenge response to a one-year lead time, now encapsulated in the Protocol published on 23 March, has not resulted in any specific commitments to delay ahead of the descent of Purdah.

At the onset of Purdah on 30 March some 18 actions appear to be outstanding and requiring completion by the end of the summer term. This will be a tall order for a new Government, especially one of a different complexion.]

.

If Labour is the dominant party, they may be more inclined to simplify some strands, especially baseline assessment and statutory teacher assessment, while also providing much more intensive support for schools wrestling with the removal of levels.

Given the evidence set out above, ‘amber/red’ seems an appropriate rating for the programme as a whole.

It seems increasingly likely that some significant adjustments will be essential, regardless of the Election outcome.

.

GP

January 2015

What Happened to the Level 6 Reading Results?

 

Provisional 2014 key stage 2 results were published on 28 August.

This brief supplementary post considers the Level 6 test results – in reading, in maths and in grammar, punctuation and spelling (GPS) – and how they compare with Level 6 outcomes in 2012 and 2013.

An earlier post, A Closer Look at Level 6, published in May 2014, provides a fuller analysis of these earlier results.

Those not familiar with the 2014 L6 test materials can consult the papers, mark schemes and level thresholds at these links:

 

Number of Entries

Entry numbers for the 2014 Level 6 tests were published in the media in May 2014. Chart 1 below shows the number of entries for each test since 2012 (2013 in the case of GPS). These figures are for all schools, independent as well as state-funded.

 


Chart 1: Entry rates for Level 6 tests 2012 to 2014 – all schools

 

In 2014, reading entries were up 36%, GPS entries up 52% and maths entries up 36%. There is as yet no indication of a backlash from the decision to withdraw Level 6 tests after 2015, though this may have an impact next year.

The postscript to A Closer Look estimated that, if entries continue to increase at current rates, we might expect something approaching 120,000 in reading, 130,000 in GPS and 140,000 in maths.

Chart 2 shows the percentage of all eligible learners entered for Level 6 tests, again for all schools. Nationally, between one in six and one in five eligible learners are now entered for Level 6 tests. Entry rates for reading and maths have almost doubled since 2012.

 


Chart 2: Percentage of eligible learners entered for Level 6 tests 2012 to 2014, all schools

 

Success Rates

The headline percentages in the SFR show:

  • 0% achieving L6 reading (unchanged from 2013)
  • 4% achieving L6 GPS (up from 2% in 2013) and
  • 9% achieving L6 maths (up from 7% in 2013).

Local authority and regional percentages are also supplied.

  • Only in Richmond did the L6 pass rate in reading register above 0% (at 1%). Hence all regions are at 0%.
  • For GPS the highest percentages are 14% in Richmond, 10% in Kensington and Chelsea and Kingston, 9% in Sutton and 8% in Barnet, Harrow and Trafford. Regional rates vary between 2% in Yorkshire and Humberside and 6% in Outer London.
  • In maths, Richmond recorded 22%, Kingston 19%, Trafford, Harrow and Sutton were at 18% and Kensington and Chelsea at 17%. Regional rates range from 7% in Yorkshire and Humberside and the East Midlands to 13% in Outer London.

Further insight into the national figures can be obtained by analysing the raw numbers supplied in the SFR.

Chart 3 shows how many of those entered for each test were successful in each year. Here there is something of a surprise.

 


Chart 3: Percentage of learners entered achieving Level 6, 2012 to 2014, all schools

 

Nearly half of all entrants are now successful in L6 maths, though the improvement in the success rate has slowed markedly compared with the nine percentage point jump in 2013.

In GPS, the success rate has improved by nine percentage points between 2013 and 2014 and almost one in four entrants is now successful. Hence the GPS success rate is roughly half that for maths. This may be attributable in part to its shorter history, although the 2014 success rate is significantly below the rate for maths in 2013.

But in reading an already very low success rate has declined markedly, following a solid improvement in 2013 from a very low base in 2012. The 2014 success rate is now less than half what it was in 2012. Fewer than one in a hundred of those entered have passed this test.

Chart 4 shows how many learners were successful in the L6 reading test in 2014 compared with previous years, giving results for boys and girls separately.

 


Chart 4: Percentage of learners entered achieving Level 6 in reading, 2012 to 2014, by gender

 

The total number of successful learners in 2014 is over 5% lower than in 2012, when the reading test was introduced, and down 62% on 2013.

Girls appear to have suffered disproportionately from the decline in 2014 success rates. While the success rate for girls is down 63%, the decline for boys is slightly less, at 61%. The success rate for boys remains above where it was in 2012 but, for girls, it is about 12% down on where it was in 2012.

In 2012, only 22% of successful candidates were boys. This rose to 26% in 2013 and has again increased slightly, to 28% in 2014. The gap between girls’ and boys’ performance remains substantially bigger than those for GPS and maths.

Charts 5 and 6 give the comparable figures for GPS and maths respectively.

In GPS, the total number of successful entries has increased by almost 140% compared with 2013. Girls form a slightly lower proportion of this group than in 2013, their share falling from 62% to 60%. Boys are therefore beginning to close what remains a substantial performance gap.

 


Chart 5: Percentage of learners entered achieving Level 6 in GPS, 2012 to 2014, by gender

 

In maths, the total number of successful entries is up by about 40% on 2013 and demonstrates rapid improvement over the three year period.

Compared with 2013, the success rate for girls has increased by 43%, whereas the corresponding increase for boys is closer to 41%. Boys formed 65% of the successful cohort in 2012, 61% in 2013 and 60% in 2014, so girls’ progress in narrowing this substantial performance gap is slowing.

 


Chart 6: Percentage of learners entered achieving Level 6 in maths, 2012 to 2014, by gender

 

Progress

The SFR also provides a table, this time for state-funded schools only, showing the KS1 outcomes of those successful in achieving Level 6. (For maths and reading, this data includes those with a non-numerical grade in the test who have been awarded L6 via teacher assessment. The data for writing is derived solely from teacher assessment.)

Not surprisingly, over 94% of those achieving Level 6 in reading had achieved Level 3 in KS1, but 4.8% were at L2A and a single learner was recorded at Level 1. The proportion with KS1 Level 3 in 2013 was higher, at almost 96%.

In maths, however, only some 78% of those achieving Level 6 were at Level 3 in KS1. A further 18% were at 2A and almost 3% were at 2B. A further 165 learners were recorded as 2C or 1. In 2013, over 82% had KS1 L3 while almost 15% had 2A.

It seems, therefore, that KS1 performance was a slightly weaker indicator of KS2 level 6 success in 2014 than in the previous year, but this trend was apparent in both reading and maths – and KS1 performance remains a significantly weaker indicator in maths than it is in reading.

 

Why did the L6 reading results decline so drastically?

Given that the number of entries for the Level 6 reading test increased dramatically, the declining pass rate suggests either a problematic test or that schools entered a higher proportion of learners who had relatively little chance of success. A third possibility is that the test was deliberately made more difficult.

The level threshold for the 2014 Level 6 reading test was 24 marks, compared with 22 marks in 2013, but there are supposed to be sophisticated procedures in place to ensure that standards are maintained. We should be able to discount the third cause.

The second cause is also unlikely to be significant, since schools are strongly advised only to enter learners who are already demonstrating attainment beyond KS2 Level 5. There is no benefit to learners or schools from entering pupils for tests that they are almost certain to fail.

The existing pass rate was very low, but it was on an upward trajectory. Increasing familiarity with the test ought to have improved schools’ capacity to enter the right learners and to prepare them to pass it.

That leaves only the first possibility – something must have been wrong with the test.

Press coverage from May 2014, immediately after the test was administered, explained that it contained different rules for learners and invigilators about the length of time available for answering questions.

The paper gave learners one hour for completion, while invigilators were told pupils had 10 minutes’ reading time followed by 50 minutes in which to answer the questions. Schools interpreted this contradiction differently and several reported disruption to the examination as a consequence.

The NAHT was reported to have written to the Standards and Testing Agency:

‘…asking for a swift review into this error and to seek assurance that no child will be disadvantaged after having possibly been given incorrect advice on how to manage their time and answers’.

The STA statement says:

‘We apologise for this error. All children had the same amount of time to complete the test and were able to consult the reading booklet at any time. We expect it will have taken pupils around 10 minutes to read the booklet, so this discrepancy should not have led to any significant advantage for those pupils where reading time was not correctly allotted.’

NAHT has now posted the reply it received from STA on 16 May. It says:

‘Ofqual, our regulator, is aware of the error and of the information set out below and will, of course, have to independently assure itself that the test remains valid. We would not expect this to occur until marking and level setting processes are complete, in line with their normal timescales.’

It then sets out the reasons why it believes the test remains valid. These suggest the advantage to the learners following the incorrect instructions was minimal since:

  • few would need less than 10 minutes’ reading time;
  • pre-testing showed 90% of learners completed the test within 50 minutes;
  • in 2013 only 3.5% of learners were within 1 or 2 marks of the threshold;
  • a comparative study that changed the timing of the Levels 3-5 test made little difference to item difficulty.

NAHT says it will now review the test results in the light of this response.

 

 

Who is responsible?

According to its most recent business plan, STA:

‘is responsible for setting and maintaining test standards’ (p3)

but it publishes little or nothing about the process involved, or how it handles representations such as that from NAHT.

Meanwhile, Ofqual says its role is:

‘to make sure the assessments are valid and fit for purpose, that the assessments are fair and manageable, that the standards are properly set and maintained and the results are used appropriately.

We have two specific objectives as set out by law:

  • to promote assessment arrangements which are valid, reliable and comparable
  • to promote public confidence in the arrangements.

We keep national assessments under review at all times. If we think at any point there might be a significant problem with the system, then we notify the Secretary of State for Education.’

Ofqual’s Chair has confirmed via Twitter that Ofqual was:

‘made aware at the time, considered the issues and observed level setting’.

Ofqual was content that the level-setting was properly undertaken.

 

 

I asked whether, in the light of that, Ofqual saw a role for itself in investigating the atypical results. I envisaged that this might take place under the Regulatory Framework for National Curriculum Assessments (2011).

This commits Ofqual to publishing annually its ‘programme for reviewing National Assessment arrangements’ (p14) as well as ‘an annual report on the outcomes of the review programme’ (p18).

However the most recent of these relates to 2011/12 and appeared in November of that year.

 

 

I infer from this that we may see some reaction from Ofqual, if and when it finally produces an annual report on National Curriculum Assessments in 2014, but that’s not going to appear before 2015 at the earliest.

I can’t help but feel that this is not quite satisfactory – that atypical test performance of this magnitude ought to trigger an automatic and transparent review, even if the overall number of learners affected is comparatively small.

If I were part of the system I would want to understand promptly exactly what happened, for fear that it might happen again.

If you are in any doubt quite how out of kilter the reading test outcomes were, consider the parallel results for Level 6 teacher assessment.

In 2013, 5,698 learners were assessed at Level 6 in reading through teacher assessment – almost exactly two-and-a-half times as many as achieved Level 6 in the test.

In 2014, a whopping 17,582 learners were assessed at Level 6 through teacher assessment, around 20 times as many as secured a Level 6 in the reading test.

If the ratio between test and teacher assessment results in 2014 had been the same as in 2013, the number successful on the test would have been over 7,000 (17,582 ÷ 2.5 ≈ 7,030) – more than eight times the reported 851.

I rest my case.

 

The new regime

In February 2013, a DfE-commissioned report Investigation of Key Stage 2 Level 6 Tests recommended that:

‘There is a need to review whether the L6 test in Reading is the most appropriate test to use to discriminate between the highest ability pupils and others given:

a) that only around 0.3 per cent of the pupils that achieved at least a level 5 went on to achieve a level 6 in Reading compared to 9 per cent for Mathematics

b) there was a particular lack of guidance and school expertise in this area

c) pupil maturity was seen to be an issue

d) the cost of supporting and administering a test for such a small proportion of the school population appears to outweigh the benefits.’

This has been overtaken by the decision to withdraw all three Level 6 tests and to rely on single tests of reading, GPS and maths for all learners when the new assessment regime is introduced from 2016.

Draft test frameworks were published in March 2014, supplemented in July by sample questions, mark schemes and commentary.

Given the imminent introduction of this new regime, together with schools’ experience in 2014, it seems increasingly unlikely that 2015 Level 6 test entries in reading will approach the 120,000 figure suggested by the trend.

Perhaps more importantly, schools and assessment experts alike seem remarkably sanguine about the prospect of single tests for pupils demonstrating the full range of prior attainment, apart from those assessed via the P-Scales. (The draft test frameworks are worryingly vague about whether those operating at the equivalent of Levels 1 and 2 will be included.)

I could wish to be equally sanguine, on behalf of all those learners capable of achieving at least the equivalent of Level 6 after 2015. But, as things stand, the evidence to support that position is seemingly non-existent.

In October 2013, Ofqual commented that:

‘There are also some significant technical challenges in designing assessments which can discriminate effectively and consistently across the attainment range so they can be reported at this level of precision.’

A year on, we still have no inkling whether those challenges have been overcome.

 

GP

September 2014

 

 

 

 

Unpacking the Primary Assessment and Accountability Reforms

This post examines the Government response to consultation on primary assessment and accountability.

It sets out exactly what is planned, what further steps will be necessary to make these plans viable and the implementation timetable.

It is part of a sequence of posts I have devoted to this topic, most recently:

Earlier posts in the series include The Removal of National Curriculum Levels and the Implications for Able Pupils’ Progression (June 2012) and Whither National Curriculum Assessment Without Levels? (February 2013).

The consultation response contrives to be both minimal and dense. It is necessary to unpick each element carefully, to consider its implications for the package as a whole and to reflect on how that package fits in the context of wider education reform.

I have organised the post so that it considers sequentially:

  • The case for change, including the aims and core principles, to establish the policy frame for the planned reforms.
  • The impact on the assessment experience of children aged 2-11 and how that is likely to change.
  • The introduction of baseline assessment in Year R.
  • The future shape of end of KS1 and end of KS2 assessment respectively.
  • How the new assessment outcomes will be derived, reported and published.
  • The impact on floor standards.

Towards the end of the post I have also provided a composite ‘to do’ list containing all the declared further steps necessary to make the plan viable, with a suggested deadline for each.

And the post concludes with an overall judgement on the plans, in the form of a summary of key issues and unanswered questions arising from the earlier commentary. Impatient readers may wish to jump straight to that section.

I am indebted to Warwick Mansell for his previous post on this topic. I shall try hard not to parrot the important points he has already made, though there is inevitably some overlap.

Readers should also look to Michael Tidd for more information about the shape and content of the new tests.

What has been published?

The original consultation document ‘Primary assessment and accountability under the new national curriculum’ was published on 17 July 2013 with a deadline for response of 17 October 2013. At that stage the Government’s response was due ‘in autumn 2013’.

The response was finally published on 27 March, some four months later than planned and only five months prior to the introduction of the revised national curriculum which these arrangements are designed to support.

It is likely that the Government will have decided that 31 March was the latest feasible date to issue the response, so they were right up against the wire.

It was accompanied by:

  • A press release which focused on the full range of assessment reforms – for primary, secondary and post-16.

Shortly before the response was published, the reply to a Parliamentary question asked on 17 March explained that test frameworks were expected to be included within it:

‘Guidance on the nature of the revised key stage 1 and key stage 2 tests, including mathematics, will be published by the Standards and Testing Agency in the form of test framework documents. The frameworks are due to be released as part of the Government’s response to the primary assessment and accountability consultation. In addition, some example test questions will be made available to schools this summer and a full sample test will be made available in the summer of 2015.’ (Col 383W)

.

In the event, these documents – seven in all – did not appear until 31 March and there was no reference to any of the three commitments above in what appeared on 27 March.

Finally, the Standards and Testing Agency published on 3 April a guidance page on national curriculum tests from 2016. At present it contains very little information but further material will be added as and when it is published.

Partly because the initial consultation document was extremely ‘drafty’, the reaction of many key external respondents to the consultation was largely negative. One imagines that much of the period since 17 October has been devoted to finding the common ground.

Policy makers will have had to do most of their work after the consultation document was issued, because they were not ready beforehand.

But the length of the delay in issuing the response would suggest that they also encountered significant dissent amongst internal stakeholders – and that the eventual outcome is likely to be a compromise of sorts between these competing interests.

Such compromises tend to have observable weaknesses and/or put off problematic issues for another day.

A brief summary of consultation responses is included within the Government’s response. I will refer to this at relevant points during the discussion below.

 .

The Case for Change

 .

Aims

The consultation response begins – as did the original consultation document – with a section setting out the case for reform.

It provides a framework of aims and principles intended to underpin the changes that are being set in place.

The aims are:

  • The most important outcome of primary education is to ‘give as many pupils as possible the knowledge and skills to flourish in the later phases of education’. This is a broader restatement of the ‘secondary ready’ concept adopted in the original consultation document.
  • The primary national curriculum and accountability reforms ‘set high expectations so that all children can reach their potential and are well prepared for secondary school’. Here the ‘secondary ready’ hurdle is more baldly stated. The parallel notion is that all children should do as well as they can – and that they may well achieve different levels of performance. (‘Reach their potential’ is disliked by some because it is considered to imply a fixed ceiling for each child and fixed mindset thinking.)
  • To raise current threshold expectations. These are set too low, since too few learners (47%) with KS2 level 4C in both English and maths go on to achieve five or more GCSE grades A*-C including English and maths, while 72% of those with KS2 level 4B do so. So the new KS2 bar will be set at this higher level, but with the expectation that 85% of learners per school will jump it, 13 percentage points more than the current national figure. Meanwhile the KS4 outcome will also change, to achievement across eight GCSEs rather than five, quite probably at a more demanding level than the present C grade. In the true sense, this is a moving target.
  • ‘No child should be allowed to fall behind’. This is a reference to the notion of ‘mastery’ in its crudest sense, though the model proposed will not deliver this outcome. We have noted already a reference to ‘as many children as possible’ and the school-level target – initially at least – will be set at 85%. In reality, a significant minority of learners will progress more slowly and will fall short of the threshold at the end of KS2.
  • The new system ‘will set a higher bar’ but ‘almost all pupils should leave primary school well-placed to succeed in the next phase of their education’. Another nuanced version of ‘secondary ready’ is introduced. This marks a recognition that some learners will not jump over the higher bar. In the light of subsequent references to 85%, ‘almost all’ is rather over-optimistic.
  • ‘We also want to celebrate the progress that pupils make in schools with more challenging intakes’. Getting ‘nearly all pupils to meet this standard…’ (the standard of secondary readiness?) ‘…is very demanding, at least in the short term’. There will therefore be recognition of progress ‘from a low starting point’ – even though these learners have, by definition, been allowed to fall behind and will continue to do so.

So there is something of a muddle here, no doubt engendered by a spirit of compromise.

The black and white distinction of ‘secondary-readiness’ has been replaced by various verbal approximations, but the bottom line is that there will be a defined threshold denoting preparedness that is pitched higher than the current threshold.

And the proportion likely to fall short is downplayed – there is apparent unwillingness at this stage to acknowledge the norm that up to 15% of learners in each school will undershoot the threshold – substantially more in schools with ‘challenging intakes’.

What this boils down to is a desire that all will achieve the new higher hurdle – and that all will be encouraged to exceed it if they can – tempered by recognition that this is presently impossible. No child should be allowed to fall behind but many inevitably will do so.

It might have been better to express these aims in the form of future aspirations – and our collective efforts to bridge the gap between present reality and those ambitious aspirations.

Principles

The section concludes with a new set of principles governing pedagogy, assessment and accountability:

  • ‘Ongoing, teacher-led assessment is a crucial part of effective teaching;
  • Schools should have the freedom to decide how to teach their curriculum and how to track the progress that pupils make;
  • Both summative teacher assessment and external testing are important;
  • Accountability is key to a successful school system, and therefore must be fair and transparent;
  • Measures of both progress and attainment are important for understanding school performance; and
  • A broad range of information should be published to help parents and the wider public know how well schools are performing.’

These are generic ‘motherhood and apple pie’ statements and so largely uncontroversial. I might have added a seventh – that schools’ in-house assessment and reporting systems must complement summative assessment and testing, including by predicting for parents the anticipated outcomes of the latter.

Perhaps interestingly, there is no repetition of the defence for the removal of national curriculum levels. Instead, the response concentrates on the support available to schools.

It mentions discussion with an ‘expert group on assessment’ about ‘how to support schools to make best use of the new assessment freedoms’. We are not told the membership of this group (which, as far as I know, has not been made public) or the nature of its remit.

There is also a link to information about the Assessment Innovation Fund, which will provide up to 10 grants of up to £10,000 which schools and organisations can use to develop packages that share their innovative practice with others.

 

Children’s experience of assessment up to the end of KS2

The response mentions the full range of national assessments that will impact on children between the ages of two and 11:

  • The statutory progress check at two years of age.
  • A new baseline assessment undertaken within a few weeks of the start of Year R, introduced from September 2015.
  • An Early Years Foundation Stage Profile undertaken in the final term of the year in which children reach the age of five. A revised profile was introduced from September 2012. It is currently compulsory but will be optional from September 2016. The original consultation document said that the profile would no longer be moderated and data would no longer be collected. Neither of those commitments is repeated here.
  • The Phonics Screening Check, normally undertaken in Year 1. The possibility of making these assessments non-statutory for all-through primary schools, suggested in the consultation document, has not been pursued: 53% of respondents opposed this idea, whereas 32% supported it.
  • End of KS1 assessment and
  • End of KS2 assessment.

So a total of six assessments are in place between the ages of two and 11. At least four – and possibly five – will be undertaken between ages two and seven.

It is likely that early years professionals will baulk at this amount of assessment, no matter how sensitively it is designed. But the cost and inefficiency of the model are also open to criticism.

The Reception Baseline

Approach

The original consultation document asked whether:

  • KS1 assessment should be retained as a baseline – 45% supported this and 41% were opposed.
  • A baseline check should be introduced at the start of Reception – 51% supported this and 34% were opposed.
  • Such a baseline check should be optional – 68% agreed and 19% disagreed.
  • Schools should be allowed to choose from a range of commercially available materials for this baseline check – 73% said no and only 15% said yes.

So, whereas views were mixed on where the baseline should be set, there were substantial majorities in favour of any Year R baseline check being optional and following a single, standard national format.

The response argues that Year R is the most sensible point at which to position the baseline since that is:

‘…the earliest point that nearly all children are in school’.

What happens in respect of children who are not in school at this point is not discussed.

There is no explanation of why the Government has disregarded the clear majority of respondents by choosing to permit a range of assessment approaches, so this decision must be ideologically motivated.

The response says ‘most’ are likely to be administered by teaching staff, leaving open the possibility that some options will be administered externally.

Design

Such assessments will need to be:

‘…strong predictors of key stage 1 and key stage 2 attainment, whilst reflecting the age and abilities of children in Reception’.

Presumably this means predictors of attainment in each of the three core subjects – English, maths and science – rather than any broader notion of attainment. The challenge inherent in securing a reasonable predictor of attainment across these domains seven years further on in a child’s development should not be under-estimated.
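By way of illustration only, here is a minimal Python sketch of the kind of predictive-validity check a baseline developer might run – correlating Reception baseline scores with the same cohort’s KS2 scaled scores seven years later. All data are invented and no numerical threshold for a ‘strong predictor’ is implied, since the response sets none.

```python
from math import sqrt

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length sequences."""
    n = len(xs)
    mean_x, mean_y = sum(xs) / n, sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    sd_x = sqrt(sum((x - mean_x) ** 2 for x in xs))
    sd_y = sqrt(sum((y - mean_y) ** 2 for y in ys))
    return cov / (sd_x * sd_y)

# Invented data: one cohort's Reception baseline scores and the same
# pupils' KS2 reading scaled scores seven years later.
baseline_scores = [12, 18, 25, 31, 14, 22, 28, 35, 16, 20]
ks2_scaled_scores = [92, 97, 103, 110, 95, 100, 106, 115, 96, 99]

r = pearson(baseline_scores, ks2_scaled_scores)
print(f"Predictive validity (Pearson r): {r:.2f}")
```

Even a respectable correlation at cohort level would say little about the reliability of an individual child’s predicted trajectory.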

The response points out that such assessment tools are already available for use in Year R, some are used widely and some schools have long experience of using them. But there is no information about how many of these are deemed already to meet the description above.

In any case, new criteria need to be devised which all such assessments must meet. Some degree of modification will be necessary for all existing products and new products will be launched to compete in the market.

There is an opportunity to use this process to ratchet up the Year R Baseline beyond current expectations, so matching the corresponding process at the end of KS2. The consultation response says nothing about whether this is on the cards.

Interestingly, in his subsequent ‘Unsure start’ speech about early years inspection, HMCI refers to:

‘…the government’s announcement last week that they will be introducing a readiness-for-school test at age four. This is an ideal opportunity to improve accountability. But I think it should go further.

I hope that the published outcomes of these tests will be detailed enough to show parents how their own child has performed. I fear that an overall school grade will fail to illuminate the progress of poor children. I ask government to think again about this issue.’

The terminology – ‘readiness for school’ – is markedly blunter than the references to a reception baseline in the consultation response. There is nothing in the response about the outcomes of these tests being published, nor anything about ‘an overall school grade’.

Does this suggest that decisions have already been made that were not communicated in the consultation response?

.

Timeline, options, questions

Several pieces of further work are required in short order to inform schools and providers about what will be required – and to enable both to prepare for introduction of the assessments from September 2015. All these should feature in the ‘to do’ list below.

One might reasonably have hoped that – especially given the long delay – some attempt might have been made to publish suggested draft criteria for the baseline alongside the consultation response. The fact that even preliminary research into existing practice has not been undertaken is a cause for concern.

Although the baseline will be introduced from September 2015, there is a one-year interim measure which can only apply to all-through primary schools:

  • They can opt out of the Year R baseline measure entirely, relying instead on KS1 outcomes as their baseline; or
  • They can use an approved Year R baseline assessment and have this cohort’s progress measured at the end of KS2 (which will be in 2022) by either the Year R or the KS1 baseline, whichever demonstrates the most progress.

In the period up to and including 2021, progress will continue to be measured from the end of KS1. So learners who complete KS2 in 2021 for example will be assessed on progress since their KS1 tests in 2017.

Junior and middle schools will also continue to use a KS1 baseline.

Arrangements for infant and first schools are still to be determined, another rather worrying omission at this stage in proceedings.

It is also clear that all-through primary schools (and infant/first schools?) will continue to be able to opt out from the Year R baseline from September 2016 onwards, since the response says:

‘Schools that choose not to use an approved baseline assessment from 2016 will be judged on an attainment floor standard alone’.

Hence the Year R baseline check is entirely optional and a majority of schools could choose not to undertake it.

However, they would need to be confident of meeting the demanding 85% attainment threshold in the floor standard.

They might be wise to postpone that decision until the pitch of the progress expectation is determined. For neither the Year R baseline nor the amount of progress that learners are expected to make from their starting point in Year R is yet defined.

This latter point applies at the average school level (for the purposes of the floor standard) and in respect of the individual learner. For example, if a four year-old is particularly precocious in, say, maths, what scaled scores must they register seven years later to be judged to have made sufficient progress?

There are several associated questions that follow on from this.

Will it be in schools’ interests to acknowledge that they have precocious four year-olds at all? Will the Year R baseline reinforce the tendency to use Reception to bring all children to the same starting point in readiness for Year 1, regardless of their precocity?

Will the moderation arrangements be hard-edged enough to stop all-through primary schools gaming the system by artificially depressing their baseline outcomes?

Who will undertake this moderation and how much will it cost? Will not the decision to permit schools to choose from a range of measures unnecessarily complicate the moderation process and add to the expense?

The consultation response neither poses these questions nor supplies answers.

The future shape of end KS1 and end KS2 assessment

.

What assessment will take place?

At KS1 learners will be assessed in:

  • Reading – test plus teacher assessment
  • Writing – test (of grammar, punctuation and spelling) plus teacher assessment
  • Speaking and listening – teacher assessment
  • Maths – test plus teacher assessment
  • Science – teacher assessment

The new test of grammar, punctuation and spelling did not feature in the original consultation and has presumably been introduced to strengthen the marker of progress to which four year-olds should aspire at age seven.

The draft test specifications for the KS1 tests in reading, GPS and maths outline the requirements placed on the test developers, so it is straightforward to compare the specifications for reading and maths with the current tests.

The GPS test will include a 20-minute written grammar and punctuation task; a 20-minute test comprising short grammar, punctuation and vocabulary questions; and a 15-minute spelling task.

There is a passing reference to further work on KS1 moderation which is included in the ‘to do’ list below.

At KS2 learners will be assessed in

  • Reading – test plus teacher assessment
  • Writing – test (of grammar, spelling and punctuation) plus teacher assessment
  • Maths – test plus teacher assessment
  • Science – teacher assessment plus a science sampling test.

Once again, the draft test specifications – reading, GPS, maths and science sampling – describe the shape of each test and the content they are expected to assess.

I will leave it to experts to comment on the content of the tests.

 .

Academies and free schools

It is important to note that the framing of this content – by means of detailed ‘performance descriptors’ – means that the freedom academies and free schools enjoy in departing from the national curriculum will be largely illusory.

I raised this issue back in February 2013:

  • ‘We know that there will be a new grading system in the core subjects at the end of KS2. If this were to be based on the ATs as drafted, it could only reflect whether or not learners can demonstrate that they know, can apply and understand ‘the matters, skills and processes specified’ in the PoS as a whole. Since there is no provision for ATs that reflect sub-elements of the PoS – such as reading, writing, spelling – grades will have to be awarded on the basis of separate syllabuses for end of KS2 tests associated with these sub-elements.
  • This grading system must anyway be applied universally if it is to inform the publication of performance tables. Since some schools are exempt from National Curriculum requirements, it follows that grading cannot be derived directly from the ATs and/or the PoS, but must be independent of them. So this once more points to end of KS2 tests based on entirely separate syllabuses which nevertheless reflect the relevant part of the draft PoS. The KS2 arrangements are therefore very similar to those planned at KS4.’

I have more to say about the ‘performance descriptors’ below.

 .

Single tests for all learners

A critical point I want to emphasise at this juncture – not mentioned at all in the consultation document or the response – is the test development challenge inherent in producing single papers suitable for all learners, regardless of their attainment.

We know from the response that the P-scales will be retained for those who are unable to access the end of key stage tests. (Incidentally, the content of the P-scales will remain unchanged so they will not be aligned with the revised national curriculum, as suggested in the consultation document.)

There will also be provision for pupils who are working ‘above the P-scales but below the level of the test’.

Now the P-scales are for learners working below level 1 (in old currency). This is the first indication I have seen that the tests may not cater for the full range from Level 1-equivalent to Level 6-equivalent and above. But no further information is provided.

It may be that this is a reference to learners who are working towards level 1 (in old currency) but do not have SEN.

The 2014 KS2 ARA booklet notes:

‘Children working towards level 1 of the national curriculum who do not have a special educational need should be reported to STA as ‘W’ (Working below the level). This includes children who are working towards level 1 solely because they have English as an additional language. Schools should use the code ‘NOTSEN’ to explain why a child working towards level 1 does not have P scales reported. ‘NOTSEN’ replaces the code ‘EAL’ that was used in previous years.’

The consultation document said:

‘We do not propose to develop an equivalent to the current level 6 tests, which are used to challenge the highest-attaining pupils. Key stage 2 national curriculum tests will include challenging material (at least of the standard of the current level 6 test) which all pupils will have the opportunity to answer, without the need for a separate test’.

The draft test specifications make it clear that the tests should:

‘provide a suitable challenge for all children and give every child the opportunity to achieve as high a standard…as possible.’

Moreover:

‘In order to improve general accessibility for all children, where possible, questions will be placed in order of difficulty.’

The development of single tests covering this span of attainment – from level 1 to above level 6 – tests in which the questions are posed in order of difficulty and even the highest attainers must answer all questions – seems to me a very tall order, especially in maths.

More than that, I urgently need persuading that this is not a waste of high attainers’ time and poor assessment practice.

 .

How assessment outcomes will be derived, reported and published

Deriving assessment outcomes

One of the reasons cited for replacing national curriculum levels was the complexity of the system and the difficulty parents experienced in understanding it.

The Ministerial response to the original report from the National Curriculum Expert Panel said:

‘As you rightly identified, the current system is confusing for parents and restrictive for teachers. I agree with your recommendation that there should be a direct relationship between what children are taught and what is assessed. We will therefore describe subject content in a way which makes clear both what should be taught and what pupils should know and be able to do as a result.’

The consultation document glossed the same point thus:

‘Schools will be able to focus their teaching, assessment and reporting not on a set of opaque level descriptions, but on the essential knowledge that all pupils should learn.’

However, the consultation response introduces for the first time the concept of a ‘performance descriptor’.

This term is defined in the glossaries at the end of each draft test specification:

‘Description of the typical characteristics of children working at a particular standard. For these tests, the performance descriptor will characterise the minimum performance required to be working at the appropriate standard for the end of the key stage.’

Essentially this is a collective term for something very similar to old-style level descriptions.

Except that, in the case of the tests, they are all describing the same level of performance.

They have been rendered necessary by the odd decision to provide only a single generic attainment target for each programme of study. But, as noted back in February 2013, the test developers need a more sophisticated framework on which to base their assessments.

According to the draft test specifications, they will also be used:

‘By a panel of teachers to set the standards on the new tests following their first administration in May 2016’.

When it comes to teacher assessment, the consultation response says:

‘New performance descriptors will be introduced to inform the statutory teacher assessments at the end of key stage one [and]…key stage two.’

But there are two models in play simultaneously.

In four cases – science at KS1 and reading, maths and science at KS2 – there will be ‘a single performance descriptor of the new expected standard’, in the same way as there are in the test specifications.

But in five cases – reading, writing, speaking and listening and maths at KS1; and writing at KS2:

‘teachers will assess pupils as meeting one of several performance descriptors’.

These are old-style level descriptors by another name. They perform exactly the same function.

The response says that the KS1 teacher assessment performance descriptors will be drafted by an expert group for introduction in autumn 2014. It does not mention whether KS2 teacher assessment performance descriptors will be devised in the same way and to the same timetable.

 .

Reporting assessment outcomes to parents

When it comes to reporting to parents, there will be three different arrangements in play at both KS1 and KS2:

  • Test results will be reported by means of scaled scores (of which more in a moment).
  • One set of teacher assessments will be reported by selecting from a set of differentiated performance descriptors.
  • A second set of teacher assessments will be reported according to whether learners have achieved a single threshold performance descriptor.

This is already significantly more complex than the previous system, which applied the same framework of national curriculum levels across the piece.

It seems that KS1 test outcomes will be reported as straightforward scaled scores (though this is only mentioned on page 8 of the main text of the response and not in Annex B, which compares the new arrangements with those currently in place).

But, in the case of KS2:

‘Parents will be provided with their child’s score alongside the average for their school, the local area and nationally. In the light of the consultation responses, we will not give parents a decile ranking for their child due to concerns about whether decile rankings are meaningful and their reliability at individual pupil level.’

The consultation document proposed a tripartite reporting system comprising:

  • A scaled score for each KS2 test, derived from raw test marks and built around a ‘secondary readiness standard’. This standard would be set at a scaled score of 100, which would remain unchanged. It was suggested for illustrative purposes that a scale based on the current national curriculum tests might run from 80 to 130.
  • An average scaled score in each test for other pupils nationally with the same prior attainment at the baseline. Comparison of a learner’s scaled score with the average scaled score would show whether they had made more or less progress than the national average.
  • A national ranking in each test – expressed in terms of deciles – showing how a learner’s scaled score compared with the range of performance nationally.

The latter has been dispensed with, given that 35% of consultation respondents disagreed with it, but there were clearly technical reservations too.

In its place, the ‘value added’ progress measure has been expanded so that there is a comparison with other pupils in the learner’s own school and the ‘local area’ (which presumably means local authority). This beefs up the progression element in reporting at the expense of information about the attainment level achieved.

So at the end of KS2 parents will receive scaled scores and three average scaled scores for each of reading, writing and maths – twelve scores in all – plus four performance descriptors, of which three will be singleton threshold descriptors (reading, maths and science) and one will be selected from a differentiated series (writing). That makes sixteen assessment outcomes altogether, provided in four different formats.
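To make that tally concrete, here is a hypothetical end-of-KS2 report for a single child, sketched as a plain Python data structure. Every value and descriptor label is invented – the response specifies none of them – and the ‘writing’ test score below stands for the GPS test, per the assessment list above.

```python
# Hypothetical end-of-KS2 report for one child. All values invented;
# the 'writing' test score is the GPS test, per the list above.
report = {
    # Format 1: one scaled score per test (3 scores).
    "scaled_scores": {"reading": 104, "writing": 99, "maths": 108},
    # Format 2: school, local-area and national average scaled scores
    # per test -- the expanded 'value added' comparison (9 scores).
    "average_scaled_scores": {
        "reading": {"school": 101, "local_area": 100, "national": 100},
        "writing": {"school": 100, "local_area": 99, "national": 100},
        "maths": {"school": 103, "local_area": 101, "national": 100},
    },
    # Format 3: singleton threshold performance descriptors (3).
    "threshold_descriptors": {
        "reading": "expected standard met",
        "maths": "expected standard met",
        "science": "expected standard met",
    },
    # Format 4: one descriptor from a differentiated series (1).
    "differentiated_descriptors": {"writing": "meets the expected standard"},
}
# 3 + 9 + 3 + 1 = 16 assessment outcomes in four formats.
```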

The consultation response tells us nothing more about the range of the scale that will be used to provide scaled scores. We do not even know if it will be the same for each test.

The draft test specifications say that:

‘The exact scale for the scaled scores will be determined following further analysis of trialling data. This will include a full review of the reporting of confidence intervals for scaled scores.’

But they also contain this worrying statement:

‘The provision of a scaled score will aid in the interpretation of children’s performance over time as the scaled score which represents the expected standard will be the same year on year. However, at the extremes of the scaled score distribution, as is standard practice, the scores will be truncated such that above and below a certain point, all children will be awarded the same scaled score in order to minimise the effect for children at the ends of the distribution where the test is not measuring optimally.’

This appears to suggest that scaled scores will not accurately describe performance at the extremes of the distribution, because the tests will not accurately measure such performance. This might be describing a statistical truism, but it again raises the question of whether the highest attainers are being short-changed by the selected approach.
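A minimal sketch of what truncation means in practice, assuming the illustrative 80–130 scale floated in the consultation document (the real bounds are not yet decided):

```python
# Illustrative truncation ('clamping') of scaled scores, assuming the
# 80-130 scale used as an example in the consultation document.
SCALE_MIN, SCALE_MAX = 80, 130

def reported_score(scaled_score: int) -> int:
    """Clamp a scaled score to the published bounds of the scale."""
    return max(SCALE_MIN, min(SCALE_MAX, scaled_score))

print(reported_score(112))  # 112 -- reported as measured
print(reported_score(138))  # 130 -- two very different performances...
print(reported_score(145))  # 130 -- ...reported identically
```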

.

Publication of assessment outcomes

The response introduces the idea that ‘a suite of indicators’ will be published on each school’s own website in a standard format. These are:

  • The average progress made by pupils in reading, writing and maths. (This is presumably relevant to both KS1 and KS2 and to both tests and teacher assessment.)
  • The percentage of pupils reaching the expected standard in reading, writing and mathematics at the end of key stage 2. (This is presumably relevant to both tests and teacher assessment.)
  • The average score of pupils in their end of key stage 2 assessments. (The final word suggests teacher assessment as well as tests, even though there will not be a score from the former.)
  • The percentage of pupils who achieve a high score in all areas at the end of key stage 2. (Does ‘all areas’ imply something more than statutory tests and teacher assessments? Does it mean treating each area separately, or providing details only of those who have achieved high scores across all areas?)

The latter is the only reference to high attainers in the entire response. It does not give any indication of what will count as a high score for these purposes. Will it be designed to catch the top third of attainers or something more demanding, perhaps equivalent to the top decile?

A decision has been taken not to report the outcomes of assessment against the P-scales because the need to contextualise such information is perceived to be relatively greater.

And, as noted above, HMCI let slip the fact that the outcomes of reception baselines would also be published, but apparently in the form of a single overall grade.

We are not told when these requirements will be introduced, but presumably they must be in place to report the outcomes of assessments undertaken in spring 2016.

Additionally:

‘So that parents can make comparisons between schools, we would like to show each school’s position in the country on these measures and present these results in a manner that is clear for all audiences to understand. We will discuss how best to do so with stakeholders, to ensure that the presentation of the data is clear, fair and statistically robust.’

This suggests inclusion in the 2016 School Performance Tables, but this is not stated explicitly.

Indeed, apart from references to the publication of progress measures in the 2022 Performance Tables, there is no explicit coverage of their contribution in the response, nor any reference to the planned supporting data portal, or how data will be distributed between the Tables and the portal.

The original consultation document gave several commitments on the future content of performance tables. They included:

  • How many of a school’s pupils are amongst the highest attaining nationally, by showing the percentage of pupils achieving a high scaled score in each subject.
  • Measures to show the attainment and progress of learners attracting the Pupil Premium.
  • Comparison of each school’s performance with that of schools with similar intakes.

None are mentioned here, nor are any of the suggestions advanced by respondents taken up.

Floor standards

Changes are proposed to the floor standards with effect from September 2016.

This section of the response begins by committing to:

‘…a new floor standard that holds schools to account both on the progress that they make and on how well their pupils achieve.’

But the plans set out subsequently do not meet this description.

The progress element of the current floor standard relates to any of reading, writing or mathematics but, under the new floor standard, it will relate to all three of these together.

An all-through primary school must demonstrate that:

‘…pupils make sufficient progress at key stage 2 from their starting point…’

As we have noted above, all-through primaries can opt to use the KS1 baseline or the Year R baseline in 2015. Moreover, from 2016 they can choose not to use the Year R baseline and be assessed solely on the attainment measure in the floor standards (see below).

Junior and middle schools obviously apply the KS1 baseline, while arrangements for infant and first schools have yet to be finalised.

What constitutes ‘sufficient progress’ is not defined. Annex C of the response says:

‘For 2016 we will set the precise extent of progress required once key stage 2 tests have been sat for the first time.’

Presumably this will be progress from KS1 to KS2, since progress from the Year R baseline will not be introduced until 2023.

The attainment element of the new floor standards is for schools to have 85% or more of pupils meeting the new, higher threshold standard at the end of KS2 in all of reading, writing and maths. The text says explicitly that this threshold is ‘similar to a level 4b under the current system’.

Annex C clarifies that this will be judged by the achievement of a scaled score of 100 or more in each of the reading and maths tests, plus teacher assessment that learners have reached the expected standard in writing (so the GPS test does not count in the same way, simply informing the teacher assessment).

As noted above, this is a far bigger ask than the current reference to 65% of learners meeting the expected (and lower 4c) standard. The summary at the beginning of the response refers to it as ‘a challenging aspiration’:

‘Over time we expect more and more schools to achieve this standard.’

The statement in the first paragraph of this section of the response led us to believe that these two requirements – for progress and attainment respectively – would be combined, so that schools would be held to account for both (unless, presumably, they exercised their right to opt out of the Year R baseline assessment).

But this is not the case. Schools need only achieve one or the other.
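Expressed as code, the floor standard the response actually describes reduces to a simple either/or test. This is a sketch under stated assumptions: ‘sufficient progress’ is left as a bare flag because the response does not define it.

```python
def meets_floor_standard(pct_meeting_expected, made_sufficient_progress):
    """Sketch of the new floor standard as the response describes it.

    pct_meeting_expected: percentage of pupils reaching the expected
    standard in ALL of reading, writing and maths (scaled score of 100+
    in the reading and maths tests, plus teacher assessment in writing).
    made_sufficient_progress: whether the school made 'sufficient
    progress' in all three subjects -- undefined in the response, so
    treated here as a bare flag; False also covers schools that opt
    out of the Year R baseline and rely on attainment alone.
    """
    attainment_ok = pct_meeting_expected >= 85.0
    # Schools need only satisfy one element, not both.
    return attainment_ok or made_sufficient_progress

# A school at 91% attainment clears the floor regardless of progress:
print(meets_floor_standard(91.0, False))  # True
```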

It follows that schools with a very high performing intake may exceed the floor standards on the basis of all-round high attainment alone, regardless of the progress made by their learners.

The reason for this provision is unclear, though one suspects that schools with an extremely high attaining intake, whether at Reception or Year 3, will be harder pressed to achieve sufficient progress, presumably because some ceiling effects come into play at the end of KS2.

This in turn might suggest that the planned tests do not have sufficient headroom for the highest attainers, even though they are supposed to provide similar challenge to level 6 and potentially extend beyond it.

Meanwhile, schools with less than stellar attainment results will be obliged to follow the progress route to jump the floor standard. This too will be demanding because all three domains will be in play.

There will have been some internal modelling undertaken to judge how many schools would be likely to fall short of the floor standards given these arrangements and it would be very useful to know these estimates, however unreliable they prove to be.

In their absence, one suspects that the majority of schools will be below the floor standards, at least initially. That of course materially changes the nature and purpose of the standards.

To Do List

The response and the draft specifications together contain a long list of work to be carried out over the next two years or so. I have included below my best guess as to the latest possible date for each decision to be completed and communicated:

  • Decide how progress will be measured for infants and first schools between the Year R baseline and the end of KS1 (April 2014)
  • Make available to schools a ‘small number’ of sample test questions for each key stage and subject (Summer 2014)
  • Work with experts to establish the criteria for the Year R baseline (September 2014)
  • KS1 [and KS2?] teacher assessment performance descriptors to be drafted by an expert group (September 2014)
  • Complete and report outcomes of a study with schools that already use Year R baseline assessments (December 2014)
  • Decide how Year R baseline assessments will be moderated (December 2014)
  • Publish a list of assessments that meet the Year R baseline criteria (March 2015)
  • Decide how Year R baseline results will be communicated to parents and to Ofsted (March 2015)
  • Make available to schools a full set of sample materials including tests and mark schemes for all KS1 and KS2 tests (September 2015)
  • Complete work with Ofsted and teachers to improve KS1 moderation (September 2015)
  • Provide further information to enable teachers to assess pupils at the end of KS1 and KS2 who are ‘working above the P-scales but below the level of the test’ (September 2015)
  • Decide whether to move to external moderation of P-scale teacher assessment (September 2015)
  • Agree with stakeholders how to compare schools’ performance on a suite of assessment outcomes published in a standard format (September 2015)
  • Publish all final test frameworks (Autumn 2015)
  • Introduce new requirements for schools to publish a suite of assessment outcomes in a standard format (Spring 2016)
  • Panels of teachers use performance descriptors to set the standards on the new tests following their first administration in May 2016 (Summer 2016)
  • Define what counts as sufficient progress from the Year R baseline to end KS1 and end KS2 respectively (Summer 2016)

Conclusion

Overall the response is rather more cogent and coherent than the original consultation document, though there are several inconsistencies and many sins of omission.

Drawing together the key issues emerging from the commentary above, I would highlight twelve key points:

  • The declared aims express the policy direction clumsily and without conviction. The ultimate aspirations are universal ‘secondary readiness’ (though expressed in broader terms), ‘no child left behind’ and ‘every child fulfilling their potential’ but there is no real effort to reconcile these potentially conflicting notions into a consensual vision of what primary education is for. Moreover, an inconvenient truth lurks behind these statements. By raising expectations so significantly – 4b equivalent rather than 4c; 85% over the attainment threshold rather than 65%; ‘sufficient progress’ rather than median progress and across three domains rather than one – there will be much more failure in the short to medium term. More learners will fall behind and fall short of the thresholds; many more schools are likely to undershoot the floor standards. It may also prove harder for some learners to demonstrate their potential. It might have been better to acknowledge this reality and to frame the vision in terms of creating the conditions necessary for subsequent progress towards the ultimate aspirations.
  • Younger children are increasingly caught in the crossbeam from the twin searchlights of assessment and accountability. HMCI’s subsequent intervention has raised the stakes still further. This creates obvious tensions in the sector which can be traced back to disagreements over the respective purposes of early years and primary provision and how they relate to each other. (HMCI’s notion of ‘school readiness’ is no doubt as narrow to early years practitioners as ‘secondary readiness’ is to primary educators.) But this is not just a theoretical point. Additional demands for focused inspection, moderation and publication of outcomes all carry a significant price tag. It must be open to question whether the sheer weight of assessment activity is optimal and delivers value for money. Should a radical future Government – probably with a cost-cutting remit – have rationalisation in mind?
  • Giving schools the freedom to choose from a range of Year R baseline assessment tools also seems inherently inefficient and flies in the face of the clear majority of consultation responses. We are told nothing of the perceived quality of existing services, none of which can – by definition – satisfy these new expectations without significant adjustment. It will not be straightforward to construct a universal and child-friendly instrument that is a sufficiently strong predictor of Level 4b-equivalent performance in KS2 reading, writing and maths assessments undertaken seven years later. Moreover, there will be a strong temptation for the Government to pitch the baseline higher than current expectations, so matching the realignment at the other end of the process. Making the Reception baseline assessment optional – albeit with strings attached – seems rather half-hearted, almost an insurance against failure. Effective (and expensive) moderation may protect against widespread gaming, but the risk remains that Reception teachers will be even more predisposed to prioritise universal school readiness over stretching their more precocious four year-olds.
  • The task of designing an effective test for all levels of prior attainment at the end of key stage 2 is equally fraught with difficulty. The P-scales will be retained (in their existing format, unaligned with the revised national curriculum) for learners with special needs working below the equivalent of what is currently level 1. There will also be undefined provision ‘for those working above the level of the P-scales but below the level of the test’, even though the draft test development frameworks say:

‘All eligible children who are registered at maintained schools, special schools, or academies (including free schools) in England and are at the end of key stage 2 will be required to take the…test, unless they have taken it in the past.’

And this applies to all learners other than those in the exempted categories set out in the ARA booklets. The draft specifications add that test questions will be placed in order of difficulty. I have grave difficulty in understanding how such assessments can be optimal for high attainers and fear that this is bad assessment practice.

  • On top of this there is the worrying statement in the test development frameworks that scaled scores will be ‘truncated’ at the extremes of the distribution. This does not fill one with confidence that the highest and lowest attainers will have their test performance properly recognised and reported.
  • The necessary invention of ‘performance descriptors’ removes any lingering illusion that academies and free schools have significant freedom to depart from the national curriculum, at least as far as the core subjects are concerned. It is hard to understand why these descriptors could not have been published alongside the programmes of study within the national curriculum.
  • The ‘performance descriptors’ in the draft test specifications carry all sorts of health warnings that they are inappropriate for teacher assessment because they cover only material that can be assessed in a written test. But there will be significant overlap between the test and teacher assessment versions, particularly in those that describe threshold performance at the equivalent of level 4b. For we know now that there will also be hierarchies of performance descriptors – aka level descriptors – for KS1 teacher assessment in reading, writing, speaking and listening and maths, as well as for KS2 teacher assessment in writing. Levels were so problematic that it has been necessary to reinvent them!
  • What with scaled scores, average scaled scores, threshold performance descriptors and ‘levelled’ performance descriptors, schools face an uphill battle in convincing parents that the reporting of test outcomes under this system will be simpler and more understandable. At the end of KS2 parents will receive 16 different assessments in four different formats. (Remember that parents will also need to cope with schools’ approaches to internal assessment, which may or may not align with these arrangements.)
  • We are told about new requirements to be placed on schools to publish assessment outcomes, but the description is infuriatingly vague. We do not know whether certain requirements apply to both KS1 and 2, and/or to both tests and teacher assessment. The reference to ‘the percentage of pupils who achieve a high score in all areas at the end of key stage 2’ is additionally vague because it is unclear whether it applies to performance in each assessment, or across all assessments combined. Nor is the pitch of the high score explained. This is the only reference to high attainers in the entire response and it raises more questions than it answers.
  • We also have negligible information about what will appear in the school performance tables and what will be relegated to the accompanying data portal. We know there is an intention to compare schools’ performance on the measures they are required to publish and that is all. Much of the further detail in the original consultation document may or may not have fallen by the wayside.
  • The new floor standards have all the characteristics of a last-minute compromise hastily stitched together. The consultation document was explicit that floor standards would:

‘…focus on threshold attainment measures and value-added progress measures’

It anticipated that the progress measure would require average scaled scores of between 98.5 and 99.0 adding:

‘Our modelling suggests that a progress measure set at this level, combined with the 85% threshold attainment measure, would result in a similar number of schools falling below the floor as at present.’

But the analysis of responses fails to report at all on the question ‘Do you have any comments about these proposals for the Department’s floor standards?’ It does include the response to a subsequent question about including an average point score attainment measure in the floor standards (39% of respondents in favour, 31% against). But the main text does not discuss this option at all. It begins by stating that both an attainment and a progress dimension are in play, but then describes a system in which schools can choose one or the other. There is no attempt to quantify ‘sufficient progress’ and no revised modelling of the impact of standards set at this level. We are left with the suspicion that a very significant proportion of schools will not exceed the floor. There is also a potential perverse incentive for schools with very high attaining intakes not to bother about progress at all.

  • Finally, the ‘to do’ list is substantial. Several of those with the tightest deadlines ought really to have been completed ahead of the consultation response, especially given the significant delay. There is nothing about the interaction between this work programme and that proposed by NAHT’s Commission on Assessment. Much of this work would need to take place on the other side of a General Election, while the lead time for assessing KS2 progress against a Year R baseline is a full nine years. This makes the project as a whole particularly vulnerable to the whims of future governments.

I’m struggling to find the right description for the overall package. I don’t think it’s quite substantial or messy enough to count as a dog’s breakfast. But, like a poorly airbrushed portrait, it flatters to deceive. Seen from a distance it appears convincing but, on closer inspection, there are too many wrinkles that have not been properly smoothed out.

GP

April 2014

 

 

Challenging NAHT’s Commission on Assessment

.

This post reviews the Report of the NAHT’s National Commission on Assessment, published on 13 February 2014.

Since I previously subjected the Government’s consultation document on primary assessment and accountability to a forensic examination, I thought it only fair that I should apply the same high standards to this document.

I conclude that the Report is broadly helpful, but there are several internal inconsistencies and a few serious flaws.

Impatient readers may wish to skip the detailed analysis and jump straight to the summary at the end of the post which sets out my reservations in the form of 23 recommendations addressed to the Commission and the NAHT.

.

Other perspectives

Immediate reaction to the Report was almost entirely positive.

The TES included a brief Ministerial statement in its coverage, attributed to Michael Gove:

‘The NAHT’s report gives practical, helpful ideas to schools preparing for the removal of levels. It also encourages them to make the most of the freedom they now have to develop innovative approaches to assessment that meet the needs of pupils and give far more useful information to parents.’

ASCL and ATL both welcomed the Report, as did the National Governors’ Association, though there was no substantive comment from NASUWT or NUT.

The Blogosphere exhibited relatively little interest, although a smattering of posts began to expose some issues:

  • LKMco supported the key recommendations, but wondered whether the Commission might not be guilty of reinventing National Curriculum levels;
  • Mr Thomas Maths was more critical, identifying three key shortcomings, one being the proposed approach to differentiation within assessment;
  • Warwick Mansell, probably because he blogs for NAHT, confined himself largely to summarising the Report, which he found ‘impressive’, though he did raise two key points – the cost of implementing these proposals and how the recommendations relate to the as yet uncertain position of teacher assessment in the Government’s primary assessment and accountability reforms.

All of these points – and others – are fleshed out in the critique below.

.

Background

.

Remit, Membership and Evidence Base

The Commission was first announced in July 2013, when it was described as:

‘a commission of practitioners to shape the future of assessment in a system without levels.’

By September, Lord Sutherland had agreed to Chair the body and its broad remit had been established:

‘To:

  • establish a set of principles to underpin national approaches to assessment and create consistency;
  • identify and highlight examples of good practice; and
  • build confidence in the assessment system by securing the trust and support of officials and inspectors.’

Written evidence was requested by 16 October.

The first meeting took place on 21 October and five more were scheduled before the end of November.

Members’ names were not included at this stage (beyond the fact that NAHT’s President – a Staffordshire primary head – was involved) though membership was now described as ‘drawn from across education’.

Several members had in fact been named in an early October blog post from NAHT and a November press release from the Chartered Institute of Educational Assessors (CIEA) named all but one – NAHT’s Director of Education. This list was confirmed in the published Report.

The Commission had 14 members but only six of them – four primary heads, one primary deputy and one secondary deputy – could be described as practitioners.

The others included two NAHT officials in addition to the secretariat, one being General Secretary Russell Hobby, and one from ASCL; John Dunford, a consultant with several other strings to his bow, one of those being Chairmanship of the CIEA; Gordon Stobart, an academic specialist in assessment with a long pedigree in the field; Hilary Emery, the outgoing Chief Executive of the National Children’s Bureau; and Sam Freedman of Teach First.

There were also unnamed observers from DfE, Ofqual and Ofsted.

The Report says the Commission took oral evidence from a wide range of sources. A list of 25 sources is provided but it does not indicate how much of their evidence was written and how much oral.

Three of these sources are bodies represented on the Commission, two of them schools. Overall seven are from schools. One source is Tim Oates, the former Chair of the National Curriculum Review Expert Panel.

The written evidence is not published and I could find only a handful of responses online, from:

Overall one has to say that the response to the call for evidence was rather limited. Nevertheless, it would be helpful for NAHT to publish all the evidence it received – and perhaps also to consult formally on key provisions in its Report.

 .

Structure of the Report and Further Stages Proposed

The main body of the Report is sandwiched between a foreword by the Chair and a series of Annexes containing case studies, historical and international background. This analysis concentrates almost entirely on the main body.

The 21 Recommendations are presented twice, first as a list within the Executive Summary and subsequently interspersed within a thematic commentary that summarises the evidence received and also conveys the Commission’s views.

The Executive Summary also sets out a series of Underpinning Principles for Assessment and a Design Checklist for assessment in schools, the latter accompanied by a set of five explanatory notes.

It offers a slightly different version of the Commission’s Remit:

‘In carrying out its task, the Commission was asked to achieve three distinct elements:

  • A set of agreed principles for good assessment
  • Examples of current best practice in assessment that meet these principles
  • Buy-in to the principles by those who hold schools to account.’

These are markedly less ambitious than their predecessors, having dropped the reference to ‘national approaches’ and any aspiration to secure support from officials and inspectors for anything beyond the Principles.

Significantly, the Report is presented as only the first stage in a longer process, an urgent response to schools’ need for guidance in the short term.

It recommends that further work should comprise:

  • ‘A set of model assessment criteria based on the new National Curriculum.’ (NAHT is called upon to develop and promote these. The text says that a model document is being commissioned but doesn’t reveal the timescale or who is preparing it);
  • ‘A full model assessment policy and procedures, backed by appropriate professional development’ that would expand upon the Principles and Design Checklist. (NAHT is called upon to take the lead in this, but there is no indication that they plan to do so. No timescale is attached);
  • ‘A system-wide review of assessment’ covering ages 2-19. It is not explicitly stated, but one assumes that this recommendation is directed towards the Government. Again no timescale is attached.

The analysis below looks first at the assessment Principles, then the Design Checklist and finally the recommendations plus associated commentary. It concludes with an overall assessment of the Report as a whole.

.

Assessment Principles

As noted above, it seems that national level commitment is only sought in respect of these Principles, but there is no indication in the Report – or elsewhere for that matter – that DfE, Ofsted and Ofqual have indeed signed up to them.

Certainly the Ministerial statement quoted above stops well short of doing so.

The consultation document on primary assessment and accountability also sought comments on a set of core principles to underpin schools’ curriculum and assessment frameworks. It remains to be seen whether the version set out in the consultation response will match those advanced by the Commission.

The Report recommends that schools should review their own assessment practice against the Principles and Checklist together, and that all schools should have their own clear assessment principles, presumably derived or adjusted in the light of this process.

Many of the principles are unexceptionable, but there are a few interesting features that are directly relevant to the commentary below.

For it is of course critical to the internal coherence of the Report that the Design Checklist and recommendations are entirely consistent with these Principles.

I want to highlight three in particular:

  • ‘Assessment is inclusive of all abilities…Assessment embodies, through objective criteria, a pathway of progress and development for every child…Assessment objectives set high expectations for learners’.

One assumes that ‘abilities’ is intended to stand proxy for both attainment and potential, so that there should be ‘high expectations’ and a ‘pathway of progress and development’ for the lowest and highest attainers alike.

  • ‘Assessment places achievement in context against nationally standardised criteria and expected standards’.

This raises the question of whether the ‘model document’ containing assessment criteria commissioned by NAHT will be ‘nationally standardised’ and, if so, what standardisation process will be applied.

  • ‘Assessment is consistent…The results are readily understandable by third parties…A school’s results are capable of comparison with other schools, both locally and nationally’.

The implication behind these statements must be that results of assessment in each school are transparent and comparable through the accountability regime, presumably by means of the performance tables (and the data portal that we expect to be introduced to support them).

This cannot be taken as confined to statutory tests, since the text later points out that:

‘The remit did not extend to KS2 tests, floor standards and other related issues of formal accountability.’

It isn’t clear, from the Principles at least, whether the Commission believes that teacher assessment outcomes should also be comparable. Here, as elsewhere, the Report does a poor job of distinguishing between statutory teacher assessment and assessment internal to the school.

.

Design Checklist

.

Approach to Assessment and Use of Assessment

The Design Checklist is described as:

‘an evaluation checklist for schools seeking to develop or acquire an assessment system. They could also form the seed of a revised assessment policy.’

It is addressed explicitly to schools and comprises three sections covering, respectively, a school’s approach to assessment, method of assessment and use of assessment.

The middle section is by far the most significant and also the most complex, requiring five explanatory notes.

I deal with the more straightforward first and third sections before turning to the second.

‘Our approach to assessment’ simply makes the point that assessment is integral to teaching and learning, while also setting expectations for regular, universal professional development and ‘a senior leader who is responsible for assessment’.

It is not clear whether this individual is the same as, or additional to, the ‘trained assessment lead’ mentioned in the Report’s recommendations.

I can find no justification in the Report for the requirement that this person must be a senior leader.

A more flexible approach would be preferable, in which the functions to be undertaken are outlined and schools are given flexibility over how those are distributed between staff. There is more on this below.

The final section ‘Our use of assessment’ refers to staff:

  • Summarising and analysing attainment and progress;
  • Planning pupils’ learning to ensure every pupil meets or exceeds expectations (Either this is a counsel of perfection, or expectations for some learners are pitched below the level required to satisfy the assessment criteria for the subject and year in question. The latter is much more likely, but this is confusing since satisfying the assessment criteria is also described in the Checklist in terms of ‘meeting…expectations’.);
  • Analysing data across the school to ensure all pupils are stretched while the vulnerable and those at risk make appropriate progress (‘appropriate’ is not defined within the Checklist itself but an explanatory note appended to the central section – see below – glosses this phrase);
  • Communicating assessment information each term to pupils and parents through ‘a structured conversation’ and the provision of ‘rich, qualitative profiles of what has been achieved and indications of what they [ie parents as well as pupils] need to do next’; and
  • Celebrating a broad range of achievements, extending across the full school curriculum and encompassing social, emotional and behavioural development.

.

Method of Assessment: Purposes

‘Our method of assessment’ is by far the longest section, containing 11 separate bullet points. It could be further subdivided for clarity’s sake.

The first three bullets are devoted principally to some purposes of assessment. Some of this material might be placed more logically in the ‘Our Use of Assessment’ section, so that the central section is shortened and restricted to methodology.

The main purpose is stipulated as ‘to help teachers, parents and pupils plan their next steps in learning’.

The phrasing suggests that assessment should help to drive forward the learning of parents and teachers, as well as that of pupils. I’m not sure whether this is deliberate or accidental.

Two subsidiary purposes are mentioned: providing a check on teaching standards and support for their improvement; and providing a comparator with other schools via collaboration and the use of ‘external tests and assessments’.

It is not clear why these three purposes are singled out. There is some overlap with the Principles but also a degree of inconsistency between the two pieces of documentation. It might have been better to cross-reference them more carefully.

In short, the internal logic of the Checklist and its relationship with the Principles could both do with some attention.

The real meat of the section is incorporated in the eight remaining bullet points. The first four are about what pupils are assessed against and when that assessment takes place. The last four explain how assessment judgements are differentiated, evidenced and moderated.

.

Method of Assessment: What Learners Are Assessed Against – and When

The next four bullets specify that learners are to be assessed against ‘assessment criteria which are short, discrete, qualitative and concrete descriptions of what a pupil is expected to know and be able to do.’

These are derived from the school curriculum ‘which is composed of the National Curriculum and our own local design’ (Of course that is not strictly the position in academies, as another section of the Report subsequently points out.)

The criteria ‘for periodic assessment are arranged into a hierarchy setting out what children are normally expected to have mastered by the end of each year’.

Each learner’s achievement ‘is assessed against all the relevant criteria at appropriate times of the school year’.

.

The Span of the Assessment Criteria

The first explanatory note (A) clarifies that the assessment criteria are ‘discrete, tangible descriptive statements of attainment’ derived from ‘the National Curriculum (and any school curricula)’.

There is no repetition of the provision in the Principles that they should be ‘nationally standardised’ but ‘there is little room for meaningful variety’, even though academies are not obliged to follow the National Curriculum and schools have complete flexibility over the remainder of the school curriculum.

The Recommendations have a different emphasis, saying that NAHT’s model criteria should be ‘based on the new National Curriculum’ (Recommendation 6), but the clear impression here is that they will encompass the National Curriculum ‘and any school curricula’ alike.

This inconsistency needs to be resolved. NAHT might be better off confining its model criteria to the National Curriculum only – and making it clear that even these may not be relevant to academies.

.

The Hierarchy of Assessment Criteria

The second explanatory note (B) relates to the arrangement of the assessment criteria:

‘…into a hierarchy, setting out what children are normally expected to have mastered by the end of each year’.

This note is rather muddled.

It begins by suggesting that a hierarchy divided chronologically by school year is the most natural choice, because:

‘The curriculum is usually organised into years and terms for planned delivery’

That may be true, but only the Programmes of Study for the three core subjects are organised by year, and each clearly states that:

‘Schools are…only required to teach the relevant programme of study by the end of the key stage. Within each key stage, schools therefore have the flexibility to introduce content earlier or later than set out in the programme of study. In addition, schools can introduce key stage content during an earlier key stage if appropriate.’

All schools – academies and non-academies alike – therefore enjoy considerable flexibility over the distribution of the Programmes of Study between academic years.

(Later in the Report – in the commentary preceding the first six recommendations – the text mistakenly suggests that the entirety of ‘the revised curriculum is presented in a model of year-by-year progress’ (page 14). It does not mention the provision above.)

The note goes on to suggest that the Commission has chosen a different route, not because of this flexibility, but because ‘children’s progress may not fit neatly into school years’:

‘…we have chosen the language of a hierarchy of expectations to avoid misunderstandings. Children may be working above or below their school year…’

But this is not an absolute hierarchy of expectations – in the sense that learners are free to progress entirely according to ability (or, more accurately, their prior attainment) rather than in age-related lock steps.

In a true hierarchy of expectations, learners would be able to progress as fast or as slowly as they are able to, within the boundaries set by:

  • On one hand, high expectations, commensurate challenge and progression;
  • On the other hand, protection against excessive pressure and hot-housing and a judicious blending of faster pace with more breadth and depth (of which more below).

This is no more than a hierarchy by school year with some limited flexibility at the margins.

.

The Timing of Assessment Against the Criteria

The third explanatory note (C) confirms the Commission’s assumption that formal assessments will be conducted at least termly – and possibly more frequently than that.

It adds:

‘It will take time before schools develop a sense of how many criteria from each year’s expectations are normally met in the autumn, spring and summer terms, and this will also vary by subject’.

This is again unclear. It could mean that a future aspiration is to judge progress termly, by breaking down the assessment criteria still further – so that a learner who met the assessment criteria for, say, the autumn term is deemed to be meeting the criteria for the year as a whole at that point.

Without this additional layer of lock-stepping, presumably the default position for the assessments conducted in the autumn and spring terms is that learners will still be working towards the assessment criteria for the year in question.

The note also mentions in passing that:

‘For some years to come, it will be hard to make predictions from outcomes of these assessments to the results in KS2 tests. Such data may emerge over time, although there are question marks over how reliable predictions may be if schools are using incompatible approaches and applying differing standards of performance and therefore cannot pool data to form large samples.’

This is one of very few places where the Report picks up on the problems that are likely to emerge from the dissonance between internal and external statutory assessment.

But it avoids the central issue: the approach to internal assessment it advocates may not be entirely compatible with predicting future achievement in the KS2 tests. If so, its value is seriously diminished for parents and teachers alike, let alone the learners themselves. This issue reappears below.

.

Method of Assessment: How Assessment Judgements are Differentiated, Evidenced and Moderated

The four final bullet points in this section of the Design Checklist explain that all learners will be assessed as ‘developing’, ‘meeting’ or ‘exceeding’ each relevant criterion for that year.

Learners deemed to be exceeding the relevant criteria in a subject for a given year ‘will also be assessed against the criteria in that subject for the next year.’

Assessment judgements are supported by evidence comprising observations, records of work and test outcomes and are subject to moderation by teachers in the same school and in other schools to ensure they are fair, reliable and valid.

I will set moderation to one side until later in the post, since that too lies outside the scope of methodology.

.

Differentiation Against the Hierarchy of Assessment Criteria

The fourth explanatory note (D) addresses the vexed question of differentiation.

As readers may recall, the Report by the National Curriculum Review Expert Panel failed abjectly to explain how they would provide stretch and challenge in a system that focused exclusively on universal mastery and ‘readiness to progress’, saying only that further work was required to address the issue.

Paragraph 8.21 implied that they favoured what might be termed an ‘enrichment and extension’ model:

‘There are issues regarding ‘stretch and challenge’ for those pupils who, for a particular body of content, grasp material more swiftly than others. There are different responses to this in different national settings, but frequently there is a focus on additional activities that allow greater application and practice, additional topic study within the same area of content, and engagement in demonstration and discussion with others…These systems achieve comparatively low spread at the end of primary education, a factor vital in a high proportion of pupils being well positioned to make good use of more intensive subject-based provision in secondary schooling.’

Meanwhile, something akin to the P Scales might come into play for those children with learning difficulties.

On this latter point, the primary assessment and accountability consultation document said DfE would:

‘…explore whether P-scales should be reviewed so that they align with the revised national curriculum and provide a clear route to progress to higher attainment.’

We do not yet know whether this will happen, but Explanatory Note B to the Design Checklist conveys the clear message that the P-Scales need to be retained:

‘…must ensure we value the progress of children with special needs as much as any other group. The use of P scales here is important to ensure appropriate challenge and progression for pupils with SEN.’

By contrast, for high attainers, the Commission favours what might be called a ‘mildly accelerative’ model whereby learners who ‘exceed’ the assessment criteria applying to a subject for their year group may be given work that enables them to demonstrate progress against the criteria for the year above.

I describe it as mildly accelerative because there is no provision for learners to be assessed more than one year ahead of their chronological year group. This is a fairly low ceiling to impose on such accelerative progress.

It is also unclear whether the NAHT’s model assessment criteria will cover Year 7, the first year of the KS3 Programmes of Study, to enable this provision to extend into Year 6.

The optimal approach for high attainers would combine the ‘enrichment and extension’ approach apparently favoured by the Expert Panel with an accelerative approach that provides a higher ceiling, to accommodate those learners furthest ahead of their peers.

High attaining learners could then access a customised blend of enrichment (more breadth), extension (greater depth) and acceleration (faster pace) according to their needs.

This is good curricular practice and it should be reflected in assessment practice too, otherwise the risk is that a mildly accelerative assessment process will have an undesirable wash-back effect on teaching and learning.

Elsewhere, the Report advocates the important principle that curriculum, assessment and pedagogy should be developed in parallel, otherwise there is a risk that one – typically assessment – has an undesirable effect on the others. This would be an excellent exemplar of that statement.

The judgement whether a learner is exceeding the assessment criteria for their chronological year would be evidenced by enrichment and extension activity as well as by pre-empting the assessment criteria for the year ahead. Exceeding the criteria in terms of greater breadth or more depth should be equally valued.

This more rounded approach, incorporating a higher ceiling, should also be supported by the addition of a fourth ‘far exceeded’ judgement, otherwise the ‘exceeded’ judgement has to cover far too wide a span of attainment, from those who are marginally beyond their peers to those who are streets ahead.

These concerns need urgently to be addressed, before NAHT gets much further with its model criteria.

.

The Aggregation of Criteria

In order to make the overall judgement for each subject, learners’ performance against individual assessment criteria has to be combined to give an aggregate measure.

The note says:

‘The criteria themselves can be combined to provide the qualitative statement of a pupil’s achievements, although teachers and schools may need a quantitative summary. Few schools appear to favour a pure binary approach of yes/no. The most popular choice seems to be a three phase judgement of working towards (or emerging, developing), meeting (or mastered, confident, secure, expected) and exceeded. Where a student has exceeded a criterion, it may make sense to assess them also against the criteria for the next year.’

This, too, raises some questions. The statement above is consistent with one of the Report’s central recommendations:

‘Pupil progress and achievement should be communicated in terms of descriptive profiles rather than condensed to numerical summaries (although schools may wish to use numerical data for internal purposes).’

Frankly it seems unlikely that such ‘condensed numerical summaries’ can be kept hidden from parents. Indeed, one might argue that they have a reasonable right to know them.

These aggregations – whether qualitative or quantitative – will be differentiated at three levels, according to whether the learner best fits a ‘working towards’, ‘meeting’ or ‘exceeding’ judgement for the criteria relating to the appropriate year in each programme of study.
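
The Report specifies no rule for combining the per-criterion judgements into these aggregates, so the following is a minimal Python sketch of one possible rule, purely illustrative, using the labels from the note quoted above:

def aggregate(judgements):
    # judgements: one entry per assessment criterion for the year,
    # each 'working towards', 'meeting' or 'exceeding'
    if all(j == 'exceeding' for j in judgements):
        return 'exceeding'
    if all(j in ('meeting', 'exceeding') for j in judgements):
        return 'meeting'
    return 'working towards'

print(aggregate(['meeting', 'exceeding', 'meeting']))  # -> 'meeting'

On this reading, assessment against the next year’s criteria would be a consequence of an ‘exceeding’ judgement rather than a precondition for it – which is exactly the ambiguity explored below.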

I have just recommended that there needs to be an additional level at the top end, to remove undesirable ceiling effects that lower expectations and are inconsistent with the Principles set out in the Report. I leave it to others to judge whether, if this was accepted, a fifth level is also required at the lower end to preserve the symmetry of the scale.

There is also a ‘chicken and egg’ issue here. It is not clear whether a learner must already be meeting some of the criteria for the succeeding year in order to show they are exceeding the criteria for their own year – or whether assessment against the criteria for the succeeding year is one potential consequence of a judgement that they are exceeding the criteria for their own year.

This confusion is reinforced by a difference of emphasis between the checklist – which says clearly that learners will be assessed against the criteria for the succeeding year if they exceeded the criteria for their own – and the explanatory note, which says only that this may happen.

Moreover, the note suggests that this applies criterion by criterion – ‘where a student has exceeded a criterion’ – rather than after the criteria have been aggregated, which is the logical assumption from the wording in the checklist – ‘exceeded the relevant criteria’.

This too needs clarifying.

.

Recommendations and Commentary

I will try not to repeat in this section material already covered above.

I found that the recommendations did not always sit logically with the preceding commentary, so I have departed from the subsections used in the Report, grouping the material into four broad sections: further methodological issues; in-school and school-to-school support; national support; and phased implementation.

Each section leads with the relevant Recommendations and folds in additional relevant material from different sections of the commentary. I have repeated recommendations where they are relevant to more than one section.

.

Further methodological issues

Recommendation 4: Pupils should be assessed against objective criteria rather than ranked against each other

Recommendation 5: Pupil progress and achievements should be communicated in terms of descriptive profiles rather than condensed to numerical summaries (although schools may wish to use numerical data for internal purposes).

Recommendation 6: In respect of the National Curriculum, we believe it is valuable – to aid communication and comparison – for schools to be using consistent criteria for assessment. To this end, we call upon NAHT to develop and promote a set of model assessment criteria based on the new National Curriculum.

The commentary discusses the evolution of National Curriculum levels, including the use of sub-levels and their application to progress as well as achievement. In doing so, it summarises the arguments for and against the retention of levels.

In favour of retention:

  • The system of levels provides a common language used by schools to summarise attainment and progress;
  • It is argued (by some professionals) that parents have grown up with levels and have an adequate grasp of what they mean;
  • The numerical basis of levels was useful to schools in analysing and tracking the performance of large numbers of pupils;
  • The decision to remove levels was unexpected and caused concern within the profession, especially as it was also announced that being ‘secondary ready’ was to be associated with the achievement of Level 4B;
  • If levels are removed, they must be replaced by a different common language, or at least ‘an element of compatibility or common understanding’ should several different assessment systems emerge.

In favour of removal:

  • It is argued (by the Government) that levels are not understood by parents and other stakeholders;
  • The numerical basis of levels does not have the richness of a more rounded description of achievement – the important narrative behind the headline number is often lost through over-simplification; and
  • There are adverse effects from labelling learners with levels.

The Commission is also clear that the Government places too great a reliance on tests, particularly for accountability purposes. This has narrowed the curriculum and resulted in ‘teaching to the test’.

It also creates other perverse incentives, including the inflation of assessment outcomes for performance management purposes or, conversely, the deflation of assessment outcomes to increase the rate of progress during the subsequent key stage.

Moreover, curriculum, assessment and pedagogy must be mutually supportive. Even if the assessment tail must not be allowed to wag the curricular dog:

‘…curriculum and assessment should be developed in tandem.’

Self-evidently, this has not happened, since the National Curriculum was finalised way ahead of the associated assessment arrangements, which, in the primary sector, are still unconfirmed.

There is a strong argument that such assessment criteria should have been developed by the Government and made integral to the National Curriculum.

Indeed, in Chapter 7 of its Report on ‘The Framework for the National Curriculum’, the National Curriculum Expert Panel proposed that attainment targets should be retained, not in the form of level descriptors but as ‘statements of specific learning outcomes related to essential knowledge’ that would be ‘both detailed and precise’. They might be presented alongside the Programmes of Study.

The Government ignored this, opting for a very broad single, standard attainment target in each programme of study:

‘By the end of each key stage, pupils are expected to know, apply and understand the matters, skills and processes specified in the relevant programme of study.’

As I pointed out in a previous post, one particularly glaring omission from the Consultation Document on Primary Assessment and Accountability was any explanation of how Key Stage Two tests and statutory teacher assessments would be developed from these singleton ‘lowest common denominator’ attainment targets, especially in a context where academies, while not obliged to follow the National Curriculum, would undertake the associated tests.

We must await the long-delayed response to the consultation to see if it throws any light on this matter.

Will it commit the Government to producing a framework, at least for statutory tests in the core subjects, or will it throw its weight behind the NAHT’s model criteria instead?

I have summarised this section of the Report in some detail because it is the nearest the Report gets to providing a rational justification for the approach set out in the recommendations above.

The model criteria appear confined to the National Curriculum at this point, though we have already noted that is not the case elsewhere in the Report.

I have also discussed briefly the inconsistency in permitting the translation of descriptive profiles into numerical data ‘for internal purposes’, but undertook to develop that further, for there is a wider case that the Report does not entertain.

We know that there will be scores attached to KS2 tests, since those are needed to inform parents and for accountability purposes.

The Primary Assessment and Accountability consultation document proposed a tripartite approach:

  • Scaled scores to show attainment, built around a new ‘secondary-ready’ standard, broadly comparable with the current Level 4B;
  • Allocation to a decile within the range of scaled scores achieved nationally, showing attainment compared with one’s peers; and
  • Comparison with the average scaled score of those nationally with the same prior attainment at the baseline, to show relative progress.

Crudely speaking, the first of these measures is criterion-referenced while the second and third are norm-referenced.
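
To illustrate the norm-referenced element – a sketch only, since the national distribution of scaled scores is unknown and the decile proposal may yet be dropped – allocation to a decile is a simple percentile calculation:

import numpy as np

# Hypothetical national distribution of scaled scores (illustration only)
rng = np.random.default_rng(1)
national_scores = rng.normal(100, 15, size=600_000)

def decile(score):
    # Return 1 (lowest tenth nationally) to 10 (highest tenth)
    boundaries = np.percentile(national_scores, range(10, 100, 10))
    return int(np.searchsorted(boundaries, score, side='right')) + 1

print(decile(115))  # a pupil well above the assumed average lands in decile 9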

We do not yet know whether these proposals will proceed – there has been some suggestion that deciles at least will be dropped – but parents will undoubtedly want schools to be able to tell them what scaled scores their children are on target to achieve, and how those compare with the average for those with similar prior attainment.

It will be exceptionally difficult for schools to convey that information within the descriptive profiles, insofar as they relate to English and maths, without adopting the same numerical measures.

It might be more helpful to schools if the NAHT’s recommendations recognised that fact. For the brutal truth is that, if schools’ internal assessment processes do not respond to this need, they will have to set up parallel processes that do so.

In order to derive descriptive profiles, there must be objective assessment criteria that supply the building blocks, hence the first part of Recommendation 4. But I can find nothing in the Report that explains explicitly why pupils cannot also be ranked against each other. This can only be a veiled and unsubstantiated objection to deciles.

Of course it would be quite possible to rank pupils at school level and, in effect, that is what schools will do when they condense the descriptive profiles into numerical summaries.

The real position here is that such rankings would exist, but would not be communicated to parents, for fear of ‘labelling’. But the labelling has already occurred, so the resistance is attributable solely to communicating these numerical outcomes to parents. That is not a sustainable position.

.

In-school and school-to-school support

Recommendation 1: Schools should review their assessment practice against the principles and checklist set out in this report. Staff should be involved in the evaluation of existing practice and the development of a new, rigorous assessment system and procedures to enable the school to promote high quality teaching and learning.

Recommendation 2: All schools should have clear assessment principles and practices to which all staff are committed and which are implemented. These principles should be supported by school governors and accessible to parents, other stakeholders and the wider school community.

Recommendation 3: Assessment should be part of all school development plans and should be reviewed regularly. This review process should involve every school identifying its own learning and development needs for assessment. Schools should allocate specific time and resources for professional development in this area and should monitor how the identified needs are being met.

Recommendation 7 (part): Schools should work in collaboration, for example in clusters, to ensure a consistent approach to assessment. Furthermore, excellent practice in assessment should be identified and publicised…

Recommendation 9: Schools should identify a trained assessment lead, who will work with other local leads and nationally accredited assessment experts on moderation activities.

Recommendation 16: All those responsible for children’s learning should undertake rigorous training in formative, diagnostic and summative assessment, which covers how assessment can be used to support teaching and learning for all pupils, including those with special educational needs. The government should provide support and resources for accredited training for school assessment leads and schools should make assessment training a priority.

Recommendation 20: Schools should be asked to publish their principles of assessment from September 2014, rather than being required to publish a detailed assessment framework, which instead should be published by 2016. The development of the full framework should be outlined in the school development plan with appropriate milestones that allow the school sufficient time to develop an effective model.

All these recommendations are perfectly reasonable in themselves, but it is worth reflecting for a while on the likely cost and workload implications, particularly for smaller primary schools:

Each school must have a ‘trained assessment lead’ who may or may not be the same as the ‘senior leader who is responsible for assessment’ mentioned in the Design Checklist. There is no list of responsibilities for that person, but it would presumably include:

  • Leading the review of assessment practice and developing a new assessment system;
  • Leading the definition of the school’s assessment principles and practices and communicating these to governors, parents, stakeholders and the wider community;
  • Lead responsibility for the coverage of assessment within the school’s development plan and the regular review of that coverage;
  • Leading the identification and monitoring of the school’s learning and development needs for assessment;
  • Ensuring that all staff receive appropriate professional development – including ‘rigorous training in formative diagnostic and summative assessment’;
  • Leading the provision of in-school and school-to-school professional development relating to assessment;
  • Allocating time and resources for all assessment-related professional development and monitoring its impact;
  • Leading collaborative work with other schools to ensure a consistent approach to assessment;
  • Disseminating effective practice; and
  • Working with other local assessment leads and external assessment experts on moderation activities.

And, on top of this, there is a range of unspecified additional responsibilities associated with the statutory tests.

It is highly unlikely that this range of responsibilities could be undertaken effectively by a single person in less than half a day a week, as a bare minimum. There will also be periods of more intense pressure when a substantially larger time allocation is essential.

The corresponding salary cost for a ‘senior leader’ might be £3,000-£4,000 per year, not to mention the cost of undertaking the other responsibilities displaced.
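
For what it is worth, the arithmetic behind that estimate is easily reproduced (the salary range is my assumption, not the Report’s):

# Half a day out of a five-day week is a 0.1 FTE commitment
fte_fraction = 0.5 / 5

# Assumed pro-rata salary range for a primary senior leader (GBP)
for salary in (30_000, 40_000):
    print(f"£{salary:,} salary -> £{salary * fte_fraction:,.0f} a year")

# Output:
# £30,000 salary -> £3,000 a year
# £40,000 salary -> £4,000 a year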

There will also need to be a sizeable school budget and time allocation for staff to undertake reviews, professional development and moderation activities.

Moderation itself will carry a significant cost. Internal moderation imposes a sizeable opportunity cost in staff time, while external moderation must otherwise be bought in at greater direct expense.

Explanatory note (E), attached to the Design Checklist, says:

‘The exact form of moderation will vary from school to school and from subject to subject. The majority of moderation (in schools large enough to support it) will be internal but all schools should undertake a proportion of external moderation each year, working with partner schools and local agencies.’

Hence the cost of external moderation will fall disproportionately on smaller schools with smaller budgets.

It would be wrong to suggest that this workload is completely new. To some extent these various responsibilities will be undertaken already, but the Commission’s recommendations are effectively a ratcheting up of the demand on schools.

Rather than insisting on these responsibilities being allocated to a single individual with other senior management responsibilities, it might be preferable to set out the responsibilities in more detail and give schools greater flexibility over how they should be distributed between staff.

Some of these tasks might require senior management input, but others could be handled by other staff, including paraprofessionals.

.

National support

Recommendation 7 (part): Furthermore, excellent practice in assessment should be identified and publicised, with the Department for Education responsible for ensuring that this is undertaken.

Recommendation 8 (part): Schools should be prepared to submit their assessment to external moderators, who should have the right to provide a written report to the head teacher and governors setting out a judgement on the quality and reliability of assessment in the school, on which the school should act. The Commission is of the view that at least some external moderation should be undertaken by moderators with no vested interest in the outcomes of the school’s assessment. This will avoid any conflicts of interest and provide objective scrutiny and broader alignment of standards across schools.

Recommendation 9: Schools should identify a trained assessment lead, who will work with other local leads and nationally accredited assessment experts on moderation activities.

Recommendation 11: The Ofsted school inspection framework should explore whether schools have effective assessment systems in place and consider how effectively schools are using pupil assessment information and data to improve learning in the classroom and at key points of transition between key stages and schools.

Recommendation 14: Further work should be undertaken to improve training for assessment within initial teacher training (ITT), the newly qualified teacher (NQT) induction year and on-going professional development. This will help to build assessment capacity and support a process of continual strengthening of practice within the school system.

Recommendation 15: The Universities’ Council for the Education of Teachers (UCET) should build provision in initial teacher training for delivery of the essential assessment knowledge.

Recommendation 16: All those responsible for children’s learning should undertake rigorous training in formative, diagnostic and summative assessment, which covers how assessment can be used to support teaching and learning for all pupils, including those with special educational needs. The government should provide support and resources for accredited training for school assessment leads and schools should make assessment training a priority.

Recommendation 17: A number of pilot studies should be undertaken to look at the use of information technology (IT) to support and broaden understanding and application of assessment practice.

Recommendation 19: To assist schools in developing a robust framework and language for assessment, we call upon the NAHT to take the lead in expanding the principles and design checklist contained in this report into a full model assessment policy and procedures, backed by appropriate professional development.

There are also several additional proposals in the commentary that do not make it into the formal recommendations:

  • Schools should be held accountable for the quality of their assessment practice as well as their assessment results, with headteachers also appraising teachers on their use of assessment. (The first part of this formulation appears in Recommendation 11 but not the second.) (p17);
  • It could be useful for the teaching standards to reflect further assessment knowledge, skills and understanding (p17);
  • A national standard in assessment practice for teachers would be a useful addition (p18);
  • The Commission also favoured the approach of having a lead assessor to work with each school or possibly a group of schools, helping to embed good practice across the profession (p18).

We need to take stock of the sheer scale of the infrastructure that is being proposed and its likely cost.

In respect of moderation alone, the Report is calling for sufficient external moderators, ‘nationally accredited assessment experts’ and possibly lead assessors to service some 17,000 primary schools.

Even if we assume that these roles are combined in the same person and that each person can service, say, 25 schools, that still implies a cadre of some 700 people, who also need to be supported, managed and trained.

If they are serving teachers, there is an obvious opportunity cost. Providing a service on this scale would cost tens of millions of pounds a year.
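
A back-of-the-envelope calculation reproduces these figures (the caseload and unit cost are my assumptions, not the Report’s):

schools = 17_000               # primary schools to be serviced
schools_per_moderator = 25     # assumed annual caseload per person
moderators = schools / schools_per_moderator

# Assumed fully-loaded annual cost per moderator: salary share,
# training, management and support (GBP)
cost_per_moderator = 50_000

print(f"{moderators:.0f} moderators")  # 680 moderators
print(f"~£{moderators * cost_per_moderator / 1_000_000:.0f}m a year")  # ~£34m a year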

Turning to training and professional development, the Commission is proposing:

  • Accredited training for some 17,000 school assessment leads (with an ongoing requirement to train new appointees and refresh the training of those who undertook it too far in the past);
  • ‘Rigorous training in formative, diagnostic and summative assessment, which covers how assessment can be used to support teaching and learning for all pupils, including those with special educational needs’ for everyone deemed responsible for children’s learning, so not just teachers. This will include hundreds of thousands of people in the primary sector alone.
  • Revitalised coverage of assessment in ITE and induction, on top of the requisite professional development package.

The Report says nothing of the cost of developing, providing and managing this huge training programme, which would add further tens of millions of pounds a year.

I am plucking a figure out of the air, but it would be reasonable to suggest that moderation and training costs combined might require an annual budget of some £50 million – and quite possibly double that. 

Unless one argues that the testing regime should be replaced by a national sampling process – and while the Report says some of the Commission’s members supported that, it stops short of recommending it – there are no obvious offsetting savings.

It is disappointing that the Commission made no effort at all to quantify the cost of its proposals.

These recommendations provide an excellent marketing opportunity for some of the bodies represented on the Commission.

For example, the CIEA press release welcoming the Report says:

‘One of the challenges, and one that schools will need to meet, is in working together, and with local and national assessment experts, to moderate their judgements and ensure they are working to common standards across the country. The CIEA has an important role to play in training these experts.’

Responsibility for undertaking pilot studies on the role of IT in assessment is not allocated, but one assumes it would be overseen by central government and also funded by the taxpayer.

Any rollout from the pilots would have additional costs attached and would more than likely create additional demand for professional development.

The reference to DfE taking responsibility for sharing excellent practice is already a commitment in the consultation document:

‘…we will provide examples of good practice which schools may wish to follow. We will work with professional associations, subject experts, education publishers and external test developers to signpost schools to a range of potential approaches.’ (paragraph 3.8).

Revision of the School Inspection Framework will require schools to give due priority to the quality of their assessment practice, though Ofsted might reasonably argue that it is already there.

Paragraph 116 of the School Inspection Handbook says:

‘Evidence gathered by inspectors during the course of the inspection should include… the quality and rigour of assessment, particularly in nursery, reception and Key Stage 1.’

We do not yet know whether NAHT will respond positively to the recommendation that it should go beyond the model assessment criteria it has already commissioned by leading work to expand the Principles and Design Checklist into ‘a full model assessment policy and procedures backed by appropriate professional development’.

There was no reference to such plans in the press release accompanying the Report.

Maybe the decision could not be ratified in time by the Association’s decision-making machinery – but this did not prevent the immediate commissioning of the model criteria.

.

Phased Implementation

Recommendation 10: Ofsted should articulate clearly how inspectors will take account of assessment practice in making judgements and ensure both guidance and training for inspectors is consistent with this.

Recommendation 12: The Department for Education should make a clear and unambiguous statement on the teacher assessment data that schools will be required to report to parents and submit to the Department for Education. Local authorities and other employers should provide similar clarity about requirements in their area of accountability.

Recommendation 13: The education system is entering a period of significant change in curriculum and assessment, where schools will be creating, testing and revising their policies and procedures. The government should make clear how they will take this into consideration when reviewing the way they hold schools accountable as new national assessment arrangements are introduced during 2014/15. Conclusions about trends in performance may not be robust.

Recommendation 18: The use by schools of suitably modified National Curriculum levels as an interim measure in 2014 should be supported by the government. However, schools need to be clear that any use of levels in relation to the new curriculum can only be a temporary arrangement to enable them to develop, implement and embed a robust new framework for assessment. Schools need to be conscious that the new curriculum is not in alignment with the old National Curriculum levels.

Recommendation 20: Schools should be asked to publish their principles of assessment from September 2014, rather than being required to publish a detailed assessment framework, which instead should be published by 2016. The development of the full framework should be outlined in the school development plan with appropriate milestones that allow the school sufficient time to develop an effective model.

Recommendation 21: A system wide review of assessment should be undertaken. This would help to repair the disjointed nature of assessment through all ages, 2-19.

The Commission quite rightly identifies a number of issues caused by the implementation timetable, combined with continuing uncertainty over aspects of the Government’s plans.

At the time of writing, the response to the consultation document has still not been published (though it was due in autumn 2013), yet schools will be implementing the new National Curriculum from this September.

The Report says:

‘There was strong concern expressed about the requirement for schools to publish their detailed curriculum and assessment framework in September 2014.’

This is repeated in Recommendation 20, together with the suggestion that this timeline should be amended so that only a school’s principles for assessment need be published by this September.

I have been trying to pin down the source of this requirement.

Schedule 4 of The School Information (England) (Amendment) Regulations 2012 does not require the publication of a detailed assessment framework, referring only to:

‘The following information about the school curriculum—

(a)  in relation to each academic year, the content of the curriculum followed by the school for each subject and details as to how additional information relating to the curriculum may be obtained;

(b)  in relation to key stage 1, the names of any phonics or reading schemes in operation; and

(c)  in relation to key stage 4—

(i) a list of the courses provided which lead to a GCSE qualification,

(ii) a list of other courses offered at key stage 4 and the qualifications that may be acquired.’

I could find no Government guidance stating unequivocally that this requires schools to carve up all the National Curriculum programmes of study into year-by-year chunks. (Though there is no additional burden attached to publication if they have already undertaken this task for planning purposes.)

There are references to the publication of Key Stage 2 results (which will presumably need updating to reflect the removal of levels), but nothing on the assessment framework.

Moreover, the DfE mandatory timeline says that from the Spring Term of 2014:

‘All schools must publish their school curriculum by subject and academic year, including their provision of personal, social, health and economic education (PSHE).’

(The hyperlink returns one to the Regulations quoted above.)

There is no requirement for publication of further information in September.

I wonder therefore if this is a misunderstanding. I stand to be corrected if readers can point me to the source.

It may arise from the primary assessment and accountability consultation document, which discusses publication of curricular details and then proceeds immediately to discuss the relationship between curriculum and assessment:

‘Schools are required to publish this curriculum on their website…In turn schools will be free to design their approaches to assessment, to support pupil attainment and progression. The assessment framework must be built into the school curriculum, so that schools can check what pupils have learned and whether they are on track to meet expectations at the end of the key stage, and so that they can report regularly to parents.’ (paras 3.4-3.5)

But this conflation isn’t supported by the evidence above and, anyway, these are merely proposals.

That said, it must be assumed that the Commission consulted its DfE observer on this point before basing recommendations on this interpretation.

If the observer’s response was consistent with the Commission’s interpretation, then it is apparently inconsistent with all the material so far published by the Department!

It may be necessary for NAHT to obtain clarification of this point given the evidence cited above.

That aside, there are issues associated with the transition from the current system to the future system.

The DfE’s January 2014 ‘myths and facts’ publication says:

‘As part of our reforms to the national curriculum, the current system of “levels” used to report children’s attainment and progress will be removed from September 2014. Levels are not being banned, but will not be updated to reflect the new national curriculum and will not be used to report the results of national curriculum tests. Key Stage 1 and Key Stage KS2 [sic] tests taken in the 2014 to 2015 academic year will be against the previous national curriculum, and will continue to use levels for reporting purposes

Schools will be expected to have in place approaches to formative assessment that support pupil attainment and progression. The assessment framework should be built into the school curriculum, so that schools can check what pupils have learned and whether they are on track to meet expectations at the end of the key stage, and so that they can report regularly to parents. Schools will have the flexibility to use approaches that work for their pupils and circumstances, without being constrained by a single national approach.’

The reference here to having approaches in place – rather than the publication of a ‘detailed curriculum and assessment framework’ – would not seem wildly inconsistent with the Commission’s idea that schools should establish their principles by September 2014, and develop their detailed assessment frameworks iteratively over the two succeeding years. However, the Government needs to clarify the position.

Since Key Stage 2 tests will not dispense with levels until May 2016 (and they will be published in the December 2015 Performance Tables), there will be an extended interregnum in which National Curriculum Levels will continue to have official currency.

Moreover, levels may still be used in schools – they are not being banned – though they will not be aligned to the new National Curriculum.

The Report says:

‘…it is important to recognise that, even if schools decide to continue with some form of levels, the new National Curriculum does not align to the existing levels and level descriptors and this alignment is a piece of work that needs to be undertaken now.’ (p19).

However, the undertaking of this work does not feature in the Recommendations, unless it is implicit in the production by NAHT of ‘a full model assessment policy and procedures’, which seems unlikely.

One suspects that the Government would be unwilling to endorse such a process, even as a temporary arrangement, since what is to stop schools from continuing to use this new, improved levels structure more permanently?

The Commission would appear to be on stronger ground in asking Ofsted to make allowances during the interregnum (which is what I think Recommendation 10 is about) especially given that, as Recommendation 13 points out, evidence of ‘trends in performance may not be robust’.

The point about clarity over teacher assessment is well made – and one hopes it will form part of the response to the primary assessment and accountability consultation document when that is eventually published.

The Report itself could have made progress in this direction by establishing and maintaining a clearer distinction between statutory and internal teacher assessment.

The consultation document itself made clear that KS2 writing would continue to be assessed via teacher assessment rather than a test, and, moreover:

‘At the end of each key stage schools are required to report teacher assessment judgements in all national curriculum subjects to parents. Teachers will judge whether each pupil has met the expectations set out in the new national curriculum. We propose to continue publishing this teacher assessment in English, mathematics and science, as Lord Bew recommended.’ (para 3.9)

But what the document does not say is what requirements will be imposed to ensure consistency across this data. Aside from KS2 writing, will these statutory teacher assessments also be subject to the new scaled scores, and potentially deciles too?

Until schools have answers to such questions, they cannot settle the overall shape of their assessment processes.

The final recommendation, for a system-wide review of assessment from 2-19, is whistling in the wind, especially given the level of disruption already caused by the decision to remove levels.

Neither this Government nor the next is likely to act upon it.

.

Conclusion

The Commission’s Report moves us forward in broadly the right direction.

The Principles, Design Checklist and wider recommendations help to fill some of the void created by the decision to remove National Curriculum levels, the limited nature of the primary assessment and accountability consultation document and the inordinate delay in the Government’s response to that consultation.

We are in a significantly better place as a consequence of this work being undertaken.

But there are some worrying inconsistencies in the Report as well as some significant shortcomings to the proposals it contains. There are also several unanswered questions.

Not to be outdone, I have bound these up into a series of recommendations directed at NAHT and its Commission. There are 23 in all and I have given mine letters rather than numerals, to distinguish them from the Commission’s own recommendations.

  • Recommendation A: The Commission should publish all the written evidence it received.
  • Recommendation B: The Commission should consult on key provisions within the Report, seeking explicit commitment to the Principles from DfE, Ofqual and Ofsted.
  • Recommendation C: The Commission should ensure that its Design Checklist is fully consistent with the Principles in all respects. It should also revisit the internal logic of the Design Checklist.
  • Recommendation D: So far as possible, ahead of the primary assessment and accountability consultation response, the Commission should distinguish clearly how its proposals relate to statutory teacher assessment, alongside schools’ internal assessment processes.
  • Recommendation E: NAHT should confirm who it has commissioned to produce model assessment criteria and to what timetable. It should also explain how these criteria will be ‘nationally standardised’.
  • Recommendation F: The Commission should clarify whether the trained assessment lead mentioned in Recommendation 9 is the same as, or different from, the ‘senior leader who is responsible for assessment’ mentioned in the Design Checklist.
  • Recommendation G: The Commission should set out more fully the responsibilities allocated to this role or roles and clarify that schools have flexibility over how they distribute those responsibilities between staff.
  • Recommendation H: NAHT should clarify how the model criteria under development apply – if at all – to the wider school curriculum in all schools and to academies not following the National Curriculum.
  • Recommendation I: NAHT should clarify how the model criteria under development will allow for the fact that in all subjects all schools enjoy flexibility over the positioning of content in different years within the same key stage – and can also anticipate parts of the subsequent key stage.
  • Recommendation J: NAHT should clarify whether the intention is that the model criteria should reflect the allocation of content to specific terms as well as to specific years.
  • Recommendation K: The Commission should explain how its approach to internal assessment will help predict future performance in end of Key Stage tests.
  • Recommendation L: The Commission should shift from its narrow and ‘mildly accelerative’ view of high attainment to accommodate a richer concept that combines enrichment (breadth), extension (depth) and acceleration (faster pace) according to learners’ individual needs.
  • Recommendation M: The Commission should incorporate a fourth ‘far exceeded’ assessment judgement, since the ‘exceeded’ judgement covers too wide a span of attainment.
  • Recommendation N: NAHT should clarify whether its model criteria will extend into KS3, to accommodate assessment against the criteria for at least year 7, and ideally beyond.
  • Recommendation O: The Commission should clarify whether anticipating criteria for a subsequent year is a cause or a consequence of being judged to be ‘exceeding’ expectations in the learner’s own chronological year.
  • Recommendation P: The Commission should confirm that numerical summaries of assessment criteria – as well as any associated ranking positions – should be made available to parents who request them.
  • Recommendation Q: The Commission should explain why schools should be forbidden from ranking learners against each other (or allocating them to deciles).
  • Recommendation R: The Commission should assess the financial impact of its proposals on schools of different sizes.
  • Recommendation S: The Commission should cost its proposals for training and moderation, identifying the burden on the taxpayer and any offsetting savings.
  • Recommendation T: NAHT should clarify its response to Recommendation 19, that it should lead the development of a full model assessment policy and procedures.
  • Recommendation U: The Commission should clarify with DfE its understanding that schools are required to publish a detailed curriculum and assessment framework by September 2014.
  • Recommendation V: The Commission should clarify with DfE the expectation that schools should have in place ‘approaches to formative assessment’ and whether the proposed assessment principles satisfy this requirement.
  • Recommendation W: The Commission should clarify whether it is proposing that work be undertaken to align National Curriculum levels with the new National Curriculum and, if so, who it proposes should undertake this.

So – good overall – subject to these 23 reservations!

Some are more significant than others. Given my area of specialism, I feel particularly strongly about those that relate directly to high attainers, especially L and M above.

Those are the two I would nail to the door of 1 Heath Square.

.

GP

March 2014

High Attainment in the 2013 Primary School Performance Tables

.

This is a distillation of data about high attainment and the performance of high attaining learners in the 2013 Primary School Performance Tables.

It draws on the statistics contained in SFR51/2013 – National curriculum assessments at key stage 2: 2012-13.

For the purposes of this post, high attainment is Level 5 and above at KS2.

The definition of high attainers is taken from the School Performance Tables. A distinction between the performance of low, medium and high attaining pupils was first introduced into the 2011 Tables. It is based on prior attainment four years earlier at the end of Key Stage 1.

The User Guide to the Tables explains the distinction thus:

‘Prior attainment definitions are based on KS1 Teacher Assessment (using the KS1 Average Point Score) as follows:

  • Low attaining = those below Level 2 at KS1 (ie those with a KS1 APS < 12);
  • Middle attaining = those at Level 2 at KS1 (ie those with a KS1 APS >= 12 but <18);
  • High attaining = those above Level 2 at KS1 (ie those with a KS1 APS >= 18).

Where a pupil does not have a KS1 assessment (eg. because they weren’t in the country at the time), they will not be included in these figures.’

It follows that this definition will not include learners who are particularly strong in one area and comparatively weak in another, but it will include those who achieve relatively strongly across the board.
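For anyone who prefers the banding rule in executable form, here is a minimal Python sketch of the thresholds quoted above. The function name and the use of None for a missing assessment are my own assumptions rather than anything published alongside the Tables.

    def prior_attainment_band(ks1_aps):
        """Band a pupil by KS1 Average Point Score, following the
        Performance Tables User Guide definition quoted above."""
        if ks1_aps is None:
            return None       # no KS1 assessment: excluded from the figures
        if ks1_aps < 12:
            return "low"      # below Level 2 at KS1
        if ks1_aps < 18:
            return "middle"   # at Level 2 at KS1 (APS >= 12 but < 18)
        return "high"         # above Level 2 at KS1 (APS >= 18)

So a pupil averaging just below 18 points counts as a middle attainer however strong they are in a single subject – which is the point made above.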

The proportions of the KS2 cohort defined as high, middle and low attainers in state-funded schools in 2013 were as follows:

Year High % Middle % Low %
2013 25 57 18


Headlines

  • The percentage of pupils achieving Level 5 and above is down 4 percentage points in reading but up 2 points in maths.
  • 7% of pupils achieved Level 6 in maths, up from 3% in 2012 – including a staggering 29% of Chinese pupils. Some 2% of pupils achieved Level 6 in writing and in grammar, punctuation and spelling (GPS), but less than 1% achieved Level 6 in reading.
  • According to the Tables, there is a 16 percentage point achievement gap between the proportions of advantaged and disadvantaged learners achieving Level 5 and above in reading, writing and maths, up 1 point on 2012. But this is a smaller gap than exists at Level 4B and above (21 points) and at Level 4 and above (18 points). (All gaps in this post are percentage-point differences – see the short sketch after this list.)
  • On the other hand, the SFR shows that the FSM/non-FSM and advantaged/disadvantaged gaps for each assessment are invariably significantly wider at Level 5 and above than at Level 4 and above. The biggest differences are in reading (10 percentage points wider for disadvantaged pupils; 8 points wider for FSM) and in maths (10 points wider for disadvantaged; 7 points wider for FSM).
  • A worrying 37% of high attainers in state-funded schools did not achieve Level 5 or above in reading, writing and maths. Not one high attainer achieved this in 64 primary schools.
  • Significant numbers of schools had no pupils at Level 6 in each assessment: some 12,700 had none in reading; about 10,750 had none in writing; some 10,200 had none in GPS; and over 5,100 had none in maths.
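A brief note on units: the gaps quoted in this post are differences in percentage points between two groups’ attainment rates, not relative (percentage) differences. A trivial Python sketch, using the reading Level 5 figures that appear later in the post:

    def gap_in_points(rate_a, rate_b):
        """Attainment gap as a percentage-point difference."""
        return rate_a - rate_b

    # Reading, Level 5 and above: 48% of non-FSM pupils versus 27% of FSM pupils
    print(gap_in_points(48, 27))   # 21 percentage points

The same pair of figures expressed as a relative difference would be (48 - 27) / 27, or roughly 78% – which is why the distinction matters.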


Summary of Outcomes in the 2013 Primary Performance Tables

.

Aggregated – Reading, Writing and Maths

  • Overall, 21% of pupils in state-funded schools achieved Level 5 or above in reading, writing and maths (up 1 percentage point from 20% in 2012).
  • 25% of girls achieved this (up from 23% in 2012) and 18% of boys did so (up from 17% in 2012), giving a gender gap of 7 percentage points. Some 19% of EAL pupils achieved this outcome.
  • 10% of disadvantaged pupils achieved this, compared with 26% of other pupils, giving an achievement gap of 16 percentage points (in 2012, 9% of disadvantaged pupils and 24% of other pupils did so, so the gap has widened by 1 point since last year). However, this gap is significantly smaller than the 21 point gap at Level 4B and the 19 point gap at Level 4.
  • 63% of high attainers in state-funded schools achieved this benchmark, meaning that a worrying 37% fell short. Meanwhile, 10% of middle attainers were successful.
  • Almost all high attainers secured Level 4B and above (97%) and Level 4 and above (99%).
  • The percentage achieving this benchmark varied by school type: from 25% in converter academies, through 21% in LA maintained mainstream schools and 14% in free schools, to 10% in sponsored academies.
  • One school – Litton C of E Primary (Buxton) – achieved 100% on this measure (six pupils). A dozen schools managed 75% or more, including two with one form of entry (1FE) – Grinling Gibbons Primary School (Lewisham) and Lowbrook Academy (Maidenhead).
  • At Grinling Gibbons, 88% of disadvantaged pupils achieved this measure (cohort of 16). Almost 40 schools recorded over 50%, two of them with cohorts of 30+ – Nelson Mandela School (Birmingham) and Tollgate Primary (Newham).
  • Seven schools achieved an average point score of 34.0 or above (equivalent to an average Level 5A), the largest being Lowbrook Academy and Fox Primary School (Kensington and Chelsea).
  • In over 600 primary schools no pupils achieved this benchmark. In 64 schools, not one high attainer managed to do so (though, in a handful of these, up to 20% of middle attainers did so).

.

Reading

  • 44% of pupils in state-funded schools achieved Level 5 or above in reading (4 percentage points lower – rounded – than the 48% who did so in 2012).
  • Around 2,300 pupils achieved Level 6 – 592 boys and 1,670 girls.
  • 18% of boys and 25% of girls achieved Level 5 or higher, giving a gender gap of 7 percentage points. Compared with 2012, Level 5 attainment declined significantly more amongst girls (down 5 points) than boys (down 2 points), so the gender gap closed by 3 points.
  • 86% of high attainers achieved Level 5 or above.
  • 87% of those with KS1 reading at Level 3 or higher managed Level 5 – and a further 2% achieved Level 6.
  • The FSM gap at Level 5 and above is 21 percentage points (48% versus 27%), compared with 13 points at Level 4 and above.
  • The advantaged/disadvantaged gap at Level 5 and above is 21 points (51% versus 30%), compared with 11 points at Level 4 and above.
  • 89% of high attainers made the expected progress in reading (compared with 92% of middle attainers).
  • One primary school – Iford and Kingston C of E Primary School (Lewes) – recorded 19% of its pupils achieving Level 6.
  • 18 primary schools recorded 100% achieving Level 5 or above in Reading – no pupils in any of those schools achieved Level 6.
  • About 12,700 schools had no pupils at Level 6 in reading.

.

Grammar, Punctuation and Spelling (GPS)

  • 47% of pupils in state-funded schools achieved Level 5 or above in GPS.
  • 2% (around 8,600 pupils) achieved Level 6, including 3,233 boys and 5,373 girls.
  • 7% of Chinese pupils achieved Level 6.
  • 42% of boys and 54% of girls achieved Level 5 or above, giving a gender gap of 12 percentage points.
  • 91% of high attainers achieved Level 5 or above.
  • The FSM gap at Level 5 and above is 20 percentage points (51% versus 31%), compared with 18 points at Level 4 and above.
  • The advantaged/disadvantaged gap at Level 5 and above is 19 points (53% versus 34%), compared with 17 points at Level 4 and above.
  • In two primary schools – St Joseph’s Catholic Primary (Southwark) and The Vineyard School (Richmond) – 38% of pupils achieved Level 6.
  • 20 schools had 100% of pupils at Level 5 or above.
  • About 10,200 schools posted zero Level 6 results.

.

Writing

  • 30% of pupils in state-funded schools achieved Level 5 or above in the writing teacher assessment.
  • 2% (over 8,400 pupils) achieved Level 6, including 2,861 boys and 5,549 girls.
  • 80% of those with Level 3 writing at KS1 achieved Level 5 and a further 9% achieved Level 6.
  • 76% of high attainers achieved Level 5.
  • The FSM gap at Level 5 and above is 19 percentage points (34% versus 15%), compared with 16 points at Level 4 and above.
  • The disadvantaged/non-disadvantaged gap at Level 5 and above is 18 points (36% versus 18%), compared with 13 points at Level 4 and above.
  • 94% of high attainers made the expected progress in writing (compared with 93% of middle attainers).
  • At Newton Farm Nursery Infant and Junior School (Harrow), 63% of pupils achieved Level 6.
  • Just 4 schools achieved 100% at Level 5 or above – Litton C of E Primary (Buxton), Newton Farm (Harrow), St Joseph’s Hurst Green (Clitheroe) and St Oswald’s C of E Primary (Chester).
  • 10,750 schools had no pupils at Level 6.

.

Maths

  • 41% of pupils in state-funded schools achieved Level 5 or above in maths (up 2 percentage points from 39% in 2012).
  • 7% (around 35,000 pupils) achieved Level 6 (up 3 percentage points – rounded – from 3% in 2012), including 21,388 boys and 13,749 girls.
  • 29% of Chinese pupils achieved Level 6 (19% did so in 2012).
  • 2% of FSM pupils achieved Level 6.
  • 43% of boys and 39% of girls achieved Level 5 or above (compared with 2012, girls improved by 2 percentage points whereas boys improved by only 1 point, so narrowing the gender gap slightly).
  • 64% of those with Level 3 or above in maths at KS1 made it to Level 5 at KS2 and a further 26% achieved Level 6.
  • 83% of high attainers achieved Level 5 or above.
  • 93% of high attainers made the expected progress in maths (compared with 90% of middle attainers).
  • The FSM gap at Level 5 and above is 20 percentage points (44% versus 24%), compared with 13 points at Level 4 and above.
  • The advantaged/disadvantaged gap at Level 5 and above is 21 points (47% versus 26%), compared with 11 points at Level 4 and above.
  • St Oswald’s CE Aided Primary (Chester) had 75% of its entry achieve Level 6 in maths, and two other schools exceeded 50% – St Joseph’s RC Primary Hurst Green (Clitheroe) and Haselor School (Alcester).
  • 17 schools had 100% of their entry at Level 5 or above.
  • In over 5,100 schools no pupils achieved Level 6.

GP

December 2013