A Primary Assessment Progress Report

.

This post tracks progress towards the introduction of the primary assessment and accountability reforms introduced by England’s Coalition Government.

It reviews developments since the Government’s consultation response was published, as well as the further action required to ensure full and timely implementation.

It considers the possibility of delay as a consequence of the May 2015 General Election and the potential impact of a new government with a different political complexion.

An introductory section outlines the timeline for reform. This is followed by seven thematic sections dealing with:

There are page jumps from each of the bullets above, should readers wish to refer to these specific sections.

Each section summarises briefly the changes and commitments set out in the consultation response (and in the original consultation document where these appear not to have been superseded).

Each then reviews in more detail the progress made to date, itemising the tasks that remain outstanding.

I have included deadlines for all outstanding tasks. Where these are unknown I have made a ‘best guess’ (indicated by a question mark after the date).

I have done my best to steer a consistent path through the variety of material associated with these reforms, pointing out apparent conflicts between sources wherever these exist.

A final section considers progress across the reform programme as a whole – and how much remains to be done.

It discusses the likely impact of Election Purdah and the prospects for changes in direction consequent upon the outcome of the Election.

I have devoted previous posts to ‘Analysis of the Primary Assessment and Accountability Consultation Document’ (July 2013) and to the response in ‘Unpacking the Primary Assessment and Accountability Reforms’ (April 2014) so there is inevitably some repetition here, for which I apologise.

This is a long and complex post, even by my standards. I have tried to construct the big picture from a variety of different sources, to itemise all the jigsaw pieces already in place and all those that are still missing.

If you spot any errors or omissions, do let me know and I will do my best to correct them.

.

[Postscript: Please note that I have added several further postscripts to this document since the original date of publication. If you are revisiting, do pause at the new emboldened paragraphs below.]

Timeline for Reform

The consultation document ‘Primary assessment and accountability under the new national curriculum’ was published on 7 July 2013.

It contained a commitment to publish a response in ‘autumn 2013’, but ‘Reforming assessment and accountability for primary schools’ did not appear until March 2014.

The implementation timetable has to be inferred from a variety of sources but seems to be as shown in the table below. (I have set aside interim milestones until the thematic sections below.)

Month/year – Action
Sept 2014 – Schools no longer expected to use levels for non-statutory assessment.
May 2015 – End of KS1 and KS2 national curriculum tests and statutory teacher assessment reported through levels for the final time.
Summer term 2015 – Final 2016 KS1 and KS2 test frameworks, sample materials and mark schemes published. Guidance published on reporting of test results.
Sept 2015 – Schools can use approved reception baseline assessments (or a KS1 baseline).
Sept/Autumn term 2015 – New performance descriptors for statutory teacher assessment published.
Dec 2015 – Primary Performance Tables use levels for the final time.
May 2016 – New KS1 and KS2 tests introduced, reported through new attainment and progress measures.
June 2016 – Statutory teacher assessment reported through new performance descriptors.
Sept 2016 – Reception baseline assessment the only baseline option for all-through primaries. Schools must publish new headline measures on their websites. New floor standards come into effect (with progress element still derived from KS1 baseline).
Dec 2016 – New attainment and performance measures published in Primary Performance Tables.

The General Election takes place on 7 May 2015, but pre-Election Purdah will commence on 30 March, almost exactly a year on from publication of the consultation response.

At the time of writing, some 40 weeks have elapsed since the response was published – and there are some 10 weeks before Purdah descends.

Assuming that the next Government is formed within a week of the Election (which might be optimistic), there is a second working period of roughly 10 weeks between that and the end of the AY 2014/15 summer term.

The convention is that all significant assessment and accountability reforms are notified to schools a full academic year before implementation, so allowing them sufficient time to plan for implementation.

A full year’s lead time is no longer sacrosanct (and has already been set aside in some instances below) but any shorter notification period may have significant implications for teacher workload – something that the Government is committed to tackling.

.

[Postscript: On 6 February the Government published its response to the Workload Challenge, which contained a commitment to introduce, from ‘Spring 2015’, a:

‘DfE Protocol setting out minimum lead-in times for significant curriculum, qualifications and accountability changes…’

Elsewhere the text says that the minimum lead time will be a year, thus reinforcing the convention described above.

The term ‘significant’ allows some wriggle room, but one might reasonably expect it to be applied to some of the outstanding actions below.

The Protocol was published on 23 March. The first numbered paragraph implicitly defines a significant change as one having ‘a significant workload impact on schools’, though what constitutes significance (and who determines it) is left unanswered.

There is provision for override ‘in cases where change is urgently required’ but criteria for introducing an override are not supplied.]

.

.

[Postscript: We now know that a minimum lead time will not be applied to the introduction of new performance descriptors for statutory teacher assessment (see below). The original timescale did not provide one, and it has not been adjusted in the light of consultation.]

.

Announcements made during the long summer holiday are much disliked by schools, so the end of summer term 2015 becomes the de facto target for any reforms requiring implementation from September 2016.

One might therefore conclude that:

  • We are about two-thirds of the way through the main implementation period.
  • There is a period of some 100 working days in which to complete the reforms expected to be notified to schools before the end of the AY2014/15 summer term. This is divided into two windows of some 50 working days on either side of Purdah.
  • There is some scope to extend more deadlines into the summer break and autumn 2015, but the costs of doing so – including loss of professional goodwill – might outweigh the benefits.

Purdah will act as a brake on progress across the piece. It will delay announcements that might otherwise have been made in April and early May, such as those related to new tests scheduled for May 2016.

The implications of Purdah are discussed further in the final section of this post.

.

Reception Baseline Assessment

Consultation response

A new Reception Baseline will be introduced from September 2015. This will be undertaken by children within their first few weeks of school (so not necessarily during the first half of the autumn term).

Teachers will be able to select from a range of assessments ‘but most are likely to be administered by the reception teaching staff’.  Assessments will be ‘short’ and ‘sit within teachers’ broader assessments of children’s development’.

They will be:

‘…strong predictors of key stage 1 and key stage 2 attainment whilst reflecting the age and abilities of children in reception’

Schools that use an approved baseline assessment ‘in September 2015’ (and presumably later during the 2015/16 academic year) will have their progress measured in 2022 against that or a KS1 baseline, whichever gives the best result.

However, only the reception baseline will be available from September 2016 and, from this point, the Early Years Foundation Stage (EYFS) profile will no longer be compulsory.

The reception baseline will not be compulsory either, since:

‘Schools that choose not to use an approved baseline assessment from 2016 will be judged on an attainment floor standard alone.’

But, since the attainment floor standard is so demanding (see below), this apparent choice may prove illusory for most schools.

Further work includes:

  • Engaging experts to develop criteria for the baselines.
  • A study in autumn 2014 of schools that already use such assessments, to inform decisions on moderation and the reporting of results to parents.
  • Communicating those decisions about moderation and reporting results – to Ofsted as well as to parents – ensuring they are ‘contextualised by teachers’ broader assessments’.
  • Publishing a list of assessments that meet the prescribed criteria.

.

Developments to date

Baseline criteria were published by the STA in May 2014.

The purpose of the assessments is described thus:

‘…to support the accountability framework and help assess school effectiveness by providing a score for each child at the start of reception which reflects their attainment against a pre-determined content domain and which will be used as the basis for an accountability measure of the relative progress of a cohort of children through primary school.’

This emphasis on the relevance of the baseline to floor targets is in marked contrast with the emphasis on reporting progress to parents in the original consultation document.

Towards the end of the document there is a request for ‘supporting information in addition to the criteria’:

‘What guidance will suppliers provide to schools in order to enable them to interpret the results and report them to parents in a contextualised way, for example alongside teacher observation?’

This seems to refer to the immediate reporting of baseline outcomes rather than of subsequent progress measures. Suitability for this purpose does not appear within the criteria themselves.

Interestingly, the criteria specify that the content domain:

‘…must demonstrate a clear progression towards the key stage 1 national curriculum in English and mathematics’,

but there is no reference to progression to KS2, and nothing about assessments being ‘strong predictors’ of future attainment, whether at KS1 or KS2.

Have expectations been lowered, perhaps because of concerns about the predictive validity of the assessments currently available?

A research study was commissioned in June 2014 (so earlier than anticipated) with broader parameters than originally envisaged.

The Government awarded a 9-month contract to NFER worth £49.7K, to undertake surveys of teachers’, school leaders’ and parents’ views on baseline assessment.

The documentation reveals that CEM is also involved in a parallel quantitative study which will ‘simulate an accountability environment’ for a group of schools, to judge changes in their behaviour.

Both of these organisations are also in the running for concession contracts to deliver the assessments from September 2015 (see below).

The aims of the project are to identify:

  • The impact of the introduction of baseline assessments in an accountability context.
  • Challenges to the smooth introduction of baseline assessments as a means to constructing an accountability measure.
  • Potential needs for monitoring and moderation approaches.
  • What reporting mechanisms and formats stakeholders find most useful.

Objectives are set out for an accountability strand and a reporting strand respectively. The former refer explicitly to identification of ‘gaming’ and the exploration of ‘perverse incentives’.

It is not entirely clear from the latter whether researchers are focused solely on initial contextualised reporting of reception baseline outcomes, or are also exploring the subsequent reporting of progress.

The full objectives are reproduced below.

.

Reception baseline capture

.

The final ‘publishable’ report is to be delivered by March 2015. It will be touch and go whether this can be released before Purdah descends. Confirmation of policy decisions based on the research will likely be delayed until after the Election.

.

The process has begun to identify and publish a list of assessments that meet the criteria.

A tender appeared on Contracts Finder in September 2014 and has been updated several times subsequently, the most recent version appearing in early December.

The purpose is to award several concession contracts, giving holders the right to compete with each other to deliver baseline assessments.

Contracts were scheduled to be awarded on 26 January 2015, but there was no announcement. Each will last 19 months (to August 2016), with an option to extend for a further year. The total value of the contracts, including extensions, is calculated at £4.2m.

There is no limit to the number of concessions to be awarded, but providers must meet specified (and complex) school recruitment and delivery targets which essentially translate into a 10% sample of all eligible schools.

Under-recruiting providers can be included if fewer than four meet the 10% target, as long as they have recruited at least 1,000 eligible schools.

Moreover:

‘The minimum volume requirement may be waived if the number of schools choosing to administer the reception baseline is fewer than 8,887 [50% of the total number of schools with a reception class].’

Hence the number of suppliers in the market is likely to be limited to 10 or so: there will be some choice, but not too much.
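The recruitment and inclusion rules can be sketched as follows. This is my reading, not the contract wording: the total of 17,774 eligible schools is inferred from 8,887 being 50% of schools with a reception class, and the function below is an interpretation for illustration only.

```python
# Sketch of the concession eligibility rules as I read them: a provider
# qualifies outright by recruiting a 10% sample of eligible schools; an
# under-recruiting provider may still be included if fewer than four
# providers hit that target, provided it recruited at least 1,000 schools.
# The total of 17,774 is inferred from 8,887 being 50% of eligible schools.

ELIGIBLE_SCHOOLS = 17_774
TARGET = ELIGIBLE_SCHOOLS // 10   # roughly a 10% sample: 1,777 schools


def included(recruited: int, providers_meeting_target: int) -> bool:
    """Would a provider with this recruitment level hold a concession?"""
    if recruited >= TARGET:
        return True
    # Fallback for under-recruiters when the market is thin.
    return providers_meeting_target < 4 and recruited >= 1_000


# At most about ten providers can each recruit a disjoint 10% sample,
# which is why the field is likely to be limited to 10 or so.
```

On this reading, the fallback clause only matters if the 10% target proves hard to hit – which the 30 April deadline falling inside Purdah makes rather more likely.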

My online researches unearthed four obvious candidates:

And suggestions that this might constitute the entire field.

.

.

The initial deadline for recruiting the target number of schools is 30 April 2015, slap-bang in the middle of Purdah. This may prove problematic.

.

[Postscript: The award of six concession contracts was quietly confirmed on Wednesday 4 February, via new guidance on DfE’s website. The two contractors missing from the list above are Early Excellence and Hodder Education.

The guidance confirms that schools must sign up with their preferred supplier. They can do so after the initial deadline of 30 April but, on 3 June, schools will be told if they have chosen a provider that has been suspended for failing to recruit sufficient schools.  They will then need to choose an alternative provider.

It adds that, in AY2015/16, LA-maintained schools, academies and free schools will be reimbursed for the ‘basic cost’ of approved reception baselines. Thereafter, school budgets will include the necessary funding.

In the event, the Government has barely contributed to publicity for the assessment, leaving it to suppliers to make the running. The initial low-key approach (including links to the contractors’ home pages rather than to details of their baseline offers) has been maintained.

The only addition to the guidance has been the inclusion, from 20 March, of the criteria used to evaluate the original bids. This seems unlikely to help schools select their preferred solution since, by definition, all the successful bids must have satisfied these criteria!

Purdah will now prevent any further Government publicity.]

.

It seems likely that the decision to allow a range of baseline assessments – as opposed to a single national measure – will create significant comparability issues.

One of the ‘clarification questions’ posed by potential suppliers is:

‘We can find no reference to providing a comparability score between provider assessments. Therefore, can we assume that each battery of assessments will be independent, stand-alone and with no need to cross reference to other suppliers?’

The answer given is:

‘The assumption is correct at this stage. However, STA will be conducting a comparability study with successful suppliers in September 2015 to determine whether concordance tables can be constructed between assessments.’

This implies that progress measures will need to be calculated separately for users of each baseline assessment – and that these will be comparable only through additional ‘concordance tables’, should these prove feasible.

There are associated administrative and workload issues for schools, particularly those with high mobility rates, which may find themselves needing to engage with several different baseline assessment products.
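To illustrate why a choice of baselines frustrates a single national progress measure, consider a minimal sketch. The supplier scales, scores and percentile-matching method below are all invented for illustration; a real concordance study would be far more sophisticated.

```python
# Illustrative sketch: two suppliers report baseline scores on different
# scales, so raw scores cannot be compared directly. One crude form of
# concordance maps each child's score to a percentile rank within their
# own supplier's national distribution. All figures here are invented.


def percentile_rank(score, distribution):
    """Fraction of the supplier's national distribution at or below this score."""
    at_or_below = sum(1 for s in distribution if s <= score)
    return at_or_below / len(distribution)


supplier_a_scores = [10, 20, 30, 40, 50, 60, 70, 80, 90, 100]  # 0-100 scale
supplier_b_scores = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]            # 0-10 scale

# A child scoring 60 with supplier A and one scoring 6 with supplier B
# sit at the same percentile of their respective distributions, so a
# concordance table could treat them as equivalent starting points.
assert percentile_rank(60, supplier_a_scores) == percentile_rank(6, supplier_b_scores)
```

Until such tables exist, progress can only be measured within each supplier’s cohort – which is precisely the limitation the STA’s planned September 2015 comparability study is meant to probe.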

One answer to a supplier’s question reveals that:

‘As currently, children will be included in performance measures for the school in which they take their final assessment (i.e. key stage 2 tests) regardless of which school they were at for the input measure (i.e. reception baseline on key stage 1). We are currently reviewing how long a child needs to have attended a school in order for their progress outcome to be included in the measure.’

The issue of comparability also raises questions about the aggregation of progress measures for floor target purposes. Will targets based on several different baseline assessments be comparable with those based on only one? Will schools with high mobility rates be disadvantaged?

Schools will pay for the assessments. The supporting documentation says that:

‘The amount of funding that schools will be provided with is still to be determined. This will not be determined until after bids have been submitted to avoid accusations of price fixing.’

One of the answers to a clarification question says:

‘The funding will be available to schools from October 2015 to cover the reception baseline for the academic year 2015/16.’

Another says this funding is unlikely to be ringfenced.

There is some confusion over the payment mechanism. One answer says:

‘…the mechanism for this is still to be determined. In the longer term, money will be provided to schools through the Dedicated Schools Grant (DSG) to purchase the reception baseline. However, the Department is still considering options for the first year and may pay suppliers directly depending on the amount of data provided.’

But yet another is confident that:

‘Suppliers will be paid directly by schools. The Department will reimburse schools separately.’

The documentation also reveals that there has as yet been no decision on how to measure progress between the baseline and the end of KS2:

‘The Department is still considering how to measure this and is keen for suppliers to provide their thoughts.’

The ‘Statement of requirements’ once again foregrounds the use of the baseline for floor targets rather than reporting individual learners’ progress.

‘On 27 March 2014, the Department for Education (DfE) announced plans to introduce a new floor standard from September 2016. This will be based on the progress made by pupils from reception to the end of primary school.  The DfE will use a new Reception Baseline Assessment to capture the starting point from which the progress that schools make with their pupils will be measured.  The content of the Reception Baseline will reflect the knowledge and understanding of children at the start of reception, and will be clearly linked to the learning and development requirements of the Early Years Foundation Stage and key stage 1 national curriculum in English and mathematics.  The Reception Baseline will be administered within the first half term of a pupil’s entry to a reception class.’

In relation to reporting to parents, one of the answers to suppliers’ questions states:

‘Some parents will be aware of the reception baseline from the national media coverage of the policy announcement. We anticipate that awareness of the reception baseline will develop over time. As with other assessments carried out by a school, we would expect schools to share information with parents if asked, though there will be no requirement to report the outcome of the reception baseline to parents.’

So it appears that, regardless of the outcomes of the research above, initial short term reporting of reception baseline outcomes will be optional.

.

[Postscript: This position is still more vigorously stated in a letter dated November 2014 from Ministers to a primary group formed by two maths associations. It says (my emphasis):

‘Let me be clear that we do not intend the baseline assessment to be used to monitor the progress of individual children. You rightly point out that any assessment that was designed to be reliable at individual child level would need to take into account the different ages at which children start reception and be sufficiently detailed to account for the variation in performance one expects from young children day-to-day. Rather, the baseline assessment is about capturing the starting point for the cohort which can then be used to assess the progress of that cohort at the end of primary school,’

This distinction has not been made sufficiently explicit in material published elsewhere.]

.

The overall picture is of a process in which procurement is running in parallel with research and development work intended to help resolve several significant and outstanding issues. This is a consequence of the September 2015 deadline for introduction, which seems increasingly problematic.

Particularly so given that many professionals are yet to be convinced of the case for reception baseline assessment, expressing reservations on several fundamental grounds, extending well beyond the issues highlighted above.

A January 2015 Report from the Centre Forum – Progress matters in Primary too – defends the plan against its detractors, citing six key points of concern. Some of the counter-arguments summarised below are rather more convincing than others:

  • Validity: The contention that reception level assessments are accurate predictors of attainment at the end of KS2 is justified by reference to CEM’s PIPS assessment, which was judged in 2001 to give a correlation of 0.7. But of course KS2 tests were very different in those days.
  • Reliability: The notion that attainment can be reliably determined in reception is again justified with reference to PIPS data from 2001 (showing a 0.98 correlation on retesting). The authors argue that the potentially negative effects of test conditions on young children and the risks of bias should be ‘mitigated’ (but not eliminated) through the development and selection process.
  • Contextualisation: The risk of over-simplification through reporting a single numerical score, independent of factors such as age, needs to be set against the arguments in favour of a relatively simple and transparent methodology. Schools are free to add such context when communicating with parents.
  • Labelling: The argument that baseline outcomes will tend to undermine universally high expectations is countered by the view that assessment may actually challenge labelling attributable to other causes, and can in any case be managed in reporting to parents by providing additional contextual information.
  • Pupil mobility: Concern that the assessment will be unfair on schools with high levels of mobility is met by reference to planned guidance on ‘how long a pupil needs to have attended a school in order to be included in the progress measure’. However, the broader problems associated with a choice of assessments are acknowledged.
  • Gaming: The risk that schools will artificially depress baseline outcomes will be managed through effective moderation and monitoring.

The overall conclusion is that:

‘…the legitimate concerns raised by stakeholders around the reliability and fairness of a baseline assessment do not present fundamental impediments to implementing the progress measure. Overall, a well-designed assessment and appropriate moderation could address these concerns to the extent that a baseline assessment could provide a reasonable basis for constructing a progress measure.

That said, the Department for Education and baseline assessment providers need to address, and, where indicated, mitigate the concerns. However, in principle, there is nothing to prevent a well-designed baseline test being used to create a progress-based accountability measure.’

The report adds:

‘However, this argument still needs to be won and teachers’ concerns assuaged….

.. Since the majority of schools will be reliant on the progress measure under the new system, they need to be better informed about the validity, reliability and purpose of the baseline assessment. To win the support of school leaders and teachers, the Department for Education must release clear, defensible evidence that the baseline assessment is indeed valid, fair and reliable.’

.

[Postscript: On 25 March the STA tendered for a supplier to ‘determine appropriate models for assuring the national data from the reception baseline’. The notice continues:

‘Once models have been determined, STA will agree up to three approaches to be implemented by the supplier in small scale pilots during September/October 2015. The supplier will also be responsible for evaluating the approaches using evidence from the pilots with the aim of recommending an approach to be implemented from September 2016.’

The need for quality assurance is compounded by the fact that there are six different assessment models. The documentation makes clear that monitoring, moderation and other quality assurance methods will be considered.

The contract runs from 1 July 2015 to 31 January 2016 with the possibility of extension for a further 12 months. It will be let by 19 June.]

 .

Outstanding tasks

  • Publish list of contracts for approved baseline assessments (26 January 2015) COMPLETED
  • Explain funding arrangements for baseline assessments and how FY2015-16 funding will be distributed (January 2015?) COMPLETED
  • Publish research on baseline assessment (March/April 2015) 
  • Confirm monitoring and moderation arrangements (March/April 2015?) 
  • Deadline for contractors recruiting schools for initial baseline assessments (30 April 2015) 
  • Publish guidance on the reporting of baseline assessment results (May 2015?) 
  • Award quality assurance tender (June 2015)
  • Undertake comparability study with successful suppliers to determine whether concordance tables can be constructed (Autumn 2015) 
  • Determine funding required for AY2015/16 assessment and distribute to schools (or suppliers?) (October 2015?)
  • Pilot quality assurance models (October 2015)

KS1 and KS2 tests

.

Consultation response

The new tests will comprise:

  • At KS1 – externally set and internally marked tests of maths and reading, plus an externally set test of grammar, punctuation and spelling (GPS). It is unclear from the text whether the GPS test will be externally marked.
  • At KS2 – externally set and externally marked tests of maths, reading and GPS, plus a sampling test in science.

Outcomes of both KS1 and KS2 tests (other than the science sampling test) will be expressed as scaled scores. A footnote makes it clear that, in both cases, a score of ‘100 will represent the new expected standard for that stage’.

The consultation document says of the scaled scores:

‘Because it is not possible to create tests of precisely the same difficulty every year, the number of marks needed to meet the secondary readiness standard will fluctuate slightly from one year to another. To ensure that results are comparable over time, we propose to convert raw test marks into a scaled score, where the secondary readiness standard will remain the same from year to year. Scaled scores are used in all international surveys and ensure that test outcomes are comparable over time.’

It adds that the Standards and Testing Agency (STA) will develop the scale.

Otherwise very little detail is provided about next steps. The consultation response is silent on the issue. The original consultation document says only that:

‘The Standards and Testing Agency will develop new national curriculum tests, to reflect the new national curriculum programmes of study.’

Adding, in relation to the science sampling test:

‘We will continue with national sample tests in science, designed to monitor national standards over time. A nationally-representative sample of pupils will sit a range of tests, designed to produce detailed information on the cohort’s performance across the whole science curriculum. The design of the tests will mean that results cannot be used to hold individual schools or pupils accountable.’

.

Developments to date

On 31 March 2014, the STA published draft test frameworks for the seven KS1 and KS2 tests to be introduced from 2016:

  • KS1 GPS: a short written task (20 mins); short answer questions (20 mins) and a spelling task (15 mins)
  • KS1 reading: two reading tests, one with texts and questions together, the other with a separate answer booklet (2 x 20 mins)
  • KS1 maths: an arithmetic test (15 mins) and a test of fluency, problem-solving and reasoning (35 mins)
  • KS2 GPS: a grammar and punctuation test (45 mins) and a spelling task (15 mins)
  • KS2 reading: a single test (60 mins)
  • KS2 maths: an arithmetic test (30 mins) and two tests of fluency, problem-solving and reasoning (2 x 40 mins)
  • KS2 science (sampling): tests in physics, chemistry and biology contexts (3 x 25 mins).

Each test will be designed for the full range of prior attainment and questions will typically be posed in order of difficulty.

Each framework explains that all eligible children at state-funded schools will be required to take the tests, but some learners will be exempt.

For further details of which learners will be exempted, readers are referred to the current Assessment and Reporting Arrangements (ARA) booklets.

According to these, the KS1 tests should be taken by all learners working at level 1 or above and the KS2 tests by all learners working at level 3 and above. Teacher assessment data must be submitted for pupils working below the level of the tests.

But of course levels will no longer exist – and we have no equivalent in the form of scaled scores – so the draft frameworks do not define clearly the lower parameter of the range of prior attainment the tests are intended to accommodate.

It will not be straightforward to design workable tests for such broad spans of prior attainment.

Each framework has a common section on the derivation of scaled scores:

‘The raw score on the test…will be converted into a scaled score. Translating raw scores into scaled scores ensures performance can be reported on a consistent scale for all children. Scaled scores retain the same meaning from one year to the next. Therefore, a particular scaled score reflects the same level of attainment in one year as in the previous year, having been adjusted for any differences in difficulty of the test.

Additionally, each child will receive an overall result indicating whether or not he or she has achieved the required standard on the test. A standard-setting exercise will be conducted on the first live test in 2016 in order to determine the scaled score needed for a child to be considered to have met the standard. This process will be facilitated by the performance descriptor… which defines the performance level required to meet the standard. In subsequent years, the standard will be maintained using appropriate statistical methods to translate raw scores on a new test into scaled scores with an additional judgemental exercise at the expected standard. The scaled score required to achieve the expected level on the test will always remain the same.

The exact scale for the scaled scores will be determined following further analysis of trialling data. This will include a full review of the reporting of confidence intervals for scaled scores.’
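The mechanics described in that passage can be illustrated with a minimal sketch. The scale endpoints (80 to 120) and the raw-mark thresholds below are invented for illustration; only the fixed expected-standard score of 100 is taken from the frameworks.

```python
# Illustrative sketch of converting raw marks to scaled scores. The scale
# range (80-120) and raw-mark thresholds are invented; only the fixed
# expected-standard score of 100 comes from the published frameworks.


def make_scaled_score_table(max_raw, standard_raw, scale_min=80, scale_max=120):
    """Map each raw mark 0..max_raw to a scaled score, pinning the
    standard-setting threshold (standard_raw) to a scaled score of 100."""
    table = {}
    for raw in range(max_raw + 1):
        if raw <= standard_raw:
            scaled = scale_min + (100 - scale_min) * raw / standard_raw
        else:
            scaled = 100 + (scale_max - 100) * (raw - standard_raw) / (max_raw - standard_raw)
        table[raw] = round(scaled)
    return table


# In a harder year the standard-setting exercise places the threshold at a
# lower raw mark, so fewer marks are needed for the same scaled score of 100.
easy_year = make_scaled_score_table(max_raw=50, standard_raw=30)
hard_year = make_scaled_score_table(max_raw=50, standard_raw=26)
```

The point of the construction is visible in the example: the raw-mark threshold fluctuates from year to year with test difficulty, while the scaled score representing the expected standard stays fixed at 100.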

In July 2014 STA also published sample questions, mark schemes and associated commentaries for each test.

.

Outstanding tasks

I have been unable to trace any details of the timetable for test development and trialling.

As far as I can establish, STA has not published an equivalent to QCDA’s ‘Test development, level setting and maintaining standards’ (March 2010) which describes in some detail the different stages of the test development process.

This old QCA web-page describes a 22-month cycle, from the initial stages of test development to the administration of the tests.

This aligns reasonably well with the 25-month period between publication of the draft test frameworks on 31 March 2014 and the administration of the tests in early May 2016.

Applying the same timetable to the 2016 tests – using publication of the draft frameworks as the starting point – suggests that:

  • The first pre-test should have been completed by November 2014
  • The second pre-test should take place by February 2015 
  • Mark schemes and tests should be finalised by July 2015

STA commits to publishing the final test frameworks and a full set of sample tests and mark schemes for each of the national curriculum tests at key stages 1 and 2 ‘during the 2015 summer term’.

Given Purdah, these seem most likely to appear towards the end of the summer term rather than a full year ahead of the tests.

In relation to the test frameworks, STA says:

‘We may make small changes as a result of this work; however, we do not expect the main elements of the frameworks to change.’

They will also produce, to the same deadline, guidance on how the results of national curriculum tests will be reported, including an explanation of scaled scores.

So we have three further outstanding tasks:

  • Publishing the final test frameworks (summer term 2015) 
  • Finalising the scale to be used for the tests (summer term 2015) 
  • Publishing guidance explaining the use and reporting of scaled scores (summer term 2015)

.

[Postscript: Since publishing this post, I have found on Contracts Finder various STA contracts, as follows:

How these square with the timetable above is, as yet, unclear. If the test frameworks cannot be finalised until autumn 2015, the Workload Challenge Protocol may well bite here too.]

.

Statutory teacher assessment

.

Consultation response

The response confirms statutory teacher assessment of:

  • KS1 maths, reading, writing, speaking and listening and science
  • KS2 maths, reading, writing and science.

There are to be performance descriptors for each statutory teacher assessment:

  • a single descriptor for KS1 science and KS2 science, reading and maths
  • several descriptors for KS1 maths, reading, writing and speaking and listening, and also for KS2 writing.

There is a commitment to improve KS1 moderation, given concerns expressed by Ofsted and the NAHT Commission.

In respect of low attaining pupils the response says:

‘All pupils who are not able to access the relevant end of key stage test will continue to have their attainment assessed by teachers. We will retain P-scales for reporting teachers’ judgements. The content of the P-scales will remain unchanged. Where pupils are working above the P-scales but below the level of the test, we will provide further information to enable teachers to assess attainment at the end of the relevant key stage in the context of the new national curriculum.’

And there is to be further consideration of whether to move to external moderation of P-scale teacher assessment.

So, to summarise, the further work involves:

  • Developing new performance descriptors – to be drafted by an expert group. According to the response, the KS1 descriptors would be introduced in ‘autumn 2014’. No date is given for the KS2 descriptors.
  • Improving moderation of KS1 teacher assessment, working closely with schools and Ofsted.
  • Providing guidance to support teacher assessment of those working above the P-scales but below the level of the tests.
  • Deciding whether to move to external moderation of P-scale teacher assessment.

.

Developments to date

Updated statutory guidance on the P-Scale attainment targets for pupils with SEN was released in July 2014, but neither it nor the existing guidance on when to use the P-Scales relates them to the new scaled scores, or discusses the issue of moderation.

.

In September 2014, a guidance note ‘National curriculum and assessment from September 2014: Information for schools’ revised the timeline for the development of performance descriptors:

‘New performance descriptors will be published (in draft) in autumn 2014 which will inform statutory teacher assessment at the end of key stage 1 and 2 in summer 2016. Final versions will be published by September 2015.’

.

A consultation document on performance descriptors: ‘Performance descriptors for use in key stage 1 and 2 statutory teacher assessment for 2015 to 2016’ was published on 23 October 2014.

The descriptors were:

‘… drafted with experts, including teachers, representatives from Local Authorities, curriculum and subject experts. Also Ofsted and Ofqual have observed and supported the drafting process’

A November 2014 FoI response revealed the names of the experts involved and brief biographies were provided in the media.

A further FoI has been submitted requesting details of their remit but, at the time of writing, this has not been answered.

.

[Postscript: The FoI response setting out the remit was published on 5 February.]

.

The consultation document revealed for the first time the complex structure of the performance descriptor framework.

It prescribes four descriptors for KS1 reading, writing and maths but five for KS2 writing.

The singleton descriptors reflect ‘working at the national standard’.

Where four descriptors are required these are termed (from the top down): ‘mastery’, ‘national’, ‘working towards national’ and ‘below national’ standard.

In the case of KS2 writing ‘above national standard’ is sandwiched between ‘mastery’ and ‘national’.

.

[Tables: the performance descriptor structure]

The document explains how these different levels cross-reference to the assessment of learners exempted from the tests.

In the case of assessments with only a single descriptor, it becomes clear that a further distinction is needed:

‘In subjects with only one performance descriptor, all pupils not assessed against the P-scales will be marked in the same way – meeting, or not meeting, the ‘national standard’.’

So ‘not meeting the national standard’ should also be included in the table above. The relation between ‘not meeting’ and ‘below’ national standard is not explained.

But still further complexity is added since:

‘There will be some pupils who are not assessed against the P-scales (because they are working above P8 or because they do not have special educational needs), but who have not yet achieved the contents of the ‘below national standard’ performance descriptor (in subjects with several descriptors). In such cases, pupils will be given a code (which will be determined) to ensure that their attainment is still captured.’

This produces a hierarchy as follows (from the bottom up):

  • P Scales
  • In cases of assessments with several descriptors, an attainment code yet to be determined
  • In cases of assessments with single descriptors, an undeclared ‘not meeting the national standard’ descriptor
  • The single descriptor or four/five descriptors listed above.
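As a decision procedure, the hierarchy above might be sketched like this. This is my own illustrative reconstruction, and the label used for the yet-to-be-determined attainment code is entirely hypothetical:

```python
# Illustrative sketch of the reporting hierarchy described above,
# working from the bottom of the hierarchy upwards. The label for the
# yet-to-be-determined attainment code is hypothetical.

def reported_outcome(on_p_scales, single_descriptor_subject,
                     meets_lowest_descriptor, descriptor=None):
    """Return the outcome recorded for a pupil under the hierarchy
    set out in the consultation document."""
    if on_p_scales:
        return "P-scale judgement"
    if single_descriptor_subject:
        # Single-descriptor assessments: meeting, or not meeting,
        # the national standard.
        return descriptor if meets_lowest_descriptor \
            else "not meeting the national standard"
    if not meets_lowest_descriptor:
        # Above P8 (or without SEN) but below 'below national standard':
        # recorded via an attainment code yet to be determined.
        return "attainment code (to be determined)"
    return descriptor  # one of the four/five descriptors listed above

print(reported_outcome(False, True, True, "national standard"))
```

Even this simplified sketch needs four branches, which illustrates how much more intricate the proposed structure is than a single scale of levels.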

However, the document says:

‘The performance descriptors do not include any aspects of performance from the programme of study for the following key stage. Any pupils considered to have attained the ‘Mastery standard’ are expected to explore the curriculum in greater depth and build on the breadth of their knowledge and skills within that key stage.’

This places an inappropriate brake on the progress of the highest attainers because the assessment ceiling is pitched too low to accommodate them.

It is acknowledging that some high attainers will be performing above the level of the highest descriptors but, regardless of whether or not they move into the programme for the next key stage, there is no mechanism to record their performance.

This raises the further question whether the mastery standard is pitched at the equivalent of level 6, or below it. It will be interesting to see whether this is addressed in the consultation response.

The consultation document says that the draft descriptors will be trialled during summer term 2015 in a representative sample of schools.

These trials and the consultation feedback will together inform the development of the final descriptors, but also:

  • ‘statutory arrangements for teacher assessment using the performance descriptors;
  • final guidance for schools (and those responsible for external moderation arrangements) on how the performance descriptors should be used;
  • an updated national model for the external moderation of teacher assessment; and
  • nationally developed exemplification of the work of pupils for each performance descriptor at the end of each key stage.’

Published comments on the draft descriptors have been almost entirely negative, which might suggest that the response could be delayed. The consultation document said it should appear ‘around 26 February 2015’.

According to the document, the final descriptors will be published either ‘in September 2015’ or ‘in the autumn term 2015’, depending whether you rely on the section headed ‘Purpose’ or the one called ‘Next Steps’. The first option would allow them to appear as late as December 2015.

A recent newspaper report suggested that the negative reception had resulted in an ‘amber/red’ assessment of primary assessment reform as a whole. The leaked commentary said that any decision to review the approach would increase the risk that the descriptors could not be finalised ‘by September as planned’.

However, the story concludes:

‘The DfE says: “We do not comment on leaks,” but there are indications from the department that the guidance will be finalised by September. Perhaps ministers chose, in the end, not to “review their approach”, despite the concerns.’

Hence it would appear that delay until after the beginning of AY2015/16 will not be countenanced.

Note that the descriptors are for use in academic year 2015/16, so even publication in September is problematic, since teachers will begin the year not knowing which descriptors to apply.

The consultation document refers only to descriptors for AY2015/16, which might imply that they will be further refined for subsequent years. Essentially therefore, the arrangements proposed here would be an imperfect interim solution.

.

[Postscript: On 26 February 2015 the Consultation Response was published – so on the date committed to in the consultation document.

As expected, it revealed significant opposition to the original proposals:

  • 74% of respondents were concerned about nomenclature
  • 76% considered that the descriptors were not spaced effectively across the range of pupils’ performance
  • 69% of respondents considered them neither clear nor easy to understand

The response acknowledges that the issues raised:

‘….amount to a request for greater simplicity, clarity and consistency to support teachers in applying performance descriptors and to help parents understand their meaning.’

But it goes on to allege that:

‘…there are some stakeholders who valued the levels system and would like performance descriptors to function in a similar way across the key stages, which is not their intention.’

Even so, although the Descriptors are not intended to inform formative assessment, respondents have raised concerns that they could be applied in this manner.

There is also the issue of comparability between formative and summative assessment measures, but this is not addressed.

The response does not quite acknowledge that opposition to the original proposals has sent it back to the drawing board, but:

‘As a result of some of the conflicting responses to the consultation, we will work with relevant experts to determine the most appropriate course of action to address the concerns raised and will inform schools of the agreed approach according to the timetable set out in the consultation document – i.e. by September 2015.’

The new assessment commission (see below) will have an as yet undefined role in this process:

‘In the meantime, and to help with this [ie determining the most appropriate course of action] the Government is establishing a Commission on Assessment Without Levels….’

Unfortunately, this role has not been clarified in the Commission’s Statement of Intended Outputs.

There is no reference to the trials in schools, which may or may not continue. A DfE Memorandum to the Education Select Committee on its 2014-15 Supplementary Estimates reveals that £0.3m has been reallocated to pay for them, but this is no guarantee that they will take place.

Implementation will not be delayed by a year, despite the commitment to allow a full year’s notice for significant reforms announced in the response to the Workload Challenge.

This part of the timetable is now seriously concertina’d and there must be serious doubt whether the timescale is feasible, especially if proper trialling is to be accommodated.]

.

Outstanding tasks 

  • Publish response to performance descriptors consultation document (26 February 2015) COMPLETED
  • Trial (revised?) draft performance descriptors (summer term 2015) 
  • Publish adjusted descriptors, revised in the light of consultation with experts and input from the commission (summer term 2015)
  • Experts and commission on assessment produce response to concerns raised and inform schools of outcomes (September 2015)
  • Confirm statutory arrangements for use of the performance descriptors (September/autumn term 2015) 
  • Publish final performance descriptors for AY2015/16 (September/autumn term 2015) 
  • Publish final guidance on the use of performance descriptors (September/autumn term 2015) 
  • Publish exemplification of each performance descriptor at each key stage (September/autumn term 2015)
  • Publish an updated model for the external moderation of teacher assessment (September/autumn term 2015?) 
  • Confirm plans for the moderation of KS1 teacher assessment and use of the P-scales (September/autumn term 2015?) 
  • Publish guidance on assessment of those working above the P-scales but below the level of the tests (September/autumn term 2015?) 
  • Decide whether performance descriptors require adjustment for AY2016/17 onwards (summer term 2016)

.

Schools’ internal assessment and tracking systems

.

Consultation response

The consultation document outlined some of the Government’s justification for the removal of national curriculum levels. The statement that:

‘Schools will be able to focus their teaching, assessment and reporting not on a set of opaque level descriptions, but on the essential knowledge that all pupils should learn’

may be somewhat called into question by the preceding discussion of performance descriptors.

The consultation document continues:

‘There will be a clear separation between ongoing, formative assessment (wholly owned by schools) and the statutory summative assessment which the government will prescribe to provide robust external accountability and national benchmarking. Ofsted will expect to see evidence of pupils’ progress, with inspections informed by the school’s chosen pupil tracking data.’

A subsequent section adds:

‘We will not prescribe a national system for schools’ ongoing assessment….

…. We expect schools to have a curriculum and assessment framework that meets a set of core principles…

 … Although schools will be free to devise their own curriculum and assessment system, we will provide examples of good practice which schools may wish to follow. We will work with professional associations, subject experts, education publishers and external test developers to signpost schools to a range of potential approaches.’

The consultation response does not cover this familiar territory again, saying only:

‘Since we launched the consultation, we have had conversations with our expert group on assessment about how to support schools to make best use of the new assessment freedoms. We have launched an Assessment Innovation Fund to enable assessment methods developed by schools and expert organisations to be scaled up into easy-to-use packages for other schools to use.’

Further work is therefore confined to the promulgation of core principles, the application of the Assessment Innovation Fund and possibly further work to ‘signpost schools to a range of potential approaches’.

.

Developments to date

The Assessment Innovation Fund was originally announced in December 2013.

A factsheet released at that time explains that many schools are developing new curriculum and assessment systems and that the Fund is intended to enable schools to share these.

Funding of up to £10K per school is made available to help up to 10 schools to prepare simple, easy-to-use packages that can be made freely available to other schools.

They must commit to:

‘…make their approach available on an open licence basis. This means that anyone who wishes to use the package (and any trade-marked name) must be granted a non-revocable, perpetual, royalty-free licence to do so with the right to sub-licence. The intellectual property rights to the system will remain with the school/group which devised it.’

Successful applicants were to be confirmed ‘in the week commencing 21 April 2014’.

In the event, nine successful applications were announced on 1 May, although one subsequently withdrew, apparently over the licensing terms.

The packages developed with this funding are stored – in a rather user-unfriendly fashion – on this TES Community Blog, along with other material supportive of the decision to dispense with levels.

Much other useful material has been published online which has not been collected into this repository and it is not clear to what extent it will develop beyond its present limits, since the most recent addition was in early November 2014.

A recent survey by Capita Sims (itself a provider of assessment support) conducted between June and September 2014, suggested that:

  • 25% of primary and secondary schools were unprepared for replacing levels and 53% had not yet finalised their plans for doing so.
  • 28% were planning to keep the existing system of levels, 21% intended to introduce a new system and 28% had not yet made a decision.
  • 50% of those introducing an alternative expected to do so by September 2015, while 23% intended to do so by September 2016.
  • Schools’ biggest concern (53% of respondents) is measuring progress and setting targets for learners.

Although the survey is four months old and has clear limitations (there were only 126 respondents) this would suggest further support may be necessary, ideally targeted towards the least confident schools.

.

In April 2014 the Government published a set of Assessment Principles, building on earlier material in the primary consultation document. These had been developed by an ‘independent expert panel’.

It is not entirely clear whether the principles apply solely to primary schools and to schools’ own assessment processes (as opposed to statutory assessment).

The introductory statement says:

‘The principles are designed to help all schools as they implement arrangements for assessing pupils’ progress against their school curriculum; Government will not impose a single system for ongoing assessment.

Schools will be expected to demonstrate (with evidence) their assessment of pupils’ progress, to keep parents informed, to enable governors to make judgements about the school’s effectiveness, and to inform Ofsted inspections.’

This might suggest they are not intended to cover statutory assessment and testing but are relevant to secondary schools.

There are nine principles in all, divided into three groups:

.

[Table: the nine assessment principles, in three groups]

.

The last of these seems particularly demanding.

.

In July 2014, Ofsted published guidance in the form of a ‘Note for inspectors: use of assessment information during inspections in 2014/15’. This says that:

‘In 2014/15, most schools, academies and free schools will have historic performance data expressed in national curriculum levels, except for those pupils in Year 1. Inspectors may find that schools are tracking attainment and progress using a mixture of measures for some, or all, year groups and subjects.

As now, inspectors will use a range of evidence to make judgements, including by looking at test results, pupils’ work and pupils’ own perceptions of their learning. Inspectors will not expect to see a particular assessment system in place and will recognise that schools are still working towards full implementation of their preferred approach.’

It goes on to itemise the ways in which inspectors will check that these systems are effective, without judging the systems themselves, but by gathering evidence of effective implementation through leadership and management, the accuracy of assessment, effectiveness in securing progress and quality of reporting to parents.

. 

In September 2014, NCTL published a research report ‘Beyond Levels: alternative assessment approaches developed by teaching schools’.

The report summarises the outcomes of small-scale research conducted in 34 teaching school alliances. It offers six rather prolix recommendations for schools and DfE to consider, which can be summarised as follows:

  • A culture shift is necessary in recognition of the new opportunities provided by the new national curriculum and the removal of levels.
  • Schools need access to conferences and seminars to help develop their assessment expertise.
  • Schools would benefit from access to peer reviewed commercial tracking systems relating to the new national curriculum. Clarification is needed about what data will be collected centrally.
  • Teaching school alliances and schools need financial support to further develop assessment practice, especially practical classroom tools, which should be made freely available online.
  • Financial support is needed for teachers to undertake postgraduate research and courses in this field.
  • It is essential to develop professional knowledge about emerging effective assessment practice.

I can find no government response to these recommendations and so have not addressed them in the list of outstanding tasks below.

.

[Postscript: On 25 February 2015, the Government announced the establishment of a ‘Commission on Assessment Without Levels’:

‘To help schools as they develop effective and valuable assessment schemes, and to help us to identify model approaches we are today announcing the formation of a commission on assessment without levels. This commission will continue the evidence-based approach to assessment which we have put in place, and will support primary and secondary schools with the transition to assessment without levels, identifying and sharing good practice in assessment.’

This appears to suggest belated recognition that the steps outlined above have provided schools with insufficient support for the transition to levels-free internal assessment. It is also a response to the possibility that Labour might revisit the decision to remove levels (see below).

The Consultation Response on Performance Descriptors released on 26 February (see above) says that the Commission will help to determine the most appropriate response to concerns raised about the Descriptors, while also suggesting that this task will not be devolved exclusively to them.

It adds that the Commission will:

‘…collate, quality assure, publish and share best practice in assessment with schools across the country…and will help to foster innovation and success in assessment practice more widely.’

The membership of the Commission was announced on 9 March.

.

The Commission met on 10 March and 23 March 2015 and will meet four more times – in April, May, June and July.

Its Terms of Reference have been published. The Statement of Intended Outputs mentioned in the consultation response on Performance Descriptors appeared, without any publicity, on 27 March.

It seemed that the Commission, together with the further consultation of experts, supplied a convenient mechanism for ‘parking’ some difficult issues until the other side of the Election.

However, neither the terms of reference nor the statement of outputs mentions the Performance Descriptors, so the Commission’s role in relation to them remains shrouded in mystery.

.

The authors of the Statement of Outputs feel it necessary to mention in passing that it:

‘…supports the decision to remove levels, but appreciates that the reasons for removing levels are not widely understood’.

It sets out a 10-point list of outputs comprising:

  • Another statement of the purposes of assessment and another set of principles to support schools in developing effective assessment systems, presumably different to those published by the previous expert group in April 2014. (It will be interesting to compare the two sets of principles, to establish whether Government policy on what constitutes effective assessment has changed over the last 12 months. It will also be worthwhile monitoring the gap between the principles and the views of Alison Peacock, one of the Commission’s members. She also sat on the expert panel that developed the original principles, some of which seem rather at odds with her own practice and preferences. Meanwhile, another member – Sam Freedman – has stated…)

.

  • An explanation of ‘how assessment without levels can better serve the needs of pupils and teachers’.
  • Guidance to ‘help schools create assessment policies which reflect the principles of effective assessment without levels’.
  • Clear information about ‘the legal and regulatory assessment requirements’, intended to clarify what they are now, how they will change and when. (The fact that the Commission concludes that such information is not already available is a searing indictment of the Government’s communications efforts to date.)
  • Clarification with Ofsted of ‘the role that assessment without levels will play in the inspection process’ so schools can demonstrate effectiveness without adding to teacher workload. (So again they must believe that Ofsted has not sufficiently clarified this already.)
  • Dissemination of good practice, obtained through engagement with ‘a wide group of stakeholders including schools, local authorities, teachers and teaching unions’. (This is tacit admission that the strategy described above is not working.)
  • Advice to the Government on how ITT and CPD can support assessment without levels and guidance to schools on the use of CPD for this purpose. (There is no reference to the resource implications of introducing additional training and development.)
  • Advice to the Government on ensuring ‘appropriate provision is made for pupils with SEN in the development of assessment policy’. (Their judgement that this is not yet accounted for is a worrying indictment of Government policy to date. They see this as not simply a lapse of communication but a lacuna in the policy-making process.)
  • ‘Careful consideration’ of commitments to tackling teacher workload – which they expect to alleviate by providing information, advice and support. (There is no hint that the introduction of Performance Descriptors will be delayed in line with the Workload Challenge.)
  • A final report before the end of the summer term, though it may publish some outputs sooner. (It will not be able to do so until the outcome of the Election is decided.)

Although there is some implicit criticism of Government policy and communications to date, the failure to make any reference to the Performance Descriptors is unlikely to instil confidence in the capacity of the Commission to provide the necessary challenge to the original proposals, or support to the profession in identifying a workable alternative.]

.

Outstanding tasks

  • Further dissemination of good practice through the existing mechanisms (ongoing) 
  • Further ‘work with professional associations, subject experts, education publishers and external test developers to signpost schools to a range of potential approaches.’ (ongoing)
  • Additional work (via the commission) to ‘collate, quality assure, publish and share’ best practice (Report by July 2015 with other outputs possible from May 2015)

.

Reporting to parents

.

Consultation response

The consultation document envisaged three outcomes for each test:

  • A scaled score
  • The learner’s position in the national cohort, expressed as a decile
  • The rate of progress from a baseline, derived by comparing a learner’s scaled score with that of other learners with the same level of prior attainment.

Deciles did not survive the consultation

The consultation response confirms that, for each test, parents will receive:

  • Their own child’s scaled score; and
  • The average scaled score for the school, ‘the local area’ (presumably the geographical area covered by the authority in which the school is situated) and the country as a whole.

They must also receive information about progress, but the response only discusses how this might be published on school websites and for the purposes of the floor targets (see sections below), rather than how it should be reported directly to parents.

We have addressed already the available information about the calculation of the scaled scores.

The original consultation document also outlined the broad methodology underpinning the progress measures:

‘In order to report pupils’ progress through the primary curriculum, the scaled score for each pupil at key stage 2 would be compared to the scores of other pupils with the same prior attainment. This will identify whether an individual made more or less progress than pupils with similar prior attainment…

…. Using this approach, a school might report pupils’ national curriculum test results to parents as follows:

In the end of key stage 2 reading test, Sally received a scaled score of 126 (the secondary ready standard is 100), placing her in the top 10% of pupils nationally. The average scaled score for pupils with the same prior attainment was 114, so she has made more progress in reading than pupils with a similar starting-point.’

.

Developments to date

On this web page, first published in April 2014, STA commits to publishing guidance during summer term 2015 on how the results of national curriculum tests will be reported, including an explanation of scaled scores.

In September 2014, a further guidance note ‘National curriculum and assessment from September 2014: Information for schools’ shed a little further light on the calculation of the progress measures:

‘Pupil progress will be determined in relation to the average progress made by pupils with the same baseline (i.e. the same KS1 average point score). For example, if a pupil had an APS of 19 at KS1, we will calculate the average scaled score in the KS2 tests for all pupils with an APS of 19 and see whether the pupil in question achieved a higher or lower scaled score than that average. The exact methodology of how this will be reported is still to be determined.’
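The calculation described in that note can be sketched as follows. This is my own illustrative reconstruction – the DfE had not finalised the methodology, and the pupil names and numbers (beyond Sally's figures from the original consultation example) are invented:

```python
# Illustrative reconstruction of the progress measure described above:
# group pupils by KS1 average point score (the baseline), compute each
# group's mean KS2 scaled score, and compare each pupil with the mean
# for their own group. All data below is invented, except that Sally's
# scaled score of 126 echoes the consultation document's example.
from collections import defaultdict

# Pupil records: (KS1 average point score, KS2 scaled score).
pupils = {
    "Sally": (19, 126),
    "Tom":   (19, 108),
    "Asha":  (19, 110),
    "Ben":   (15, 95),
}

# Mean KS2 scaled score for each KS1 APS group.
groups = defaultdict(list)
for aps, scaled in pupils.values():
    groups[aps].append(scaled)
group_mean = {aps: sum(scores) / len(scores) for aps, scores in groups.items()}

def progress(name):
    """A pupil's scaled score relative to the mean for pupils with
    the same prior attainment; positive means more progress than
    similar pupils, negative means less."""
    aps, scaled = pupils[name]
    return scaled - group_mean[aps]

print(progress("Sally"))  # positive: more progress than similar pupils
```

Note that the measure is purely relative: roughly half of each prior-attainment group must, by construction, fall below its group mean, whatever the absolute standard achieved.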

It is hard to get a clear sense of the full range of assessment information that parents will receive.

I have been unable to find any comprehensive description, which would suggest that this is being held back until the methodology for calculating the various measures is finalised.

The various sections above suggest that they will receive details of:

  • Reception baseline assessment outcomes.
  • Attainment in end of KS1 and end of KS2 tests, now expressed as scaled scores (or via teacher assessment, code or P-scales if working below the level of the tests). This will be supplemented by a series of average scaled scores for each test.
  • Progress between the baseline assessment (reception baseline from 2022; KS1 baseline beforehand) and end of KS2 tests, relative to learners with similar prior attainment at the baseline.
  • Attainment in statutory teacher assessments, normally expressed through performance descriptors, but with different arrangements for low attainers.
  • Attainment and progress between reception baseline, KS1 and KS2 tests, provided through schools’ own internal assessment and tracking systems.

We have seen that reporting mechanisms for the first and fourth are not yet finalised.

The fifth is now for schools to determine, taking account of Ofsted’s guidance and, if they wish, the Assessment Principles.

The scales necessary to report the second are not yet published, and these also form the basis of the remaining progress measures.

Parents will be receiving this information in a variety of different formats: scaled scores, average scaled scores, baseline scores, performance descriptors, progress scores and internal tracking measures.

Moreover, the performance descriptor scales will vary according to the assessment and internal tracking will vary from school to school.

This is certainly much more complex than the current unified system of reporting based on levels. Parents will require extensive support to understand what they are receiving.

Outstanding tasks

Previous sections have already referenced expected guidance on reporting baseline assessments, scaled scores and the use of performance descriptors (which presumably includes parental reporting).

One assumes that there will also need to be unified guidance on all aspects of reporting to parents, intended for parental consumption.

So, avoiding duplication of previous sections, the remaining outstanding tasks are to:

  • Finalise the methodology for reporting on pupil progress (summer term 2015) 
  • Provide comprehensive guidance to parents on all aspects of reporting (summer term 2015?)

Publication of outcomes

.

Consultation response

This section covers publication of material for public consumption, within and alongside the Primary School Performance Tables and on schools’ websites.

The initial consultation document has much to say about the first of these, while the consultation response barely mentions the Tables, focusing almost exclusively on school websites.

The original document suggests that the Performance Tables will include a variety of measures, including:

  • The percentage of pupils meeting the secondary readiness standard
  • The average scaled score
  • Where the school’s pupils fit in the national cohort
  • Pupils’ rate of progress
  • How many of the school’s pupils are among the highest-attaining nationally, through a measure showing the percentage of pupils attaining a high scaled score in each subject.
  • Teacher assessment outcomes in English, maths and science
  • Comparisons of each school’s performance with that of schools with similar intake
  • Data about the progress of those with very low prior attainment.

All the headline measures will be published separately for pupils in receipt of the pupil premium.

All measures will be published as three year rolling averages in addition to annual results.

There is also a commitment to publish a wide range of test and teacher assessment data, relating to both attainment and progress, through a Data Portal:

‘The department is currently procuring a new data portal or “data warehouse” to store the school performance data that we hold and provide access to it in the most flexible way. This will allow schools, governors and parents to find and analyse the data about schools in which they are most interested, for example focusing on the progress of low attainers in mathematics in different schools or the attainment of certain pupil groups.’

The consultation response acknowledges as a guiding principle:

‘…a broad range of information should be published to help parents and the wider public know how well schools are performing.’

The accountability system will:

‘…require schools to publish information on their websites so that parents can understand both the progress pupils make and the standards they achieve.’

Data on low attainers’ attainment and progress will not be published since the diversity of this group demands extensive contextual information.

But when it comes to Performance Tables, the consultation response says only:

‘As now, performance tables will present a wide range of information about primary school performance.’

By implication, they will include progress measures since the text adds:

‘In 2022 performance tables, we will judge schools on whichever is better: their progress from the reception baseline to key stage 2; or their progress from key stage 1 to key stage 2.’

However, schools will be required to publish a suite of indicators in standard format on their websites, including:

  • The average progress made by pupils in reading, writing and maths
  • The percentage of pupils achieving the expected standard at the end of KS2 in reading, writing and maths
  • The average score of pupils in their end of KS2 assessments and
  • The ‘percentage of pupils who achieve a high score in all areas’ at the end of KS2.

The precise form of the last of these indicators is not explained. This is not quite the same as the ‘measure showing the percentage of pupils attaining a high scaled score in each subject’ mentioned in the original consultation document.

Does ‘all areas’ mean reading, writing and maths? Must learners achieve a minimum score in each assessment, or a single aggregate score above a certain threshold?

In addition:

‘So that parents can make comparisons between schools, we would like to show each school’s position in the country on these measures and present these results in a manner that is clear for all audiences to understand. We will discuss how best to do so with stakeholders, to ensure that the presentation of the data is clear, fair and statistically robust.’

.

Developments to date

In June 2014, a consultation document was issued ‘Accountability: publishing headline performance measures on school and college websites’. This was accompanied by a press release.

The consultation document explains the intended relationship between the Performance Tables, Data Portal and material published on schools’ websites:

‘Performance tables will continue to provide information about individual schools and colleges and be the central source of school and college performance information.’

Moreover:

‘Future changes to the website, through the school and college performance data portal, will improve accessibility to a wide range of information, including the headline performance measures. It will enable interested parents, students, schools, colleges and researchers to interrogate educational data held by the Department for Education to best meet their requirements.’

But:

‘Nevertheless, the first place many parents and students look for information about a school or college is the institution’s own website’

Schools are already required to publish such information, but there is inconsistency in where and how it is presented. The document expresses the intention that consistent information should be placed ‘on the front page of every school and college website’.

The content proposed for primary schools’ websites covers the four headline measures set out in the consultation response.

A footnote says:

‘These measures will apply to all-through primary, junior and middle schools. Variants of these measures will apply for infant and first schools.’

But the variants are not set out.

There is no reference to the plan to show ‘each school’s position in the country on these measures’ as mentioned in the consultation response.

The consultation proposes a standard visual presentation which, for primary schools, looks like this:

.

[Image: proposed standard visual presentation of the headline measures for primary school websites]

.

The response to this consultation ‘Publishing performance measures on school and college websites’ appeared in December 2014 (the consultation document had said ‘Autumn 2014’).

The summary of responses says:

‘The majority of respondents to the consultation welcomed the proposals to present headline performance measures in a standard format. There was also strong backing for the proposed visual presentation of data to aid understanding of performance. However, many respondents suggested that without some sense of scale or spread to provide some context to the visual presentation, the data could be misleading. Others said that the language used alongside the charts should be clearer…’

‘…Whilst most respondents favoured a data application tool that would remove the burden of annually updating performance data on school and college websites, they also highlighted the difficulties of developing a data application that would be compatible with a wide range of school and college websites.’

It is clear that some respondents had questioned why school websites should not simply carry a link on their homepage to the School Performance Tables.

In the light of this reaction, further research will be undertaken to:

  • develop a clear and simple visual representation of the data, but with added contextual information.
  • establish how performance tables data can be presented ‘in a way that reaches more parents’.

The timeline suggests that this will result in ‘proposals for redevelopment of performance tables’ by May 2015, so we can no longer assume that the Tables will cover the list of material suggested in the original consultation document.

The timeline indicates that if initial user research concludes that a data application is required, that will be developed and tested between June and October 2015, for roll out between September 2016 and January 2017.

Schools will be informed by autumn 2015 whether they should carry a link to the Tables, download a data application or pursue a third option.

But, nevertheless:

‘All schools and colleges, including academies, free schools and university technical colleges, will be required to publish the new headline performance measures in a consistent, standard format on their websites from 2016.’

So, if an application is not introduced, it seems that schools will still have to publish the measures on their websites: they will not be able to rely solely on a link to the Performance Tables.

Middle schools will only be required to publish the primary measures. No mention is made of infant or first schools.

.

There is no further reference to the data portal, since this project was quietly shelved in September 2014, following unexplained delays in delivery.

.

.

There has been no subsequent explanation of the implications of this decision. Will the material intended for inclusion in the Portal be included in the Performance Tables, or published by another route, or will it no longer be published?

.

Finally, some limited information has emerged about accountability arrangements for infant schools.

This appears on a web page – New accountability arrangements for infant schools from 2016 – published in June 2014.

It explains that the reception baseline will permit the measurement of progress alongside attainment. The progress of infant school pupils will be published for the first time in the 2019 Performance Tables.

This might mean a further addition to the list of information reported to parents set out in the previous section.

There is also a passing reference to moderation:

‘To help increase confidence and consistency in our moderation of infant schools, we will be increasing the proportion of schools where KS1 assessments are moderated externally. From summer 2015, half of all infant schools will have their KS1 assessments externally moderated.’

But no further information is forthcoming about the nature of other headline measures and how they will be reported.

.

Outstanding tasks

  • Complete user research and publish proposals for redevelopment of Performance Tables (May 2015) 
  • Confirm what data will be published in the 2016 Performance Tables (summer term 2015?)
  • Confirm how material originally intended for inclusion in Data Portal will be published (summer term 2015?)
  • Confirm the format and publication route for data showing each school’s position in the country on the headline measures (summer term 2015?) 
  • Confirm headline performance measures for infant and first schools (summer term 2015?) 
  • If necessary, further develop and test a prototype data application for schools’ websites (October 2015) 
  • Inform schools whether a data application will be introduced (autumn 2015) 
  • Amend School Information Regulations to require publication of headline measures in standard format (April 2016) 
  • If proceeding, complete development and testing of a data application (May 2016) 
  • If proceeding, complete roll out of data application (February 2017)

.

Floor standards

.

Consultation response

Minimum expectations of schools will continue to be embodied in floor standards. Schools falling below the floor will attract ‘additional scrutiny through inspection’ and ‘intervention may be required’.

Although the new standard:

‘holds schools to account both on the progress they make and on how well their pupils achieve.’

In practice, schools need only satisfy one or the other.

An all-through primary school will be above the floor standards if:

  • Pupils make sufficient progress between the reception baseline and the end of KS2 in all of reading, writing and maths or
  • 85% or more of pupils meet the new expected standard at the end of KS2 (similar to Level 4b under the current system).

A junior or middle school will be above the floor standard if:

  • pupils make sufficient progress at key stage 2 from their starting point at key stage 1; or
  • 85% or more of pupils meet the new expected standard at the end of key stage 2

At this stage arrangements for measuring the progress of pupils in infant or first schools are still to be considered.

Since the reception baseline will be introduced in 2015, progress in all-through primary schools will continue to be measured from the end of KS1 until 2022.

This should mean that, prior to 2022, the standard would be achieved by ensuring that the progress made by pupils in a school – in reading, writing and maths – equals or exceeds the national average progress made by pupils with similar prior attainment at the end of KS1.

Exactly how individual progress will be aggregated to create a whole school measure is not yet clear. The original consultation document holds up the possibility that slightly below average progress will be acceptable:

‘…we expect the value-added score required to be above the floor to be between 98.5 and 99 (a value-added score of 100 represents average progress).’

The consultation response says the amount of progress required will be determined in 2016:

‘The proposed progress measure will be based on value-added in each of reading, writing and mathematics. Each pupil’s scaled scores in each area at key stage 2 will be compared with the scores of pupils who had the same results in their assessments at key stage 1.

For a school to be above the progress floor, pupils will have to make sufficient progress in all of reading, writing and mathematics. For 2016, we will set the precise extent of progress required once key stage 2 tests have been sat for the first time. Once pupils take a reception baseline, progress will continue to be measured using a similar value added methodology.’

In 2022 schools will be assessed against either the reception or KS1 baseline, whichever gives the best result. From 2023 only the reception baseline will be in play.

The attainment standard will be based on achievement of ‘a scaled score of 100 or more’ in each of the reading and maths tests and achievement, via teacher assessment, of the new expected standard in writing (presumably the middle of the five described above).

The attainment standard is significantly more demanding, in that the present requirement is for 65% of learners to meet the expected standard – and the standard itself will now be pitched higher, at the equivalent of Level 4B.
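The two routes over the floor standard described above can be sketched in code. This is a hedged illustration, not the DfE's methodology: the 98.5 value-added threshold is only the lower bound floated in the original consultation document (the actual figure will not be set until the 2016 tests have been sat), and the function and field names are invented.

```python
EXPECTED_STANDARD = 100    # scaled score representing the expected standard
ATTAINMENT_THRESHOLD = 0.85
VA_THRESHOLD = 98.5        # a value-added score of 100 represents average progress

def above_floor(value_added, pupil_scores):
    """value_added: dict mapping subject -> school value-added score.
    pupil_scores: list of dicts mapping subject -> pupil scaled score
    (with writing taken from teacher assessment)."""
    subjects = ("reading", "writing", "maths")
    # Route 1: sufficient progress in ALL of reading, writing and maths.
    progress_ok = all(value_added[s] >= VA_THRESHOLD for s in subjects)
    # Route 2: 85% or more of pupils meet the expected standard in all three.
    meeting = sum(all(p[s] >= EXPECTED_STANDARD for s in subjects)
                  for p in pupil_scores)
    attainment_ok = meeting / len(pupil_scores) >= ATTAINMENT_THRESHOLD
    return progress_ok or attainment_ok
```

So a school with value-added scores of, say, 99.1, 98.7 and 100.2 would clear the floor via the progress route even if well under 85% of its pupils met the expected standard, which is why the CentreForum report quoted below expects progress to be the dominant metric in practice.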

The original consultation document says:

‘Our modelling suggests that a progress measure set at this level, combined with the 85% threshold attainment measure, would result in a similar number of schools falling below the floor as at present. Over time we will consider whether schools should make at least average progress as part of floor standards.’

The consultation response does not confirm this judgement.

.

Developments

The only significant development since the publication of the consultation response is the detail provided on the June 2014 webpage New accountability arrangements for infant schools from 2016.

In addition to the points in the previous section, this also confirms that:

‘…there will not be a floor standard for infant schools’

But this statement has been called into question, since the table from the performance descriptors consultation, reproduced above, appears to suggest that KS1 teacher assessments in reading, writing and maths do contribute to a floor standard – whether for infant or all-through primary schools is unclear.

.

The aforementioned Centre Forum Report ‘Progress matters in Primary too’ (January 2015) also appears to call into question the results of the modelling reported in the initial consultation document.

It says:

‘…the likelihood is that, based on current performance, progress will be the measure used for the vast majority of schools, at least in the short to medium term. Even those schools which achieve the attainment floor target will only do so by ensuring at least average progress is made by their pupils. As a result, progress will in practice be the dominant accountability metric.’

It undertakes modelling based on 2013 attainment data – ie simulating the effect of the new standards had they been in place in 2013, using selected learning areas within the EYFSP as a proxy for the reception baseline – which suggests that just 10% of schools in 2013 would have met the new attainment floor.

It concludes that:

‘For the vast majority of schools, progress will be their only option for avoiding intervention when the reforms come into effect.’

Unfortunately though, it does not provide an estimate of the proportion of schools likely to achieve the progress floor standard, with either the current KS1 baseline or its proxy for a reception baseline.

Outstanding Tasks

  • Confirm the detailed methodology for deriving both the attainment and progress elements of the floor standards, in relation to both the new reception baseline and the interim KS1 baseline (summer 2015?)
  • Set the amount of progress required to achieve the progress element of the floor standards (summer 2016)
  • (In the consultation document) Consider whether schools should make at least average progress as part of floor standards and ‘move to three year rolling averages for floor standard measures’ (long term)

.

Overall progress, Purdah and General Election outcomes

Progress to date and actions outstanding

The lists of outstanding actions above record some 40 tasks necessary to the successful implementation of the primary assessment and accountability reforms.

If the ‘advance notice’ conventions are observed, roughly half of these require completion by the end of the summer term in July 2015, within the two windows of 50 working days on either side of Purdah.

These conventions have already been set aside in some cases, most obviously in respect of reception baseline assessment and the performance descriptors for statutory teacher assessment.

Unsurprisingly, the commentary above suggests that these two strands of the reform programme are the most complex and potentially the most problematic.

The sheer number of outstanding tasks and the limited time in which to complete them could pose problems.

It is important to remember that there are similar reforms in the secondary and post-16 sectors that need to be managed in parallel.

The leaked amber/red rating was attributed solely to the negative reaction to the draft performance descriptors, but it could also reflect a wider concern that all the necessary steps may not be completed in time to give schools the optimal period for planning and preparation.

Schools may be able to cope with shorter notice in a few instances, where the stakes are relatively low, but if too substantial a proportion of the overall reform programme is delayed into next academic year, they will find the cumulative impact much harder to manage.

In a worst case scenario, implementation of some elements might need to be delayed by a year, although the corollary would be an extended transition period for schools that would be less than ideal. It may also be difficult to disentangle the different strands given the degree of interdependency between them.

Given the proximity of a General Election, it may not be politic to confirm such delays before Purdah intervenes: the path of least resistance is probably to postpone any difficult decisions for consideration by the incoming government.

.

The implications of Purdah

As noted above, if the General Election result is clear-cut, Purdah will last some five-and-a-half weeks and will occur at a critical point in the implementation timetable.

The impact of Purdah should not be under-estimated.

From the point at which Parliament is dissolved on Monday 30 March, the Government must abstain from major policy decisions and announcements.

The Election is typically announced a few days before the dissolution of Parliament. This ‘wash up’ period between announcement and dissolution is used to complete essential unfinished business.

The Cabinet Office issues guidance on conduct during Purdah shortly before it begins.

The 2015 guidance has not yet been issued, so the 2010 guidance is the best source of information about what to expect.

.

[Postscript: 2015 Guidance was posted on 30 March 2015 and is substantively the same as the 2010 edition.]

.

Key points include:

  • ‘Decisions on matters of policy on which a new Government might be expected to want the opportunity to take a different view from the present Government should be postponed until after the Election, provided that such postponement would not be detrimental to the national interest or wasteful of public money.’
  • ‘Officials should not… be asked to devise new policies or arguments…’
  • ‘Departmental communications staff may…properly continue to discharge during the Election period their normal function only to the extent of providing factual explanation of current Government policy, statements and decisions.’
  • ‘There would normally be no objection to issuing routine factual publications, for example, health and safety advice but these will have to be decided on a case by case basis taking account of the subject matter and the intended audience.’
  • ‘Regular statistical releases and research reports (e.g. press notices, bulletins, publications or electronic releases) will continue to be issued and published on dates which have been pre-announced. Ad hoc statistical releases or research reports should be released only where a precise release date has been published prior to the Election period. Where a pre-announcement has specified that the information would be released during a specified period (e.g. a week, or longer time period), but did not specify a precise day, releases should not be published within the Election period.’
  • ‘Research: Fieldwork involving interviews with the public or sections of it will be postponed or abandoned although regular, continuous and on-going statistical surveys may continue.’
  • ‘Official websites…the release of new online services and publication of reworked content should not occur until after the General Election… Content may be updated for factual accuracy but no substantial revisions should be made and distributed.’
  • The general principles and conventions set out in this guidance apply to NDPBs and similar public bodies.

Assuming similar provisions in 2015, most if not all of the assessment and accountability work programme would grind to a halt.

To take an example, it is conceivable that those awarded baseline assessment contracts would be able to recruit schools after 30 March, but they will receive little or no help from the DfE during the Purdah period. Given that the recruitment deadline is 30 April, this may be expected to depress recruitment significantly.

.

The impact of different General Election outcomes

Forming a Government in the case of a Hung Parliament may also take some time, further delaying the process.

The six days taken in 2010 may not be a guide to what will happen in 2015.

The Cabinet Manual (2011) says:

‘Where an election does not result in an overall majority for a single party, the incumbent government remains in office unless and until the Prime Minister tenders his or her resignation and the Government’s resignation to the Sovereign. An incumbent government is entitled to wait until the new Parliament has met to see if it can command the confidence of the House of Commons, but is expected to resign if it becomes clear that it is unlikely to be able to command that confidence and there is a clear alternative…

…The nature of the government formed will be dependent on discussions between political parties and any resulting agreement. Where there is no overall majority, there are essentially three broad types of government that could be formed:

  • single-party, minority government, where the party may (although not necessarily) be supported by a series of ad hoc agreements based on common interests;
  • formal inter-party agreement, for example the Liberal–Labour pact from 1977 to 1978; or
  • formal coalition government, which generally consists of ministers from more than one political party, and typically commands a majority in the House of Commons’.

If one or more of the parties forming the next government has a different policy on assessment and accountability, this could result in pressure to amend or withdraw parts of the reform programme.

If a single party is involved, pre-Election contact with civil servants may have clarified its intentions, enabling work to resume as soon as the new government is in place but, if more than one party is involved, it may take longer to agree the preferred way forward.

Under a worst case scenario, planners might need to allow for Purdah and post-Election negotiations to consume eight weeks or longer.

The impact of the Election on the shape and scope of the primary assessment and accountability reforms will also depend on which party or parties enter government.

If the same Coalition partners are returned, one might expect uninterrupted implementation, unless the minority Lib Dems seek to negotiate different arrangements, which seems unlikely.

But if a different party or a differently constituted Coalition forms the Government, one might expect decisions to abandon or delay some aspects of the programme.

If Labour forms the Government, or is the major party in a Coalition, some unravelling will be necessary.

They are broadly committed to the status quo:

‘Yet when it comes to many of the technical day-to-day aspects of school leadership – child protection, curriculum reform, assessment and accountability – we believe that a period of stability could prove beneficial for raising pupil achievement. This may not be an exciting rallying cry, but it is crucial that the incoming government takes account of the classroom realities.’

Hunt has also declared:

‘Do not mistake me: I am a zealot for minimum standards, rigorous assessment and intelligent accountability.

But if we choose to focus upon exam results and league tables to the detriment of everything else, then we are simply not preparing our young people for the demands of the 21st century.’

And, thus far, Labour has made few specific commitments in this territory.

  • They support reception baseline assessment, but whether that extends to sustaining a market of providers is unknown. Might they be inclined to replace this with a single national assessment?
  • There is very little about floor targets – a Labour invention – although the Blunkett Review appears to suggest that Directors of School Standards will enjoy some discretion in respect of their enforcement.

Reading between the lines, it seems likely that they would delay some of the strands described above – and potentially simplify others.

.

Conclusion

The primary assessment reform programme is both extensive and highly complex, comprising several strands and many interdependencies.

Progress to date can best be described as halting.

There are still many steps to be taken and difficult issues to resolve, about half of which should be completed by the end of this academic year. Pre-Election Purdah will cut significantly into the time available.

More announcements may be delayed into the summer holidays or the following autumn term, but this reduces the planning and preparation time available to schools and has potentially significant workload implications.

Alternatively, implementation of some elements or strands may be delayed by a year, but this extends the transition period between old and new arrangements. Any such rationalisation seems likely to be delayed until after the Election and decisions will be influenced by its outcome.

.

[Postscript: The commitment in the Government’s Workload Challenge response to a one-year lead time, now encapsulated in the Protocol published on 23 March, has not resulted in any specific commitments to delay ahead of the descent of Purdah.

At the onset of Purdah on 30 March some 18 actions appear to be outstanding and requiring completion by the end of the summer term. This will be a tall order for a new Government, especially one of a different complexion.]

.

If Labour is the dominant party, they may be more inclined to simplify some strands, especially baseline assessment and statutory teacher assessment, while also providing much more intensive support for schools wrestling with the removal of levels.

Given the evidence set out above, ‘amber/red’ seems an appropriate rating for the programme as a whole.

It seems increasingly likely that some significant adjustments will be essential, regardless of the Election outcome.

.

GP

January 2015


A Closer Look at Level 6

This post provides a data-driven analysis of Level 6 (L6) performance at Key Stage 2, so as to:


  • Marshall the published information and provide a commentary that properly reflects this bigger picture;
  • Establish which data is not yet published but ought to be in the public domain;
  • Provide a baseline against which to measure L6 performance in the 2014 SATs; and
  • Initiate discussion about the likely impact of new tests for the full attainment span on the assessment and performance of the highest attainers, both before and after those tests are introduced in 2016.

Following an initial section highlighting key performance data across the three L6 tests – reading; grammar, punctuation and spelling (GPS); and maths – the post undertakes a more detailed examination of L6 achievement in English, maths and science, taking in both teacher assessment and test outcomes.

It concludes with a summary of key findings reflecting the four purposes above.

Those who prefer not to read the substantive text can jump straight to the summary from here.

I apologise in advance for any transcription errors and statistical shortcomings in the analysis below.

Background

Relationship with previous posts

This discussion picks up themes explored in several previous posts.

In May 2013 I reviewed an Investigation of Level 6 Key Stage 2 Tests commissioned by the Department for Education and published in February that year.

My overall assessment of that report?

‘A curate’s egg really. Positive and useful in a small way, not least in reminding us that primary-secondary transition for gifted learners remains problematic, but also a missed opportunity to flag up some other critical issues – and of course heavily overshadowed by the primary assessment consultation on the immediate horizon.’

The performance of the highest primary attainers also featured strongly in an analysis of the outcomes of NAHT’s Commission on Assessment (February 2014) and this parallel piece on the response to the consultation on primary assessment and accountability (April 2014).

The former offered the Commission two particularly pertinent recommendations, namely that it should:

‘shift from its narrow and ‘mildly accelerative’ view of high attainment to accommodate a richer concept that combines enrichment (breadth), extension (depth) and acceleration (faster pace) according to learners’ individual needs.’

Additionally it should:

‘incorporate a fourth ‘far exceeded’ assessment judgement, since the ‘exceeded’ judgement covers too wide a span of attainment.’

The latter discussed plans to discontinue L6 tests, replacing them from 2016 with single tests covering the full attainment span at the end of KS2, from the top of the P-scales to a level the initial consultation document described as ‘at least of the standard of’ the current L6.

It opined:

‘The task of designing an effective test for all levels of prior attainment at the end of key stage 2 is…fraught with difficulty…I have grave difficulty in understanding how such assessments can be optimal for high attainers and fear that this is bad assessment practice.’

Aspects of L6 performance also featured in a relatively brief review of High Attainment in 2013 Primary School Performance Tables (December 2013). This post expands significantly on the relevant data included in that one.

The new material is drawn from three principal sources:

The recent history of L6 tests

Level 6 tests have a rather complex history. The footnotes to SFR 51/2013 simplify this considerably, noting that:

  • L6 tests were initially available from 1995 to 2002
  • In 2010 there was a L6 test for mathematics only
  • Since 2012 there have been tests of reading and mathematics
  • The GPS test was introduced in 2013.

In fact, the 2010 maths test was the culmination of an earlier QCDA pilot of single level tests. In that year the results from the pilot were reported as statutory National Curriculum test results in pilot schools.

In 2011 optional L6 tests were piloted in reading, writing and maths. These were not externally marked and the results were not published.

The June 2011 Bew Report came out in favour:

‘We believe that the Government should continue to provide level 6 National Curriculum Tests for schools to use on an optional basis, whose results should be reported to parents and secondary schools.’

Externally marked L6 tests were offered in reading and maths in 2012, alongside L6 teacher assessment in writing. The GPS test was added to the portfolio in the following year.

In 2012, ministers were talking up the tests, describing them as:

‘…a central element in the Coalition’s drive to ensure that high ability children reach their potential. Nick Gibb, the schools minister, said: “Every child should be given the opportunity to achieve to the best of their abilities.

“These tests will ensure that the brightest pupils are stretched and standards are raised for all.”’

In 2012 the Primary Performance Tables used L6 results only in the calculation of ‘level 5+’, APS, value-added and progress measures, but in 2013 the percentage of pupils achieving L6 was also shown in its own right.

The Statement of Intent on the Tables said:

‘…the percentage of the number of children at the end of Key Stage 2 achieving level 6 in a school will also be shown in performance tables. The Department will not publish any information at school level about the numbers of children entered for the level 6 tests, or the percentage achieving level 6 of those entered for level 6.’

The nature of the tests is unchanged for 2014: they took place on 12, 13 and 15 May respectively. This post is timed to coincide with their administration.

The KS2 ARA booklet continues to explain that:

‘Children entered for level 6 tests are required to take the levels 3-5 tests. Headteachers should consider a child’s expected attainment before registering them for the level 6 tests as they should be demonstrating attainment above level 5. Schools may register children for the level 6 tests and subsequently withdraw them.

The child must achieve a level 5 in the levels 3-5 test and pass the corresponding level 6 test in the same year in order to be awarded an overall level 6 result. If the child does not pass the level 6 test they will be awarded the level achieved in the levels 3-5 test.’

Anticipated future developments

At the time of writing the Government has not published a Statement of Intent explaining whether there will be any change in the reporting of L6 results in the December 2014 Primary School Performance Tables.

An accompanying Data Warehouse (aka Portal) is also under development and early iterations are expected to appear before the next set of Tables. The Portal will make available a wider range of performance data, some of it addressing high attainment.

The discussion in this post of material not yet in the public domain is designed in part as a marker to influence consideration of material for inclusion in the Portal.

As noted above, the Government has published its response to the consultation on primary assessment and accountability arrangements, confirming that new single assessments for the full attainment span will be introduced in 2016.

At the time of writing, there is no published information about the number of entries for the 2014 tests. (In 2013 these details were released in the reply to a Parliamentary Question.)

Entries had to be confirmed by March 2014, so it may be that the decision to replace the L6 tests, not confirmed until that same month, has not impacted negatively on demand. The effect on 2015 entries remains to be seen, but there is a real risk that these will be significantly depressed.

L6 tests are scheduled to be taken for the final time in May 2015. The reading and maths tests will have been in place for four consecutive years; the GPS test for three.

Under the new arrangements there will continue to be tests in reading, GPS and maths – plus a sampling test in science – as well as teacher assessment in reading, writing, maths and science.

KS2 test outcomes (but not teacher assessment) will be reported by means of a scaled score for each test, alongside three average scaled scores, for the school, the local area and nationally.

The original consultation document proposed that each scaled score would be built around a ‘secondary readiness standard’ loosely aligned with the current L4B, but converted into a score of 100.

The test development frameworks mention that:

‘at the extremes of the scaled score distribution, as is standard practice, the scores will be truncated such that above and below a certain point, all children will be awarded the same scaled score in order to minimise the effect for children at the ends of the distribution where the test is not measuring optimally.’
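The truncation described here is simple clamping of the raw scaled score. A minimal sketch, assuming purely illustrative bounds (the actual floor and ceiling had not been published at the time of writing):

```python
def truncate_scaled_score(score, floor=80, ceiling=120):
    """Clamp a scaled score to the reporting range.

    The 80-120 bounds are illustrative assumptions only: the actual
    truncation points were not published at the time of writing.
    """
    return max(floor, min(ceiling, score))

print(truncate_scaled_score(74))   # below the floor, so reported as 80
print(truncate_scaled_score(100))  # within range, unchanged
print(truncate_scaled_score(131))  # above the ceiling, so reported as 120
```

All children beyond either bound therefore receive the same reported score, which is the effect the frameworks describe.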

A full set of sample materials including tests and mark schemes for every test will be published by September 2015, the beginning of the academic year in which the new tests are first deployed.

The consultation document said these single tests would:

‘include challenging material (at least of the standard of the current level 6 test) which all pupils will have the opportunity to answer, without the need for a separate test’.

The development frameworks published on 31 March made it clear that the new tests should:

‘provide a suitable challenge for all children and give every child the opportunity to achieve as high a standard…as possible.’

Additionally:

‘In order to improve general accessibility for all children, where possible, questions will be placed in order of difficulty.’

These various and potentially conflicting statements informed the opinion I have already repeated.

The question then arises whether the Government’s U-turn on separate tests for the highest attainers is in the latter’s best interests. There cannot be a continuation of L6 tests per se, because the system of levels that underpins them will no longer exist, but separate tests could in principle continue.

Even if the new universal tests provide equally valid and reliable judgements of their attainment – which is currently open to question – one might reasonably argue that the U-turn itself may undermine continuity of provision and continued improvement in schools’ practice.

The fact that this practice needs substantive improvement is evidenced by Ofsted’s recent decision to strengthen the attention given to the attainment and progress of what they call ‘the most able’ in all school inspection reports.

L6 tests: Key Performance Data

Entry and success rates

As noted above, the information in the public domain about entry rates to L6 tests is incomplete.

The 2013 Investigation provides the number of pupils entered for each test in 2012. We do not have comparable data for 2013, but a PQ reply does supply the number of pupils registered for the tests in both 2012 and 2013. This can be supplemented by material in the 2013 SFR and the corresponding 2012 publication.

The available data is synthesised in this table showing for each year – and where available – the number registered for each test, the number entered, the total number of pupils achieving L6 and, of those, the number attending state-funded schools.

          2012                               2013
          Reg     Ent     Pass    Pass SF    Reg     Ent  Pass    Pass SF
Reading   47,148  46,810  942     x          73,118  x    2,262   2,137
GPS       x       x       x       x          61,883  x    8,606   x
Maths     55,809  55,212  18,953  x          80,925  x    35,137  33,202

One can see that there are relatively small differences between the numbers of pupils registered and the number entered, so the former is a decent enough proxy for the latter. I shall use the former in the calculations immediately below.

It is also evident that a small but significant proportion of the pupils achieving L6 attend independent schools. But, given the incomplete data set for state-funded schools, I shall use the pass rate for all schools in the following calculations.

In sum then, in 2012, the pass rates per registered entry were:

  • Reading – 2.0%
  • Maths – 34.0%

And in 2013 they were:

  • Reading – 3.1%
  • GPS – 13.9%
  • Maths – 43.4%

The pass rates in 2013 have improved significantly in both reading and maths, the former from a very low base. However, the proportion of learners successful in the L6 reading test remains extremely small.
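For transparency, the calculation behind these percentages is a simple division of passes by registrations, treating registration as a proxy for entry as explained above. A quick sketch, using the figures from the table earlier in this section:

```python
# L6 passes and registrations (all schools), from the table above.
data = {
    "Reading 2012": (942, 47148),
    "Maths 2012": (18953, 55809),
    "Reading 2013": (2262, 73118),
    "GPS 2013": (8606, 61883),
    "Maths 2013": (35137, 80925),
}

for test, (passes, registered) in data.items():
    rate = 100 * passes / registered
    print(f"{test}: {rate:.1f}%")
```

Substituting actual entry figures, where these become available, would nudge the rates up very slightly.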

The 2013 Investigation asserted, on the basis of the 2012 results, that:

‘The cost of supporting and administering a test for such a small proportion of the school population appears to outweigh the benefits’

However it did not publish any information about that cost.

It went on to suggest that there is a case for reviewing whether the L6 test is the most appropriate means to ‘identify a range of higher performing pupils, for example the top 10%’. The Government chose not to act on this suggestion.

Gender, ethnic background and disadvantage

The 2013 results demonstrate some very significant gender disparities, as revealed in Chart 1 below.

Girls account for 62% of successful pupils in GPS and a whopping 74% in reading, while boys account for 61% of successful pupils in maths. These imbalances raise important questions about whether gender differences in high attainment are really this pronounced, or whether there is significant underachievement amongst the under-represented gender in each case.

Chart 1: Number of pupils successful in 2013 L6 tests by gender


There are equally significant disparities in performance by ethnic background. Chart 2 below illustrates how the performance of three selected ethnic groups – white, Asian and Chinese – varies by test and gender.

It shows that pupils from Chinese backgrounds have a marked ascendancy in all three tests, while Asian pupils are ahead of white pupils in GPS and maths but not reading. Girls are ahead of boys within all three ethnic groups in reading and GPS, with boys leading in maths. Chinese girls comfortably out-perform white and Asian boys.

Chinese pupils are way ahead in maths, with 29% overall achieving L6 and an astonishing 35% of Chinese boys achieving this outcome.

The reasons for this vast disparity are not explained and raise equally awkward questions about the distribution of high attainment and the incidence of underachievement.

 

Chart 2: Percentages of pupils successful in 2013 L6 tests by gender and selected ethnic background


There are also significant excellence gaps on each of the tests, though these are hard to visualise when working solely with percentages (pupil numbers have not been published).

The percentage variations are shown in the table below. This sets out the FSM gap and the disadvantaged gap, the latter being based on the ever-6 FSM measure that underpins the Pupil Premium.

These figures suggest that, while learners eligible for the Pupil Premium are demonstrating success on the maths test (and, for girls at least, on the GPS test too), they are less than a third as likely to be successful as those from advantaged backgrounds. The impact of the Pupil Premium is therefore limited.

The gap between the two groups reaches as high as 7% for boys in maths. Although this is low by comparison with the corresponding gap at level 4, it is nonetheless significant. There is more about excellence gaps in maths below.

 

          Reading        GPS            Maths
          G      B       G      B      G      B
FSM       0      0       1      0      2      3
Non-FSM   1      0       2      1      6      9
Gap       1      0       1      1      4      6

Dis       0      0       1      0      2      3
Non-Dis   1      0       3      2      7      10
Gap       1      0       2      2      5      7
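Each ‘Gap’ row in the table is simply the non-eligible percentage minus the eligible one. By way of illustration, for the maths columns:

```python
# Percentages achieving L6 maths, from the table above.
fsm     = {"girls": 2, "boys": 3}
non_fsm = {"girls": 6, "boys": 9}

# The FSM gap is the difference between the two groups.
gaps = {g: non_fsm[g] - fsm[g] for g in fsm}
print(gaps)  # {'girls': 4, 'boys': 6}
```

The same subtraction, applied to the disadvantaged columns, produces the 5% and 7% gaps in the bottom row.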

Schools achieving L6 success

Finally in this opening section, a comparison of schools achieving L6 success in the 2013 Primary School Performance Tables reveals different patterns for each test.

The table below shows how many schools secured different percentages of pupils at L6. The number of schools achieving 11-20% at L6 in the GPS test is over twelve times the number that achieved that outcome in reading. But over eight times more schools secured this outcome in maths than managed it in GPS.

No schools made it beyond 20% at L6 in reading and none pushed beyond 40% at L6 in GPS, but the outliers in maths managed well over 60% and even 70% returns.

          11-20%  21-30%  31-40%  41-50%  51-60%  61-70%  71-80%  Total
Reading   24                                                      24
GPS       298     22      2                                       322
Maths     2521    531     106     25      0       1       2       3186

There is also some evidence of schools being successful in more than one test.

Amongst the small sample of 28 schools that secured 41% or more L6s in maths, two also featured amongst the top 24 performers in reading and five amongst the top 24 performers in GPS.

The school with arguably the best record across all three tests is Christ Church Primary School in Hampstead, which secured 13% in reading, 21% in GPS and 46% in maths, from a KS2 cohort of 24. The FSM/Pupil Premium rates at the school are low but, nevertheless, this is an outstanding result.

The following sections look more closely at L6 test and teacher assessment results in each subject. Each section consists of a series of bullet points highlighting significant findings.

English

 

Reading Test

The evidence on performance on the L6 reading test is compromised to some extent by the tiny proportions of pupils that achieve it. However:

  • 9,605 schools registered pupils for the 2013 L6 reading test, up 48% from 6,469 in 2012, and the number of pupils registered increased from 47,148 in 2012 to 73,118 in 2013, an increase of 55%.
  • Of the 539,473 learners who undertook the 2013 KS2 reading tests, only 2,262 (about 0.42%) achieved L6. This figure includes some in independent schools; the comparable figure for state-funded schools only is 2,137, so 5.5% of L6s were secured in the independent sector.
  • Of this first total – ie including pupils from independent schools – 1,670 were girls (0.63% of all girls who undertook the KS2 reading tests) and 592 were boys (0.21% of all boys who undertook the KS2 reading tests).
  • These are significant improvements on the comparable 2012 figures which showed about 900 learners achieving L6, including 700 girls and 200 boys. (The figures were rounded in the SFR but the 2013 evaluation confirmed the actual number as 942). The overall percentage achieving L6 therefore increased by about 140% in 2013, compared with 2012. If we assume registration for L6 tests as a proxy for entry, this suggests that just over 3% of entrants passed in 2013.
  • In state-funded schools only, the percentage of learners from a Chinese background entered for KS2 reading tests who achieved L6 reaches 2%, compared with 1% for those of mixed background and 0% for learners from white, Asian and black backgrounds.
  • Amongst the defined sub-groups, learners of Irish, any other white, white and Asian and any other Asian backgrounds also make it to 1%. All the remainder are at 0%.
  • The same is true of EAL learners and native English speakers, FSM-eligible and disadvantaged learners, making worthwhile comparisons almost impossible.
  • The 2013 transition matrices show that 12% of learners who had achieved L4 at the end of KS1 went on to achieve L6, while 1% of those who had achieved L3 did so. Hence the vast majority of those at L4 in KS1 did not make two levels of progress.
  • Progression data in the SFR shows that, of the 2,137 learners achieving L6 in state funded schools, 2,047 were at L3 or above at KS1, 77 were at L2A, 10 were at L2B and 3 were at L2C. Of the total population at KS1 L3 or above, 1.8% progressed to L6.
  • Regional and local authority breakdowns are given only as percentages, of limited value for comparative purposes because they are so small. Only London and the South East record 1% at L6 overall, with all the remaining regions at 0%. Only one local authority – Richmond upon Thames – reaches 2%.
  • However 1% of girls reach L6 in all regions apart from Yorkshire and Humberside and a few more authorities record 2% of girls at L6: Camden, Hammersmith and Fulham, Kensington and Chelsea, Kingston, Richmond and Solihull.
  • The 2013 Primary School Performance Tables show that some 12,700 schools recorded no learners achieving L6.
  • At the other end of the spectrum, 36 schools recorded 10% or more of their KS2 cohort achieving L6. Four of these recorded 15% or higher:

Iford and Kingston C of E Primary School, East Sussex (19%; cohort of 21).

Emmanuel C of E Primary School, Camden (17%; cohort of 12).

Goosnargh Whitechapel Primary School, Lancashire (17%; cohort of 6).

High Beech C of E VC Primary School, Essex (15%; cohort of 13).

Reading TA

There is relatively little data about teacher assessment outcomes.

  • The total number of pupils in all schools achieving L6 in reading TA in 2013 is 15,864 from a cohort of 539,729 (2.94%). This is over seven times as many as achieved L6 in the comparable test (whereas in maths the figures are very similar). It would be useful to know how many pupils achieved L6 in TA, were entered for the test and did not succeed.
  • The number of successful girls is 10,166 (3.85% of females assessed) and the number of boys achieving L6 is 5,698 (2.06% of males assessed). Hence the gap between girls and boys is far narrower on TA than it is on the corresponding test.
  • Within the 2013 Performance Tables, eight schools recorded 50% or more of their pupils at L6, the top performer being Peppard Church of England Primary School, Oxfordshire, which reached 83% (five from a cohort of six).

 

Writing (including GPS)

 

GPS Test

The L6 Grammar, Punctuation and Spelling (GPS) test was newly introduced in 2013. This is what we know from the published data:

  • The number of schools that registered for the test was 7,870, almost 2,000 fewer than registered for the reading test. The number of pupil registrations was 61,883, over 12,000 fewer than for reading.
  • The total number of successful learners is 8,606, from a total of 539,438 learners assessed at KS2, including those in independent schools taking the tests, giving an actual percentage of 1.6%. As far as I can establish, a comparable figure for state-funded schools is not available.
  • As with reading, there are significant differences between boys and girls. There were 5,373 successful girls (2.04% of girls entered for KS2 GPS tests) and 3,233 successful boys (1.17% of boys entered for KS2 GPS). This imbalance in favour of girls is significant, but not nearly as pronounced as in the reading test.
  • The proportion of pupil registrations for the L6 GPS test resulting in L6 success is around one in seven (13.9%), well over four times as high as for reading.
  • The ethnic breakdown in state-funded schools shows that Chinese learners are again in the ascendancy. Overall, 7% of pupils from a Chinese background achieved L6, compared with 1% white, 2% mixed, 2% Asian and 1% black.
  • Chart 3 below shows how L6 achievement in GPS varies between ethnic sub-groups. Indian pupils reach 4% while white and Asian pupils score 3%, as do pupils from any other Asian background.

Chart 3: 2013 GPS L6 performance by ethnic sub-groups


  • When gender differences are taken into account, Chinese girls are at 8% (compared with boys at 7%), ahead of Indian girls at 5% (boys 3%), white and Asian girls at 4% (boys 3%) and any other Asian girls also at 4% (boys 3%). The ascendancy of Chinese girls over boys from any other ethnic background is particularly noteworthy and replicates the situation in maths (see below).
  • Interestingly, EAL learners and learners with English as a native language both record 2% at L6. Although these figures are rounded, it suggests that exceptional performance in this aspect of English does not correlate with being a native speaker.
  • FSM-eligible learners register 0%, compared with 2% for those not eligible. However, disadvantaged learners are at 1% and non-disadvantaged 2% (Disadvantaged boys are at 0% and non-disadvantaged girls at 3%). Without knowing the numbers involved we can draw few reliable conclusions from this data.
  • Chart 4 below illustrates the regional breakdown for boys, girls and both genders. At regional level, London reaches 3% success overall, with both the South East and Eastern regions at 2% and all other regions at 1%. Girls record 2% in every region apart from the North West and Yorkshire and Humberside. Only in London do boys reach 2%.

 

Chart 4: 2013 L6 GPS outcomes by gender and region


  • At local authority level the highest scoring are Richmond (7%); the Isles of Scilly (6%); Kingston and Sutton (5%); and Harrow, Hillingdon and Wokingham (4%).
  • The School Performance Tables reveal that some 10,200 schools posted no L6 results while, at the other extreme, 34 schools recorded 20% or more of their KS2 cohort at L6 and 463 schools managed 10% or above. The best records were achieved by:

St Joseph’s Catholic Primary School, Southwark (38%; cohort of 24).

The Vineyard School, Richmond  (38%; cohort of 56).

Cartmel C of E Primary School (29%; cohort of 7) and

Greystoke School (29%; cohort of 7).

Writing TA

When it comes to teacher assessment:

  • 8,410 learners from both state and independent schools out of a total of 539,732 assessed (1.56%) were judged to be at L6 in writing. The total figure for state-funded schools is 7,877 pupils. This is very close to the number successful in the L6 GPS test, even though the focus is somewhat different.
  • Of these, 5,549 are girls (2.1% of girls assessed) and 2,861 boys (1.04% of boys assessed). Hence the imbalance in favour of girls is more pronounced in writing TA than in the GPS test, whereas the reverse is true for reading.
  • About 5% of learners from Chinese backgrounds achieve L6, as do 3% of white and Asian pupils and 3% of Irish pupils.
  • The 2013 transition matrices record progression in writing TA, rather than in the GPS test. They show that 61% of those assessed at L4 at KS1 go on to achieve L6, so only six out of ten are making the expected minimum two levels of progress. On the other hand, some 9% of those with KS1 L3 go on to achieve L6, as do 2% of those at L2A.
  • The SFR provides further progression data – again based on the TA outcomes – for state-funded schools only. It shows us that one pupil working towards L1 at KS1 went on to achieve L6 at KS2, as did 11 at L1, 54 at L2C, 393 at L2B, 1,724 at L2A and 5,694 at L3 or above. Hence some pupils are making five or more levels of progress.
  • The regional breakdown – this time including independent schools – gives the East Midlands, West Midlands, London and the South West at 2%, with all the rest at 1%. At local authority level, the best performers are: City of London at 10%; Greenwich, Kensington and Chelsea and Richmond at 5% and Windsor and Maidenhead at 4%.

English TA

There is additionally a little information about pupils achieving L6 across the subject:

  • The SFR confirms that 8,087 pupils (1.5%) were assessed at L6 in English, including 5,244 girls (1.99% of all girls entered) and 2,843 boys (1.03% of all boys entered). These figures are for all schools, including independent schools.
  • There is a regional breakdown showing the East and West Midlands, London and the South West at 2%, with all the remainder at 1%. Amongst local authorities, the strongest performers are City of London (10%); and Bristol, Greenwich, Hackney, Richmond, Windsor and Maidenhead (4%). The exceptional performance of Bristol, Greenwich and Hackney is noteworthy.
  • In the Performance Tables, 27 schools record 30% or more pupils at L6 across English, the top performer again being Newton Farm, at 60%.

Maths

L6 performance in maths is more common than in other tests and subjects and the higher percentages generated typically result in more meaningful comparisons.

  • The number of school registrations for L6 maths in 2013 was 11,369, up almost 40% from 8,130 in 2012. The number of pupil registrations was 80,925, up some 45% from 55,809 in 2012.
  • The number of successful pupils – in both independent and state schools – was 35,137 (6.51% of all entrants). The gender imbalance in reading and GPS is reversed, with 21,388 boys at this level (7.75% of males entered for the overall KS2 test) compared with 13,749 girls (5.22% of females entered for the test). The SFR gives a total for state-funded schools of 33,202 pupils, so some 5.5% of Level 6s were achieved in independent schools.
  • Compared with 2012, the numbers of successful pupils has increased from 18,953. This represents an increase of 85%, not as huge as the increase for reading but a very substantial increase nevertheless. 
  • The number of successful girls has risen by some 108% from 6,600 (rounded) and the number of successful boys by about 72%, from 12,400 (rounded), so the improvement in girls’ success is markedly larger than the corresponding improvement for boys.  
  • Assuming L6 test registration as a proxy for entry, the success rate in 2013 is around 43.4%, massively better than for reading (3%) and GPS (13.9%). The corresponding success rate in 2012 was around 34%. (Slightly different results would be obtained if one used actual entry rates and passes for state schools only, but we do not have these figures for both years.)
  • The breakdown in state-funded schools for the main ethnic groups by gender is illustrated by Chart 5 below. This shows how performance by boys and girls varies according to whether they are white (W), mixed (M), Asian (A), black (B) or Chinese (C). It also compares the outcomes in 2012 and 2013. The superior performance of Chinese learners is evident, with Chinese boys reaching a staggering 35% success rate in 2013. As things stand, Chinese boys are almost nine times more likely to achieve L6 than black girls.
  • Chart 5 also shows that none of the gender or ethnic patterns has changed between 2012 and 2013, but some groups are making faster progress, albeit from a low base. This is especially true of white girls, black boys and, to a slightly lesser extent, Asian girls.
  • Chinese girls and boys have improved at roughly the same rate and black boys have progressed faster than black girls but, in the remaining three groups, girls are improving at a faster rate than boys.

Chart 5: L6 Maths test by main ethnic groups and gender


  • Amongst sub-groups, not included on this table, the highest performing are: any other Asian background 15%, Indian 14%, white and Asian 11% and Irish 10%. Figures for Gypsy/Roma and any other white background are suppressed, while travellers of Irish heritage are at 0%, black Caribbean at 2% and any other black background at 3%. In these latter cases, the differential with Chinese performance is huge.
  • EAL learners record a 7% success rate, compared with 6% for native English language speakers, an improvement on the level pegging recorded for GPS. This gap widens to 2% for boys – 9% versus 7% in favour of EAL, whereas for girls it is 1% – 6% versus 5% in favour of EAL. The advantage enjoyed by EAL learners was also evident in 2012.
  • The table below shows the position for FSM and disadvantaged learners by gender, and how this has changed since 2012.
      FSM boys   Non-FSM boys   Gap    Dis boys   Non-dis boys   Gap
2012  1%         5%             4%     1%         6%             5%
2013  3%         9%             6%     3%         10%            7%

      FSM girls  Non-FSM girls  Gap    Dis girls  Non-dis girls  Gap
2012  1%         3%             2%     1%         3%             2%
2013  2%         6%             4%     2%         7%             5%

      FSM all    Non-FSM all    Gap    Dis all    Non-dis all    Gap
2012  1%         4%             3%     1%         4%             3%
2013  2%         7%             5%     2%         8%             6%
  • This shows that the gap between FSM and non-FSM learners, and between disadvantaged and non-disadvantaged learners, grew between 2012 and 2013 – for boys, for girls and for the groups as a whole. All the gaps increased by 2% or 3%, the larger increases affecting the disadvantaged gap for girls and for both genders combined.
  • The gaps are all between 2% and 7%, so not large compared with those lower down the attainment spectrum, but the fact that they are widening is a significant cause for concern, suggesting that Pupil Premium funding is not having an impact at L6 in maths.
  • The Transition Matrices show that 89% of learners assessed at L4 in KS1 went on to achieve L6, while 26% of those with L3 at KS1 did so, as did 4% of those with L2A and 1% of those with L2B. Hence a noticeable minority is making four levels of progress.
  • The progression data in the SFR, relating to state-funded schools, show that one pupil made it from W at KS1 to L6, while 8 had L1, 82 had 2C, 751 had 2B, 4,983 had 2A and 27,377 had L3. Once again, a small minority of learners is making four or five levels of progress.
  • At regional level, the breakdown is: NE 6%, NW 6%, Y+H 5%, EM 6%, WM 6%, E 6%, London 9%, SE 7% and SW 6%. So London has a clear lead in respect of the proportion of its learners achieving L6.
  • The local authorities leading the rankings are: City of London 24%, Richmond 19%, Isles of Scilly 17%, Harrow and Kingston 15%, Trafford and Sutton 14%. No real surprises there!
  • The Performance Tables show 33 schools achieved 40% or higher on this measure. Eight schools were at 50% or above. The best performing schools were:

St Oswald’s C of E Aided Primary School, Cheshire West and Chester (75%; cohort 8)

St Joseph’s Roman Catholic Primary School, Hurst Green, Lancashire (71%; cohort 7)

Haselor School, Warwickshire (67%; cohort 6).

  • Some of the schools achieving 50% were significantly larger, notably Bowdon C of E Primary School, Trafford, which had a KS2 cohort of 60.
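The gap figures in the table above are simple differences in pass rates. A minimal sketch in Python, with the maths test rates transcribed from the table (gaps expressed in percentage points):

```python
# L6 maths test pass rates (%) for the whole cohort, from the table above.
rates = {
    2012: {"FSM": 1, "non_FSM": 4, "dis": 1, "non_dis": 4},
    2013: {"FSM": 2, "non_FSM": 7, "dis": 2, "non_dis": 8},
}

# Gap = non-disadvantaged rate minus disadvantaged rate, per year.
fsm_gap = {year: r["non_FSM"] - r["FSM"] for year, r in rates.items()}
dis_gap = {year: r["non_dis"] - r["dis"] for year, r in rates.items()}

print(fsm_gap)  # the FSM gap widened from 3 to 5 points
print(dis_gap)  # the disadvantaged gap widened from 3 to 6 points
```

The same arithmetic applies to the boys' and girls' rows of the table.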

Maths TA

The data available on maths TA is more limited:

  • Including pupils at independent schools, a total of 33,668 were assessed at L6 in maths (6.24% of all KS2 candidates). This included 20,336 boys (7.37% of all male KS2 candidates) and 13,332 girls (5.06% of all female candidates). The number achieving L6 maths TA is slightly lower than the corresponding number achieving L6 in the test.
  • The regional breakdown was as follows: NE 5%; NW 5%; Y+H 5%; EM 5%; WM 6%; E 6%; London 8%; SE 7%; SW 6%. So London’s ascendancy is not as significant as in the test.
  • The strongest local authority performers are: City of London 24%; Harrow and Richmond 15%; Sutton 14%; Trafford 13%; Solihull and Bromley 12%.
  • In the Performance Tables, 63 schools recorded 40% or higher on this measure, 15 of them at 50% or higher. The top performer was St Oswald’s C of E Aided Primary School (see above) with 88%.

Science

Science data is confined to teacher assessment outcomes.

  • A total of just 1,633 pupils achieved L6 in 2013, equivalent to 0.3% of the KS2 science cohort. Of these, 1,029 were boys (0.37%) and 604 were girls (0.23%), suggesting a gender imbalance broadly similar to that in maths.
  • No region, and only a handful of local authorities, recorded a success rate of even 1%.
  • In the Performance Tables, 31 schools managed 20% or higher and seven schools were above 30%. The best performing were:

Newton Farm (see above) (50%; cohort 30)

Hunsdon Junior Mixed and Infant School, Hertfordshire (40%; cohort 10)

Etchingham Church of England Primary School, East Sussex (38%; cohort 16)

St Benedict’s Roman Catholic Primary School Ampleforth, North Yorkshire (36%; cohort 14).

Conclusions

 

Key findings from this data analysis

I will not repeat all of the significant points highlighted above, but these seem particularly worthy of attention and further analysis:

  • The huge variation in success rates for the three L6 tests. The proportion of learners achieving L6 in the reading test is improving at a faster rate than in maths, but from a very low base. It remains unacceptably low, is significantly out of kilter with the TA results for L6 reading and – unless there has been a major improvement in 2014 – is likely to stay depressed for the limited remaining lifetime of the test.
  • In the tests, 74% of those successful in reading are girls, 62% of those successful in GPS are girls and 61% of those successful in maths are boys. In reading there are also interesting disparities between gender distribution at L6 in the test and in teacher assessment. Can these differences be attributed solely to gender distinctions or is there significant gender-related underachievement at the top of the attainment distribution? If so, how can this be addressed? 
  • There are also big variations in performance by ethnic background. Chinese learners in particular are hugely successful, especially in maths. In 2013, Chinese girls significantly outscored boys from all other backgrounds, while an astonishing 35% of Chinese boys achieved L6. This raises important questions about the distribution of high attainment, the incidence of underachievement and how the interaction between gender and ethnic background impacts on these.
  • There are almost certainly significant excellence gaps in performance on all three tests (ie between advantaged and disadvantaged learners), though in reading and GPS these are masked by the absence of numerical data. In maths we can see that the gaps are not as large as those lower down the attainment spectrum, but they widened significantly in 2013 compared with 2012. This suggests that the impact of the Pupil Premium on the performance of the highest attainers from disadvantaged backgrounds is extremely limited.  What can and should be done to address this issue?
  • EAL learners perform as well as their counterparts in the GPS test and even better in maths. This raises interesting questions about the relationship between language acquisition and mathematical performance and, even more intriguingly, the relationship between language acquisition and skill in manipulating language in its written form. Further analysis of why EAL learners are so successful may provide helpful clues that would improve L6 teaching for all learners.
  • Schools are recording very different success rates in each of the tests. Some schools that secure very high L6 success rates in one test fail to do so in the others, but a handful of schools are strong performers across all three tests. We should know more than we do about the characteristics and practices of these highly successful schools.

Significant gaps in the data

A data portal to underpin the School Performance Tables is under construction. There have been indications that it will contain material about high attainers’ performance. While levels continue to be used in the Tables, this should include comprehensive coverage of L6 performance, as well as the achievement of high attainers as defined for Performance Table purposes (a much broader subset of learners).

Subject to the need to suppress small numbers for data protection purposes, the portal might reasonably include, in addition to the data currently available:

  • For each test and TA, numbers of registrations, entries and successful pupils from FSM and disadvantaged backgrounds respectively, including analysis by gender and ethnic background, both separately and combined. All the data below should also be available for these subsets of the population.
  • Registrations and entries for each L6 test, for every year in which the tests have been administered, showing separately rates for state-funded and all schools and rates for different types of state-funded school.
  • Cross-referencing of L6 test and TA performance, to show how many learners are successful in one, the other and both – as well as how many learners achieve L6 on more than one test and/or TA and different combinations of assessments.
  • Numbers of pupils successful in each test and TA by region and LA, as well as regional breakdowns of the data above and below.
  • Trends in this data across all the years in which the tests and TA have been administered.
  • The annual cost of developing and administering each of the L6 tests so we can make a judgement about value for money.

It would also be helpful to produce case studies of schools that are especially successful in maximising L6 performance, especially for under-represented groups.

 

The impact of the new tests pre- and post-2016

We do not yet know whether the announcement that L6 tests will disappear after 2015 has depressed registration, entry and success rates in 2014. This is more likely in 2015, since the 2014 registration deadline and the response to the primary assessment and accountability consultation were broadly co-terminous.

All the signs are that the accountability regime will continue to focus some attention on the performance of high attainers:

  • Ofsted is placing renewed emphasis on the attainment and progress of the ‘most able’ in school inspection, though they have a broad conceptualisation of that term and may not necessarily highlight L6 achievement.
  • From 2016, schools will be required to publish ‘the percentage of pupils who achieve a high score in all areas at the end of key stage 2.’ But we do not know whether this means publishing separately the percentage of pupils achieving high scores in each area, or only the percentage of pupils achieving high scores across all areas. Nor do we know what will count as a high score for these purposes.
  • There were commitments in the original primary assessment and accountability consultation document to inclusion of measures in the Primary Performance Tables setting out:

‘How many of a school’s pupils are amongst the highest attaining nationally, by showing the percentage of pupils achieving a high scaled score in each subject.’

but these were not repeated in the consultation response.

In short, there are several unanswered questions and some cause to doubt the extent to which Level 6-equivalent performance will continue to be a priority. The removal of L6 tests could therefore reduce significantly the attention primary schools give to their highest attainers.

Moreover, questions remain over the suitability of the new tests for these highest attainers. These may possibly be overcome but there is considerable cause for concern.

It is quite conceivable that the test developers will not be able to accommodate effective assessment of L6 performance within single tests as planned.

If that is the case, the Government faces a choice between perpetuating separate tests, or the effective relegation of the assessment of the highest attainers to teacher assessment alone.

Such a decision would almost certainly need to be taken on this side of a General Election. But of course it need not be binding on the successor administration. Labour has made no commitments about support for high attainers, which suggests they will not be a priority for them should they form the next Government.

The recently published Assessment Principles are intended to underpin effective assessment systems within schools. They state that such systems:

‘Differentiate attainment between pupils of different abilities, giving early recognition of pupils who are falling behind and those who are excelling.’

This lends welcome support to the recommendations I offered to NAHT’s Commission on Assessment.

But the national system for assessment and accountability has an equally strong responsibility to differentiate throughout the attainment spectrum and to recognise the achievement of those who excel.

As things stand, there must be some doubt whether it will do so.

Postscript

On 19 May 2014 two newspapers helpfully provided the entry figures for the 2014 L6 tests. These are included in the chart below.

L6 postscript chart

It is clear that entries to all three tests held up well in 2014 and, as predicted, numbers have not yet been depressed as a consequence of the decision to drop L6 tests after 2015.

The corresponding figures for the numbers of schools entering learners for each test have not been released, so we do not know to what extent the increase is driven by new schools signing up, as opposed to schools with previous entries increasing the numbers they enter.

This additional information makes it easier to project approximate trends into 2015, so we shall be able to tell next year whether the change of assessment policy will cause entry rates to tail off.

  • Entries for the L6 reading test were 49% up in 2013 and 36% up in 2014. Assuming the rate of increase in 2015 falls to 23% (ie again 13 percentage points down on the previous year), there would be some 117,000 entries in 2015.
  • Entries for the L6 maths test were 41% up in 2013 and 36% up in 2014. Assuming the rate of increase in 2015 falls to 31% (ie again 5 percentage points down on the previous year), there would be around 139,000 entries in 2015.
  • GPS is more problematic because we have only two years on which to base the trend. If we assume that the rate of increase in entries will fall somewhere between the rate for maths and the rate for reading in 2014 (their second year of operation) there would be somewhere between 126,000 and 133,000 entries in 2015 – so approximately 130,000 entries.
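These projections are straightforward compound-growth arithmetic. A minimal sketch, assuming hypothetical 2014 entry bases (the raw 2014 figures are not quoted in the post; the bases below are chosen to reproduce its rounded estimates):

```python
def project_entries(entries_2014, growth_2014_pct, drop_pts):
    """Project 2015 entries, assuming the annual growth rate falls by
    `drop_pts` percentage points from its 2014 value."""
    growth_2015 = (growth_2014_pct - drop_pts) / 100
    return entries_2014 * (1 + growth_2015)

# Hypothetical 2014 entry figures (assumptions, not from the post).
reading_2015 = project_entries(95_000, 36, 13)   # growth falls 36% -> 23%
maths_2015 = project_entries(106_000, 36, 5)     # growth falls 36% -> 31%

print(round(reading_2015, -3))  # ~117,000
print(round(maths_2015, -3))    # ~139,000
```

The GPS range quoted above follows from applying the reading and maths 2015 growth rates as lower and upper bounds to a GPS base.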

It is almost certainly a projection too far to estimate the 2014 pass rates on the basis of the 2014 entry rates, so I will resist the temptation. Nevertheless, we ought to expect continued improvement at broadly commensurate rates.

The press stories include a Government ‘line to take’ on the L6 tests.

In the Telegraph, this is:

‘[We] want to see every school stretching all their pupils and these figures show that primary schools have embraced the opportunity to stretch their brightest 11-year-olds.’

‘This is part of a package of measures – along with toughening up existing primary school tests, raising the bar and introducing higher floor standards – that will raise standards and help ensure all children arrive at secondary school ready to thrive.’

In the Mail it is:

‘We brought back these tests because we wanted to give teachers the chance to set high aspirations for pupils in literacy and numeracy.’

‘We want to see every school stretching all their pupils. These figures show that primary schools have embraced the opportunity to stretch their brightest 11-year-olds by teaching them more demanding new material, in line with the new curriculum, and by entering them for the Level 6 test.’

There is additionally confirmation in the Telegraph article that ‘challenging material currently seen in the level 6 exams would be incorporated into all SATs tests’ when the new universal assessments are introduced, but nothing about the test development difficulties that this presents.

But each piece attributed this welcome statement to Mr Gove:

‘It is plain wrong to set a ceiling on the talents of the very brightest pupils and let them drift in class.’

‘Letting teachers offer level 6 tests means that the most talented children will be fully stretched and start secondary school razor sharp.’

Can we read into that a commitment to ensure that the new system – including curriculum, assessment, qualifications, accountability and (critically) Pupil Premium support for the disadvantaged – is designed in a joined up fashion to meet the needs of ‘the very brightest pupils’?

I wonder if Mr Hunt feels able to follow suit.

GP

May 2014

Unpacking the Primary Assessment and Accountability Reforms

This post examines the Government response to consultation on primary assessment and accountability.

It sets out exactly what is planned, what further steps will be necessary to make these plans viable and the implementation timetable.

It is part of a sequence of posts I have devoted to this topic, most recently:

Earlier posts in the series include The Removal of National Curriculum Levels and the Implications for Able Pupils’ Progression (June 2012) and Whither National Curriculum Assessment Without Levels? (February 2013).

The consultation response contrives to be both minimal and dense. It is necessary to unpick each element carefully, to consider its implications for the package as a whole and to reflect on how that package fits in the context of wider education reform.

I have organised the post so that it considers sequentially:

  • The case for change, including the aims and core principles, to establish the policy frame for the planned reforms.
  • The impact on the assessment experience of children aged 2-11 and how that is likely to change.
  • The introduction of baseline assessment in Year R.
  • The future shape of end of KS1 and end of KS2 assessment respectively.
  • How the new assessment outcomes will be derived, reported and published.
  • The impact on floor standards.

Towards the end of the post I have also provided a composite ‘to do’ list containing all the declared further steps necessary to make the plan viable, with a suggested deadline for each.

And the post concludes with an overall judgement on the plans, in the form of a summary of key issues and unanswered questions arising from the earlier commentary. Impatient readers may wish to jump straight to that section.

I am indebted to Warwick Mansell for his previous post on this topic. I shall try hard not to parrot the important points he has already made, though there is inevitably some overlap.

Readers should also look to Michael Tidd for more information about the shape and content of the new tests.

What has been published?

The original consultation document ‘Primary assessment and accountability under the new national curriculum’ was published on 17 July 2013 with a deadline for response of 17 October 2013. At that stage the Government’s response was due ‘in autumn 2013’.

The response was finally published on 27 March, some four months later than planned and only five months prior to the introduction of the revised national curriculum which these arrangements are designed to support.

It is likely that the Government will have decided that 31 March was the latest feasible date to issue the response, so they were right up against the wire.

It was accompanied by:

  • A press release which focused on the full range of assessment reforms – for primary, secondary and post-16.

Shortly before the response was published, the reply to a Parliamentary question asked on 17 March explained that test frameworks were expected to be included within it:

‘Guidance on the nature of the revised key stage 1 and key stage 2 tests, including mathematics, will be published by the Standards and Testing Agency in the form of test framework documents. The frameworks are due to be released as part of the Government’s response to the primary assessment and accountability consultation. In addition, some example test questions will be made available to schools this summer and a full sample test will be made available in the summer of 2015.’ (Col 383W)


In the event, these documents – seven in all – did not appear until 31 March and there was no reference to any of the three commitments above in what appeared on 27 March.

Finally, the Standards and Testing Agency published on 3 April a guidance page on national curriculum tests from 2016. At present it contains very little information but further material will be added as and when it is published.

Partly because the initial consultation document was extremely ‘drafty’, the reaction of many key external respondents to the consultation was largely negative. One imagines that much of the period since 17 October has been devoted to finding the common ground.

Policy makers will have had to do most of their work after the consultation document issued because they were not ready beforehand.

But the length of the delay in issuing the response would suggest that they also encountered significant dissent amongst internal stakeholders – and that the eventual outcome is likely to be a compromise of sorts between these competing interests.

Such compromises tend to have observable weaknesses and/or put off problematic issues for another day.

A brief summary of consultation responses is included within the Government’s response. I will refer to this at relevant points during the discussion below.


The Case for Change


Aims

The consultation response begins – as did the original consultation document – with a section setting out the case for reform.

It provides a framework of aims and principles intended to underpin the changes that are being set in place.

The aims are:

  • The most important outcome of primary education is to ‘give as many pupils as possible the knowledge and skills to flourish in the later phases of education’. This is a broader restatement of the ‘secondary ready’ concept adopted in the original consultation document.
  • The primary national curriculum and accountability reforms ‘set high expectations so that all children can reach their potential and are well prepared for secondary school’. Here the ‘secondary ready’ hurdle is more baldly stated. The parallel notion is that all children should do as well as they can – and that they may well achieve different levels of performance. (‘Reach their potential’ is disliked by some because it is considered to imply a fixed ceiling for each child and fixed mindset thinking.)
  • To raise current threshold expectations. These are set too low, since too few learners (47%) with KS2 level 4C in both English and maths go on to achieve five or more GCSE grades A*-C including English and maths, while 72% of those with KS2 level 4B do so. So the new KS2 bar will be set at this higher level, but with the expectation that 85% of learners per school will jump it, 13 percentage points more than the current national figure. Meanwhile the KS4 outcome will also change, to achievement across eight GCSEs rather than five, quite probably at a more demanding level than the present C grade. In the true sense, this is a moving target.
  • ‘No child should be allowed to fall behind’. This is a reference to the notion of ‘mastery’ in its crudest sense, though the model proposed will not deliver this outcome. We have noted already a reference to ‘as many children as possible’ and the school-level target – initially at least – will be set at 85%. In reality, a significant minority of learners will progress more slowly and will fall short of the threshold at the end of KS2.
  • The new system ‘will set a higher bar’ but ‘almost all pupils should leave primary school well-placed to succeed in the next phase of their education’. Another nuanced version of ‘secondary ready’ is introduced. This marks a recognition that some learners will not jump over the higher bar. In the light of subsequent references to 85%, ‘almost all’ is rather over-optimistic.
  • ‘We also want to celebrate the progress that pupils make in schools with more challenging intakes’. Getting ‘nearly all pupils to meet this standard…’ (the standard of secondary readiness?) ‘…is very demanding, at least in the short term’. There will therefore be recognition of progress ‘from a low starting point’ – even though these learners have, by definition, been allowed to fall behind and will continue to do so.

So there is something of a muddle here, no doubt engendered by a spirit of compromise.

The black and white distinction of ‘secondary-readiness’ has been replaced by various verbal approximations, but the bottom line is that there will be a defined threshold denoting preparedness that is pitched higher than the current threshold.

And the proportion likely to fall short is downplayed – there is apparent unwillingness at this stage to acknowledge the norm that up to 15% of learners in each school will undershoot the threshold – substantially more in schools with ‘challenging intakes’.

What this boils down to is a desire that all will achieve the new higher hurdle – and that all will be encouraged to exceed it if they can – tempered by recognition that this is presently impossible. No child should be allowed to fall behind but many inevitably will do so.

It might have been better to express these aims in the form of future aspirations – and our collective efforts to bridge the gap between present reality and those ambitious aspirations.

Principles

The section concludes with a new set of principles governing pedagogy, assessment and accountability:

  • ‘Ongoing, teacher-led assessment is a crucial part of effective teaching;
  • Schools should have the freedom to decide how to teach their curriculum and how to track the progress that pupils make;
  • Both summative teacher assessment and external testing are important;
  • Accountability is key to a successful school system, and therefore must be fair and transparent;
  • Measures of both progress and attainment are important for understanding school performance; and
  • A broad range of information should be published to help parents and the wider public know how well schools are performing.’

These are generic ‘motherhood and apple pie’ statements and so largely uncontroversial. I might have added a seventh – that schools’ in-house assessment and reporting systems must complement summative assessment and testing, including by predicting for parents the anticipated outcomes of the latter.

Perhaps interestingly, there is no repetition of the defence for the removal of national curriculum levels. Instead, the response concentrates on the support available to schools.

It mentions discussion with an ‘expert group on assessment’ about ‘how to support schools to make best use of the new assessment freedoms’. We are not told the membership of this group (which, as far as I know, has not been made public) or the nature of its remit.

There is also a link to information about the Assessment Innovation Fund, which will provide up to 10 grants of up to £10,000 which schools and organisations can use to develop packages that share their innovative practice with others.

 

Children’s experience of assessment up to the end of KS2

The response mentions the full range of national assessments that will impact on children between the ages of two and 11:

  • The statutory progress check at two years of age.
  • A new baseline assessment undertaken within a few weeks of the start of Year R, introduced from September 2015.
  • An Early Years Foundation Stage Profile undertaken in the final term of the year in which children reach the age of five. A revised profile was introduced from September 2012. It is currently compulsory but will be optional from September 2016. The original consultation document said that the profile would no longer be moderated and data would no longer be collected. Neither of those commitments is repeated here.
  • The Phonics Screening Check, normally undertaken in Year 1. The possibility of making these assessments non-statutory for all-through primary schools, suggested in the consultation document, has not been pursued: 53% of respondents opposed this idea, whereas 32% supported it.
  • End of KS1 assessment and
  • End of KS2 assessment.

So a total of six assessments are in place between the ages of two and 11. At least four – and possibly five – will be undertaken between ages two and seven.

It is likely that early years professionals will baulk at this amount of assessment, no matter how sensitively it is designed. But the cost and inefficiency of the model is also open to criticism.

The Reception Baseline

Approach

The original consultation document asked whether:

  • KS1 assessment should be retained as a baseline – 45% supported this and 41% were opposed.
  • A baseline check should be introduced at the start of Reception – 51% supported this and 34% were opposed.
  • Such a baseline check should be optional – 68% agreed and 19% disagreed.
  • Schools should be allowed to choose from a range of commercially available materials for this baseline check – 73% said no and only 15% said yes.

So, whereas views were mixed on where the baseline should be set, there were substantial majorities in favour of any Year R baseline check being optional and following a single, standard national format.

The response argues that Year R is the most sensible point at which to position the baseline since that is:

‘…the earliest point that nearly all children are in school’.

What happens in respect of children who are not in school at this point is not discussed.

There is no explanation of why the Government has disregarded the clear majority of respondents by choosing to permit a range of assessment approaches, so this decision must be ideologically motivated.

The response says ‘most’ are likely to be administered by teaching staff, leaving open the possibility that some options will be administered externally.

Design

Such assessments will need to be:

‘…strong predictors of key stage 1 and key stage 2 attainment, whilst reflecting the age and abilities of children in Reception’.

Presumably this means predictors of attainment in each of the three core subjects – English, maths and science – rather than any broader notion of attainment. The challenge inherent in securing a reasonable predictor of attainment across these domains seven years further on in a child’s development should not be under-estimated.

The response points out that such assessment tools are already available for use in Year R, some are used widely and some schools have long experience of using them. But there is no information about how many of these are deemed to meet already the description above.

In any case, new criteria need to be devised which all such assessments must meet. Some degree of modification will be necessary for all existing products and new products will be launched to compete in the market.

There is an opportunity to use this process to ratchet up the Year R Baseline beyond current expectations, so matching the corresponding process at the end of KS2. The consultation response says nothing about whether this is on the cards.

Interestingly, in his subsequent ‘Unsure start’ speech about early years inspection, HMCI refers to:

‘…the government’s announcement last week that they will be introducing a readiness-for-school test at age four. This is an ideal opportunity to improve accountability. But I think it should go further.

I hope that the published outcomes of these tests will be detailed enough to show parents how their own child has performed. I fear that an overall school grade will fail to illuminate the progress of poor children. I ask government to think again about this issue.’

The terminology – ‘readiness for school’ – is markedly blunter than the references to a reception baseline in the consultation response. There is nothing in the response about the outcomes of these tests being published, nor anything about ‘an overall school grade’.

Does this suggest that decisions have already been made that were not communicated in the consultation response?


Timeline, options, questions

Several pieces of further work are required in short order to inform schools and providers about what will be required – and to enable both to prepare for introduction of the assessments from September 2015. All these should feature in the ‘to do’ list below.

One might reasonably have hoped that – especially given the long delay – some attempt might have been made to publish suggested draft criteria for the baseline alongside the consultation response. The fact that even preliminary research into existing practice has not been undertaken is a cause for concern.

Although the baseline will be introduced from September 2015, there is a one-year interim measure which can only apply to all-through primary schools:

  • They can opt out of the Year R baseline measure entirely, relying instead on KS1 outcomes as their baseline; or
  • They can use an approved Year R baseline assessment and have this cohort’s progress measured at the end of KS2 (which will be in 2022) by either the Year R or the KS1 baseline, whichever demonstrates the most progress.

In the period up to and including 2021, progress will continue to be measured from the end of KS1. So learners who complete KS2 in 2021 for example will be assessed on progress since their KS1 tests in 2017.

Junior and middle schools will also continue to use a KS1 baseline.

Arrangements for infant and first schools are still to be determined, another rather worrying omission at this stage in proceedings.

It is also clear that all-through primary schools (and infant/first schools?) will continue to be able to opt out from the Year R baseline from September 2016 onwards, since the response says:

‘Schools that choose not to use an approved baseline assessment from 2016 will be judged on an attainment floor standard alone’.

Hence the Year R baseline check is entirely optional and a majority of schools could choose not to undertake it.

However, they would need to be confident of meeting the demanding 85% attainment threshold in the floor standard.

They might be wise to postpone that decision until the pitch of the progress expectation is determined. For neither the Year R baseline nor the amount of progress that learners are expected to make from their starting point in Year R is yet defined.

This latter point applies at the average school level (for the purposes of the floor standard) and in respect of the individual learner. For example, if a four year-old is particularly precocious in, say, maths, what scaled scores must they register seven years later to be judged to have made sufficient progress?

There are several associated questions that follow on from this.

Will it be in schools’ interests to acknowledge that they have precocious four year-olds at all? Will the Year R baseline reinforce the tendency to use Reception to bring all children to the same starting point in readiness for Year 1, regardless of their precocity?

Will the moderation arrangements be hard-edged enough to stop all-through primary schools gaming the system by artificially depressing their baseline outcomes?

Who will undertake this moderation and how much will it cost? Will not the decision to permit schools to choose from a range of measures unnecessarily complicate the moderation process and add to the expense?

The consultation response neither poses these questions nor supplies answers.

The future shape of end KS1 and end KS2 assessment

.

What assessment will take place?

At KS1 learners will be assessed in:

  • Reading – test plus teacher assessment
  • Writing – test (of grammar, punctuation and spelling) plus teacher assessment
  • Speaking and listening – teacher assessment
  • Maths – test plus teacher assessment
  • Science  – teacher assessment

The new test of grammar, punctuation and spelling did not feature in the original consultation and has presumably been introduced to strengthen the marker of progress to which four year-olds should aspire at age seven.

The draft test specifications for the KS1 tests in reading, GPS and maths outline the requirements placed on the test developers, so it is straightforward to compare the specifications for reading and maths with the current tests.

The GPS test will include a 20 minute written grammar and punctuation task; a 20 minute test comprising short grammar, punctuation and vocabulary questions; and a 15 minute spelling task.

There is a passing reference to further work on KS1 moderation which is included in the ‘to do’ list below.

At KS2 learners will be assessed in

  • Reading – test plus teacher assessment
  • Writing – test (of grammar, punctuation and spelling) plus teacher assessment
  • Maths – test plus teacher assessment
  • Science  – teacher assessment plus a science sampling test.

Once again, the draft test specifications – reading, GPS, maths and science sampling – describe the shape of each test and the content they are expected to assess.

I will leave it to experts to comment on the content of the tests.

 .

Academies and free schools

It is important to note that the framing of this content – by means of detailed ‘performance descriptors’ – means that the freedom academies and free schools enjoy in departing from the national curriculum will be largely illusory.

I raised this issue back in February 2013:

  • ‘We know that there will be a new grading system in the core subjects at the end of KS2. If this were to be based on the ATs as drafted, it could only reflect whether or not learners can demonstrate that they know, can apply and understand ‘the matters, skills and processes specified’ in the PoS as a whole. Since there is no provision for ATs that reflect sub-elements of the PoS – such as reading, writing, spelling – grades will have to be awarded on the basis of separate syllabuses for end of KS2 tests associated with these sub-elements.
  • This grading system must anyway be applied universally if it is to inform the publication of performance tables. Since some schools are exempt from National Curriculum requirements, it follows that grading cannot be derived directly from the ATs and/or the PoS, but must be independent of them. So this once more points to end of KS2 tests based on entirely separate syllabuses which nevertheless reflect the relevant part of the draft PoS. The KS2 arrangements are therefore very similar to those planned at KS4.’

I have more to say about the ‘performance descriptors’ below.

 .

Single tests for all learners

A critical point I want to emphasise at this juncture – not mentioned at all in the consultation document or the response – is the test development challenge inherent in producing single papers suitable for all learners, regardless of their attainment.

We know from the response that the P-scales will be retained for those who are unable to access the end of key stage tests. (Incidentally, the content of the P-scales will remain unchanged so they will not be aligned with the revised national curriculum, as suggested in the consultation document.)

There will also be provision for pupils who are working ‘above the P-scales but below the level of the test’.

Now the P-scales are for learners working below level 1 (in old currency). This is the first indication I have seen that the tests may not cater for the full range from Level 1-equivalent to Level 6-equivalent and above. But no further information is provided.

It may be that this is a reference to learners who are working towards level 1 (in old currency) but do not have SEN.

The 2014 KS2 ARA booklet notes:

‘Children working towards level 1 of the national curriculum who do not have a special educational need should be reported to STA as ‘W’ (Working below the level). This includes children who are working towards level 1 solely because they have English as an additional language. Schools should use the code ‘NOTSEN’ to explain why a child working towards level 1 does not have P scales reported. ‘NOTSEN’ replaces the code ‘EAL’ that was used in previous years.’

The consultation document said:

‘We do not propose to develop an equivalent to the current level 6 tests, which are used to challenge the highest-attaining pupils. Key stage 2 national curriculum tests will include challenging material (at least of the standard of the current level 6 test) which all pupils will have the opportunity to answer, without the need for a separate test’.

The draft test specifications make it clear that the tests should:

‘provide a suitable challenge for all children and give every child the opportunity to achieve as high a standard…as possible.’

Moreover:

‘In order to improve general accessibility for all children, where possible, questions will be placed in order of difficulty.’

The development of single tests covering this span of attainment – from level 1 to above level 6 – in which the questions are posed in order of difficulty and even the highest attainers must answer every question, seems to me a very tall order, especially in maths.

More than that, I urgently need persuading that this is not a waste of high attainers’ time and poor assessment practice.

 .

How assessment outcomes will be derived, reported and published

Deriving assessment outcomes

One of the reasons cited for replacing national curriculum levels was the complexity of the system and the difficulty parents experienced in understanding it.

The Ministerial response to the original report from the National Curriculum Expert Panel said:

‘As you rightly identified, the current system is confusing for parents and restrictive for teachers. I agree with your recommendation that there should be a direct relationship between what children are taught and what is assessed. We will therefore describe subject content in a way which makes clear both what should be taught and what pupils should know and be able to do as a result.’

The consultation document glossed the same point thus:

‘Schools will be able to focus their teaching, assessment and reporting not on a set of opaque level descriptions, but on the essential knowledge that all pupils should learn.’

However, the consultation response introduces for the first time the concept of a ‘performance descriptor’.

This term is defined in the glossaries at the end of each draft test specification:

‘Description of the typical characteristics of children working at a particular standard. For these tests, the performance descriptor will characterise the minimum performance required to be working at the appropriate standard for the end of the key stage.’

Essentially this is a collective term for something very similar to old-style level descriptions.

Except that, in the case of the tests, they are all describing the same level of performance.

They have been rendered necessary by the odd decision to provide only a single generic attainment target for each programme of study. But, as noted back in February 2013, the test developers need a more sophisticated framework on which to base their assessments.

According to the draft test specifications they will also be used

‘By a panel of teachers to set the standards on the new tests following their first administration in May 2016’.

When it comes to teacher assessment, the consultation response says:

‘New performance descriptors will be introduced to inform the statutory teacher assessments at the end of key stage one [and]…key stage two.’

But there are two models in play simultaneously.

In four cases – science at KS1 and reading, maths and science at KS2 – there will be ‘a single performance descriptor of the new expected standard’, in the same way as there are in the test specifications.

But in five cases – reading, writing, speaking and listening and maths at KS1; and writing at KS2:

‘teachers will assess pupils as meeting one of several performance descriptors’.

These are old-style level descriptors by another name. They perform exactly the same function.

The response says that the KS1 teacher assessment performance descriptors will be drafted by an expert group for introduction in autumn 2014. It does not mention whether KS2 teacher assessment performance descriptors will be devised in the same way and to the same timetable.

 .

Reporting assessment outcomes to parents

When it comes to reporting to parents, there will be three different arrangements in play at both KS1 and KS2:

  • Test results will be reported by means of scaled scores (of which more in a moment).
  • One set of teacher assessments will be reported by selecting from a set of differentiated performance descriptors.
  • A second set of teacher assessments will be reported according to whether learners have achieved a single threshold performance descriptor.

This is already significantly more complex than the previous system, which applied the same framework of national curriculum levels across the piece.

It seems that KS1 test outcomes will be reported as straightforward scaled scores (though this is only mentioned on page 8 of the main text of the response and not in Annex B, which compares the new arrangements with those currently in place).

But, in the case of KS2:

‘Parents will be provided with their child’s score alongside the average for their school, the local area and nationally. In the light of the consultation responses, we will not give parents a decile ranking for their child due to concerns about whether decile rankings are meaningful and their reliability at individual pupil level.’

The consultation document proposed a tripartite reporting system comprising:

  • A scaled score for each KS2 test, derived from raw test marks and built around a ‘secondary readiness standard’. This standard would be set at a scaled score of 100, which would remain unchanged. It was suggested for illustrative purposes that a scale based on the current national curriculum tests might run from 80 to 130.
  • An average scaled score in each test for other pupils nationally with the same prior attainment at the baseline. Comparison of a learner’s scaled score with the average scaled score would show whether they had made more or less progress than the national average.
  • A national ranking in each test – expressed in terms of deciles – showing how a learner’s scaled score compared with the range of performance nationally.

The latter has been dispensed with, given that 35% of consultation respondents disagreed with it, but there were clearly technical reservations too.

In its place, the ‘value added’ progress measure has been expanded so that there is a comparison with other pupils in the learner’s own school and the ‘local area’ (which presumably means local authority). This beefs up the progression element in reporting at the expense of information about the attainment level achieved.

So at the end of KS2 parents will receive scaled scores and three average scaled scores for each of reading, writing and maths – twelve scores in all – plus four performance descriptors, of which three will be singleton threshold descriptors (reading, maths and science) and one will be selected from a differentiated series (writing). That makes sixteen assessment outcomes altogether, provided in four different formats.
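As a check on the arithmetic, the tally can be reproduced in a few lines (a sketch of the counting only; the subject and descriptor groupings are as summarised above):

```python
# Counting the assessment outcomes reported to parents at the end of KS2,
# as described in the consultation response.
subjects_with_scaled_scores = ["reading", "writing", "maths"]
averages_per_subject = ["school", "local area", "national"]

# Each subject yields the child's own scaled score plus three averages.
scaled_score_outcomes = len(subjects_with_scaled_scores) * (1 + len(averages_per_subject))

threshold_descriptors = 3  # reading, maths and science: single threshold descriptor
graded_descriptors = 1     # writing: selected from a differentiated series

print(scaled_score_outcomes)  # 12
print(scaled_score_outcomes + threshold_descriptors + graded_descriptors)  # 16
```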

The consultation response tells us nothing more about the range of the scale that will be used to provide scaled scores. We do not even know if it will be the same for each test.

The draft test specifications say that:

‘The exact scale for the scaled scores will be determined following further analysis of trialling data. This will include a full review of the reporting of confidence intervals for scaled scores.’

But they also contain this worrying statement:

‘The provision of a scaled score will aid in the interpretation of children’s performance over time as the scaled score which represents the expected standard will be the same year on year. However, at the extremes of the scaled score distribution, as is standard practice, the scores will be truncated such that above and below a certain point, all children will be awarded the same scaled score in order to minimise the effect for children at the ends of the distribution where the test is not measuring optimally.’

This appears to suggest that scaled scores will not accurately describe performance at the extremes of the distribution, because the tests will not measure such performance accurately. This might be describing a statistical truism, but it again raises the question of whether the highest attainers are being short-changed by the selected approach.
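The effect of truncation can be illustrated with a toy sketch. The 80 to 130 range below is the consultation document's illustrative scale, not a confirmed decision, and the clamping function is purely hypothetical:

```python
# Toy illustration of scaled-score truncation at the extremes of the scale.
# The 80-130 range is illustrative only; the real scale is yet to be decided.
def truncated_scaled_score(notional_score, floor=80, ceiling=130):
    """Clamp a notional scaled score to the published limits of the scale."""
    return max(floor, min(ceiling, notional_score))

# Pupils whose performance differs markedly at the top of the distribution
# would nevertheless receive the same reported score.
print(truncated_scaled_score(128))  # 128
print(truncated_scaled_score(135))  # 130
print(truncated_scaled_score(142))  # 130
```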

.

Publication of assessment outcomes

The response introduces the idea that ‘a suite of indicators’ will be published on each school’s own website in a standard format. These are:

  • The average progress made by pupils in reading, writing and maths. (This is presumably relevant to both KS1 and KS2 and to both tests and teacher assessment.)
  • The percentage of pupils reaching the expected standard in reading, writing and mathematics at the end of key stage 2. (This is presumably relevant to both tests and teacher assessment.)
  • The average score of pupils in their end of key stage 2 assessments. (The final word suggests teacher assessment as well as tests, even though there will not be a score from the former.)
  • The percentage of pupils who achieve a high score in all areas at the end of key stage 2. (Does ‘all areas’ imply something more than statutory tests and teacher assessments? Does it mean treating each area separately, or providing details only of those who have achieved high scores across all areas?)

The latter is the only reference to high attainers in the entire response. It does not give any indication of what will count as a high score for these purposes. Will it be designed to catch the top-third of attainers or something more demanding, perhaps equivalent to the top decile?

A decision has been taken not to report the outcomes of assessment against the P-scales, because such outcomes are perceived to need relatively more contextualisation than other results.

And, as noted above, HMCI let slip the fact that the outcomes of reception baselines would also be published, but apparently in the form of a single overall grade.

We are not told when these requirements will be introduced, but presumably they must be in place to report the outcomes of assessments undertaken in spring 2016.

Additionally:

‘So that parents can make comparisons between schools, we would like to show each school’s position in the country on these measures and present these results in a manner that is clear for all audiences to understand. We will discuss how best to do so with stakeholders, to ensure that the presentation of the data is clear, fair and statistically robust.’

This suggests inclusion in the 2016 School Performance Tables, but this is not stated explicitly.

Indeed, apart from references to the publication of progress measures in the 2022 Performance Tables, there is no explicit coverage of their contribution in the response, nor any reference to the planned supporting data portal, or how data will be distributed between the Tables and the portal.

The original consultation document gave several commitments on the future content of performance tables. They included:

  • How many of a school’s pupils are amongst the highest attaining nationally, by showing the percentage of pupils achieving a high scaled score in each subject.
  • Measures to show the attainment and progress of learners attracting the Pupil Premium.
  • Comparison of each school’s performance with that of schools with similar intakes.

None are mentioned here, nor are any of the suggestions advanced by respondents taken up.

Floor standards

Changes are proposed to the floor standards with effect from September 2016.

This section of the response begins by committing to:

‘…a new floor standard that holds schools to account both on the progress that they make and on how well their pupils achieve.’

But the plans set out subsequently do not meet this description.

The progress element of the current floor standard relates to any of reading, writing or mathematics but, under the new floor standard, it will relate to all three of these together.

An all-through primary school must demonstrate that:

‘…pupils make sufficient progress at key stage 2 from their starting point…’

As we have noted above, all-through primaries can opt to use the KS1 baseline or the Year R baseline in 2015. Moreover, from 2016 they can choose not to use the Year R baseline and be assessed solely on the attainment measure in the floor standards (see below).

Junior and middle schools obviously apply the KS1 baseline, while arrangements for infant and first schools have yet to be finalised.

What constitutes ‘sufficient progress’ is not defined. Annex C of the response says:

‘For 2016 we will set the precise extent of progress required once key stage 2 tests have been sat for the first time.’

Presumably this will be progress from KS1 to KS2, since progress from the Year R baseline will not be introduced until 2023.

The attainment element of the new floor standards is for schools to have 85% or more of pupils meeting the new, higher threshold standard at the end of KS2 in all of reading, writing and maths. The text says explicitly that this threshold is ‘similar to a level 4b under the current system’.

Annex C clarifies that this will be judged by the achievement of a scaled score of 100 or more in each of the reading and maths tests, plus teacher assessment that learners have reached the expected standard in writing (so the GPS test does not count in the same way, simply informing the teacher assessment).

As noted above, this is a far bigger ask than the current reference to 65% of learners meeting the expected (and lower 4c) standard. The summary at the beginning of the response refers to it as ‘a challenging aspiration’:

‘Over time we expect more and more schools to achieve this standard.’

The statement in the first paragraph of this section of the response led us to believe that these two requirements – for progress and attainment respectively – would be combined, so that schools would be held to account for both (unless, presumably, they exercised their right to opt out of the Year R baseline assessment).

But this is not the case. Schools need only achieve one or the other.

It follows that schools with a very high performing intake may exceed the floor standards on the basis of all-round high attainment alone, regardless of the progress made by their learners.

The reason for this provision is unclear, though one suspects that schools with an extremely high attaining intake, whether at Reception or Year 3, will be harder pressed to achieve sufficient progress, presumably because some ceiling effects come into play at the end of KS2.

This in turn might suggest that the planned tests do not have sufficient headroom for the highest attainers, even though they are supposed to provide similar challenge to level 6 and potentially extend beyond it.

Meanwhile, schools with less than stellar attainment results will be obliged to follow the progress route to jump the floor standard. This too will be demanding because all three domains will be in play.
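The either/or character of the new floor standard, as described in the response, reduces to a one-line rule (a sketch only: the 85% figure is from the response, while ‘sufficient progress’ remains undefined and is a boolean placeholder here):

```python
# Sketch of the new floor standard's either/or logic: a school is above the
# floor if it meets the 85% attainment threshold OR makes sufficient progress.
def above_floor(pct_meeting_expected_standard, made_sufficient_progress):
    attainment_route = pct_meeting_expected_standard >= 85
    return attainment_route or made_sufficient_progress

print(above_floor(90, False))  # True: high attainment alone suffices
print(above_floor(60, True))   # True: the progress route
print(above_floor(60, False))  # False: below the floor
```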

There will have been some internal modelling undertaken to judge how many schools would be likely to fall short of the floor standards given these arrangements and it would be very useful to know these estimates, however unreliable they prove to be.

In their absence, one suspects that the majority of schools will be below the floor standards, at least initially. That of course materially changes the nature and purpose of the standards.

To Do List

The response and the draft specifications together contain a long list of work to be carried out over the next two years or so. I have included below my best guess as to the latest possible date for each decision to be completed and communicated:

  • Decide how progress will be measured for infants and first schools between the Year R baseline and the end of KS1 (April 2014)
  • Make available to schools a ‘small number’ of sample test questions for each key stage and subject (Summer 2014)
  • Work with experts to establish the criteria for the Year R baseline (September 2014)
  • KS1 [and KS2?] teacher assessment performance descriptors to be drafted by an expert group (September 2014)
  • Complete and report outcomes of a study with schools that already use Year R baseline assessments (December 2014)
  • Decide how Year R baseline assessments will be moderated (December 2014)
  • Publish a list of assessments that meet the Year R baseline criteria (March 2015)
  • Decide how Year R baseline results will be communicated to parents and to Ofsted (March 2015)
  • Make available to schools a full set of sample materials including tests and mark schemes for all KS1 and KS2 tests (September 2015)
  • Complete work with Ofsted and teachers to improve KS1 moderation (September 2015)
  • Provide further information to enable teachers to assess pupils at the end of KS1 and KS2 who are ‘working above the P-scales but below the level of the test’ (September 2015)
  • Decide whether to move to external moderation of P-scale teacher assessment (September 2015)
  • Agree with stakeholders how to compare schools’ performance on a suite of assessment outcomes published in a standard format (September 2015)
  • Publish all final test frameworks (Autumn 2015)
  • Introduce new requirements for schools to publish a suite of assessment outcomes in a standard format (Spring 2016)
  • Panels of teachers use performance descriptors to set the standards on the new tests following their first administration in May 2016 (Summer 2016)
  • Define what counts as sufficient progress from the Year R baseline to end KS1 and end KS2 respectively (Summer 2016)

Conclusion

Overall the response is rather more cogent and coherent than the original consultation document, though there are several inconsistencies and many sins of omission.

Drawing together the key issues emerging from the commentary above, I would highlight twelve key points:

  • The declared aims express the policy direction clumsily and without conviction. The ultimate aspirations are universal ‘secondary readiness’ (though expressed in broader terms), ‘no child left behind’ and ‘every child fulfilling their potential’ but there is no real effort to reconcile these potentially conflicting notions into a consensual vision of what primary education is for. Moreover, an inconvenient truth lurks behind these statements. By raising expectations so significantly – 4b equivalent rather than 4c; 85% over the attainment threshold rather than 65%; ‘sufficient progress’ rather than median progress and across three domains rather than one – there will be much more failure in the short to medium term. More learners will fall behind and fall short of the thresholds; many more schools are likely to undershoot the floor standards. It may also prove harder for some learners to demonstrate their potential. It might have been better to acknowledge this reality and to frame the vision in terms of creating the conditions necessary for subsequent progress towards the ultimate aspirations.
  • Younger children are increasingly caught in the crossbeam from the twin searchlights of assessment and accountability. HMCI’s subsequent intervention has raised the stakes still further. This creates obvious tensions in the sector which can be traced back to disagreements over the respective purposes of early years and primary provision and how they relate to each other. (HMCI’s notion of ‘school readiness’ is no doubt as narrow to early years practitioners as ‘secondary readiness’ is to primary educators.) But this is not just a theoretical point. Additional demands for focused inspection, moderation and publication of outcomes all carry a significant price tag. It must be open to question whether the sheer weight of assessment activity is optimal and delivers value for money. Should a radical future Government – probably with a cost-cutting remit – have rationalisation in mind?
  • Giving schools the freedom to choose from a range of Year R baseline assessment tools also seems inherently inefficient and flies in the face of the clear majority of consultation responses. We are told nothing of the perceived quality of existing services, none of which can – by definition – satisfy these new expectations without significant adjustment. It will not be straightforward to construct a universal and child-friendly instrument that is a sufficiently strong predictor of Level 4b-equivalent performance in KS2 reading, writing and maths assessments undertaken seven years later. Moreover, there will be a strong temptation for the Government to pitch the baseline higher than current expectations, so matching the realignment at the other end of the process. Making the Reception baseline assessment optional – albeit with strings attached – seems rather half-hearted, almost an insurance against failure. Effective (and expensive) moderation may protect against widespread gaming, but the risk remains that Reception teachers will be even more predisposed to prioritise universal school readiness over stretching their more precocious four year-olds.
  • The task of designing an effective test for all levels of prior attainment at the end of key stage 2 is equally fraught with difficulty. The P-scales will be retained (in their existing format, unaligned with the revised national curriculum) for learners with special needs working below the equivalent of what is currently level 1. There will also be undefined provision ‘for those working above the level of the P-scales but below the level of the test’, even though the draft test development frameworks say:

‘All eligible children who are registered at maintained schools, special schools, or academies (including free schools) in England and are at the end of key stage 2 will be required to take the…test, unless they have taken it in the past.’

And this applies to all learners other than those in the exempted categories set out in the ARA booklets. The draft specifications add that test questions will be placed in order of difficulty. I have grave difficulty in understanding how such assessments can be optimal for high attainers and fear that this is bad assessment practice.

  • On top of this there is the worrying statement in the test development frameworks that scaled scores will be ‘truncated’ at the extremes of the distribution. This does not fill one with confidence that the highest and lowest attainers will have their test performance properly recognised and reported.
  • The necessary invention of ‘performance descriptors’ removes any lingering illusion that academies and free schools have significant freedom to depart from the national curriculum, at least as far as the core subjects are concerned. It is hard to understand why these descriptors could not have been published alongside the programmes of study within the national curriculum.
  • The ‘performance descriptors’ in the draft test specifications carry all sorts of health warnings that they are inappropriate for teacher assessment because they cover only material that can be assessed in a written test. But there will be significant overlap between the test and teacher assessment versions, particularly in those that describe threshold performance at the equivalent of level 4b. For we know now that there will also be hierarchies of performance descriptors – aka level descriptors – for KS1 teacher assessment in reading, writing, speaking and listening and maths, as well as for KS2 teacher assessment in writing. Levels were so problematic that it has been necessary to reinvent them!
  • What with scaled scores, average scaled scores, threshold performance descriptors and ‘levelled’ performance descriptors, schools face an uphill battle in convincing parents that the reporting of test outcomes under this system will be simpler and more understandable. At the end of KS2 they will receive 16 different assessments in four different formats. (Remember that parents will also need to cope with schools’ approaches to internal assessment, which may or may not align with these arrangements.)
  • We are told about new requirements to be placed on schools to publish assessment outcomes, but the description is infuriatingly vague. We do not know whether certain requirements apply to both KS1 and 2, and/or to both tests and teacher assessment. The reference to ‘the percentage of pupils who achieve a high score in all areas at the end of key stage 2’ is additionally vague because it is unclear whether it applies to performance in each assessment, or across all assessments combined. Nor is the pitch of the high score explained. This is the only reference to high attainers in the entire response and it raises more questions than it answers.
  • We also have negligible information about what will appear in the school performance tables and what will be relegated to the accompanying data portal. We know there is an intention to compare schools’ performance on the measures they are required to publish and that is all. Much of the further detail in the original consultation document may or may not have fallen by the wayside.
  • The new floor standards have all the characteristics of a last-minute compromise hastily stitched together. The consultation document was explicit that floor standards would:

‘…focus on threshold attainment measures and value-added progress measures’

It anticipated that the progress measure would require average scaled scores of between 98.5 and 99.0 adding:

‘Our modelling suggests that a progress measure set at this level, combined with the 85% threshold attainment measure, would result in a similar number of schools falling below the floor as at present.’

But the analysis of responses fails to report at all on the question ‘Do you have any comments about these proposals for the Department’s floor standards?’ It does include the response to a subsequent question about including an average point score attainment measure in the floor standards (39% of respondents in favour, 31% against). But the main text does not discuss this option at all. It begins by stating that both an attainment and a progress dimension are in play, but then describes a system in which schools can choose one or the other. There is no attempt to quantify ‘sufficient progress’ and no revised modelling of the impact of standards set at this level. We are left with the suspicion that a very significant proportion of schools will not exceed the floor. There is also a potential perverse incentive for schools with very high attaining intakes not to bother about progress at all.

  • Finally, the ‘to do’ list is substantial. Several of the tasks with the tightest deadlines ought really to have been completed ahead of the consultation response, especially given the significant delay. There is nothing about the interaction between this work programme and that proposed by NAHT’s Commission on Assessment. Much of this work would need to take place on the other side of a General Election, while the lead time for assessing KS2 progress against a Year R baseline is a full nine years. This makes the project as a whole particularly vulnerable to the whims of future governments.

I’m struggling to find the right description for the overall package. I don’t think it’s quite substantial or messy enough to count as a dog’s breakfast. But, like a poorly airbrushed portrait, it flatters to deceive. Seen from a distance it appears convincing but, on closer inspection, there are too many wrinkles that have not been properly smoothed out.

GP

April 2014

 

 

Challenging NAHT’s Commission on Assessment

.

This post reviews the Report of the NAHT’s National Commission on Assessment, published on 13 February 2014.

Since I previously subjected the Government’s consultation document on primary assessment and accountability to a forensic examination, I thought it only fair that I should apply the same high standards to this document.

I conclude that the Report is broadly helpful, but there are several internal inconsistencies and a few serious flaws.

Impatient readers may wish to skip the detailed analysis and jump straight to the summary at the end of the post which sets out my reservations in the form of 23 recommendations addressed to the Commission and the NAHT.

.

Other perspectives

Immediate reaction to the Report was almost entirely positive.

The TES included a brief Ministerial statement in its coverage, attributed to Michael Gove:

‘The NAHT’s report gives practical, helpful ideas to schools preparing for the removal of levels. It also encourages them to make the most of the freedom they now have to develop innovative approaches to assessment that meet the needs of pupils and give far more useful information to parents.’

ASCL and ATL both welcomed the Report, as did the National Governors’ Association, though there was no substantive comment from NASUWT or NUT.

The Blogosphere exhibited relatively little interest, although a smattering of posts began to expose some issues:

  • LKMco supported the key recommendations, but wondered whether the Commission might not be guilty of reinventing National Curriculum levels;
  • Mr Thomas Maths was more critical, identifying three key shortcomings, one being the proposed approach to differentiation within assessment;
  • Warwick Mansell, probably because he blogs for NAHT, confined himself largely to summarising the Report, which he found ‘impressive’, though he did raise two key points – the cost of implementing these proposals and how the recommendations relate to the as yet uncertain position of teacher assessment in the Government’s primary assessment and accountability reforms.

All of these points – and others – are fleshed out in the critique below.

.

Background

.

Remit, Membership and Evidence Base

The Commission was first announced in July 2013, when it was described as:

‘a commission of practitioners to shape the future of assessment in a system without levels.’

By September, Lord Sutherland had agreed to Chair the body and its broad remit had been established:

‘To:

  • establish a set of principles to underpin national approaches to assessment and create consistency;
  • identify and highlight examples of good practice; and
  • build confidence in the assessment system by securing the trust and support of officials and inspectors.’

Written evidence was requested by 16 October.

The first meeting took place on 21 October and five more were scheduled before the end of November.

Members’ names were not included at this stage (beyond the fact that NAHT’s President – a Staffordshire primary head – was involved) though membership was now described as ‘drawn from across education’.

Several members had in fact been named in an early October blog post from NAHT and a November press release from the Chartered Institute of Educational Assessors (CIEA) named all but one – NAHT’s Director of Education. This list was confirmed in the published Report.

The Commission had 14 members but only six of them – four primary heads, one primary deputy and one secondary deputy – could be described as practitioners.

The others included two NAHT officials in addition to the secretariat, one being General Secretary Russell Hobby, and one from ASCL; John Dunford, a consultant with several other strings to his bow, one of them being Chairmanship of the CIEA; Gordon Stobart, an academic specialist in assessment with a long pedigree in the field; Hilary Emery, the outgoing Chief Executive of the National Children’s Bureau; and Sam Freedman of Teach First.

There were also unnamed observers from DfE, Ofqual and Ofsted.

The Report says the Commission took oral evidence from a wide range of sources. A list of 25 sources is provided but it does not indicate how much of their evidence was written and how much oral.

Three of these sources are bodies represented on the Commission, two of them schools. Overall seven are from schools. One source is Tim Oates, the former Chair of the National Curriculum Review Expert Panel.

The written evidence is not published and I could find only a handful of responses online.

Overall one has to say that the response to the call for evidence was rather limited. Nevertheless, it would be helpful for NAHT to publish all the evidence it received. It might also be sensible for NAHT to consult formally on key provisions in its Report.

.

Structure of the Report and Further Stages Proposed

The main body of the Report is sandwiched between a foreword by the Chair and a series of Annexes containing case studies, historical and international background.  This analysis concentrates almost entirely on the main body.

The 21 Recommendations are presented twice, first as a list within the Executive Summary and subsequently interspersed within a thematic commentary that summarises the evidence received and also conveys the Commission’s views.

The Executive Summary also sets out a series of Underpinning Principles for Assessment and a Design Checklist for assessment in schools, the latter accompanied by a set of five explanatory notes.

It offers a slightly different version of the Commission’s Remit:

‘In carrying out its task, the Commission was asked to achieve three distinct elements:

  • A set of agreed principles for good assessment
  • Examples of current best practice in assessment that meet these principles
  • Buy-in to the principles by those who hold schools to account.’

These are markedly less ambitious than their predecessors, having dropped the reference to ‘national approaches’ and any aspiration to secure support from officials and inspectors for anything beyond the Principles.

Significantly, the Report is presented as only the first stage in a longer process, an urgent response to schools’ need for guidance in the short term.

It recommends that further work should comprise:

  • ‘A set of model assessment criteria based on the new National Curriculum.’ (NAHT is called upon to develop and promote these. The text says that a model document is being commissioned but doesn’t reveal the timescale or who is preparing it);
  • ‘A full model assessment policy and procedures, backed by appropriate professional development’ that would expand upon the Principles and Design Checklist. (NAHT is called upon to take the lead in this, but there is no indication that they plan to do so. No timescale is attached.)
  • ‘A system-wide review of assessment’ covering ages 2-19. It is not explicitly stated, but one assumes that this recommendation is directed towards the Government. Again no timescale is attached.

The analysis below looks first at the assessment Principles, then the Design Checklist and finally the recommendations plus associated commentary. It concludes with an overall assessment of the Report as a whole.

.

Assessment Principles

As noted above, it seems that national level commitment is only sought in respect of these Principles, but there is no indication in the Report – or elsewhere for that matter – that DfE, Ofsted and Ofqual have indeed signed up to them.

Certainly the Ministerial statement quoted above stops well short of doing so.

The consultation document on primary assessment and accountability also sought comments on a set of core principles to underpin schools’ curriculum and assessment frameworks. It remains to be seen whether the version set out in the consultation response will match those advanced by the Commission.

The Report recommends that schools should review their own assessment practice against the Principles and Checklist together, and that all schools should have their own clear assessment principles, presumably derived or adjusted in the light of this process.

Many of the principles are unexceptionable, but there are a few interesting features that are directly relevant to the commentary below.

For it is of course critical to the internal coherence of the Report that the Design Checklist and recommendations are entirely consistent with these Principles.

I want to highlight three in particular:

  • ‘Assessment is inclusive of all abilities…Assessment embodies, through objective criteria, a pathway of progress and development for every child…Assessment objectives set high expectations for learners’.

One assumes that ‘abilities’ is intended to stand proxy for both attainment and potential, so that there should be ‘high expectations’ and a ‘pathway of progress and development’ for the lowest and highest attainers alike.

  • ‘Assessment places achievement in context against nationally standardised criteria and expected standards’.

This begs the question whether the ‘model document’ containing assessment criteria commissioned by NAHT will be ‘nationally standardised’ and, if so, what standardisation process will be applied.

  • ‘Assessment is consistent…The results are readily understandable by third parties…A school’s results are capable of comparison with other schools, both locally and nationally’.

The implication behind these statements must be that results of assessment in each school are transparent and comparable through the accountability regime, presumably by means of the performance tables (and the data portal that we expect to be introduced to support them).

This cannot be taken as confined to statutory tests, since the text later points out that:

‘The remit did not extend to KS2 tests, floor standards and other related issues of formal accountability.’

It isn’t clear, from the Principles at least, whether the Commission believes that teacher assessment outcomes should also be comparable. Here, as elsewhere, the Report does a poor job of distinguishing between statutory teacher assessment and assessment internal to the school.

.

Design Checklist

 

Approach to Assessment and Use of Assessment

The Design Checklist is described as:

‘an evaluation checklist for schools seeking to develop or acquire an assessment system. They could also form the seed of a revised assessment policy.’

It is addressed explicitly to schools and comprises three sections covering, respectively, a school’s approach to assessment, method of assessment and use of assessment.

The middle section is by far the most significant and also the most complex, requiring five explanatory notes.

I deal with the more straightforward first and third sections before turning to the middle one.

‘Our approach to assessment’ simply makes the point that assessment is integral to teaching and learning, while also setting expectations for regular, universal professional development and ‘a senior leader who is responsible for assessment’.

It is not clear whether this individual is the same as, or additional to, the ‘trained assessment lead’ mentioned in the Report’s recommendations.

I can find no justification in the Report for the requirement that this person must be a senior leader.

A more flexible approach would be preferable, in which the functions to be undertaken are outlined and schools are given flexibility over how those are distributed between staff. There is more on this below.

The final section ‘Our use of assessment’ refers to staff:

  • Summarising and analysing attainment and progress;
  • Planning pupils’ learning to ensure every pupil meets or exceeds expectations (Either this is a counsel of perfection, or expectations for some learners are pitched below the level required to satisfy the assessment criteria for the subject and year in question. The latter is much more likely, but this is confusing since satisfying the assessment criteria is also described in the Checklist in terms of ‘meeting…expectations’.)
  • Analysing data across the school to ensure all pupils are stretched while the vulnerable and those at risk make appropriate progress (‘appropriate’ is not defined within the Checklist itself but an explanatory note appended to the central section – see below – glosses this phrase);
  • Communicating assessment information each term to pupils and parents through ‘a structured conversation’ and the provision of ‘rich, qualitative profiles of what has been achieved and indications of what they [ie parents as well as pupils] need to do next’; and
  • Celebrating a broad range of achievements, extending across the full school curriculum and encompassing social, emotional and behavioural development.

.

Method of Assessment: Purposes

‘Our method of assessment’ is by far the longest section, containing 11 separate bullet points. It could be further subdivided for clarity’s sake.

The first three bullets are devoted principally to some purposes of assessment. Some of this material might be placed more logically in the ‘Our Use of Assessment’ section, so that the central section is shortened and restricted to methodology.

The main purpose is stipulated as ‘to help teachers, parents and pupils plan their next steps in learning’.

So the phrasing suggests that assessment should help to drive forward the learning of parents and teachers, as well as that of pupils. I’m not sure whether this is deliberate or accidental.

Two subsidiary purposes are mentioned: providing a check on teaching standards and support for their improvement; and providing a comparator with other schools via collaboration and the use of ‘external tests and assessments’.

It is not clear why these three purposes are singled out. There is some overlap with the Principles but also a degree of inconsistency between the two pieces of documentation. It might have been better to cross-reference them more carefully.

In short, the internal logic of the Checklist and its relationship with the Principles could both do with some attention.

The real meat of the section is incorporated in the eight remaining bullet points. The first four are about what pupils are assessed against and when that assessment takes place. The last four explain how assessment judgements are differentiated, evidenced and moderated.

.

Method of Assessment: What Learners Are Assessed Against – and When

The next four bullets specify that learners are to be assessed against ‘assessment criteria which are short, discrete, qualitative and concrete descriptions of what a pupil is expected to know and be able to do.’

These are derived from the school curriculum ‘which is composed of the National Curriculum and our own local design’ (Of course that is not strictly the position in academies, as another section of the Report subsequently points out.)

The criteria ‘for periodic assessment are arranged into a hierarchy setting out what children are normally expected to have mastered by the end of each year’.

Each learner’s achievement ‘is assessed against all the relevant criteria at appropriate times of the school year’.

.

The Span of the Assessment Criteria

The first explanatory note (A) clarifies that the assessment criteria are ‘discrete, tangible descriptive statements of attainment’ derived from ‘the National Curriculum (and any school curricula)’.

There is no repetition of the provision in the Principles that they should be ‘nationally standardised’ but ‘there is little room for meaningful variety’, even though academies are not obliged to follow the National Curriculum and schools have complete flexibility over the remainder of the school curriculum.

The Recommendations have a different emphasis, saying that NAHT’s model criteria should be ‘based on the new National Curriculum’ (Recommendation 6), but the clear impression here is that they will encompass the National Curriculum ‘and any school curricula’ alike.

This inconsistency needs to be resolved. NAHT might be better off confining its model criteria to the National Curriculum only – and making it clear that even these may not be relevant to academies.

.

The Hierarchy of Assessment Criteria

The second explanatory note (B) relates to the arrangement of the assessment criteria

‘…into a hierarchy, setting out what children are normally expected to have mastered by the end of each year’.

This note is rather muddled.

It begins by suggesting that a hierarchy divided chronologically by school year is the most natural choice, because:

‘The curriculum is usually organised into years and terms for planned delivery’

That may be true, but only the Programmes of Study for the three core subjects are organised by year, and each clearly states that:

‘Schools are…only required to teach the relevant programme of study by the end of the key stage. Within each key stage, schools therefore have the flexibility to introduce content earlier or later than set out in the programme of study. In addition, schools can introduce key stage content during an earlier key stage if appropriate.’

All schools – academies and non-academies alike – therefore enjoy considerable flexibility over the distribution of the Programmes of Study between academic years.

(Later in the Report – in the commentary preceding the first six recommendations – the text mistakenly suggests that the entirety of ‘the revised curriculum is presented in a model of year-by-year progress’ (page 14). It does not mention the provision above.)

The note goes on to suggest that the Commission has chosen a different route, not because of this flexibility, but because ‘children’s progress may not fit neatly into school years’:

‘…we have chosen the language of a hierarchy of expectations to avoid misunderstandings. Children may be working above or below their school year…’

But this is not an absolute hierarchy of expectations – in the sense that learners are free to progress entirely according to ability (or, more accurately, their prior attainment) rather than in age-related lock steps.

In a true hierarchy of expectations, learners would be able to progress as fast or as slowly as they are able to, within the boundaries set by:

  • On one hand, high expectations, commensurate challenge and progression;
  • On the other hand, protection against excessive pressure and hot-housing and a judicious blending of faster pace with more breadth and depth (of which more below).

This is no more than a hierarchy by school year with some limited flexibility at the margins.

.

The timing of assessment against the criteria

The third explanatory note (C) confirms the Commission’s assumption that formal assessments will be conducted at least termly – and possibly more frequently than that.

It adds:

‘It will take time before schools develop a sense of how many criteria from each year’s expectations are normally met in the autumn, spring and summer terms, and this will also vary by subject’.

This is again unclear. It could mean that a future aspiration is to judge progress termly, by breaking down the assessment criteria still further – so that a learner who met the assessment criteria for, say, the autumn term is deemed to be meeting the criteria for the year as a whole at that point.

Without this additional layer of lock-stepping, presumably the default position for the assessments conducted in the autumn and spring terms is that learners will still be working towards the assessment criteria for the year in question.

The note also mentions in passing that:

‘For some years to come, it will be hard to make predictions from outcomes of these assessments to the results in KS2 tests. Such data may emerge over time, although there are question marks over how reliable predictions may be if schools are using incompatible approaches and applying differing standards of performance and therefore cannot pool data to form large samples.’

This is one of very few places where the Report picks up on the problems that are likely to emerge from the dissonance between internal and external statutory assessment.

But it avoids the central issue: that the approach to internal assessment it advocates may not be entirely compatible with predicting future achievement in the KS2 tests. If so, its value is seriously diminished for parents and teachers, let alone the learners themselves. This issue also reappears below.

.

Method of Assessment: How Assessment Judgements are Differentiated, Evidenced and Moderated

The four final bullet points in this section of the Design Checklist explain that all learners will be assessed as either ‘developing’, ‘meeting’ or ‘exceeding’ each relevant criterion for that year.

Learners deemed to be exceeding the relevant criteria in a subject for a given year ‘will also be assessed against the criteria in that subject for the next year.’

Assessment judgements are supported by evidence comprising observations, records of work and test outcomes and are subject to moderation by teachers in the same school and in other schools to ensure they are fair, reliable and valid.

I will set moderation to one side until later in the post, since that too lies outside the scope of methodology.

.

Differentiation against the hierarchy of assessment criteria

The fourth explanatory note (D) addresses the vexed question of differentiation.

As readers may recall, the Report by the National Curriculum Review Expert Panel failed abjectly to explain how they would provide stretch and challenge in a system that focused exclusively on universal mastery and ‘readiness to progress’, saying only that further work was required to address the issue.

Paragraph 8.21 implied that they favoured what might be termed an ‘enrichment and extension’ model:

‘There are issues regarding ‘stretch and challenge’ for those pupils who, for a particular body of content, grasp material more swiftly than others. There are different responses to this in different national settings, but frequently there is a focus on additional activities that allow greater application and practice, additional topic study within the same area of content, and engagement in demonstration and discussion with others…These systems achieve comparatively low spread at the end of primary education, a factor vital in a high proportion of pupils being well positioned to make good use of more intensive subject-based provision in secondary schooling.’

Meanwhile, something akin to the P Scales might come into play for those children with learning difficulties.

On this latter point, the primary assessment and accountability consultation document said DfE would:

‘…explore whether P-scales should be reviewed so that they align with the revised national curriculum and provide a clear route to progress to higher attainment.’

We do not yet know whether this will happen, but Explanatory Note B to the Design Checklist conveys the clear message that the P-Scales need to be retained:

‘…must ensure we value the progress of children with special needs as much as any other group. The use of P scales here is important to ensure appropriate challenge and progression for pupils with SEN.’

By contrast, for high attainers, the Commission favours what might be called a ‘mildly accelerative’ model whereby learners who ‘exceed’ the assessment criteria applying to a subject for their year group may be given work that enables them to demonstrate progress against the criteria for the year above.

I describe it as mildly accelerative because there is no provision for learners to be assessed more than one year ahead of their chronological year group. This is a fairly low ceiling to impose on such accelerative progress.

It is also unclear whether the NAHT’s model assessment criteria will cover Year 7, the first year of the KS3 Programmes of Study, to enable this provision to extend into Year 6.

The optimal approach for high attainers would combine the ‘enrichment and extension’ approach apparently favoured by the Expert Panel with an accelerative approach that provides a higher ceiling, to accommodate those learners furthest ahead of their peers.

High attaining learners could then access a customised blend of enrichment (more breadth), extension (greater depth) and acceleration (faster pace) according to their needs.

This is good curricular practice and it should be reflected in assessment practice too, otherwise the risk is that a mildly accelerative assessment process will have an undesirable wash-back effect on teaching and learning.

Elsewhere, the Report advocates the important principle that curriculum, assessment and pedagogy should be developed in parallel, otherwise there is a risk that one – typically assessment – has an undesirable effect on the others. This would be an excellent exemplar of that statement.

The judgement whether a learner is exceeding the assessment criteria for their chronological year would be evidenced by enrichment and extension activity as well as by pre-empting the assessment criteria for the year ahead. Exceeding the criteria in terms of greater breadth or more depth should be equally valued.

This more rounded approach, incorporating a higher ceiling, should also be supported by the addition of a fourth ‘far exceeded’ judgement, otherwise the ‘exceeded’ judgement has to cover far too wide a span of attainment, from those who are marginally beyond their peers to those who are streets ahead.

These concerns need urgently to be addressed, before NAHT gets much further with its model criteria.

.

The aggregation of criteria

In order to make the overall judgement for each subject, learners’ performance against individual assessment criteria has to be combined to give an aggregate measure.

The note says:

‘The criteria themselves can be combined to provide the qualitative statement of a pupil’s achievements, although teachers and schools may need a quantitative summary. Few schools appear to favour a pure binary approach of yes/no. The most popular choice seems to be a three phase judgement of working towards (or emerging, developing), meeting (or mastered, confident, secure, expected) and exceeded. Where a student has exceeded a criterion, it may make sense to assess them also against the criteria for the next year.’

This, too, begs some questions. The statement above is consistent with one of the Report’s central recommendations:

‘Pupil progress and achievement should be communicated in terms of descriptive profiles rather than condensed to numerical summaries (although schools may wish to use numerical data for internal purposes).’

Frankly it seems unlikely that such ‘condensed numerical summaries’ can be kept hidden from parents. Indeed, one might argue that they have a reasonable right to know them.

These aggregations – whether qualitative or quantitative – will be differentiated at three levels, according to whether the learner best fits a ‘working towards’, ‘meeting’ or ‘exceeding’ judgement for the criteria relating to the appropriate year in each programme of study.

I have just recommended that there needs to be an additional level at the top end, to remove undesirable ceiling effects that lower expectations and are inconsistent with the Principles set out in the Report. I leave it to others to judge whether, if this was accepted, a fifth level is also required at the lower end to preserve the symmetry of the scale.

There is also a ‘chicken and egg’ issue here. It is not clear whether a learner must already be meeting some of the criteria for the succeeding year in order to show they are exceeding the criteria for their own year – or whether assessment against the criteria for the succeeding year is one potential consequence of a judgement that they are exceeding the criteria for their own year.

This confusion is reinforced by a difference of emphasis between the checklist – which says clearly that learners will be assessed against the criteria for the succeeding year if they exceeded the criteria for their own – and the explanatory note, which says only that this may happen.

Moreover, the note suggests that this applies criterion by criterion – ‘where a student has exceeded a criterion’ – rather than after the criteria have been aggregated, which is the logical assumption from the wording in the checklist – ‘exceeded the relevant criteria’.

This too needs clarifying.

.

.

Recommendations and Commentary

I will try not to repeat in this section material already covered above.

I found that the recommendations did not always sit logically with the preceding commentary, so I have departed from the subsections used in the Report, grouping the material into four broad sections: further methodological issues; in-school and school-to-school support; national support; and phased implementation.

Each section leads with the relevant Recommendations and folds in additional relevant material from different sections of the commentary. I have repeated recommendations where they are relevant to more than one section.

.

Further methodological issues

Recommendation 4: Pupils should be assessed against objective criteria rather than ranked against each other

Recommendation 5: Pupil progress and achievements should be communicated in terms of descriptive profiles rather than condensed to numerical summaries (although schools may wish to use numerical data for internal purposes).

Recommendation 6: In respect of the National Curriculum, we believe it is valuable – to aid communication and comparison – for schools to be using consistent criteria for assessment. To this end, we call upon NAHT to develop and promote a set of model assessment criteria based on the new National Curriculum.

The commentary discusses the evolution of National Curriculum levels, including the use of sub-levels and their application to progress as well as achievement. In doing so, it summarises the arguments for and against the retention of levels.

In favour of retention:

  • The system of levels provides a common language used by schools to summarise attainment and progress;
  • It is argued (by some professionals) that parents have grown up with levels and have an adequate grasp of what they mean;
  • The numerical basis of levels was useful to schools in analysing and tracking the performance of large numbers of pupils;
  • The decision to remove levels was unexpected and caused concern within the profession, especially as it was also announced that being ‘secondary ready’ was to be associated with the achievement of Level 4B;
  • If levels are removed, they must be replaced by a different common language, or at least ‘an element of compatibility or common understanding’ should several different assessment systems emerge.

In favour of removal:

  • It is argued (by the Government) that levels are not understood by parents and other stakeholders;
  • The numerical basis of levels does not have the richness of a more rounded description of achievement: the important narrative behind the headline number is often lost through over-simplification; and
  • There are adverse effects from labelling learners with levels.

The Commission is also clear that the Government places too great a reliance on tests, particularly for accountability purposes. This has narrowed the curriculum and resulted in ‘teaching to the test’.

It also creates other perverse incentives, including the inflation of assessment outcomes for performance management purposes or, conversely, the deflation of assessment outcomes to increase the rate of progress during the subsequent key stage.

Moreover, curriculum, assessment and pedagogy must be mutually supportive. While the Government has not allowed the assessment tail to wag the curricular dog, the Commission insists that:

‘…curriculum and assessment should be developed in tandem.’

Self-evidently, this has not happened, since the National Curriculum was finalised way ahead of the associated assessment arrangements which, in the primary sector, are still unconfirmed.

There is a strong argument that such assessment criteria should have been developed by the Government and made integral to the National Curriculum.

Indeed, in Chapter 7 of its Report on ‘The Framework for the National Curriculum’, the National Curriculum Expert Panel proposed that attainment targets should be retained, not in the form of level descriptors but as ‘statements of specific learning outcomes related to essential knowledge’ that would be ‘both detailed and precise’. They might be presented alongside the Programmes of Study.

The Government ignored this, opting instead for a single, very broad standard attainment target in each programme of study:

‘By the end of each key stage, pupils are expected to know, apply and understand the matters, skills and processes specified in the relevant programme of study.’

As I pointed out in a previous post, one particularly glaring omission from the Consultation Document on Primary Assessment and Accountability was any explanation of how Key Stage 2 tests and statutory teacher assessments would be developed from these singleton ‘lowest common denominator’ attainment targets, especially in a context where academies, while not obliged to follow the National Curriculum, would undertake the associated tests.

We must await the long-delayed response to the consultation to see if it throws any light on this matter.

Will it commit the Government to producing a framework, at least for statutory tests in the core subjects, or will it throw its weight behind the NAHT’s model criteria instead?

I have summarised this section of the Report in some detail as it is the nearest it gets to providing a rational justification for the approach set out in the recommendations above.

The model criteria appear confined to the National Curriculum at this point, though we have already noted that this is not the case elsewhere in the Report.

I have also discussed briefly the inconsistency in permitting the translation of descriptive profiles into numerical data ‘for internal purposes’, but undertook to develop that further, for there is a wider case that the Report does not entertain.

We know that there will be scores attached to KS2 tests, since those are needed to inform parents and for accountability purposes.

The Primary Assessment and Accountability consultation document proposed a tripartite approach:

  • Scaled scores to show attainment, built around a new ‘secondary-ready’ standard, broadly comparable with the current Level 4B;
  • Allocation to a decile within the range of scaled scores achieved nationally, showing attainment compared with one’s peers; and
  • Comparison with the average scaled score of those nationally with the same prior attainment at the baseline, to show relative progress.

Crudely speaking, the first of these measures is criterion-referenced while the second and third are norm-referenced.
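As a purely illustrative sketch of that distinction, here is how the three proposed measures might be computed for a single pupil. Everything below is invented for illustration – the scaled score range, the ‘secondary-ready’ threshold and the cohort data – since none of these had been confirmed at the time of writing:

```python
import statistics

# Hypothetical cohort: pupil -> (prior-attainment group at baseline, KS2 scaled score).
# All values are invented for illustration only.
cohort = {
    "A": (1, 92), "B": (1, 97), "C": (2, 101),
    "D": (2, 105), "E": (3, 108), "F": (3, 113),
}
SECONDARY_READY = 100  # assumed threshold, standing in for the current Level 4B

def meets_standard(score):
    """Measure 1 (criterion-referenced): attainment against a fixed standard."""
    return score >= SECONDARY_READY

def decile(score, all_scores):
    """Measure 2 (norm-referenced): which tenth of the national distribution
    of scaled scores the pupil falls into."""
    below = sum(s < score for s in all_scores)
    return min(10, 1 + (10 * below) // len(all_scores))

def relative_progress(pupil):
    """Measure 3 (norm-referenced): the pupil's score minus the average score
    of those nationally with the same prior attainment at the baseline."""
    group, score = cohort[pupil]
    peers = [s for g, s in cohort.values() if g == group]
    return score - statistics.mean(peers)

scores = [s for _, s in cohort.values()]
print(meets_standard(105))     # True
print(decile(105, scores))     # 6 (sixth decile of this tiny illustrative cohort)
print(relative_progress("D"))  # 2: two points above similar-baseline peers
```

Note that only the first measure depends on a fixed standard; the other two shift with the performance of the national cohort, which is why a pupil’s decile or relative progress could fall even while their attainment improves.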

We do not yet know whether these proposals will proceed – there has been some suggestion that deciles at least will be dropped – but parents will undoubtedly want schools to be able to tell them what scaled scores their children are on target to achieve, and how those compare with the average for those with similar prior attainment.

It will be exceptionally difficult for schools to convey that information within the descriptive profiles, insofar as they relate to English and maths, without adopting the same numerical measures.

It might be more helpful to schools if the NAHT’s recommendations recognised that fact. For the brutal truth is that, if schools’ internal assessment processes do not respond to this need, they will have to set up parallel processes that do so.

In order to derive descriptive profiles, there must be objective assessment criteria that supply the building blocks, hence the first part of Recommendation 4. But I can find nothing in the Report that explains explicitly why pupils cannot also be ranked against each other. This can only be a veiled and unsubstantiated objection to deciles.

Of course it would be quite possible to rank pupils at school level and, in effect, that is what schools will do when they condense the descriptive profiles into numerical summaries.

The real position here is that such rankings would exist, but would not be communicated to parents, for fear of ‘labelling’. But the labelling has already occurred, so the resistance is attributable solely to communicating these numerical outcomes to parents. That is not a sustainable position.

.

In-school and school-to-school support

Recommendation 1: Schools should review their assessment practice against the principles and checklist set out in this report. Staff should be involved in the evaluation of existing practice and the development of a new, rigorous assessment system and procedures to enable the school to promote high quality teaching and learning.

Recommendation 2: All schools should have clear assessment principles and practices to which all staff are committed and which are implemented. These principles should be supported by school governors and accessible to parents, other stakeholders and the wider school community.

Recommendation 3: Assessment should be part of all school development plans and should be reviewed regularly. This review process should involve every school identifying its own learning and development needs for assessment. Schools should allocate specific time and resources for professional development in this area and should monitor how the identified needs are being met.

Recommendation 7 (part): Schools should work in collaboration, for example in clusters, to ensure a consistent approach to assessment. Furthermore, excellent practice in assessment should be identified and publicised…

Recommendation 9: Schools should identify a trained assessment lead, who will work with other local leads and nationally accredited assessment experts on moderation activities.

Recommendation 16: All those responsible for children’s learning should undertake rigorous training in formative, diagnostic and summative assessment, which covers how assessment can be used to support teaching and learning for all pupils, including those with special educational needs. The government should provide support and resources for accredited training for school assessment leads and schools should make assessment training a priority.

Recommendation 20: Schools should be asked to publish their principles of assessment from September 2014, rather than being required to publish a detailed assessment framework, which instead should be published by 2016. The development of the full framework should be outlined in the school development plan with appropriate milestones that allow the school sufficient time to develop an effective model.

All these recommendations are perfectly reasonable in themselves, but it is worth reflecting for a while on the likely cost and workload implications, particularly for smaller primary schools:

Each school must have a ‘trained assessment lead’ who may or may not be the same as the ‘senior leader who is responsible for assessment’ mentioned in the Design Checklist. There is no list of responsibilities for that person, but it would presumably include:

  • Leading the review of assessment practice and developing a new assessment system;
  • Leading the definition of the school’s assessment principles and practices and communicating these to governors, parents, stakeholders and the wider community;
  • Lead responsibility for the coverage of assessment within the school’s development plan and the regular review of that coverage;
  • Leading the identification and monitoring of the school’s learning and development needs for assessment;
  • Ensuring that all staff receive appropriate professional development – including ‘rigorous training in formative diagnostic and summative assessment’;
  • Leading the provision of in-school and school-to-school professional development relating to assessment;
  • Allocating time and resources for all assessment-related professional development and monitoring its impact;
  • Leading collaborative work with other schools to ensure a consistent approach to assessment;
  • Dissemination of effective practice;
  • Working with other local assessment leads and external assessment experts on moderation activities.

And, on top of this, there is a range of unspecified additional responsibilities associated with the statutory tests.

It is highly unlikely that this range of responsibilities could be undertaken effectively by a single person in less than half a day a week, and that is a bare minimum. There will also be periods of more intense pressure when a substantially larger time allocation is essential.

The corresponding salary cost for a ‘senior leader’ might be £3,000-£4,000 per year, not to mention the cost of undertaking the other responsibilities displaced.

There will also need to be a sizeable school budget and time allocation for staff to undertake reviews, professional development and moderation activities.

Moderation itself will bear a significant cost. Internal moderation carries the greater opportunity cost, but external moderation will typically be more expensive in cash terms.

Explanatory note (E), attached to the Design Checklist, says:

‘The exact form of moderation will vary from school to school and from subject to subject. The majority of moderation (in schools large enough to support it) will be internal but all schools should undertake a proportion of external moderation each year, working with partner schools and local agencies.’

Hence the cost of external moderation will fall disproportionately on smaller schools with smaller budgets.

It would be wrong to suggest that this workload is completely new. To some extent these various responsibilities will be undertaken already, but the Commission’s recommendations are effectively a ratcheting up of the demand on schools.

Rather than insisting on these responsibilities being allocated to a single individual with other senior management responsibilities, it might be preferable to set out the responsibilities in more detail and give schools greater flexibility over how they should be distributed between staff.

Some of these tasks might require senior management input, but others could be handled by other staff, including paraprofessionals.

.

National support

Recommendation 7 (part): Furthermore, excellent practice in assessment should be identified and publicised, with the Department for Education responsible for ensuring that this is undertaken.

Recommendation 8 (part): Schools should be prepared to submit their assessment to external moderators, who should have the right to provide a written report to the head teacher and governors setting out a judgement on the quality and reliability of assessment in the school, on which the school should act. The Commission is of the view that at least some external moderation should be undertaken by moderators with no vested interest in the outcomes of the school’s assessment. This will avoid any conflicts of interest and provide objective scrutiny and broader alignment of standards across schools.

Recommendation 9: Schools should identify a trained assessment lead, who will work with other local leads and nationally accredited assessment experts on moderation activities.

Recommendation 11: The Ofsted school inspection framework should explore whether schools have effective assessment systems in place and consider how effectively schools are using pupil assessment information and data to improve learning in the classroom and at key points of transition between key stages and schools.

Recommendation 14: Further work should be undertaken to improve training for assessment within initial teacher training (ITT), the newly qualified teacher (NQT) induction year and on-going professional development. This will help to build assessment capacity and support a process of continual strengthening of practice within the school system.

Recommendation 15: The Universities’ Council for the Education of Teachers (UCET) should build provision in initial teacher training for delivery of the essential assessment knowledge.

Recommendation 16: All those responsible for children’s learning should undertake rigorous training in formative, diagnostic and summative assessment, which covers how assessment can be used to support teaching and learning for all pupils, including those with special educational needs. The government should provide support and resources for accredited training for school assessment leads and schools should make assessment training a priority.

Recommendation 17: A number of pilot studies should be undertaken to look at the use of information technology (IT) to support and broaden understanding and application of assessment practice.

Recommendation 19: To assist schools in developing a robust framework and language for assessment, we call upon the NAHT to take the lead in expanding the principles and design checklist contained in this report into a full model assessment policy and procedures, backed by appropriate professional development.

There are also several additional proposals in the commentary that do not make it into the formal recommendations:

  • Schools should be held accountable for the quality of their assessment practice as well as their assessment results, with headteachers also appraising teachers on their use of assessment. (The first part of this formulation appears in Recommendation 11 but not the second.) (p17);
  • It could be useful for the teaching standards to reflect further assessment knowledge, skills and understanding (p17);
  • A national standard in assessment practice for teachers would be a useful addition (p18);
  • The Commission also favoured the approach of having a lead assessor to work with each school or possibly a group of schools, helping to embed good practice across the profession (p18).

We need to take stock of the sheer scale of the infrastructure that is being proposed and its likely cost.

In respect of moderation alone, the Report is calling for sufficient external moderators, ‘nationally accredited assessment experts’ and possibly lead assessors to service some 17,000 primary schools.

Even if we assume that these roles are combined in the same person and that each person can service, say, 25 schools, that still demands a cadre approaching 700 people, who also need to be supported, managed and trained.

If they are serving teachers there is an obvious opportunity cost. Providing a service of this scale would cost tens of millions of pounds a year.

Turning to training and professional development, the Commission is proposing:

  • Accredited training for some 17,000 school assessment leads (with an ongoing requirement to train new appointees and refresh the training of those who undertook it too far in the past);
  • ‘Rigorous training in formative, diagnostic and summative assessment, which covers how assessment can be used to support teaching and learning for all pupils, including those with special educational needs’ for everyone deemed responsible for children’s learning, so not just teachers – this will include hundreds of thousands of people in the primary sector alone; and
  • Revitalised coverage of assessment in ITT and induction, on top of the requisite professional development package.

The Report says nothing of the cost of developing, providing and managing this huge training programme, which would add further tens of millions of pounds a year.

I am plucking a figure out of the air, but it would be reasonable to suggest that moderation and training costs combined might require an annual budget of some £50 million – and quite possibly double that. 
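To show where a figure of that order comes from, the arithmetic above can be set out explicitly. The 17,000 schools, the 25-school caseload and the £3,000-£4,000 assessment lead estimate come from the text; the moderator unit cost is my own assumption, so this is a sketch of the order of magnitude only:

```python
# Back-of-envelope costing of the Commission's proposals.
# Unit costs marked 'assumed' are illustrative assumptions, not official figures.

PRIMARY_SCHOOLS = 17_000
SCHOOLS_PER_MODERATOR = 25       # caseload assumed in the discussion above
MODERATOR_ANNUAL_COST = 60_000   # assumed salary, on-costs, management and training

moderators = -(-PRIMARY_SCHOOLS // SCHOOLS_PER_MODERATOR)  # ceiling division
moderation_bill = moderators * MODERATOR_ANNUAL_COST

ASSESSMENT_LEAD_COST = 3_500     # mid-point of the per-school £3,000-£4,000 estimate
leads_bill = PRIMARY_SCHOOLS * ASSESSMENT_LEAD_COST

print(moderators)             # 680 - 'approaching 700'
print(moderation_bill / 1e6)  # 40.8 (in £m a year)
print(leads_bill / 1e6)       # 59.5 (in £m a year, as displaced leadership time)
```

On these assumptions, moderation alone approaches the lower end of the range suggested above before a single person has been trained, which is why the true combined total could plausibly be double £50 million.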

Unless one argues that the testing regime should be replaced by a national sampling process – and while the Report says some of the Commission’s members supported that, it stops short of recommending it – there are no obvious offsetting savings.

It is disappointing that the Commission made no effort at all to quantify the cost of its proposals.

These recommendations provide an excellent marketing opportunity for some of the bodies represented on the Commission.

For example, the CIEA press release welcoming the Report says:

‘One of the challenges, and one that schools will need to meet, is in working together, and with local and national assessment experts, to moderate their judgements and ensure they are working to common standards across the country. The CIEA has an important role to play in training these experts.’

Responsibility for undertaking pilot studies on the role of IT in assessment is not allocated, but one assumes it would be overseen by central government and also funded by the taxpayer.

Any rollout from the pilots would have additional costs attached and would more than likely create additional demand for professional development.

The reference to DfE taking responsibility for sharing excellent practice is already a commitment in the consultation document:

‘…we will provide examples of good practice which schools may wish to follow. We will work with professional associations, subject experts, education publishers and external test developers to signpost schools to a range of potential approaches.’ (paragraph 3.8).

Revision of the School Inspection Framework will require schools to give due priority to the quality of their assessment practice, though Ofsted might reasonably argue that it is already there.

Paragraph 116 of the School Inspection Handbook says:

‘Evidence gathered by inspectors during the course of the inspection should include… the quality and rigour of assessment, particularly in nursery, reception and Key Stage 1.’

We do not yet know whether NAHT will respond positively to the recommendation that it should go beyond the model assessment criteria it has already commissioned by leading work to expand the Principles and Design Checklist into ‘a full model assessment policy and procedures backed by appropriate professional development’.

There was no reference to such plans in the press release accompanying the Report.

Maybe the decision could not be ratified in time by the Association’s decision-making machinery – but this did not prevent the immediate commissioning of the model criteria.

.

Phased Implementation

Recommendation 10: Ofsted should articulate clearly how inspectors will take account of assessment practice in making judgements and ensure both guidance and training for inspectors is consistent with this.

Recommendation 12: The Department for Education should make a clear and unambiguous statement on the teacher assessment data that schools will be required to report to parents and submit to the Department for Education. Local authorities and other employers should provide similar clarity about requirements in their area of accountability.

Recommendation 13: The education system is entering a period of significant change in curriculum and assessment, where schools will be creating, testing and revising their policies and procedures. The government should make clear how they will take this into consideration when reviewing the way they hold schools accountable as new national assessment arrangements are introduced during 2014/15. Conclusions about trends in performance may not be robust.

Recommendation 18: The use by schools of suitably modified National Curriculum levels as an interim measure in 2014 should be supported by the government. However, schools need to be clear that any use of levels in relation to the new curriculum can only be a temporary arrangement to enable them to develop, implement and embed a robust new framework for assessment. Schools need to be conscious that the new curriculum is not in alignment with the old National Curriculum levels.

Recommendation 20: Schools should be asked to publish their principles of assessment from September 2014, rather than being required to publish a detailed assessment framework, which instead should be published by 2016. The development of the full framework should be outlined in the school development plan with appropriate milestones that allow the school sufficient time to develop an effective model.

Recommendation 21: A system wide review of assessment should be undertaken. This would help to repair the disjointed nature of assessment through all ages, 2-19.

The Commission quite rightly identifies a number of issues caused by the implementation timetable, combined with continuing uncertainty over aspects of the Government’s plans.

At the time of writing, the response to the consultation document has still not been published (though it was due in autumn 2013) yet schools will be implementing the new National Curriculum from this September.

The Report says:

‘There was strong concern expressed about the requirement for schools to publish their detailed curriculum and assessment framework in September 2014.’

This is repeated in Recommendation 20, together with the suggestion that this timeline should be amended so that only a school’s principles for assessment need be published by this September.

I have been trying to pin down the source of this requirement.

Schedule 4 of The School Information (England) (Amendment) Regulations 2012 does not require the publication of a detailed assessment framework, referring only to

‘The following information about the school curriculum—

(a)  in relation to each academic year, the content of the curriculum followed by the school for each subject and details as to how additional information relating to the curriculum may be obtained;

(b)  in relation to key stage 1, the names of any phonics or reading schemes in operation; and

(c)  in relation to key stage 4—

(i)            a list of the courses provided which lead to a GCSE qualification,

(ii)          a list of other courses offered at key stage 4 and the qualifications that may be acquired.’

I could find no Government guidance stating unequivocally that this requires schools to carve up all the National Curriculum programmes of study into year-by-year chunks. (Though there is no additional burden attached to publication if they have already undertaken this task for planning purposes.)

There are references to the publication of Key Stage 2 results (which will presumably need updating to reflect the removal of levels), but nothing on the assessment framework.

Moreover, the DfE mandatory timeline says that from the Spring Term of 2014:

‘All schools must publish their school curriculum by subject and academic year, including their provision of personal, social, health and economic education (PSHE).’

(The hyperlink returns one to the Regulations quoted above.)

There is no requirement for publication of further information in September.

I wonder therefore if this is a misunderstanding. I stand to be corrected if readers can point me to the source.

It may arise from the primary assessment and accountability consultation document, which discusses publication of curricular details and then proceeds immediately to discuss the relationship between curriculum and assessment:

‘Schools are required to publish this curriculum on their website…In turn schools will be free to design their approaches to assessment, to support pupil attainment and progression. The assessment framework must be built into the school curriculum, so that schools can check what pupils have learned and whether they are on track to meet expectations at the end of the key stage, and so that they can report regularly to parents.’ (paras 3.4-3.5)

But this conflation isn’t supported by the evidence above and, anyway, these are merely proposals.

That said, it must be assumed that the Commission consulted its DfE observer on this point before basing recommendations on this interpretation.

If the observer’s response was consistent with the Commission’s interpretation, then it is apparently inconsistent with all the material so far published by the Department!

It may be necessary for NAHT to obtain clarification of this point given the evidence cited above.

That aside, there are issues associated with the transition from the current system to the future system.

The DfE’s January 2014 ‘myths and facts’ publication says:

‘As part of our reforms to the national curriculum, the current system of “levels” used to report children’s attainment and progress will be removed from September 2014. Levels are not being banned, but will not be updated to reflect the new national curriculum and will not be used to report the results of national curriculum tests. Key Stage 1 and Key Stage KS2 [sic] tests taken in the 2014 to 2015 academic year will be against the previous national curriculum, and will continue to use levels for reporting purposes

Schools will be expected to have in place approaches to formative assessment that support pupil attainment and progression. The assessment framework should be built into the school curriculum, so that schools can check what pupils have learned and whether they are on track to meet expectations at the end of the key stage, and so that they can report regularly to parents. Schools will have the flexibility to use approaches that work for their pupils and circumstances, without being constrained by a single national approach.’

The reference here to having approaches in place – rather than the publication of a ‘detailed curriculum and assessment framework’ – would not seem wildly inconsistent with the Commission’s idea that schools should establish their principles by September 2014, and develop their detailed assessment frameworks iteratively over the two succeeding years. However, the Government needs to clarify the position.

Since Key Stage 2 tests will not dispense with levels until May 2016 (and they will be published in the December 2015 Performance Tables), there will be an extended interregnum in which National Curriculum Levels will continue to have official currency.

Moreover, levels may still be used in schools – they are not being banned – though they will not be aligned to the new National Curriculum.

The Report says:

‘…it is important to recognise that, even if schools decide to continue with some form of levels, the new National Curriculum does not align to the existing levels and level descriptors and this alignment is a piece of work that needs to be undertaken now.’ (p19).

However, the undertaking of this work does not feature in the Recommendations, unless it is implicit in the production by NAHT of ‘a full model assessment policy and procedures’, which seems unlikely.

One suspects that the Government would be unwilling to endorse such a process, even as a temporary arrangement, since what is to stop schools from continuing to use this new improved levels structure more permanently?

The Commission would appear to be on stronger ground in asking Ofsted to make allowances during the interregnum (which is what I think Recommendation 10 is about) especially given that, as Recommendation 13 points out, evidence of ‘trends in performance may not be robust’.

The point about clarity over teacher assessment is well made – and one hopes it will form part of the response to the primary assessment and accountability consultation document when that is eventually published.

The Report itself could have made progress in this direction by establishing and maintaining a clearer distinction between statutory and internal teacher assessment.

The consultation document itself made clear that KS2 writing would continue to be assessed via teacher assessment rather than a test, and, moreover:

‘At the end of each key stage schools are required to report teacher assessment judgements in all national curriculum subjects to parents. Teachers will judge whether each pupil has met the expectations set out in the new national curriculum. We propose to continue publishing this teacher assessment in English, mathematics and science, as Lord Bew recommended.’ (para 3.9)

But what it does not say is what requirements will be imposed to ensure consistency across this data. Aside from KS2 writing, will they also be subject to the new scaled scores, and potentially deciles too?

Until schools have answers to that question, they cannot consider the overall shape of their assessment processes.

The final recommendation, for a system-wide review of assessment from 2-19, is whistling in the wind, especially given the level of disruption already caused by the decision to remove levels.

Neither this Government nor the next is likely to act upon it.

 

Conclusion

The Commission’s Report moves us forward in broadly the right direction.

The Principles, Design Checklist and wider recommendations help to fill some of the void created by the decision to remove National Curriculum levels, the limited nature of the primary assessment and accountability consultation document and the inordinate delay in the Government’s response to that consultation.

We are in a significantly better place as a consequence of this work being undertaken.

But there are some worrying inconsistencies in the Report as well as some significant shortcomings to the proposals it contains. There are also several unanswered questions.

Not to be outdone, I have bound these up into a series of recommendations directed at NAHT and its Commission. There are 23 in all and I have given mine letters rather than numerals, to distinguish them from the Commission’s own recommendations.

  • Recommendation A: The Commission should publish all the written evidence it received.
  • Recommendation B: The Commission should consult on key provisions within the Report, seeking explicit commitment to the Principles from DfE, Ofqual and Ofsted.
  • Recommendation C: The Commission should ensure that its Design Checklist is fully consistent with the Principles in all respects. It should also revisit the internal logic of the Design Checklist.
  • Recommendation D: So far as possible, ahead of the primary assessment and accountability consultation response, the Commission should distinguish clearly how its proposals relate to statutory teacher assessment, alongside schools’ internal assessment processes.
  • Recommendation E: NAHT should confirm who it has commissioned to produce model assessment criteria and to what timetable. It should also explain how these criteria will be ‘nationally standardised’.
  • Recommendation F: The Commission should clarify whether the trained assessment lead mentioned in Recommendation 9 is the same as, or different from, the ‘senior leader who is responsible for assessment’ mentioned in the Design Checklist.
  • Recommendation G: The Commission should set out more fully the responsibilities allocated to this role or roles and clarify that schools have flexibility over how they distribute those responsibilities between staff.
  • Recommendation H: NAHT should clarify how the model criteria under development apply – if at all – to the wider school curriculum in all schools and to academies not following the National Curriculum.
  • Recommendation I: NAHT should clarify how the model criteria under development will allow for the fact that in all subjects all schools enjoy flexibility over the positioning of content in different years within the same key stage – and can also anticipate parts of the subsequent key stage.
  • Recommendation J: NAHT should clarify whether the intention is that the model criteria should reflect the allocation of content to specific terms as well as to specific years.
  • Recommendation K: The Commission should explain how its approach to internal assessment will help predict future performance in end of Key Stage tests.
  • Recommendation L: The Commission should shift from its narrow and ‘mildly accelerative’ view of high attainment to accommodate a richer concept that combines enrichment (breadth), extension (depth) and acceleration (faster pace) according to learners’ individual needs.
  • Recommendation M: The Commission should incorporate a fourth ‘far exceeded’ assessment judgement, since the ‘exceeded’ judgement covers too wide a span of attainment.
  • Recommendation N: NAHT should clarify whether its model criteria will extend into KS3, to accommodate assessment against the criteria for at least year 7, and ideally beyond.
  • Recommendation O: The Commission should clarify whether anticipating criteria for a subsequent year is a cause or a consequence of being judged to be ‘exceeding’ expectations in the learner’s own chronological year.
  • Recommendation P: The Commission should confirm that numerical summaries of assessment criteria – as well as any associated ranking positions – should be made available to parents who request them.
  • Recommendation Q: The Commission should explain why schools should be forbidden from ranking learners against each other (or allocating them to deciles).
  • Recommendation R: The Commission should assess the financial impact of its proposals on schools of different sizes.
  • Recommendation S: The Commission should cost its proposals for training and moderation, identifying the burden on the taxpayer and any offsetting savings.
  • Recommendation T: NAHT should clarify its response to Recommendation 19, that it should lead the development of a full model assessment policy and procedures.
  • Recommendation U: The Commission should clarify with DfE its understanding that schools are required to publish a detailed curriculum and assessment framework by September 2014.
  • Recommendation V: The Commission should clarify with DfE the expectation that it should have in place ‘approaches to formative assessment’ and whether the proposed assessment principles satisfy this requirement.
  • Recommendation W: The Commission should clarify whether it is proposing that work is undertaken to align National Curriculum levels with the new National Curriculum and, if so, who it proposes should undertake this.

So – good overall – subject to these 23 reservations!

Some are more significant than others. Given my area of specialism, I feel particularly strongly about those that relate directly to high attainers, especially L and M above.

Those are the two I would nail to the door of 1 Heath Square.

.

GP

March 2014

How High Attainers Feature in School Inspection and Performance Tables (and what to do about it)

 

.

This post explains:

  • How revised Ofsted inspection guidance gives greater prominence to high-attaining learners (or ‘the most able’ in Ofsted terminology).
  • How this differs from the treatment of high attainers in the School Performance Tables as presently formulated.
  • How high attainers feature in current proposals for accountability reform.
  • How schools might respond to inconsistent expectations from each side of the accountability framework and prepare for an uncertain future.

 

Outline of Content

Because of the length of this piece, I have divided it into two parts. Each part has two main sections.

Part One covers:

  • Changes to Ofsted’s inspection guidance. This explains and analyses the key changes to the School Inspection Handbook and Subsidiary Guidance which came into effect from September 2013.
  • Terminology, definitions, measures and data. This examines how Ofsted has begun to use the term ‘most able’ while the Performance Tables refer to ‘high attainers’. It compares the definitions adopted by Ofsted and in the Performance Tables. It discusses the ‘expected levels of progress’ methodology, highlighting a fundamental inconsistency in current guidance, and reflects on whether the accountability system should expect more progress from high attainers.

I have reversed the logical order of these sections to accommodate readers who wish only to understand how Ofsted’s guidance has changed. The second section begins the process of setting those revisions in the context of the wider accountability regime.

Part Two includes:

  • Performance Tables and Proposals for Accountability Reform. This summarises how high attainment and progress are reported in the 2012 Performance Tables and how this will change in 2013. It also offers a comparative analysis of how high attainers’ performance is expected to feature in a reformed accountability framework, for the primary, secondary and post-16 sectors respectively. This is based on the three parallel consultation documents, which begin to explain how the accountability framework will respond to the withdrawal of National Curriculum levels from 2016.
  • How schools should aim to satisfy these expectations. This provides some introductory guidance to shape the development and review of whole school plans to improve support for high attainers. It does not discuss the different ways in which schools can improve their provision – that is a topic for another day – but concentrates on the broad framing of policies and plans. It proposes a basket of key measures, for primary and secondary schools respectively, that fit the current context and can be adjusted to reflect future developments.

Within this post I have drawn together several elements from earlier posts to create the bigger picture. There is some overlap, but I have tried to keep it to a minimum. I hope it is helpful to readers to have all this material within a single frame, focused explicitly on how schools should respond to the challenges presented by the accountability system.

Like all of my posts, this is a free and open access resource (but please observe the provisions of the Creative Commons Licence located at the top right hand corner of this Blog).

And do please use this contact form if you would like to discuss additional, customised advice and support.

 

Changes to Ofsted’s Inspection Guidance

 

Background and Scope

In June 2013, Ofsted published a survey report: ‘The most able students: Are they doing as well as they should in our non-selective secondary schools?’

That same month I produced an extended analysis of the Report drawing out its comparative strengths and weaknesses, as well as summarising the guidance it contains on elements of effective whole school practice.

Readers requiring a full blow-by-blow account of the Report and its contents are cordially invited to digest the older post first.

The recommendations contained in ‘The most able students’ led to the changes recently introduced into the school inspection guidance, which uses the same terminology, rendering ‘most able’ Ofsted’s term of art for inspection purposes henceforward.

The revisions were introduced during the 2013 summer holidays and came into almost immediate effect in September 2013, ostensibly fulfilling the recommendations in ‘The most able students’ that Ofsted should:

  • ‘focus more closely in its inspections on the teaching and progress of the most able students, the curriculum available to them, and the information, advice and guidance provided to the most able students
  • consider in more detail during inspection how well the pupil premium is used to support the most able students from disadvantaged backgrounds
  • report its inspection findings about this group of students more clearly in school inspection, sixth form and college reports.’ (page 11).

We might therefore expect the revisions to embed these priorities – and perhaps also to reflect related issues highlighted in the key findings and recommendations, such as: creating a culture of excellence, primary-secondary transition, progress in KS3 specifically, high expectations, the quality of homework, evaluation of mixed ability teaching, tracking and targeting, information for parents/carers and supporting progression to HE.

It is important to note that, while the source document was confined to non-selective secondary schools, the revisions to the inspection guidance apply to all schools – primary as well as secondary – that fall within scope of The Framework for School Inspection.

This means they cover school sixth forms and even extend to maintained nursery schools.

On the other hand, they exclude 16-19 academies, 16-19 UTCs and 16-19 studio schools, as well as sixth form colleges and FE colleges, all of which are covered by the Common Inspection Framework for Education and Skills.

No equivalent changes have been introduced into that Framework, or the relevant supporting documentation. It follows that there is some inconsistency between the expectations placed on 11-18 secondary schools and on 16-19 institutions.

Provision and support for high attainers is optimal when fully connected and co-ordinated across Years R-13, with particular emphasis on the key transition points at ages 11 and 16. But roughly half of the relevant post-16 population will be attending colleges that are not affected by these changes.

I have deliberately postponed detailed scrutiny of definitions until after this opening section, but it is important at the outset to supply what is conspicuously missing from the inspection guidance: a basic explanation of what Ofsted means by the ‘most able’.

This is not easy to establish, but can be derived from a footnote spread across the bottom of pages 6-7 of ‘The most able students’. In the absence of any statement to the contrary, one can only assume that the transfer of that identical phrase into the guidance means that the definition applied to the phrase has also been transferred.

So, according to this source, the most able in secondary schools are:

‘…students starting secondary school in Year 7 attaining Level 5 or above, or having the potential to attain Level 5 and above, in English (reading and writing) and/or mathematics at the end of Key Stage 2.’

Hence Ofsted means all learners with KS2 Level 5 in English, maths or both, plus those falling below this threshold who nevertheless had the potential to achieve it.

An equivalent definition for KS2 in primary schools (not supplied in ‘The most able students’) would be:

‘Learners starting KS2 having attained Level 3 or above, or having the potential to achieve Level 3 and above in (any element of) English and/or maths at the end of KS1.’

The bracketed phrase is included because a single level for English will not be reported in Primary Performance Tables from 2013.

There is no obvious equivalent for KS1 in primary schools or KS5 in secondary schools, though it would be possible to create similar measures relating to achievement at GCSE and in a Year 1 National Curriculum baseline assessment (assuming the ELGs cannot be made to serve the purpose).

The critical point to bring out at this stage is the sheer size of the most able population as defined on this basis.

For example, if we were to use 2012 KS2 results to calculate the national Year 7 population falling within the secondary definition, it would include 50% of them on the basis of Level 5 achievement alone. Once the ‘potential Level 5s’ are factored in, we are dealing with a clear majority of the year group.

In any given school, this population will vary considerably according to the sector, the year group in question, prior attainment of the intake and how ‘potential to achieve’ is determined.

Many schools might reasonably calculate that all of their pupils – or all but a small minority – fall within scope. Even in schools with the most depressed intakes, this population will be sizeable if generous allowance is made for the impact of disadvantage on learners’ capacity to achieve the specified attainment threshold.

It is helpful to hold in mind a rough sense of the size of the most able population as one begins to engage with the inspection guidance.

.

The Framework for School Inspection

In fact, the School Inspection Framework itself has not been amended at all. Ofsted has sought to adjust its practical application through changes to two supporting documents:

  • The School Inspection Handbook (31 July 2013) which ‘provides instructions and guidance for inspectors conducting inspections… It sets out what inspectors must do and what schools can expect, and provides guidance for inspectors on making their judgements.’
  • The Subsidiary Guidance, which supplements the Handbook with additional guidance for inspectors on the treatment of specific issues during inspection.

This is not entirely satisfactory.

For example, the current version of the Framework stresses that inspections assess whether schools provide an inclusive environment:

‘which meets the needs of all pupils, irrespective of age, disability, gender reassignment, race, religion or belief, sex, or sexual orientation.’ (pp 13-14)

This list may be confined to distinctions that feature in the Equalities legislation, but there is no inherent reason why that should be the case. One might reasonably argue that, if HMI were really serious about inclusion and support for the most able, ‘attainment or ability’ should be added to the list!

It is more concerning that the section of the Framework dealing with pupil achievement says:

‘When judging achievement, inspectors have regard both for pupils’ progress and for their attainment. They take into account their starting points and age. Particular consideration is given to the progress that the lowest attaining pupils are making.’ (p17)

Why shouldn’t equally particular consideration be given to the progress of the highest attaining pupils? If a reference to low attainers is on the face of the Framework, while references to high attainers are confined to the supporting guidance, schools will draw the obvious conclusion about relative priorities.

Elsewhere in the Framework, there are generalised inclusive statements, applied to quality of teaching:

‘Inspectors will consider the extent to which… teaching strategies, including setting appropriate homework, together with support and intervention, match individual needs.’ (p 18)

and to quality of leadership and management:

‘Inspectors will consider the extent to which leaders and managers… demonstrate an ambitious vision for the school and high expectations of all pupils and teachers… provide a broad and balanced curriculum that meets the needs of all pupils, enables all pupils to achieve their full educational potential and make progress in their learning’ (pp 19-20)

but, if these statements are genuinely intended to reflect equality of opportunity, including for the ‘most able’, why has the progress of the lowest attaining learners been singled out beforehand for special attention?

Clearly it was too much to expect amendments on the face of the Framework itself, presumably because they could not be introduced without a formal consultation exercise. The large number of amendments introduced via the supporting guidance – covering a broad spectrum of issues – might have justified a consultation, though it would have delayed their implementation by several months.

But there is nothing to prevent Ofsted from publishing a list of draft amendments to the Framework that, subject to consultation, will be introduced when it is next revised and updated. Such an approach would help schools (and inspectors) to understand much more clearly the intended impact of complementary amendments to the supporting guidance.

.

School Inspection Handbook: Main Text

Prior to this round of amendments, there was a single reference in Paragraph 108 of the Handbook, applying to judgements of the quality of a school:

‘Inspection is primarily about evaluating how well individual pupils benefit from their school. It is important to test the school’s response to individual needs by observing how well it helps all pupils to make progress and fulfil their potential. Depending on the type of school, it may be relevant to pay particular attention to the achievement of:

  • disabled pupils, and those who have special educational needs
  • those with protected characteristics, including Gypsy, Roma and Traveller children, as defined by the Equality Act 2010
  • boys
  • girls
  • the highest and lowest attainers
  • pupils for whom the pupil premium provides support, including:
      • looked after children
      • pupils known to be eligible for free school meals – a school is unlikely to be judged outstanding if these pupils are not making at least good progress
      • children of service families
  • those receiving alternative provision’.

Notice that the relevance of the highest attainers is optional – ‘it may be relevant’ – and depends on the type of school being inspected, rather than being applied universally. It is left to the inspection team to make a judgement call.

Note, too, that the preferred terminology is ‘highest attainers’, rather than ‘the most able’. ‘Highest’ is an absolute term – rather than ‘high’ or ‘higher’ – which might be taken to imply the very extreme of the attainment spectrum, but there is no way of knowing.

This reference to the achievement of the ‘highest attainers’ remains in place, but is now juxtaposed against a series of newly inserted references to ‘the most able’. The former is optional, to be applied at inspectors’ discretion; the latter apply to all settings regardless.

There are no clues to tell us whether Ofsted is using the two terms synonymously, or if they intend to maintain a subtle distinction. The fact that the phrase has not been replaced by ‘the most able’ might suggest the latter, but that presupposes that this was picked up and consciously addressed during what seems to have been a rather cursory redrafting process.

There is no published glossary to inform interpretation of the terminology used in the Framework and its supporting guidance. By contrast, Estyn in Wales has published a ‘Glossary of Inspection Terms’, though that is hardly a model to be emulated, since it does not include Estyn’s own preferred formulation ‘more able and talented’.

The term ‘most able’ now appears in several parts of the main text:

  • Lesson observations must ‘gather evidence about how well individual pupils and particular groups of pupils are learning and making progress, including those with special needs, those for whom the pupil premium provides support and the most able, and assess the extent to which pupils have grown in knowledge’ (para 26);
  • Through meetings with pupils, parents, staff and other stakeholders, inspectors must: ‘gather evidence from a wide range of pupils, including disabled pupils, those with special educational needs, those for whom the pupil premium provides support, pupils who are receiving other forms of support and the most able.’ (para 41);
  • When it comes to judging achievement of pupils at the school, inspectors must: ‘have regard for pupils’ starting points in terms of their prior attainment and age. This includes the progress that the lowest attaining pupils are making and its effect on raising their attainment, and the progress that the most able are making towards attaining the highest levels and grades.’ (para 115);
  • They must also: ‘take account of: the learning and progress across year groups of different groups of pupils currently on the roll of the school, including disabled pupils, those who have special educational needs, those for whom the pupil premium provides support and the most able.’ (para 116).
  • They should take account of: ‘pupils’ progress in the last three years, including that for looked after children, disabled pupils, those who have special educational needs and the most able. Evidence gathered by inspectors during the course of the inspection should include: the proportions making expected progress and the proportions exceeding expected progress in English and in mathematics from each starting point, compared with national figures, for all pupils and for those for whom the pupil premium provides support.’ (para 116)
  • And in relation to Key Stage 1, they should take account of: ‘how well pupils with a lower starting point have made up ground, and the breadth and depth of progress made by the most able.’ (para 117)
  • When it comes to observing the quality of teaching and learning, inspectors must: ‘consider whether…teaching engages and includes all pupils, with work that is challenging enough and that meets their individual needs, including for the most able pupils’ (para 124)

The bulk of these references relate to data-driven judgements of attainment and progress, but it is worth pausing to emphasise the final point.

This, together with the reference to ‘the extent to which [the most able] pupils have grown in knowledge’ is the nearest we get to any explicit reference to the curriculum.

When it comes to qualitative judgement, and a priority for qualitative whole school improvement, schools need to examine how well – and how consistently – their teaching engages, includes and challenges the most able.

Incidentally, there is nothing in these amendments to indicate a preference for setting, though schools might do well to remember HMCI’s previously expressed concerns about:

‘the curse of mixed ability classes without mixed ability teaching’

The third point – about progress – seems to be explicitly and deliberately reinforcing the statement on page 17 of the Framework that I quoted above. But while the Framework mentions only progress by the lowest attaining pupils, the Handbook now emphasises progress by the lowest and highest attaining alike. This is not a model of clarity.

Given the emphasis in ‘The most able students’ it seems odd that there is no explicit reference in the Handbook to those eligible for the Pupil Premium, unless one counts what is said about those who ‘exceed expected progress’ in English and maths, but that is not quite the same thing.

The way in which ‘the most able’ is tacked onto lists of different pupil groups also gives the rather unfortunate impression that these groups are mutually exclusive, rather than overlapping.

So far there is nothing significant about support for the most able to progress to competitive universities, apart from a brief and very general statement in the section on quality of leadership and management, referring to how well leaders and managers:

‘Ensure that the curriculum…provides timely independent information, advice and guidance to assist pupils on their next steps in training, education or employment.’

.

School Inspection Handbook: Level Descriptions

‘The most able’ has been inserted into two sets of descriptors within the Handbook.

In relation to ‘achievement of pupils at the school’:

  • In Outstanding schools: ‘The learning of groups of pupils, particularly those who are disabled, those who have special educational needs, those for whom the pupil premium provides support, and the most able is consistently good or better.’
  • In Good schools: ‘The learning of groups of pupils, particularly those who are disabled, those who have special educational needs, those for whom the pupil premium provides support, and the most able, is generally good.’
  • In schools requiring improvement: there is only a generic ‘Pupils’ achievement requires improvement as it is not good’.
  • In Inadequate schools: ‘Groups of pupils, particularly disabled pupils and/or those who have special educational needs and/or those for whom the pupil premium provides support, and/or the most able, are underachieving.’

And, in the Descriptions for quality of teaching:

  • In Outstanding schools: ‘Much of the teaching in all key stages and most subjects is outstanding and never less than consistently good. As a result, almost all pupils currently on roll in the school, including disabled pupils, those who have special educational needs, those for whom the pupil premium provides support and the most able, are making rapid and sustained progress.’
  • In Good schools: ‘Teaching in most subjects, including English and mathematics, is usually good, with examples of some outstanding teaching. As a result, most pupils and groups of pupils on roll in the school, including disabled pupils, those who have special educational needs, those for whom the pupil premium provides support and the most able, make good progress and achieve well over time.’
  • In Schools Requiring Improvement there is the generic ‘Teaching requires improvement as it is not good’.
  • In Inadequate Schools: ‘As a result of weak teaching over time, pupils or particular groups of pupils, including disabled pupils, those who have special educational needs, those for whom the pupil premium provides support and the most able, are making inadequate progress.’

 Here the dual emphasis on attainment and progress is writ large. I won’t labour the point I have made already about the overlapping nature of the groups listed.

There is nothing here either about the most able in receipt of the Pupil Premium, about curriculum or IAG, so we must continue our search for these missing pieces of the jigsaw within the parallel Subsidiary Guidance.

.

School Inspection Handbook: Postscript

Before we leave the Handbook behind, it is well worth examining one critical section in more detail, especially since it has been amended quite significantly.

Paragraphs 114-117 set out the evidence of attainment and progress that Ofsted inspectors will now draw upon. Because these seem so central to Ofsted’s interest in the most able, I have paraphrased the full list below, applying it exclusively to them.

The exercise illustrates that, if schools are to prioritise improvement by the most able, they must ensure that this is reflected throughout their evidence base. It also helpfully emphasises the importance of extending this effort to learners who attract the Pupil Premium.

Ofsted will want to examine:

  • Learning and progress by the most able across different year groups. Evidence is gathered from: lesson observations; scrutiny of pupils’ work; schools’ records of pupils’ progress and progress of those receiving support from the Pupil Premium; ‘the quality and rigour of assessment’ (particularly in nursery, reception and KS1); discussions with pupils about their work; the views of parents, pupils and staff; discussion with staff and senior leaders; case studies of individual pupils; and listening to pupils read.
  • Progress made by the most able in the last three years. Evidence should include: the proportions making and exceeding expected progress ‘in English and in mathematics from each starting point, compared with national figures, for all pupils and for those for whom the Pupil Premium provides support’; value-added indices for pupils and subjects; ‘other relevant indicators, including value-added data’; performance measures for the sixth form, including success rates; EYFS profile data; and ‘any analysis of progress data presented by the school, including information provided by external organisations’.
  • The most able learners’ attainment in relation to national standards (where available) and compared with all schools, based on data over the last three years, noting any evidence of performance significantly above or below national averages, trends of improvement or decline and inspection evidence of current pupils’ attainment across year groups. The latter will include, where relevant: the proportion of pupils achieving particular standards; capped average point scores; average point scores; pupils’ attainment in reading and writing and in maths; outcomes of the most recent phonics screening check and any follow-up by the schools; and attainment shown by test and exam results not yet validated or benchmarked nationally.
  • Difference in the achievement of the most able for whom the Pupil Premium provides support and others in the school including attainment gaps, particularly in English and maths (these to include differences in average points score in each of English and maths at the end of KS2 and at GCSE); and differences in progress from different starting points (see above).

Curiously, the footnotes attached to the original version of this section ignore the relevance of KS2 Level 6.

‘…starting points at Key Stage 2 include Levels W (and P levels), 1, 2, 3, 4 and 5’

I can only assume that this is an oversight.

.

References in the Subsidiary Guidance

One searches in vain for anything explicit about the curriculum or IAG. It seems that Ofsted decided not to give any prominence to these two critically important and controversial areas.

The section on ‘The Use of Prior Performance Data’ now says:

‘Inspectors should compare a school’s proportions of pupils making expected progress and the school’s proportions of pupils making more than expected progress in English and in mathematics with the national figures for each starting point. Consistency in being close to or above the national figures for pupils at each prior-attainment level, including the most able, is an important aspect of good achievement… Inspectors should pay particular attention to the sizeable prior-attainment groups in the school, and the most able, and note that school proportions below national figures for one starting point should not be considered to be compensated for by school proportions above national figures for another starting point. Inspectors should consider the school and national figures for the most recent year and the previous year, and how much they have changed.’ (para 7)

The insertion of references to ‘the most able’ makes for rather clumsy sentence structure, but does serve to highlight the new emphasis on their progression.

Provision in KS1 is once more singled out, but in a slightly different manner:

‘If all pupils are making good progress and levels of attainment are consistently high, overall achievement between the end of the Early Years Foundation Stage and end of Key Stage 1 is likely to be at least good and may be outstanding. To be outstanding, pupils known to be eligible for free school meals and the most able should be making good or better progress.’ (para 32, final bullet point)

There is a most welcome bullet point in the section about the achievement of disabled learners and those with SEN:

‘A category of ‘need’ such as autistic spectrum disorder, does not by itself indicate expected levels that pupils would usually be at, given their starting points (i.e. one pupil may be working towards 12 A* GCSE grades whereas another pupil of the same age may be working towards Level P6).’

At last a paragraph appears that confirms inspectors’ interest in the most able attracting the Pupil Premium:

‘Inspectors must take account of the performance of the group for whom the pupil premium provides support, however small. Within this group, the progress in English and in mathematics of each different prior-attainment group should be considered and compared with that of the other pupils in the school, using the tables in RAISE online that show proportions making expected progress and proportions exceeding expected progress from each starting level. Inspectors should pay particular attention to the sizeable prior-attainment groups (those containing around 20% or more of the pupils for whom the pupil premium provides support) and the most able.’ (para 8)

Refreshing though it is to see that every school must pay attention to the most able supported by the Pupil Premium, regardless of the number attracting the Premium and how many are amongst the most able, one wonders why this message is not conveyed through the Handbook in similar terms.

Something similar occurs in respect of early entry to GCSE. The Handbook introduces generic concerns about early entry, especially in maths:

‘Inspectors should evaluate the school’s approach to early entry for GCSE mathematics and its impact on achievement and subsequent curriculum pathways. Inspectors should challenge the use of inappropriate early and multiple entry to GCSE examinations, including where pupils stop studying mathematics before the end of Year 11.’

This is subsequently applied to all subjects in the level descriptions for Leadership and management, but only the Subsidiary Guidance relates the issue directly to ability and attainment:

‘Inspectors should investigate whether a policy of early entry to GCSE for pupils is preventing them from making as much progress as they should, for example because:

  • the extensive and inappropriate use of early GCSE entry, particularly in mathematics, puts too much emphasis on attaining a grade C at the expense of developing the understanding necessary to succeed at A level and beyond
  • the policy is having a detrimental impact on the uptake of advanced level courses
  • the widespread use of early GCSE entry and repeated sitting of examinations has encouraged short-term gains in learning but has led to underachievement at GCSE, particularly for able pupils
  • the policy has resulted in a lack of attention to the attainment of the least able
  • opportunities are missed to meet the needs of high-attaining pupils through depth of GCSE study and additional qualifications.

In evaluating any approach to early entry, inspectors should consider the impact not only on the judgement on pupils’ achievement but also on leadership and management in terms of whether the school is providing a curriculum that meets the pupils’ needs.’ (para 34)

Here we have even more terminological confusion, with the use of both ‘able pupils’ and ‘high-attaining pupils’, while ‘most able’ is conspicuous by its absence.

The Handbook and Subsidiary Guidance between them have referred to: ‘highest attaining’, ‘high attaining’, ‘most able’ and ‘able’ without defining any of these terms, or differentiating between them.

 .

Overall

The amendments introduced into the Handbook and Subsidiary Guidance place a stronger emphasis on the most able principally through the repetition of that phrase at various points in the text.

These amendments are focused predominantly on pupil attainment and progress, rather underplaying any wider emphasis on effective whole school practice. References to curricular challenge and IAG for progression to competitive universities are generalised and scant.

The impact on overall Ofsted judgements can best be appreciated by editing together the relevant elements of the two sets of level descriptors referenced above:

  • In outstanding schools the most able pupils’ learning is consistently good or better and they are making rapid and sustained progress.
  •  In good schools the most able pupils’ learning is generally good, they make good progress and achieve well over time.
  • In schools requiring improvement the teaching of the most able pupils and their achievement are not good.
  • In inadequate schools the most able pupils are underachieving and making inadequate progress.

The attainment and progress of the most able supported by the Pupil Premium is integral to these judgements, though this latter point is underplayed in the guidance.

One might have hoped for a more considered and more carefully drafted response, built upon a careful definition of the term, which explains whether it differs from other similar phrases used in these materials and, if so, how.

Unfortunately, there is a real risk that the questionable clarity of the Handbook and Subsidiary Guidance will result in some inconsistency in the application of the Framework, even though the fundamental purpose of such material is surely to achieve the opposite.

A dedicated piece of additional briefing would have been particularly helpful, but there is nothing on this topic in the most recently published package (22 September 2013).

.

Terminology, Definitions, Measures and Data

Some readers may find that parts of this section tell them mostly what they already know. But even those who feel secure in the basics might want to cast an eye over the critical distinctions and issues set out below. Some may want to take issue with certain steps along the path I have negotiated through the tricky terminological issues.

I hope others will find it helpful to have the full scaffolding in place as they grapple with the implications of Ofsted’s new emphasis on the most able learners – and how that relates to the parallel emphasis in the School Performance Tables.

 .

Terminology: Most Able and High Attainers

In the section above, I have faithfully replicated the terminology adopted by Ofsted, while highlighting the problems caused by switching between terms that might or might not be synonymous.

Meanwhile the School Performance Tables have consistently adopted an alternative term: ‘high attainers’.

So what is the distinction – and which terminology should we prefer?

This treatment is necessarily brief and begs many questions that are best addressed in the margins. I shall set out the argument as best I can and move rapidly on.

A failure to distinguish properly between attainment and ability bedevils this field and consistently sullies wider educational debate. The two terms are often used synonymously, especially by economists, who should really know better!

Here is my rough and ready effort at pinning down the distinction in terms that fit the current context:

  • Attainment involves securing specified measurable educational outcomes, typically assessed through graded tests and public examinations (eg KS2 tests, GCSE, A Level). Some authorities (Ofsted included) maintain a distinction between attainment and progress, but it is also used in a general sense to encompass both. Attainment is (only) one dimension of wider educational achievement.
  • Ability is a measure of potential, not a measure of achievement. It may be hidden and/or its realisation obstructed. Consequently it is not easily assessed. Moreover, ability is complex, multifaceted and not synonymous with intelligence. Single identification instruments – for example IQ tests, CAT scores – may well be misleading and/or culturally biased and/or provide an incomplete picture. Some eschew the assessment and identification of ability because of the issues and difficulties associated with the concept. Some deploy questionable identification practice. Others adopt a pragmatic ‘best-fit’ approach, utilising a broad range of qualitative and quantitative evidence including ‘identification through provision’. Attainment-based evidence may feature within this portfolio, but should not be relied on exclusively or excessively otherwise the critical distinction is lost.

The best performers in key stage tests and public examinations are at the top end of the attainment distribution, but not necessarily at the top end of the ability distribution. High attainment may be a proxy for high ability but it is not the same thing, however ability is conceived (which is a separate, complex and highly controversial issue).

Similarly, high-attaining pupils may be regarded as a subset of a school’s gifted and talented population (or whatever alternative terminology it prefers to use) but one might reasonably expect that population also to include other learners who – for a variety and combination of reasons and for the time being at least – are not realising their ability through high attainment.

While some schools may find ability too difficult and controversial a concept to wrestle with (especially since they are no longer expected by the Government to do so), all are pushed by the accountability system to focus on high attainment and on the performance of their high attaining learners.

Schools cannot entirely abdicate from engagement with ability, since their success as judged by the accountability system depends in part on their capacity to unlock high attainment amongst those who are not yet demonstrating it.

But, since their focus is the nurturing of attainment, rather than the nurturing of ability, this can be articulated in terms of the former rather than the latter. Hence the imperative is to maximise the number of high attaining learners and the level at which they attain.

A subset of ‘potential high attainers’ is supported to cross the appropriate threshold. At one extreme, schools may decide that all their learners who are not yet high attaining should be regarded thus. At the other, schools may prefer to focus exclusively on a significantly smaller group of ‘borderline high attainers’.

But schools must balance this attainment imperative against the wider purposes of education and the wider needs of learners, some of which may be influenced by ability. There is always concern that the accountability system overplays attainment at the expense of these wider needs, but that is an argument for another day.

In the past, high attainers may have been regarded as a second-order priority, since emphasis was placed disproportionately on the achievement of key threshold measures set at a lower level and the borderline candidates who could be supported to achieve them. But schools are increasingly driven by the accountability system to improve performance at all levels of prior attainment.

Ofsted’s choice of ‘most able’ is misleading because:

  • Ability and its derivative ‘able’ are heavily loaded and contentious terms. There is comparatively little consensus over what they mean, hence their application without careful definition is always problematic. Ability and attainment are not synonymous but Ofsted’s focus is exclusively on the latter. There is no measure of ability in the School Performance Tables which confine themselves to measures of attainment (including progress) and destination.
  • ‘Most able’ is an absolute term normally denoting those at the extreme of the ability distribution. It suggests a markedly higher threshold than ‘highly able’, ‘more able’ or simply ‘able’. Yet Ofsted’s own definition accounts for some 50% of all learners (see below).

It may be that Ofsted wished to include within its definition some learners who would feature amongst the high attaining population but are underachieving, perhaps as a consequence of disadvantage. But Ofsted is not interested in ability per se, only in its successful conversion to high attainment.

It may be that Ofsted’s choice of terminology was also influenced by their wish to maintain a clear distinction between attainment and progress. Perhaps they were concerned that using the term ‘high attainer’ might confuse this distinction.

Given the considerable scope for confusion I have adopted the terminology ‘high attainers/high attaining learners’ and ‘potential high attainers’. The former means those who have achieved or exceeded a specified assessment outcome and are making commensurate progress. The latter means those who, with appropriate support, might become high attainers. 

 .

Comparing Ofsted and Performance Table Definitions

The Primary and Secondary School Performance Tables report attainment and progress for the pupil population as a whole, but also separately for ‘high attainers’, ‘middle attainers’ and ‘low attainers’.

Each of these groups is defined by reference to their performance in the earliest relevant key stage assessment. Currently KS1 assessment is used for the Primary Tables – though this may change in future – and KS2 for the Secondary Tables.

The Tables report attainment at later key stages by those who achieved or exceeded the initial baseline marker. By this means, and through the expected levels of progress methodology (of which more below), they highlight the improvement made by such learners across one (primary) or two (secondary 11-16) key stages.

The assumption is that a perfect school will ensure that all of its pupils – whether high, middle or low attainers – will successfully achieve the commensurate attainment benchmarks at later key stages and so make at least the expected progress.

Of course, many circumstances can intervene to prevent even the best schools from achieving perfection!  The worst case scenario is that no learners make the expected progress. There is inevitably a distribution of schools between these two extremes.

Other things being equal, one might expect more high attainers than middle attainers to make the expected progress, and more middle attainers than low attainers to do so. This is borne out by the national data in the 2012 Performance Tables.

At school level, if the success rate for high attainers compares unfavourably with those for middle and low attainers, and so is out of kilter with the national data, it is taken as evidence that the former are comparatively less well served by the school. The assumption is that high attainers have not received the same degree of targeted challenge and support as their lower attaining peers.

In practice, other factors may come into play, principal among them the proportion of each sub-population within the relevant year group. Do high attainers tend to perform better in the schools and year groups where they are most heavily concentrated, or is the reverse true in certain circumstances? Is there an optimal proportion? Tempting though it is to pursue that question, we must return to the matter in hand.

The User Guide for the 2012 Secondary Tables explains:

‘Prior attainment definitions are based on the KS2 test results attained by pupils on completion of the primary school phase:

  • Below expected level = those below Level 4 in the KS2 tests;
  • At expected level = those at Level 4 in the KS2 tests;
  • Above expected level = those above Level 4 in the KS2 tests.

To establish a pupil’s KS2 attainment level, we calculated the pupil’s average point score in National Curriculum Tests for English, maths and science and classified those with a point score of less than 24 as low; those between 24 and 29.99 as middle, and those with 30 or more as high attaining.’

So the 2012 Secondary Performance Tables define high attainers as those who average above Level 4 performance across the three core subjects.

Learners who achieve highly in one subject are not counted if their performance in the other two drags them below the average.

And, on this measure, the 2012 Secondary Tables show that, nationally, 33.1% of pupils attending state maintained schools qualified as high attainers.

The comparable percentage in the Primary Tables, based on an average point score of 18 or more across KS1 English and maths assessments, is 24%, considerably lower.
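The banding arithmetic quoted in the User Guide can be sketched as a short function. This is an illustrative sketch only, not official DfE code: the function name is my own, and the per-level point scores (27 for Level 4, 33 for Level 5) are my assumption, based on the standard six-point National Curriculum point scale.

```python
def prior_attainment_band(english_points: float, maths_points: float,
                          science_points: float) -> str:
    """Classify a pupil as 'low', 'middle' or 'high' attaining from the
    KS2 average point score, using the 2012 User Guide thresholds:
    below 24 = low, 24-29.99 = middle, 30 or more = high."""
    aps = (english_points + maths_points + science_points) / 3
    if aps < 24:
        return "low"
    elif aps < 30:
        return "middle"
    return "high"

# Level 4 (27 points) in all three subjects gives a middle attainer;
# Level 5 (33 points) in all three gives a high attainer.
print(prior_attainment_band(27, 27, 27))  # middle
print(prior_attainment_band(33, 33, 33))  # high
# Level 5 in one subject can be dragged below the threshold by the other two:
print(prior_attainment_band(33, 21, 21))  # middle (APS = 25)
```

The last call illustrates the point made above: a learner who achieves highly in one subject is not counted as a high attainer if performance in the other two drags the average down.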

The implications of a definition of high attainers that includes one quarter of learners in the later primary years and one third of learners at secondary level are rarely discussed. Is there a case for a more consistent approach across the two sectors, or is it a reasonable assumption that there are significantly more high attainers in the secondary sector? Let us leave that question hanging and return to the comparison with Ofsted.

One might expect Ofsted to have adopted this same Performance Tables definition, so ensuring consistency across both arms of the accountability regime. That would have been most straightforward for schools on the receiving end of both.

But, as we have seen, for reasons unexplained and best known to itself, Ofsted uses a completely different threshold on which to base its definition, which is confined to the secondary sector because ‘The Most Able Students’ does not deal with primary schools.

The crux of Ofsted’s definition is the achievement of National Curriculum Level 5 in English, in mathematics, or in both English and maths. This is quite different to average above Level 4 achievement in English, maths and science.

This has the virtue of ‘counting in’ learners with relatively high attainment in one of the two principal subjects, but relatively low attainment in the other. Performance in science is not deemed relevant.

The 2012 Primary Performance Tables report the percentage achieving Level 5 in both English and maths as 27%, while 39% achieved Level 5 in maths and 38% did so in English.

Hence 12% achieved Level 5 in maths but not English (39 − 27) and 11% did so in English but not maths (38 − 27), so the total achieving Level 5 in one subject, the other, or both in 2012 is 27 + 12 + 11 = 50%.
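The union calculation can be checked with simple inclusion-exclusion, using the 2012 percentages quoted above (the variable names are my own):

```python
# Percentages from the 2012 Primary Performance Tables.
maths_l5 = 39      # achieved Level 5 in maths
english_l5 = 38    # achieved Level 5 in English
both_l5 = 27       # achieved Level 5 in both

maths_only = maths_l5 - both_l5        # 12
english_only = english_l5 - both_l5    # 11
either_or_both = maths_only + english_only + both_l5

# Equivalently, by inclusion-exclusion: 39 + 38 - 27.
assert either_or_both == maths_l5 + english_l5 - both_l5
print(either_or_both)  # 50
```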

I cannot find all the data to undertake the same calculation for the equivalent Ofsted-derived definition for the primary sector. We know the national percentages achieving Level 3 in 2012 – 27% reading, 14% writing, 22% maths – but not what proportion of KS1 learners achieved one, two and three Level 3s respectively.

One can reasonably predict that the total will significantly exceed the 24% obtained by the Performance Tables methodology.

Reverting to the secondary data, it might be argued that, while 2012 outcomes are applicable for learners now in Year 7, one must use progressively earlier KS2 data for learners in older year groups.

That might be expected to depress slightly the percentage exceeding the threshold – but it does not alter the fact that the basis of the Ofsted definition is entirely different to (and substantively more generous than) that in the Performance Tables.

And of course we have not yet factored in that proportion of learners judged to have had ‘the potential to achieve Level 5’ (or the equivalent Level 3 at the primary level).

There is little information in ‘The most able students’ about the likely size of this ‘potential high attainers’ group. The definitional footnote mentions EAL learners who do not yet have the skills to demonstrate Level 5 performance, but does not estimate how many learners are affected.

Any methodology adopted by schools might also be expected to factor in:

  • Near misses – most schools would include learners who achieved Level 4A in English or maths or both;
  • Disadvantaged learners – schools might include learners attracting the Pupil Premium who would have achieved Level 5 had no gap existed between the performance of advantaged and disadvantaged learners; and
  • An ideological predisposition – some schools might base their approach on the principle that all their learners are capable of Level 5 performance; others might regard this as imposing unrealistic expectations on a proportion of their learners.

Were this exercise to be undertaken at national level, it would encompass a comfortable majority of the secondary student population.

As noted above, there is likely to be significant variation between the size of schools’ ‘high attainer and potential high attainer’ populations, but most will find them more substantial than the term ‘most able’ might initially have led them to believe.

So, to summarise: we have two distinct measures of what constitutes a high attainer, one of which also includes potential high attainers. Both are catholic interpretations but one (Ofsted’s) is significantly more generous than the other.

That said, we have had to infer Ofsted’s definition from indistinct clues. There is nothing overt and explicit to tell us what is meant by the term ‘most able’ as now used in the Inspection Guidance.

This situation is less than optimal for schools wishing to show themselves to best advantage on both sides of the accountability regime.

.

How Much Progress Does the Accountability Regime Expect from High Attainers?

It goes without saying that what constitutes high attainment depends on the context. In England, attainment measures are typically associated with end of key stage assessment and the instruments and grading scales applied to them. High-performing learners are expected to achieve a commensurately high grade in the appropriate assessment.

But, once a learner has demonstrated high attainment, they are expected to continue to demonstrate it, achieving commensurately high grades in subsequent assessments and so making consistently good progress between these different stage-related outcomes.

This assumption is integral to the accountability system, which makes no allowance for the non-linear development of most learners. It is assumed that, when viewed over the longer term, these inconsistencies are smoothed out: high attainers will typically remain so, throughout a key stage and even across key stages, indeed throughout their school careers.

The assumption is contestable but that, too, is an argument for another day.

The rate of progress is currently determined with reference to National Curriculum Levels. It is typically expected that all learners should make at least two levels of progress between KS1 and KS2; and at least three levels of progress between KS2 and KS4, but the reality is somewhat more complex.

For this purpose, GCSE Grades are mapped onto National Curriculum levels in such a way that learners achieving Level 5 at KS2 need to achieve at least GCSE Grade B to show three levels of progress across KS2-4.

.

NC Level      1    2    3    4    5    6    7    8    9    10
GCSE grade    –    –    –    –    –    –    C    B    A    A*

.

For learners with Level 5 at KS2, an A grade at GCSE would denote four levels of progress across KS2-4, while an A* grade would mean five levels of progress.

This is the ceiling – it is not possible for any Level 5 learner to achieve more progress than is denoted by an A* grade, though this would of course denote six levels of progress from KS2 Level 4.

So, in effect, the progression ceiling is comparatively lower for those with higher prior attainment than it is for middle and lower attainers, even though the former are arguably more likely to make further and faster progress than their peers.
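The arithmetic behind these statements can be sketched by placing GCSE grades on the same scale as National Curriculum levels, using the mapping implied above (C as 7, B as 8, A as 9, A* as 10). The function name is my own; this is a sketch of the calculation, not an official measure:

```python
# GCSE grades expressed as National Curriculum levels, per the mapping above.
GCSE_AS_NC_LEVEL = {"C": 7, "B": 8, "A": 9, "A*": 10}

def levels_of_progress(ks2_level: int, gcse_grade: str) -> int:
    """Levels climbed between KS2 and KS4 on the combined scale."""
    return GCSE_AS_NC_LEVEL[gcse_grade] - ks2_level

print(levels_of_progress(4, "C"))   # 3 - the minimum expectation from Level 4
print(levels_of_progress(5, "B"))   # 3 - the minimum expectation from Level 5
print(levels_of_progress(5, "A*"))  # 5 - the ceiling for a Level 5 starter
print(levels_of_progress(6, "A"))   # 3
```

The ceiling effect follows directly: with A* fixed at Level 10, a Level 5 starter can climb at most five levels, a Level 6 starter at most four.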

The ‘levels of progress’ methodology rests on a further assumption – that these steps of progress are equidistant, equally steep, and so equally demanding. Hence it requires the same effort to climb three levels from Level 4 to GCSE Grade C as it does to climb from Level 6 to GCSE Grade A. I have sometimes seen this assumption disputed.

The methodology is far from perfect, which might help to justify the decision to dispense with it when National Curriculum levels go in 2016.

In the meantime, however, schools need to work within current expectations, as applied in the School Performance Tables. These will continue in force for at least two and probably three more sets of Tables, in 2013, 2014 and probably 2015.

So what are the current expectations?

The User Guide to the 2012 Performance Tables includes material which explains what it calls ‘progression trajectories’.

The note relating to the primary tables says:

‘The majority of children are expected to leave Key Stage 1 (age 7), working at least at level 2. During Key Stage 2, pupils are expected to make at least two levels’ progress, with the majority achieving at least a level 4 by age 11. Pupils entering Key Stage 2 at level 3 should progress at least to level 5; while those entering at level 1 should progress at least to level 3…These are minimum expectations and opportunities exist for schools to provide greater stretch for more able children, with the introduction of a level 6 test for 11 year olds.’

.

Progression trajectories primary Capture

.

A few hundred pupils a year reach level 4 at Key Stage 1 in maths and/or the different elements of English. An associated technical note reminds us that it was only the advent of KS2 Level 6 tests that enabled these learners to achieve the expected two levels of progress – previously they were limited to only one.

But Level 3 is the norm for primary high attainers and the introduction of the Level 6 tests has raised the ceiling for them, permitting many to exceed the standard expectation by making three levels of progress from KS1 to KS2. Three is the limit however.

Although the proportions of learners achieving Level 6 are still relatively small, numbers are increasing rapidly, especially in maths. The SFR containing provisional results from the 2013 tests shows that 7% of learners achieved Level 6 in KS2 maths.

.

.

The note relating to the secondary performance tables says:

‘The majority of children are expected to leave Key Stage 2 (age 11), working at least at level 4.  By the end of Key Stage 4, pupils who were at level 4 should progress to achieve at least a Grade C at GCSE; while pupils working at level 6 should be expected to achieve at least an A at GCSE…These are minimum expectations.’

.

Progression trajectories secondary Capture

But the associated technical note disagrees.

A diagram shows that the minimum expected progress from a KS2 Level 6 is Grade B, equivalent to only two levels of progress.

.

Secondary progression matrix diagram Capture

The text reinforces this:

‘Pupils attaining level 5 or level 6 at KS2 are expected to achieve at least a grade B at GCSE. Therefore all pupils achieving an A* – B are deemed to have made the expected progress, whether or not their prior attainment is known.’

The technical note is the current version. I checked the RAISE Online library in case it contained more recent information but – at the time of writing at least – it does not.

It seems that there is currently some confusion about whether or not learners with Level 6 are expected to progress at least three levels to GCSE grade A, at least as far as the Performance Tables are concerned.

This may be because the interpretation in the technical note predates the interpretation in the guidance note and has not been updated.

Clearly the higher level of expectation is preferable, because it is nonsensical that the very highest attainers need make only two levels of progress across five years of secondary schooling when everyone else is expected to make at least three.

.

Should We Expect More Progress from High Attainers?

Many schools have pushed beyond these minimum expectations, especially for their high attainers. It is fairly common practice for learners to be expected to make somewhere between two and three levels of progress from KS1 to KS2 and four levels of progress from KS2 to KS4.

Given that this practice is already firmly established, there seems to be a strong case for both arms of the accountability system to emulate it, so raising expectations for high attainers nationally, regardless of the schools they attend.

That would confirm the value and significance of Level 6 tests to primary schools, while secondary schools would reasonably expect the rapidly increasing number of learners arriving with KS2 L6 to reach GCSE A* five years later.

Another option would be to combine this additional stretch with a recalibration of the definition of high attainers – and of course its application to both arms of the accountability regime.

For the evidence from the Secondary Transition Matrices, held in the RAISE online library, shows just how much progress varies according to National Curriculum sub-levels.

When it comes to full levels, the Matrices show that, nationally, 51% of learners with Level 5 in maths made four or more levels of progress (to Grade A or above) in 2012, while 41% with Level 5 did so in English.

(Incidentally, the Matrices do not show progress from Level 6 because the KS2 level relates to performance some years earlier, typically in 2007 for those taking GCSE in 2012.)

.

Maths TM full grades Capture

English TM full grade Capture

.

But the real value of these Matrices lies in the breakdown they provide of progression by sub-level.

I have already drawn out the key points in an earlier post and will not repeat that material here, other than to note that, in 2012:

  • 87% of learners with a Level 5A at KS2 in English achieved at least four levels of progress, compared with 64% of those with 5B and 29% of those with 5C (the latter lower than the comparable conversion rate for those with a Level 4A). Moreover, 47% of those with 5A achieved five levels of progress to A*, compared with 20% of those with 5B and only 4% of those with 5C;
  • 84% of learners with Level 5A in maths secured at least four levels of progress, whereas 57% of those with 5B and 30% of those with 5C managed this (and once again, the conversion rate from Level 4A was higher than for 5C). And 50% of those with 5A made five levels of progress to A*, compared with 20% of those with 5B and just 6% of those with 5C.

.

TM Maths Capture

 Maths

 .

TM English Capture

English

.

There is a clear distinction between the progress made by learners with Level 5A/B and by those with 5C, which might argue for the Performance Tables to adjust upwards the average point score expected of high attainers, while simultaneously raising the expectation to four levels of progress.

There may be reluctance to adjust the levels-driven progress methodology when it has a limited lifespan of three years at most. On the other hand:

  • There is already an issue – and it will become more pronounced over the next three years as more learners achieve KS2 L6.
  • As the plans for post-2016 assessment and accountability are developed and finalised, it is important that suitably high expectations of high attainers are transferred across from the current system to its successor.
  • Implementation of the 2016 reforms may be dependent on the outcome of the 2015 General Election – there is currently no guarantee that Labour will proceed with the removal of National Curriculum levels and/or follow the timetable laid down by the Government.

.

Summing Up

This marks the end of the first part of this post. I have tried to show:

  • How Ofsted inspection guidance places greater emphasis on high attainers, what this really means and where the meaning is unclear.
  • That the revisions introduced by Ofsted are a not quite comprehensive response to the self-imposed recommendations in ‘The most able students’.
  • How – in the absence of any guidance to the contrary – Ofsted’s assumed definition of high attainers (aka ‘the most able’) differs from that applied in the School Performance Tables, resulting in inconsistency between the two sides of the accountability regime which is sub-optimal for schools.
  • That, if Ofsted’s assumed definition stands, schools need to be prepared for the likelihood that the majority of their learners will fall within it.
  • That expectations of progress for high attainers in the Secondary Performance Tables are currently unclear.
  • That there is a strong case for increasing those expectations – for primary as well as secondary high attainers – which could if necessary be combined with an upwards adjustment of the definitions.

I have called for Ofsted to publish brief supplementary guidance to clarify its definitions and the wider implications of the revisions of its inspection guidance. This would ideally confirm that Ofsted’s definitions and Performance Table definitions are one and the same.

In Part Two, I will review how high attainers will be reported on in the 2013 Performance Tables, and how this differs from the arrangements in the 2012 Tables.

I will also set out how the proposals in the consultations on primary, secondary and post-16 accountability are expected to impact on high attainers. (If the response to the secondary consultation appears imminently I will reflect that in the analysis.)

Finally, I will offer some guidance for schools on how they might set about planning to improve their high attainers’ performance – and what they might include in a basket of key improvement measures, designed to be shared with learners, parents and other key stakeholders.

.

GP

October 2013

Analysis of the Primary Assessment and Accountability Consultation Document


.

This post is part of a bigger one on the relationship between curriculum, assessment and accountability reforms. Given the inordinate length of that piece and the complexity of the proposals for primary assessment and accountability, I have published my analysis of those proposals separately here.

The post sets out what has been published, ruminates on the purpose of the Pupil Premium, undertakes a section-by-section analysis of the consultation document and draws together the issues of greatest concern.

It attempts an overall scaled score assessment of the document and finds it seriously wanting. There are major fault lines running through the proposals and little clarity over several key issues.

These proposals are far from ‘implementation-ready’ and ultimately disappointing, both in terms of the threshold and progress achieved. If there was a floor standard for consultation documents, this would fall significantly short.

Those with little time are recommended to go straight to the later section – ‘Primary Assessment and Accountability: Issues and Omissions’ – which can be found about two-thirds of the way through the post.

.

What has been published?

17 July saw the publication of three documents in the following order:

  • A press release which appeared shortly after a midnight embargo;

There was no response to the parallel ‘Secondary school accountability’ consultation launched on 7 February and completed on 1 May, despite the connectivity between the two sets of proposals – and no firm indication of when that response would be published.

A third consultation, on post-16 assessment and accountability, was not mentioned either.

The staged publication of the primary material meant that initial analysis and questioning of Ministers was based largely on the headlines in the press release rather than on the substance of the proposals.

Initial media appearances appeared to generate a groundswell of hostility that Ministers could not readily counter. The answers to some reasonable questions on the detail were not yet in the public domain.

It was particularly noteworthy that a second announcement – about the size of Pupil Premium allocations in 2014-15 – was integrated within the first. This was clearly intended to sugar the pill, though the coating is rather thin and there are also potentially wider ramifications (see below).

The Pupil Premium announcement must have been the justification for presentation by Lib Dem Deputy Prime Minister Clegg and Minister of State Laws, rather than by Tory Secretary of State Gove.

He (Gove) must have been delighted at avoiding this poisoned chalice, already delayed into the dog days of summer – often a deliberate strategy for downplaying a particularly contentious announcement.

.

.

The consultation has a deadline of 11 October, allowing a total of 12 weeks and two days for responses, including the entirety of the school summer holidays, so the majority of the consultation period occurs while most schools are closed. This may also serve to mute opposition to the proposals contained in the document.

There is a commitment to publish the outcomes of consultation, together with a response ‘in autumn 2013’, which is a very quick turnaround assuming that autumn means November rather than December. If there is any degree of contention, this might well edge close to Christmas.

.

An Aside: The Pupil Premium

The assessment and accountability announcement was sugar-coated by confirmation of the size of Pupil Premium allocations in 2014-15.

But close scrutiny of the coating reveals it as rather a thin veneer.

It was already known that the total Pupil Premium funding envelope would increase by £625m, from £1.875bn in 2013-14 to £2.5bn in 2014-15, so the overall budget was not in itself newsworthy. There is a degree of economy with the truth at play if the funding is claimed to be ‘new money’.

But the apparent decision to weight this towards primary schools was new. Ministers made much of the 44% increase for primary schools, from £900 to £1,300 per pupil, while conspicuously omitting to confirm the same uprating for secondary schools.
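The uprating arithmetic can be checked quickly. The per-pupil rates below are those quoted above; the implied secondary figure is purely hypothetical, since the Government has not confirmed any such uprating:

```python
# Illustrative check of the uprating arithmetic described above.
# The per-pupil rates are those quoted in the post; the implied
# secondary figure is a hypothetical sketch, not a DfE commitment.

def uprating(old_rate: float, new_rate: float) -> float:
    """Return the percentage increase from old_rate to new_rate."""
    return (new_rate - old_rate) / old_rate * 100

primary_2013, primary_2014 = 900, 1300
print(f"Primary uprating: {uprating(primary_2013, primary_2014):.0f}%")  # ≈ 44%

# If the same uprating were applied to the secondary rate
# (also £900 per pupil in 2013-14), it would rise to:
secondary_2014 = primary_2013 * (1 + uprating(primary_2013, primary_2014) / 100)
print(f"Implied secondary rate: £{secondary_2014:.0f}")
```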

Newly released data for the 2013-14 Premium suggests that it might be possible to afford the same uprating for secondary-age pupils, assuming the numbers eligible do not increase between January 2013 and January 2014. But the silence on this point betrays some uncertainty, most probably driven partly by pupil numbers and partly by the early impact of Universal Credit on eligibility.

We do know, from the Spending Review, that the total budget for the Premium will be protected in real terms in 2015-16 but will not be further increased.

It remains to be seen whether any new weighting in favour of the primary sector will be retained, but that seems highly likely given the level of disruption that would be caused by frequent recalibration.

One influential commentator – Institute of Education Director Chris Husbands – has suggested that the bracketing of the two announcements marks a significant adjustment:

‘This is a further twist in the evolving purpose of the pupil premium – once intended as an incentive to primary schools to admit more disadvantaged children, then a compensatory payment for the additional costs involved in meeting the needs of disadvantaged children, it is now more clearly a fund to secure threshold levels of attainment.’

This argument runs like a leitmotif through the analysis below.

.

.

But it also runs counter to the Government’s official position that the Premium is designed to support all disadvantaged pupils and close the attainment gap between them and their peers, a position reinforced by the fact that the Government has delineated separate ‘catch-up premium support’ exclusively for those below the thresholds.

There is no change in recent announcements about strengthening the accountability underpinning Pupil Premium support. Husbands’ argument also runs against the tenor of Ofsted’s publications about effective use of the Premium and the latest Unseen Children report, published following deliberations by an expert panel on which Husbands served.

The source appears to be a recent IPPR publication ‘Excellence and Equity: Tackling Educational Disadvantage in England’s Secondary Schools’, Chapter 4 of which asserts (without supporting evidence) that:

 ‘Policymakers talk interchangeably about the pupil premium being used to support pupils who are falling behind, and it being used to support those who are on free school meals.’

This despite the fact that:

‘The overlap between these two categories is not as large as many people suppose. Last year, only 23 per cent of low-attaining pupils at the end of primary school were eligible for free school meals, and only 26 per cent of pupils eligible for free school meals were low attaining. This puts schools in the difficult position of having to decide whether to spend their pupil premium resources on pupils who have a learning need, even though many of them will not be eligible for free school meals, or whether they should focus them on FSM pupils, even though many of them will be performing at the expected level.’

The notion that pupils who are performing at the expected levels do not, by definition, have a ‘learning need’ is highly contentious, but let that pass.

The substantive argument is that, because ‘tackling the long tail of low achievement is the biggest challenge facing England’s school system’ and because the Premium ‘provides insufficient funds targeted at the right age range’:

‘In order to have maximum impact, the pupil premium should be explicitly targeted towards raising low achievement in primary and early secondary school… The Department for Education should therefore focus the additional funding at this age range. It should… create a higher level of pupil premium in primary schools, and… increase the ‘catch-up premium’ (for year 7 pupils) in secondary schools; the pupil premium in secondary schools would be held at its current level. This would provide primary schools with sufficient resources to fund targeted interventions, such as Reading Recovery, for all children who are at risk of falling behind. It would also compensate secondary schools that have large numbers of pupils starting school below the expected level of literacy and numeracy.

 …Secondary schools are currently given a catch-up premium for every pupil who enters below level 4 in English and maths. However, there is no mechanism to guarantee that these pupils benefit from the money. The ‘catch-up premium’ should therefore be replaced with a ‘catch-up entitlement’. Every pupil that falls into this category would be entitled to have the money spent specifically on helping to raise his or her attainment. Schools would be required to write a letter to these pupils and their families explaining how the resources are being spent.’

.

.

The Government has potentially front-loaded the Pupil Premium into the primary sector, but not – as far as we are aware – the early years of secondary school. Nor has it increased the catch-up premium, unless by some relatively small amount yet to be announced, or made it an individual entitlement.

Husbands’ initial argument – that the linking of Premium and assessment necessarily means a closer link being forged with tackling below-threshold attainment – depends on his assertion that:

‘The core message of the consultation is that the concern is with absolute attainment – secondary readiness – rather than the progress made by primary schools.’

The analysis below examines the case for that assertion.

.

What the Primary Assessment Consultation Says

The commentary below follows the sections in the consultation document.

.

The case for change

The second paragraph of ‘The case for change’ says:

‘We believe that it is right that the government should set out in detail what pupils should be taught…’

a somewhat different slant to that adopted in the National Curriculum proposals (and which of course applies only to the core subjects in state-maintained schools).

The next section works towards a definition of the term ‘secondary ready’, described as ‘the single most important outcome that any primary school should strive to achieve’.

It is discussed exclusively in terms of achievement in KS2 English and maths tests, at a level sufficient to generate five GCSE Grades A*-C including English and maths five years later.

This despite the fact that the secondary accountability consultation proposes two quite different headline measures: good GCSE grades in both English and maths and Average Points Score in eight subjects from a three-category menu (neither of which is yet defined against the proposed new 8 to 1 GCSE grading scale).

No other criteria are introduced into the definition, rendering it distinctly narrow. This might arguably be the most important outcome of primary education, but it is not the sole outcome by any stretch.

The Government states an ‘ambition’ that all pupils should achieve this benchmark, excepting a proportion ‘with particular learning needs’.

There is no quantification of this proportion, though it is later used to identify a floor target assumption that 85% of the cohort should achieve the benchmark, so the group with ‘particular learning needs’ must be something less than 15% of all learners.

The introduction of a second and parallel floor target, relating to progression, is justified here on the grounds that ‘some schools have particularly demanding intakes’ so ‘will find it challenging to reach the ambitious [attainment] threshold…’. This will also help to identify coasting schools.

This approach to progression, as a fall back in circumstances where the threshold measure is problematic, lends some weight to Husbands’ contention that absolute attainment is now paramount.

Note that the wording in this section does not make clear whether the new floor target comprises both of these measures – secondary readiness and progression – or only one or the other. This issue resurfaces below.

There is nothing here about the importance of applying measures that do not have in-built perverse incentives to focus on the threshold boundary, but this too will resurface later.

There is early confirmation that:

‘We will continue to prescribe statutory assessment arrangements in English, mathematics and science.’

The ‘core principles’ mentioned in the Assessment Without Levels text appear at this stage to be those proposed in the June 2011 Bew Report rather than any new formulation. Note the second bullet point, which pushes in directly the opposite direction to Husbands’ assertion:

  • ongoing assessment is a crucial part of effective teaching, but it should be left to schools. The government should only prescribe how statutory end of key stage assessment is conducted;
  • external school-level accountability is important, but must be fair. In particular, measures of progress should be given at least as much weight as attainment;
  • a wide range of school performance information should be published to help parents and others to hold schools to account in a fair, rounded way; and
  • both summative teacher assessment and external testing are important forms of statutory assessment and both should be published.

Already there are mixed messages.

The next section justifies the removal of National Curriculum levels:

‘Imposing a single system for ongoing assessment, in the way that national curriculum levels are built into the current curriculum and prescribe a detailed sequence for what pupils should be taught, is incompatible with this curriculum freedom. How schools teach their curriculum and track the progress pupils make against it will be for them to decide. Schools will be able to focus their teaching, assessment and reporting not on a set of opaque level descriptions, but on the essential knowledge that all pupils should learn. There will be a clear separation between ongoing, formative assessment (wholly owned by schools) and the statutory summative assessment which the government will prescribe to provide robust external accountability and national benchmarking. Ofsted will expect to see evidence of pupils’ progress, with inspections informed by the school’s chosen pupil tracking data.’

Paraphrasing this statement, one derives the following rather questionable logic:

  • We want to give schools freedom to determine their own approaches to in-school assessment
  • The current system of levels has come to be applied to both statutory and in-school assessment
  • So we are removing levels from both statutory and in-school assessment.

The only justification for this must lie in recognition that the retention of levels in statutory assessment will inevitably have a ‘backwash effect’ on in-school assessment.

Yet this backwash effect is not acknowledged in respect of the proposed new arrangements for end of key stage statutory assessment. There is a fundamental issue here.

Schools will still be required to report to parents at the end of each year and key stage. There will be no imposition of a system for them doing so but, as we have already recognised, parents will more readily understand a system that is fully consistent with that applied for end of key stage assessment, rather than a substantively different approach.

The next segment begins to explore the case for shifting the baseline assessment – on which to build measures of progression in primary schools – back to Year R. This will ‘reinforce the importance of early intervention’. The EYFS profile will be retained but might become non-statutory.

The introduction of new summative assessments at end KS1 and end KS2 is confirmed for 2016, with interim arrangements as confirmed in the National Curriculum documentation. The accountability reforms also take effect at this point, so changes will be introduced in the December 2016/January 2017 Performance Tables.

There is also confirmation that academies’ funding agreements require compliance ‘with statutory assessment arrangements as they apply to maintained schools’. This is as close as we get to an explanation of how statutory assessments that apply to all schools will be derived from the National Curriculum programmes of study and single ‘lowest common denominator’ attainment targets.

.

Teacher assessment and reporting to parents

This section begins with a second justification for the removal of levels. Some anecdotal evidence is cited to support the argument:

‘Teachers have told us that the use of levels for assessment has become burdensome and encouraged crude ‘best fit’ judgements to differentiate pupil progress and attainment.’

This marks the beginning of the justification for a more sophisticated (and hence more complex) approach.

Schools are free to design their assessment systems, though these must be integrated with the school curriculum. There is a hint that these systems might be different for different subjects (adding still further complexity for parents) though ‘groups of schools may wish to use a common approach’.

Paragraph 3.7 is a confusing complement to the Bew-based core principles that appeared earlier:

‘We expect schools to have a curriculum and assessment framework that meets a set of core principles and:

  • sets out steps so that pupils reach or exceed the end of key stage expectations in the new national curriculum;
  • enables them to measure whether pupils are on track to meet end of key stage expectations;
  • enables them to pinpoint the aspects of the curriculum in which pupils are falling behind, and recognise exceptional performance;
  • supports teaching planning for all pupils; and
  • enables them to report regularly to parents and, where pupils move to other schools, provide clear information about each pupil’s strengths, weaknesses and progress towards the end of key stage expectations.’

Question 1: Will these principles underpin an effective curriculum and assessment system?

The ‘and’ in the opening sentence suggests that this isn’t part of the set of core principles, but the question at the end suggests these are the principles we should be considering, rather than those derived from Bew.

So we have two competing sets of core principles, the second set relating to schools’ own curriculum and assessment frameworks, but not to accountability.

The references here – to steps relative to end of KS expectations, measuring progress towards those expectations, identifying areas where learners are ahead and behind, supporting planning and reporting to parents – are entirely familiar. They really describe the functions of assessment rather than any principles that govern its application.

There is a commitment that the Government will ‘provide examples of good practice’ and:

‘Work with professional associations, subject experts, education publishers and external test developers to signpost schools to a range of potential approaches. Outstanding schools and teaching schools have an opportunity to take the lead in developing and sharing curriculum and assessment systems which meet the needs of their pupils…Commercial providers and subject organisations may offer curriculum schemes of work with inbuilt assessment, including class exercises, homework and summative tests.’

The second consultation question asks respondents to identify additional support and ‘other good examples of effective practice’.

The final section on reporting confirms that the Government plans to continue to publish teacher assessment outcomes in the core subjects, in line with Bew’s recommendation. It is not clear whether this is or is not subject to new scoring arrangements.

There is a brief allusion, almost an afterthought, to schools providing information on transfer and transition. There is no acknowledgement that this process becomes more complex when schools are following different curricula and pursuing different in-house assessment systems.

.

National Curriculum tests in English, maths and science

This section begins with a further set of Bewisms, this time on the uses of data derived from statutory assessment. They are the justification for the continuation of externally-marked National Curriculum tests.

The proposal is that these should continue in maths and in English reading and grammar, spelling and punctuation. Writing will continue to be assessed through externally moderated teacher assessment (suggesting it will be scale scored), while national science sampling will also continue at the end of KS2. The Year 1 Phonics Screening Check will also continue, with results available in Raise Online but not in Performance Tables.

The timetable, including phasing, is rehearsed again, before the critically important tripartite approach to reporting is introduced.

This comprises:

  • A ‘scaled score’
  • Decile-based ranking within the ‘national cohort’ and
  • Progression from the baseline

The scaled score is the threshold marker of whether the learner is ‘secondary-ready’. We knew from previous announcements that this standard would be raised from level 4c equivalent to 4b equivalent.

It is also necessary to standardise the scale – and to know by how much any given learner has undershot or overshot this threshold:

‘Because it is not possible to create tests of precisely the same difficulty every year, the number of marks needed to meet the secondary readiness standard will fluctuate slightly from one year to another. To ensure that results are comparable over time, we propose to convert raw test marks into a scaled score, where the secondary readiness standard will remain the same from year to year.

Scaled scores are used in all international surveys and ensure that test outcomes are comparable over time. The Standards and Testing Agency will develop this scale. If, as an example, we developed scaled scores based on the current national curriculum tests, we might employ a scale from 80 to 130. We propose to use a scaled score of 100 as the secondary ready standard.’

The notion of a scaled score, with current Level 4b benchmarked at 100 and a scale sufficiently long to accommodate all levels of attainment above and below, is familiar from PISA and other international comparisons studies.

If the scale runs from 80 to 130, as in this example, there are 51 potential levels of achievement in each assessment – about three times as many as there are currently.
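To make the mechanics concrete, here is a minimal sketch of how raw marks might be converted onto such a scale. The anchor points (a 60-mark test with 33 marks needed for the standard) are invented for illustration; the Standards and Testing Agency would set the real conversion afresh for each year’s test:

```python
# A minimal sketch of the scaled-score idea described above, using the
# illustrative 80-130 scale with 100 as the secondary-ready standard.
# The raw-mark anchor points are invented for illustration only.

def to_scaled_score(raw_mark: int, secondary_ready_raw: int, max_raw: int) -> int:
    """Map a raw test mark onto the 80-130 scale, holding the
    secondary-ready standard fixed at 100 from year to year."""
    if raw_mark >= secondary_ready_raw:
        # linear interpolation between the threshold (100) and the ceiling (130)
        span = max_raw - secondary_ready_raw
        return round(100 + 30 * (raw_mark - secondary_ready_raw) / span)
    # linear interpolation between the floor (80) and the threshold (100)
    return round(80 + 20 * raw_mark / secondary_ready_raw)

# Suppose this year's test has 60 marks and 33 are needed to be secondary-ready:
print(to_scaled_score(33, 33, 60))   # 100 - exactly at the standard
print(to_scaled_score(60, 33, 60))   # 130 - highest possible performer
print(to_scaled_score(0, 33, 60))    # 80  - lowest possible performer
```

Because the threshold raw mark changes with each year’s test while 100 stays fixed, results remain comparable over time, which is the whole point of the scale.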

But the score will also be accompanied by a norm-referenced decile, showing how each learner’s performance compares with their peers.

And an average scaled score is generated for learners with the same prior attainment at the baseline, which might or might not move to Year R, so enabling parents to compare their child’s scaled score with this average.

This material would not be used to generate simpler ‘proxy’ grades but would be provided in this tripartite format.

Assuming the illustrative elements above are adopted:

  • The highest possible KS2 performer would receive a scaled score of 130, confirmation that he is within the top decile of his peers and a comparative average scaled score. If this is less than 130, he has made better progress than those with the same prior baseline attainment. If it is 130 he has made the same progress. By definition his progress cannot be worse than the others.
  • A lowest possible KS2 performer would have a scaled score of 80, confirmation that he is within the bottom decile of the cohort and a comparative average scaled score which could be as low as 80 (all peers with the same prior attainment have made the same limited progress as he) but no lower since that is the extreme of the scale;
  • A median KS2 performer would obtain a scaled score of 100, confirmation that he is within the fifth decile and a correspondingly variable average scaled score.

No illustrative modelling is supplied, but one assumes that average scaled scores for those with similar prior attainment will typically cluster, such that most learners will see relatively little difference, while some outliers might get to +15 or -15. It also seems likely that the ‘progression score’ will eventually be expressed in this manner.
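The tripartite report can be sketched as follows. The cohort here is randomly generated and the baseline bands are hypothetical; the point is simply to show how a scaled score, a decile and a same-baseline peer average would all be derived from the same data:

```python
# A sketch of the tripartite report described above: scaled score,
# decile within the national cohort, and the average scaled score of
# peers with the same prior (baseline) attainment. The cohort data
# and baseline bands are randomly generated for illustration only.

import random
import statistics

random.seed(1)
# (baseline_band, ks2_scaled_score) for a hypothetical national cohort
cohort = [(random.randint(1, 5), random.randint(80, 130)) for _ in range(10_000)]

def decile(score: int, scores: list[int]) -> int:
    """1 = top decile, 10 = bottom decile."""
    below = sum(1 for s in scores if s < score)
    percentile = below / len(scores) * 100
    return 10 - int(percentile // 10)

def peer_average(baseline: int) -> float:
    """Average scaled score of all pupils with the same baseline band."""
    return statistics.mean(s for b, s in cohort if b == baseline)

all_scores = [s for _, s in cohort]
pupil_baseline, pupil_score = 3, 112
print(f"Scaled score: {pupil_score}")
print(f"Decile: {decile(pupil_score, all_scores)}")
print(f"Peers' average (same baseline): {peer_average(pupil_baseline):.1f}")
```

Note how the progress element is entirely relative: a pupil’s score is compared with the average of same-baseline peers, not with any objective standard of expected progress, which is the feature discussed below.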

The progress measure is based exclusively on comparison with how other learners are progressing, rather than any objective standard of the progression required.

The document claims that:

‘Reporting a scaled score and decile ranking from national curriculum tests will make it easy to identify the highest attainers for example using the highest scaled scores and the top percentiles of pupils. We do not propose to develop an equivalent to the current level 6 tests, which are used to challenge the highest attaining pupils. Key stage 2 national curriculum tests will include challenging material (at least of the standard of the current level 6 test) which all pupils will have the opportunity to answer, without the need for a separate test.’

But, while parents of high attainers who score close to the maximum might reasonably assume that their offspring have performed in the top one or two percentiles, they will be told only that they are within the top decile. This is rather less differentiated than securing a Level 6 under current arrangements.

Moreover, the preparation of single tests covering the full span of attainment will be a tall order, particularly in maths.

This DfES publication from 2004 notes:

‘It is well known that individual differences in arithmetical performance are very marked in both children and adults.  For example, Cockcroft (1982) reported that an average British class of eleven-year-olds is likely to contain the equivalent of a seven-year range in arithmetical ability. Despite many changes in UK education since then, including the introduction of a standard National Curriculum and a National Numeracy Strategy, almost identical results were obtained by Brown, Askew, Rhodes et al (2002).  They found that the gap between the 5th and 95th percentiles on standardized mathematics tests by children in Year 6 (10 to 11-year-olds) corresponded to a gap of about 7 chronological years in ‘mathematics ages’.’

There is no reference to the test development difficulties that this creates, including the risk that high-attaining learners have to undertake pointless ramping of easy questions, unnecessarily extending the length of their tests.

The text claims that the opposite risk – that ceilings are set too low – will be avoided, with at least Level 6-equivalent questions included, but what will their impact be on low attainers undertaking the tests? This is the KS4 tiering debate rewritten for KS2.

It is possible that statutory teacher assessment in the core subjects – other than KS2 writing – could be reported in whatever format schools prefer, rather than in the same manner as test outcomes are reported but, like much else, this is not made clear in the document.

By implication there will be no reporting from the national sampling tests in science.

.

Baselines to measure progress

The section on baselines is particularly confusing because of the range of choices it offers consultees.

It begins by stating bluntly that, with the removal of levels, KS1:

‘Teacher assessment of whether a pupil has met the expectations of the programme of study will not provide sufficient information to act as a baseline’.

This is because teacher assessment ‘will not provide differentiated outcomes to allow us to measure progress’, maybe because it won’t attract a scaled score. But the document says later that KS1 data collected under the existing system might be used as an interim baseline measure.

Two core options are offered:

  • Retaining a baseline at the end of KS1, through new English and maths tests that would be marked by teachers but externally moderated. These would be introduced in ‘summer 2016’. Views are sought over whether these test results should be published, given that publication might reduce the tendency for schools to ‘under-report pupils’ outcomes in the interest of showing the progress pupils have made in the most positive light’.
  • Introducing a new baseline at the start of the reception year, from September 2015, an option that gives credit for progress achieved up to the end of Year 2 and removes a perverse incentive to prioritise early intervention. This is described as ‘a simple check…administered by a teacher within two to six weeks of each pupil entering reception…subject to external monitoring’. It would either be developed in-house or procured from a third party. The existing EYFS Profile would remain in place but become non-statutory, so schools would not have to undertake it and the data would not be moderated or collected.

An array of additional options is set out:

  • Allowing schools to choose their preferred baseline check (presumably always undertaken in Reception, though the document is not clear on this point).
  • Making the baseline check optional, with schools choosing not to use it being ‘judged by attainment alone in performance tables and floor standards’. In other words, the progress measure itself becomes optional, which would appear to run counter to one of Bew’s principles articulated at the beginning of the document and to support Husbands’ line.
  • Assuming a Reception baseline check, making end of KS1 tests non-statutory for primary schools, while retaining statutory tests for infant schools because of their need for such an accountability measure and to provide a baseline for junior schools. KS1 tests would still be available for primary schools to use on an optional basis.

Much of the criticism of the document has focused on the Reception baseline proposal, especially concern that the check will be too demanding for the young children undertaking it. On the face of it, this seems rather unreasonable, but the document is at fault by not specifying more clearly what exactly such a check would entail.

.

.

.

Accountability

The penultimate section addresses performance tables and floor standards. It begins with the usual PISA-referenced arguments for a high autonomy, high accountability system, mentions again the planned data portal and offers continuing commitments to performance tables and floor standards alike.

.

Floor Targets

It includes the statement that:

‘In recent years, we have made the floor both more challenging and fairer, by including a progress element’

even though the text has only just suggested making the progress element optional!

The section on floor standards begins with the exhortation that:

‘All primary schools should ensure that as many pupils as possible leave secondary ready.’

It repeats the intention to raise expectations by increasing the height of the hurdle:

‘We therefore propose a new requirement that 85% of pupils should meet the secondary readiness standard in all the floor standard measures (including writing teacher assessment). This 85% attainment requirement will form part of the floor standard. This standard challenges the assumption that some pupils cannot be secondary ready after seven years of primary school. At the same time it allows some flexibility to recognise that a small number of pupils may not meet the expectations in the curriculum because of their particular needs, and also that some pupils may not perform at their best on any given test day.’

So the 85% threshold is increased from 60% and the standard itself will be calibrated on the current Level 4b rather than 4c. This represents a hefty increase in expectations.

The text above appears to suggest that all pupils should be capable of becoming ‘secondary-ready’, regardless of their baseline – whether in Year R or Year 2 – apart from the group with particular unspecified needs. But, this time round, there is also allowance for a second group who might underperform on the day of the test.

Once again, the justification for a parallel progress measure is not to ensure consistency with the Bew principles, but to offer schools with ‘particularly challenging intakes’ a second string to their bows in the form of a progress measure. The precise wording is:

‘We therefore propose that schools would also be above floor standards if they have good progress results.’

Does this mean that schools only have to satisfy one of the two measures, or both? This is not absolutely clear, but the sentence construction is perhaps more consistent with the former rather than the latter.

If this reading is correct, it is substantively different to the requirements in place for 2013 and announced for 2014:

‘In key stage 2 tests in 2014, primary schools will be below the floor standard if:

  • fewer than 65% of its pupils achieve Level 4 or above in reading, writing and maths, and
  • it is below the England median for progression by two levels in reading, in writing, and in maths.

*Results in the new grammar, punctuation and spelling test are likely to be part of the floor standard in 2014.

For tests taken this year, primary schools will be below the floor standard if:

  • fewer than 60% of its pupils achieve Level 4 or above in reading, writing and maths, and
  • it is below the England median for progression by two levels in reading, in writing, and in maths.

*Results in the new grammar, punctuation and spelling test will not be part of the floor standard this year.’

It is also substantively different to the new arrangements proposed for secondary schools.

Slightly later on, the text explains that schools which exceed the floor target on the basis of progression, while falling below the 85% secondary-ready threshold, will be more likely to be inspected by Ofsted than those exceeding this threshold.

However, Ofsted will also look at progress measures, and:

‘Schools in which low, middle and high attaining pupils all make better than average progress will be much less likely to be inspected.’

The text argues that:

‘Progress measures mean that the improvements made by every pupil count – there is no perverse incentive to focus exclusively on pupils near the borderline of an attainment threshold.’

But, assuming the progression target only comes into play for schools with ‘particularly challenging intakes’, the large majority will have no protection against this perverse incentive unless an optional APS measure is also introduced (see below).

As already stated, the progress measure will be derived from comparison with the average scaled scores of those with similar prior attainment at the baseline – in essence the aggregation of the third element in reporting to parents. Exactly how this aggregation will be calculated is not explained.

Of course, an average measure like this does not preclude schools from giving disproportionately greater attention to learners at different points on the attainment spectrum and comparatively neglecting others.
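To make the mechanics concrete, a school-level progress measure of this kind might be computed along the following lines. This is purely an illustrative sketch: the baseline bands, the data and the use of a simple mean for aggregation are my assumptions, since the document does not explain how the aggregation will be calculated.

```python
# Illustrative sketch only: compare each pupil's KS2 scaled score with the
# national average score for pupils who shared their prior-attainment band,
# then average the differences across the school. All figures are invented.
from statistics import mean

def school_progress(national, school):
    """Mean gap between school pupils' KS2 scaled scores and the national
    mean scaled score for pupils in the same baseline band."""
    baseline_scores = {}
    for baseline, score in national:
        baseline_scores.setdefault(baseline, []).append(score)
    baseline_means = {b: mean(s) for b, s in baseline_scores.items()}
    diffs = [score - baseline_means[baseline] for baseline, score in school]
    return mean(diffs)

# (baseline band, KS2 scaled score) — invented national and school cohorts
national = [("low", 95), ("low", 99), ("mid", 100), ("mid", 104),
            ("high", 108), ("high", 112)]
school = [("low", 100), ("mid", 103), ("high", 111)]
print(round(school_progress(national, school), 2))  # → 1.67
```

A positive result means the school's pupils are, on average, ahead of nationally similar pupils; note that a single averaged figure of this kind can conceal exactly the within-cohort neglect discussed above.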

Unless the performance tables distinguish progress by high attainers, they may well lose out, as will those never likely to achieve the ‘secondary-ready’ attainment threshold. More on this below.

The precise score for the floor targets is yet to be determined, but is expected ‘to be between 98.5 and 99’:

‘Our modelling suggests that a progress measure set at this level, combined with the 85% threshold attainment measure, would result in a similar number of schools falling below the floor as at present. Over time we will consider whether schools should make at least average progress as part of floor standards.’

So the progress element of the standard will be set slightly below average progress to begin with, perhaps to compensate for the much higher attainment threshold. This may support the argument that progress plays second fiddle to attainment.

Finally, the idea of incorporating an ‘average point score attainment measure’ in floor targets is floated:

‘Schools would be required to achieve either the progress measure or both the threshold and average point score attainment measure to be above the floor. This would prevent schools being above floor standards by focusing on pupils close to the expected standard, and would encourage schools to maximise the achievement of all their pupils. Alternatively we could publish the average point score to inform inspections and parents’ choices, but not include the measure in hard accountability.’

The first part of this paragraph reinforces the interpretation that the floor standard is now to be based either on the attainment threshold or the progress measure, but not both. But, under this option, the threshold measure could have an additional APS component to protect against gaming the threshold.

That goes some way towards levelling the playing field in terms of attainment, but of course it does nothing to support a balanced approach to progression in the vast majority of schools.
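The either/or reading of the floor standard, with the optional APS component, can be expressed as a simple predicate. The 85% figure is the document's; the progress and APS cut-offs below are placeholder assumptions (the document says only that the progress score will fall ‘between 98.5 and 99’).

```python
# Hedged sketch of the floor-standard logic as the consultation wording reads:
# a school is above the floor if it meets the progress measure, OR meets both
# the 85% attainment threshold and the APS measure. The progress_floor and
# aps_floor defaults are illustrative placeholders, not confirmed figures.

def above_floor(pct_secondary_ready, avg_point_score, progress_score,
                progress_floor=98.75, aps_floor=100.0):
    meets_progress = progress_score >= progress_floor
    meets_attainment = (pct_secondary_ready >= 85
                        and avg_point_score >= aps_floor)
    return meets_progress or meets_attainment

# A school with a challenging intake, below the 85% threshold, is still
# above the floor on progress alone under this reading:
print(above_floor(pct_secondary_ready=78, avg_point_score=99.0,
                  progress_score=99.2))  # → True
```

Under this reading, the APS component constrains only the attainment route; a school clearing the progress bar never encounters it.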

.

Performance Tables

The treatment of performance tables begins with a further reference to the supporting ‘data portal’ that will include material about ‘the attainment of certain pupil groups’. This is designed to reduce pressure to overload the tables with information, but may also mean the relegation of data about the comparative performance of those different groups.

The description of ‘headline measures’ to be retained in the tables includes, presumably for each test:

  • the percentage of learners who meet ‘the secondary readiness standard’;
  • the school’s average scaled score, compared with the average score for the national cohort;
  • the rate of progress of pupils in the school.

There will also be a ‘high attainer’ measure:

‘We will also identify how many of the school’s pupils are among the highest-attaining nationally, by including a measure showing the percentage of pupils attaining a high scaled score in each subject.’

The pitch of this high scaled score is not mentioned. It could be set low – broadly the top third, as in the current ‘high attainer’ measure – or at a somewhat more demanding level. This is a significant omission and clarification is required.

Statutory teacher assessment outcomes will also be published (though some at least may follow schools’ chosen assessment systems rather than utilise scaled scores – see above).

All annual results will also be accompanied by three year rolling averages, to improve the identification of trends and protect small schools in particular from year-on-year fluctuation related to the quality of intake. There is an intention to extend rolling averages to floor targets once the data is available.

All these measures will be shown separately for those eligible for the Pupil Premium. This means that, for the first time, high attainers amongst this group will be distinguished, so it will be possible to see the size of any ‘excellence gap’. This is an important and significant change.

There will also be a continuation of the ‘family of schools’ approach – comparing schools with others that have a similar intake – recently integrated into the current Performance Tables.

The Pupil Premium will be increased:

‘To close the attainment gap between disadvantaged pupils and their peers and to help them achieve these higher standards…Schools have the flexibility to spend this money in the best way possible to support each individual child to reach his or her potential.’

So, despite the rider in the second sentence, the purpose of the Premium is now two-fold.

In practice this is likely to mean that schools at risk of being below the standard will focus the Premium disproportionately on those learners that are not deemed ‘secondary-ready’, which further supports the Husbands theory.

 .

Recognising the attainment and progress of all pupils

Rather disappointingly, this final short section is actually exclusively about low attainers and those with SEN – presumably amongst those who will not be able to demonstrate that they are ‘secondary ready’.

It tells us that access arrangements are likely to be unchanged. Although the new KS2 tests will be based on the entire PoS:

‘Even if pupils have not met the expectations for the end of the key stage, most should be able to take the tests and therefore most will have their attainment and progress acknowledged’.

There will also be ‘a small minority’ of pupils who continue to be assessed via the P-scales. There is a commitment to explore whether the P-scales should be adjusted to ‘align with the revised national curriculum’.

There is an intention to publish data about the progress of pupils with very low prior attainment, though floor standards will not be applied to special schools. The document invites suggestions for what data should be published for accountability purposes.

.

Primary Assessment and Accountability: Issues and Omissions

The extended analysis above reveals a plethora of issues with the various measures proposed within the consultation document.

Equally, the consultation document ignores some important questions raised by material already published, especially the parallel secondary consultation document.

So we have a rather distorted picture with several missing pieces.

The longer first section below draws together the shortcomings in the argument constructed by the consultation document. I have organised these thematically rather than present them in order of magnitude – too many are first order issues. I have also included Labour’s response to the document.

The shorter second section presents the most significant unanswered questions arising from the relationship between this document and the materials published earlier.

 .

Issues arising from the consultation document

The multiple issues of concern include:

  • The core purpose of the Pupil Premium in primary schools: Is it to narrow attainment gaps between advantaged and disadvantaged learners, or to push the maximum number of schools over more demanding floor targets by delivering more ‘secondary ready’ pupils, regardless of disadvantage? There is much evidence to support the Husbands argument that the Premium ‘is now more clearly a fund to secure threshold levels of attainment.’ There is some overlap between the two objectives – though not as much as we commonly think, as the IPPR report quoted above points out. Chasing both simultaneously will surely reduce the chances of success on each count. That does not bode well for the Government’s KPIs.
  • The definition of ‘secondary ready’: This is based exclusively on an attainment measure derived from scores achieved in once-only tests in maths and aspects of English, plus teacher assessment in writing. It is narrow in a curricular sense, but also in the sense that it defines readiness entirely in terms of attainment, even though the document admits that this is ‘the single most important outcome’ rather than the only outcome.
  • The pitch of the new attainment threshold for the floor target: The level of demand has been ratcheted up significantly, by raising the hurdle from Level 4c to a Level 4b equivalent and increasing the percentage of pupils required to reach it by 25 percentage points, from 60% to 85%. The consultation document says unpublished modelling suggests combining this with fixing the proposed progress measure a point or two below the average ‘would result in a similar number of schools falling below the floor as at present’. It would be helpful to see hard evidence that this is indeed the case. Given that the vast majority of schools will be judged against the floor standard solely on the attainment measure (see below), there are grounds for contesting the assertion.
  • Whether the proposed floor target consists of two measures or one of two measures: There is considerable ambiguity within the consultation document on this point, but the weight of evidence suggests that the latter applies, and that progression is only to be brought into the equation when schools ‘have particularly challenging intakes’. This again supports the Husbands line. It is a significant change from current arrangements in the primary sector and is also materially different to proposed arrangements for the secondary sector. It ought to be far more explicit as a consequence.
  • The risk of perverse incentives in the floor targets: The consultation document points out that inclusion of a progress measure reduces a perverse incentive to focus exclusively or disproportionately on learners near the borderline of the attainment threshold. But if the progress measure is only to apply to a small (but unquantified) minority of schools with the most demanding intakes, the perverse incentive remains in place for most. In any case, a measure that focuses on average progress across the cohort does not necessarily militate against disproportionate attention to those at the borderline.
  • Which principles are the core principles? We were promised a set of such principles in the piece quoted above on ‘Assessment without levels’. Instead we seem to have a set of ‘key principles’ on which ‘the proposals in this consultation are based’, these being derived from Bew (paragraph 1.5) and some additional points that the main text concedes do not themselves qualify as core principles (paragraph 3.7). Yet the consultation question about core principles follows directly beneath the latter and, moreover, calls them principles! This is confusing, to say the least.
  • Are the core principles consistently followed? This depends of course on what counts as a core principle. But if one of those principles is Bew’s insistence that ‘measures of progress should be given at least as much weight as attainment’, that does not seem to apply to the treatment of floor targets in the document, where the attainment threshold trumps the progress measure. If one of the core proposals runs counter to the proposed principles, that is clearly a fundamental flaw.
  • Implications of a choice of in-house assessment schemes: Schools will be able to develop their own schemes or else draw on commercially available products. One possibility is that the market will become increasingly dominated by a few commercial providers who profit excessively from this arrangement. Another is that hundreds of alternative schemes will be generated and there will be very little consistency between those in use in different schools. This will render primary-secondary transition and in-phase transfer much more complex, especially for ‘outlier’ learners. It seems that this downside of a market-driven curriculum and assessment model has not been properly quantified or acknowledged.
  • Whether scaled scores apply to statutory teacher assessment: We know that the results of teacher assessment in writing will feature in the new floor target, alongside the outcomes of tests which attract a new-style scaled score. But does this imply that all statutory teacher assessment will attract similar scaled scores, or will it be treated as ‘ongoing assessment’? I might have missed it, but I cannot find an authoritative answer to this point in the document.
  • Whether the proposed tripartite report to parents is easier to understand than existing arrangements: This is a particularly significant issue. The argument that the system of National Curriculum levels was not properly understood is arguably a fault of poor communication rather than inherent to the system itself. It is also more than arguable that the alternative now proposed – comprising a scaled score, decile and comparative scaled score in each test – is at least as hard for parents to comprehend. There is no interest in converting this data into a simple set of proxy grades with an attainment and a progression dimension, as I have proposed. The complexity is compounded because schools’ internal assessment systems may well be completely different. Parents are currently able to understand progress within a single coherent framework. In future they will need to relate one system for in-school assessment to another for end of key stage assessment. This is a major shortcoming that is not properly exposed in the document.
  • Whether decile-based differentiation is sufficient: Parents arguably have a right to know in which percentile their children’s performance falls, rather than just the relevant decile. At the top of the attainment spectrum, Level 6 achievement is more differentiated than a top decile measure, in that those who pass the test are a much more selective group than the top ten percent. The use of comparatively vague deciles may be driven by concern about labelling (and perhaps also some recognition of the unreliability of more specific outcomes from this assessment process). The document insists that only parents will be informed about deciles, but it does not require a soothsayer to predict that learners will come to know them, just as they know their levels. (The secondary consultation document sees virtue in older learners knowing and using their ‘APS8 score’ so what is different?) In practice it is hard to imagine a scenario where those in possession of percentile rankings could withhold this data if a parent demanded it.

.

.

  • Norm versus criterion-referencing: Some commentators appear relatively untroubled by a measure of progress that rests entirely on comparison between a learner and his peers. They suppose that most parents are most concerned whether their child is keeping up with their peers, rather than whether their rate of progress is consistent with some abstract measure. That may be true – and it may be also too difficult to design a new progress measure that applies consistently to the non-linear development of every learner, regardless of their prior attainment. On the other hand, it does not seem impossible to contemplate a measure of progress associated with the concept of ‘mastery’ that is now presumed to underpin the National Curriculum, since its proponents are clear that ‘mastery’ does not hold back those who are capable of progressing further and faster.
  • Development of tests to suit all abilities and the risk of ceiling effects: There must be some degree of doubt whether universal tests are the optimal approach to assessment for the full attainment spectrum, especially for those at either end, particularly in maths where the span of the spectrum is huge. The document contains an assurance that the new tests will be at least as demanding as existing Level 6 tests, so single tests will aim to accommodate six levels of attainment in old money. Is that feasible? Despite the assurance, the risk of undesirable ceiling effects is real and of particular concern for the highest attainers.

.

.

  • Where to pitch the baseline: The arguments in favour of a Year R baseline – and the difficulties associated with implementing one – have attracted the lion’s share of the criticism directed at the paper, which has rather served to obscure some of its other shortcomings. The obvious worry is that the baseline check will be either disproportionate or unreliable – and quite possibly both. Most of the focus is on the overall burden of testing: the document floats a variety of ideas that would add another layer of fragmentation and complexity, such as making the check optional, making KS1 tests optional and providing different routes for stand-alone infant/junior schools and all-through primaries.
  • The nature of the baseline check: Conversely, the consultation document is unhelpfully coy about the nature of the check required. If it had made a better fist of describing the likely parameters of the check, exaggerated concerns about its negative impact on young children might have been allayed. Instead, the focus on the overall testing burden leads one to assume that the Year R check will be comparatively onerous.
  • How high attainers will be defined in the performance tables: There are welcome commitments to a ‘high attainer’ measure for each test, based on scaled scores, and the separate publication of this measure for those in receipt of the Pupil Premium. But we are given no idea where the measure will be pitched, nor whether it will address progress as well as attainment. One obvious approach would be to use the top decile, but that runs against an earlier commitment not to incorporate the deciles in performance tables, despite there being no obvious reason why this should be problematic, assuming that anonymity can be preserved (which may not be possible in smaller cohorts). It would be particularly disappointing if high attainers continue to be defined as around one third of the cohort – say the top three deciles, but that may be the path of least resistance.

There are also more technical assessment issues – principally associated with the construction of the scaled score – which I leave it to assessment experts to analyse.

Labour’s response to the consultation document picks up some of the wider concerns above. Their initial statement focused on the disappearance of ‘national statements of learning outcomes’, how a norm-referenced approach would protect standards over time and the narrowness of the ‘secondary-ready’ concept.

.

.

A subsequent Twigg article begins with the latter point, bemoaning the Government’s:

‘Backward looking vision, premised on rote-learning and a failure to value the importance of the skills and aptitudes that young people need to succeed’.

It moves on to oppose the removal of level descriptors:

‘There might be a case to look at reforming level descriptors to ensure sufficient challenge but scrapping them outright is completely misguided and will undermine standards in primary schools’

and the adoption of norm-referenced ranking into deciles:

‘By ranking pupils against others in their year – rather than against set, year-on-year standards – this will lead to distortions from one year to another. There is not a sound policy case for this.’

But it offers support for changing the baseline:

‘I have been clear that I want to work constructively on the idea of setting baseline assessments at 5. There is a progressive case for doing this. All-too-often it is the case that the prior attainment of children from socially-deprived backgrounds is much lower than for the rest. It is indeed important that schools are able to identify a baseline of pupil attainment so that teachers can monitor learning and challenge all children to reach their potential.’

Unfortunately, this stops short of a clear articulation of Labour policy on any of these three points, though it does suggest that several aspects of these reforms are highly vulnerable should the 2015 General Election go in Labour’s favour.

.

Omissions

There are several outstanding questions within the section above, but also a shorter list of issues relating to the interface between the primary assessment and accountability consultation document, its secondary counterpart and the National Curriculum proposals. Key amongst them are:

  • Consistency between the primary and secondary floor targets: The secondary consultation is clear ‘that schools should have to meet a set standard on both the threshold and progress measure to be above the floor’. There is no obvious justification for adopting an alternative threshold-heavy approach in the primary sector. Indeed, it is arguable that the principle of a floor relies on broad consistency of application across phases. Progression across the attainment spectrum in the primary phase should not be sacrificed on the altar of a single, narrow ‘secondary ready’ attainment threshold.
  • How the KS2 to KS4 progress measure will be calculated: While the baseline-KS2 progress measure may be second order for the purposes of the primary floor, the KS2-KS4 progression measure is central to the proposals in the secondary consultation document. We now know that this will be based on the relationship between the KS2 scaled score and the APS8 measure. But there is no information about how these two different currencies will be linked together. Will the scaled score be extended into KS3 and KS4 so that GCSE grades are ‘translated’ into higher points on the same scale? Further information is needed before we can judge the appropriateness of the proposed primary scaled scores as a baseline.

.

.

  • How tests will be developed from singleton attainment targets: The process by which tests will be developed in the absence of a framework of level descriptions and given single ‘lowest common denominator’ attainment targets for each programme of study remains shrouded in mystery. This is not simply a dry technical issue, because it informs our understanding of the nature of the tests proposed. It also raises important questions about the relationship academies will need to have with programmes of study that – ostensibly at least – they are not required to follow. One might have hoped that the primary document would throw some light on this matter.

.

.

.

Overall Judgement

Because there has been no effort to link together the proposals in the primary and secondary consultation documents (and we still await a promised post-16 document) there are significant outstanding questions about cross-phase consistency and, especially, the construction of the KS2-KS4 progress measure.

I have identified no fewer than sixteen significant issues with the proposals in the primary consultation document. Several of these are attributable to a lack of clarity within the text, not least over the core principles that should be applied across the piece to ensure policy coherence and internal consistency between different elements of the package. This is a major shortcoming.

The muddle and obfuscation over the nature of the floor target is an obvious concern, together with the decision to hitch the Pupil Premium to the achievement of the floor, as well as to narrowing achievement gaps. There is a fundamental tension here that needs to be unpacked and addressed.

The negative impact of the removal of the underpinning framework ensuring consistency between statutory end of key stage assessment and end-year assessment in schools has been underplayed. There is significant downside to balance against any advantages from greater freedom and autonomy, but this has not been spelled out.

The case for the removal of levels has been asserted repeatedly, despite a significant groundswell of professional opinion against it, stretching back to the original response to consultation on the recommendations of the Expert Panel. There may be reason to believe that Labour would reverse this decision.

While there is apparently cross-party consensus on the wisdom of shifting the KS1 baseline to Year R, big questions remain about the nature of the ‘baseline check’ required.

Despite some positive commitments to make the assessment and accountability regime ‘high attainer friendly’ there are also significant reservations about how high attainment will be defined and reported.

On a scaled score from 80 to 130, I would rate the Government at 85 and, with some benefit of the doubt, put the Opposition at 100.

.

.

 .

In a nutshell…

We have perhaps two-thirds of the bigger picture in place, though some parts are distinctly fuzzy.

The secondary proposals are much more coherent than those for the primary sector and these two do not fit together well.

The primary proposals betray an incoherent vision and vain efforts to reconcile irreconcilably divergent views. It is no surprise that they were extensively delayed, only to be published in the last few days of the summer term.

Has this original June 2012 commitment been met?

‘In terms of statutory assessment, however, I believe that it is critical that we both recognise the achievements of all pupils, and provide for a focus on progress. Some form of grading of pupil attainment in mathematics, science and English will therefore be required, so that we can recognise and reward the highest achievers as well as identifying those that are falling below national expectations.’

We have scores rather than grading and they don’t extend to science. High achievers will receive attention but we don’t know whether they will be the highest achievers or a much broader group.

Regrettably then, the answer is no.

.

.

.

GP

July 2013

Accountability, Assessment and the New National Curriculum: Part Two

.

This is the second part of a revised, updated and extended analysis of proposals for the reform of the National Curriculum, its assessment and the use of assessment data within accountability arrangements.

New material, about the primary assessment and accountability consultation document, is shown in bold. I have also published it separately.

Part One concluded with an extended commentary on the newly available consultation document on primary assessment and accountability. Before drawing out the implications of that commentary, I want to return to the National Curriculum proposals.

.

Issues with the National Curriculum Proposals

It is not my purpose here to detail the changes to each programme of study, since several writers have already provided such material.

I want to concentrate instead on the broad shape of the National Curriculum and plans for its implementation. The treatment below highlights the six issues I find most concerning, and takes them in order of concern.

.

Phasing of Implementation

It is clear that legal issues did arise from the troublesome mismatch between the timetables for the implementation of National Curriculum and assessment reform.

This has caused the Government to move away from its preferred position of universal implementation (at least up to the end of KS3) from September 2014.

The Government Response to the National Curriculum Consultation says:

‘All maintained schools will be required to teach the new national curriculum for all subjects and at all key stages from September 2014, with two exceptions. The new national curriculum for year 2 and year 6 English, mathematics and science will become compulsory from September 2015, to reflect the fact that key stage 2 tests in summer 2015 will be based on the existing national curriculum. Key stage 4 English, mathematics and science will be taught to year 10 from September 2015 and year 11 from September 2016, to ensure coherence with the reformed GCSE qualifications in these subjects.’

In other words, introduction of the new PoS – in the three core subjects only – is delayed for one year for those learners beginning Year 2 and Year 6 in September 2014.

Similarly, the new core KS4 programmes will be introduced for Year 10 in September 2015 and Year 11 in September 2016, to align with the introduction of new GCSE specifications.

This results in a complex set of transitional arrangements. In primary schools alone:

  • In AY 2013/14, the foundation subjects are disapplied for all, the core subjects are disapplied for Years 3 and 4 and the existing PoS continue to apply for Years 1, 2, 5 and 6.
  • In AY 2014/15, the new National Curriculum applies in foundation subjects for all Years but, in the core subjects, it only applies for Years 1, 3, 4 and 5. Year 2 and Year 6 follow the existing core PoS.
  • In AY 2015/16, the new National Curriculum applies in core and foundation subjects for all Years.

This Table shows the implications for different primary year groups in the core subjects only.

             AY 2013/14    AY 2014/15    AY 2015/16
Year 1       Old PoS       New PoS       New PoS
Year 2       Old PoS       Old PoS       New PoS
Year 3       Disapplied    New PoS       New PoS
Year 4       Disapplied    New PoS       New PoS
Year 5       Old PoS       New PoS       New PoS
Year 6       Old PoS       Old PoS       New PoS

Depending on their Year Group in 2013/14, learners will experience one of three combinations over this three-year period:

  • Old, Old, New
  • Old, New, New
  • Disapplied, New, New
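The core-subject transition rules above can be encoded as a small function, handy for sanity-checking which PoS applies to a given cohort. This is a convenience sketch of the rules as I read them; the labels are mine.

```python
# Encodes the primary core-subject (English, maths, science) phasing rules:
# 2013/14 — Years 3 and 4 disapplied, all other years on the old PoS;
# 2014/15 — Years 2 and 6 stay on the old PoS, all others move to the new;
# 2015/16 — every year group on the new PoS.

def core_curriculum(year_group, academic_year):
    """Return which core-subject PoS applies to a primary year group."""
    if academic_year == "2013/14":
        return "disapplied" if year_group in (3, 4) else "old"
    if academic_year == "2014/15":
        return "old" if year_group in (2, 6) else "new"
    return "new"  # 2015/16 onwards

print(core_curriculum(6, "2014/15"))  # → old
```

Iterating this over all six year groups reproduces the table above, including the three learner trajectories (Old-Old-New, Old-New-New, Disapplied-New-New).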

Moreover, because there is a different pattern in respect of the foundation subjects, many will be simultaneously pursuing parts of the old National Curriculum and parts of the new National Curriculum in AY2014/15.

As far as the PoS are concerned, that may be fairly straightforward, but which National Curriculum Aims apply? Which Inclusion Statement? What about the requirements for English and maths across the curriculum?

The Inclusion Statement certainly used to be statutory, and I have seen no suggestion that the new version is not. How, then, can two different statutory Inclusion Statements apply to the same pupils at the same time?

Other commentators have suggested that managing this transition will be a fairly easy ask of schools – and that the compromise presented is an improvement on the previous situation, in which some learners would have followed the new PoS for a year, only to be tested on the old one.

But complexity is the enemy of efficiency, especially in schools that may already be struggling to meet expectations imposed by the accountability framework.

Given that the Government was initially wedded to a ‘big bang’ approach rather than phased implementation, it might have been preferable to have stuck with that decision and delayed implementation of the entire National Curriculum until September 2015.

Failing that, it might have been preferable to have delayed the entire National Curriculum – not just the core subjects – by one year for those beginning Years 2 and 6 in September 2014, so those learners would follow a single version in that year rather than sections of old and new combined.

.

Inclusion statement

The Inclusion Statement for the current National Curriculum has three sections:

‘The curriculum should provide relevant and challenging learning to all children. It should follow the three principles set out in the inclusion statement:

A. setting suitable learning challenges

B. responding to pupils’ diverse learning needs

C. overcoming potential barriers to learning and assessment for individuals and groups of pupils.’

There is not space to quote the full statement here, especially the lengthy third section covering special needs, disabilities and EAL, but here are parts A and B:

‘A. Setting suitable learning challenges

Teachers should aim to give every pupil the opportunity to experience success in learning and to achieve as high a standard as possible. The national curriculum programmes of study set out what most pupils should be taught but teachers should teach the knowledge, skills and understanding in ways that suit their pupils’ abilities. This may mean choosing knowledge, skills and understanding from earlier or later stages so that individual pupils can make progress and show what they can achieve. Where it is appropriate for pupils to make extensive use of content from an earlier stage, there may not be time to teach all aspects of the programmes of study. A similarly flexible approach will be needed to take account of any gaps in pupils’ learning resulting from missed or interrupted schooling.

For pupils whose attainments fall significantly below the expected levels at a particular stage, a much greater degree of differentiation will be necessary. In these circumstances, teachers may need to use the content of programmes of study as a resource or to provide a context, in planning learning appropriate to the requirements of their pupils.

For pupils whose attainments significantly exceed the expected levels, teachers will need to plan suitably challenging work. As well as drawing on work from later stages, teachers may plan further differentiation by extending the breadth and depth of study.

B. Responding to pupils’ diverse learning needs

When planning, teachers should set high expectations and provide opportunities for all pupils to achieve, including boys and girls, pupils with special educational needs, pupils from all social and cultural backgrounds, pupils from different ethnic groups including travellers, refugees and asylum seekers, and those from diverse linguistic backgrounds. Teachers need to be aware that pupils bring to school different experiences, interests and strengths which will influence the way in which they learn. Teachers should plan their approaches to teaching and learning so that pupils can take part in lessons fully and effectively.

To ensure that they meet the full range of pupils’ needs, teachers should be aware of the requirements of the equal opportunities legislation that covers race, gender and disability.

Teachers should take specific action to respond to pupils’ diverse needs by:

  • creating effective learning environments
  • securing their motivation and concentration
  • providing equality of opportunity through teaching approaches
  • using appropriate assessment approaches
  • setting targets for learning.’

Here (again) are the first two paragraphs of the version proposed in the February 2013 Framework Document:

‘Teachers should set high expectations for every pupil. They should plan stretching work for pupils whose attainment is significantly above the expected standard. They have an even greater obligation to plan lessons for pupils who have low levels of prior attainment or come from disadvantaged backgrounds. Teachers should use appropriate assessment to set targets which are deliberately ambitious….

…Teachers should take account of their duties under equal opportunities legislation that covers disability, ethnicity, gender, sexual identity, gender identity, and religion or belief.’

This is entirely unchanged in the July document (though there has been a minor adjustment further down to reflect concerns expressed by SEN and disability lobbies).

I have already pointed out the shortcomings in the first paragraph, which are even more glaring and serious if this text continues to have a statutory basis (and of course this error should not be used as an excuse to downgrade the statement by removing its statutory footing).

While the version in the current National Curriculum may be prolix, it carries important messages that seem to have been lost in the newer version, about giving ‘every pupil the opportunity to experience success in learning and to achieve as high a standard as possible’ and expecting teachers to ‘provide opportunities for all pupils to achieve’. Overall, its significance is diminished.

Revision of the first paragraph is urgent and critical, but the whole statement should be strengthened and – assuming it does still have statutory force – its statutory basis affirmed. Ofsted’s ‘Most Able Students’ Report explains why this is necessary.

.

Attainment Targets

The February consultation invited respondents to say whether they approved of the decision to apply a single standard attainment target to each programme of study.

The consultation document said:

‘Legally, the National Curriculum for each subject must comprise both programmes of study and attainment targets. While programmes of study set out the curriculum content that pupils should be taught, attainment targets define the expected standard that pupils should achieve by the end of each key stage. Under the current National Curriculum, the standard is set out through a system of levels and level descriptions for each subject. The national expectation is defined as a particular level for the end of Key Stages 1, 2 and 3. At Key Stage 4, GCSE qualifications at grade C currently define the expected standard.

The Government has already announced its intention to simplify the National Curriculum by reforming how we report progress. We believe that the focus of teaching should be on subject content as set out in the programmes of study, rather than on a series of abstract level descriptions. Parents deserve a clear assessment of what their children have learned rather than a ‘level description’ which does not convey clear information.

A single statement of attainment that sets out that pupils are expected to know, apply and understand the matters, skills and processes specified in the relevant programme of study will encourage all pupils to aspire to reach demanding standards. Parents will be given clear information on what their children should know at each stage in their education and teachers will be able to report on how every pupil is progressing in acquiring this knowledge.’

The analysis of consultation responses notes that:

‘739 (52%) respondents viewed the wording of the attainment targets as unclear and confusing. Many respondents also commented on the brevity of the attainment targets and felt that clarification would be needed to help schools to identify the standard and to ensure consistency in measuring pupil performance across schools. A number of respondents highlighted the interplay between curriculum and assessment and wanted to review the government’s plans for primary assessment and accountability and for recognising the achievements of low attaining pupils and those pupils with special educational needs (SEN) and disabilities, in order to provide a considered response.’

The Government’s response rather dismisses the views expressed by the majority of respondents, simply restating its case for removing National Curriculum levels and conceding nothing.

‘Schools should then be free to design their approaches to assessment to support pupil attainment and progression. The assessment framework must be built into the curriculum, so that schools can check what pupils have learned and whether they are on track to meet expectations at the end of the key stage, and so that they can report regularly to parents.

We have been clear that we will not prescribe a national system for schools’ ongoing assessment. Ofsted’s inspections will be informed by the pupil tracking data systems that individual schools choose to keep. Schools will continue to benchmark their performance through statutory end of key stage assessments, including national curriculum tests.’

The concern here is driven by lack of information. Respondents to the consultation cannot really be blamed for responding negatively when the Government has so far failed to explain how statutory Key Stage 2 tests and Key Stage 3 assessments will be built on top of the scaffolding supplied by the draft PoS.

It is also a reasonable expectation, on the part of schools, that their internal assessment arrangements are fully consistent with the statutory assessment framework operating at the end of each Key Stage.

There is no recognition, consideration or accommodation of the arguments against the removal of levels. The degree of conviction assumed by the response rings rather hollow given the significant weight of professional opposition to this decision, against which the Government sets the controversial views of its own Expert Panel.

Despite railing against ‘the blob’, this is one occasion where Ministers prefer to side with the views expressed by a handful of academics, rather than those of professional school leaders and teachers.

Mr Twigg called on the Government to rethink the removal of levels when the Ministerial Statement was debated in Parliament (Col 37), which suggests that Labour has come round to the view that removing them would be unwise.

.

Support for Implementation

There was overwhelming concern amongst respondents to consultation about the implementation timetable and a perception that limited support would be provided to manage the transition. ASCL’s call for a thorough and properly resourced implementation plan reflected this concern.

The Consultation Report records that:

‘1,782 (64%) respondents raised the need for funding for materials and resources to support the teaching of the new national curriculum. There was a concern that existing resources would become obsolete and replacing them would incur significant costs.

1,643 (59%) respondents felt that there was a need for staff training and continuing professional development to increase teachers’ confidence and capability in designing and delivering the new curriculum and to respond to the need for specific specialist skills (e.g. computing, language teaching).

1,651 (59%) respondents highlighted the need for schools to have sufficient time to plan for the new curriculum. Some stated that schools would need the final new national curriculum at the start of the coming academic year to enable them to prepare for teaching the new curriculum from September 2014.’

In responses to questions about who is best placed to develop resources and provide such support, 42% of respondents mentioned schools and teachers, 21% advocated inter-school collaboration, 36% mentioned teaching and subject associations, 31% local authorities and 13% the government. Publishers were also nominated.

The extended section in the Government’s response to the consultation is long on advocacy of a school- and market-driven system – and correspondingly short on central support to enable this process to operate effectively.

It tells us that:

‘There will be no new statutory document or guidance from Whitehall telling teachers how to do this. Government intervention will be minimal

…We believe that schools are best placed to decide which resources meet their needs and to secure these accordingly. We want to move away from large-scale, centralised training programmes, which limit schools’ autonomy, and towards a market-based approach in which schools can work collaboratively to provide professional development tailored to individual needs. We expect schools to take advantage of existing INSET days and wider opportunities to bring staff together to consider the development needs that the new curriculum may pose.

… The Leading Curriculum Change resources developed through the National College for Teaching and Leadership (NCTL) by National Leaders of Education will inspire and guide school leaders through this process and teaching schools and others will support their use.

Sector-led expert groups have been looking at how existing resources can support the new curriculum and identifying any significant gaps… Resources and opportunities will be signposted from our website once the new national curriculum is finalised in the autumn and hosted by subject associations and other organisations.

Current government-funded provision is being refocused to support the new national curriculum. This includes support provided by the national network of Science Learning Centres, the work of the National Centre for Excellence in the Teaching of Mathematics (NCETM) and the extension of match funding for phonics resources and training until October 2013.

New support includes ring-fenced funding for sport in primary schools and over £2 million worth of support to bolster the supply of computing teachers. In addition, we will make a fund of £2 million available to teaching schools and national support schools, to enable them to support the delivery of the new curriculum across their alliances and networks in the coming academic year.

We have been working with publishers and educational suppliers throughout the review to ensure that they are well informed about changes to the curriculum and can meet schools’ needs by adapting existing products and by identifying what additional materials will be needed in time to support schools to prepare to teach the new curriculum from September 2014. We know that schools will prioritise, budget and plan for when and how to add gradually to – or indeed replace – resources and we expect publishers and suppliers to take this into account.’

As far as I can establish, only the £2 million for teaching schools and national support schools (the schools where National Leaders of Education are located) is new provision.  Many of these will be academies, not required to follow the National Curriculum. Some state-funded schools might reasonably look askance at their suitability and capacity to provide the requisite support.

Since there are likely to be somewhere between 1,100 and 1,500 institutions of this kind active during this period, this funding could amount to as little as £1,333 per school.
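The per-school figure can be checked with some quick arithmetic (a rough sketch only, assuming the £2 million fund were divided evenly across networks of 1,100 and 1,500 institutions – both figures as quoted above):

```python
# Quick check of the per-school funding figure, assuming the
# £2 million fund is split evenly across the network of
# teaching schools and national support schools.
fund = 2_000_000  # the £2m fund cited in the consultation response

for schools in (1_100, 1_500):
    per_school = fund / schools
    print(f"{schools:,} schools -> £{per_school:,.0f} each")
```

At the upper bound of 1,500 schools this gives roughly £1,333 per school; even at the lower bound of 1,100 it is only around £1,818.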

We do not know what capacity the National College, NCETM and the National Science Centres are devoting to their contribution.

By and large, schools are expected to meet any additional costs from their existing budgets. The combined cost of resources, professional development and staff time are likely to be significant, especially in larger secondary schools.

It seems that the Government will advertise online any ‘significant gaps’ in the availability of resources to support the curriculum and look to the market to respond within the 11 months available prior to implementation (though schools would clearly prefer to have such materials much earlier than that).

A story on the progress made by the groups established to identify such gaps was published in the Guardian in late June, but based on papers dating from a month earlier. It is clear that they were then hamstrung by the draft status of the PoS and the likelihood of further significant change before they were finalised.

We have no idea of the magnitude of the gaps that are being identified and how those balance out between key stages and subjects. This information will not be released before the early Autumn.

There is no sign of extra dedicated INSET days to support the implementation process in schools, or of the implementation plan called for by ASCL.

The Government is continuing to push schools to take lead responsibility and ownership of the reform process, while the bodies representing heads and teachers are insisting that the Government is abdicating responsibility and they need more central support.

The distinct possibility that this state of confrontation will not result in uniformly effective implementation is likely to feature rather prominently in the Government’s risk registers.

.

Challenge

When asked whether the draft PoS were sufficiently challenging, just 22% of consultation respondents agreed that they were, while 39% said that they were not.

The latter:

‘Felt that the proposed curriculum would not prepare pupils for the challenges of the 21st Century. Some of these respondents stated that the level of challenge could not be determined in foundation subjects due to insufficient detail in the programmes of study.’

The Government’s response does not expressly address this point, other than by restating the rationale for the approach it has adopted.

Moreover, 61% of respondents said that the draft PoS do not provide for effective progression between key stages and 63% said the new national curriculum does not embody an expectation of higher standards for all children.

These hardly amount to a ringing endorsement. Moreover, it is unlikely that the changes that have been introduced since the last round of consultation will have been sufficient in aggregate to alter this judgement. But we will never know because this question will not be repeated in the final round of consultation – the pitch of the PoS is now fixed until any future review.

.

Aims

The overarching National Curriculum aims have been revised slightly from:

‘The National Curriculum provides pupils with an introduction to the core knowledge that they need to be educated citizens. It introduces pupils to the best that has been thought and said; and helps engender an appreciation of human creativity and achievement.

The National Curriculum is just one element in the education of every child. There is time and space in the school day and in each week, term and year to range beyond the National Curriculum specifications. The National Curriculum provides an outline of core knowledge around which teachers can develop exciting and stimulating lessons.’

To:

‘The national curriculum provides pupils with an introduction to the essential knowledge that they need to be educated citizens. It introduces pupils to the best that has been thought and said; and helps engender an appreciation of human creativity and achievement.

The national curriculum is just one element in the education of every child. There is time and space in the school day and in each week, term and year to range beyond the national curriculum specifications. The national curriculum provides an outline of core knowledge around which teachers can develop exciting and stimulating lessons to promote the development of pupils’ knowledge, understanding and skills as part of the wider school curriculum.’

19% of consultation respondents liked the aims, but another 19% found them too vague. Some wanted guidance on the time the national curriculum should take up. Some 36% argued that the aims are over-focused on knowledge at the expense of skills and understanding.

Some 44% approved of the proposal to drop subject-specific aims but 37% opposed this. The Government has decided to retain them ‘to support and guide schools in their teaching and to help parents and pupils understand the desired outcomes of the curriculum’.

The statements of cross-curricular emphasis on English and maths have been strengthened slightly. A section on vocabulary development has been added to English – and, for some unknown reason, the order has been reversed, with maths now coming first.

The Government’s response in defence of its aims argues that the emphasis on knowledge reflects the purpose of the curriculum and that its accentuation was one of the objectives of the review.

While it is undeniably the role of schools to develop skills and understanding, the aims ‘are not…intended to capture everything that schools teach and do’. The revised version is intended to reflect more accurately the purpose and status of the aims.

The logic of a National Curriculum that gives statutory definition to knowledge but neglects skills and understanding is questionable.

Such a defence rather undermines the argument – advanced by proponents and opponents of Hirsch alike – that these elements do not lend themselves readily to artificial separation, gaining strength and significance from their inter-relationship, such that the whole is greater than the sum of its parts. Schools may be hindered rather than helped by this document in their efforts to reunite them.

.

Primary Assessment and Accountability: Issues and Omissions

The extended analysis in Part One revealed a plethora of issues with the various measures proposed within the consultation document.

Equally, it ignores some important questions raised by material already published, especially the parallel secondary consultation document.

So we have a rather distorted picture with several missing pieces.

The longer first section below draws together the shortcomings in the argument constructed by the consultation document. I have organised these thematically rather than present them in order of magnitude – too many are first order issues.

The shorter second section presents the most significant unanswered questions.

.

Issues arising from the consultation document

The multiple issues of concern include:

  • The core purpose of the Pupil Premium in primary schools: Is it to narrow attainment gaps between advantaged and disadvantaged learners, or to push the maximum number of schools over more demanding floor targets by delivering more ‘secondary ready’ pupils, regardless of disadvantage? There is much evidence to support Husbands’ argument that the Premium ‘is now more clearly a fund to secure threshold levels of attainment.’ There is some overlap between the two objectives – though not as much as we commonly think, as the IPPR report quoted above points out. Chasing both simultaneously will surely reduce the chances of success on each count. That does not bode well for the Government’s KPIs.
  • The definition of ‘secondary ready’: This is based exclusively on an attainment measure derived from scores achieved in once-only tests in maths and aspects of English, plus teacher assessment in writing. It is narrow in a curricular sense, but also in the sense that it defines readiness entirely in terms of attainment, even though the document admits that this is ‘the single most important outcome’ rather than the only outcome.
  • The pitch of the new attainment threshold for the floor target: The level of demand has been ratcheted up significantly, by increasing the height of the hurdle from Level 4c to Level 4b-equivalent and increasing the percentage of pupils required to reach this level by 25 percentage points, from 60% to 85%. The consultation document says unpublished modelling suggests combining this with fixing the proposed progress measure a percentage or two below the average ‘would result in a similar number of schools falling below the floor as at present’. It would be helpful to see hard evidence that this is indeed the case. Given that the vast majority of schools will be judged against the floor standard solely on the attainment measure (see below), there are grounds for contesting the assertion.
  • Whether the proposed floor target consists of two measures or one of two measures: There is considerable ambiguity within the consultation document on this point, but the weight of evidence suggests that the latter applies, and that progression is only to be brought into the equation when schools ‘have particularly challenging intakes’. This again supports the Husbands line. It is a significant change from current arrangements in the primary sector and is also materially different to proposed arrangements for the secondary sector. It ought to be far more explicit as a consequence.
  • The risk of perverse incentives in the floor targets: The consultation document points out that inclusion of a progress measure reduces a perverse incentive to focus exclusively or disproportionately on learners near the borderline of the attainment threshold. But if the progress measure is only to apply to a small (but unquantified) minority of schools with the most demanding intakes, the perverse incentive remains in place for most. In any case, a measure that focuses on average progress across the cohort does not necessarily militate against disproportionate attention to those at the borderline.
  • Which principles are the core principles? We were promised a set of such principles in the piece quoted above on ‘Assessment without levels’. Instead we seem to have a set of ‘key principles’ on which ‘the proposals in this consultation are based’, these being derived from Bew (paragraph 1.5) and some additional points that the main text concedes do not themselves qualify as core principles (paragraph 3.7). Yet the consultation question about core principles follows directly beneath the latter and, moreover, calls them principles! This is confusing, to say the least.
  • Are the core principles consistently followed? This depends of course on what counts as a core principle. But if one of those principles is Bew’s insistence that ‘measures of progress should be given at least as much weight as attainment’, that does not seem to apply to the treatment of floor targets in the document, where the attainment threshold trumps the progress measure. If one of the core proposals runs counter to the proposed principles, that is clearly a fundamental flaw.
  • Implications of a choice of in-house assessment schemes: Schools will be able to develop their own schemes or else draw on commercially available products. One possibility is that the market will become increasingly dominated by a few commercial providers who profit excessively from this arrangement. Another is that hundreds of alternative schemes will be generated and there will be very little consistency between those in use in different schools. This will render primary-secondary transition and in-phase transfer much more complex, especially for ‘outlier’ learners. It seems that this downside of a market-driven curriculum and assessment model has not been properly quantified or acknowledged.
  • Whether scaled scores apply to statutory teacher assessment: We know that the results of teacher assessment in writing will feature in the new floor target, alongside the outcomes of tests which attract a new-style scale score. But does this imply that all statutory teacher assessment will attract similar scale scores, or will it be treated as ‘ongoing assessment’? I might have missed it, but I cannot find an authoritative answer to this point in the document.
  • Whether the proposed tripartite report to parents is easier to understand than existing arrangements: This is a particularly significant issue. The argument that the system of National Curriculum levels was not properly understood is arguably a fault of poor communication rather than inherent to the system itself. It is also more than arguable that the alternative now proposed – comprising a scaled score, decile and comparative scaled score in each test – is at least as hard for parents to comprehend. There is no interest in converting this data into a simple set of proxy grades with an attainment and a progression dimension, as I have proposed. The complexity is compounded because schools’ internal assessment systems may well be completely different. Parents are currently able to understand progress within a single coherent framework. In future they will need to relate one system for in-school assessment to another for end of key stage assessment. This is a major shortcoming that is not properly exposed in the document.
  • Whether decile-based differentiation is sufficient: Parents arguably have a right to know in which percentile their children’s performance falls, rather than just the relevant decile. At the top of the attainment spectrum, Level 6 achievement is more differentiated than a top decile measure, in that those who pass the test are a much more selective group than the top ten percent. The use of comparatively vague deciles may be driven by concern about labelling (and perhaps also some recognition of the unreliability of more specific outcomes from this assessment process). The document insists that only parents will be informed about deciles, but it does not require a soothsayer to predict that learners will come to know them, just as they know their levels. (The secondary consultation document sees virtue in older learners knowing and using their ‘APS8 score’ so what is different?) In practice it is hard to imagine a scenario where those in possession of percentile rankings could withhold this data if a parent demanded it.
  • Norm versus criterion-referencing: Some commentators appear relatively untroubled by a measure of progress that rests entirely on comparison between a learner and his peers. They suppose that most parents are most concerned with whether their child is keeping up with their peers, rather than whether their rate of progress is consistent with some abstract measure. That may be true – and it may also be too difficult to design a new progress measure that applies consistently to the non-linear development of every learner, regardless of their prior attainment. On the other hand, it does not seem impossible to contemplate a measure of progress associated with the concept of ‘mastery’ that is now presumed to underpin the National Curriculum, since its proponents are clear that ‘mastery’ does not hold back those who are capable of progressing further and faster.
  • Development of tests to suit all abilities and the risk of ceiling effects: There must be some degree of doubt whether universal tests are the optimal approach to assessment for the full attainment spectrum, especially for those at either end, particularly in maths where the span of the spectrum is huge. The document contains an assurance that the new tests will be at least as demanding as existing Level 6 tests, so single tests will aim to accommodate six levels of attainment in old money. Is that feasible? Despite the assurance, the risk of undesirable ceiling effects is real and of particular concern for the highest attainers.
  • Where to pitch the baseline: The arguments in favour of a Year R baseline – and the difficulties associated with implementing one – have attracted the lion’s share of the criticism directed at the paper, which has rather served to obscure some of its other shortcomings. The obvious worry is that the baseline check will be either disproportionate or unreliable – and quite possibly both. Most of the focus is on the overall burden of testing: the document floats a variety of ideas that would add another layer of fragmentation and complexity, such as making the check optional, making KS1 tests optional and providing different routes for stand-alone infant/junior schools and all-through primaries.
  • The nature of the baseline check: Conversely, the consultation document is unhelpfully coy about the nature of the check required. If it had made a better fist of describing the likely parameters of the check, exaggerated concerns about its negative impact on young children might have been allayed. Instead, the focus on the overall testing burden leads one to assume that the Year R check will be comparatively onerous.
  • How high attainers will be defined in the performance tables: There are welcome commitments to a ‘high attainer’ measure for each test, based on scaled scores, and the separate publication of this measure for those in receipt of the Pupil Premium. But we are given no idea where the measure will be pitched, nor whether it will address progress as well as attainment. One obvious approach would be to use the top decile, but that runs against an earlier commitment not to incorporate the deciles in performance tables, despite there being no obvious reason why this should be problematic, assuming that anonymity can be preserved (which may not be possible in smaller cohorts). It would be particularly disappointing if high attainers continue to be defined as around one third of the cohort – say the top three deciles – but that may be the path of least resistance.

Labour’s response to the consultation document picks up some of these issues. Their initial statement focused on the disappearance of ‘national statements of learning outcomes’, how a norm-referenced approach would protect standards over time and the narrowness of the ‘secondary-ready’ concept.

A subsequent Twigg article begins with the latter point, bemoaning the Government’s:

‘Backward looking vision, premised on rote-learning and a failure to value the importance of the skills and aptitudes that young people need to succeed’.

It moves on to oppose the removal of level descriptors:

‘There might be a case to look at reforming level descriptors to ensure sufficient challenge but scrapping them outright is completely misguided and will undermine standards in primary schools’

and the adoption of norm-referenced ranking into deciles:

‘By ranking pupils against others in their year – rather than against set, year-on-year standards – this will lead to distortions from one year to another. There is not a sound policy case for this.’

But it offers support for changing the baseline:

‘I have been clear that I want to work constructively on the idea of setting baseline assessments at 5. There is a progressive case for doing this. All-too-often it is the case that the prior attainment of children from socially-deprived backgrounds is much lower than for the rest. It is indeed important that schools are able to identify a baseline of pupil attainment so that teachers can monitor learning and challenge all children to reach their potential.’

Unfortunately, this stops short of a clear articulation of Labour policy on any of these three points, though it does suggest that several aspects of these reforms are highly vulnerable should the 2015 General Election go in Labour’s favour.

.

Omissions

There are several outstanding questions within the section above, but also a shorter list of issues relating to the interface between the primary assessment and accountability consultation document, its secondary counterpart and the National Curriculum proposals. Key amongst them are:

  • Consistency between the primary and secondary floor targets: The secondary consultation is clear ‘that schools should have to meet a set standard on both the threshold and progress measure to be above the floor’. There is no obvious justification for adopting an alternative threshold-heavy approach in the primary sector. Indeed, it is arguable that the principle of a floor relies on broad consistency of application across phases. Progression across the attainment spectrum in the primary phase should not be sacrificed on the altar of a single, narrow ‘secondary ready’ attainment threshold.
  • How the KS2 to KS4 progress measure will be calculated: While the baseline-KS2 progress measure may be second order for the purposes of the primary floor, the KS2-KS4 progression measure is central to the proposals in the secondary consultation document. We now know that this will be based on the relationship between the KS2 scaled score and the APS8 measure. But there is no information about how these two different currencies will be linked together. Will the scaled score be extended into KS3 and KS4 so that GCSE grades are ‘translated’ into higher points on the same scale? Further information is needed before we can judge the appropriateness of the proposed primary scaled scores as a baseline.
  • How tests will be developed from singleton attainment targets: This issue has already been raised in the National Curriculum section above. The process by which tests will be developed in the absence of a framework of level descriptions and given single ‘lowest common denominator’ attainment targets for each programme of study remains shrouded in mystery. This is not simply a dry technical issue, because it informs our understanding of the nature of the tests proposed. It also raises important questions about the relationship academies will need to have with programmes of study that – ostensibly at least – they are not required to follow. One might have hoped that the primary document would throw some light on this matter.

.

Overall Judgement

National Curriculum

.

On the National Curriculum side I have flagged up some significant concerns.

There are some major implementation challenges ahead, which now extend beyond AY 2013/14 into the following year.

The decision to phase national curriculum implementation – ultimately forced on the Government by its decision to stagger curriculum and assessment reforms – is rather more likely to increase those challenges than to temper them. There are significant question marks over whether the selected approach to phasing is optimal, either for schools or learners.

The first paragraph of the Inclusion Statement is plain wrong, especially given its statutory status. It requires amendment.

As things stand, the National Curriculum has a limited shelf-life under the Coalition. If it does not wither on the vine as a consequence of continuing conversion to academy status, it is likely to be marginalised in the medium term – and the new iteration will not be replaced.

As for Labour, your guess is as good as mine. Her Majesty’s Opposition has committed simultaneously to removing and retaining a National Curriculum, should it be elected in 2015. That is neither sensible nor sustainable – nor can this confusion add up to a vote-attracting proposition.

On a scaled score from 80 to 130 I would rate the Government at 95 and the Opposition at 80.

.

Assessment and accountability

Because there has been no effort to link together the proposals in the primary and secondary consultation documents (and we still await a promised post-16 document) there are significant outstanding questions about cross-phase consistency and, especially, the construction of the KS2-KS4 progress measure.

I have identified no fewer than sixteen significant issues with the proposals in the primary consultation document. Several of these are attributable to a lack of clarity within the text, not least over the core principles that should be applied across the piece to ensure policy coherence and internal consistency between different elements of the package. This is a major shortcoming.

The muddle and obfuscation over the nature of the floor target is an obvious concern, together with the decision to hitch the Pupil Premium to the achievement of the floor, as well as to narrowing achievement gaps. There is a fundamental tension here that needs to be addressed.

The negative impact of removing the underpinning framework that ensured consistency between statutory summative assessment at the end of each key stage and schools’ own summative end-of-year assessment has been underplayed. There is a significant downside to balance against any advantages from greater freedom and autonomy, but this has not been spelled out.

The case for the removal of levels has been asserted repeatedly, despite a significant groundswell of professional opinion against it, stretching back to the original response to consultation on the recommendations of the Expert Panel. There may be reason to believe that Labour would reverse this decision.

While there is apparently cross-party consensus on the wisdom of shifting the KS1 baseline to Year R, big questions remain about the nature of the ‘baseline check’ required.

Despite some positive commitments to make the assessment and accountability regime ‘high attainer friendly’ there are also significant reservations about how high attainment will be defined and reported.

On a scaled score from 80 to 130, I would rate the Government at 85 and, with some benefit of the doubt, put the Opposition at 100.

.

In a nutshell…

We have perhaps two-thirds of the bigger picture in place, though some parts are distinctly fuzzy.

The secondary proposals are much more coherent than those for the primary sector and these two do not fit together well.

The primary proposals betray an incoherent vision and a vain attempt to reconcile irreconcilable views. It is no surprise that they were extensively delayed, only to be published in the last few days of the summer term.

Has this original June 2012 commitment been met?

‘In terms of statutory assessment, however, I believe that it is critical that we both recognise the achievements of all pupils, and provide for a focus on progress. Some form of grading of pupil attainment in mathematics, science and English will therefore be required, so that we can recognise and reward the highest achievers as well as identifying those that are falling below national expectations.’

We have scores rather than grading and they don’t extend to science. High achievers will receive attention but we don’t know whether they will be the highest achievers or a much broader group.

Regrettably, the answer is no.

.

.

GP

June 2013

Accountability, Assessment and the New National Curriculum: Part One

.

This post examines the Primary assessment and accountability consultation document published on 17 July 2013, considering its contribution to the emerging picture of National Curriculum, assessment and accountability reform across the primary and secondary phases.

It is a revised, expanded and updated version of an earlier post, published on 10 July, which foregrounded the revised National Curriculum proposals published two days earlier. Given its length I have divided it into two parts of roughly equal length.

Readers who prefer to focus exclusively on the fresh material should go to the emboldened sections of the text, or to this separate post containing the core argument.

I had always intended that this final version would explore the interplay between three major reforms – the revised proposals for the new National Curriculum, its assessment from 2016 when National Curriculum Levels are taken out of service and the associated arrangements for the publication of assessment outcomes in School Performance Tables – and offer some preliminary judgement of whether, taken together, they amount to a coherent and viable policy package.

There is a symbiotic relationship between curriculum, assessment and accountability. There are also important considerations associated with continuity and progression between phases.

The long-delayed primary assessment and accountability document had been expected since June 2012 and the timetable for publication was extended on more than one occasion. Such delay is typically evidence that there is disagreement over fundamental aspects of the policy – and that securing consensus has been problematic.

We have still not seen a promised consultation on post-16 assessment and accountability, and we await the outcome of the parallel secondary consultation, which closed on 1 May.

The extended disjunction between curriculum and assessment – apparent in both policy development and the timetable for implementation of these various reforms – has created unnecessary and potentially avoidable difficulties, for the Government and stakeholders alike.

There are also issues with the additional disjunction between primary and secondary (and post-16) assessment and accountability reforms. The Government’s decision to consult on these consecutively, without addressing important questions about how they fit together, suggests that critical pieces of the jigsaw are missing.

Finally, the decision to remove National Curriculum levels raises several difficult questions about how the Government will measure and monitor national progress in raising educational standards and narrowing gaps between advantaged and disadvantaged learners.

Part of the purpose of this post is to expose these rifts, so we can judge how robustly they are addressed in the next few months.

 

What has been published?

 .

Primary assessment and accountability reforms

17 July saw the publication of three documents in the following order:

  • A press release which appeared shortly after a midnight embargo;

There was no response to the parallel ‘Secondary school accountability’ consultation launched on 7 February and completed on 1 May, despite the connectivity between the two sets of proposals – and no firm indication of when that response would be published.

A third consultation, on post-16 assessment and accountability, was not mentioned either.

The staged publication of the primary material meant that initial analysis and questioning of Ministers was based largely on the headlines in the press release rather than on the substance of the proposals.

Initial media appearances appeared to generate a groundswell of hostility that Ministers could not readily counter. The answers to some reasonable questions on the detail were not yet in the public domain.

It was particularly noteworthy that a second announcement – about the size of Pupil Premium allocations in 2014-15 – was integrated within it. This was clearly intended to sugar the pill, though the coating is rather thin and there are also potentially wider ramifications (see below).

The Pupil Premium announcement must have been the justification for presentation by Lib Dem Deputy Prime Minister Clegg and Minister of State Laws, rather than by Tory Secretary of State Gove.

He (Gove) must have been delighted at avoiding this particularly poisoned chalice, already delayed into the dog days of summer – often a deliberate strategy for downplaying a particularly contentious announcement.

The consultation has a deadline of 11 October, allowing roughly twelve weeks for responses, including the entirety of the school summer holidays, so the majority of the consultation period occurs while most schools are closed. This may also serve to mute opposition to the proposals contained in the document.

There is a commitment to publish the outcomes of consultation, together with a response ‘in autumn 2013’, which is a very quick turnaround assuming that autumn means November rather than December. If there is any degree of contention, this might well edge close to Christmas.

.

National Curriculum publications

Nine days earlier, on 8 July 2013, a raft of National Curriculum proposals had appeared. The first iteration of this post concentrated primarily on these documents:

  • A Press Release ‘Education reform: a world-class curriculum to drive up standards and fuel aspiration’.
  •  A Consultation Document ‘National curriculum review: new programmes of study and attainment targets from September 2014’, with responses due by 8 August.
  • An updated framework document ‘The National Curriculum in England’ which includes the generic elements of the National Curriculum as well as each Programme of Study.

I have retained largely unchanged in this final version my record of recent history, to set the context for the analysis that follows.

 

A Recap of the last round of consultation and developments

 .

The February 2013 Package

Back in February, the Government released the draft and consultation documents that informed the preparation and publication of the latest round of material set out above.

They included:

  • A full set of draft National Curriculum Programmes of Study for Key Stages 1-3, as well as drafts of the PoS for Key Stage 4 English, maths, science, PE and Citizenship.
  • An earlier version of the National Curriculum Consultation Framework Document incorporating all those draft PoS, with the exception of the KS4 core subjects, plus the generic elements of the National Curriculum including draft Aims and a draft Inclusion Statement.
  • The Secondary School Accountability Consultation Document focused principally on the development of accountability measures and their publication within the School Performance Tables. Consultation closed on 1 May 2013. This promised parallel consultation documents on accountability for primary schools and post-16 providers ‘shortly’.
  • The Government’s response to an earlier consultation on reforming Key Stage 4 Qualifications and an associated letter to Ofqual. This resulted in a further consultation on the future shape of GCSE examinations (see below).

I produced an analysis and assessment of this package shortly after publication.

Key points included:

  • Significant disparities between the length and degree of prescription of different draft PoS, with the primary core at one extreme (long and prescriptive) and the secondary foundation subjects at another (short and flexible). This suggested that the Government’s commitment to schools’ autonomy is highly variable by subject and phase, and tailored deliberately to fit the profile of academisation.
  • The rather basic nature of the overarching National Curriculum Aims:

‘The National Curriculum provides pupils with an introduction to the core knowledge that they need to be educated citizens. It introduces pupils to the best that has been thought and said; and helps engender an appreciation of human creativity and achievement.

The National Curriculum is just one element in the education of every child. There is time and space in the school day and in each week, term and year to range beyond the National Curriculum specifications. The National Curriculum provides an outline of core knowledge around which teachers can develop exciting and stimulating lessons.’

and an associated proposal to dispense with subject-specific aims in each draft PoS, assumed to be superfluous given the generic statement above.

  • The wording of the draft Inclusion Statement, which was seriously flawed. It said (my emphases) that:

‘Teachers should set high expectations for every pupil. They should plan stretching work for pupils whose attainment is significantly above the expected standard. They have an even greater obligation to plan lessons for pupils who have low levels of prior attainment or come from disadvantaged backgrounds. Teachers should use appropriate assessment to set targets which are deliberately ambitious.’

I took issue with this because of the two infelicitous assumptions it contains – first, that teachers somehow have a ‘greater obligation’ to plan for low attainers than for high attainers, rather than an overriding obligation to treat them equally; second, that learners from disadvantaged backgrounds cannot be included amongst the ranks of high attainers.

The first is against the basic principles of comprehensive education and profoundly inequitable; the second is anathema, including to Secretary of State Gove, who has constantly and correctly cautioned against harbouring low expectations of disadvantaged learners.

  • The decision to disapply the bulk of the existing National Curriculum, PoS, attainment targets and assessment arrangements in academic year 2013/14. Schools would be required to teach the subjects of the National Curriculum, but not the content of the PoS. At primary level this would apply across KS1 and 2 for all foundation subjects. But, for core subjects, it would apply only to Years 3 and 4. At secondary level, disapplication would apply across all subjects at KS3 and to English, maths, science, ICT, PE and citizenship at KS4. The disapplication at KS4 would continue until the new PoS came into force for each subject and year group (so leaving the way open for phasing). For, if schools – whether state-maintained or academies – can operate successfully without the PoS for a year, why bother to reimpose the requirement on the state-maintained only from 2014?
  • The ‘lowest common denominator’ approach to attainment targets, which relied on a single standard AT in each PoS:

‘By the end of each key stage, pupils are expected to know, apply and understand the matters, skills and processes specified in the relevant programme of study.’

This – together with the scrapping of associated level descriptions – removes all scaffolding for the effective differentiation of the PoS (with potentially negative implications for high attainers, amongst others, if they are insufficiently stretched). It also raises potentially awkward questions about the relationship between the PoS and assessment (see below). Finally, it leaves the accountability framework – with the possible addition of the ‘power of the market’ – as the last remaining policy lever to bring poorly performing schools into line.

  • How low, middle and high attainers will be distinguished in Performance Tables once National Curriculum Levels disappear, since the current distinction is based on achievement of Level equivalents at KS1 (for KS2) and at KS2 (for KS4). Such a distinction will be retained since the secondary accountability consultation mentions a ‘headline measure showing the progress of pupils in each of English and mathematics’ that will continue to ‘show how pupils with low, medium and high prior attainment perform’.
  • Whether these distinctions will be applied in Performance Tables to those eligible for the Pupil Premium, so parents and others can understand the gap within each school between the performance of high attainers from advantaged and disadvantaged backgrounds respectively (not forgetting middle and low attainers too).
  • The future of Key Stage 3 assessment, given the disappearance of levels and proposals to remove the requirement on schools to report to the centre the outcomes of teacher assessment. Will it be left entirely to schools to design an assessment system or will a standard national framework continue to operate in the core subjects?
  • The potential implications of the proposed introduction of PISA-style sampling tests at KS4 to ‘track national standards over time’, including any potential ‘washback’ effect on the curriculum.
  • Several unanswered questions about the nature of the proposed value-added KS2-KS4 progress measure, with: separate and as-yet-unknown KS2 and KS4 grading systems; KS2 benchmarks based on performance in KS2 English and maths tests; and KS4 benchmarks based on a new ‘Average Points Score’ across a balanced scorecard of eight qualifications, including English and maths, three other EBacc subjects and three further ‘high value qualifications’. The consultation document says this measure:

‘Will take the progress each pupil makes between Key Stage 2 and Key Stage 4 and compare that with the progress that we expect to be made by pupils nationally who had the same level of attainment at Key Stage 2 (calculated by combining results at end of Key Stage 2 in English and mathematics).’
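
The document gives only that verbal description, but the mechanics it implies can be sketched in a few lines. Everything here is illustrative – the function name, the data shape and the grouping by raw KS2 score are my assumptions, not the DfE’s published methodology:

```python
from collections import defaultdict

def value_added_scores(pupils):
    """Illustrative value-added calculation (an assumption, not the
    DfE's published method).

    `pupils` is a list of (ks2_score, ks4_points) pairs for the
    national cohort. Each pupil's value-added is their KS4 points
    minus the national average KS4 points achieved by pupils with
    the same KS2 prior attainment."""
    # National expectation: mean KS4 points for each KS2 score.
    totals = defaultdict(lambda: [0.0, 0])
    for ks2, ks4 in pupils:
        totals[ks2][0] += ks4
        totals[ks2][1] += 1
    expected = {ks2: s / n for ks2, (s, n) in totals.items()}
    # Value-added: actual KS4 result minus the national expectation.
    return [ks4 - expected[ks2] for ks2, ks4 in pupils]
```

A school’s progress score would then presumably be the average of its own pupils’ value-added figures, with zero representing progress exactly in line with the national expectation.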

A week later I published another post: ‘Whither National Curriculum Assessment Without Levels?’ that set out the history of the decision to dispense with levels and explored some of the issues this raises for assessment, in a context where the majority of secondary schools and a minority of primary schools are no longer bound by the National Curriculum.

This noted:

  • One implication of wholesale exemption from the National Curriculum for academies is that KS2 tests will need to be derived somehow from the content descriptions in the Programmes of Study. The manner in which this will be done is unclear, since it is open to question whether even the detailed draft PoS in the primary core contain sufficiently robust outcome statements to support grade-based statutory assessment at the end of Key Stage 2, especially given the very basic approach to attainment targets outlined above.
  • The desirability of harmonised end of KS2 and end of KS4 assessment and grading systems, so that progression between those two points is easier for parents and learners to follow and understand.
  • The desirability of ensuring that schools’ internal end-of-year assessment systems harmonise with the external assessment systems at end KS2 and end KS4 respectively, so that parents (and teachers) can more easily track progression between those two points.
  • The development of a grading scale that links attainment to the concept of ‘mastery’ of the PoS and progress to a judgement whether performance has improved, been maintained or declined compared with the previous year. I proposed my own ‘aunt sally’ to illustrate this point.

.

Developments since February

In the five months that elapsed between the appearance of the two curriculum-related consultation packages there were several material developments that impacted significantly on the outcomes of the process and the future of the National Curriculum, assessment and accountability, including on the other side of the 2015 General Election.

I sought to capture those in this recent round-up of activity on the Gifted Phoenix Twitter feed.

Some of the most significant include:

  • A piece by Brian Lightman of ASCL arguing that we should not be trying to drive the curriculum through the assessment system.
  • A speech from David Laws confirming that the future equivalent of Level 4b will become the new KS2 ‘pass’ with effect from 2016, so heralding a recalibration of expectations on individual learners and raising the stakes for accountability purposes.
  • A speech from Brian Lightman at the ASCL Annual Conference which argued that the abolition of National Curriculum levels creates an unhelpful policy vacuum.

‘So I predict that in the months and years to come the best curriculums will be developed – and refined – in schools across the country by teachers for teachers.

And that is why I think this national curriculum may well be the last national curriculum. Because in future teachers will be doing it for themselves.’

  • An admission that the deadline for the publication of the consultation document on primary accountability had slipped to the end of the summer term (Col 383W).
  • Apparent confirmation from DfE that pupils ending Key Stage 2 in 2015 would be taught the new National Curriculum in academic year 2014/15 but would be assessed against the old one in May 2015.

  • A speech from Labour’s Stephen Twigg setting out the Opposition’s curriculum policy, in which he committed that:

‘So Labour will give all schools the same freedom over the curriculum that academies currently enjoy while continuing to insist that all schools teach a core curriculum including English, Maths and Science.’

Some have suggested that this is different to the current requirement imposed on academies, but the closing clause of the sentence above – ‘while continuing to insist that all schools teach a core curriculum’ – explicitly counters that. Adding any greater specificity to future core curriculum requirements would of course reduce academies’ freedoms – an idea that goes against the entire tenor of Twigg’s speech:

‘Academies say freedom to innovate in the curriculum has given their teachers a new sense of confidence and professionalism. All young people should benefit from the positive impact this brings – trusting teachers to get on with the job.’

  • Ofsted’s survey of provision for the most able students, which recommended that the Government should:

‘Develop progress measures to identify how well the most able students have progressed from Year 6 through Key Stage 4 to the end of Key Stage 5.’

  • A Sunday Times story announcing that the primary accountability consultation document would not be released alongside the National Curriculum documentation as anticipated, and suggesting that Ministers were considering KS2 tests in English, maths and science that would enable them to rank learners by performance and so identify the top 10%, (though it was unclear at this stage whether this was across the piece or in each subject).

 .

Three idiosyncratic interventions

One day after the publication of the second tranche of documents, Mr Twigg published a piece on the Labour List website implying a ‘volte face’ from his previous position, or else a contradictory muddle that requires urgent clarification.

The broad theme of the article is that the draft National Curriculum is insufficiently ambitious. But this would prompt the obvious riposte – if that’s the case, why are you committing Labour to doing without a National Curriculum altogether? Isn’t that even less ambitious by definition?

Mr Twigg strives to unhitch himself from the horns of this dilemma by repeating the commitment in his June speech:

‘Michael Gove believes only Academies and Free Schools can be trusted with the freedom to innovate in what they teach, other state schools must follow his highly prescriptive curriculum. Labour would end this divided system and extend these freedoms over the curriculum to all schools. All qualified teachers should be trusted to get on with the job and all schools should have the same freedoms to raise standards and innovate.’

That must mean extending to all the existing curricular freedoms enjoyed by academies. But then another paragraph is tacked on to the end of the article, almost as an afterthought:

‘His [ie Gove’s] divisive approach means curriculum freedom only applies to some schools. Instead, Labour would develop a reformed National Curriculum which allows teachers in all schools the freedom to innovate and prepares young people for the challenges of the modern economy.’

It is not possible to square these two contradictory statements. The freedoms currently enjoyed by academies do not amount to a National Curriculum (they are required to teach the three core subjects but are free to determine their content). As noted above, any universal National Curriculum would reduce academies’ freedoms rather than increase them.

.

Slightly before the 8 July publications, DfE released a short statement on ‘Assessing without levels’ which restated its case for abolishing them, adding:

‘Schools will be able to introduce their own approaches to formative assessment, to support pupil attainment and progression. The assessment framework should be built into the school curriculum, so that schools can check what pupils have learned and whether they are on track to meet expectations at the end of the key stage, and so that they can report regularly to parents.

Ofsted’s inspections will be informed by whatever pupil tracking data schools choose to keep. Schools will continue to benchmark their performance through statutory end of key stage assessments, including national curriculum tests. In the consultation on primary assessment and accountability, the department will consult on core principles for a school’s curriculum and assessment system.

Although schools will be free to devise their own curriculum and assessment system, we will provide examples of good practice which schools may wish to follow.’

So the core principles would be an important feature of the upcoming consultation document, but it would need to extend beyond those to satisfy the June 2012 commitment:

‘In terms of statutory assessment, however, I believe that it is critical that we both recognise the achievements of all pupils, and provide for a focus on progress. Some form of grading of pupil attainment in mathematics, science and English will therefore be required, so that we can recognise and reward the highest achievers as well as identifying those that are falling below national expectations. We will consider further the details of how this will work.’

And of course some kind of framework would be required for the KS2 core to support the commitment to KS2-4 progression measures in the consultation on secondary accountability.

This statement rather set to one side the strong case for aligning schools’ own internal end-year assessment arrangements with the statutory end of Key Stage arrangements that will be in place from 2016.

.

One further important signal towards the future direction of travel appeared, in the shape of Ofqual’s GCSE reform consultation published in June 2013, which sets out as its ‘preferred approach’ to GCSE grading an eight point numerical system, from Grade 8 down to Grade 1.

No convincing explanation is given for placing Grade 8 at the top of the scale rather than Grade 1, thus following the precedent set by graded music examinations rather than the more widely familiar approach taken in CSE and O level examinations (the latter prior to 1975).

Were this to be applied to the APS8 measure outlined above, it would mean each student achieving a numerical score between 8 and 64. Top-performing schools could vie with each other over the number of their students achieving the magical 64 rating.
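As a quick illustration of that arithmetic – the eight-subject menu and the 8-to-1 grade scale as described above, with the function name my own invention:

```python
def aps8_total(grades):
    """Sum eight subject grades, each on the proposed 8 (top) to 1 (bottom) scale."""
    assert len(grades) == 8 and all(1 <= g <= 8 for g in grades)
    return sum(grades)

print(aps8_total([8] * 8))  # the 'magical' maximum: 64
print(aps8_total([1] * 8))  # the minimum: 8
```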

Assuming a similarly constructed grading system for the three primary core tests, this could provide the basis for a straightforward ratio of progression from KS2 to KS4, and even possibly on to KS5 as well.

But the Sunday Times story led us to assume that this might be set aside in favour of an equation based on percentiles. Whether this would be designed to accommodate the current predilection for ‘comparable outcomes’ remained unclear.

.

An Aside: The Pupil Premium

The assessment and accountability announcement was sugar-coated by confirmation of the size of Pupil Premium allocations in 2014-15.

But close scrutiny of the coating reveals it as rather a thin veneer.

It was already known that the total Pupil Premium funding envelope would increase by £625m, from £1.875bn in 2013-14 to £2.5bn in 2014-15, so the overall budget was not in itself newsworthy.

But the decision to weight this towards primary schools was new. Ministers made much of the 44% increase for primary schools, from £900 to £1,300 per pupil, while barely mentioning that this must be achieved at the expense of the allocation for secondary schools.

One assumes that the secondary allocation has been frozen at £900 per learner but, at the time of writing, I have seen no official confirmation of that. Hence there is a degree of economy with the truth at play if the funding is claimed to be ‘new money’.
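The quoted figures are easily verified – a throwaway arithmetic check, nothing more:

```python
# Total Pupil Premium envelope (in £bn), as quoted above
total_2013_14 = 1.875
total_2014_15 = 2.5
print(total_2014_15 - total_2013_14)  # 0.625 → the £625m uplift

# Per-pupil primary rate, as quoted above
primary_2013_14 = 900   # £ per pupil
primary_2014_15 = 1300  # £ per pupil
increase = (primary_2014_15 - primary_2013_14) / primary_2013_14
print(round(increase * 100))  # ≈ 44 (the '44% increase' claimed)
```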

We do know, from the Spending Review, that the total budget for the Premium will be protected in real terms in 2015-16 but will not be further increased.

It remains to be seen whether the new weighting in favour of the primary sector will be retained, but that seems highly likely given the level of disruption that would be caused by frequent recalibration.

One influential commentator – Institute of Education Director Chris Husbands – has suggested that the bracketing of the two announcements marks a significant adjustment:

‘This is a further twist in the evolving purpose of the pupil premium – once intended as an incentive to primary schools to admit more disadvantaged children, then a compensatory payment for the additional costs involved in meeting the needs of disadvantaged children, it is now more clearly a fund to secure threshold levels of attainment.’

This argument runs like a leitmotif through the analysis below.

But it also runs counter to the Government’s official position that the Premium is designed to support all disadvantaged pupils and close the attainment gap between them and their peers, a position reinforced by the fact that the Government has delineated separate ‘catch-up premium support’ exclusively for those below the thresholds.

There is no change in recent announcements about strengthening the accountability underpinning Pupil Premium support. Husbands’ argument also runs against the tenor of Ofsted’s publications about effective use of the Premium and the latest Unseen Children report, published following deliberations by an expert panel on which Husbands served.

The source appears to be a recent IPPR publication ‘Excellence and Equity: Tackling Educational Disadvantage in England’s Secondary Schools’, Chapter 4 of which asserts (without supporting evidence) that:

‘Policymakers talk interchangeably about the pupil premium being used to support pupils who are falling behind, and it being used to support those who are on free school meals.’

This despite the fact that:

‘The overlap between these two categories is not as large as many people suppose. Last year, only 23 per cent of low-attaining pupils at the end of primary school were eligible for free school meals, and only 26 per cent of pupils eligible for free school meals were low attaining. This puts schools in the difficult position of having to decide whether to spend their pupil premium resources on pupils who have a learning need, even though many of them will not be eligible for free school meals, or whether they should focus them on FSM pupils, even though many of them will be performing at the expected level.’

The notion that pupils who are performing at the expected levels do not, by definition, have a ‘learning need’ is highly contentious, but let that pass.

The substantive argument is that, because ‘tackling the long tail of low achievement is the biggest challenge facing England’s school system’ and because the Premium ‘provides insufficient funds targeted at the right age range’:

‘In order to have maximum impact, the pupil premium should be explicitly targeted towards raising low achievement in primary and early secondary school… The Department for Education should therefore focus the additional funding at this age range. It should… create a higher level of pupil premium in primary schools, and… increase the ‘catch-up premium’ (for year 7 pupils) in secondary schools; the pupil premium in secondary schools would be held at its current level. This would provide primary schools with sufficient resources to fund targeted interventions, such as Reading Recovery, for all children who are at risk of falling behind. It would also compensate secondary schools that have large numbers of pupils starting school below the expected level of literacy and numeracy.

…Secondary schools are currently given a catch-up premium for every pupil who enters below level 4 in English and maths. However, there is no mechanism to guarantee that these pupils benefit from the money. The ‘catch-up premium’ should therefore be replaced with a ‘catch-up entitlement’. Every pupil that falls into this category would be entitled to have the money spent specifically on helping to raise his or her attainment. Schools would be required to write a letter to these pupils and their families explaining how the resources are being spent.’

As we now know, the Government has front-loaded the Pupil Premium into the primary sector, but not – as far as we are aware – the early years of secondary school. Nor has it increased the catch-up premium, unless by some relatively small amount yet to be announced, or made it an individual entitlement.

Husbands’ initial argument – that the linking of Premium and assessment necessarily means a closer link being forged with tackling below-threshold attainment – depends on his assertion that:

‘The core message of the consultation is that the concern is with absolute attainment – secondary readiness – rather than the progress made by primary schools.’

The analysis below examines the case for that assertion.

.

What the Primary Assessment Consultation Says

The commentary below follows the sections in the consultation document.

.

The case for change

The second paragraph of ‘The case for change’ says:

‘We believe that it is right that the government should set out in detail what pupils should be taught…’

a somewhat different slant from that adopted in the National Curriculum proposals (and one which, of course, applies only to the core subjects in state-maintained schools).

The next section works towards a definition of the term ‘secondary ready’, described as ‘the single most important outcome that any primary school should strive to achieve’.

It is discussed exclusively in terms of achievement in KS2 English and maths tests, at a level sufficient to generate five GCSE Grades A*-C including English and maths five years later.

This despite the fact that the secondary accountability consultation proposes two quite different headline measures: good GCSE grades in both English and maths and Average Points Score in eight subjects from a three-category menu (neither of which is yet defined against the proposed new 8 to 1 GCSE grading scale).

No other criteria are introduced into the definition, rendering it distinctly narrow. This might arguably be the most important outcome of primary education, but it is not the sole outcome by any stretch of the imagination.

The Government states an ‘ambition’ that all pupils should achieve this benchmark, excepting a proportion ‘with particular learning needs’.

There is no quantification of this proportion, though it is later used to identify a floor target assumption that 85% of the cohort should achieve the benchmark, so the group with ‘particular learning needs’ must be something less than 15% of all learners.

The introduction of a second and parallel floor target, relating to progression, is justified here on the grounds that ‘some schools have particularly demanding intakes’ so ‘will find it challenging to reach the ambitious [attainment] threshold…’. This will also help to identify coasting schools.

This approach to progression, as a fall back in circumstances where the threshold measure is problematic, lends some weight to Husbands’ contention that absolute attainment is now paramount.

Note that the wording in this section leaves unclear whether the new floor target consists of both of these measures – secondary readiness and progression – or just one or the other. This issue comes up again below.

There is nothing here about the importance of applying measures that do not have in-built perverse incentives to focus on the threshold boundary, but this too will reappear later.

There is early confirmation that:

‘We will continue to prescribe statutory assessment arrangements in English, mathematics and science.’

The ‘core principles’ mentioned in the Assessment Without Levels text appear at this stage to be those proposed in the June 2011 Bew Report rather than any new formulation. Note the second bullet point, which pushes in directly the opposite direction to Husbands’ assertion:

  • ongoing assessment is a crucial part of effective teaching, but it should be left to schools. The government should only prescribe how statutory end of key stage assessment is conducted;
  • external school-level accountability is important, but must be fair. In particular, measures of progress should be given at least as much weight as attainment;
  • a wide range of school performance information should be published to help parents and others to hold schools to account in a fair, rounded way; and
  • both summative teacher assessment and external testing are important forms of statutory assessment and both should be published.

Already there are mixed messages.

The next section justifies the removal of National Curriculum levels:

‘Imposing a single system for ongoing assessment, in the way that national curriculum levels are built into the current curriculum and prescribe a detailed sequence for what pupils should be taught, is incompatible with this curriculum freedom. How schools teach their curriculum and track the progress pupils make against it will be for them to decide. Schools will be able to focus their teaching, assessment and reporting not on a set of opaque level descriptions, but on the essential knowledge that all pupils should learn. There will be a clear separation between ongoing, formative assessment (wholly owned by schools) and the statutory summative assessment which the government will prescribe to provide robust external accountability and national benchmarking. Ofsted will expect to see evidence of pupils’ progress, with inspections informed by the school’s chosen pupil tracking data.’

Paraphrasing this statement, one can extract the following rather questionable logic:

  • We want to give schools freedom to determine their own approaches to formative assessment
  • The current system of levels has come to be applied to both formative and summative assessment
  • So we are removing levels from both formative and summative assessment.

The only justification for this must lie in recognition that the retention of levels in summative assessment will inevitably have a ‘backwash effect’ on formative assessment.

Yet this backwash effect is not acknowledged in respect of the proposed new arrangements for summative assessment. There is a fundamental issue here.

Schools will still be required to report to parents at the end of each year and key stage. There will be no imposition of a system for them doing so but, as we have already recognised, parents will more readily understand a system that is fully consistent with that applied for end of key stage assessment, rather than a substantively different approach.

The next segment begins to explore the case for shifting the baseline assessment – on which to build measures of progression in primary schools – back to Year R. This will ‘reinforce the importance of early intervention’. The EYFS profile will be retained but might be rendered non-statutory.

The introduction of new summative assessments at end KS1 and end KS2 is confirmed for 2016, with interim arrangements as noted elsewhere and accountability reforms also taking effect at this point (so in the December 2016/January 2017 Performance Tables).

There is also confirmation that academies’ funding agreements require compliance ‘with statutory assessment arrangements as they apply to maintained schools’. This is as close as we get to an explanation of how statutory assessments that apply to all schools will be derived from the National Curriculum PoS and single ‘lowest common denominator’ attainment targets.

.

Teacher assessment and reporting to parents

This section begins with a second justification for the removal of levels. Some anecdotal evidence is cited to support the argument:

‘Teachers have told us that the use of levels for assessment has become burdensome and encouraged crude ‘best fit’ judgements to differentiate pupil progress and attainment.’

This is the beginning of the justification for a more sophisticated (and hence more complex) approach.

Schools are free to design their assessment systems, though these must be integrated with the school curriculum (in a way that these separate government proposals have not been integrated).

There is a hint that these systems might be different for different subjects (adding still further complexity for parents) though ‘groups of schools may wish to use a common approach’.

Paragraph 3.7 is a confusing complement to the Bew-based core principles that appeared earlier:

‘We expect schools to have a curriculum and assessment framework that meets a set of core principles and:

  • sets out steps so that pupils reach or exceed the end of key stage expectations in the new national curriculum;
  • enables them to measure whether pupils are on track to meet end of key stage expectations;
  • enables them to pinpoint the aspects of the curriculum in which pupils are falling behind, and recognise exceptional performance;
  • supports teaching planning for all pupils; and
  • enables them to report regularly to parents and, where pupils move to other schools, providing clear information about each pupil’s strengths, weaknesses and progress towards the end of key stage expectations.

Question 1: Will these principles underpin an effective curriculum and assessment system?’

The ‘and’ in the opening sentence suggests that this isn’t part of the set of core principles, but the question at the end suggests these are the principles we should be considering, rather than those derived from Bew.

So we have two competing sets of core principles, the latter referring to schools’ own curriculum and assessment frameworks, but not to accountability.

The references here – to steps relative to end of KS expectations, measuring progress towards those expectations, identifying areas where learners are ahead and behind, supporting planning and reporting to parents – are entirely familiar. They really describe the functions of assessment rather than any principles that govern its application.

There is a commitment that the Government will ‘provide examples of good practice’ and:

‘Work with professional associations, subject experts, education publishers and external test developers to signpost schools to a range of potential approaches. Outstanding schools and teaching schools have an opportunity to take the lead in developing and sharing curriculum and assessment systems which meet the needs of their pupils…Commercial providers and subject organisations may offer curriculum schemes of work with inbuilt assessment, including class exercises, homework and summative tests.’

The second consultation question asks respondents to identify additional support and ‘other good examples of effective practice’.

The final section on reporting confirms that the Government plans to continue to publish teacher assessment outcomes in the core subjects, in line with Bew’s recommendation.

There is a brief reference, almost an afterthought, to schools providing information on transfer and transition. There is no acknowledgement that this process becomes more complex when schools are following different curricula and pursuing different in-house assessment systems.

.

National Curriculum tests in English, maths and science

This section begins with a further set of Bewisms, this time on the uses of data derived from statutory assessment. They are the justification for the continuation of externally-marked National Curriculum tests.

The proposal is that these should continue in maths and in English reading and grammar, spelling and punctuation. Writing will continue to be assessed through externally moderated teacher assessment, while national science sampling will also continue at the end of KS2. The Year 1 Phonics Screening Check will also continue, with results available in Raise Online but not in Performance Tables.

The timetable, including phasing, is rehearsed again, before the critically important tripartite approach to reporting is introduced.

This comprises:

  • A ‘scaled score’
  • Decile-based ranking within the ‘national cohort’ and
  • Progression from the baseline

The scaled score is the threshold marker of whether the learner is ‘secondary-ready’. We knew from previous announcements that this standard would be raised from the equivalent of 4c to the equivalent of 4b.

It is also necessary to know by how much any given learner has undershot or overshot this threshold. Hence:

‘We propose to report this attainment using a scaled score. Because it is not possible to create tests of precisely the same difficulty every year, the number of marks needed to meet the secondary readiness standard will fluctuate slightly from one year to another. To ensure that results are comparable over time, we propose to convert raw test marks into a scaled score, where the secondary readiness standard will remain the same from year to year.

Scaled scores are used in all international surveys and ensure that test outcomes are comparable over time. The Standards and Testing Agency will develop this scale. If, as an example, we developed scaled scores based on the current national curriculum tests, we might employ a scale from 80 to 130. We propose to use a scaled score of 100 as the secondary ready standard.’

The notion of a scaled score, with current Level 4b benchmarked at 100 and a scale sufficiently long to accommodate all levels of attainment above and below, is familiar from PISA and other international comparisons studies.

If the scale runs from 80 to 130, as this example does, then there are 51 potential levels of achievement in each assessment – about three times as many as there are currently.

But the score will also be accompanied by a norm-referenced decile, showing how each learner’s performance compares with their peers.

And an average scaled score is generated for learners with the same prior attainment at the baseline, which might or might not move to Year R, so enabling parents to compare their child’s scaled score with this average.

This material would not be used to generate simpler ‘proxy’ grades but would be provided in this tripartite format.
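To make the tripartite format concrete, here is a minimal sketch, assuming the illustrative 80 to 130 scale; the cohort figures, the decile convention (1 = top, 10 = bottom) and all names are invented purely for illustration:

```python
from statistics import mean

def decile(score, cohort_scores):
    """Rank a score within the national cohort: 1 = top decile, 10 = bottom."""
    below = sum(s < score for s in cohort_scores)
    percentile = below / len(cohort_scores) * 100
    return 10 - int(percentile // 10)

# Illustrative 'national cohort' of scaled scores and one pupil
cohort = [85, 92, 97, 100, 103, 108, 112, 118, 124, 129]
pupil_score = 112                             # element 1: the scaled score
pupil_decile = decile(pupil_score, cohort)    # element 2: decile ranking
same_baseline_group = [104, 110, 115]         # peers with the same prior attainment
group_average = mean(same_baseline_group)     # element 3: comparison average

# A pupil score above the group average implies better-than-average progress
print(pupil_score, pupil_decile, group_average)
```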

Assuming the illustrative elements above are adopted:

  • The highest possible KS2 performer would receive a scaled score of 130, confirmation that he is within the top decile of his peers and a comparative average scaled score. If this is less than 130, he has made better progress than those with the same prior baseline attainment. If it is 130 he has made the same progress. By definition his progress cannot be worse than the others.
  • A lowest possible KS2 performer would have a scaled score of 80, confirmation that he is within the bottom decile of the cohort and a comparative average scaled score which could be as low as 80 (all peers with the same prior attainment have made the same limited progress as he) but no lower since that is the extreme of the scale;
  • A median KS2 performer would obtain a scaled score of 100, confirmation that he is within the fifth decile and a correspondingly variable average scaled score.

No illustrative modelling is supplied, but one assumes that average scaled scores for those with similar prior attainment will typically group in a cluster, such that most learners will see relatively little difference, while some outliers might get to +15 or -15. It also seems likely that the ‘progression score’ will eventually be expressed in this manner.

The progress measure is based exclusively on comparison with how other learners are progressing, rather than any objective standard of the progression required.

The document claims that:

‘Reporting a scaled score and decile ranking from national curriculum tests will make it easy to identify the highest attainers for example using the highest scaled scores and the top percentiles of pupils. We do not propose to develop an equivalent to the current level 6 tests, which are used to challenge the highest attaining pupils. Key stage 2 national curriculum tests will include challenging material (at least of the standard of the current level 6 test) which all pupils will have the opportunity to answer, without the need for a separate test.’

But, while parents of high attainers who score close to the maximum might reasonably assume that their offspring have performed in the top one or two percentiles, they will be told only that they are within the top decile. This is rather less differentiated than securing a Level 6 under current arrangements.

Moreover, the preparation of single tests covering the full span of attainment will be a tall order, particularly in maths.

This DfES publication from 2004 notes:

‘It is well known that individual differences in arithmetical performance are very marked in both children and adults.  For example, Cockcroft (1982) reported that an average British class of eleven-year-olds is likely to contain the equivalent of a seven-year range in arithmetical ability. Despite many changes in UK education since then, including the introduction of a standard National Curriculum and a National Numeracy Strategy, almost identical results were obtained by Brown, Askew, Rhodes et al (2002).  They found that the gap between the 5th and 95th percentiles on standardized mathematics tests by children in Year 6 (10 to 11-year-olds) corresponded to a gap of about 7 chronological years in ‘mathematics ages’.’

There is no reference to the test development difficulties that this creates, including the risk that high-attaining learners have to undertake pointless ramping of easy questions, unnecessarily extending the length of their tests.

The text claims that the opposite risk – that ceilings are set too low – will not exist, with at least Level 6-equivalent questions included, but what will their impact be on low attainers undertaking the tests? This is the KS4 tiering debate rewritten for KS2.

One assumes that statutory teacher assessment in the core subjects will be reported in whatever format schools prefer, rather than in the same manner as test outcomes are reported but, like much else, this is not made clear in the document.

By implication there will be no reporting from the national sampling tests in science.

.

Baselines to measure progress

The section on baselines is particularly confusing because of the range of choices it offers consultees.

It begins by stating bluntly that, with the removal of levels, KS1:

‘Teacher assessment of whether a pupil has met the expectations of the programme of study will not provide sufficient information to act as a baseline’.

This is because teacher assessment ‘will not provide differentiated outcomes to allow us to measure progress’. This despite the fact that the document says later on that KS1 data collected under the existing system might be used as an interim baseline measure.

Two core options are set out:

  • Retaining a baseline at the end of KS1, through new English and maths tests that would be marked by teachers but externally moderated. These would be introduced in ‘summer 2016’. Views are sought over whether these test results should be published, given that publication might reduce the tendency for schools to ‘under-report pupils’ outcomes in the interest of showing the progress pupils have made in the most positive light’.
  • Introducing a new baseline at the start of the reception year, from September 2015, an option that gives credit for progress achieved up to the end of Year 2 and removes a perverse incentive to prioritise early intervention. This is described as ‘a simple check…administered by a teacher within two to six weeks of each pupil entering reception…subject to external monitoring’. It would either be developed in-house or procured from a third party. The existing EYFS Profile would remain in place but become non-statutory, so schools would not have to undertake it and the data would not be moderated or collected.

But an array of additional options is also offered:

  • Allowing schools to choose their preferred baseline check (presumably always undertaken in Reception, though the consultation is not clear on this point).
  • Making the baseline check optional, with schools choosing not to use it being ‘judged by attainment alone in performance tables and floor standards’. In other words, the progress measure itself becomes optional, which would appear to run counter to one of Bew’s principles articulated at the beginning of the document and support the Husbands’ line.
  • Assuming a Reception baseline check, making end of KS1 tests non-statutory for primary schools, while retaining statutory tests for infant schools because of their need for such an accountability measure and to provide a baseline for junior schools. KS1 tests would still be available for primary schools to use on an optional basis.

Much of the criticism of the document has focused on the Reception baseline proposal, especially concern that the check will be too demanding for the young children undertaking it. On the face of it, this seems rather unreasonable, but the document is at fault by not specifying more clearly what exactly such a check would entail.

.

Accountability

The penultimate section addresses performance tables and floor standards. It begins with the usual PISA-referenced arguments for a high autonomy, high accountability system, mentions again the planned data portal and offers continuing commitments to performance tables and floor standards alike.

It includes the statement that:

‘In recent years, we have made the floor both more challenging and fairer, by including a progress element’

even though the text has only just suggested making the progress element optional!

The section on floor standards begins with the exhortation that:

‘All primary schools should ensure that as many pupils as possible leave secondary ready.’

It repeats the intention to raise expectations by increasing the height of the hurdle:

‘We therefore propose a new requirement that 85% of pupils should meet the secondary readiness standard in all the floor standard measures (including writing teacher assessment). This 85% attainment requirement will form part of the floor standard. This standard challenges the assumption that some pupils cannot be secondary ready after seven years of primary school. At the same time it allows some flexibility to recognise that a small number of pupils may not meet the expectations in the curriculum because of their particular needs, and also that some pupils may not perform at their best on any given test day.’

So the 85% threshold is increased from 60% and the standard itself will be calibrated on the current Level 4b rather than 4c. This represents a hefty increase in expectations.

The text above appears to suggest that all pupils should be capable of becoming ‘secondary-ready’, regardless of their baseline – whether in Year R or Year 2 – apart from the group with particular unspecified needs. But, this time round, there is also an allowance for a second group who might underperform on the day of the test.

Once again, the justification for a parallel progress measure is not to ensure consistency with the Bew principles, but to offer schools with ‘particularly challenging intakes’ a second string to their bows in the form of a progress measure. The precise wording is:

‘We therefore propose that schools would also be above floor standards if they have good progress results.’

Does this mean that schools only have to satisfy one of the two measures, or both? This is not absolutely clear, but the sentence construction is perhaps more consistent with the former than the latter.

If we are right, this is substantively different to the requirements in place for 2013 and announced for 2014:

‘In key stage 2 tests in 2014, primary schools will be below the floor standard if:

  • fewer than 65% of its pupils achieve Level 4 or above in reading, writing and maths, and
  • it is below the England median for progression by two levels in reading, in writing, and in maths.

*Results in the new grammar, punctuation and spelling test are likely to be part of the floor standard in 2014.

For tests taken this year, primary schools will be below the floor standard if:

  • fewer than 60% of its pupils achieve Level 4 or above in reading, writing and maths, and
  • it is below the England median for progression by two levels in reading, in writing, and in maths.

*Results in the new grammar, punctuation and spelling test will not be part of the floor standard this year.’

It is also substantively different to the arrangements proposed for secondary schools.

Slightly later on, the text explains that schools which exceed the floor target on the basis of progression, while falling below the 85% secondary-ready threshold, will be more likely to be inspected by Ofsted than those exceeding this threshold.

However, Ofsted will also look at progress measures, and:

‘Schools in which low, middle and high attaining pupils all make better than average progress will be much less likely to be inspected.’

The text argues that:

‘Progress measures mean that the improvements made by every pupil count – there is no perverse incentive to focus exclusively on pupils near the borderline of an attainment threshold.’

But, assuming the progression target only comes into play for schools with ‘particularly challenging intakes’, the large majority will have no protection against this perverse incentive.

As already stated, the progress measure will be derived from comparison with the average scaled scores of those with similar prior attainment at the baseline – in essence the aggregation of the third element in reporting to parents. Exactly how this aggregation will be calculated is not explained.
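One plausible reading of such an aggregation can be sketched as follows. This is purely illustrative and is not the DfE's methodology: the baseline bands, the national averages and the pupil scores are all invented for the example.

```python
# Illustrative value-added progress measure (not the official DfE method).
# Each pupil is compared with the national average scaled score achieved by
# pupils with similar prior attainment at the baseline; the school's progress
# score is then the mean of those individual differences.

# Hypothetical national average KS2 scaled score by baseline attainment band
national_average = {"low": 96.0, "middle": 100.0, "high": 106.0}

# Hypothetical pupils: (baseline band, actual KS2 scaled score)
pupils = [("low", 97.0), ("middle", 99.0), ("middle", 102.0), ("high", 105.0)]

def school_progress(pupils, national_average):
    """Mean difference between each pupil's score and the national
    average for pupils with the same prior attainment."""
    diffs = [score - national_average[band] for band, score in pupils]
    return sum(diffs) / len(diffs)

print(round(school_progress(pupils, national_average), 2))
```

On these invented figures the school's pupils are, on average, a quarter of a scaled-score point ahead of similar pupils nationally.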

Of course, an average measure like this does not preclude schools from giving disproportionately greater attention to learners at different points on the attainment spectrum and comparatively neglecting others.

Unless the performance tables distinguish the progress made by high attainers, they are likely to lose out, as are those never likely to achieve the ‘secondary-ready’ attainment threshold.

The precise score for the floor targets is yet to be determined, but is expected ‘to be between 98.5 and 99’:

‘Our modelling suggests that a progress measure set at this level, combined with the 85% threshold attainment measure, would result in a similar number of schools falling below the floor as at present. Over time we will consider whether schools should make at least average progress as part of floor standards.’

So the progress element of the standard will be set slightly below average progress to begin with, perhaps to compensate for the much higher attainment threshold. This may support the argument that progress plays second fiddle to attainment.

Finally, the idea of incorporating an ‘average point score attainment measure’ in floor targets is floated:

‘Schools would be required to achieve either the progress measure or both the threshold and average point score attainment measure to be above the floor. This would prevent schools being above floor standards by focusing on pupils close to the expected standard, and would encourage schools to maximise the achievement of all their pupils. Alternatively we could publish the average point score to inform inspections and parents’ choices, but not include the measure in hard accountability.’

The first part of this paragraph reinforces the interpretation that the floor standard is now to be based either on the attainment threshold or the progress measure, but not both. But, under this option, the threshold measure could have an additional APS component to protect against gaming the threshold.

That goes some way towards levelling the playing field in terms of attainment, but of course it does nothing to support a balanced approach to progression in the vast majority of schools.
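The either/or structure of the proposed floor standard, including the optional APS component, can be expressed as a simple decision rule. This is a sketch only: the 85% threshold and the 98.5–99 progress range come from the document, but the APS figure is a placeholder of my own, since no figure is given.

```python
# Sketch of the proposed floor-standard logic: a school is above the floor
# if it meets the progress measure, OR (under the APS option) both the
# attainment threshold and an average point score requirement.

ATTAINMENT_THRESHOLD = 0.85   # 85% meeting the 'secondary-ready' standard
PROGRESS_FLOOR = 98.5         # expected to be 'between 98.5 and 99'
APS_FLOOR = 100.0             # placeholder: no figure appears in the document

def above_floor(pct_secondary_ready, avg_progress_score, avg_point_score):
    meets_progress = avg_progress_score >= PROGRESS_FLOOR
    meets_attainment = (pct_secondary_ready >= ATTAINMENT_THRESHOLD
                        and avg_point_score >= APS_FLOOR)
    return meets_progress or meets_attainment

# A school with a challenging intake can clear the floor on progress alone
print(above_floor(0.70, 99.2, 97.0))   # True
# A school falling short on both routes is below the floor
print(above_floor(0.80, 98.0, 101.0))  # False
```

The rule makes the trade-off visible: a school can neglect progress entirely and still clear the floor on attainment, which is precisely the imbalance discussed above.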

The section on performance tables begins with a further reference to the supporting ‘data portal’ that will include material about ‘the attainment of certain pupil groups’. This is designed to reduce pressure to overload the tables with information, but may also mean the relegation of data about the comparative performance of those different groups.

The description of ‘headline measures’ to be retained in the tables includes, presumably for each test:

  • the percentage of learners who meet ‘the secondary readiness standard’;
  • the school’s average scaled score, compared with the average score for the national cohort; and
  • the rate of progress of pupils in the school.

There will also be a ‘high attainer’ measure:

‘We will also identify how many of the school’s pupils are among the highest-attaining nationally, by including a measure showing the percentage of pupils attaining a high scaled score in each subject.’

The pitch of this high scaled score is not mentioned. It could be set low – broadly the top third, as in the current ‘high attainer’ measure – or at a somewhat more demanding level. This is a significant omission and clarification is required.

Statutory teacher assessment outcomes will also be published (though presumably these will follow schools’ chosen assessment systems rather than scaled scores – see above).

All annual results will also be accompanied by three year rolling averages, to improve the identification of trends and protect small schools in particular from year-on-year fluctuation related to the quality of intake. There is an intention to extend rolling averages to floor targets once the data is available.
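The smoothing effect of a three-year rolling average is straightforward to illustrate (the annual figures below are invented):

```python
# Three-year rolling average of an annual headline measure, as a way of
# damping year-on-year fluctuation in small cohorts. Figures are invented.

annual_pct_secondary_ready = [0.78, 0.91, 0.83, 0.88]  # hypothetical results

def rolling_averages(values, window=3):
    """Average of each consecutive `window`-year span of results."""
    return [round(sum(values[i:i + window]) / window, 3)
            for i in range(len(values) - window + 1)]

print(rolling_averages(annual_pct_secondary_ready))
```

A single weak or strong cohort moves the rolling figure by only a third of its annual effect, which is the protection small schools are being offered.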

All these measures will be shown separately for those eligible for the Pupil Premium. This means that, for the first time, high attainers amongst this group will be distinguished, so it will be possible to see the size of any ‘excellence gap’. This is an important and significant change.

There will also be a continuation of the ‘family of schools’ approach – comparing schools with others that have a similar intake – recently integrated into the current Performance Tables.

The Pupil Premium will be increased:

‘To close the attainment gap between disadvantaged pupils and their peers and to help them achieve these higher standards…Schools have the flexibility to spend this money in the best way possible to support each individual child to reach his or her potential.’

So, despite the rider in the second sentence, the purpose of the Premium is now two-fold.

In practice this is likely to mean that schools at risk of falling below the standard will focus the Premium disproportionately on those learners who are not deemed ‘secondary-ready’, which further supports the Husbands theory.

.

Recognising the attainment and progress of all pupils

Rather disappointingly, this final short section is actually exclusively about low attainers and those with SEN – presumably amongst those who will not be able to demonstrate that they are ‘secondary ready’.

It tells us that access arrangements are likely to be unchanged. Although the new KS2 tests will be based on the entire PoS:

‘Even if pupils have not met the expectations for the end of the key stage, most should be able to take the tests and therefore most will have their attainment and progress acknowledged’.

There will also be ‘a small minority’ currently assessed via the P-scales. There is a commitment to explore whether the P-scales should be adjusted to ‘align with the revised national curriculum’.

There is an intention to publish data about the progress of pupils with very low prior attainment, though floor standards will not be applied to special schools. The document invites suggestions for what data should be published for accountability purposes.

Here ends the first part of this analysis. Part Two begins with a review of the issues arising from the revised National Curriculum proposals and from the summary of the assessment consultation document above.

.

.

GP

July 2013

Gifted Phoenix Twitter Round-Up Volume 12: Curriculum, Assessment, Fair Access and Gap-Narrowing


.

This is the second section of my retrospective review of the Gifted Phoenix Twitter feed covering the period from February 24 to July 7 2013.

It complements the first section, which concentrated on Giftedness and Gifted Education.

This section includes material relating to other ‘specialist subjects’: curriculum and assessment, accountability, social mobility and fair access, disadvantage and gap-narrowing.

It provides a benchmark for consideration of forthcoming announcements about the National Curriculum, assessment and accountability, which are expected next week.

This is a chronological narrative of developments over the last four months – my best effort at recording faithfully every key piece of information in the public domain.

I have divided the material as follows:

  • National Curriculum
  • Other Curriculum-related
  • National Curriculum Assessment
  • Qualifications
  • Performance Tables and Floor Targets
  • Ofsted
  • International Comparisons
  • Social Mobility
  • Fair Access
  • Careers
  • Pupil Premium
  • FSM gaps, Poverty, Disadvantage and EEF
  • Selection and Independent Sector

Please feel free to use this post as a resource bank when you reflect on and respond to the material we expect to be released over the next few days.

You might also like to visit some previous posts:

.

National Curriculum

Pressure from the Board of Deputies for special treatment for Hebrew, exempted from NC language list: http://t.co/CwlD2YAotp  – Tricky

Support in principle and claims that new NC History PoS is unworkable are not mutually exclusive: http://t.co/PZ1tiUQ2Md

Interesting study of competency-based learning in New Hampshire throws light on NC Expert Panel’s ‘Mastery’ vision: http://t.co/58dlXuFY17

MT @michaelt1979: DfE appointed experts to review National Curriculum – and then ignored all of their advice! Blog: http://t.co/zMwl3atP5b

National Curriculum “Cookery would only be compulsory in those schools with kitchen facilities” – surely not? http://t.co/Bu8TEZej0M

The vexed history of the draft ICT programme of study: http://t.co/dpovmo9UM8 – a microcosm of current autonomy/prescription tensions

.

https://twitter.com/GiftedPhoenix/status/309713663881781248

.

What the NUT welcomes and doesn’t welcome about the draft National Curriculum: http://t.co/gdaijpaUaR – seems pretty representative

Latest from @warwickmansell on the shortcomings of the NC review (this time featuring the history PoS): http://t.co/sgWtXMlKzX

PQ reply on support for National Curriculum implementation: http://t.co/kLbGXRJz9p (Col 240W)

Cannadine on draft History PoS: http://t.co/465Ruv9kIy – includes a robust critique of the drafting process as well as the content

Confused story on climate change in draft NC: http://t.co/AbgPYZRkKP Does it give the lie to Gove’s claim at ASCL that PoS is uncontentious?

New Truss speech on National Curriculum: http://t.co/ZvFa0zf5Lw – Have I missed the golden nugget of news it contains?

More on climate change in the National Curriculum. I think it contradicts yesterday’s piece: http://t.co/DhWOHruVrm

Third in a triumvirate of climate change in the NC articles: http://t.co/dDB5eECBFs – basically it’s all about what’s on the face of the PoS

.

https://twitter.com/GiftedPhoenix/status/314266425843908608

.

DfE has published the list of people consulted on the draft NC Programmes of Study published last month: http://t.co/VdnS8iWTlK

Yesterday’s Westminster Hall debate on the Design and Technology Curriculum: http://t.co/ASY1GWvuXZ (Col 285WH)

Richard Evans redemolishes the draft PoS for history: http://t.co/fq3vLX48IR  – Alleges explicitly that Gove himself wrote the final version

Next of the PoS in line for criticism is D&T (following up yesterday’s Westminster Hall debate): http://t.co/P3Gb9l3ARd

.

https://twitter.com/GiftedPhoenix/status/316080670273310720

.

So we now have two levels of listing for participants in drafting NC PoS (for maths at least): http://t.co/dYt25UgIKI – awkward precedent

Link to the NUT’s National Curriculum Survey is at the bottom of this page: http://t.co/uD8dEOza8q

Government to extend National Curriculum exemption to all schools rated either outstanding or good by Ofsted: http://t.co/KYp4h0djrz

Engineers don’t like the draft D&T PoS: http://t.co/iJVOtbryvS (warning: story contains the most terrifying stock head-and-shoulders shot)

.

https://twitter.com/GiftedPhoenix/status/318751619921629185

.

Don’t see anything wrong with pitching NC expectations high: http://t.co/UlYqXTY8df  – The issue is how you then manage the ‘mastery’ notion

Civitas defends Hirsch: http://t.co/r8p9JReyZQ – Yet more wearisome polarisation. As ever, the optimal path lies somewhere in between.

I see the endgame in this cycle as ditching the NC entirely, only for it to be reinvented half a generation later: http://t.co/3CsF6BFISN

Sadly it may be too late to rescue the draft NC from the poisonous clutches of ideology and political opportunism: http://t.co/3CsF6BFISN

FoI seeking correspondence with RAE and BCS over preparation of the draft ICT PoS draws a blank: http://t.co/W7ULIg694f

Curriculum for Cohesion response to the draft History PoS: http://t.co/1Qhjfo9hhg – as featured in this BBC story: http://t.co/hoqJtDNWKH

.

https://twitter.com/GiftedPhoenix/status/322019960295665664

.

ASCL National Curriculum response calls for retention of levels (at least for KS3 core) or a pilot of alternatives: http://t.co/YB9ZRpLA7q

TES on ASCL and NAHT responses to NC consultation: http://t.co/4drArVSvgC – Is that the sound of chickens coming home to roost?

Apropos Natural England signing that draft NC protest letter, I see they’re currently subject to triennial review: http://t.co/zjXpdMdcee

South China Morning Post: Troubled England has much to learn from HK’s curriculum reforms: http://t.co/mESujyoOYR

UCET’s response to the National Curriculum consultation: http://t.co/J5ElUIkwWf

Latest protest letter about environmental education in the draft NC is signed by Natural England, a DEFRA NDPB: http://t.co/pehCetI1Oh

.

https://twitter.com/GiftedPhoenix/status/324154681876180992

.

Full CBI response to National Curriculum Review: http://t.co/ztyUIGR8Ml  – says NC should be focused much more on outcomes:

Truss pushes school curriculum over NC http://t.co/DXcaF3jFGx We could do with guidance on how much time the prescribed PoS should consume?

FoI reveals NC Expert Panel cost £287.6K in fees and expenses for 342 days’ work http://t.co/BDhNDCcvOx – Hardly bargain basement?

Confirmation that the draft History PoS was written by DfE officials: http://t.co/7A6X8cu7SY (Col 881W)

2 years into NC Review, DfE reported to be taking draft D&T PoS back to the drawing board: http://t.co/Nas8LX1My0 – More humble pie consumed

Sec Ed carries a report of a March Westminster Forum event on MFL: http://t.co/QlaaCHYGvi

.

https://twitter.com/GiftedPhoenix/status/327484933998247937

.

Bringing balance and diversity to the new history curriculum (courtesy of Curriculum for Cohesion): http://t.co/TCDX8ireIJ

So DECC also thinks there’s an issue with climate change in the National Curriculum: http://t.co/j5bIY3qgVJ – that raises the stakes a bit

There was a very small majority in NC consultation to change ICT to Computing – 39% yes; 35% no: http://t.co/KL0OKK9rkV

This new document on National Curriculum disapplication says only 23% of NC consultation responses supported it: http://t.co/dRFBGGPWS6

DfE consultation on the Order replacing ICT with Computing and disapplying parts of the NC from September 2013: http://t.co/xs6P6TXDcq

SCORE’s Report on Resourcing Practical Science at Secondary Level: http://t.co/zTYvwjZWTH

There’s been a desperate search for additional surveys demonstrating children’s lack of historical knowledge: http://t.co/B6VZQdD5bM

If Government doesn’t abandon NC in 2014/15 my guess is that will be in the 2015 Tory Manifesto: http://t.co/gmE4EHTXA4

@warwickmansell fisks Gove’s speech on curriculum reform: http://t.co/gmE4EHTXA4  Could 2013/14 disapplication pave the way for abandonment?

Gove seems to be suggesting that National Curriculum may need to change iteratively to reflect innovation http://t.co/EdWssxHd84 Unworkable?

Davey’s private letter about climate change in the NC is officially acknowledged in a PQ reply: http://t.co/fx76s9scQU (Col 358W)

Robinson breaks cover to criticise the National Curriculum (and promote his new book): http://t.co/o6h98R1UPn

.

https://twitter.com/GiftedPhoenix/status/336746672883396608

.

HEFCE is funding a new programme to support MFL including raising aspirations and attainment in secondary schools: http://t.co/eg5pws1Pme

More of the shortcomings of the National Curriculum Review process laid bare by @warwickmansell: http://t.co/ZUpkvekz8V

More attacks on national curriculum history: http://t.co/OIy1l5dZyD

.

https://twitter.com/GiftedPhoenix/status/346511361028784128

.

Twigg has just advocated almost complete fragmentation in curriculum, but says he’s against fragmentation! http://t.co/nepHKaziLH

Twigg’s decision to ditch entire National Curriculum isn’t getting media scrutiny it deserves http://t.co/AHcEL4pPoG  20,000 secret gardens!

Government response to consultation on the order replacing ICT with Computing: http://t.co/xs6P6TXDcq

New Labour policy to drop National Curriculum is directly at odds with ASCL’s preference for a universal entitlement: http://t.co/ZN83xoU2GO

Government response to consultation on NC disapplication: http://t.co/Dg9pbmmCZ6  (we’re going to do it anyway…)

Reports that draft History PoS significantly revised: http://t.co/etSGXxePOj – also being cleared by both PM and DPM!

Groups advising on training/resource implications of new NC PoS have a go at draft PoS instead. Beyond their remit? http://t.co/7Nqtr82JKo

.

https://twitter.com/GiftedPhoenix/status/351982610148368384

.

NUT’s Advice on National Curriculum Disapplication 2013/14: http://t.co/zY7KvfkiES

.

https://twitter.com/GiftedPhoenix/status/353024273729863680

 

https://twitter.com/GiftedPhoenix/status/353444033303027712

 

https://twitter.com/GiftedPhoenix/status/353444300387917824

 

https://twitter.com/GiftedPhoenix/status/353459007995904000

 

https://twitter.com/GiftedPhoenix/status/353802320175316992

 .


Other Curriculum-Related

Direct link to new APPG Report on RE: ‘The Truth Unmasked’: http://t.co/uClQGD59M9 – stops short of pushing (again) for RE in EBacc

Outcome of PSHE Review: http://t.co/9F6iBqXf4i – There will be no separate PoS. Links to consultation report and FAQ from this page

Yesterday’s Ministerial Statement on PSHE Education: http://t.co/X8n5KpejDv (Col 52WS)

PSHE consultation report doesn’t give percentage of respondents wanting some topics to be statutory http://t.co/IyECrBZ1Qg Likely a majority

Eurydice has published a timely new report on PE and sport at school in Europe: http://t.co/aEyu0fjTSM

Powerful case for statutory PSHE following the relatively pallid Review: http://t.co/he3CIbLRFV  – could be fertile Manifesto territory…

Google wants more emphasis on the STEM pipeline: http://t.co/f9p7rtxlcY  – How can government harness their enthusiasm (and spondoolicks)?

The Next Generation Science Standards: http://t.co/H0oaK6bLeR and an instructive insight into the drafting process: http://t.co/fgnsm5mjtM

Wonder why this PQ reply on school sport funding fails to confirm that it will be ringfenced: http://t.co/BoN08PB0UO (Col1196W)

Letter from Sex Education Forum et al about sex ed in the draft NC: http://t.co/B46EqtMrQ5 Weird decision to write to a paywalled organ (!)

A National Shakespeare Week and new bank of teaching materials? http://t.co/ixaFlV7VQ9 – There are more things in heaven and earth…

Westminster Hall debate on long-awaited National Plan for Cultural Education: http://t.co/7tqSlZKdcF (Col 94WH) – here ‘very soon indeed’

TES comment from YST on spending the £150m for school sport: http://t.co/OTSDTRUUvY – hard for them to add value without national framework

Yesterday’s short Lords’ Debate on PSHE: http://t.co/CXoOAafGNJ (Col GC403)

Sue Campbell repeats warnings about the patchiness that will result from uncoordinated support for school sport: http://t.co/BPsOr6FEnz

Ofsted’s PSHE Survey Report: http://t.co/0TkhFNm32M

New phonics document: Learning to Read through Phonics: Information for Parents: http://t.co/uVtjTzcyXA

Quiet news day on the education front, so all the better opportunity to engage with Mr Point (geddit?) http://t.co/994uBgIvG3 A neat riposte

Education Committee is taking evidence on school sports from 9.30am this morning: http://t.co/IYTM3RR8ap

Two Plashet School students review this morning’s Select Committee session on school sport: http://t.co/W1C3yFLI2N – excellently written

NLT press release on the increase in children’s online reading: http://t.co/uULtsGEYPV – says the report itself is ‘forthcoming’

Belated link to the Smith Institute Survey of Participation in School Sport: http://t.co/qIZO4OZdEm

Is it socially unacceptable to use bad grammar but fine to make mathematical errors? http://t.co/ukuxdCV5we

Lords Oral PQ on school sports: http://t.co/86omWG5x4d (Col 623)

Uncorrected Transcript of 14 May Education Committee Oral Evidence Session on School Sports: http://t.co/n9jMXd13vg

Direct link to ‘Learning the Lesson: The Future of School Swimming’: http://t.co/zRByR2zVDE

What’s going on with this PM intervention over school sports funding? http://t.co/cxhfDZAy9G  – one could read a fair bit into it

Here’s Oxford’s press release on the Snowling phonics test research: http://t.co/1q3BkHS2kQ No link to paper (which isn’t yet peer-reviewed)

Labour wants to bring elements of PSHE into the National Curriculum: http://t.co/AzcNBcSW9x – but why are Home Office shadow team leading?

£7m over 5 years to support A level Physics: http://t.co/KyjajFgXdk and http://t.co/xq4LkvRJDK  – but no hint of priority for disadvantaged

Lords Oral PQ on PSHE: http://t.co/1iVQ0w5vyq (Col 1512) leaves the way open for change to the National Curriculum

By the way, whatever’s happened to the National Plan for Cultural Education? http://t.co/gGaB9ScUJb

The Cultural Education Plan will be published ‘within the next few weeks’: http://t.co/0GkX1TjPZ2 (Col 620W)

Details of Further Maths Support Programme tender now on Contracts Finder: http://t.co/3QHq3rNfb7 – EOIs by 19/4 and Supplier Day on 2/5

Truss increases funding for Further Maths Support Programme: http://t.co/AwNsNN5gDS Current offer is here: http://t.co/nqsOofug0B

Sting in the tail here. That Further Maths Support Programme expansion will be tendered, so MEI may not get the gig: http://t.co/IB0gbZaAud

DfE is inviting bids to run the Further Maths Support Programme: http://t.co/jwBCbdK2Aa – up to £25m over 5 years

NATRE’s Survey of RE provision in primary schools: http://t.co/F1Qc0FpBeC

Short Lords Debate yesterday on Citizenship Education: http://t.co/nFuhwBiBf3 (Col 953)

Need to see how these various maths reforms amount to coherent strategy where whole’s greater than the sum of parts: http://t.co/AwNsNN5gDS

DfE has refreshed its school sports advice on http://t.co/fRKX7ciiSd  Press release on gov.uk http://t.co/xrGaWzl3Vl

Government held a roundtable meeting on arts education on 5 June: http://t.co/zYAuduZ0ST (Col 528W)

.

https://twitter.com/GiftedPhoenix/status/352668612596731904

https://twitter.com/GiftedPhoenix/status/353203077723070465

https://twitter.com/GiftedPhoenix/status/353223177230491648

 .

National Curriculum Assessment

Still no-one’s rising to the challenge I posed to design a new KS1-3 grading system: http://t.co/kZ2Ki7k18M – The silence is deafening

Today I have been mostly worrying about National Curriculum Assessment: http://t.co/ybJ13d8rVR I desperately need help from @warwickmansell

RT @localschools_uk: Ofsted “expected progress” measure flawed – higher grade students much more likely to achieve it: http://t.co/07TNk

Brian Lightman: we should not be trying to drive the curriculum through our assessment system: http://t.co/6Q8Sr3FMY1 – I agree

Many thanks to everyone RTing my post on future of NC assessment: http://t.co/ybJ13d8rVR – separate KS2 test syllabuses seem inevitable

Warwick Mansell on National Curriculum assessment: http://t.co/me1Ecnd9Ia – the perfect complement to my own post!

.

https://twitter.com/GiftedPhoenix/status/308890950527250434

.

Apropos Levels going in 2016, we should imminently get announcement of who has KS2 marking contract 2014-2016: http://t.co/9aKktsQFwm

The Laws speech to ASCL confirms that Level 4b equivalent will become the new KS2 test ‘pass’ from 2016: http://t.co/OT91Q7KfCW

MT @emoorse01: Unpopular level descriptions are going. But what will replace them? http://t.co/P7zkKotuv9 inspired by @GiftedPhoenix Thanks!

Further KS2 grammar punctuation and spelling sample materials for level 6: http://t.co/MMDbkh14KF

“In particular we shall consider refreshing the p-scales”: http://t.co/3cVnw0uGUy (Col 344W)

@brianlightman tells #ascl2013 that abolition of NC levels creates a policy vacuum. ASCL to discuss further with DfE: http://t.co/2g7MKulC4m

2013 Performance tables will show percentage of children achieving KS2 Level 6, but not as percentage of entries: http://t.co/pHV2q4Vvle

New primary assessment and accountability regime (consultation awaited) won’t be confirmed until September http://t.co/VF0r5wEAR5 (Col 722W)

The primary school accountability consultation will still be published “shortly”: http://t.co/nrlWy3x5qx (Col 806W) Next week presumably.

New KS2 Writing moderation and exemplification materials levels 2-6: http://t.co/lzaMwFpTnN

Tokyo high schools are about to introduce a set of attainment targets: http://t.co/xsZ16I7KOd

New DfE research on KS2 Level 6 Tests: http://t.co/2FdvVGeoKY – Critical of lack of guidance; doesn’t mention disappearance of L6

.

https://twitter.com/GiftedPhoenix/status/322704575041781761

.

Wonder why there’s no reference to primary accountability consultation in this new timeline for schools: http://t.co/19aW9z2t8W

How Level 6 tests are viewed in secondaries: http://t.co/Ie7nzkOWOA Gifted learners suffer badly from this poor transition practice

The list of Education Oral Questions for this afternoon: http://t.co/X5Dvwd2swc – Includes one from Ollerenshaw on Level 6 tests

113,600 pupils from 11,700 schools (21% of cohort) are registered for a 2013 KS2 L6 test: http://t.co/AfDYI0OsRW (Col 680W) Iffy

More about KS2 L6 tests: http://t.co/hXTS6d4XOO  NB: a 21% entry rate seems excessively high; NC levels will disappear by 2016

@warwickmansell Did you know about this? http://t.co/TXbaOZVtZE – I might have missed it but I saw no announcement. It looks as though…

@warwickmansell ..Pearson were the only bidder and have been awarded a £60m contract following negotiations…

@warwickmansell ‘Only one bid received which did not meet the minimum selection criteria. Negotiations were conducted with that bidder’

@DrFautley @warwickmansell NB that the estimated value of the contract was originally £50m – see: http://t.co/bCS3eMSUJi

What’s wrong with tests being marked abroad, provided quality is sustained and savings are passed on via lower fees? http://t.co/vjhtxgms5O

.

https://twitter.com/GiftedPhoenix/status/329273791882596352

.

Worth comparing this Duncan speech on assessment with similar discussion on this side of the Atlantic: http://t.co/DSgzUXEl4i

5 Reasons to Keep NC Levels by @LKMco http://t.co/iYpPn2MTZc If no rabbit in the assessment consultation hat will Labour commit to keep them

This reaction from Crystal on SPAG raises awkward wider questions about current QA processes for statutory assessment http://t.co/q3aVOus50X

Given the furore over the grammar, spelling and punctuation test, any feedback on these L6 materials? http://t.co/iqDpS7vXT8 – helpful?

Wow! This post on National Curriculum levels is a bit of a rollercoaster ride: http://t.co/idrE54bxyO – I agree with substantial parts

For completeness sake, the press release on today’s grammar punctuation and spelling test: http://t.co/KbKXd5trYt

Timetable for the primary assessment/accountability consultation slips to ‘by the end of the summer term’: http://t.co/oaL231aj69 (Col 383W)

New DfE research into KS2 Access Arrangements Procedures: http://t.co/H0Xt49YR1Y

.

https://twitter.com/GiftedPhoenix/status/341826694082084866

.

The significance of progression in assessment reform: http://t.co/u2DsNj47PH – a timely reminder from Australia

(Slightly) veiled criticism from Economist for ‘endless fiddling with tests’: http://t.co/LsmW6XSkC5

FoI reveals Standards and Testing Agency’s 2012/13 programme budget was £35.7m: http://t.co/iPNJzvRQRz

My piece ‘Whither National Curriculum Without Levels’ http://t.co/JNTYosr4nL We await a new KS2 grading structure

.

https://twitter.com/GiftedPhoenix/status/345592470232514560

.

First Interim Report from the Evaluation of the Phonics Screening Check http://t.co/g4e1o9djiN conveys negative teacher feedback over value

This on NC Levels from DfE rather glosses over importance of internal assessment dovetailing with statutory grading: http://t.co/2wziieK5Bv

There doesn’t seem to be any defensive line on the odd dissonance between the NC and assessment timetables: http://t.co/L9U6ICI0MH

Confirmation that in 2015 pupils will be following the new NC but will be assessed against the old NC: http://t.co/3NQb9AAsGM (Col 357W)

Last night’s #ukedchat transcript on the removal of National Curriculum levels: http://t.co/zGKCjhnwiN – My post: http://t.co/JNTYosr4nL

.

https://twitter.com/GiftedPhoenix/status/350505289507803138

.

STA received 240 complaints re non-registration of KS2 pupils for Level 6 tests post-deadline: http://t.co/zYAuduZ0ST (Col 531W)

If it’s not legally possible to keep NC and assessment out of kilter: http://t.co/AxlG0W61Sp  Could this delay NC implementation to 2015?

Breakdown of responsibilities in Standards and Testing Agency: http://t.co/fpVIEknny4  (Col 601W) Will be reducing from 3 divisions to 2

.

https://twitter.com/GiftedPhoenix/status/353803264438972417

 

https://twitter.com/GiftedPhoenix/status/353803745898934273

 .

Qualifications

Chris Wheadon on the difference between tiering and extension papers (via @warwickmansell ): http://t.co/lFJgjllA3Y

DfE has released an Equality Analysis of its planned GCSE Reforms: http://t.co/pXkyHnPZXF  – You can decide whether it stands close scrutiny

Seven new GCSE-equivalent 14-16 courses in engineering and construction: http://t.co/Zdri1TZAZx – Bet they’re not the last!

TES reports Ofqual has embarked on an international comparisons study of GCSE equivalents: http://t.co/HM4tr4b1gP

Truss IoE Open Lecture on A Level reforms: http://t.co/lucLs9KVHM plus the latest increment of maths support

What’s the difference between a Maths M Level and (part of) a stand-alone AS level? http://t.co/zzx6Z5YBGU

Not very revealing PQ replies about the decision to make AS level a stand-alone qualification: http://t.co/hUFtrO6zho (Col 142W)

TES interview with Ofqual’s Stacey throws further doubt on exam reform timetable and viability of untiered GCSEs: http://t.co/V6m5C1BvOH

New letter from Gove to Stacey on A level reform: http://t.co/MUepfQmyn8  – sounds like AS reform beyond content will be delayed

PQ asking which universities use AS level for admissions purposes: http://t.co/mBk6f2IrmU  (Col 594W) – Answer not that illuminating

Uncorrected evidence from Education Select Committee’s 12 March session on GCSE English results: http://t.co/soK7X9z3UR

SEN lobby coming to the forefront in arguments against removal of GCSE tiering (TES): http://t.co/gKnFgTTopB – Looks increasingly vulnerable

Ofqual’s response to the last Gove letter on the exam reform timetable: http://t.co/hdVyDaO26B

Glenys Stacey back before Education Select Committee at 10.40 this morning re GCSE English results: http://t.co/X5B7HIumzZ

So what alternatives are there to GCSE tiering, apart from core + extension model? Any of them substantively better? http://t.co/55eLxHkEjQ

Uncapped GCSE APS data for London Challenge, City Challenges, All secondary by ethnic group, 2007-2012: http://t.co/BoN08PB0UO (Col 1190W)

PQs from Glass and Stuart about HE support for standalone AS levels still getting short shrift: http://t.co/BoN08PB0UO (Col 1187W)

Uncorrected transcript of 26 March Education Select Committee session on GCSE English with Ofqual: http://t.co/iRF55cMZsB

Hansard record of yesterday’s Westminster Hall Debate on AS and A levels: http://t.co/QMJ35QD6ak (Col 33WH)

Today’s TES Editorial is about the perverse effect of comparable outcomes: http://t.co/fGeC5qdQ7V

Ofqual Note on GCSE English Marking in 2012 sent ahead of 26 March Session with Stacey: http://t.co/6QbFIGlQnx

O’Briain advocates core plus extension in GCSE maths: http://t.co/eRmYsc29fN – but his comments seem more likely to justify tiering

Ofqual’s GCSE English S+L proposals would mean a 60/40 split in favour of written papers (compared with 40/60 now): http://t.co/bigkAQQvw1

TES piece on OUCEA report on GCSE reforms: http://t.co/9B1HVEVatB – and here’s the paper itself: http://t.co/8EiV3onziZ

Government Response to Education Committee Report on KS4 Reform http://t.co/Qi4EDjx0W7 Nothing new; handy summary ahead of May consultations

So AS level will be a clear dividing line in educational policy ahead of 2015 election http://t.co/gsoN51rrS6 A one off or first of several?

Direct link to the OUCEA Report on research evidence relating to GCSE reforms: http://t.co/8EiV3onziZ  – think I may have tweeted this before

Ofqual’s secondary accountability consultation response: http://t.co/FjZiWK3Yee – Isn’t weighting Eng and Ma in the best 8 measure overkill?

Coverage of Cambridge Assessment’s call for GCSE scale scores: http://t.co/8ia6fY84At and http://t.co/K5vRXDyRbN and http://t.co/Po14hPyuL9

Another Cambridge Assessment Paper on GCSE tiering: http://t.co/aPra9aCzkB Sees merit in 2 separate levels in Ma and Eng: ie Tiering Plus!

Here’s a direct link to the Cambridge Assessment paper on GCSE scale scores: http://t.co/ldnOTOd0sA

Laws to Brennan on AS Levels: http://t.co/VMAsjOIAdN – internal analysis suggests AS have marginal additional predictive value

Reports of a new commitment from DfE to consult the other education departments on qualifications reform: http://t.co/8ZrMBlqUWX

Coverage of yesterday’s indication of more ceiling space for GCSE high attainers: http://t.co/nrh6KBcBxq and http://t.co/474RgDUgEb

Pretty much inevitable home countries split over GCSE/A Level: http://t.co/9x7dx7tFq9 and http://t.co/YLBPC43ohJ There’s plenty of downside

Here’s the study of the predictive value of GCSE versus AS that Laws quoted last week: http://t.co/3mbNOvtE8v

CUSU has written to Education Ministers about AS Levels: http://t.co/zpfyb98Mcv

TES says AQA is monitoring Twitter for evidence of cheating in its exams (but presumably not in others’ exams): http://t.co/wJFVPY2e8z

Leaked notion that son-of-GCSE grades go from 1 up to 8 reverses Gove’s 1-4 down illustration: http://t.co/oHJUQ4DAYW What’s the logic here?

Incidentally, introducing an I(ntermediate) level at age 16 begs a big question about the identity of E(ntry) level: http://t.co/znAH11STJU

Hmm – building in space for a hypothetical Grade 9 is most definitely a ‘speculative’ proposal! http://t.co/rr91ir37zc

Independent Editorial on I-Levels seems to be off the pace, or at least rather idiosyncratic: http://t.co/79QceNtdR2

Will there be pressure in future to keep Welsh and Northern Irish GCSEs available in English schools? What of IGCSE? http://t.co/CQvDzvT7e0

Twigg on I levels: http://t.co/xzpO3ppS0m

TES on I-levels: http://t.co/MI7mbLmKi0 – also says yesterday’s Estyn report on science is a threat to their PISA 2015 aspirations

Some distancing here from the new I-level moniker: http://t.co/0R8JB6S0VP – maybe we’ll have GCSE(E) instead!

Ofqual on A level review arrangements: http://t.co/aiKa18kbi2 and http://t.co/qLzlWkZKEX – Russell Group references conspicuous by their absence

Ofqual’s Interim Review of Marking Quality http://t.co/LXcht4zCQh + Literature Review on Marking Reliability Research http://t.co/zqWNfdc07c

Two newly-published interim research briefs from evaluation of the linked pair of GCSEs in maths (M LP): http://t.co/L6iTg2aE8w

Direct link to Education Select Committee report on GCSE English marking: http://t.co/mn5l58lXkE

This looks authoritative on today’s GCSE announcements: http://t.co/USfhSmP8SW  New line on reverse grading from 8 down to 1 still looks weak

So it appears tiering may be reprieved in maths, physics, chemistry and biology: http://t.co/USfhSmP8SW  Annual 1K pupil sample tests in Ma/Eng

Update on Ofqual’s progress in investigating awarding organisations that also publish text books: http://t.co/wjN6ya5xMe (Col 124W)

DfE consultation says high achievers’ content is separately indicated in Maths spec. But why only Maths? http://t.co/Eog0u5rq10 (p7)

DfE GCSE content consultation says no specifications for any further subjects: http://t.co/Eog0u5rq10 (pp5-6)

Here is Ofqual’s Review of Controlled Assessment in GCSE, also released today: http://t.co/doP85MINLa

Ofqual is proposing new short course GCSEs, and there could be common content with full GCSEs: http://t.co/KHDBXOyyeV (p32)

Ofqual promises a separate Autumn consultation on setting grade standards from first principles: http://t.co/KHDBXOyyeV (p30)

Can’t see any justification from Ofqual for why the grading goes from 8 to 1, rather than vice versa: http://t.co/KHDBXOyyeV (pp26-29)

As expected, Ofqual proposes ‘improved’ overlapping tiers in Maths, Physics, Chemistry and Biology: http://t.co/KHDBXOyyeV  (pp13-17)

DfE GCSE Subject Content Consultation Document: http://t.co/Eog0u5rq10 and subject specifications: http://t.co/JdFkUvlwWk

Ofqual GCSE consultation press release: http://t.co/tUoMAGtZPQ and consultation document: http://t.co/KHDBXOyyeV

Today’s oral statement on GCSE reform: http://t.co/V6bah8Wf8l

Interesting if somewhat idiosyncratic list of the (14) experts consulted on the GCSE specifications: http://t.co/jU9mzRVeE6

As Wales and NI stick with integrated AS level, the UK-wide exam reform picture begins to resemble a dog’s breakfast: http://t.co/BDgyrEEGSW

My gov.uk notification says draft GCSE science spec has already been replaced with a new version: http://t.co/y7ZDv6qwu6 – what’s changed?

Russell Group will have an A Level Content Advisory Group (ALCAB) after all: http://t.co/khv5tOT9KK but no real detail about process

Set of fairly non-committal PQ replies about Welsh GCSEs: http://t.co/c55IFqatcr (Col 581W)

Handy NUT guide to proposed changes to GCSE: http://t.co/09DCibEJJT

GCSE MFL entries by language over the last decade: http://t.co/eKA5N4OOxA (Col WA90)

How well does AS predict A level grades? http://t.co/KMTGBHMvsE

Ofqual needs two new board members, pay is £6K a year: http://t.co/Jm53Ff2ltk

.

Performance Tables and Floor Targets

Bell criticises Russell Group-focused destination data: http://t.co/vcBS7VB9Mo – likely to be relegated to planned Data Warehouse/Portal?

AoC has complained to the UK Statistics Authority about the KS5 Performance Tables (TES): http://t.co/h7F0xy896G

Details of new primary floor targets from DfE: http://t.co/qlhoDYi5Eq – Level 4 becomes 4A/4B only until it’s discarded in 2016

Re-pitching Level 4 at current 4B must be precursor to using the higher threshold as new post-Levels end KS2 benchmark http://t.co/qlhoDYi5Eq

By the way, results in the new GSP test ‘are likely to be part of the floor standard in 2014’: http://t.co/qlhoDYi5Eq – why so provisional?

Sounds like there are also plans (long on the back burner?) to publish ‘families of schools’ data: http://t.co/OT91Q7KfCW

@headguruteacher On school accountability: http://t.co/zo3QD7kiRy – Can’t see universal performance indicators being sacrificed entirely

Direct link to IPPR Report on how performance table reforms have affected 14-16 vocational education: http://t.co/l1OoqsiJsm

Direct link to Demos Report on Detoxifying School Accountability: http://t.co/Z5SZk71vDE  plus press release: http://t.co/LSVGazLSex

Interim report tomorrow from Labour’s 14-19 review will apparently recommend abolition of EBacc: http://t.co/Vxo2h6BoYS

International comparisons of school accountability systems: http://t.co/J4fgxLEwHx

.

Ofsted

FAQs on Ofsted’s School Data Dashboard: http://t.co/SLJX93VUDl – Nothing on link with new Performance Tables proposals

Mumsnet users aren’t impressed by the Ofsted Schools Dashboard: http://t.co/GVN1JA2F0L

TES reports concern that data determining which LAs Ofsted inspects is out of date: http://t.co/FPtyc0uLjn

TES editorial suggests Ofsted should consider Montesquieu before undertaking school improvement alongside inspection: http://t.co/vgArtBkNsg

Ofsted will tomorrow publish data on the outcome of school inspections in first term under new framework: http://t.co/yLQJIZPO2H (Col 987W)

The promised Ofsted school inspection data for October to December 2012: http://t.co/Ampa57zT5M  plus press release: http://t.co/VFbVi2pkuI

38 sponsored academies inspected October-December 2012: 0 outstanding; 20 good; 13 require improvement; 5 inadequate: http://t.co/Ampa57zT5M

Ofsted has published the Data Dashboard/school governors Wilshaw speech: http://t.co/YRBrc9iiYP

Ofsted Survey: Improving Literacy in Secondary Schools: http://t.co/qi05eHb2hY

Update from NAHT on Ofsted data dashboard issues: http://t.co/lv259wSH8n

Following Derby and Portsmouth, Ofsted next blitz Coventry: http://t.co/6Tpf78JChe

Norfolk is next authority in line for the full Ofsted treatment: http://t.co/cxUENfpBqA

Ofsted’s letter to Portsmouth following mass inspection there is less emollient than the earlier one to Derby: http://t.co/6Gzl4uPN2V

Ofsted has published its response to a consultation on improving its own complaints procedures: http://t.co/XQY9jAdDbi

Omigod (literally!) Christian Today says “Sir Michael’s sins are too long to detail”: http://t.co/NLWvc5ohg5 but I think it’s ironic

Ofsted has written to Coventry about the outcomes of the recent LA-wide inspection there: http://t.co/6Gzl4uPN2V

In case you haven’t yet had a link to….Ofsted’s consultation document on early years inspection: http://t.co/6BSMUMjt4T

Not sure if this breakdown of grades for the 9 free schools so far inspected has been made public before: http://t.co/zY98hnQFcO (top link)

Papers from the Ofsted Board Meeting of 26 March: http://t.co/HSLtSywpvZ including a Performance Report on the last Quarter of 2012

All the Ofsted bumph on LA inspection: http://t.co/0uoTSOSgUB and http://t.co/L4uCbIymSV and http://t.co/9MVPli31GM

Norfolk gets a stiff letter from Ofsted. Here’s the full text courtesy of the EDP: http://t.co/AnXruhJytU

Ofsted has released a first set of monthly figures updating latest school inspection outcomes by phase and LA/region: http://t.co/H7dOL1k3TA

NAHT press release about Instead, its alternative inspection model: http://t.co/P0Gg7IOLS6

Interesting piece on the impact of a negative Ofsted on test outcomes: http://t.co/21vs0QbFeS

Ministers haven’t pinned down the criteria determining a decision to ask HMCI to inspect a local authority http://t.co/uIqHyNszr9 (Col 596W)

The Ofsted roadshow reaches Bristol: http://t.co/4xuYt3gIP7

Ofsted will on Monday start inspection of local authority school improvement services in Norfolk and Isle of Wight: http://t.co/Ak3qmFir13

Full DfE FOI response on evidence of effectiveness of Ofsted inspection: http://t.co/Wc6yzuU9qm Includes 2012 internal research review.

TES reports that some private school emergency inspections are prompted by fears of religious extremism: http://t.co/Rs2VKph5Ee

Medway is next in Ofsted’s sights: http://t.co/URDpiAr1Qj

Ofsted press release on its inspection of Medway primary schools: http://t.co/URDpiAr1Qj

East Riding escapes relatively lightly following Ofsted scrutiny: http://t.co/6Gzl4uPN2V

Is Ofsted now acting as a school improvement broker, asks @Robt_Hill http://t.co/rirkTzCdit

Really useful data via Ofsted FOI on how academies’ inspection ratings change after achieving academy status: http://t.co/Ucldk1c5BG

.

International Comparisons

New OECD working paper on the Predictive Power of PISA test items: http://t.co/YLSiI6aBOu

RT @OECD_Edu: The ideas that shaped PISA, & the ideas that PISA shaped – Slidedeck for @SchleicherEDU TED talk http://t.co/vW3qlyXLQ4

Latest PISA in Focus on marking: http://t.co/hgNXEUYnNK

Schleicher presentation from OECD summit on using evaluation to improve teaching: http://t.co/cOIG1UR31y

What’s wrong with NC Levels: http://t.co/6sgLjPN212 Interested to see how the author’s alternative differs from mine: http://t.co/JNTYosr4nL

BBC reports that PISA test for schools is about to be launched: http://t.co/ECcHnUjaSI  – further background here: http://t.co/t4rBw5sssG

A whole range of US pilot material from the OECD (PISA) test for schools can be accessed here: http://t.co/5aGWuGlZxp

Today’s TES editorial is about our penchant for things American: http://t.co/X72zSyL6vT – but PISA is now driving a more eclectic approach?

‘You’ll be Shocked by How Many of the World’s Top Students are American’: http://t.co/hHH1vihzNx (I wasn’t)

Tim Oates in TES on comparative study of high-performing education systems: http://t.co/hAbCyLSHZ1 – is the full analysis published?

PISA in Focus study on What Makes Urban Schools Different: http://t.co/PoGkZjegCe as reported here: http://t.co/FAEGWkKx3i

Interesting World Bank blog post about OECD’s ‘PISA for Development’ (building PISA use in developing economies): http://t.co/DULGaN96ru

Interesting NCEE article exemplifying data produced from the PISA Test for Schools: http://t.co/WED2ztV8vS

International comparisons of school accountability systems: http://t.co/J4fgxLEwHx

TES on PISA Tests for Schools: http://t.co/KQjAugyG8s – They’ll cost at least £5,250 a throw. Government ‘supportive’ but won’t impose

Forgot to mention you can check how our high achievers perform on PISA, TIMSS and PIRLS here: http://t.co/OvqmNIJO7J

PISA’s approach to assessing collaborative problem-solving (in 2015): http://t.co/Zn01bSsjvy

Here is Education at a Glance 2013 (all 440 pages): http://t.co/vMw2wWsonC and here’s the 10 page note on the UK: http://t.co/hEMaPeHNN5

.

Social Mobility

The ASCL page from which you can download all three of today’s publications about social mobility: http://t.co/wbXhX89uPx

On quick review ASCL social mobility documents are good in parts while ignoring the Excellence Gap concept: http://t.co/wbXhX89uPx

Upcoming ISER social mobility report: http://t.co/6mheoYoFQj Doesn’t yet show up on their website: http://t.co/vYtaj2Rqfi

NYT article on social mobility through HE access in the USA: http://t.co/qH8Xtd5bNB – unfortunately the paper it discusses is £

@brianlightman on schools’ contribution to social mobility: http://t.co/Yf8dDeVozB

Book of abstracts for today’s HEA social mobility conference http://t.co/clpTWHbsat – let’s hope the papers are published and free to access

Top strand of the IT pipeline is dominated by independent schools: http://t.co/qYmTqQbprT

Some fairly rigorous intervention is required before grammar schools will be a key to social mobility: http://t.co/rQ4RK2tUwU

Social Mobility and Child Poverty Commission reminds us of its existence: http://t.co/E6zAQizcuU  – There’s plenty for it to address

BIS FOI Release concerning evidence of guidance on unpaid internships and unpaid interns: http://t.co/Soje8x3Asg

Social Mobility and Child Poverty Commission has issued a call for evidence to inform 1st annual report http://t.co/wkIx8QBfNo

Grammar schools and the myth of social mobility: http://t.co/ltOADwOJve – Exactly

Not exactly an enthusiastic answer to Hinds PQ on Government progress on social mobility: http://t.co/ZaSg2aw1e5 (Col 1097W)

DPM’s Opening Doors Press Release: http://t.co/3pSPybcQ5O – so glad to know who provided the double-deckers for the Talent Tour!

Full text of Milburn’s Scottish Speech setting out the stall of the Social Mobility and Child Poverty Commission: http://t.co/EFBdc5xfYB

Yesterday’s Westminster Hall Debate on unpaid internships: http://t.co/HRvLNpmlfk (Col 161WH)

20 June Lords Debate on Social Mobility: http://t.co/Mo88u8Y7nW (Col GC139) When Government spokesman ditches his speech, that’s a bad sign

Public Attitudes Research on Social Mobility from the SMCPC: http://t.co/BvbgFBzdLI and press release: http://t.co/rqgfZlP9qV

The Grandparents Effect in Social Mobility: Evidence from British Birth Cohort Studies by Chan and Boliver: http://t.co/aNnf6OpqHE

Thirteen Economic Facts about Social Mobility and the Role of Education (US): http://t.co/nOSq5EV2oa

.

Fair Access

Ever wonder why state schools don’t send students to Oxbridge? Read this and weep: http://t.co/puvhYZhPen

It’s becoming harder to get a place on a UNIQ summer school than to gain admission to Oxford: http://t.co/ro7kV9vbtf

Oxford and Telegraph lock horns over statistical analysis of ‘summer born’ effect on admissions http://t.co/NS4pCYe1Cg

If there’s no problem with Oxbridge admissions let’s have full data transparency. Secrecy breeds conspiracy theories http://t.co/D2vrLdK22N

OFFA has appointed nine new members to its advisory group including Penelope Griffin of the Bridge Group: http://t.co/mQ78zvi3ol

Direct link to HEFCE’s ‘The Uses and Impact of HEFCE Funding for Widening Participation’: http://t.co/0ZbP3GXklg

The sting in the tail of HEFCE’s report is on page 68: http://t.co/0ZbP3GXklg

THE says OFFA/HEFCE national access strategy interim report will propose coordinated outreach strategy: http://t.co/xNBhyJdWLd

The HEFCE/OFFA Interim Report on the National Strategy for Access: http://t.co/WDKbMqWt84 said to propose son of Aim Higher

BiS press notice about the OFFA/HEFCE Access Strategy Interim Report: http://t.co/rPXhuxVUqu

I’m struggling to understand how factual information about applications could be defamatory to universities: http://t.co/rjGrV5LGNL

Early stages of a FOI wrangle over those UCAS admission statistics: http://t.co/RqOpSvPD42

HEFCE has published a bit of extra material from the National Scholarship Programme evaluation they commissioned: http://t.co/781hQMQGn1

The long arm of RCT reaches across to the Sutton Trust’s fair access work: http://t.co/HI8K250mCu – RCT is the new fad/Heineken/Mr Tickle!

Why don’t independent schools introduce loans that aren’t repayable if you don’t enter a Russell Group university? http://t.co/yv9C7pb2uW

Is OFFA going soft on fair access? http://t.co/9MHSRpSNif – no ratcheting up of expectations in 2013-14

HEFCE confirms open recruitment for ABB+ grades (a third of students) Press release: http://t.co/eDGhU3uYym Circular: http://t.co/6qjma2yT8L

New HESA participation data http://t.co/dbUjMykxb4 Higher says 6 of 24 Russell Group HEIs met state school benchmarks http://t.co/5Kt4HLgsvF

Durham press release on that Boliver fair access research: http://t.co/29wqUO8Tz1 and the project that spawned it: http://t.co/AJtP0nm4Cv

Gove statement on the Boliver social mobility research http://t.co/MMXipicX2u Last line seems to contradict Government HE admissions policy

Sounds like IPPR’s HE Commission is developing an approach to fair access that Mr G will not find conducive: http://t.co/pvDZ6PjvRG

The proportion of UCAS applications from private and independent schools 2008 to 2012: http://t.co/JRfh8mQy0A  (Col 1067W)

THE reports a Mahmood speech yesterday. Labour would ‘urgently restore’ widening participation as policy priority: http://t.co/qZEN1QJ6Ed

Ivy League universities are becoming increasingly selective: http://t.co/C8uXBRJgoL

Latest Independent Commission on Fees Analysis of 2012/13 HE admissions: http://t.co/USds5tryru and Sutton Trust PN: http://t.co/SGsxcJIgYr

@conorfryan expands on this morning’s news about the potentially deleterious impact of tuition fees: http://t.co/CnA5SAahY3

HEFCE briefing and report on Non-continuation rates at English HEIs 2003-2011: http://t.co/y4JXksj87r

HEFCE estimates that 18,555 students achieved AAA+ at A level in 2009/10 and 96% entered HE: http://t.co/np7w9UBKcR  (Col 319W)

Laws says proportion of independent/selective students at Oxbridge is ‘unacceptably high’: http://t.co/e40OVnnWf7 (Col 53WH)

Willetts speech in which he announces information packs to support fair access to HE http://t.co/E9WopWgLg3 Marginally positive

Number of Welsh comprehensive pupils admitted to Oxbridge is flatlining – and significantly lower than in 2008/2009: http://t.co/EgSxD63K25

Willetts’ ‘well done’ letters not going down too well: http://t.co/TutennonJw  Idea has a certain affinity with Dux. It won’t impress Milburn

Direct link to the BIS data on HE Participation Rates 2006/07 to 2011/12 (Provisional): http://t.co/cL5NP94rbe

OFFA’s comment on the HE participation data: http://t.co/idXGME8VVP

Imposing a universal embargo on admissions below BBB is hardly praiseworthy university admissions practice: http://t.co/lAkcV0ggIH

Good showing for non-Russell Group in Complete University Guide. Durham also outranks Oxbridge for English: http://t.co/OojYmR16DN

The latest application figures from UCAS: http://t.co/czw2XrXdmw

HEFCE Consultation Document on Student Number Controls 2014-15 onwards: http://t.co/KAUZLzAJYT – includes proposals for moving beyond ABB+

Early evaluation of Unistats from HEFCE: http://t.co/vVERy2JDEj and associated press release: http://t.co/KeHyisXWZx

Is ASCL against all use of contextual data in HE admissions, or concerned about which data is used for the purpose? http://t.co/IjspRbRGL6

Gove’s CPS Joseph Memorial Speech: http://t.co/qfd9TrRAeX  Says his policies are explicitly designed to improve FSM progression to Oxbridge

Direct link to Cambridge Undergraduate Admission Statistics for 2012 http://t.co/5BhG1nmwpC  – disadvantage is by POLAR quintile

Interesting comparison of fair access north and south of the border: http://t.co/LsYBEw0HY7 and http://t.co/Qi2JAn1z5n

Sutton Trust press release on impact of fees: http://t.co/hc1TqEPhpI and Lampl commentary on same: http://t.co/gwuMjHFnst

US debate on ‘affirmative action’ is peaking in expectation of the outcome of the University of Texas court case: http://t.co/P8CgECIvu1

Direct link to new Centre Forum report on access to postgraduate education: http://t.co/tpyjOEW7VI

Guardian preview of OFFA’s annual report (with HEFCE) due out today: http://t.co/yrsl3fqzXh  Expect the anodyne, not fireworks

OFFA’s press release on today’s report: http://t.co/xTWlhOYhuc and the Report itself, plus annexes http://t.co/xGUPzkvQBQ

Stock response from BIS to yesterday’s OFFA/HEFCE report: http://t.co/9KoZTdvNIn – wonder what the latest FSM to Oxbridge figure is…

IPPR’s HE Commission will propose a £1K HE Premium for up to 230K disadvantaged students from existing WP budget: http://t.co/8HqPG9AqOP

Missed this Guardian coverage of geographical disparities in Oxbridge admissions http://t.co/kstNax8Jkw and http://t.co/VC8VFgTp7o

IPPR’s HE Commission is pro-contextualised admissions; HEIs could admit unlimited numbers of student premium-eligible http://t.co/eaZbhFkuZC

Direct link to IPPR HE Commission summary (the download mysteriously provides only pages 112-144): http://t.co/eaZbhFkuZC

Here’s the full IPPR HE Commission Report: http://t.co/N7wjVbktgO – Glitch now fixed thanks to @IPPR_Nick

WonkHE analysis puts the IPPR HE Commission Report (which I still can’t access in full) firmly in its place: http://t.co/A5E3cmzzPF

I like the IPPR HE Commission Report on both Student Premium and Contextualised Admissions: http://t.co/N7wjVbktgO but two tricks missed…

…first, the Student Premium needs to align with 16-19 support as well as the Pupil Premium as suggested here http://t.co/vopcXghiS6 and…

…second, HE outreach budget/effort (HE ‘pull’) needs to be integrated with the school/post-16 budget/effort (‘push’) to maximise impact.

Milburn quietly re-endorses contextualised admissions to HE while up in Scotland: http://t.co/qRpYIFpKWN

Next set of Destination Measures will be published on 20 June: http://t.co/BQIzJbIdAR (Col 231W)

Milburn will publish a report on Monday showing that fair access to prestigious universities has stalled http://t.co/YOOL8xUkKd

Direct link to new Social Mobility Commission policy paper: Higher Education: The Fair Access Challenge: http://t.co/GCBNqtcxRl

Excellent Report by Social Mobility Commission: http://t.co/GCBNqtcxRl – So good that it raises awkward questions about OFFA’s contribution

Today’s Social Mobility Commission report on Fair Access, but now with added data: http://t.co/IJ5YS8V7no  Can’t see FSM though

OFFA responds to Social Mobility Commission Report on Fair Access to HE: http://t.co/NQzuqjydkw which shows OFFA’s having negligible impact

Now there’s a thought – link VCs’ pay to the achievement of their fair access targets: http://t.co/gnUjG3DAtm Warwick OKish on this measure?

THE reports the National Scholarship Programme could be vulnerable under the Spending Review: http://t.co/MrM50928uT

Updated Destination Measures general information: http://t.co/rJ64rECRro and Q&A: http://t.co/0M4ukptFzF

KS4/5 Destinations Data PN: http://t.co/fe8lpw4V8Y – SFR and tables: http://t.co/C7MEmY0lEe  FSM breakdown not published until 23 July

Russell Group on SMCPC Report on Fair Access http://t.co/QXrWgZ6ocd and @tessa_stone’s powerful response http://t.co/EppEi11oN5

THE reviews IntoUniversity: http://t.co/cbsZhVNN1V Successfully squeezing money from corporate sponsors to support fair access

Why 9 of these students: http://t.co/Cj0QXueVp3 rejected Oxbridge: http://t.co/opqCP24K2U – Still a trickle but could it become a flood?

Debate continues over affirmative action despite Supreme Court Ruling: http://t.co/Tw1jquylX2 and http://t.co/C6k4wezKdU

Here’s a brief report on Fair Access issues, especially some news about the Dux Award Scheme: http://t.co/krPc7Uweo4

National Scholarship Programme reduced back to £50m and focused exclusively on postgraduates: http://t.co/52iWzTrjOT

Progress of the Post-16 Education (Scotland) Bill which supports WP/fair access north of the border, passed yesterday http://t.co/kjZU5vQG5N

I’ve finalised my brief post of yesterday about the future of Dux Awards, now renamed Future Scholar Awards http://t.co/krPc7Uweo4

HEFCE analysis of trends in transition from undergraduate to postgraduate study 2002-2011: http://t.co/RMVP6znz5D

HEFCE’s Overview Report of Postgraduate Education in England and Northern Ireland 2013: http://t.co/MJIr5ik7fe

HEFCE’s invitation to submit pilot project proposals to support progression into postgraduate education: http://t.co/C6mZoCbHrx

Further detail of Government financial support for access by disadvantaged students to postgraduate education: http://t.co/1VU6MOkWWr

Think this is the report: http://t.co/b5ISODUQPl on the state/private school university experience referenced here: http://t.co/EPGDBA7hUX

.

Careers

Deed of variation to funding agreements will require academies to secure independent careers advice: http://t.co/B80SinZNAn (Col WA339)

Lords Oral PQ about careers guidance in schools: http://t.co/L7Q3cZiMD4 (Col 1267)

Updated statutory guidance for schools on Careers Guidance: http://t.co/X04WvS5Bru

Government response to Education Select Committee report on Careers Guidance for Young People: http://t.co/wipyXovbJS

RT @SecondaryCEIAG: Tony Watts forensically takes apart Government response to Education Committee report on careers http://t.co/kHKG8hPhfh

Yesterday’s Westminster Hall debate on Careers Guidance: http://t.co/WzQp3ZAI04 (Col 1WH)

Direct link to today’s National Careers Council publication: ‘An Aspirational Nation’: http://t.co/TG0HacIsmj

New guidance for post-16 institutions on Securing Independent Careers Guidance: http://t.co/ubSsBOh8c9

Cridland on careers: http://t.co/sg6rVB0lOK – His topic at the GS Heads Association was ‘nurturing ability’: http://t.co/WsAKS1mqBe

Yesterday’s backbench debate on careers advice in schools: http://t.co/kI8QEAP5W0 (Col 120)

.

Pupil Premium

Primary floor rises; schools rated by Ofsted below ‘good’ with attainment gap issues need Pupil Premium action plan http://t.co/jKDkZSUycK

DfE Evaluation of Summer Schools for Disadvantaged Pupils: http://t.co/eOd9JF7wKu  plus key findings for schools: http://t.co/sKqsISskUC

Who are these experts that will advise schools on their use of Pupil Premium? http://t.co/bA03gftEMb – What happened to ‘schools know best’?

Not much evidence in this Evaluation of Disadvantaged Summer Schools of a focus on improving attainment: http://t.co/eOd9JF7wKu

Pupil Premium intervention requires accredited ‘System Leaders’ (not all NLEs) to help schools produce action plans http://t.co/OT91Q7KfCW

Laws’ Pupil Premium intervention is basically the old Labour mantra: ‘intervention in inverse proportion to success’  http://t.co/OT91Q7KfCW

Fact Check revisits how much of a premium the Pupil Premium really is http://t.co/hg8MZIlY9o Big issue for current spending review I’d guess

FAQs for the 2013 Summer Schools for Disadvantaged Pupils: http://t.co/s21G1ZbKH8  – continuation announced by Laws yesterday

RT @fimclean: SAM Learning debate on how the £1.25 billion Pupil Premium affects school spending http://t.co/HleTnmUo7d

Hansard record of yesterday’s Westminster Hall debate on the Pupil Premium: http://t.co/jVYFyWRENc (Col 25WH)

Laws ratchets up pressure on schools to narrow gaps via the Pupil Premium http://t.co/R4MZyRcMnb – Schools know best approach is now history

Can you see the Pupil Premium reducing the FSM attainment gap to 12.5% any time soon? http://t.co/zKpfBJk5LO – No, me neither

@RobAnthony01 @miconm Then again there’s Deloitte’s ‘Insight 5’: http://t.co/boxxy21V6w (which rather undermines the Pupil Premium concept)

Just how much Pupil Premium is being pocketed by private tutors? http://t.co/XIswAp64nJ – Wouldn’t it make sense to cut out the 3rd party?

Guardian reports new Sutton Trust survey of Pupil Premium funding: http://t.co/Wz3FMP4q96 Presume details here later: http://t.co/SoVIOiRVqG

Here’s that belated Sutton Trust Pupil Premium Survey: http://t.co/OIwNumz8BN and press release: http://t.co/yUPzqPFfgX

The independent evaluation of the Pupil Premium will be published in July: http://t.co/UymKtQiLTq (Col 300W)

Young Foundation Report ‘Social Investment in Education’ urges using Pupil Premium to support said social investment: http://t.co/ojufJMnnfR

More detail of Pupil Premium accountability measures; John Dunford’s appointment as Pupil Premium National Champion: http://t.co/iWhWCt76gE

Limited support in this Pupil Premium evaluation for Ofsted’s complaint that high attainers are neglected: http://t.co/n4K4K771lF

Pupil Premium Evaluation says it’s too early to offer a judgement of the impact on pupil attainment: http://t.co/n4K4K771lF – Why so?

New Evaluation of the Pupil Premium: http://t.co/n4K4K771lF Identifies tensions between schools’ use and ‘external expectations’

Pupil Premium reinforcement illustrates how few strong policy levers exist in a ‘self-improving school system’: http://t.co/iWhWCt76gE

There’s also more detail here about how Pupil Premium Reviews will work: http://t.co/dtejHrLnk9

.

FSM Gaps, Poverty, Disadvantage and EEF

EEF’s Evidence in Action Events: Discussion Paper on ‘Knowledge Mobilisation’: http://t.co/kYhuJsZ3eC and report: http://t.co/1CpSW18zVh

If the EEF now deals with ‘Improving education outcomes for school-aged children’ isn’t that serious mission creep? http://t.co/pVR0ltTkBe

This implies that elevation of EEF to ‘What Works Centre’ extends its remit to all outcomes for 4-19 year-olds: http://t.co/pVR0ltTkBe True?

What the Education Endowment Fund was originally for: http://t.co/xYqAJYGJuY and http://t.co/chzsiw16lx

Essential reading for right-lurching politicians: The Lies We Tell Ourselves: Ending Comfortable Myths About Poverty: http://t.co/qiQlfobfKe

This is the Troubled Families evidence base: http://t.co/moLvZowzCA about which this report is so scathing: http://t.co/qiQlfobfKe

How the Pygmalion effect works: http://t.co/5sV6Xclq6B

A deeply troubling round-up of the impact of the various April benefits cuts: http://t.co/uGjYwII7b8

And here’s a benchmark report on 2012 UK poverty levels ahead of the new round of benefits cuts. Required reading: http://t.co/MiDBjew7Bj

Teach First commissioned IFS report on its choice of indicators of disadvantage http://t.co/faArHvgcd9 – TES on same: http://t.co/8M49PfNBRQ

Set of new EEF-funded projects: http://t.co/9yp7ddwyN5 – includes £700K for chess in schools. Who is evaluating the EEF?

Laws speech to ATL references a Liberal ‘ambition’ to halve the current FSM gap by 2020: http://t.co/nsurEnGHhY – remember that one!

IoE blog on why linking FSM gap-narrowing to the inspection framework may not be entirely fair or appropriate: http://t.co/uCgoKl06Sa

DfE Research Topic Note on EYFSP Pilot outcomes: http://t.co/UEWS7MO0oM – reports bigger EM and FSM gaps under new model

DfE publishing data shortly on attainment by age 19 broken down by pupil characteristics including FSM. Here is link http://t.co/RX0fVlrolp

New SFR 13/2013 shows increased FSM gap on 2+ A levels measure, one of Government’s social mobility indicators: http://t.co/Nx3EWkNlfp (p13)

Mongon on white working class underachievement: http://t.co/U8SQdhXk93 – advocates a localised, co-ordinated zone-based response

TES feature on underachievement in coastal towns: http://t.co/5EU995vVpp – 10 years on it’s as if Education Action Zones never existed

Is Universal Credit a disaster waiting to happen? http://t.co/VlflM8mHvl What’s the fallback for FSM eligibility if it can’t be made to work?

New Children’s Centre Evaluation (ECCE) baseline survey of families using centres in the most disadvantaged areas: http://t.co/Kj7LpgYdMP

Don’t understand how FSM eligibility could be cut back by councils if it’s passported on universal credit? http://t.co/0YH4F4dd4X

Government does not expect to introduce the FSM element of Universal Credit until after October 2013: http://t.co/Kmhk1F543K  (Col 479W)

Education Endowment Fund has so far funded 56 projects at a cost of £28.7m (full list provided): http://t.co/Wsgd5KLgmx (Col 959W)

New Good Practice Guide for the 16-19 Bursary Fund: http://t.co/VcLM5vVUiZ

Do low aspirations hold back low income students? http://t.co/vTiSTqayc2 – summary of Rowntree research

New Invitation to Tender for Education Endowment Foundation’s data-crunching activity: http://t.co/LmlPHaxZFR – deadline 31 May

New consultation document on allocation of discretionary element of 16-19 Bursary Fund: http://t.co/1uytICb53g

The proportion of FSM-eligible pupils at the first tranche of free schools: http://t.co/HFQfA6fEpg (Col 90W)

Year 1 Evaluation of the 16-19 Bursary Fund: http://t.co/Ffk5yFiD6r plus associated press release: http://t.co/jLiRfUOofr

Replugging my new post about support for FSM high achievers, applying US research to English settings: http://t.co/XREYgg8bmO

Apropos Sutton Trust report do we know how many Academies/free schools give admissions priority to FSM? http://t.co/RjT0iEUfY1 (footnote 22)

Wonder why Sutton Trust isn’t advocating priority FSM admissions to academies/free schools as well as ballots/banding http://t.co/MsISlp7Sh0

New DfE research report on impact of summer schools programme on disadvantaged learners: http://t.co/FZhTDtGBBA – not a ringing endorsement

IoE PN on Jerrim et al research on impact of genetics on reading achievement gap: http://t.co/0YEVA7Sjcv Exec summary http://t.co/raJBzgVmB0

FSM to Universal Credit is all about the transition: http://t.co/eWtxkkI0FQ  Protect entitlement of 168K losers already eligible to age 16?

UCET advice to ministers on closing the achievement gap and ITT: http://t.co/knzaQupHyy

Preview of HMCI’s themes in next week’s speech on disadvantage: http://t.co/HdC2whRFmd  Including disadvantaged high attainers hopefully

Twigg commits to admissions reform so all schools can give priority to disadvantaged learners. Good http://t.co/nepHKaziLH

Direct link to IPPR’s Excellence and Equity: Tackling educational disadvantage in England’s secondary schools: http://t.co/EG5UzaETS8

Interesting that the Education Endowment Foundation has released a statement on teaching assistants: http://t.co/Z4basqS7Ox

Schools, Pupils and Characteristics January 2013: http://t.co/wIpVZg8JT9 Primary FSM down 0.1% to 19.2%; secondary FSM up 0.3% to 16.3%

Series of Jamie Reed PQs following up on Ofsted’s ‘Unseen Children’ Report: http://t.co/1P0wSgQ0p9 (Col 236W)

No Rich Child Left Behind – on the ‘upper-tail inequality’ in US education: http://t.co/dg73xDwHWW

Report of an Expert Panel considering reform of the measure of socio-economic status deployed in NAEP: http://t.co/gOmSdrLxSn

Breaking the Glass Ceiling of Achievement for Low Income Students and Students of Color from The Education Trust (US) http://t.co/MGYMJglK44

“The distribution of opportunity in our education system is nutty”: http://t.co/oo7Ri27gnS – Fair point

Labour also reported to be proposing admissions reform: http://t.co/MunADl43Zz  – devil will be in the detail here

Funding of NZ schools according to socio-economic deciles under further scrutiny: http://t.co/jvHglmxt9H

.

Selection and Independent Sector

Thin end of the wedge if it’s OK for politicians to say one thing and do another when choosing private education?  http://t.co/8aQ8eXeEcl

So what feedback do we have on the CEM 11+ pilot in Bucks? Exactly how much less coachable is it? http://t.co/Rg1EwHj5rO – I’m sceptical

Subject to DfE approval Invicta GS to run Sevenoaks grammar annex – now seem OK sharing site with Trinity Free School http://t.co/xSs2Eh63Hg

All the papers seem to be reporting the Independent Schools Census 2013. I guess it’s published here towards tiffin: http://t.co/drLgzU0bQd

TES report on the Independent Schools Census 2013: http://t.co/XtxDOqe3Xs – and here is the Census itself at last: http://t.co/p6EKqHKqpr

Today’s Sutton Trust Research on Selective Comprehensive Schools: http://t.co/vipxLDdv7a  and associated press release http://t.co/MsISlp7Sh0

The Ethics of Tutoring: http://t.co/BYWpvluyfw  – refreshingly honest

Hang on! ‘Moronic repetition and preparation for cognitive ability tests’? Aren’t they supposed to be non-coachable? http://t.co/8FlcE4UCUr

Oxford Times profiles Tim Hands, incoming Chair of HMC: http://t.co/BvlmuGwlw0  – who sounds like he might make waves

I simply don’t believe that Chelmsford HS’s change of 11+ test will materially change its intake: http://t.co/wdZZci6dtn

I see Mr G is delivering ‘a fresh vision for the independent sector’ today: http://t.co/LzVIKMoXRc

Mr G’s speech turned out a different animal: http://t.co/MBsmWmynkh – I worry his high expectations are too narrow, like a hurdle in a race

Tony Little says he and others can’t see an overarching ‘big picture’ vision for the Government’s education reforms: http://t.co/XlRGAnzzB8

New GSA President joins the chorus of disapproval: http://t.co/61ltLEtM7s

Outside the elite institutions, the private sector in US education is dying out argues Chester Finn: http://t.co/ypeIcFjln5

Love the ambiguous ‘too’ in the final para of this on Eton and poverty of aspiration: http://t.co/tVEVEEos38

Fair Admissions Campaign Statement: http://t.co/8Qfoa2DSJ8 Will map religious and socio-economic selection. FAQs: http://t.co/Tp1svgyGfJ

Centre for Market Reform of Education on its Tutors Association Consultation: http://t.co/7iWZx3FRwL  More detail here http://t.co/85qoGqevWq

The rapid expansion of online tutoring in the UK: http://t.co/s7oDRmdqbe

These are the grammar school statistics: http://t.co/H27wwW5ik8 cited in today’s latest Mail article on the subject: http://t.co/J8n5Ct1JKH

Grammar schools and the myth of social mobility: http://t.co/ltOADwOJve – Exactly

I wonder if Hitchens would support the wholesale introduction of ‘contextualised admissions’ into grammar schools: http://t.co/QcougJ89fb

@headguruteacher I still stand by most of what I proposed in Jan 2011 – essentially an OFFA for (grammar) schools: http://t.co/8ZvhNo2RA0

Post on selection by @headguruteacher: http://t.co/sajaOw2nSN  Appears to suggest GS select on attainment, not on ability so FSM imbalance OK

The continuing expansion of grammar school places: http://t.co/7yqrNsW2Nb  – How many are adding 1FE+ post academisation?

A second, competing, proposal to run a satellite grammar school in Sevenoaks: http://t.co/KomIp9xlzO

Chris Ray calls for 11+ admission via assessment days: http://t.co/oKjIBRg8Ad – I agree

HMCI prods independent sector towards stronger partnership with state schools: http://t.co/M46q0kiJCg and http://t.co/mbIY0UAeJz

.

https://twitter.com/GiftedPhoenix/status/352672826341343232

.

.

GP

July 2013

A Summer of Love for English Gifted Education? Episode One: KS2 Level 6 Tests

.

summer of love 1967 by 0 fairy 0


This post is the first in a short series, scheduled to coincide with three publications – two yet to be published – that focus directly on provision for gifted learners in England.

Each Episode will foreground one of the publications, set within the emerging overall narrative. Each will assess the likely impact of the target publication and the broader narrative as it unfolds while also reflecting associated developments in educational policy anticipated during the next few months.

Episode One:

  • Analyses the first publication, an Investigation of Level 6 Key Stage 2 Tests, already published in February 2013, exploring its findings in the context of current uncertainty about future arrangements for assessment in primary schools.
  • Reviews the outcomes of the most recent Ofsted survey of gifted and talented education, conducted in December 2009, so establishing a benchmark for consideration of a new Ofsted survey of how schools educate their most able pupils, due for publication in May 2013.
  • Sets out what we know about the third document, an Investigation of School and College-level strategies to raise the Aspirations of High-Achieving Disadvantaged Pupils to Pursue Higher Education, due for publication by mid-September 2013.

Future Episodes will scrutinise the new Ofsted Survey and the second Investigation respectively, linking them with other developments over the summer period, not all of which may yet be in the public domain.

By this means I plan to provide a kind of iterative stocktake of current issues and future prospects for their resolution. I am curious to learn whether I will be more or less positive at the end of the series than at the beginning.

For I enter the fray in a spirit of some world-weariness and pessimism over the continuing inability of the gifted education community to act collaboratively, to reform itself and to improve practice. This is seemingly a global malaise, though some countries stand out as bucking the trend. Many have featured in previous posts.

Will the Summer of Love provide the spur for trend-bucking reform here in England, or will the groundswell of energy it generates be dissipated in the long, languorous, lazy sunshine days ahead?

.

Publications in the Last Two Years and Associated Developments

Following a lengthy period in the doldrums, we may be on the verge of a rather livelier season in the evolving history of English gifted education.

It would be wrong to suggest that we have been entirely becalmed. Over the past two years we have digested a trio of key publications, all of which have been reviewed on this Blog:

  • The Sutton Trust’s ‘Educating the Highly Able’ (July 2012), which I took seriously to task for its over-emphasis on excellence at the expense of equity and almost entire failure to address the needs of underachieving gifted learners, especially those from disadvantaged backgrounds. Given the sponsoring organisation’s raison d’etre (improving social mobility) that seemed, frankly, bizarre.

These documents may have had some limited positive impact, by maintaining gifted education’s profile within wider education policy, but I can find no evidence to suggest that they have reformed our collective thinking about effective gifted education, let alone improved the learning experience and life chances of English gifted learners.

Indeed, it is conceivable that the two latter publications have set back the cause of gifted education by taking us down two successive blind alleys.

I have made my own small efforts to refocus attention on a more productive direction of travel through The Gifted Phoenix Manifesto for Gifted Education.

I do not claim any great status or significance for the Manifesto, though there are encouraging early signs that it is stimulating productive debate amongst others in the field, at least amongst those who are not firmly wedded to the status quo.

The Sutton Trust promises further work, however:

‘Helping the highly able

Piloting programmes that support and stretch bright students from non-privileged backgrounds in state schools, and opening up selective state schools to bright children from low and middle income homes.’

This presumably includes the outcome of the call for proposals that it issued as long ago as July 2012, ‘with a view to developing the first project by the end of the year’ – ie 31 December 2012 (see attachment at the bottom of the linked page).

The call for proposals sought:

‘Cost-effective, scalable projects which support highly able pupils in non-selective maintained schools.  The Trust is particularly interested in initiatives which are based on sound evidence and / or which draw on proven models of intervention.’

It expressed interest in:

  • ‘proposals that focus on those pupils capable of excellence in core academic school subjects’;
  • ‘various methods of defining this group – for example those attaining at the 90th percentile and above, the 95th percentile, or the new Level 6’ or ‘on the basis of school performance and local context’;
  • support for ‘“exceptionally able” pupils’, especially ‘imaginative ways of bringing them together’;
  • provision that is ‘integral to schools and not simply a “bolt-on” to mainstream provision’; and
  • programmes that start ‘in key stage three or four, but which may continue to support the students through their transition to FE and HE’.

There is some reasonable hope therefore that the Trust might still contribute in a positive way to the Summer of Love! If there is an announcement during the timeframe of this series I will of course feature the details in a future Episode.

But I plan to build the series around a second trio of documents which have the capacity to be somewhat more influential than those published from 2011 to 2012.

.

Kew once more 1 by giftedphoenix


.

Key Stage 2 Level 6

One is already with us: an ‘Investigation of Key Stage 2 Level 6 Tests’ commissioned by the Department for Education and published in late February 2013. (OK, so I’m stretching a point by extending Summer back into the Winter, but this study has so far escaped serious in-depth attention.)

The authors are Mike Coldwell, Ben Willis and Colin McCaig from the Centre for Education and Inclusion Research (CEIR) at Sheffield Hallam University.

Before engaging directly with their findings, it is necessary to sketch in a fair amount of contextual background, since that will be critical to the broader narrative we expect to evolve over the coming months.

 .

Background: Level 6 Tests

Level 6 Tests are by no means the first example of efforts to raise the assessment ceiling for high-attaining learners at the end of Key Stage 2 (KS2) (typically the final year of primary school when children are aged 11), but there is insufficient space here to trace the history of their predecessors.

The current iteration, optional Level 6 tests, was introduced in 2011 in reading, writing and maths. The tests were not externally marked, nor were results published.

QCDA was still in place. Its website said:

‘The tests provide the opportunity to stretch high attaining pupils and also provide a useful tool for measuring the ability and progression of gifted and talented pupils. You are advised to view the tests to make a judgement on how appropriate they are for your pupils.’

In June 2011, the Bew Report into KS2 testing, assessment and accountability reflected this experience:

‘We recognise that the current system of National Curriculum tests can appear to place a ceiling on attainment for the most able pupils. This has important implications for measures of progress, since a pupil who achieves level 3 at the end of Key Stage 1 can currently only achieve level 5 in the end of Key Stage 2 tests, and can therefore only make two levels of progress (currently the expected rate of progress).

Allowing pupils to attain level 6 at the end of Key Stage 2 would enable pupils with high Key Stage 1 attainment to make better than expected progress. Secondary schools receiving pupils who had attained level 6 would understand that these pupils would need to be particularly challenged and stretched from the start of Year 7…

It is important to challenge the most able pupils. We welcome the Government’s decision to make level 6 tests available to schools on an optional basis this year. We believe that these optional tests could allow particularly able pupils an opportunity to develop and fully demonstrate their knowledge and understanding.

However, we do have some concerns, in particular over the extent to which it will be possible for primary schools to cover enough of the Key Stage 3 curriculum to allow pupils to attain level 6. NFER, one of the few respondents who commented on this issue, suggested that it would be more appropriate to award a ‘high 5’ than a level 6.’

So Bew concluded:

‘We believe that the Government should continue to provide level 6 National Curriculum Tests for schools to use on an optional basis, whose results should be reported to parents and secondary schools.’

But there was also a rider:

‘If, following the review of the National Curriculum, any changes are made to the current system of levels, alternative arrangements should be put in place to ensure the most able pupils are challenged.’

More about that anon.

In the light of this, externally marked KS2 Level 6 tests were offered in 2012 in Reading and Maths. There was also an option to undertake internally marked Level 6 teacher assessment in Writing.

The 2012 KS2 Assessment and Reporting Arrangements Booklet offered a brief commentary:

‘These tests are optional and are aimed at high attaining children. Headteachers should take into account a child’s expected attainment prior to entering them for these tests as they should already be demonstrating attainment above level 5…

To be awarded an overall level 6 in a subject, a child must achieve both a level 5 in the end of Key Stage 2 test and pass the level 6 test for that subject. Schools can refer to the 2011 level 6 test papers in order to inform their assessment of whether to enter children for the test.’

The Investigation examines this 2012 experience, but is confined to the two externally marked tests.

Meanwhile – and skipping ahead for a moment – in 2013, the optional Reading and Maths tests are once again available, alongside a new optional test of Grammar, Punctuation and Spelling, in place of the teacher assessment of writing.

Reporting of Level 6 results in School Performance Tables has also changed. In 2012, Level 6 outcomes were used only in the ‘calculation of progress measures, Value Added,  percentage achieving level 5+ and average point scores’.

When it comes to the 2013 Performance Tables:

‘…the percentage of the number of children at the end of Key Stage 2 achieving level 6 in a school will also be shown in performance tables. The Department will not publish any information at school level about the numbers of children entered for the level 6 tests, or the percentage achieving level 6 of those entered for level 6.’

This change may have been significant in driving increased interest in the tests, though not necessarily for all the right reasons, as the discussion below will reveal.

Although the 2012 Performance Tables made limited use of Level 6 results some aggregated performance data was published, as my post on the outcomes noted:

‘900 pupils achieved Level 6 in the KS2 reading test and 19,000 did so in the maths test. While the former is significantly lower than 1% of total entries, the latter is equivalent to 3%, so roughly one pupil per class is now achieving Level 6 in maths. (About 700 pupils also achieved Level 6 in science teacher assessment). Almost all learners achieving a Level 6 will have demonstrated three levels of progress. We know from other provisional data that some 2,500 of those securing Level 6 in maths achieved either Level 2A or even Level 2B in maths alone at KS1, so managing four levels of progress in crude whole-level terms.’

Incidentally, we now know from DfE’s website that:

‘There will not be a Key Stage 2 science sampling test in 2013; a new, biennial (every other year), pupil-level sampling system will be introduced in 2014.’

And slightly more accurate performance data was supplied in an Appendix to the Investigation itself. It tells us that, across all schools (including independent schools that opted to take the tests):

  • 55,212 learners were entered for Level 6 Maths and 18,953 of them (34.3%) achieved it; and
  • 46,810 pupils were entered for Level 6 Reading and 942 (2.0%) achieved it.

That gives a total of 102,022 entries, though we do not know how many came from independent schools or, indeed, how many learners were entered for Level 6 tests in both Maths and Reading.
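For anyone wanting to check the arithmetic, the quoted pass rates and the combined entry total follow directly from the appendix figures. This is purely an illustrative snippet using the numbers cited above, not anything from the Investigation itself:

```python
# Figures quoted from the appendix to the Investigation (2012 tests).
maths_entered, maths_achieved = 55_212, 18_953
reading_entered, reading_achieved = 46_810, 942

maths_rate = maths_achieved / maths_entered * 100      # → 34.3 (per cent)
reading_rate = reading_achieved / reading_entered * 100  # → 2.0 (per cent)
total_entries = maths_entered + reading_entered          # → 102,022

print(f"Maths: {maths_rate:.1f}% of entrants achieved Level 6")
print(f"Reading: {reading_rate:.1f}% of entrants achieved Level 6")
print(f"Total entries: {total_entries:,}")
```

Note that the two entry figures cannot simply be summed to give a count of pupils, for the reason given above: some learners will appear in both.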

.

Background: The Future of National Curriculum Assessment

We have known since June 2012 that National Curriculum levels would be phased out, and we were informed, through a kind of policy aside in March 2013, that this would happen ‘from 2016’.

The new National Curriculum will be introduced from September 2014, so will be assessed through the existing assessment framework during its first year of implementation, despite the apparently strong case for keeping it and the associated assessment reforms fully synchronised.

It may be that this decision is associated with recent difficulties over the procurement of a contractor to undertake external marking of the KS2 tests from 2014-2016, or else progress on determining the new arrangements was insufficiently advanced by the time that contract came to be negotiated.

At the time of writing we still await a promised consultation document on primary assessment and accountability, some 10 months after the removal of levels was first communicated.

The issues discussed below will need revisiting once the Government’s proposals are safely in the public domain: the spectre of assessment reform hangs over this post as well as the Investigation it is supposed to be reviewing.

There are few clues to the direction of travel, apart from some suggestion that the Government has been influenced by Bew’s deliberations, even though his clarity on this point left something to be desired.

I quote the relevant sections fully below, to ensure that I haven’t missed any vital inflection or  hint of what Bew intended. The emphases are mine:

‘In the short term, we believe we need to retain levels as a means of measuring pupils’ progress and attainment… However, in the long term, we believe the introduction of a new National Curriculum provides an opportunity to improve how we report from statutory assessment. We believe it is for the National Curriculum Review to determine the most appropriate way of defining the national standards which are used to categorise pupils’ attainment.

We realise that, in order to measure progress, it is necessary to have an appropriate scale against which attainment and progress can be measured at various points. For example in Australia, a ‘vertical scale’ (where a movement along the scale between any two equally spaced points must reflect similar levels of progress) is created by testing several year-groups, using some common questions to link scores on each test together. A particular question might be considered difficult for a Year 3 pupil, but much easier for a Year 5 pupil. Although this is technically defensible, it does require tests at more regular intervals than we currently have in England.

In England, we currently use National Curriculum levels as a scale against which to measure progress. However, as stated later in this chapter, concerns have been raised as to whether the levels, as they currently exist, are appropriate as a true vertical scale. We recommend that, as part of the review of the National Curriculum, consideration is given to creating a more appropriate ‘vertical scale’ with which to measure progress.’

And, a little later in the Report:

‘In the longer term, we feel it may be helpful for statutory assessment to divide into two parts. All pupils could be expected to master a ‘core’ of essential knowledge by the end of Key Stage 2, concentrating on the basic literacy and numeracy which all pupils require if they are to access the secondary curriculum. This ‘core’ could be assessed through a ‘mastery’ test which all pupils should be expected to pass (only excepting cases of profound Special Educational Needs), providing a high minimum standard of literacy and numeracy at the end of primary education.

We recognise the risk that this approach may lead to ‘teaching to the test’, may set an unhelpfully low ceiling on attainment and would not reflect pupils’ progress. We would suggest two solutions. Firstly, it might be helpful to allow pupils to take ‘core’ tests in Years 4, 5 or 6 to ensure that able pupils are challenged. Secondly, we feel there could also be a separate assessment at the end of Key Stage 2 to allow pupils to demonstrate the extent of their knowledge and therefore to measure pupils’ progress during the Key Stage. This assessment could be designed to identify the extent of pupils’ attainment and understanding at the end of Year 6, spreading them out on a ‘vertical scale’ rather than being a pass/fail mastery test. Such an assessment should be as useful as possible to pupils, parents and teachers. It may be helpful for the results to report in greater detail than is currently provided by National Curriculum Test data, so they can identify more effectively the pupil’s attainment in key broad aspects of a subject.

We feel the combination of these statutory assessments could ensure that all pupils reach a minimum standard of attainment while also allowing pupils to demonstrate the progress they have made – which would indicate the quality of the school’s contribution to their education. It could provide a safety net in that all pupils should achieve a basic minimum, but would not impose a low ceiling on the able.’

And then finally:

‘A key criticism of the current Key Stage 2 tests is that pupils’ knowledge and skills over a four-year Key Stage is assessed via tests in a single specified week in May. Some critics have raised concerns that this approach causes stress for pupils, particularly those working at the lower end of a spectrum, and may have unfair implications for schools, whose overall results may be affected if for example a highly-performing pupil is absent on test day. In addition, criticism suggests there is little incentive to challenge the more able children, who may well be working at level 5 at an earlier point in the Key Stage or year.

We believe that our earlier recommendations address these issues. However, we also recognise the benefits of a system based on the principle of ‘testing when ready’. The proponents of such an approach argue that it would allow each pupil to be entered for statutory tests when he/she is ready, and then able to move on to more advanced learning. We believe that it would be possible for a statutory ‘testing when ready’ system to meet the statutory assessment purposes we have specified.

However, we are not convinced that moving to a ‘testing when ready’ approach is the best way of achieving the purposes of statutory assessment under the current National Curriculum. We suggest that the principle of ‘testing when ready’ should be considered in the future following the National Curriculum Review. We believe that the principle of ‘testing when ready’ may fit well if computer administered testing is introduced, making it easier for each pupil to sit his/her own personalised test at any point in time when teachers deem him/her to be ready.’

In summary then, Bew appears to suggest:

  • Assessment of mastery of an essential core of knowledge that all should pass but which might be undertaken as early as Year 4, two years before the end of KS2;
  • A separate end of KS2 assessment of the extent of learners’ knowledge and their progress against  a new ‘vertical scale’ that will judge their progress over time, this potentially incorporating reporting on attainment in ‘key broad aspects of a subject’;
  • Consideration of transition to a universal ‘testing when ready’ approach at some indeterminate future point (which may or may not be contemporaneous with and complementary to the changes above).

Quite what learners will do after they have successfully completed the mastery test – and its relationship to the draft Programmes of Study that have now been published – is not explained, or even explored.

Are learners expected to begin anticipating the Key Stage 3 programme of study, or to confine themselves to pursuing the KS2 programme in greater breadth and depth, or a combination of the above?

In short, Bew raises more questions than he answers (and so effectively reinforces the argument for keeping curricular and assessment reforms fully synchronised).

At this point we simply do not know whether the Government is ready to unveil plans for the introduction of a radically new ‘test when ready’ assessment regime from 2016, or whether some sort of intermediate position will be adopted.

The former decision would be a very bold reform given the ‘high stakes’ nature of these tests and the current state of cutting edge assessment practice. Given the difficult history of National Curriculum assessment, the risk of catastrophic error might well be too great to contemplate at this stage.

Awash in all this uncertainty, one might be forgiven for assuming that an analysis of the impact of the introduction of Level 6 tests has been overtaken – or almost overtaken – by events.

But that would be unjustified since the Investigation addresses some important issues about gifted education in the upper primary years, effective management of the transition between primary and secondary schools and the role of assessment in that process.

.

Kew once more 2 by giftedphoenix


.

The Investigation: Key Points

The Report is structured around the sequence of events leading from a school’s decision to enter learners for the tests, through the identification and selection of participants and the support provided to them in the run-up to the test, to the outcomes for participants, other pupils, the host school and receiving secondary schools.

It addresses five research questions:

  • How have the tests affected school behaviour towards the most able pupils?
  • What is the difference in behaviours between schools that do well in the tests and those which do not?
  • What are the positive and negative effects of the tests, on schools and pupils respectively?
  • Why did some schools enter pupils for the tests whereas others did not?
  • How are schools identifying pupils to enter the tests?

It does so by means of a tripartite methodology, drawing on 20 case studies of schools undertaking the tests, 40 telephone interviews with schools that decided not to take part and 20 telephone interviews with secondary schools.

.

The Decision to Enter Learners

Schools that decided to enter pupils for the tests did so because:

  • They wanted to provide additional challenge for able pupils and/or remove an unhelpful ceiling on their attainment. There was a perceived motivational benefit, for staff as well as learners,  while some primary schools ‘hoped that an externally validated exam might make secondary schools more secure in their views about primaries’ judgements’, as well as protecting learners from expectations that they would repeat work at their receiving secondary schools.
  • They wanted to evidence positive performance by the school, by demonstrating additional progress by learners and confirming teacher assessment outcomes. Entry was assumed to assert their high expectations of able pupils. Some were anxious that failure to take part would be perceived negatively by Ofsted.
  • Some were encouraged by the ‘low stakes’ nature of the assessment, identified entry as consistent with the school’s existing priorities, saw a positive marketing opportunity, or wanted to attract or retain staff ‘with sufficient confidence and expertise to teach level 6 content’.

Conversely, schools deciding against participation most often did so because they judged that they had no pupils for whom the tests would be suitable (though there was recognition that this was a cohort-specific issue).

Many said they had received insufficient guidance, about the test itself and about the need to teach the Key Stage 3 programme of study, and there was related concern about the absence of dedicated teaching materials.

Some objected to the tests in principle, preferring an alternative approach to assessing these learners, or concerned at a disproportionate focus on the core subjects. ‘Quite a number’ took the reverse and negative position on secondary schools’ anticipated response, assuming that receiving schools would re-test and repeat the work pupils had undertaken.

.

Identification and Selection of Participants

Concern about lack of guidance extended to advice on selection of participants. There was widespread worry at the limited availability of past papers. Lack of confidence led to schools adopting very different approaches, some rather liberal and others much more conservative.

Some entered only those learners they believed had a very good chance of passing. Others extended entry to all those they believed had some chance of success, sometimes including even those they felt probably would not pass.

On average, case study schools nominated 41% of learners who achieved Level 5 in Maths, though some entered 20% or fewer and others 81% or more. Most fell between these extremes. (The national figure is given as 26%.)

But, in Reading, case study schools nominated on average only 25% of learners who had achieved Level 5. Only a minority of schools nominated over 41%. (The national figure is given as 18%.)

Timing of selection varied considerably. Identifying potential entrants relatively early in Year 6 and confirming selection nearer the April deadline was a common strategy.

Decisions typically took into account several factors, foremost of which were learners’ own preferences. Few schools consulted parents systematically. There was generally less clarity and confidence in respect of Reading.

Schools typically utilised a mix of objective, quantifiable and subjective, value-driven measures, but ‘many schools struggled to convey coherently a specific selection strategy’ and it is clear that the probability of a learner being entered varied considerably according to which school they attended.

Objective evidence included formative assessment, tracking data, cross-moderation of work between partner schools and the outcomes of practice tests. Though schools felt secure in their levelling, only a handful stated explicitly that they had learners working at Level 6, either at the point of selection for the tests or subsequently. In reality, most made their judgements on the basis of performance at Level 5.

Subjective considerations – eg learners’ ‘wellbeing’ – were significant:

‘In certain instances possessing the raw ingredients of academic ability and a track record of high academic performance in isolation were not necessarily seen to be sufficient grounds for selection. Instead a number of schools also attached considerable importance to the particular pupils’ maturity, personality and, in some cases, behaviour.’

Many schools expected to tighten their selection criteria in response to low pass rates, especially in Reading. There was marked dissatisfaction with ‘the increased threshold marks (compared with those from the pilot tests)’ and a feeling that this had led schools to underestimate the difficulty of the tests.

The Executive Summary argues that ‘schools were largely effective in ensuring that the very top ability pupils were identified and put forward’, but the substantive text is not quite so bullish.

There was clear evidence of reticence on teachers’ parts in outlining the characteristics of learners working at Level 6. Reference was made to independence, tenacity and motivation and ‘an innate flare [sic] or capability to excel at a particular subject’.

Some schools struggled to pin down these traits, especially for Reading. Teachers mentioned ‘excellent inferential skills and capacity to access authorial intent’.

Maturity was also a key consideration:

‘The parameters of the Level 6 Reading test are just not compatible with the vast majority of pupils aged 11 (even the very brightest ones) – they simply do not possess the experiences and emotional maturity to be able to access what is required of them within the level 6 test.’

.

Support Provided to Participants

Limited guidance was a prominent issue, leading schools to use ‘an array of ad hoc means of support’ derived from their own research and experience.

Many adopted aspects of the KS3 Programme of Study, despite concern at the attitude of receiving secondary schools. Materials and support were much more evident in Maths than in Reading.

Lack of clarity over the relationship between Level 6 tests and the KS3 programmes of study was a significant issue. Most schools drew on the KS3 curriculum but a few preferred to emphasise breadth and depth at KS2 instead.

Schools were generally more confident in their support for Maths because ‘there appeared to be more internal and external expertise available’ and they found selection of participants less problematic.

Two aspects of support were prominent:

  • Classroom differentiation, focused on specific aspects of the curriculum – though the tests themselves were not widely perceived to have had a material impact on such practice. Some form of ability grouping was in place in all schools in respect of maths and most schools in respect of reading (as part of literacy).
  • Test preparation, mostly undertaken in additional booster sessions combining teaching with test-taking practice and the wider use of practice papers.

The Report characterises three broad approaches adopted by schools: outcome-focused (heavily emphasising test preparation); teaching and learning-focused (with markedly less emphasis on booster sessions and test practice); and a composite approach marking the continuum between these two extremes.

Several schools reported an intention ‘to focus more on teaching and learning’ in the coming year.

.

Outcomes of the Tests

In Maths it was possible ‘to identify a small number of schools that performed particularly well and others that performed relatively poorly’.

The analysis focuses on the simple pass rate, the Level 5 to 6 conversion rate and a ‘top Level 5’ to Level 6 conversion rate across the 20 case study schools.

The simple pass rate was 40% (34% nationally), though this masked significant variation – from 0% to 100% indeed.

These outcomes correlated broadly with the Level 5 to 6 conversion rates, for which the case study school average was 17%, with variance from 0% to 50%.

However, when it came to the ‘top Level 5’ to Level 6 conversion rate, the Report can only admit that, while there was some degree of correlation with the other two measures:

‘On this measure there was polarity: most schools either found that all of their ‘top level 5s’ achieved level 6 or that none of them achieved it. This is difficult to interpret, and the qualitative data does not shed a light on this.’

Even more problematically, only one learner in the entire sample was successful in achieving Level 6 in the Reading test – equivalent to a 1% success rate (the national pass rate was 2%).
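The three measures used in the analysis are simple ratios, which a short sketch makes concrete. The pupil counts below are hypothetical, chosen only to mirror the case-study averages quoted above (a 40% pass rate and a 17% conversion rate); they are not taken from the Report itself.

```python
# Illustrative sketch of the measures in the Report's outcomes analysis.
# All pupil counts are invented for illustration, not drawn from the Report.

def pass_rate(passed, entered):
    """Simple pass rate: share of entrants achieving Level 6."""
    return passed / entered

def conversion_rate(passed, level5_pupils):
    """Level 5 to 6 conversion: share of all Level 5 pupils achieving Level 6."""
    return passed / level5_pupils

level5_pupils = 100   # hypothetical cohort achieving Level 5 in Maths
entered = 40          # close to the 41% nomination average quoted above
passed = 16

print(f"pass rate:       {pass_rate(passed, entered):.0%}")              # 40%
print(f"conversion rate: {conversion_rate(passed, level5_pupils):.0%}")  # 16%
```

The ‘top Level 5’ to Level 6 conversion rate is the same ratio taken over a smaller denominator (only those at the top of Level 5), which is why small numbers of entrants can produce the all-or-nothing polarity the Report describes.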

The Report offers some rather approximate findings, wrapped around with health warnings, suggesting that better results were more typically found in schools with a combined approach featuring learning and outcomes (see above), as opposed to either of those two extremes.

Positive outcomes for schools have already been outlined above.

Benefits for learners, identified by teachers and learners alike, included the scope provided by the tests for learners to demonstrate (even fulfil) their potential. Wider personal outcomes were also mentioned including a positive impact on motivation (though there were also corresponding concerns about overloading and over-pressurising learners).

Secondary schools rather tended to reinforce the negative expectations of some primary schools:

  • They were ‘generally ambivalent about primary schools’ use of L6 test and aspects of the KS3 curriculum…due to the fact that secondary schools in general felt that measures of KS2 outcomes were not accurate… Consequently, they preferred to test the children pre-entry or at the beginning of Year 7’.
  • ‘Many of the secondary schools were concerned about primary schools ‘teaching to the test’ and thus producing L6 pupils with little breadth and depth of understanding of L6 working…Generally secondaries viewed such results as unreliable, albeit useful for baseline assessment, as they help to identify ‘high fliers’’
  • While most noted the benefits for learners ‘some felt that inaccurate test outcomes made the transition more difficult’. The usual range of concerns was expressed.

.

The Investigation’s own Conclusions

The Investigation offers four main conclusions:

  • ‘It is abundantly clear…that greater guidance on pupil selection and support and more practice materials are key issues’. This needs to incorporate guidance on coverage, or otherwise, of the KS3 curriculum. The main text (but not the executive summary) identifies this as a responsibility of ‘DfE with the STA’. It remains to be seen whether the Government will take on this task or will look instead to the market to respond.
  • Schools adopting a strongly outcome-focussed approach were less likely to produce successful results than those adopting a mixed learning and outcome approach. Some schools seemed too heavily driven by pressure to secure positive inspection results, and

‘responded to the direction from inspectors and policymakers to support the most able by a narrowing of the curriculum and overemphasising test preparation, which is not in the best interests of pupil, teachers or schools’

There is a ‘need for policy to aim to drive home the vital importance of pedagogy and learning to counteract the tendency’.

  • Secondary schools confirm primary schools’ scepticism, in that they do not ‘judge the tests as an accurate reflection of levels’. There is therefore ‘a strong need to engage secondaries much more with primaries in, for example, curriculum, assessment and moderation’. This is presumably a process that is most easily undertaken through local collaboration.
  • The very low pass rate in Reading, selection issues (including maturity as a key component) and secondary scepticism point to a need ‘to review whether the L6 Reading test in its current form is the most appropriate test to use to identify a range of higher performing pupils, for example the top 10%’. The full commentary also notes that:

‘The cost of supporting and administering a test for such a small proportion of the school population appears to outweigh the benefits’.

.

My Conclusions

There is relatively little here that would be unusual or surprising to a seasoned observer of how gifted education is currently practised and of wider educational issues such as the impact of Ofsted on school practice and transfer and transition issues.

The study is rather narrow in its conceptualisation, in that it fails to address the interface between the Level 6 tests and other relevant aspects of Government thinking, not least emerging policy on the curriculum and (of course) assessment.

It entirely ignores the fact that a decision to abandon National Curriculum Levels was announced eight months prior to publication.

There is no attempt to analyse the national data in any depth, or to look at any issues concerning the gender, ethnic and socio-economic profile of learners entered for the tests and successful in them, even though there will have been some heavy biases, especially in favour of those from comparatively advantaged backgrounds.

It would have been particularly helpful to see how much bigger the FSM gap at Level 6 is, compared with Level 5, whether schools had focused on this issue and, if so, what action they had taken to address it. Was there any evidence of the positive use of Pupil Premium funding for this purpose?

The Investigation’s general point about the negative impact of Ofsted on schools’ practice may also be rather misleading, in that the negative influence of overly outcomes-focussed thinking is at least partly attributable to School Performance Tables rather than Ofsted’s school inspection framework.

In that guise it will probably also feature in Ofsted’s own upcoming publication (see below). Whether there is any reference in Ofsted’s report to the case for rebalancing schools towards pedagogy and learning, so they are more in equilibrium with the pursuit of assessment outcomes, is rather more doubtful. Quite how that might be undertaken is ducked by the Level 6 Investigation and so likely to be sidelined.

The issues relating to transition and transfer are longstanding and a heavy drag on the efficiency of our school system, both for gifted learners and the wider population. If the upcoming consultation affects the timing of Key Stage 2 assessment, that may provide the impetus for renewed efforts to address the generic problem. Otherwise this seems unlikely to be a priority for the Government.

The response to date to the call for additional guidance has been rather limited.

Certainly, a range of sample material has been posted to assist schools interested in taking up the new test of grammar, punctuation and spelling. But the information available to support the Maths and Reading tests remains relatively thin. I have found nothing that addresses substantively the issues about pre-empting elements of Key Stage 3.

Despite the limited support available, evidence has recently emerged that Level 6 test entries are significantly higher for 2013 than for 2012. A total of 113,600 pupils have been entered, equivalent to 21% of the relevant pupil population.

This is said to be an increase of 55% compared with the 73,300 entered in 2012 (though that figure does not seem to agree with those quoted in the Investigation and reproduced above).

Moreover, some 11,300 schools have registered for the tests, up 41% on the 2012 figure of 8,300 schools.
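Taking the quoted figures at face value, the increases are easy to check. The entries figure matches the quoted 55%, but the schools figure works out nearer 36% than the quoted 41%, which may reflect rounding in the underlying numbers; all values here are simply the ones quoted above, not independently sourced.

```python
# Quick sanity check of the quoted 2012 -> 2013 Level 6 figures.
# All inputs are taken from the post itself.

def pct_increase(old, new):
    """Percentage increase from old to new."""
    return (new - old) / old * 100

entries_2012, entries_2013 = 73_300, 113_600
schools_2012, schools_2013 = 8_300, 11_300

print(round(pct_increase(entries_2012, entries_2013)))  # 55, matching the quoted 55%
print(round(pct_increase(schools_2012, schools_2013)))  # 36, below the quoted 41%
```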

Given the issues associated with the Reading test set out in the Report, one might hazard a reasonable guess that the increase will be attributable largely to the Maths test and perhaps to schools experimenting with the new grammar, punctuation and spelling test (though the figures are not broken down by test).

Increased emphasis in the 2013 Performance Tables (see above) will also be a significant factor. Does this suggest that schools are increasingly slaves to the outcomes-driven mentality that the Investigation strives so hard to discourage?

.

https://twitter.com/GiftedPhoenix/status/326943920028270592

.

The key point here is that it is unlikely to be wise or appropriate to enter over one fifth of all end KS2 learners for tests in which so few are likely to be successful.

One might reasonably hope that, incorporated within the design principles for whatever assessment instruments will replace Level 6 tests, there is explicit recognition that a basic pass/fail distinction, combined with an exceptionally high threshold for a pass, is not the optimal solution.

It is important to retain a high threshold for those with the capacity to achieve it, but other relatively strong candidates also need opportunities to demonstrate a positive outcome at a slightly lower level. A new approach might look to recognise positively the performance of the top 10%, top 5% and top 1% respectively.

It will also be critical to ensure an orderly transition from the current arrangements to those in place from 2016. There is a valuable window of opportunity to pilot new approaches thoroughly alongside the existing models. The reform need not be rushed – that is the silver lining to the cloud associated with decoupling curriculum and assessment reforms.

So, what is my overall judgement of the contribution made by this first publication to my wished for ‘Summer of Love’?

A curate’s egg really. Positive and useful in a small way, not least in reminding us that primary-secondary transition for gifted learners remains problematic, but also a missed opportunity to flag up some other critical issues – and of course heavily overshadowed by the primary assessment consultation on the immediate horizon.

Still, one hopes that its recommendations will be revisited as part of a holistic response to all three publications, and that those to follow will take full account of its findings. Otherwise the overall narrative will be somewhat impoverished and will almost certainly fail to give due prominence to the critically important upper primary phase.

.

Kew once more 3 by giftedphoenix


 

The Ofsted Survey

.

Background

Next in line for publication is an Ofsted Survey, conducted using the Inspectorate’s rapid response methodology, which will examine ‘how state schools teach the most able children’.

Unusually, this was announced in January 2013 through a press briefing with a national newspaper. Given the political leanings of the paper in question, the contents of the story may be a somewhat biased version of reality.

There is no information whatsoever on Ofsted’s own website, with the sole (and recently added) exception of a publication schedule confirming that the survey will be published in May.

The newspaper report explains that:

  • Despite being a rapid response exercise, this publication ‘will be the most extensive investigation of gifted and talented provision undertaken’ by Ofsted.
  • It will focus predominantly – if not exclusively – on secondary schools where ‘children who get top marks in primary school are being let down by some secondary school teachers who leave them to coast rather than stretch them to achieve the best exam results’.
  • It will examine ‘concerns that bright pupils who are taught in mixed ability classes are failing to be stretched and that schools are entering clever children too early for GCSE exams so that they gain only the C grades that count in league tables and are not pushed to the full extent of their abilities’.
  • Ofsted will interrogate existing inspection data on educational provision for gifted and talented learners, as well as pupil progress data. They will also survey provision afresh, through visits to a representative sample of over 50 secondary schools.

HMCI Sir Michael Wilshaw is quoted extensively:

‘I am concerned that our most able pupils are not doing as well as they should be…Are schools pushing them in the way they should be pushed and are pushed in the independent sector and in the selective system?

The statistic that four independent schools and a very prestigious six [sic] form college are sending more youngsters to Oxbridge than 2,000 state secondary schools is a nonsense. When the history of comprehensive education is written people need to say that they did as well by the most able pupils as they did by the least able…

I am passionate about this, it will be a landmark report…I am as concerned as the next person on the issue of social mobility. Are our children and our children from the poorest backgrounds who are naturally bright doing as well as they should?

…I would like to see GCSE league tables reformed…The anxiety to get as many through those C boundaries have sometimes meant that schools haven’t pushed children beyond that.

We need sophisticated league tables which shows [sic] progress. Youngsters leaving primary school with level 5 should be getting A*, A or B at GCSE.’

It is arguable that the Government has already responded to the final specific point via its proposal – in the consultation on secondary accountability released alongside the draft National Curriculum – to publish an ‘average point score 8’ measure based on each pupil’s achievement across eight qualifications at the end of KS4 (though whether it has done enough to counterbalance other pressures in the system to prioritise the C/D borderline is open to question).

Otherwise there are several familiar themes here:

  • whether gifted learners are insufficiently challenged, particularly in secondary comprehensive schools;
  • whether they are making sufficient progress between the end of Key Stage 2 and the end of Key Stage 4;
  • whether they are held back by poor differentiation, including a preponderance of mixed ability teaching;
  • to what extent they are supported by schools’ policies on early entry to examinations, particularly GCSEs;
  • whether more can be to done to support progression by state school students to the most competitive universities, especially by those from disadvantaged backgrounds; and
  • whether there are perverse incentives in the accountability system that result in gifted learners being short-changed.

Given the puff generated by Sir Michael, expectations are high that this will be a substantial and influential piece of work. It follows that, if it turns out to be a comparative damp squib, the sense of disappointment and frustration will be so much greater.

The Report will be judged by what new and fresh light it can bring to bear on these issues and, critically, by the strength of the recommendations it directs towards stakeholders at national, local and school level.

Just how interventionist will Ofsted show itself in backing up its leader’s passion? Will it take responsibility for co-ordinating a response from central government to any recommendations that it points in that direction – and what exactly will Ofsted commit itself to doing to help bring about real and lasting change?

Not to labour the point (though I fear I may be doing so) a limp effort that repackages familiar findings and appeals rather weakly to stakeholders’ better judgement will not display the landmark qualities of which HMCI has boasted.

A future Episode in this series will be dedicated to assessing whether or not these inflated expectations have been satisfied, and what the consequences are for the Summer of Love.

.

Benchmarking the New Report

In the meantime, it is instructive to look back at the most recent inspection report on gifted education, thus supplying a benchmark of sorts against which to judge the findings in this new publication.

This will help to establish whether the new report is simply bearing out what we know already about long-standing shortcomings in gifted education, or whether it has important messages to convey about the impact – positive or negative – of the predominantly ‘school led’ approach adopted by successive Governments over the past three years.

The most recent report was published in December 2009, in the latter days of the previous government.

‘Gifted and Talented Pupils in Schools’ is based on a rapid response survey of 26 primary and secondary schools, selected because their most recent school-wide inspections had identified gifted and talented education as ‘an improvement point’.

The survey was undertaken shortly after the previous government had, in the Report’s words:

‘Reviewed its national programme for gifted and talented pupils and concluded that it was not having sufficient impact on schools. As a result, provision is being scaled back to align it more closely with wider developments in personalising learning. Schools will be expected to do more themselves for these pupils.’

Eight of the 26 schools (31%) were judged to be well-placed to respond to this new environment, 14 (54%) displayed adequate capacity for improvement and the remaining four (15%) had ‘poorly developed’ capacity to sustain improvement.

The schools that were well-placed to build their own capacity could demonstrate that their improved provision was having a positive impact on outcomes for all pupils, were making use of available national resources – including the critically important Quality Standards – and were making sure that all pupils were suitably challenged in lessons.

The majority of schools in the middle group could demonstrate some improvement in pupil outcomes since their last inspection, but ‘many of the developments in these schools were fragile and the changes had had limited success in helping gifted and talented pupils to make appropriate and sustained progress’.

Gifted education was not a priority and:

‘To build their capacity to improve provision, they would benefit from better guidance, support and resources from outside agencies and organisations.’

In the four schools with inadequate capacity to improve, lead staff had insufficient status to influence strategic planning, teachers had not received appropriate training and schools:

‘Did not sufficiently recognise their own responsibilities to meet the needs of their gifted and talented pupils’.

The Report’s Key Findings identify a series of specific issues:

  • Many schools’ gifted education policies were ‘generic versions from other schools or the local authority’, so insufficiently effective.
  • In the large majority of schools (77%) pupils said their views were not adequately reflected in curriculum planning and they experienced an inconsistent level of challenge.
  • None of the schools had engaged fully with the parents of gifted learners to understand their needs and discuss effective support.
  • The better-placed schools were characterised by strong senior leadership in this field and lead staff with sufficient status to influence and implement policy. Conversely, in the poorer schools, senior staff demonstrated insufficient drive or commitment to this issue in the face of competing priorities.
  • In schools judged to have adequate capacity to improve, subject leaders had too much flexibility to interpret school policy, resulting in inconsistency and lack of coherence across the curriculum.
  • Most schools ‘needed further support to identify the most appropriate regional and national resources and training to meet their particular needs’. Lead staff were seeking practical subject-specific training for classroom teachers.
  • All schools ‘felt they needed more support and guidance about how to judge what gifted and talented pupils at different ages should be achieving and how well they were making progress towards attaining their challenging targets across key stages’.
  • Just over half the schools had established collaborative partnerships with other schools in their localities. Lack of such support was evident in the schools with limited capacity to improve. There was comparatively little scrutiny through local accountability arrangements.
  • All the schools had developed out-of-hours provision though the link with school-based provision was not always clear and schools were not consistently evaluating the impact of such provision.
  • There was little analysis of progression by different groups of gifted learners.

The Report offers the customary series of recommendations, directed at central and local government and schools, designed to help schools build the necessary capacity to improve their performance in these areas. It will be telling whether the new Report assesses progress in implementing those.

Rather oddly, these recommendations fail to endorse or propose arrangements for the ongoing application of the Quality Standards in a ‘school-led’ environment, although the Standards incorporate all these elements of effective practice and provide a clear framework for continuous improvement.

With the benefit of hindsight, one might argue that many of the problems Ofsted cited in 2009 would have been rather less pronounced had the Inspectorate fully embraced the Standards as their official criteria for judging the effectiveness of gifted education when they were first introduced.

The Standards are now growing significantly out of date and require an urgent refresh if they are to remain a valuable resource for schools as they continue to pursue improvement.

Ideally Ofsted might lead that process and subsequently endorse the revised Standards as the universal measure for judging the quality of English schools’ gifted education. I can think of nothing that would have a more significant impact on the overall quality of provision.

But I suspect that will be an idea too interventionist for even the most passionate HMCI to entertain.

It will be fascinating, nevertheless, to map the shortcomings identified in the upcoming Report against the existing Standards, as well as against those flagged in the predecessor Report. But that’s a topic for another day.

.

Kew once more 4 by giftedphoenix


.

Raising the Aspirations of High-Achieving Disadvantaged Pupils

Thirdly and finally, DfE has commissioned an ‘Investigation of School and College-level Strategies to Raise the Aspirations of High-achieving Disadvantaged Pupils to Pursue Higher Education’.

This is still some way from publication, but the contract – including the specification – is available for public scrutiny (see documents section on this link).

The contract was awarded to TNS-BMRB (where the Project Lead is Mark Peters) working with the Institute for Policy Studies in Education (IPSE) based at London Metropolitan University (where the lead is Carole Leathwood).

IPSE is undertaking the qualitative element of the research and carries this outline of the project on its website.

According to the contract, the contractors must deliver their final report by 28 June and the Department must publish it within 12 weeks of this date, so by 20 September 2013 at the latest. The project is costing £114,113 plus VAT.

Its aims, as set down in the contract, are to discover:

  • ‘What strategies are being used by schools across years 7-11 and in school sixth forms (years 12-13) to support high-achieving disadvantaged pupils in to [sic] pursue HE.
  • If the pupil premium is being used in schools to fund aspiration raising activities for high-achieving disadvantaged pupils.
  • What strategies are being used by colleges to support high-achieving disadvantaged pupils pursue HE and
  • To identify assess [sic] any areas of potential good practice.

‘High-achieving’ is defined for these purposes as ‘pupils who achieve a Level 5 or higher in English and Maths at KS2’.

As reported in a previous post, some 27% of pupils achieved this outcome in 2012, up from 21% in 2011, so the focus is on the top quartile, or perhaps the top two deciles of pupils on this measure.

‘Disadvantaged’ is defined as ‘pupils eligible for free school meals’ (and, in the case of post-16 students, those who were eligible for FSM in Year 11). This is of course a somewhat narrower definition than eligibility for the Pupil Premium, even though the Premium is pivotal to the study.

The national proportion of FSM-eligible pupils achieving Level 5 in KS2 English and maths in 2012 is, I believe, 14%, compared with 32% of non-FSM pupils, giving a gap on this measure of 18 percentage points.

This data is not provided in School Performance Tables nor is it easily sourceable from published national statistics, though it does appear in schools’ Raise Online reports. (Incidentally, the comparable gap at Level 4 is somewhat lower, at 16%.)
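The gap arithmetic is straightforward, using the figures just quoted (14% of FSM pupils versus 32% of non-FSM pupils at Level 5, and the 16-point gap at Level 4); nothing below comes from any additional source.

```python
# FSM attainment-gap arithmetic using the figures quoted above.

def gap_in_points(non_fsm_pct, fsm_pct):
    """FSM attainment gap, in percentage points."""
    return non_fsm_pct - fsm_pct

level5_gap = gap_in_points(32, 14)
level4_gap = 16  # as quoted above

print(level5_gap)              # 18 percentage points, as stated
print(level5_gap - level4_gap) # the Level 5 gap exceeds the Level 4 gap by 2 points
```

The widening of the gap between Level 4 and Level 5 is exactly the pattern the preceding paragraphs suggest the Investigation should have examined.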

The full set of objectives for the project is as follows (my emphases, but not my punctuation):

‘For Schools:

  • To identify to what extent schools are supporting high-achieving disadvantaged pupils to raise their aspiration to go on to HE?
  • To identify what activities take place in Years 7 -11 for high-achieving disadvantaged pupils to raise their aspiration to go on to HE and the Russell Group universities?
  • To identify whether the Pupil Premium being used [sic] to fund specific activities to help pupils pursue HE?
  • To identify what good practice looks like for supporting high-achieving disadvantaged pupils to pursue HE? (Focusing particularly on schools that have a high percentage of FSM pupils who go on to HE).

For FE colleges, sixth forms colleges and school sixth forms:

  • To identify to what extent are colleges supporting high-achieving disadvantaged learners post-16 to pursue HE?
  • To identify what strategies, if any, do high-achieving disadvantaged learners receive post-16 to pursue HE and more specifically Russell Group Universities?
  • To identify what good practice looks like for supporting high-achieving disadvantaged learners to pursue HE? (Focusing in particular on the strategies used by colleges that have a high percentage of disadvantaged learners who go on to HE).

For schools and colleges

  • To establish how schools and colleges are identifying ‘high-achieving, disadvantaged’ pupils/learners?
  • To identify which particular groups (if any) are being identified as requiring specific support and why?
  • To identify what extent schools/colleges engage in aspiration raising activities specifically designed to increase participation in Russell Group Institutions (rather than HE in general)?
  • To identify what good practice look like in relation to different groups of pupils/learners?’

It is evident from this that there is some confusion between aspiration-raising activities and wider support strategies. But there is clearly interest in comparing strategies in the school and post-16 sectors respectively (and perhaps in different parts of the post-16 sector too). The primary sector does not feature.

There is also interest in establishing approaches to identifying the beneficiaries of such support; how such provision is differentiated between progression to HE and progression to ‘Russell Group universities’ respectively; the nature of good practice in each sector, drawn particularly from institutions where a significant proportion of students progress to HE; and distinguishing practice for different (but non-defined) groups of learners.

Finally, there is some interest – though perhaps a little underplayed – in exploring the extent to which the Pupil Premium is used to fund this activity in schools. (Funding sources in post-16 environments are not mentioned.)

The study comprises 6 phases: pre-survey scoping; survey piloting; national school survey (a sample of 500 schools, including 100 that send a high proportion of FSM-eligible pupils to HE); national FE and sixth form college survey (a sample of 100 institutions); case studies (eight schools and two colleges); and results analysis.

The results analysis will incorporate:

  • ‘To what extent schools and colleges are providing aspiration raising activities to high achieving disadvantaged pupils.’
  • ‘What activities take place across different year groups.’
  • ‘Analysis by school characteristics including region, school size, distance to the nearest Russell group university, proportion of FSM eligible pupils’
  • Comparison of the 400 schools with the 100 sending a high proportion of their FSM pupils on to higher education.
  • Whether ‘activities are associated with higher numbers of pupils progressing to HE and trends in what works for different pupil groups’
  • Triangulation of data from different strands
  • Analysis of ‘best practice’, incorporating ‘comparisons between schools and colleges’.

There is no overt reference to other Government policies and initiatives that might be expected to impact on institutions’ practice, such as the Destination Measures (which will be presented separately for FSM-eligible learners in 2013, as well as being incorporated in School and College Performance Tables) or the Dux Scheme. Nor is there any explicit reference to the outreach activities of universities.

One assumes, however, that the outcomes will help inform Government decisions as to the effectiveness of existing school- and college-level policy interventions that contribute towards the achievement of its Social Mobility Indicators.

The Report is likely to result in arrangements of some sort for disseminating effective practice between institutions, even if that amounts only to a few brief case studies.

It may even help to inform decisions about whether additional interventions are required and, if so, the nature of those interventions.

Previous posts on this Blog have made the case for a nationally co-ordinated and targeted intervention provided through a ‘flexible framework’ which would synergise the currently separate ‘push’ strategies from schools/colleges with the ‘pull’ strategies from higher education in support of the ‘most disadvantaged, most able’.

This would be a subset of the 14% achieving KS2 Level 5 in English and maths, defined by their capacity to enter the most competitive universities. It might incorporate a specific focus on substantively increasing progression to particular ‘elite’ targets, whether expressed in terms of courses (e.g. medicine, veterinary science, law) or institutions (notably Oxbridge).

At the moment all the running is being made on the ‘pull’ side, spearheaded by joint OFFA/HEFCE efforts to develop a ‘National Strategy for Access and Student Success’.

A joint effort would:

  • Passport funding to individual learners, supporting them through transition at 16 and 18, probably by topslicing the Pupil Premium for the purpose.
  • Enable learners and facilitators to draw on provision offered via the (currently fragmented) supply side, drawing in third party providers as well as schools/colleges and universities.
  • Provide for a menu of such provision from various sources to be synthesised into a personalised programme based on needs assessment and subject to regular monitoring and updating.

Although there is presently some ideological inhibition hindering the adoption of such scaffolded programmes, an intervention of this nature – targeted exclusively at a select cohort of ‘high ability, high need’ students – would be likely to result in much more significant improvements against these indicators, and do so much more quickly than generic system-wide reform.

In ‘holding the Government’s feet to the fire’ over social mobility issues, perhaps the recently-established Social Mobility and Child Poverty Commission might see its way to making that case when it reports on Government progress in the Autumn.

.

Kew once more 5 by giftedphoenix


Drawing These Strands Together

So, as things stand at the end of Episode One:

  • There is a decent, if relatively narrow, report on the table which draws attention to longstanding transition and transfer problems and an outcomes-obsessed mentality at the top end of Key Stage 2, as well as a range of narrower issues associated with the effective delivery of Level 6 tests.
  • We impatiently await a consultation document on primary accountability that should provide some clarity over the future assessment of high-attaining learners within Key Stage 2, so enabling us to complete the bigger picture of National Curriculum and associated assessment reforms across Key Stages 1-4.
  • We also await a much-vaunted Ofsted survey report which – if it satisfies our high expectations – might provide the spur for real action at national, local and school levels, perhaps even inspiring the Sutton Trust to announce the outcomes of its 2012 call for proposals.
  • Then in September the third report (the second Investigation) will ideally be sufficiently strategic and influential to cause some important joining up to be undertaken across that part of the agenda focused on progression to higher education by high-attaining learners from disadvantaged backgrounds, potentially at the behest of the Social Mobility and Child Poverty Commission.

I am hopeful that this series of posts will support the process of distilling and synthesising these different elements to provide a composite picture of national strengths and weaknesses in gifted education throughout the continuum from upper Key Stage 2 to university entry. Some kind of audit, if you will.

But the question this raises is how to respond to the state of affairs that the ‘joining up’ process reveals.

As matters stand, at the end of this first post in the series, I have proffered unto the melting pot a cautiously provisional wishlist comprising three main items: a Manifesto that sets out some principles and arguments for a genuinely collaborative response; revised Quality Standards integrated within the accountability machinery; and a targeted intervention for ‘high ability, high need’ learners designed to eliminate the fragmentation that bedevils current efforts.

This menu may well grow and change as the ‘Summer of Love’ progresses, not least to reflect planned and unplanned discussion of the issues. I would be delighted if some of that discussion were to take place in the comments facility below.

I believe one of the Manifesto principles must be to pursue an optimal middle way that is neither top-down nor bottom-up but a ‘strategy of all the talents’. That is reflected in my own version. Your comments are ever welcome about that, too.

But that principle presupposes a national gifted education community with the capacity and wherewithal to build on strengths and tackle weaknesses in a strategic, collaborative, inclusive and universal fashion.

For, if the next stage of reform is once more to be school-led, it is abundantly clear from the evidence presented above that schools will need our support to bring about real and lasting improvements in gifted education practice, for the benefit of all English gifted learners.

I was once optimistic about the prospects, but now I’m not so sure. Perhaps the Summer of Love is a once-in-a-generation chance – maybe the last chance – to galvanise the putative community into a real community and so make that happen.

.

GP

May 2013