Ok, blogging again on this site after another break. I’ve been hoping to have the article below published somewhere, but haven’t been successful, so thought I’d just post it on here. Please read on…

The Government is handing privately-sponsored academy trusts funding which could approach £1 million per school to take over the running of struggling former local authority primary and secondary schools.

The Department for Education is making the cash available from this month to support the capital and running costs of local authority schools converting to sponsored academies.

Critics say the move appears to undermine the principle of a “level playing field” for school funding and “looks like a bribe” to sponsors to take on schools as ministers seek to boost the academy chain policy, although the money is paid to the trust set up by a sponsor to run a school or group of schools, rather than to the sponsor itself. The Department for Education says the cash, which appears to be being disclosed in detail in relation to individual schools for the first time, is going to institutions where standards badly need to be raised, and is actually less than has been made available in previous years.

A “note for sponsors on development funding and support for sponsored academies”, on the DfE’s website, spells out the funding to be made available to many newly-opened sponsored academies from April 2013 in order to “help sponsors achieve education transformation in their academy”.

It shows that, in addition to the resources given to conventional state schools, sponsored academies felt by the DfE to be in particular need of support will receive:

- A “pre-opening grant”, worth £120,000 for a primary or £200,000 for a secondary, helping sponsors to meet legal bills and the payment of “key staff”, such as the headteacher, before the school opens. The DfE says this money can be kept until after opening if unspent.

- Unspecified additional cash support for “staff restructuring costs”.

- On opening, £25,000-£50,000 per primary school to pay for extra “resources” costs, such as books and equipment, although the DfE places no stipulation on what the money is spent on. In secondaries, the figure is £150 per pupil, or £150,000 in a 1,000-pupil comprehensive.

- A payment of up to £135,000 over two years to primary schools which, when starting up as a sponsored academy, are recruiting below their full pupil capacity. A secondary school which opens at less than three-quarters capacity would receive an additional £342,000 over three years, a government document shows.

- A one-off payment of an additional £50,000 for primary schools, and £100,000 for secondaries, to support buildings refurbishment.

Adding up these figures, an academy trust set up by the sponsor of a primary school could receive up to £355,000 over two years, and the trust behind a 1,000-pupil secondary school £792,000 over a three-year period, in addition to conventional state school funding and not including the unquantified “staff restructuring costs”. The secondary figure could end up higher still than £792,000 if the school were operating at less than 70 per cent capacity.
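As a quick sanity check, those totals can be reproduced by tallying the component payments listed above. The 1,000-pupil school size and the £150-per-pupil resources figure are as stated in the DfE note; the decision to take the upper end of each range is mine:

```python
# Tally the maximum start-up funding available per school type, using
# the upper end of each range quoted in the DfE note described above.
primary = {
    "pre-opening grant": 120_000,
    "resources on opening": 50_000,      # top of the £25,000-£50,000 range
    "below-capacity payment": 135_000,   # paid over two years
    "buildings refurbishment": 50_000,
}
secondary = {
    "pre-opening grant": 200_000,
    "resources on opening": 150 * 1_000,  # £150 per pupil, 1,000-pupil school
    "below-capacity payment": 342_000,    # paid over three years
    "buildings refurbishment": 100_000,
}
print(sum(primary.values()))    # 355000
print(sum(secondary.values()))  # 792000
```

Neither total includes the unquantified “staff restructuring costs”.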

The DfE has two categories of sponsored academies: “full sponsored” institutions, replacing schools felt to be seriously underperforming, and “fast-track” for those deemed to need less support. Only the former get the full payments, with “fast-track” projects receiving up to £90,000.

Ministers have been facing increasing scrutiny over moves by DfE consultants, known as “brokers”, to push schools which have fared badly in Ofsted inspections towards academy sponsors. There has also been controversy over allegations that some have been offering cash incentives of up to £65,000 to schools to become sponsored academies. The above figures, however, dwarf this sum.

Peter Downes, a Liberal Democrat councillor from Cambridgeshire who is a former secondary headteacher and has been a persistent critic of academies, said: “This looks to me like a new attempt to bribe schools into going for academy status.”

One of the DfE documents says that “on opening, a start-up grant is paid to full sponsored academies in order to assist them raise standards and transform educational attainment”. Sceptics question whether schools other than sponsored academies are receiving such support.

Malcolm Trobe, deputy general secretary of the Association of School and College Leaders, said: “Schools will need some money to go through the academy conversion process, but this looks like sweeteners to sponsors to take on these schools.

“What we want is a level playing field in terms of the funding of schools, and this is quite clearly not creating a level playing field. It is significant extra money for some schools compared to others.”

The revelations will heighten scrutiny of academy funding within Michael Gove’s education department. Last November, the National Audit Office said the academies programme had added £1 billion in extra costs.

Under the academies system, schools either opt on their own to become an academy or are run as part of a chain of sponsored academies. In the past two years, the number of sponsored academies has more than doubled, to 633, with nearly one in eight of England’s secondary schools now a sponsored academy. An additional 246 sponsored academies are listed by the government as already in preparation to open by 2015.

Kevin Brennan, shadow schools minister, said: “These figures suggest that Michael Gove is more concerned about the number of academy conversions than using taxpayers’ money effectively to raise standards. He appears to be throwing more money at his pet projects whilst failing in his basic duty to provide enough school places.”

John Pugh, Liberal Democrat MP for Southport who led a Parliamentary debate on the “forced academy” policy last month, said: “Parliament passed the Academies Bill on two assumptions. Firstly that schools would make a free choice and secondly that any choice to stay or remain with the local authority would take place on a fair funding playing field. In fact bullying and sheer bribery is taking place on an industrial scale. Away from the scrutiny and challenge of parliament Michael Gove is pursuing his pet project with autocratic flair, unlimited resources and a blinkered perspective on outcomes. Rarely has a minister been so indulged or seemed so unaccountable.”

A Department for Education spokeswoman said: “The idea that these are bribes is complete nonsense.

“First, we are entirely transparent about this funding – that is why the details are published on our website for all to see.

“Secondly, this funding is used only to tackle years of failure in the worst primary and secondary schools, often in the most deprived parts of the country.

“We make absolutely no apologies for helping these schools with time-limited costs so that their pupils can finally get a good education – this money pays for better leaders and better teachers.”

The spokeswoman added that “sponsored academy start-up costs have been cut significantly in the last three years”, but did not provide details of how much schools have received in the past.

posted on April 12th, 2013

Wednesday, October 31st

A response from Ofqual to a Freedom of Information request, published last week, offers fresh insights into this year’s GCSE English grading controversy.

Followers of my regular blog on the NAHT’s website will need no reminding that I’ve been taking quite a close interest, having posted several lengthy pieces on this and the related issue of Ofqual’s “comparable outcomes” policy for controlling apparent grade inflation.

Ofqual itself is due to publish its final report on this year’s problems on Friday (November 2nd).

This latest set of correspondence has been released under FOI to blogger and tweeter Antony Carpen, following an earlier request by him which also generated a lengthy correspondence trail and which was covered in my last two blogs on the NAHT site.

This one focuses on correspondence between Ofqual and the Department for Education as the controversy was developing. And it is interesting, right from the start.

In the first set of emails, which date from August 17th – six days before national GCSE results would be announced at a press conference – an unnamed Ofqual official tells the DfE that:

“On GCSE English, there is a potential story because in order to make sure the overall subject grades are right/comparable with last year and across the boards, some of the Controlled Assessment units sat in the summer have higher grade boundaries than the units sat in January.”

The email goes on: “Policy colleagues have been talking to DfE at your place and are due to talk again early next week.”

“Controlled Assessment” units are tasks set by the boards for pupils to take in class, with the work marked by teachers but with the boards deciding later how many marks are needed for each grade.

For the AQA board, which has by far the highest number of pupil entries for English, the number of marks needed for a C grade was changed from 43 marks out of 80 for pupils submitting work in January to 45 in June. This has proved extremely controversial, since the tasks set for the pupils did not change over this period.

Other boards, however, also changed CA boundaries between January and June, according to data provided in this Association of School and College Leaders document, matched to data provided at the end of Mr Carpen’s FOI.

Edexcel changed the grade boundaries for two controlled assessment papers between January and June: one from 55 marks out of 96 to 65 and another from 60 to 64 marks out of 96, the ASCL document suggests.

A third board, OCR, changed the marks on five CA papers between January and June, in all cases, again, moving the boundary upwards, according to the ASCL document. The two other boards included in our regulatory system – the Welsh and Northern Irish boards – seem not to have allowed early entries for controlled assessment units. (For more on the data, see note below)

The significance of the quotation from Ofqual above, I think, is that it is the clearest statement I have yet seen that the boards’ moving of grade boundaries in this way – done under supervision from Ofqual and, in at least one case, under pressure from it – was driven very much by the need to get the overall pass rate “right” in the end.

This can be, and is, justified by Ofqual as necessary to combat “grade inflation”. But it does raise some problems, such as the clear risk that boards raise grade boundaries driven not, in reality, by the need to ensure fairness to all candidates taking different modules of a course – or the same module at different times – but by the need to produce overall headline statistics which are comparable to the previous year’s. This may seem unproblematic to some; for me, it deserves further debate and scrutiny.

So, if one set of candidates is advantaged and another disadvantaged by the setting of grade boundaries, but their overall effect is to produce a total number of grades similar to previous years’ – ie the results of the two groups cancel each other out – that may be deemed satisfactory under this policy. But is it really fair to each group of pupils?
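A deliberately simplified sketch may help here. The numbers below are invented purely for illustration and are not taken from any board’s data:

```python
# Hypothetical: two groups of 1,000 candidates take the same unit at
# different times. A generous January boundary lifts group A's C-or-above
# rate; a stricter June boundary depresses group B's. The national headline
# figure matches last year's, yet the two cohorts were treated differently.
group_a = {"candidates": 1000, "c_or_above": 700}  # January, generous boundary
group_b = {"candidates": 1000, "c_or_above": 600}  # June, stricter boundary

overall = (group_a["c_or_above"] + group_b["c_or_above"]) / (
    group_a["candidates"] + group_b["candidates"]
)
print(overall)  # 0.65 -- "comparable outcomes" achieved overall
print(group_a["c_or_above"] / group_a["candidates"])  # 0.7
print(group_b["c_or_above"] / group_b["candidates"])  # 0.6 -- but cohorts differ
```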

In an earlier NAHT blog, I referred to comments from a senior exam board official in a paper dated July 30th, unveiled through Mr Carpen’s previous FOI request.

The official said: “If asked by [schools and colleges] and the press to explain the rise in controlled assessment [grade] boundaries, the rationale has to be based on [examiners’] qualitative judgements of work seen, not on a statistical fix.”

That quotation above suggests to me that we may be more in the territory of “statistical fix” than qualitative judgement.

There is plenty more in this latest set of FOI correspondence, as might be expected given that it runs to 148 pages. I don’t have time to blog any more now, though, except to note that it does show Ofqual and the DfE co-ordinating their communication very closely. For example, on 22nd August, a Department for Education official emails Ofqual about a story in the Independent, saying “it would be really helpful to have your line on this issue so that we can craft our own around that”.

Similarly, in an email the same day from Ofqual to the DfE, Ofqual indicates it is trying to match its message to that put out by the Joint Council for Qualifications at the GCSE press conference, where results would be announced the following day.

The email from Ofqual says: “We have refined the line on GCSE pass rates to make sure it is a bit closer to the message JCQ will be issuing at the press conference.”

Note: Candidate entry data provided at the end of this FOI appear to indicate, if I have understood them right, that while changes in controlled assessment grade boundaries were in some cases quite dramatic, relatively few pupils will have benefited from what Ofqual says were “generous” pre-June grade boundaries by submitting controlled assessments before June.

The data show that all the grade boundary changes to controlled assessment discussed above occurred in exam specifications where low proportions of candidates submitted in January 2012 or previously. The only instance in which large numbers of pupils submitted CA entries early was an Edexcel course for which the controlled assessment grade boundaries did not change between January and June.

The significance of this, I think, is that relatively generous grade boundaries for CA in January 2012 are unlikely, in themselves, to have pushed up overall pass rates by much.
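That weighting point can be made concrete with a back-of-the-envelope calculation. The 5 per cent entry share and the six-point boundary effect below are hypothetical figures of my own, chosen only to indicate the scale involved:

```python
# If only a small share of candidates entered controlled assessment early,
# even a markedly "generous" January boundary shifts the national pass
# rate very little. All numbers here are invented for illustration.
early_entry_share = 0.05   # 5% of candidates submitted CA in January
boost_for_early = 0.06     # generous boundary lifts their pass rate by 6 points

effect_on_national_rate = early_entry_share * boost_for_early
print(round(effect_on_national_rate * 100, 1))  # 0.3 (percentage points)
```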

However, there were more conventional externally-assessed papers – including a much-discussed foundation paper set by AQA – where there were substantial changes in the grade boundaries between January and June and where substantial numbers of candidates were entered before June.

posted on October 31st, 2012

Friday, October 12th, 2012

I’ve just caught up with a very interesting Radio 4 documentary on “free schools”, which aired last night.

The piece prompted quite a few thoughts, but I was particularly taken by comments by, I think, Jeremy Rowe, the head of Sir John Leman school in Beccles, Suffolk, about the possible long-term impact of a new school – a “free school” – which opened in the town last month, bringing the number of secondaries there to two.

Mr Rowe’s concern was that although Beccles Free School has so far struggled for pupil numbers, in time it might take pupils away from Sir John Leman, and that this might lead to serious disadvantages: specifically, duplication of provision and, if I have remembered this correctly, the fact that Sir John Leman might no longer be able to operate, for example, courses in minority-interest subjects.

This could lead to a situation where “there is less of the added value [his school currently provides] and you end up with two schools offering the same restricted diet”.

In other words, there were benefits to having pupils concentrated in one institution rather than split between two.

The counter-argument, put by supporters of free schools, is that competition between institutions, to use the standard and too-often-repeated cliché, forces each to “up their game”, or improve.

The to-and-fro of this is fairly routine in this debate. But what occurred to me is that the competition argument being used by free schools supporters seems to run directly counter to the argument used in much of the debate around healthcare.

There, a prevailing view – though often challenged, it has to be said, at a local level – is that sometimes hospitals need to close in order to focus specialist provision in single institutions. This is seen to be economically more efficient and also to promote the concept of centres of excellence. It also avoids duplication of services, I think the argument runs.

As I said, this argument seems to prevail in the health sector, but it is the opposite of the prevailing view among current policy-makers in education.

If many people do, understandably, want to defend their local hospital from closure, I’m not aware of any argument that what we need to do, to improve health standards, is to open lots of new hospitals close to each other in order to force existing providers to “up their game”.

Someone might say I’m wrong there, and that this has been advocated.

But I guess the prevailing view would be that – to put it mildly – this would not be the best use of resources. You can argue, I guess, that schools are cheaper institutions than hospitals, but questions about the efficiency of the comparable project in the education sector will not go away, I feel.

The above represent just some very quick thoughts, but I would be fascinated to hear any responses.

posted on October 12th, 2012

Sunday, September 2nd, 2012

OK, hands up: I haven’t been updating this blog in the last few months. This is for a couple of reasons: the demands of childcare and work pressure during the time I am not looking after our daughter.

Anyway, enough excuses. I still do hope to be posting on here from time to time in the coming months. And, given the importance of the current controversy over GCSE English results, I wanted to post a blog offering some observations not already put down elsewhere. What follows below is a mixture of some facts I’ve come across over recent days about Ofqual and the regulatory system and some very brief observations about some implications of the regulator’s current actions on grade inflation and how they interact with school accountability.

The below was actually written last week, before the publication of Ofqual’s report following its short investigation into the issue. It could almost be read as a postscript to my blog, published on Friday morning, on the NAHT’s website.

So much for the preamble. Some observations:

- First, we know, from Ofqual board meeting papers, that the regulator raised the potential of problems with the new GCSEs being launched in 2011 and 2012 in December 2010, when a minute from a board meeting paper (at point 10 of paper 13 here) says: “Achieving comparable outcomes [ie holding grade proportions roughly constant overall] while at the same time ensuring consistent unit level standards will be challenging for GCSE”. The TES reported on Friday, of course, that fears were also raised by Isabel Nisbet, Ofqual’s former chief executive, in 2009.

- Second, we also know that, as of earlier this summer, Ofqual did not seem to think, officially at least, that a big problem was brewing this year. A minute of an Ofqual board meeting on Monday, June 25th this year (see here and scroll down to near the bottom to a document slightly confusingly titled “24-12: minutes of meeting held on 16th May”) says: “The board received a short update on the progress of the 2012 summer [exam] series which had gone well and more smoothly than previous years”.

Moving more into comment now:

- Third, the whole business of Ofqual seeking to curb grade inflation raises huge questions about the pressure being placed on many individual schools to improve their results or risk closure. If results for the country as a whole effectively cannot rise (as discussed in another recent blog for the NAHT) one has to ask what the purpose of this policy now is.

- Fourth, the technicalities of the way Ofqual is seeking to curb grade inflation – with pupils’ Key Stage 2 results now central to the judgement as to whether the regulator should, in any one year, move away from what is effectively coming close to a norm-referencing system – mean that a collective effort by secondary schools across the country to raise grades now seems unlikely to find any reward in overall national average results.

- Fifth, returning to try to look in detail at exam rules for the first time in a while, I have been staggered by the complexity around new modular specifications such as the GCSE English/English language/English literature suite. It seems to me that the boards have not really helped their cause in the current controversy over grading by offering, with approval from the regulator at the time they were developed, GCSEs which now seem to me, as an outsider to this system, incredibly complicated. Perhaps this is in response to demand from schools for assessments over which they hope to gain more control. I have no space to go into detail on this here; it may be the subject of a future blog.

- Sixth, these politicians do have to be watched carefully, don’t they? Michael Gove has said he wants to break with the recent past and not be a minister who seeks to claim credit for incremental annual rises in higher grade pass rates. These he has likened to “tractor production figures”. He is likely to have got credit for this stance with political commentators and many members of the public, I guess. Yet Mr Gove has also repeatedly been happy to highlight the results of some of his favoured academies, achieved under exactly the system which, at a national level, he currently criticises. He thus wants to use these particular improving results, but only in certain schools, to claim credit for the academies policy. Some will no doubt shrug their shoulders and say this is to be expected from a minister, but for me, the dishonesty and duplicity is staggering.

Finally, and on related matters… I wanted to look into the issue of whose responsibility this saga is. In his interview 10 days ago with the BBC, Mr Gove said both that the setting of grade boundaries was a matter for the boards and that the boards, in conversation with Ofqual, were ensuring that new exams – sat in English and other subjects – were comparable in standard with those in previous years. This was billed as Mr Gove denying political interference in GCSEs.

He said: “The decision about where to set grade boundaries is made by exam boards.

“If you take English, then yes the number of As and A*s has fallen but the number of Bs has increased. The number of Cs has fallen and the number of Ds has increased.

“And that is the result of the independent judgements made by exam boards entirely free from any political pressure.”

Reading this, and listening to the interview, I couldn’t resist a feeling of… well, “scepticism” would be a polite way of putting it. While I don’t believe the Secretary of State played any direct role in ordering grade boundary changes, I think Mr Gove was being disingenuous in suggesting things are quite as simple as he implied.

First, it is not the case that exam boards make these decisions entirely “independently”. Although he did mention that they ensure the new exams are comparable in standard to previous years after “conversations” with Ofqual, in fact the influence of the regulator appears to be quite substantial.

We know, for example, that Ofqual wrote to the boards in late June to remind them about the need to ensure “comparable outcomes” in new GCSE qualifications including English. I’ve also seen evidence, from a paper presented by an Ofqual director to the Ofqual board last month, that Ofqual held teleconferences with the boards every week as the publication of national results neared, to discuss “standards issues”.

The paper says “provisional award outcome data” – ie provisional national results – would be provided to Ofqual and its counterparts in Wales and Northern Ireland in late July or early August, ie well before their publication towards the end of August.

The paper adds: “Through this procedure, which includes weekly teleconferences with all of the exam boards, we will have an early sight of awards [national grade statistics] and be able to identify and challenge any standards issues.

“We will meet with the exam boards to review these data and resolve any issues on Tuesday 31st July for A-level and Monday 6th August for GCSE.”

It is clear, then, that exam boards have not been taking these decisions entirely “independently”, and that the notion of “conversations” the boards may have with Ofqual seems to underplay the detailed nature of these interactions. Ofqual itself seems to have had, to judge from the above, an opportunity to learn of any problems with GCSE English several weeks ago, if this were being flagged up by the boards.

In terms of political influence, the question then turns to what influence the Government, and the Secretary of State, have over Ofqual. While, again, there is no evidence of any direct ministerial involvement in either exam boards’ decisions over grade boundaries or over individual moves by Ofqual to scrutinise and “challenge” any such decisions, it would be very misleading to view the regulator as operating outside of general influence by ministers.

I should say, first, that Ofqual is, in one sense, more independent than its predecessor body, the Qualifications and Curriculum Authority, in that it is not given a formal annual remit by the Department for Education, as happened for the QCA throughout most of the Labour government. Also, Ofqual formally reports to Parliament, rather than to Mr Gove.

On the other hand, Ofqual is largely funded by the Department for Education. Its chair and the rest of its board were appointed by the Secretary of State, subject in the former’s case to approval from the Queen and, in the case of the other board members, in consultation with the chair. The chair, Amanda Spielman, was appointed by Mr Gove as Secretary of State; half of the other 12 board members were appointed by his predecessor, Ed Balls, and the other half by Mr Gove. Although the chief executive, Glenys Stacey, was not directly appointed by Mr Gove, she went through an application process in which he clearly could have serious influence.

A memo by the Department for Education to the House of Commons Education Select Committee (published here) makes this very clear. It says: “The position of Ofqual chief executive is an Ofqual appointment, on conditions of service determined by Ofqual, although both the approval and conditions are subject to approval by the Secretary of State.”

Although the minister cannot interview the candidates or express a preference for any of them, the memo continues, “extensive discussions took place among the parties involved, including the Civil Service Commissioner and the Secretary of State to decide the requirements [in the prospective candidates for chief executive] seen as important for the role.”

Among these were a “robust stance on standards issues….” although, to be scrupulously fair, this was quoted with reference to comparing our qualifications against those available in other countries, rather than in relation to guarding against drops in standards over time. The first line of the job advert by which Ms Stacey was recruited, by the way, begins “Could you guarantee the rigour of examinations and qualifications…?”

The memo lists the final requirement of the role as the ability to “maintain Ofqual’s independence” though adding, interestingly – and, perhaps, confusingly – that this should be done “in partnership with ministers”.

Jon Coles, then a very senior civil servant working for Mr Gove at the Department for Education, was one of a four-person selection panel for the chief executive post which appointed Ms Stacey. At one stage, the memo says, the chair of the panel sought “additional input” from Mr Gove and representatives of other government departments. After the selection panel had reached its decision, it made Mr Gove aware of this and he met Glenys Stacey and then approved the decision.

In the same document, a letter sent on January 5th this year from Mr Gove to the select committee chairman, Graham Stuart, is published. It begins: “As you know, the Education Act 2011 [to state the obvious, this is a government document] will strengthen Ofqual in enforcing rigorous standards and ensuring that our exams system stands comparison to the best in the world.”

Ofqual’s latest annual accounts show that the Secretary of State determines the pay, allowances and expenses of board members; that the chief executive’s remuneration is determined by the Secretary of State; and that the Department for Education determined that Ofqual should not be awarding bonuses to staff members in 2011-12.

All of this shows, then, that while Mr Gove may not have intervened directly in individual grade boundary decisions, his potential or likely influence over the overall direction of travel for standards-setting should be clear. Would it be possible, for example, for Ms Stacey or anyone in her position to have put a greater emphasis on treating pupils equally within a particular set of examinations – the key issue in the current row – even if this on occasion conflicted with attempts to tackle grade inflation?

It has to be said that it is not only the Department for Education’s potential influence over the regulator that seems to limit that room for manoeuvre: the legislation originally passed by Labour which established Ofqual and set its priorities in law also set out its objective as holding “standards” constant over time.

But the DfE’s influence outlined above must also be borne in mind. Grade boundary decisions, then, are clearly not taken by the boards independently. And, from what I have learned over the last few days reading these background documents, a description of Ofqual as simply acting “independently” of ministers is very simplistic indeed, if not downright misleading.

posted on September 2nd, 2012

…what, exactly, is the problem?

 Monday, October 17th, 2011

Education policy-making is in a very strange place at present, with politicisation very much to the fore and reform proposals, though often successful in winning headlines for ministers, sometimes having a superficial quality. This means they often do not bear up well against detailed analysis.

The latest examples came in a speech last week by Michael Gove to Ofqual, the exams regulator. While I have no problem with Mr Gove looking closely at the exam system and proposing changes, this new foray was, to this observer, remarkably unfocused.

I was left unclear not only as to the detail of what exactly Mr Gove was proposing as suggested improvements on the current system, but also why he thinks they are needed.

The speech (which you can read here) captured headlines with some thoughts on possible ways forward for England’s A-level system. Or at least that was the way it was reported, for the speech talks about a range of suggested problems with both GCSEs and A-levels, and the reported proposals seem not to be labelled within the speech as relating specifically to the latter.

But anyway, let’s concentrate on A-levels, for simplicity’s sake, here. Mr Gove appears to want to respond to a string of concerns about the way the current system works – including, he said, complaints from universities that they struggled to choose between the best-performing 18-year-olds – by making two suggestions for what would be radical changes to the existing structure.

First, he floated the idea that “only a fixed percentage of candidates” should get an A* in each subject. Second, he suggested that, in future, pupils and – presumably, although he was not explicit about this – universities and employers would be told not just the individual’s grade in each A-level, but their rank: how they fared in the national order of marks gained by all their peers.

OK, I need to deal with each of these ideas in turn.

To take the first one, it is not clear to me from the speech exactly what problem this suggestion is meant to be addressing. The idea follows a section in which Mr Gove cites concerns from employers and universities about the levels of “knowledge” their new recruits are arriving with, with the Education Secretary then discussing grade inflation within the GCSE and A-level system.

He also says: “Over the last 15 years, the proportion of pupils achieving at least one A at A-level has risen by approximately 11 percentage points. In 2010, more than 34,000 candidates achieved three As at A-level or equivalent, which allow them to progress to one of the best universities. That’s enough to fill half the places within the Russell Group.

“Universities are increasingly asking: ‘how can they choose between so many candidates who appear to be identically qualified?’”

A cap on the number of A*s awarded would appear, then, to be an attempt to deal with this problem of grade inflation, and to ensure that employers and universities are not faced with too many apparently identically qualified top achievers to choose between. There would be no caps on other grades, however, because Mr Gove said he did not “want to go back to the situation where exams all were graded on the basis of norm referencing”, or fixed proportions of grades awarded for all exams at all levels.

But those figures he quoted, again, relate to the number of A grades. The obvious question to address, and which I certainly would expect to be addressed if it were, say, an academic looking at this issue rather than a politician, would be whether the introduction of the A* last year had helped to ease the often-claimed difficulties of university admissions tutors in particular, in choosing between high-flyers with strings of top marks. What has been the A*’s effect? This should be a question at least to be approached empirically. But no evidence was offered here.

And is Mr Gove unhappy that too many A*s are already being handed out, or that at some point in the future this might be the case? For reference, 8.2 per cent of all A-level candidates were awarded the top mark this year, compared to 8.1 per cent the previous year. It would have been interesting, then, to have known whether he was concerned that this picture might change, with the number of A*s rising dramatically in the future as has happened in relation to the A grade: 27 per cent achieved at least an A this year.

Answers to these kinds of questions are extremely important when you come to the technical detail of how a capped A* system might work. If, as the speech would seem to imply, Mr Gove really is suggesting that, even as things stand, employers and universities are struggling to choose between lots of applicants with top grades, then the logic of what he is saying is that the proportion of A*s should be capped at levels lower than the present 8.2 per cent, so that only the very highest achievers are identified in this way.

This, though, would create a problem for the exam boards, which work on the principle that examining should be fair to students from one year to the next. Suppose a change were made to reduce the number of A*s, such that student x, in the year before the change, gets an A*, while student y, for the same level of performance the next year, gets only an A. Student y would be entitled to feel aggrieved on coming up against student x in the chase for a job or a university place: student y will appear less well qualified, not through any fault of their own, but because of the change in the exams system.

The boards could change the standard in this way, but for fairness they’d probably have to make it very clear that grading decisions from two successive years were not strictly comparable; they might even have to go so far as to change the name of the exams, in my view, to make this clear to universities and employers.
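As an aside on the mechanics: a norm-referenced cap simply sets the grade boundary at whatever mark the quota implies for that year's cohort, so the boundary moves with each cohort's performance. A minimal sketch of the idea, using the 8.2 per cent figure quoted above but with a wholly invented mark distribution:

```python
# Illustrative sketch only: how a norm-referenced cap on A* grades might
# work. The 8.2% quota is the figure from the post; the mark distribution
# and boundary logic are invented purely for illustration.
import random

random.seed(1)
marks = [random.gauss(60, 15) for _ in range(100_000)]  # hypothetical exam marks

A_STAR_QUOTA = 0.082  # top 8.2% of candidates get the A*, whatever the marks

# Under norm referencing the boundary is whatever mark the quota implies,
# so it changes from year to year as the cohort's performance changes.
boundary = sorted(marks, reverse=True)[int(len(marks) * A_STAR_QUOTA)]
a_stars = sum(1 for m in marks if m > boundary)

print(f"A* boundary this year: {boundary:.1f} marks")
print(f"A* awarded: {a_stars} of {len(marks)} ({a_stars / len(marks):.1%})")
```

The fairness problem discussed above falls straight out of this: a candidate scoring the same mark in a stronger cohort could miss the boundary that a weaker cohort's candidate cleared.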

If, on the other hand, Mr Gove means that the A* is working OK at the moment as a selection device, but that the numbers achieving it may need to be capped at some point in future (ie at the current or possibly a higher rate of pupils gaining A*), it is hard to know where this leaves his point about universities struggling – presumably as things stand, or why mention it in the speech? – to choose between candidates under the current system.

There would be other technical issues to look at with regard to a capped A* system, including whether it would apply as a uniform percentage across all subjects, which would seem the most obvious, or if it would be different for different subjects.

If it were the former, ie a uniform rate for every subject, this would imply a very large shift from the current system, which last year saw the proportion of candidates awarded A*s varying from 27.5 per cent in further maths to 1.1 per cent in media studies. There would, I think, be potential knock-on effects – and perhaps, to Mr Gove, not very desirable ones – for candidate numbers in a subject such as further maths if, say, around 8 per cent in each subject were guaranteed A*s.

The latter version of this scheme – a varying cap within different subjects – might seem more sensible, but might again raise questions from students as to why certain subjects were guaranteed to have a higher proportion of A*s, whatever the quality of the students that year and how they did in the particular exam.

The larger point is this, though: ideas were being floated in this speech which could potentially have a profound impact on hundreds of thousands of students each year, but without any meaningful analysis of the detailed nature of the current problem they were seeking to address. Mr Gove said in his speech that this was just “one question for debate, and I don’t mind if, in the end, people shoot me down”, but this belies the fact that, because he is the most influential actor within education in England, even his hazily-sketched ideas have the potential for large influence. It is surprising, to say the least, with all the technical expertise available in our system (of which more later), that this proposal begs so many questions not just as to how it would work, but as to, actually, what its goal would be: why, in detail, do we need a cap?

However, the speech, I am afraid, got more surreal after this. Mr Gove then cited a visit he had made to one school, Burlington Danes Academy, during which he had been told about its system of ranking its students based on the exams they sit in every subject, every half term. This, he said, seemed to have many benefits when he asked the headteacher about it. The gains included parents knowing exactly where their son stood in the class (because before this system was introduced, Mr Gove said, teachers had simply said “he’s a lovely boy”, ie by implication provided no information to parents at all on their child’s progress at school). The claimed benefits also included pupils being able to compare their performance against that of their contemporaries, and even looking at the results of teachers, deciding they wanted to be in the classes of those who added the most value, and demanding that those who were not getting them up the rankings were “moved on”.

On the basis of what this school’s head said was going on in this one school in its in-school exams, then, he seemed to be proposing a new system in which pupils across the entire country were ranked at A-level, although, again, whether it was to be both GCSE and A-level or just the latter is not specified in the printed speech.

It is kind of hard to know where to begin with that anecdote, and I am not going to analyse the detail of what Mr Gove said about the ranking system in Burlington Danes, except to say that I remember, as a pupil, on occasion getting a good idea of where I was in each class and responding, when I was doing badly, not by seeing it as a reflection on the teacher but on the quality of my own work, but maybe that was just me. However, it should be enough to note that this is a bizarre way of going about policy, even if Mr Gove was self-aware enough to acknowledge the dangers of reading too much into anecdotes, or as he put it “data is not the plural of anecdote”.

As it happens, the idea of giving pupils a national ranking as well as a grade is not so very odd. I think it already goes on in Australia, and the fact that it is a serious proposition may have been acknowledged, implicitly, by Mr Gove when he said “some boards” here are already “debating the advisability of this”. But, again, the question has to be asked: if this is the solution, what exactly is the problem?

If we must take the Burlington Danes anecdote seriously, Mr Gove would seem to be implying that competition is a good thing and that, if pupils know they are going to be given an exact ranking based on their performance in the exam hall, this is going to spur them on to even greater efforts. I’m not convinced by that, though, in relation to the national A-level system: good grades would seem to be incentive enough for most students. Those at the top, in the A* band, who in some hypothetical world might want to compete more in the chase for a better ranking, probably do not need to be made more anxious about exam success. This is before one gets to possible technical problems, such as the degree of uncertainty around individual rankings – marking can never be reliable enough to produce with certainty a ranking list – and whether the Uniform Mark Scheme (UMS) of A-level – which I’m guessing would have to be the basis for the rankings – will produce patterns such as bunching at the top of the mark distribution, with many students awarded 100 per cent for individual papers under UMS.
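To see why bunching matters for a national ranking, here is a small sketch with entirely made-up UMS totals; `rank_candidates` is my own illustrative helper, implementing standard competition ranking (tied candidates share the best rank):

```python
# A hedged sketch of the bunching worry: if many candidates sit on the
# same (capped) UMS total, a "national ranking" cannot separate them.
# The UMS totals below are invented; real distributions will differ.
from collections import Counter

# Hypothetical UMS totals out of 600, with a pile-up at the maximum.
ums_totals = [600] * 900 + [598] * 400 + [595] * 300 + [580] * 150

def rank_candidates(scores):
    """Standard competition ranking: tied candidates share the best rank."""
    counts = Counter(scores)
    ranks, position = {}, 1
    for s in sorted(counts, reverse=True):
        ranks[s] = position
        position += counts[s]
    return ranks

ranks = rank_candidates(ums_totals)
print(ranks)  # {600: 1, 598: 901, 595: 1301, 580: 1601}
```

In this invented cohort, 900 candidates share rank 1, so at the very top – exactly where selectors say they need more discrimination – the ranking adds nothing that the grade did not already tell them.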

The larger point is that, as I understand it, universities can already get access not only to a student’s overall grade, but to their grade in individual papers and also, I think, to the number of UMS marks scored. (There is a story about this in relation to Oxford and Cambridge in this morning’s Times.) I think they can also find out if an applicant’s grades were achieved in the first sitting of individual papers, or through re-sits. At the top end, the A* identifies not only the highest achieving students, but insists that they must do well in the harder A2 papers typically taken at the end of the sixth form, rather than stockpiling marks in the easier AS papers, designed to be taken a year earlier.

In other words, the system as it is already provides plenty of information for universities.

It could be argued that ranking might help employers, who might not have access to all of the above data. But even for them, a string of grades at GCSE and A-level already provides a great deal of information.

Providing yet more data certainly fits with a government agenda which wants to get seemingly ever more statistics out into the public domain, almost as an end in itself (England’s system already seems to be a near world-leader in terms of data production). The speech also clearly won some publicity which Mr Gove was probably happy with, in the form of headlines suggesting he was shaking up a system which needed reform. It may well have been a useful distraction, for the government, from claims that the coalition’s move to “benchmark” England’s A-level system against other exams from around the world (see TES story here) may not yield the easy policy wins that may once have been envisaged.

I don’t think all of Mr Gove’s moves on exams have been wrong-headed: something had to be done, in particular, on early entry for GCSEs following the Advisory Committee on Mathematical Education’s scathing report suggesting results pressures on schools were having some very unattractive outcomes.

Yet, this latest speech, for all the confidence in Mr Gove’s phrasing, lacked the serious, evidence-based analysis – or, really, meaningful analysis of any kind – which one would expect in an area this technical. These suggestions, then, have a gimmicky feel to them, though I would guess that Ofqual and the boards are now having to take them very seriously.

As an almost-final point within this blog, I would also highlight a section in the speech headlined “the role of Ofqual”, in which Mr Gove said that “with the leadership that Ofqual has, there is a new requirement for Ofqual to do more.”

He continued that the watchdog should be asking itself the question as to how it was performing, and how our exams compared to those elsewhere, so that “Ofqual moves from being an organisation that perhaps in the past provided reassurance, to one that consistently provides challenge to politicians, to our education system overall and to exam boards and awarding bodies”.

I highlight this because Ofqual was originally set up by Mr Gove’s predecessor, Ed Balls, with at least the stated intention that it should function along the lines of the Bank of England, ie independently of ministers. Serious concerns would be raised if ministers were ever seen to suggest policy priorities for the Bank of England, and even to say that they agreed with an already-taken Bank of England decision would be frowned upon. How independent is Ofqual, then?

-I should, finally, say that the education policy field is the poorer for the loss of the blog and twitter feed provided by Chris Wheadon, head of scientific research and development at the AQA board.

Chris’s twitter feed and blog have been deleted, apparently following complaints from the schools minister, Nick Gibb. (See Telegraph news story here).

Throughout my years covering the exams system, I have benefited from insights into its more technical aspects from those working for the boards, and their regulators. For all the criticisms that many people, including myself, make of aspects of England’s system, these conversations have underscored to me the quality of technical expertise and understanding on which this hugely complex structure now rests.

Chris’s blog in particular was very much in that tradition. I think it served both a public cause, in helping people to understand some of the statistical issues behind examining and that it was a reminder of the research expertise which is present both at AQA and within other boards. I also found Chris’s twitter feed a useful source of information.

Technical expertise of this kind is vital if any reform of our exams system is to be achieved successfully, a view that much of the blog above hopefully illustrates. It is a great shame, then, that this contribution to public debate is now no longer available.

posted on October 17th, 2011

Friday, October 7th

Exam boards are facing fines from the Government’s qualifications regulator after a string of errors in this summer’s GCSE and A-levels.

Ministers are to propose an immediate change to the law to allow Ofqual to impose a financial penalty – capped at a certain proportion of an exam board’s turnover – on boards that make mistakes.

But the move was questioned by a head teachers’ leader, who said any fines would simply be passed on by the boards to schools, adding to already large exams bills. The boards themselves believe the move, to be introduced in an amendment to the education bill currently in the House of Lords, pre-judges an inquiry by Ofqual into this year’s mistakes, which is due to report by the end of the year.

The way the move has been handled – with boards given only two weeks to respond to the proposed legal change when told about the plans at the end of last month by Nick Gibb, the schools minister – has also annoyed awarding bodies, who are concerned that it will not be given time for proper legislative scrutiny.

Ofqual launched its investigation in July, after this summer’s GCSE and A-level results season featured at least 11 mistakes, affecting tens of thousands of pupils.

Errors ranged from a printing mistake by the AQA board, leading to some schools receiving GCSE maths papers, taken by 32,000 pupils, which included questions from a previous version of the exam, to an OCR maths AS level paper with 6,790 candidates which featured an impossible question worth 11 per cent of marks on the paper.

In total four boards, serving schools in England and Northern Ireland, have apologised for errors. Ofqual already has power to take strong sanctions against them, up to and including “withdrawal of recognition”, which effectively bans a board from setting exams.

However, two weeks ago (September 29th), Nick Gibb, schools minister, wrote to the boards to tell them that Ofqual’s current powers “inhibit swift action and do not serve as an adequate deterrent to problems such as we saw this summer”. He said the Government would change the law to give Ofqual the power to fine.

Last week, in a letter to the Conservative peer Lord Lingfield, the Government said that it would bring forward amendments at the next stage of the education bill, which begins on October 18th, to “give Ofqual the new power to fine”.

The Government believes that the watchdog’s current power to de-recognise a board is such a “nuclear” option – with potential to cause major disruption for pupils and schools – that Ofqual needs additional sanctions.

Brian Lightman, general secretary of the Association of School and College Leaders, said: “Ofqual’s review is not due to publish until December, and it seems strange to pre-empt the findings in this way.

“A fine on awarding bodies will simply turn into a fine on schools and colleges, since they pay for all the costs of examinations through exam fees. Institutions are already spending large sums on exam fees, and any further burden would be a perverse consequence. It would be completely counter-productive.”

An exam board source said: “We are very unhappy about the way in which this has been carried out. There is a question over whether the Government wants to get this right, through proper consultation, or whether this is just a massive rush to cobble something together to go into an education bill which is already nine tenths of its way through Parliament.”

posted on October 7th, 2011

Friday, 22nd July, 2011

The contradiction is, to this observer, breathtaking.

Last week, the Government said this: “Too many of our public services are still run according to the maxim ‘the man in Whitehall really does know best’…The idea behind this view of the world – that a small group of Whitehall ministers and officials have a monopoly on wisdom – has propagated a lowest common denominator approach to public services…”

“People should be in the driving seat, not politicians and bureaucrats,” said the Government, in its “open public services” white paper.

On Wednesday, it announced new decisions on what is to count in school league tables which clearly embody a view that, yes indeed, ‘the man in Whitehall really does know best’: certain qualifications are to be seen as valuable – no matter what pupils, teachers and parents think of them – and certain others are not.

These ‘others’ will continue to be funded by the Government, so that state schools will receive cash to offer them. But the results they generate are not to be published at the school level, because to do so would be to encourage schools to offer them. And the Government wouldn’t want to do that.

Confused enough yet? Well, I must admit that the latest developments on league tables have even this perhaps obsessive chronicler of their many twists and turns scratching my head.

Now, come back to that public services white paper. I am very sceptical about this document, having blogged about it here. However, it is useful in one sense: as a reference point for a world-view being put forward by the coalition which, I thought, at least had the benefit of clarity.

An idea which is central to the paper, and much other policy which has emerged from Government in the past year, is this concept of “transparency”. The argument runs as follows.

Whitehall collects huge amounts of data, across all public services. Ministers want to release as much of it as possible.

This would have two benefits, the theory went, I thought at least until Wednesday.

First, “transparency” is a good in itself. The public have a right to know as much about the public services they fund as is possible to provide, so releasing more and more stats on the different qualities of institutions must be a good thing.

Second, one of the key aims is to promote choice and competition. In education, by providing more and more data, the idea is that people get a more and more detailed idea of the quality of each school. In doing so, they get the chance to make more effective choices. And, the implication is, this forces schools to have a more and more tightly-defined regard for what the “consumer” – the parent or pupil – wants, and so, it is argued, the quality of education provided must rise.

An intriguing third twist on this argument is that, in releasing huge amounts of data in different categories, the Government effectively democratises the use of these statistics for accountability purposes. In the old days, it is claimed, schools were judged by just one or two results formulae, laid down to very tight specifications by civil servants and ministers, meaning that they worried most about performing to goals which had been set for them not by the public, but by bureaucrats.

Now, with schools being able to be judged in any number of ways with the user of the service choosing what matters most to them, the entire process has been devolved, with accountability resting where it should: between institution and user, rather than between institution and policy-makers.

Much of this, I think, is actually very contentious and I hope I have questioned much of the above at some point or another. But it does at least have the virtue of being reasonably internally consistent; indeed, some would say that it is too simple, and too ideological. But, as I say, it is a view.

So, the Department for Education press release yesterday began: “The Department for Education today announced that only the highest quality qualifications will be included in new, transparent school league tables.”

GCSEs and iGCSEs would be included, but other qualifications would have to pass some kind of quality check in order to be released for publication, even though these latter qualifications would continue to be taken in schools and colleges, and funded by the state.

Um, so that would seem to violate the first principle that I thought the coalition’s reforms in this area were based on: complete transparency. If the government had data on something going on within a school, I thought the idea was that it would release it to the public.

Yet here we have a government which says it is committed to transparency seemingly proposing – mind-blowingly, perhaps, given what I thought its philosophy was – that pupils take a set of qualifications whose results will then be… kept secret.

Nick Gibb, the schools minister, is quoted as saying:  “Parents want more information so they can judge schools’ performance. The changes we have made mean that parents will have a complete picture of their local schools so they can choose the right school for their child.”

Eh? No they won’t have a complete picture. It’s only “complete” if you believe that the non-GCSE courses which are no longer featuring in the rankings, but which pupils will continue to take, are not any part at all of what counts in a school. That’s a value judgement made by a minister, rather than one coming from decisions at the school or family level. (I also wonder exactly what evidence there is for the at-face-value plausible assertion that “parents want more information so they can judge schools’ performance”, but that’s another matter…)

And of course, just as strikingly, this move violates the third seeming principle, that publishing more data allows the public, rather than the state, to judge what it values within what an institution provides. Yet here, the state is laying down exactly which qualifications are to be seen as high quality, and which are not. It is the officials and the minister – Nick Gibb is the one quoted in this release – who are acting as if they have the “monopoly of wisdom” here. For if pupils, advised by their parents, choose to work for a qualification which does not feature in the league tables, they demonstrate that they believe it has some value. Mr Gibb and his advisers appear to be keen on telling them that they are wrong. It is a very un-free-market approach, and very un-Tory.

So, as I say, my head is spinning with all of this. I can’t quite understand why a policy has come about which is so at odds with what I thought was the over-arching philosophy. However, I thought I would venture a couple of possible reasons.

The first is reasonably simple: the juxtaposition of this policy and the transparency/democratisation of accountability philosophy might not make much sense, but both of them potentially play well with the media, so, in the policy-maker’s mind, why not go for it? The “transparency” argument above will be accepted by many people, while the belief that this will provide headlines suggesting that ministers are getting tough with “dodgy” vocational qualifications also appears to have paid off in some newspapers. It’s a win-win, and while this might make for contradictory policy-making, who will notice?

I think that’s only part of the answer, though. The second explanation is a clear implication of the press release: the Government simply is going along with the finding – in the report on vocational qualifications by Alison Wolf which lays the groundwork for these changes – that pupils have been incentivised to go for certain vocational courses not because of the worth of the course to the individual, but because of their high rating in league tables for the school.

That’s an argument I’ve been making since at least the time my book came out, of course. And yes, this is indeed a side-effect of the current league tables. (The press release amusingly says “the Wolf Report demonstrated that the current performance table system creates perverse incentives,” as if this had been in doubt beforehand, or as if any performance table system would not create some kind of side-effect).

The TES rightly points out, on its front page today, that this is the final confirmation that the contribution of non-GCSEs to headline “GCSE” measures will be capped, which must be a correct decision, given the way “GCSE” league table findings are interpreted by the public and given the perverse incentives which have existed up to now.

But otherwise the remedy to this problem is bizarre. This particular perverse incentive was created not because of the mere existence of vocational qualifications in any league table ranking (though all league tables will create perverse incentives), but because some of them were – seemingly, to this observer – so over-weighted in the central indicators that there was a huge incentive for schools to push pupils towards them, with the need of the school to raise its scores at least a large part of that calculation in many cases.
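To make the weighting point concrete, here is a sketch of how an equivalence factor feeds into a headline pass measure. The factor of 4 is the commonly cited example for some vocational awards, but treat all the numbers, and the `good_passes` helper, as illustrative rather than as the actual rules:

```python
# Sketch of the over-weighting point: under the old headline "5 A*-C"
# style measure, one vocational award could count as several GCSE passes
# at once. Equivalence factors and data here are illustrative only.

GCSE_EQUIVALENCE = {"GCSE": 1, "vocational award": 4}

def good_passes(entries):
    """Count A*-C-equivalent passes, applying equivalence weightings."""
    return sum(GCSE_EQUIVALENCE[kind] for kind, grade in entries
               if grade in {"A*", "A", "B", "C"})

# One C-grade vocational award moves a pupil most of the way to the
# 5-pass threshold on its own - hence the incentive on schools to enter
# pupils for it, whatever its worth to the individual.
pupil = [("vocational award", "C"), ("GCSE", "C"), ("GCSE", "D")]
print(good_passes(pupil))  # 5
```

On these illustrative numbers, the school's headline measure gains four times as much from one vocational pass as from one GCSE pass, which is the perverse incentive being described.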

If the Government changed this so that the results for particular individual non-GCSEs were simply published separately alongside each GCSE in the tables, this particular perverse incentive, I think, would be gone. Schools would still have to think about success rates for every type of individual qualification they entered – itself a perverse incentive, in that a pupil wanting to take a course should not, in my view, be pushed away from it just because they are unlikely to get a C grade, though I guess some teachers would dispute this – but at least schools would be relatively free to opt for courses they, and the parent and pupil, potentially valued. A course could still have its results published, but if parents and pupils did not value it, the “market” would kill it off in a way that might not have happened under the old system, when schools were incentivised to push pupils towards particular courses with high league table weighting.

As things stand, the new league tables will not neutralise the incentives on schools to push pupils towards particular qualifications because of the benefit to the school, rather than to the pupil. They simply change the type of qualification which might be favoured, based on the “wisdom of Whitehall ministers and policy-makers” as to which courses should count. To put it another way: ministers don’t seem to like qualifications assessed entirely through coursework, and I would agree that it is tough for such courses to co-exist, and retain credibility, with a high-stakes accountability system in which teachers are being held to account for the results.

But simply removing such courses from high-stakes accountability, in the sense that the results of these qualifications are not themselves published, does not remove them from the effects of league tables. Ministers are incentivising schools to move away from them. Is this the best move for the child? Again, schools are not able to take that decision from a neutral perspective, because of the mechanism of high-stakes accountability.

To put it another way, the press release says: “Teachers will still be able to use their professional judgement to offer the qualifications which they believe are right for their pupils.” But this will, still, clearly be influenced by league table considerations, as, mind-blowingly, even the DfE knows, since it also says in the press release that its league table changes will “ensure that schools focus on valued qualifications”. (Sorry, I made the mistake of looking at the press release again; it’s not good for my head).

So, ahem, clearly I still think there are fundamental problems with league tables and results pressures at lots of levels. But I’m especially surprised the government did not come up with a solution which at least is a better fit with its own logic. Not to have done so either smacks of the thinking behind the rankings getting so complex that everyone gets confused, or of a belief that the incentives within league tables can be harnessed for the greater good, even despite the clear inconsistency. Some will also claim that there is a vindictiveness in ministers coming out against non-GCSE exams and those which are assessed by the teacher, although I am not sure about this verdict myself: some of the arguments within the Wolf report about needing to look properly at the quality of courses towards which “non-academic” pupils are being pushed are powerful. But then again, if these are not good courses, why should the Government continue to allow them to be funded in state schools?

I wonder if this is not also another example of that mad pendulum swinging in education policy: Labour worried that league table pressures would push schools away from vocational courses unless they were given an incentive not to do so – so it over-incentivised them – and the Tories respond by using one of the easiest levers they have to pull – how schools are held to account – to wipe many of them off the official map of what counts.

The Government’s move may have demonstrated something useful, though. Sometimes, to hear supporters of league tables talk, publishing data is both a largely “neutral” act of transparency, and almost inevitable. To open up the statistics is simply a matter of letting in some sunlight into previously obscure areas of school practice, one would think sometimes from listening to the advocates of this movement.

In reality, the choices the Government makes as to what is measured, how, and what data is released, are hugely important. It remains, in this sense, a very centralised system, which I suspect is why civil servants and ministers like it so much. It clearly drives how schools act. But in this sense, it is not a “neutral” act, to be judged on the criterion that you either like the idea of releasing more data, or you don’t. Politicians should be held to account for the effects of their moves on data, not just the existence of them. To the extent that this move will stop pupils being pushed towards non-GCSE courses because of the high-equivalence factor, the politicians should be praised. But believe me, the problems and injustices are not going to go away, and the inconsistencies here are now glaring.

One final point. The press release also says: “The 2011 school league tables will highlight the performance and progress of disadvantaged groups compared with other pupils. This will create a powerful incentive to narrow the gap in achievement between richer and poorer pupils.”

I know what the thinking is behind this, and on the surface it seems commendable. But I can’t help cringing when I read it. There’s a sense of a teacher reading this and thinking: “Oh yeah, helping disadvantaged kids do well. I’d never thought of that. I just wasn’t bothered before. Now, thanks to your wisdom, Mr Gibb, in putting another column on the spreadsheet by which I’m judged, I realise the error of my ways. I’ll try harder now. Thanks.”

I know one of the most persistent debates within education is about many teachers not having high enough expectations of pupils from tougher backgrounds. But it strikes me that if the main way of tackling it is for ever-closer monitoring through the statistics generated at the end of this process, we are in danger of missing an awful lot of tricks.

If the Government isn’t trying to promote this through the way it trains teachers and develops them in the classroom, the messages it sends them in its rhetoric, and the support it provides to improve the quality of the educational experience for all pupils, particularly those from disadvantaged families, I don’t know what it is doing. If it had done all that, and then effectively argued that teachers still need an “incentive” at the end of the process, you have to ask what has gone wrong along the way.

-There are other things to comment on in the league table announcements – including the fact that pupils taking the EBacc appear now to have to take seven GCSEs, raising question marks over how much time they will have to study much else, and the contentious, for me, move from contextual value added to value added and unadjusted progress measures as the main “fair” ways of judging schools – but I seem to have run out of space and time today….keeping up with developments in the rankings is a full-time job, it seems….

posted on July 22nd, 2011

Tuesday, July 19th, 2011

Well, I said at the end of the last blog that I’d be writing something imminently on the relationship between the Bew assessment review and the government’s ongoing national curriculum review. Here, slightly earlier than planned, is what I had in mind.

This week’s Government response to the Bew review into primary assessment could be redundant within just over two years.

That is the implication of comments made by a leading figure within the test regulator Ofqual at a conference on the national curriculum I attended on Friday.

Stephen Anwyll, Ofqual’s head of 3-14 assessment, said the long-term future of testing would be “up in the air” until after the outcome of the current national curriculum review was known.

Assessment arrangements in primary and secondary schools would have to be “completely revised” if the review led to a fundamental rethink of what schools teach.

The standards pupils achieved in any national assessments created as a result of the national curriculum review might also not be comparable with current performance, he said, since measurement would need to be “recalibrated” as assessments changed.

Mr Anwyll also suggested there was a contradiction between the Government’s suggestion, in its remit for the curriculum investigation, that it should not cover assessment and the detail of what it was being asked to look at. “You cannot separate the curriculum from assessment,” he said.

English, maths, science and physical education aspects of the curriculum for 5- to 16-year-olds are due to be revised for first teaching from September 2013 following the curriculum review, which is expected to produce first recommendations by early next year.

Speaking at a Keele University Centre for Successful Schools conference last Friday, Mr Anwyll talked about Ofqual’s detailed work to be carried out in response to the Bew review.

He then added: “Sitting beyond all of this, in the slightly longer term…all of this is up in the air depending on the outcome of the national curriculum review.

“If we are talking about, actually, a new programme of study, in the first instance for English, maths and science, which we are expecting to see some examples of this year, that could change the entire picture.

“If you reform standards as part of the national curriculum review, it’s ground zero again; you calibrate the standards from there – you cannot start comparing to previous standards.”

He added: “National curriculum assessments are excluded from the remit of the national curriculum review.

“But if you look at what’s included in the remit, it includes whether the national curriculum should be set out on a year-by-year basis, what should replace existing attainment targets and level descriptors to define better children’s standards of attainment, and what’s needed to provide expectations for progression to support the least able and stretch the most able.”

“All of these are absolutely fundamental to assessment, so you cannot separate curriculum from assessment.”

That comment appears to echo a statement by Sir Jim Rose, leader of England’s last curriculum inquiry, carried out in the dying days of the last Labour government. He was barred from considering assessment but said this was the “elephant in the room” when he visited primary schools.

Mr Anwyll added: “Much of what we do currently will have to be completely revised if we get a new national curriculum, new standards defined, and new ways of measuring them defined.”

That’s the newsy bit; the below is comment from me:

This notion of a contradiction – between a national curriculum review which is supposed not to be looking at assessment matters, and the practical impossibility of a review of this type not having serious implications for assessment – was underlined this week in the Government’s response to the Bew report.

The remit for the national curriculum review says: “The review itself will not provide advice on how statutory testing and assessment arrangements should operate”.

Yet this week’s Government response to Bew says: “The national curriculum review will consider the suggestion from Lord Bew and the panel for statutory assessment to be divided into two parts….”*

It also says: “The National Curriculum Review will consider how we report statutory assessment in the long term.”


The full sentence of that quote above about statutory assessment being ‘divided into two parts’ is: “The National Curriculum Review will consider the suggestion from Lord Bew and the panel for statutory assessment to be divided into two parts in the future, with a ‘core’ of essential knowledge that pupils should have learnt by the end of Key Stage 2.”

This looks to be a suggestion that some “basic skills” literacy and numeracy tests be introduced at KS2. It builds on a somewhat mysterious idea flagged up near the end of the Bew report, which I blogged about here. One to watch, I think, and not only by people who wonder at the polarising language of “knowledge that pupils should have learnt”….

*I was reminded of this nugget of info via Helen Ward of the TES on Twitter.

No Comments
posted on July 19th, 2011

Monday, July 18th, 2011

This is just a quick reflection on union reaction to the Government’s proposals on the future of assessment at Key Stage 2.

Ministers published today their response to last month’s final report by the Bew inquiry into this subject, the review which itself was triggered by last year’s Sats boycott by the National Association of Head Teachers and National Union of Teachers.

The unions’ reaction is interesting: four different associations produced, arguably, three or four different positions in response.

This could be viewed as surprising, given that, for all the changes put forward in Bew, the fundamentals of the high-stakes testing regime remain in place, despite widespread concerns within the profession. Or it may simply reflect a beneath-the-surface belief that, whatever the problems with current structures, essentially the basics of the system are in the end unchallengeable, and therefore the argument must be confined to the detail as to how it works.

In terms of their reaction to the Government response to Bew, the heads’ associations were more upbeat. Perhaps unsurprisingly, the National Association of Head Teachers, which called off the possibility of a repeat of last year’s boycott in 2011 in return for the Bew inquiry, and which was allowed to recommend head teachers who would sit on the Bew committee, was broadly positive about its outcome.

Its press release was headlined: “Bew recommendations are a significant step forward towards fairer accountability system, say school leaders.”

But the NAHT said the Bew recommendations – every one of which has been accepted by the Government (always an interesting development for any inquiry which is billed as independent from ministers, I feel) – were only a “first, positive step on a long journey towards a system which reflects the achievements of all pupils and the contribution of all schools”.

Longer-term goals included a far greater role for teacher assessment and more trust in the profession, and the NAHT said it would be on the look-out for ministers breaking with the spirit of the Bew recommendations.

The Association of School and College Leaders was also accentuating the positive, headlining its release: “KS2 assessment moving in the right direction.” I will come back to this.

Both the National Union of Teachers and the Association of Teachers and Lecturers were less optimistic.

The NUT argued: “The positive steps in this review will be undermined by keeping in place school performance tables, despite the fact that the majority of those who gave evidence called for their abolition.

“While league tables exist, teaching to the test and a narrowing of the curriculum will remain…The Review and the Government should have been bolder.”

The ATL said: “There is some good news in the government’s changes to key stage 2 testing, but so much more could be achieved if the government was not insisting on remaining judge, jury and executioner of schools by setting targets, closing schools, and forcing through its naïve free market policies on academies.”

I haven’t received a press release from the National Association of Schoolmasters Union of Women Teachers, but we know that that union has long favoured tests over teacher assessment, amid concerns about the effects of TA on teachers’ workloads.

For me, having looked at – and written about – the changes proposed by Bew (blog here), this feels like a very muted end to what has been years of pressure building on ministers over testing: the NAHT itself conducted a review into the architecture and effects of the current system, to which I contributed, and which dates back to 2007. Part of that pressure was exerted, amazing as it may seem now, by Michael Gove when he seemed to accept, in 2009, that test-driven teaching can be bad for children’s education. It also built through the testimony of the unions, subject associations and reports from organisations such as the Children, Schools and Families select committee, the Cambridge Primary Review and the Children’s Society’s Good Childhood inquiry.

One could look at the positive reaction with which many teachers are likely to greet the move, recommended by Bew and accepted by ministers, to replace the current writing composition test in KS2 English with teacher assessment, and reach a different verdict from the quick one I’ve offered above, of course.

Or, for critics of the high-stakes regime, there is the fact that, since 2008, the following Sats tests have bitten the dust: English, maths and science at KS3; science at KS2; and creative writing at KS2. This might be considered a good outcome of all that pressure.

But, on the negative side, Bew put forward, shockingly, I think, the unbalanced assertion that “strong evidence shows that external school-level [presumably statistics-based] accountability is important in driving up standards”. And the essentials of our system – that test and exam results will remain the main mechanism by which both secondary and primary schools are held to account, with high stakes including closure to follow for “underperformers” – remain unchanged.

The underlying argument must be that this high-stakes system has been good for English education, and that it is a key to continuing progress in the future. If this were not the underlying assumption behind Bew, we would not be proceeding on the current basis, for it provides no fundamental attempt to re-engineer assessment and accountability so that the system gets the accountability it needs without the knock-on washback effects on teaching and learning.

As ever, the basic architecture of test- and exam-based accountability seems to be the unalterable fact of education in England, to which everything – including, I’m afraid, a fair-minded and rigorous consideration of its overall effects on children’s education – must come second. More than 15 years after the introduction of national testing in England, there has still been no detailed Government inquiry into the nature, extent and effects of test-driven teaching in this country: how many schools go in for it, the detail of how children’s learning is affected and what pupils alongside teachers think about it. This is astonishing, really, if you believe that the quality of the child’s educational experience is to be looked after above all else.

Just finally, I want to return to ASCL’s position, which I think is the most curious.

Brian Lightman, ASCL general secretary, is quoted in its press release as saying: “There must be a robust but fair process of assessment for pupils as they move from primary to secondary school. This is important not only for pupils and their parents, but also so that their new schools have accurate and reliable information about their level of progress.”

I find this statement, which reflects what has been ASCL policy for a while, strange because of the contrast with the somewhat ambivalent relationship secondary schools have with KS2 assessment data, as documented in the Bew report (and elsewhere).

The final Bew report says: “We have heard widespread concern that secondary schools make limited use of the information they receive about their new intake. Many secondary school respondents have expressed concern that national curriculum test results or primary schools’ teacher assessment are not always a suitable proxy for the attainment of pupils on entry to Year 7.”

If many secondary schools don’t trust pupils’ Sats results (or test-influenced TA judgements), why does ASCL want them retained as “robust but fair” measures?

I’ve not put this to ASCL, but I believe the answer is that the union, while it doesn’t particularly trust Sats results as measures of pupils’ underlying understanding, doesn’t want them replaced with teacher assessment because secondary heads worry that primary schools would inflate TA judgements. This would leave secondaries’ results looking less good, because it would mean pupils would appear to be making less progress at secondary school.

So while secondary heads might have reservations about the value of the data provided by Sats, the implications for them within the accountability system mean they back the tests. As usual, the demands of the accountability system seem to trump other concerns.

This may be a scandalous explanation for ASCL’s position on this issue, but I am struggling to think of another one.

All of which leaves me slightly saddened. There is still an awful lot of evidence that this system is not serving at least a large proportion of children’s needs well. It is a shame that the unions have not seemed able, in the end, to come together to continue pressing home that point.

- Is this really the end of the story for assessment at KS2, though? The current national curriculum review may throw things up in the air again. I expect to write more about that in the next few days.

PS: It is interesting to play “spot the difference” between the stated purposes of national assessment, as laid down by the Bew report, and the previous attempt at this, by the Labour government’s “Expert Group” on assessment, which took in the fall-out to the 2008 Sats marking crisis and reported to the former schools secretary Ed Balls in 2009.

Bew lays down three main purposes of statutory end of Key Stage 2 assessment data as follows:

a Holding schools accountable for the attainment and progress made by their pupils and groups of pupils.

b Informing parents and secondary schools about the performance of individual pupils.

c Enabling benchmarking between schools, as well as monitoring performance locally and nationally.

The “Expert Group” report offered the following definition of the purposes of “assessment”, up to the end of Key Stage 3. It listed four:

-To optimise the effectiveness of pupils’ learning and teachers’ teaching.

-To hold individual schools accountable for their performance.

-To provide parents with information about their child’s progress.

-To provide reliable information about standards over time.

As you can see, Bew’s top two purposes are extremely similar to the Expert Group report’s purposes two and three. The Expert Group’s purpose four is a subset of Bew’s purpose three. The largest difference between the two is that the first purpose mentioned in the Expert Group report, which in my view is correctly placed at the top of the list, does not feature in Bew’s list. Otherwise little, it seems, changes.

1 Comment
posted on July 18th, 2011

Sunday, July 3rd, 2011

I should begin this blog post with a note of slight regret. It gives me no pleasure to be writing something which is critical of the Bew report, especially given the courtesy with which Lord Bew treated me in giving evidence to the review. He invited me to do so, and even wrote me a handwritten note to thank me afterwards. The review’s interim report, published in April, was, I thought, a largely impressive synthesis of evidence on this subject which gave me hope that, whatever the outcome and whatever the constraints of the remit, the issues would be given a thorough and fair weighing in the final report.

Yet, I am afraid, despite some impressive passages, the report really does not do justice to this, I think, incredibly important subject.

I say this mainly for three reasons.

First, the report fails to follow through on what is said, at least in the foreword to the report, to be the first priority for the assessment and accountability system: ensuring that such a system supports children’s learning. Second, it misrepresents the evidential position on the effects of test-based accountability in a fundamental way. And third, it does not address in any meaningful sense a central criticism of test-based accountability: that test results are being used for too many purposes and that key purposes can be at odds with one another (my italics, since this was the bit that was not meaningfully considered).

To deal with the first problem, Lord Bew says in the report’s foreword:

“We would like to be quite clear that throughout this process we have always focused on how best to support the learning of each individual child.”

If this had been the overall goal of the review, I would say “fantastic”. The trouble is, having set this up as an aim in the foreword, the report largely abandons the approach thereafter: the quality of the learning experience resulting from accountability – what, if anything, is happening in lessons as a result of test-driven accountability? – really gets only glancing consideration.

This becomes clearer when we look at the report’s consideration of evidence.

The report says: “Strong evidence shows that external school-level accountability is important in driving up standards and pupils’ attainment and progress. The OECD has concluded that a ‘high stakes’ accountability system can raise pupil achievement in general and not just in those areas under scrutiny.”

Well, I wrote in detail here about the OECD evidence on which Bew drew for this statement.

I do not think anyone reading that report in full could believe that it provides a ringing endorsement of an “English”-style accountability system. Consider, as I mentioned in that blog, the fact that that OECD report says: “Across school systems, there is no measurable relationship between [the] variable uses of assessment for accountability purposes and the performance of school systems.”

Moreover, although Bew says “the OECD has concluded that a ‘high stakes’ accountability system can raise pupil achievement”, with “high stakes” in quotation marks, in fact the phrase “high stakes” only occurs once in the main text of the 308-page OECD report which Bew references here, and its use does not back up the claim made here. (“High stakes” in the one instance referenced in this report refers to any qualification which is high stakes for a pupil, by which criterion the A-levels I took in the 1980s – which were low stakes for my school – would count but today’s Sats would not).

As I wrote in an article for the TES based on research for the NAHT, in fact there are many education systems which are not doing a demonstrably worse job than England and which do not have “high-stakes” accountability of the English kind.

If Bew’s claim that this type of accountability “is important in driving up standards and pupils’ attainment and progress” is to be understood as meaning that it improves education in a more general sense than simply improving test scores, which must at least be considered if the quality of pupils’ learning is really what matters, then the report needs to consider more evidence.

Yet this section of the report, entitled “the impact of school accountability”, includes no studies raising concerns on the issue of test-driven schooling. It highlights only research which supports it.

This section then simply ends: “We believe the evidence that external school-level accountability drives up pupils’ attainment and progress is compelling.”

This is an absolute travesty of the evidential position. I say that as someone who wrote a book on this subject between 2005 and 2007, seeking to put together all the evidence I could find on the effects of this system. Negative effects were not hard to come across: detailed concerns about the side-effects were reaching me virtually every week around that time in my work at the Times Educational Supplement. To repeat, none of this evidence gets a mention in the section of the report where Bew is deciding whether or not high-stakes accountability is a good thing.

That is a shocking indictment of this final report. For all the evidence commented on in the interim report, the omission undermines any claim that this subject has been considered in a truly open-minded way.

If the evidence had been considered and weighed, and a conclusion reached that the claimed advantages of hyper-accountability outweighed the claimed negatives (taken seriously and considered in detail); or if a conclusion had been reached that the current system, though imperfect, should be retained because changing it in a fundamental way would present too many difficulties – well, at least that would have been more honest. To try to claim that the evidence points entirely in this single direction is simply wrong.

Other inquiries to have raised deep concerns about test-driven schooling in recent years have been the Children, Schools and Families assessment investigation of 2007-8, plus its subsequent probe into the national curriculum; the Children’s Society’s Good Childhood Inquiry; and the exhaustive Cambridge Primary Review. Sir Jim Rose, in conducting his own national curriculum inquiry for Labour which was barred from considering assessment, described it as the “elephant in the room”, in terms of the impact on the curriculum.

Consider some of the claims made in evidence to these various reviews.

The Mathematical Association told the select committee inquiry: “Coaching for the test, now occupying inflated teaching time and effort in almost all schools for which we have information at each Key Stage, is not constructive: short term ‘teaching how to’ is no substitute for long-term teaching of understanding and relationship within and beyond mathematics as part of a broad and balanced curriculum.”

The Cambridge Primary Review reported one witness to the review, citing her experience as an English teacher, primary head and English examiner, as condemning “the ‘abject state of affairs’” where reading for pleasure in schools “has disappeared under the pressure to pass tests”.

The Independent Schools Council told the select committee’s curriculum inquiry: “National curriculum assessment should not entail excessive testing. Universally, a focus on testing was found to narrow children’s learning, teachers’ autonomy and children’s engagement in learning.”

Ofsted also told the select committee that “In some schools an emphasis on tests in English, mathematics and science limits the range of work in these subjects in particular year groups.” An Ofsted report on primary geography from January 2008 found that “pupils in many schools study little geography until the statutory tests are finished”, while an Ofsted report on music said: “A major concern was the amount of time given to music. There were examples of music ceasing during Year 6 to provide more time for English and mathematics.”

The OECD itself said, in the education section of its report on the UK in March this year: “Transparent and accurate benchmarking procedures are crucial for measuring student and school performance, but “high–stake” tests can produce perverse incentives. The extensive reliance on National Curriculum Tests and General Certificate of Secondary Education (GCSE) scores for evaluating the performance of students, schools and the school system raises several concerns. Evidence suggests that improvement in exam grades is out of line with independent indicators of performance, suggesting grade inflation could be a significant factor. Furthermore, the focus on test scores incentivises “teaching to tests” and strategic behaviour and could lead to negligence of non-cognitive skill formation”.

Either Bew has, then, defined “attainment and progress” in such a narrow sense – ie it means “there is compelling evidence that test-driven accountability drives up test scores” – that its claim to be interested in the learning of each child more generally cannot bear scrutiny (since it is only interested in the evidence of test scores).

Or improving “attainment and progress” is meant to stand for the quality of education as a whole improving as a result of “high-stakes” test-based accountability, in which case Bew has simply chosen to ignore that section of the research on this subject which conflicts with the way the review was framed by the government.

The report does, then, move on to “concerns over the school accountability system”, including “teaching to the test”. But it offers no detail of what the evidence says as to what this might mean for the pupil. The only substantial concern acknowledged here is the unfairness of the way results indicators are used for schools, which it says its recommendations will go on to tackle. This is an important argument, of course, but it is not the same as the claim, widely made, that the system of test-based accountability damages the learning experience of at least a proportion of pupils.

The only acknowledgement of this claim here is when the report says that many heads feel they “‘need’ to concentrate much of Year 6 teaching on preparation for National Curriculum Tests in order to prevent results dropping”. Bew then acknowledges that “the accountability system to date may appear to have encouraged this behaviour [my incredulous italics at the weakness of ‘may’, when heads face losing their jobs if results fall]”.

The report reacts by simply arguing that this need not happen: schools can get good results without narrowing the curriculum. That is exactly the conclusion of the last major report to look at this subject: the 2009 “expert group” report on assessment for Ed Balls as schools secretary. That report suggested running a campaign to persuade teachers not to teach to the test, since there was simply no need.

Although teachers have argued with me that a good professional does not need to teach to the test, I’m afraid I think of this, when I read it in official reports, as the ostrich, or head-in-the-sand, position. It is unscientific, I believe: the fact that some teachers and schools buck the trend does not negate the existence of the trend. The National Strategies have, in the past, encouraged teaching to the test, so presumably they thought there was some value in it for schools, in terms of improving results. I suspect local authorities have also promoted a great focus on the content of the tests in schools where the data just has to improve. Overall, the incentives of the accountability system certainly push at least a proportion of schools towards test-driven teaching and thus, if one truly wanted to change this, it would be a good idea to look at changing the way accountability works, rather than effectively simply telling teachers not to follow what for many of them will be its logic.

Then the report closes down the debate, saying simply: “Given the importance of external school-level accountability, we believe publishing data and being transparent about school performance is the right approach.”

In other words, because the review team had already decided that the evidence of the beneficial effects of external accountability was “compelling” – ie without presenting any research on negative impacts – that was the end of the matter. There was no consideration of the actual impact on children’s learning during test preparation, and the nature of it.

Incidentally, because the review team believes that “high-stakes” accountability – ie making results high stakes for schools – works, it must then also believe that assessment should drive what goes on in schools, since the philosophy must be that making assessment results “high-stakes” for schools forces them to improve the quality of education they provide.  

The third problem of the report is related to this, and I don’t want to use too much space going into it in detail here. But in essence it runs as follows. Bew really ducks another criticism of test-based accountability: that test results are used for too many purposes, and that because of this, testing as currently constituted serves many of these purposes less than well.

I’ve put the second bit in italics, because Bew really doesn’t consider this implication. Essentially, Bew accepts the widespread claim that assessment data are put to very many purposes, but reacts to this mainly by listing the “principal” purposes to which they are already put, and then saying other uses should be considered as “secondary”.

It is, I suppose, at least an attempt to consider this issue. But the problem is that the purposes suggested as central by Bew include both that data should be used to hold schools to account, and to provide good information on the progress being made by individual pupils, for the benefit of those pupils and their parents. Bew’s claim, in the foreword, that test-based accountability should also support children’s learning should also be borne in mind here, for that must be another guiding principle if taken at face value.

The problem with the report is that the argument at the heart of this debate is arguably that the use of data to provide information on a school’s – and on teachers’ – performance can conflict with its use both to support pupils’ learning and to provide the best possible information on the quality of that learning.

This is a big part of what the many people who, Bew acknowledges, submitted evidence to the review mean when they say that the problem is not the tests, it is the league tables which are constructed on the back of them. Because teachers are worried about their school’s results, they take actions which, while right in terms of boosting results, may not support the best learning experience for the child, or their long-term educational interests. And the very act of teachers directing so much attention at the tests and results indicators may also, paradoxically perhaps, make them less good measures of underlying education quality. This argument is implicitly acknowledged in the report, in a section noting that many secondary teachers do not trust KS2 Sats results because of the extent to which pupils have been prepared for the tests.

In other words, the purposes – and even these “principal” purposes – are in conflict. A report which took seriously the washback effects on learning, from the child’s point of view, of the accountability system, would look much more closely at each of these aims to try to ensure that the requirements of accountability do not conflict with the aim of providing the best possible education experience for pupils.

Some alternative proposals, not backed by Bew, have tried to look at re-engineering aspects of the system to stop some of the purposes conflicting in ways which look either harmful for pupils, or which give us less good data than we might want.

For example, the suggestion put forward by many that national education standards could better be monitored through a system of assessing a sample of pupils rather than through testing every child comes because the purposes to which the current testing system is put are felt to be in conflict. A sampling system, with a relatively small number of pupils being assessed and each on differing parts of the curriculum, would allow information to be collected, potentially, across a much wider and deeper spread of aspects of the curriculum than is possible through a system where all pupils must take every test produced. And its information on whether standards were improving or falling would be more robust because, as the results would be “low-stakes” for schools, test questions could be retained from year to year to allow direct comparisons of pupil performance to be made.

These kinds of improvements in the quality of information provided are not possible in the current system because the other purposes to which current national test data are put – providing information on individual schools and on all pupils’ performance, meaning that every pupil must be tested, and that papers must change from year to year to guard against schools “cheating” – make them unfeasible.

A more serious look at this subject would also have considered in detail the problems of seeking simultaneously to use test results as “objective” measures of pupil performance; to support learning; and also to hold schools to account. In 2006, a proposal put forward by Cambridge Assessment and the Institute for Public Policy Research acknowledged the problem that the purposes were in conflict: the need for schools to generate good results could lead to test-driven teaching and a narrowed curriculum, which was not an ideal form of learning. It therefore proposed a change whereby teacher assessment would become the main judgement on both pupils’ and schools’ performance, but with children in each school then assessed through a “testlet”, measuring for each child just a small area of the curriculum. The testlet results would be used as an assurance that the accountability function now placed on teacher assessment was not leading schools to inflate their results. In other words, it retained accountability but, in trying to change the relationship with tests in a small number of subjects, attempted to stop it conflicting with the goal of supporting good learning. This idea was not considered in detail by the report.*

Another alternative, mentioned as my favourite in my book, would be to make inspection judgements the central focus of school-by-school accountability (with inspections offering a rounded look at the quality of education provided, to guard against curriculum narrowing), and to run sample tests to help provide national education quality information.

Instead of trying to look at the relationship between the purposes, Bew has simply left the mechanics of the system in place: assessment data are still to be used for all the main purposes to which they are now put, including holding schools to account, producing data on individual pupils’ performance for the benefit of them and their parents, and generating national and regional achievement data.

The report says that through its proposals “we believe we can address the imbalances and perverse incentives in the school accountability system”.

Because the review has not addressed the conflict of purposes, this idea of countering perverse incentives is, I think, a forlorn hope. Its proposals represent no significant change to the system’s fundamentals, but rather a restating of the basis of the system (which the report must implicitly believe, in its essentials, to be a good thing) and then an attempt to manage the detail.

Ok, so now, finally, to turn to the concrete stuff: the detailed changes recommended by the report, some of which, I think, are important.

-          The report proposes moving to a system of publishing schools’ results averaged over a three-year period, to address concerns that judging institutions on single years is unfair, given the way pupil cohorts can change. Small schools, where the introduction of a few high- or low-achieving pupils can have a proportionally very large effect on results from year to year, are particularly hard hit by the current system, and their concerns would seem to have influenced this change. However, three-year averages are not recommended to replace single-year statistics, but to sit alongside them in league tables. A key consideration could be what weight they are given elsewhere in the accountability regime, including Ofsted reports and floor targets; the report does not, I think, stipulate that they should be given priority.

-          Additional measures are to be introduced recording schools’ achievements counting only those pupils who completed the whole of years 5 and 6 at the school, in response to concerns that schools with lots of children arriving from elsewhere feel an effect on their results. Again, it seems these results will be published alongside the existing measures, rather than replacing them.

-          The report talks about placing a greater emphasis on progress measures, alongside “raw” attainment. However, progress measures already feature in league tables, are central to Ofsted’s new systems and are included in the government’s new floor targets for primaries. So call me a cynic, but it is hard to see that the report has added much here. (Overall, my hunch is that there is very little in the report as a whole with which the government would disagree – and you have to wonder, after reading this report, if this was always likely to be the outcome – but one test (pardon the pun) of that will have to await ministers’ reaction.)

-          Teachers will submit teacher assessment judgements before pupils’ test results are known. This seems sensible to me, as it negates the risk of the test judgement influencing the teacher assessment verdict. As the report correctly states, they are measuring different things, so the judgements reached through each assessment method should be kept separate.

-          Finally, the most significant change relates to writing. Bew proposes, first, the introduction of a new test of spelling, punctuation, grammar and vocabulary. I guess teachers will have views on that; I would not comment except to note that the report’s claim that these aspects of English have “right” and “wrong” answers was something some people were querying last week.

The recommendation, however, to replace the writing test with teacher assessment is substantial. As someone who went through secondary and university assessments in the 1980s and 1990s and was never assessed on creative writing in the exam hall, it has always seemed strange to me that 11-year-olds were asked to be creative under the time pressure of Sats. A move to teacher assessment, then, would undoubtedly be a good thing. It could be argued that this change alone, in promoting a better assessment experience for many children, will mean the Bew review will have been worthwhile, despite some of its more fundamental findings being so flawed.

The report does, however, mention that the teacher assessment results are to be subject to external moderation. This is unavoidable in a system which uses the scores generated to hold schools to account. Ministers, I am guessing, will want to ensure that the moderation is robust, as there will clearly be an incentive for schools to push up scores if they are under pressure over pupil achievement through, for example, the floor standards. The great danger, again, would be that the government decides that the need to use the results to judge schools is more important than providing the right assessment experience for pupils – that conflict of purposes again – and therefore declines to accept this recommendation to move towards teacher assessment. I have, though, no evidence that this is going to happen, and I hope the recommendation will find favour.

Summing up, Bew’s detailed changes do stand to make some difference. But I would suggest that the arguments over the system’s underlying dysfunctionality – or not – are not going to go away. It is a shame that the review did not, in reaching its verdict in this final report, take the detail and nature of some of these concerns more seriously.

*The report does briefly consider the merits of using tests to moderate a mainly teacher-assessed system, concluding that this would not be feasible as tests and teacher assessment are not measuring the same thing and thus – I think this is the implication – it would be wrong to view the test as providing “true” validation of each teacher assessment verdict. I would not disagree with that as an argument, but I do not think it invalidates the Cambridge Assessment/IPPR model, since the “testlets” in that case are not meant to provide a judgement on the accuracy of teacher assessment for every pupil, but merely a more general check that a school has not inflated its teacher assessment judgements.

posted on July 3rd, 2011