Friday, June 17th, 2011

A study apparently demonstrating the benefits of academy status seems to have been highly influential in recent weeks.

The research, by academics at the London School of Economics, was published in April. It has been picked up not only by Blairite commentators who backed the original academies policy, but now by the Department for Education in its push to encourage all schools to become academies.

I would also hazard a guess that it was in the mind of the Today programme presenter Sarah Montague when she asked a sceptical head teacher yesterday morning to accept the statement that academies improve schools’ results.

The research, by Stephen Machin and James Vernoit of the London School of Economics, produced some conclusions which look very positive for academies. As the Financial Times reported when the research was published, the study found that “turning a school into an academy improves its performance – and that of neighbouring schools”. The study was based on an analysis of pupil-by-pupil results of schools turned into academies under Labour, in the years 2002-9, when most of the institutions converting had low GCSE results. It includes a caveat that it does not relate to academies which have converted since the coalition came to power.

Having now looked at this research in detail, I am very impressed with a number of aspects of its methodology. Specifically, it performs statistical checks on institutional results which seem far more robust than similar exercises carried out in widely-cited analyses of the academies policy in the past.

However, there is a gap in this research: the absence of any qualitative investigation into how academies opened under Labour have managed to produce their apparently impressive statistics.

There is an obvious question to ask here. Academies’ benefits are often cited in broad-brush, quasi-ideological terms (allowing schools to break away from local authority influence, encouraging innovation through a sponsor, or simply promoting an often undefined quality called autonomy), but why, in detail, would simply changing the structure of a school’s governance make a difference? What precisely have academies done to drive these results improvements? If they have greater independence, how have they used it, and what has been the connection with results?

And once you look into that, as this blog and research by the Civitas think tank have done, you start to have doubts over whether this policy is quite the panacea it is now widely claimed to be.

OK, first the impressive bits, then. For me, if you want to know whether schools can improve their results by being turned into academies, and you want your research to have any claim to credibility, you have to do at least two things, neither of which seems to have loomed large in claims made about academy results in the past.

First, you have to compare like with like. Over the past few years, governments have looked at the GCSE (or equivalent, of which more below) results of academies and compared them to those of the schools these academies replaced. On average, they have tended to find academy results improving faster, at least on the headline published figures, than those of the predecessor schools. Therefore, the argument goes, here is evidence that the academies policy is a success.

There are a couple of serious objections to any conclusions based on these calculations, though, including the following: what if the pupil clientele changed between the time before the school was an academy and now? The schools converting to academy status under Labour generally had relatively large numbers of disadvantaged pupils. If the replacement of such schools by academies tended to draw in pupils from slightly less disadvantaged backgrounds, with better results from their primary schools – drawn, perhaps, by the huge extra investment in new buildings that went with academies under Labour – this would be to the advantage of the academy, but it might mean that, when results rose, this was more to do with changing pupil intakes than with anything the academy had done itself.

The Machin and Vernoit research tackles this issue by comparing the key stage 2 test results of pupils who went on to attend schools which would later become academies during the period under study with those of children who joined the schools after they had become academies.

And the study finds that the pupil intake of academies did indeed “improve”. In other words, the academies under study were taking in pupils with better key stage 2 results than had been achieved by pupils entering the schools the academies replaced.

But here is the impressive bit: the researchers found that, even after taking this pupil intake factor into account, the results achieved in the academies were better than those achieved by a control group of schools.
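To make the logic of that check concrete, here is a toy sketch in Python, with invented numbers and an invented adjustment. It is emphatically not the authors’ econometric specification, which works with pupil-level data; it simply illustrates the idea of asking whether a results gap survives once intake is accounted for.

```python
# Toy illustration (invented numbers) of checking whether a results gap
# survives an adjustment for intake. NOT the Machin/Vernoit specification.

# Each school: (mean KS2 score of its intake, % of pupils hitting the GCSE benchmark)
academies = [(28.0, 46.0), (29.0, 52.0), (27.0, 43.0)]
controls = [(27.0, 40.0), (27.5, 42.0), (26.5, 38.0)]

def mean(xs):
    return sum(xs) / len(xs)

def raw_gap(a, b):
    """Raw difference in GCSE outcomes, ignoring intake."""
    return mean([gcse for _, gcse in a]) - mean([gcse for _, gcse in b])

def intake_adjusted_gap(a, b, slope=3.0):
    """Strip out the part of each school's GCSE figure 'predicted' by its
    KS2 intake (the slope is invented), then compare what is left over."""
    residuals = lambda grp: [gcse - slope * ks2 for ks2, gcse in grp]
    return mean(residuals(a)) - mean(residuals(b))

print(f"Raw gap:             {raw_gap(academies, controls):+.1f} points")
print(f"Intake-adjusted gap: {intake_adjusted_gap(academies, controls):+.1f} points")
# The gap shrinks (these toy academies recruit a better intake) but does not
# vanish -- the pattern the study reports.
```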

The second impressive aspect of the study was that it sought to take into account the effect on neighbouring schools. This has always seemed to me to be important, since the success or failure of a policy should be judged not only in terms of the effect on an individual institution but in terms of its impact on an entire area: if an academy – which under Labour usually came with new buildings worth eight-figure sums – succeeded only by drawing in more “educable” pupils from neighbouring schools, while those around it suffered and their results declined, this would raise questions about the policy.

But the Machin/Vernoit research looked at this issue, too. It found that neighbouring schools did suffer (to put it crudely) from the introduction of an academy nearby, in that the average achievement level of the pupils they recruited in year seven, as measured by their primary test results, fell. In other words, some of the higher-achieving pupils moved, at the end of primary school, to the academy, whereas in previous years they might have attended one of its neighbouring schools. However, despite their intake getting “tougher” in this way, the GCSE results in these neighbouring schools also improved. The paper suggests that this was probably the result of greater competition from an academy nearby spurring improvement, on the main results metrics, by the neighbouring schools.

OK, that’s the good news. Here I come to my beef with this study. I should say first that I am not trying to hit academies over the head, for the sake of it, with observations about strategies some schools might use to boost results. (The other day I met, as it happens, the principal of an academy with a very tough, non-selective intake in an area with grammar schools – a school now under pressure from the Government’s new GCSE floor targets – and thought what a challenging, important job that must be.) But neither do I think we should simply abandon detailed scrutiny of whether academies are quite the answer to all educational problems that they are being made out to be, and of what their results really tell us.

So back to the research. The trouble is, for all the statistical expertise and checking that has gone into this study, it is still based on the assumption that you can use a set of exam results formulae – on one or two performance indicators – to answer definitively the question of whether these schools are actually providing a better education than their predecessors. In other words, the implicit view is that this question can be answered entirely statistically, without reference to any qualitative understanding of what has actually happened to make these schools “better”.

Yet there are some fairly big alternative explanations. The obvious one is that academies have simply been more results-focused, in the main, than other types of school, and have thus sought to do whatever it takes to boost grades on the Government’s published indicators. That would mean that while the central indicators improved, other indications – statistical or otherwise – might give cause for concern. In other words, while the stats improved, if you tried to get a wider sense of what might be felt to matter in education, you would get a different picture. Academies might, to put the hypothesis more crudely, have paid more attention to gaming the results indicator system than other schools.

You could say it is unfair to single out academies in this way, and to newcomers to this blog this might sound hyper-cynical. But, as I’ve written before, academies under Labour seem to me to have been under more pressure to raise results than other schools. Most were specifically created to address the claimed underperformance of a predecessor school. They came, often, with tens of millions of pounds of extra funding for new buildings. Their results were subject to extra scrutiny in the media, not just at the school level but at the level of the national politicians overseeing the academies policy, whose reputations were staked on headline scores improving. They might – though I am guessing here – also often have come with a business mentality, reinforced by their sponsor, which incentivised senior leaders to get results up come what may, through bonuses linked to GCSE exam performance. It would be surprising, then, if one or all of these factors did not produce a very strong focus on those headline measures.

So, how can we check whether anything other than a general improvement in the education provided by the academies under investigation lies behind the results gains cited in the study?

Well, I have to confess here that I have no killer line, or proof that this study is wrong in its conclusions. But I do think we should be wary of them. I want to come at this first statistically, and then anecdotally.

First, the statistics. Another impressive aspect of this research is that it does attempt to address – through the data, of course – the most obvious way in which results could have been boosted artificially, if you like: the use of non-GCSE qualifications.

Under the system in operation in recent years, other courses are counted as “equivalent” to GCSEs, for league table and results purposes. This is the case for the main measure used in this study: the proportion of pupils in each school achieving five A*-C grades at GCSE or vocational equivalent, including maths and English. Yet the fact that some of the GCSE-equivalent courses have been given high weightings in the results formulae – worth up to four GCSEs – and have high pass rates means that they can have a heavy influence on the overall published results. Schools encouraging high numbers of pupils to take these courses – whether they are doing so because of their own need to boost results, because of students’ needs or a bit of both – are therefore likely to get a results improvement out of doing so. Might not academies, then, under greater pressure to produce results gains, simply be turning to these courses to a greater degree than other schools?
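To make that arithmetic concrete, here is a toy sketch in Python. The grades, the course name and the pass-at-C-or-above treatment are all invented for illustration; the only feature taken from the real system is the idea, noted above, that one course could be weighted as up to four GCSEs.

```python
# Illustrative only: how one heavily-weighted "equivalent" course can carry a
# pupil over the headline "5 A*-C including English and maths" threshold.
# Grades and the 4-GCSE weighting are invented for this example.

pupil = {
    "GCSE English": ("C", 1),          # (grade, GCSE-equivalent weighting)
    "GCSE maths": ("C", 1),
    "BTEC (vocational)": ("pass", 4),  # a pass counted at C-or-above standard
}

good_passes = sum(weight for grade, weight in pupil.values()
                  if grade in ("A*", "A", "B", "C", "pass"))
english_and_maths = all(pupil[s][0] in ("A*", "A", "B", "C")
                        for s in ("GCSE English", "GCSE maths"))

# Three courses yield six "good passes", so this pupil counts towards the measure.
print("Counts towards headline measure:", good_passes >= 5 and english_and_maths)
```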

So, back to the research. I was surprised to find not only that Machin and Vernoit addressed this possible alternative explanation for the better results of academies, but that, when they did so, they found it did not account for the improvements academies seemed to show. In other words, they suggest, the use of non-GCSE “equivalent” qualifications does not explain the relative success of academies: that success stood even after this possible alternative explanation was taken into account.

The way they calculated this was fairly straightforward: simply to perform their calculations using GCSE qualifications alone as the measure of success in each school, rather than GCSEs “or equivalent”.

This, they say, represents their check on this idea – that I refer to above – “that the performance improvements [in academies] are largely driven by performance improvements in unconventional subjects”.

So, they conclude that putting pupils on “unconventional” GCSE-equivalent courses does not explain the academies’ results success. I should say here that I lack both the professional statistical expertise of these researchers and the time they no doubt spent on their study. But it does seem a slightly odd conclusion, given some other things we know about academy results, as revealed in more recent data sets.

First, I have performed a very crude version of a similar type of test to the one they used in their study, simply by looking at the latest published GCSE results of academies (all of them academies set up under Labour, and therefore the group from which the LSE study schools were taken) with “equivalents” and without. I have then compared these figures to those of non-academy schools.

I did this using Department for Education spreadsheets, adding up the number of pupils in academies in 2010 who achieved five A*-Cs including English and maths in GCSE or vocational equivalent, and comparing that to the total number of pupils in the academies they attended. The same calculation was performed to total up the number of pupils in academies achieving five or more GCSE A*-Cs when these were not allowed to include “equivalents”.

The figure for academy results – the proportion of pupils achieving five or more A*-Cs including English and maths, with vocational equivalents, which was the main published measure used in league tables under Labour and continues to be the main target for schools under the coalition – comes out at 43.3 per cent. Without equivalents, it drops to 33.0 per cent: a fall of 10.3 percentage points.

Now, a similar comparison for non-academy schools reveals a far smaller gap. With equivalents, non-academies end up on a figure of 57.0 per cent. Without equivalents, they finish on 52.5 per cent. This is a gap of 4.5 percentage points.

So, on the 2010 figures, “GCSE-equivalent” courses have contributed far more to academies’ headline results than they have at non-academy schools.
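For anyone minded to repeat the exercise, here is a minimal sketch of the spreadsheet calculation. The rows and column names are invented stand-ins – the real DfE performance tables use their own field codes, and pupil counts may need deriving from each school’s cohort size and published percentages – but the aggregation logic is as described above.

```python
import pandas as pd

# Minimal sketch of the spreadsheet exercise described above.
# Rows and column names are invented stand-ins for DfE performance-table data.
df = pd.DataFrame({
    "school_type":               ["academy", "academy", "other", "other"],
    "pupils_end_ks4":            [180, 220, 200, 190],
    "n_5ac_em_with_equivalents": [78, 95, 114, 108],  # 5 A*-C incl. Eng & maths
    "n_5ac_em_gcse_only":        [59, 73, 105, 100],  # same measure, GCSEs only
})

def equivalents_gap(schools: pd.DataFrame) -> float:
    """Percentage-point contribution of 'equivalents' to the headline
    measure, aggregated across all pupils in the group."""
    cohort = schools["pupils_end_ks4"].sum()
    boost = (schools["n_5ac_em_with_equivalents"]
             - schools["n_5ac_em_gcse_only"]).sum()
    return 100 * boost / cohort

for label, group in df.groupby("school_type"):
    print(f"{label}: {equivalents_gap(group):.1f} percentage points from equivalents")
```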

Second, there is evidence from the Government’s much-debated new English Baccalaureate measure. This showed, as I blogged about here, that nearly a third of academies with results to report had a score of zero per cent on the English Bacc, which records the proportion of pupils in each school with A*-Cs in English, maths, two sciences, a language and history or geography. Furthermore, the proportion of academies with that zero score on the EBacc was twice as high as in a comparison group of schools with similar intakes.

These data suggest, then, that if academies were improving their results, they were not doing so exclusively in the narrowly “conventional” subjects that Michael Gove has chosen to highlight through the EBacc. Yet the LSE study says its figures do not show the improved results at academies to be the product of gains in “unconventional” subjects. So, to repeat, it is strange how this evidence contrasts with the LSE research.

Beyond the GCSE “equivalents” move, there are other strategies which can be used to boost school performance if a school of any kind is particularly desperate to see its statistics improve. These include entering pupils multiple times for GCSEs in English and maths in particular, in the knowledge that these subjects are crucial to a school’s published rates. The Advisory Committee on Mathematics Education documented this practice in relation to maths last month, pointing out that pupils would sometimes be removed from the subject by their school once they had achieved a C grade before the end of their course, to give them time to focus on other subjects important to the school’s results – even though the pupil might have been chasing a grade higher than a C in maths, a grade not important to the school’s published indicator. I have no evidence that this has happened to any greater degree in academies, though, as I say, I think the pressures on most of them to improve results have been great. But any study should be aware that headline results indicators will often not present the whole picture of what has been going on in schools.

My final detailed response to the study is anecdotal. And here, I just want to refer back to my original blog on academies’ EBacc results, a couple of months ago, for evidence.

This made several points in relation to studies and anecdotes on the subject of history.

According to research by the Historical Association, academies were more likely than other types of school to have fewer students studying history to GCSE. Academies were also more likely to have a two-year key stage 3, which gives pupils more time to prepare for GCSE but was a concern to the HA because it meant many pupils were likely to lose one of the only three years in which they would study history at secondary school.

The report also quotes a teacher, from an academy, saying: “History is seen to be too academic! …Students who are predicted lower than a B are not allowed to study the course…We are also not allowed to run ‘entry level’ courses for students with specific needs, as that is not thought to be meeting the attainment targets for the academy.”

An Ofsted report on history teaching in primary and secondary schools, published earlier this year, also documented lower numbers taking history in academies. It found: “Entries for GCSE history from academies were significantly lower than for maintained schools overall.”

One online comment, posted after a 2009 TES story documenting another academic report on the pressures facing history as schools sought to boost their league table results, ran as follows:

 “I used to work in an academy in London, and as I was leaving I had to rank every pupil in year 8 as an A, B or a C. A means that they could get an A or a B at GCSE. Therefore history appeared in their option forms. The B category were pupils who were borderline C/D. The C meant that they were predicted grades G to D. Neither categories B or C had history on their option forms! They were encouraged to take other less rigorous subjects.

“Even though I had known students previously predicted Ds and Es get outstanding results, who went on to do exceptionally well at A-level, and some even went on to do history at university.

“What was most upsetting was the case of one student, with a range of learning difficulties. He loved history, and orally he was phenomenal. He was put in category C, and was therefore being guided down a different pathway. He was devastated that he would not be able to take history in year 9-11. His mother rang the school, and explained that it was likely whatever course he was entered into, he would be unlikely to either pass or do very well in, so why couldn’t he at least take a subject he enjoyed?

“The plea fell on deaf ears and the boy was placed in some random BTEC or GNVQ course taught by some bland paper pushing academy drone who was being shipped in to ‘sort’ the school out of failing pupils and failing teachers.”

If you look back to my earlier blog, you will find reference to the parent of a pupil at a school taken over by the Harris chain of academies, who told me (and the local paper) that her daughter had been forced to take a BTEC sports course (worth two GCSEs to the school), at the expense of French GCSE, despite her daughter having no interest in sport. This was a clear case, said the parent, of the needs of the school to boost its published results taking precedence over those of her daughter.

So in response to this LSE study, I have put forward some statistics that run contrary to one of its more important findings, and also some anecdotes.

Not much, you might think. But there is a bigger point here: there should be more to the evaluation of a policy than simple results statistics, however clever the methodology and however robust the statistical cross-checks – especially in a system as complex as secondary school results calculations, which offers plenty of opportunities for schools to take tactical decisions to boost results. Such tactics risk encouraging behaviour, within particular subjects, that is less than ideal from a pupil’s point of view.

And is all that matters the number that appears at the end of the educative process? Or do we care about what happens along the way, and how the numbers are generated? If particular subjects have been affected in the drive for higher results, should an influential study like this not be investigating and having something to say on this? Or should such a perspective just be ignored: the idea is that we lay down the statistical rules for success, check whether the statistics have been raised and that, apart from some clever checking of data, is pretty much it?

To sum up, how do we know that academies under Labour did not simply pursue a more relentlessly focused version of “Education by Numbers”?

I think if researchers are going to make claims which will be used by others, whatever the caveats in the original research, to say categorically that a policy “works” – and, by implication, that the education on offer in academies is generally better than in other schools – they are going to have to be prepared to dig a little deeper, and not just statistically, into what has been going on behind the figures. Economists who do not do this will never be able to see or pronounce on the whole picture, I believe. Their research will therefore always be incomplete.

So it is a shame that statistics are simply being held up as conclusive evidence, one way or the other. For all the complicated formulae and technical expertise on display in this paper, this really is not, I think, a very sophisticated way of understanding what has really been going on in our schools.


Wednesday, June 1st, 2011

Right, I haven’t blogged for a while, but thought I’d just post here an extract from a speech I made just after Christmas about what can be read into English Sats results for 11-year-olds.

I’ve been prompted to do this after reading, over the last two days, the Evening Standard’s coverage of what it claims is a literacy crisis in London.

Yesterday, part of its front-page coverage talked about one in four children being “practically illiterate”, seemingly based on the proportion of pupils achieving level 3 or below in English Sats.

Today, it highlighted the number of pupils “with a reading age of seven”, based, I think, on the numbers achieving level two or below (level two being the standard the Government says a seven-year-old is expected to reach in reading).

I don’t think the test statistics can support the interpretation being put upon them. It may be that we have a literacy problem in the capital, or in the country as a whole. But the test data used as a good part of the news hook for the coverage do a poor job of telling us the nature of the problem. Nor does it help that news coverage often fails to put the numbers in perspective: ideally, it would give us unsensationalised information on whether the statistics are on an upward, downward or static trend, and on how this country compares to others, but this tends not to happen.

Anyway, here’s the extract of that speech, prompted in part by similar coverage on the Today programme before Christmas.

I want to talk about the over-interpretation of test results: they don’t tell us nearly as much as we might think they do. Perhaps just as importantly, we don’t use the data, in our public debate around education, really to understand what is going on in schools or with pupils’ learning, and in that sense we are letting children down because we should be using assessment information in a far more sophisticated way, I think. And bear with me, as I am going to have to go into a bit of detail here.

So, I’ll just start with a question: What is the definition of the level of understanding expected of an 11-year-old in reading? How is this defined by the government, by the media, and thus by people nationwide in the debate about this vitally important subject?

What does it mean, within the detail of what children have to achieve, for them to perform at that level?

Well, in 2010, it came down to this: the ability of a child to score 18 marks out of 50 in a one-off, 45-minute test, taken by most pupils as they come to the end of their primary school years.

That is the number of marks needed to secure level four in reading, the Government expectation, and represents the entire official judgement on that pupil’s ability in reading over the past four years.

If a child scored 30 marks out of 50 in last year’s tests, they would have achieved a level five in reading, which, statistically and according to the interpretation we are expected to put on these data, is the level of proficiency expected of a 14-year-old. If they scored between 11 and 17 marks, they would be at level three.

That is it. Nothing else counts in official estimations of what it means to be able to read. Our entire primary education system – at least so far as reading is concerned – hinges on the proportion of pupils achieving these expectations and pass marks, which are very closely bunched, in a one-off test one day in May.

I highlight the case of reading because it came up in coverage by the Today programme shortly before Christmas. It led its broadcasts one morning with claims that “thousands of boys start secondary school only able to reach the reading standards of seven-year-olds or below”.

This was based on a technically accurate interpretation of figures generated by national test data, but it led me to question why people put such huge weight on figures which, if you step back for a second and think about the detail of what these data mean, cannot support this interpretation.

Today had obtained figures – released every year – which showed that in 2010, 10 per cent of pupils obtained below a level three in the reading test. This means that they either scored 10 marks out of 50 – 20 per cent – or below on the tests, or did not even take them.

The logic of Today’s argument was this. Pupils scoring below level three in the reading test have scored level two at best. Level two is the performance technically expected of a seven-year-old in the tests pupils take at this age. So the 10 per cent of boys failing to achieve level three are performing at the level expected of a seven-year-old.

This finding suggests a serious problem – implying to many listeners, I would venture, that many boys are wasting years at school making no progress – and is viewed as a national scandal; it is, at the least, very serious for these boys.

Consider, though, more detail on how these data are generated. A child could fail to achieve a level three with 10 marks out of 50. But with another eight marks – 18 out of 50, or 36 per cent – these boys would have achieved a level four, in line with government expectations of an 11-year-old.

The difference between having the reading age of a seven-year-old, then – around which national debate centred – and that of an 11-year-old turns out to be eight marks on one 50-mark test. To put it another way, a seven-year-old who took this reading test could have scored 10 marks and be said to be performing in line with expectations for their age.

If they took a similar test four years later, as an 11-year-old, and scored 18 marks, they would be deemed to be doing as well as expected for an 11-year-old. Thus, four years’ progress in reading could be said to come down to the ability to improve by two marks a year on a 50-mark test.
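To see quite how closely bunched these thresholds are, here is the 2010 mark scheme, as quoted in this post, expressed as a simple lookup:

```python
# The 2010 KS2 reading-test thresholds as quoted above: 50 marks available;
# level 3 from 11 marks, level 4 (the expected standard) from 18, level 5
# from 30. Below 11 marks a pupil is below level 3, ie level 2 at best.

def reading_level(marks: int) -> str:
    assert 0 <= marks <= 50
    if marks >= 30:
        return "level 5 (standard expected of a 14-year-old)"
    if marks >= 18:
        return "level 4 (standard expected of an 11-year-old)"
    if marks >= 11:
        return "level 3"
    return "below level 3 (level 2 at best: a 'reading age of seven')"

for marks in (10, 17, 18, 30):
    print(f"{marks:>2}/50 -> {reading_level(marks)}")
# Eight marks (10/50 vs 18/50) separate a "reading age of seven" from
# meeting the Government's expectation for an 11-year-old.
```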

I went into some detail in this example to illustrate the difficulties we have in the way test data are being used. Believe me, I am not trying to minimise this problem: if a large number of boys really cannot read, it is a serious national issue.

The trouble is, I don’t think the test data – and, to a certain extent, the way they are reported – are helping us understand the nature of that problem, and thus to do something about it.

Consider again the interpretation of the figures around which the Today programme that morning revolved, including an interview with Michael Gove, the Education Secretary.

The lead headline on the programme’s website read “Gove: 11-year-old illiteracy ‘unacceptable’”. John Humphrys, the presenter, also used the term “illiteracy”.

But, in fact, the test data actually tell us nothing about “illiteracy”. They don’t tell us whether the number of boys quoted in the programme actually are “illiterate” – that is, cannot read or decode text – or whether their problems are of a different kind.

Strictly, they tell us only that a number of boys either could not score a certain number of marks in a one-off reading comprehension test (further scrutiny of the government data shows 4 per cent were entered but did not achieve level three), or were not entered for the test by their teacher in the belief that they would not pass (5 per cent), or simply missed the test (1 per cent).

We don’t know, then, from the test data, whether the problem for these children is: a) a genuine inability to decode text – although the fact that nearly half of them scored some marks on this test would suggest this was not the issue; b) a problem with reading for comprehension (ie they can actually read the words, but they don’t really understand either what they mean or what the question is asking); or c) a failure to cope with the format of being tested.

There is, of course, another explanation: that these children scored below their “true” level of understanding through having an “off-day” or just being unlucky: there will always be measurement uncertainty and inaccuracy in a one-off test.

If we don’t know what these figures actually mean, how can we do anything to help children to genuinely improve? Is what the nation needs a greater emphasis on helping children with decoding, as is suggested through the introduction of a new phonics test, or more work on comprehension, for example? The test data give us no answer.

It also was not reported – and generally isn’t – that substantial numbers of pupils in this category of failing to reach level three have special educational needs: by my calculations from government data, seven in 10 children who failed to reach level three in English in 2010 were classed as having a special need. Nearly 10 per cent of those failing to reach level three are classed as autistic; a further seven per cent have specific learning difficulties; 10 per cent have communication needs; and a further 10 per cent have behavioural, emotional and social difficulties. None of these figures was presented in the Today programme’s reporting.

Neither, by the way, was any international context given: boys’ reading is a problem around the world, as last month’s Organisation for Economic Co-operation and Development PISA study showed. It included the following quote: “Across OECD countries, 24 per cent of boys perform below level 2 [at the bottom of six levels of the PISA reading tests], compared to 12 per cent of girls. Policy makers in many countries are already concerned about the large percentage of boys who lack basic reading skills.”

The fact that these test data – and sometimes the reporting around them – allow us only a very superficial, decontextualised understanding means that we really are letting down pupils and the education system as a whole.

We could do so much better. Not only does the accountability system which centres on pushing schools to raise these test numbers – through league tables, targets, Ofsted inspections and the rest of the centrally-created performance apparatus – encourage schools to spend months drilling pupils to achieve what amounts, if you look in detail at what the figures mean, to a few extra marks on one-off English and maths tests; we also lack the understanding, in terms of the national data these test figures generate, both to help these pupils do better – ie to work out what it is they can and cannot do – and to help the system as a whole to improve.

Today presented the problem as a hugely serious issue for the nation. But we are not taking it seriously at all if this is the level of analysis being offered.

If Sats are the height of our ambition in assessment – and there are still signs, under the new government, that this is what pupil progress will revolve around – then we really have a problem. We need to look at much more sophisticated and useful measures of children’s understanding, both from the point of view of helping the individual child improve and from that of getting a much better understanding of what is really happening nationally.

The rest of this speech – delivered to a meeting held in Parliament in January to launch a joint Association of Teachers and Lecturers/National Association of Head Teachers/National Union of Teachers pamphlet on assessment and accountability, to which I contributed – went on to talk about the problems of the washback effect on teaching of high-stakes, test-based accountability, with which readers of this blog will be familiar.


 

Tuesday, April 5th, 2011

This is just a brief blog to acknowledge the publication today of the interim report by Lord Bew’s inquiry into Key Stage 2 assessment.

I have to say, I have been impressed with the amount of evidence garnered by this review. More than 4,000 people responded to the online consultation, and the review also heard from 50 people in person. There is a lot of research referenced. I gave evidence myself, setting out concerns raised in Education by Numbers, and discussing with the panel the strengths and weaknesses of the current system.

On a snap judgement, Bew seems to me to be taking a more thorough look at this subject than any other government inquiry since I’ve been covering this ever-contentious field. There is, for example, more evidence on display in this report than that discussed in the last government’s “expert review” on assessment, which concluded in 2009 and led to the scrapping of Key Stage 2 science tests.

As I said to the latest inquiry, it is still amazing, I think, that more than 20 years into this system of national assessment there has been no comprehensive observational investigation into the extent and nature of test preparation and other side-effects of results pressures in schools – if, that is, extensive test preparation is to be seen as a side-effect.

 In the absence of this detailed study, perhaps this latest investigation – brought about, don’t forget, following industrial action last year against the KS2 tests by the National Association of Head Teachers – will be the most thorough we are going to get.

That said, I worry that its remit – and perhaps the tight economic situation – will limit its scope to make radical change. The OECD evidence stating that “external accountability is a key driver of improvement in education” – cited in this interim report in a reference to the remit handed to it by Michael Gove – needs to be scrutinised carefully, too. I’m not sure that the OECD does have evidence that high-stakes, test-based accountability of the English/American sort has driven improvement, as measured by OECD test data. It may be, of course, that the remit is not specifying that England stick with test-based accountability similar to the current sort, but I wonder if that is the hint. Anyway, the OECD evidence does deserve closer study, which I will get on to in the coming days and weeks.

There is also evidence cited in Bew which is new to me and highly relevant to this debate; hopefully it will form the basis of future blogs on this site. So watch this space.

 If you’ve been following me on twitter, you’ll also have noted a string of tweets from me when the report was published today. I’ve just cut and pasted them below, essentially because they serve as my instant judgement “edited highlights” of the report. They are meant to be read from the bottom up.

Thanks.

Warwick (@warwickmansell on twitter if you want to follow me and aren’t already doing so)

Russell Hobby, gen sec of NAHT: heads looking forward to a “radical shake-up” of assess system. Criticism of current system “overwhelming”.

Final report expected June at the earliest, I think.  

Phew! That’s enough on that, for now. Impressed with range of evidence being used.Recomms will be interesting, esp given tight remit,economy

That last tweet reflects what was always a quietly powerful criticism of current system, I think. (And was cited by M Gove in 2009)

Bew: “feedback suggests secondary schools make limited use of stat test data to support transition”.Many 2ndary heads concerned re over-prep  

Bew: some discussion among some assessment orgs of stimulating a “market” in testing.

Bew: Headteachers involved in the [now abandoned] pilot of single level tests said they had gone positively.

Bew: split views on abolition of science tests; some say science now more fun; but general view that science teaching had “lost impetus”

…but writing tests generated most concern. Some 43 per cent of online consultants said they were “inadequate”; 33pc “not v effective”.  

Bew: “Widespread recognition” that tests themselves are well-developed.

Bew: some junior school heads concerned they lose out under current system, as some “infant schools inflate their KS1 assessments”

…but some research suggests replacing tests with TA might disadvantage poorer pupils, some ethnic groups  

Blimey: Centre for Policy Studies says current tests are “biased towards families from middle class homes”.

Bew: testing regime can disadvantage both high- and low-attaining pupils.

 Bew: feedback suggests progress and achievements of children with special needs not “appropriately recognised, celebrated” by current system

Bew: Cambridge Primary Review team argue that current national test data provide little useful information on national education performance

Bew: 59 pc of respondents said more weight should be placed on teacher assessment

Bew: 2009 DCSF survey found 65pc of parents valued their children taking KS2 tests.  

Bew: 2008 survey by the National Confederation of Parent Teacher Associations found 78 pc of parents placed high/med value on external tests

Bew: 15 per cent of online respondents criticised impact of test data on Ofsted judgements, “some expressing deep concern”.

Lord Sutherland, who conducted review of test marking shambles in 2008, among those criticising current system.  

Bew: ASCL believes league tables are “driving the whole education system”, leading to assessment for own sake, rather than re pupils’ needs

Bew: many heads say they “have to” teach to the test, despite clear evidence it is a poor strategy. [Unsurprising, when jobs are on line]

Bew: most respondents said they supported testing; way the data get used is problem.  

Bew: 62 per cent of online consultation respondents had concerns about the way test data are used.  

Bew cites OECD research saying “high-stakes accountability”..But accountability systems work very differently in different countries, I think

 Bew: 50 per cent of respondents wanted league tables removed.

Interesting..Bew review says one of its key tasks will be to define purposes of statutory assessment;system then designed to fit these purps

Bew: almost all respondents have questioned the purposes of statutory assessment.

Bew: most of the evidence submissions (61 per cent) were from primary heads; 23 per cent from primary teachers; only 4pc from parents.  

Bew: many contributors recognise positive features of current system, eg “impact on driving up achievement, progress”, which we shd protect.

Bew: “significant concerns” about focus on children on borderline of national test levels.

Bew: 4,000 responses to online consultation. Many schools feeling that they must drill children for tests is “deeply worrying”.

Bew: It is “increasingly clear that there is not a single set of solutions which can command universal support.”  

Bew: “Change is clearly needed” but acknowledges “complexity in the challenge we face”.


…he’d say something about academies’ English Baccalaureate results

 Monday, March 28th, 2011

Last autumn, Michael Gove appeared on the BBC’s Question Time and launched a passionate attack on what he claimed was a glaring injustice within English education.

Children from disadvantaged backgrounds, he suggested, were being let down by a system which assumed they could not succeed in traditional academic subjects.

He said: “If you look at what happens in France, or in Holland, or in Canada, or in Singapore, or in Hong Kong, or in any of the countries which have got education systems many of which are much better than our own, they expect children at the age of 16 to have a rounded education.

“[This] means they are fluent in their own language…[and expected to] master the sciences, to study a humanities subject like history or geography, which build human sympathy. That’s the rounded education they expect.

“And the problem we have had in this country, as an historical problem, is we have automatically assumed an academic education is only for a minority: only 25-30 per cent of people can succeed. 

“Well, that is rubbish.”

“All of us are facing an educational challenge in this country,” he continued. “How can we ensure that we end the patronising twaddle of the last 30 years that assumes that just because kids come from working class backgrounds, they cannot succeed in academic subjects?

“With my background, I am determined to ensure that people have that chance. And when people say ‘oh, you are demoralising children because they cannot succeed’, what I hear is the next generation being written off because we do not have high aspirations for them.

“One of the reasons I am in politics is to make sure that we transform our education system so that kids who have been written off in the past at last have the chance to succeed.”

Well, all this was greeted enthusiastically by some commentators.

But is this passion real?

If so, you have to wonder why Mr Gove has not taken a much closer interest in what has been going on in his favourite type of school: academies. His lack of interest might suggest his emotion is synthetic. Or to be more charitable, when a seemingly heartfelt desire to do what he thinks is the best thing by working class pupils runs up against the demands of political ideology, ideology wins.

Before going into the detail on academy results, I should state something now.

It is this: I am a “what works” type of person.  I don’t like ideology, or the idea that something should be implemented because it fits a theoretical schema or model of how things ought to run best. This means I’m not one to dismiss any type of organisation of schooling out of hand. I am, as might be guessed from the length of some of the blogs on this site, a details person.

Any consideration of academies, then, should be carried out on the basis of as full as possible an understanding of the effects of these new schools across a local area. Academies are sometimes sold on the basis that their governance supports innovation, and that they have brought dynamism to England’s system. Evidence on this should be weighed against that relating to other arguments, including the financial implications of these new school arrangements, the impact of academy freedoms on equity, their effect on teacher recruitment and retention, their effect on local admissions and the interaction with accountability to local people. Above all, we should try to get an understanding of the detail of what has happened in academies and other schools in their localities.

The trouble is that we never get this fair reckoning, in my experience, because the politicians, of both this and the former government, who shape the debate are so committed to the policy – as a structure for running schools which they prefer to the traditional model of state education – that they do not present the evidence even-handedly. GCSE results press releases have consistently highlighted academies’ results as better than those of non-academy schools, based on faster average improvements on the main GCSE performance measures, when there are other ways of looking at what has been going on, and even though basic questions – such as whether the pupil make-up of each academy has changed compared to its predecessor school(s) – are not addressed in the statistics. Remarkably, this cheerleading presentation of academy results has continued even after the publication of results on Michael Gove’s new “English Baccalaureate” measure.

In January, the league table results in which Mr Gove introduced that new performance indicator – the English Baccalaureate – were revealing, although not surprising, in what they documented about the statistics of academies.

These schools, usually set up through a contract agreed between a sponsor and central government, had long been said by Mr Gove and his Labour predecessors to be improving their headline GCSE results at well above national average rates.

This was based on the main figures published under Labour: the proportion of their pupils achieving five A*-Cs including English and maths, in GCSEs or vocational equivalents.

But the “Baccalaureate” figures, which ranked schools on the proportion of their pupils achieving good GCSEs in not just English and maths, but also two sciences; history or geography; and a language, painted a very different picture. Many academies were right at the bottom of the English Baccalaureate league tables, with nearly a third of those with results to report recording zero per cent of their pupils achieving this new benchmark.

Some three quarters of the academies with results to publish had five per cent or fewer of their pupils – that is, at most one in 20 – achieving the EBacc, compared with a national average of 16 per cent of the cohort achieving the new benchmark. At all but 24 of the 187 academies with results, performance on the EBacc was below 10 per cent.

The press release put out by Mr Gove’s department – which would be very worried about the situation in academies, you would expect, if it shared his concern about pupils missing out on a broadly academic education as he defines it – said nothing about these statistics.

Instead, it mentioned academies only in reference to the old measure, proclaiming that: “Academies continue to show improvements in getting five good GCSEs (or iGCSEs or equivalents) including English and mathematics at a faster rate of 7.8 percentage points compared to other schools, which improved by 4.5 percentage points.” Government comment on schools’ results in the EBacc focused, then, on the 16 per cent figure for schools as a whole, which was seen as low and might underscore, in the public mind, a view that radical change – including academy status – was needed. In the press release, this seemed to be underlined by the inclusion of comments from two academy managers – again, without reference to academies’ EBacc results.

Many have said, and might argue here, that the retrospective EBacc measure is unfair on schools. But it seems to me that someone introducing this measure simply from a desire to highlight the performance of schools in the subjects contained within it, without prejudice towards any particular type of school, would have used the EBacc results to include at least a heavy element of caveat in what was being said about academy results overall. Yet the spin – or a desire to present academies as always better than other state schools – seemed to be taking over.

In recent weeks, the TES has been following these results up with stories: first, that only six per cent of pupils in schools run by the government’s three favourite academy companies achieved the English Baccalaureate this year, compared to the national average of 16 per cent.

The government, or backers of academies, including those teaching in them, might respond here by saying that this comparison is unfair. Labour’s academies, which were the only ones open and able to provide the GCSE figures on which these statistics are based, were set up mostly in disadvantaged areas, with challenging intakes, so it would not be right to try to compare them to the national average, which will include schools with more middle-class  pupils.

But, as Mr Gove’s comments on Question Time should make clear, that defence is not open to him if he wants to say that what is going on in academies is not significant. For he has argued that we need to have high aspirations for good academic achievement among children from disadvantaged backgrounds. All schools should be getting good results according to his new benchmark – and, if he were being consistent, you would expect him to say that the results of academies were especially concerning.

Even more damningly, the TES also produced statistics claiming that academies actually fared worse, not just than the national average English Baccalaureate figure, but when compared to non-academy schools with similar intakes.

“No pupils gained the English Baccalaureate in 31 per cent of the academies that entered pupils for GCSEs and their ‘equivalents’ last year,” it said.

“But only 17 per cent of non-academy comprehensives and secondary moderns with the same proportions of pupils on free school meals with special educational needs completely failed to score on the EBac.” [My italics].

Going slightly further up the league tables, the TES found that 73 per cent of academies achieved less than five per cent on the EBacc measure, compared to only 55 per cent of non-academy comprehensives and secondary moderns with comparable numbers of special educational needs and free school meals.

Not once, though, as far as I am aware, has Mr Gove made any comment about this disparity. It raises the obvious question: is something peculiar going on in academies which is producing these numbers? Mr Gove doesn’t appear to have looked very hard for an answer.

Indeed, in a letter to academy principals last month, after the EBacc results had been made public, Mr Gove began: “The Academy programme has already proved itself an exciting, powerful and dynamic force for higher standards in our schools.”

He added: “Sponsorship has been key to transforming some of our most challenging schools, bringing added drive, vision, resources and expertise, to create a culture of higher aspiration.”

Although the letter talked about the importance of getting all schools above a new “floor standard” of 35 per cent or more of pupils achieving five or more GCSE A*-Cs including English and maths, there was no mention anywhere within it of academies’ results in the EBacc.

Now, if Mr Gove were really concerned to look without prejudice at the effects of government policy in particular types of schools, he might also have wanted to consider a report produced in 2009 by the Historical Association, which contained some very interesting statistics, relevant to academies and other schools, on the exposure of pupils to one academic subject Mr Gove has been very concerned to emphasise.

This report, based on a survey of 644 schools including 23 academies, found that only 59 per cent of academies taught history as a discrete subject in year seven, the lowest of any of the four categories of school surveyed (the others were non-academy comprehensives, grammars and independents). Some nine per cent of academies had a two-year key stage 3 curriculum, allowing pupils to drop history at 13, compared to six per cent of comprehensives, three per cent of grammar schools and one per cent of the private sector.

Nearly 48 per cent of academies reported that year seven pupils spent an hour a week or less on history, compared to 30 per cent in comprehensives, 12 per cent in grammars and seven per cent in independents. There was a greater spread of teaching time in comprehensives, however, with 38 per cent likely to devote more than 90 minutes a week to history – a higher figure than for grammar schools and fee-charging schools. “The academies remain the least likely to give such generous allocations,” said the report. “Less than 20 per cent of them thought it worth investing more than 90 minutes a week in the subject.”

Academies also seemed to be reducing the time allocated to the subject faster than other types of school. More than half reported that the time devoted to it in year seven had dropped since the previous year, compared to one third of comprehensives, while time reductions in years eight and nine were also most widely reported in academies (35 per cent, compared to 20 per cent for comprehensives).

In terms of GCSE history numbers, academies were the only type of institution where greater numbers reported a decrease in entries for the subject (33 per cent of academies, compared to 17 per cent of comprehensives) than an increase (19 per cent of academies, compared to 27 per cent of comprehensives).

The report also found extensive evidence, with no particular type of school mentioned, of history struggling for GCSE numbers in the face of competition from other subjects, with vocational qualifications “which in many cases lower-attaining students were being compelled to take” mentioned in a quarter of cases.

The report includes the following quotation, not mentioning what type of institution was involved. “Students have been deliberately denied an opportunity to study history by forcing them down vocational or academic pathways. GCSE students have also been taken off courses against their wishes to do BTEC qualifications in six months so that the school can boost its position in the league tables. This has happened to students who were otherwise on target for a C/B in history but were doing badly on their other optional subject.”

 It quotes a teacher, from an academy, saying: “History is seen to be too academic! Entrance to the course is based on Fischer Family Trust predictions, and students who are predicted lower than a B are not allowed to study the course…We are also not allowed to run ‘entry level’ GCSE courses for students with specific needs, as that is not thought to be meeting the attainment targets for the academy.”

Education by Numbers indeed, in both cases, and seemingly classic examples of the need of the institution to raise its statistics being put above individual student concerns, at least as these history teachers see it.

Ofsted’s report on history teaching in primary and secondary schools, published this month, also documented lower numbers taking history in academies. It found: “Entries for GCSE history from academies were significantly lower than for maintained schools overall,” at 20 per cent of students in academies compared to 30 per cent for non-academy state schools (and 48 per cent in fee-charging independent schools).

In 2009, the TES reported on a study by academics at the universities of East Anglia and Southampton, which also found results pressures to be a heavy influence on schools’ decisions over history.

The academics are quoted as saying, with no particular type of school identified: “Pupils’ interests were not necessarily put first. For the senior leadership team in some schools, the first priority was the school’s examination profile.”

Beneath the TES story, there was the following comment:

“I used to work in an academy in London, and as I was leaving I had to rank every pupil in year 8 as an A, B or a C. A means that they could get an A or a B at GCSE. Therefore history appeared in their option forms. The B category were pupils who were borderline C/D. The C meant that they were predicted grades G to D. Neither categories B or C had history on their option forms! They were encouraged to take other less rigorous subjects.

“Even though I had known students previously predicted Ds and Es get outstanding results, who went on to do exceptionally well at A-level, and some even went on to do history at university.

“What was most upsetting was the case of one student, with a range of learning difficulties. He loved history, and orally he was phenomenal. He was put in category C, and was therefore being guided down a different pathway. He was devastated that he would not be able to take history in year 9-11. His mother rang the school, and explained that it was likely whatever course he was entered into, he would be unlikely to either pass or do very well in, so why couldn’t he at least take a subject he enjoyed?

“The plea fell on deaf ears and the boy was placed in some random BTEC or GNVQ course taught by some bland paper pushing academy drone who was being shipped in to ‘sort’ the school out of failing pupils and failing teachers.”

The notion of pupils being forced into taking subjects for the good of the school’s statistics reminded me of a conversation I had in late 2009 with the parent of a child at a school which had just converted to be run by the Harris Federation of South London Schools.

I was following up on a story in the local paper on the anger of the mother, Moira Macdonald, that her daughter, studying at Harris Academy Purley, near Croydon, had been forced to take a sports BTEC worth two GCSEs.

The academy replaced Haling Manor school – which was under pressure because its headline results were below the Labour government’s “floor targets” – at short notice in September 2009. In May of that year, Dan Moynihan, chief executive of the Harris Federation, wrote to parents saying that students would be “required” to do a BTEC in sport.

In the old school, French had been compulsory, but it became an option at the academy. The new academy’s options structure allowed pupils to take up to two optional GCSEs alongside English, maths, science, enterprise, religious studies and the BTEC sports course. The BTEC sports course (worth two GCSEs) would be taught in only three periods a week, said its options booklet, rather than the five the school was devoting to maths GCSE, which is worth one.

Ms Macdonald told me her daughter had opted for geography and history and had therefore had to drop French, even though she would rather have kept it than take the sports BTEC, since she had no interest in pursuing a career in sport.

Ms Macdonald said: “The academy is promising massively improved results and I am not surprised considering they are making soft subjects compulsory and dumping hard-earned GCSEs.

“The Harris Academy overrode the GCSE core subjects set for my daughter and her colleagues before the takeover, in order to improve their league table results.

“This is no way to educate kids – they need to be taught proper subjects and come away with proper qualifications.”

These quotations were featured in a report by the Civitas think tank in 2009, which tried to look behind the secret of academies’ statistical success. It asked the question which I think anyone should ask when confronted with statistics showing rapid improvements: how were they being achieved? It offers substantial evidence of some of the approaches within academies, as it describes them: pupils being pushed towards “less challenging” [Civitas’s words] subjects and qualifications “to drive up headline results”. So this investigation – asking whether there was a specific “academies effect” behind their generally improved headline results, ie searching for the reasons behind them rather than relying on the often-cited but too vague “sponsors’ ethos” claims – was available to Mr Gove, but I know of no detailed reaction to it.

Now, I thought I’d say here what I think has been going on in academies. Perhaps it would be better to start with a question: if they have lower numbers of pupils taking academic subjects such as history, why is this? Well, I guess there are two responses.

The first is to say that, as mentioned above, the original academies set up under Labour tended to serve – though were not exclusively located in – disadvantaged communities. All other things being equal, one might expect these schools to struggle to recruit pupils to the traditional academic subjects, such as history and languages, that Mr Gove now focuses on through the English Baccalaureate.

There is likely to be some truth in this. However, the TES figures suggest that academies have lower results on the EBacc measure not only when compared against the national average for all other schools – driven partly, one would assume, by academies’ possibly lower take-up of EBacc subjects – but also when compared to schools with similar intakes.

Further, as suggested above, this is not a defence open to Mr Gove if he is truly to be seen as a champion of academic education for the vast majority of pupils. If he really cared about that, he would be speaking out passionately against some of this practice.

To me, a second response suggests itself. It is this: the practice in academies could be seen as a kind of “Education by Numbers” squared, or “Education by Numbers” amplified. The institutional forces on them to raise results on the published indicators are so large that the kind of practices documented above would be expected to have occurred, perhaps to a greater degree than elsewhere in the maintained sector.

Of course, there is no evidence that these approaches – including pushing pupils away from the academic subjects now included in the EBacc and towards those which have up to now carried high weight in league table calculations – were and are going on in all academies. But I do think there is likely to have been a general tendency in this direction, based on a few facts about academies, which run as follows.

Academies were usually set up, under Labour, in response to perceived problems of low attainment: the results of the schools they replaced were said not to be good enough. I have observed, for example, how raw exam statistics, in relation to some schools, were virtually the only evidence put forward as the rationale for the extensive restructuring and investment that comes with setting up an academy.

In this context, almost the raison d’être of these new schools was to improve headline GCSE statistics. If they didn’t do so, one could ask why the change to academy status – often controversial, and often backed with tens of millions of pounds of investment in new buildings for individual schools – had come about. I suspect, too, though I have never seen evidence beyond the odd reference to a “performance related bonus” in job adverts, that academy leaders have had performance pay tied to raising published exam numbers.

The published results are not just high-stakes locally for individual schools, of course. They are also important for academy chains, whose reputations – in a system which really does not look very hard for alternative evidence – rest on them.

And politically, at a national level, of course, the success or failure of the academies scheme was seen to be judged almost exclusively on whether one or at most two numbers rose: the central indicator of the proportion of pupils achieving five or more GCSEs or vocational equivalent at A*-C including English and maths, and arguably the old measure without the English and maths stipulation.

In this kind of atmosphere, academies will have been under even more pressure, I believe, to game the system in obvious ways – such as a very sharp focus on C/D borderline pupils and use of alternative qualifications – in order to deliver those results, against the context of them being demanded quickly from, often, very challenging pupil cohorts.*
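To make the institutional arithmetic concrete, here is a deliberately simplified sketch, in Python, of how the headline measure worked. To be clear, this is my own invented illustration: the equivalence table, the function names and the example pupil are all made up, and the real counting rules were far more detailed.

GCSE_EQUIVALENCE = {"GCSE": 1, "BTEC First": 2}  # a BTEC First counted as two GCSEs
GOOD_GRADES = {"A*", "A", "B", "C"}

def good_passes(results):
    # Count A*-C passes, in GCSE-equivalent units, for one pupil.
    # `results` maps subject -> (qualification type, grade).
    return sum(GCSE_EQUIVALENCE[qual]
               for qual, grade in results.values()
               if grade in GOOD_GRADES)

def counts_for_headline(results):
    # True if the pupil counts towards "5+ A*-C including English and maths".
    core_ok = all(results.get(subj, ("GCSE", "U"))[1] in GOOD_GRADES
                  for subj in ("English", "maths"))
    return core_ok and good_passes(results) >= 5

# Three GCSE grade Cs leave a pupil two short of the threshold; one
# two-GCSE-equivalent BTEC takes them over it.
pupil = {"English": ("GCSE", "C"), "maths": ("GCSE", "C"),
         "science": ("GCSE", "C"), "sport": ("BTEC First", "C")}
print(counts_for_headline(pupil))  # True: 3 GCSEs + 2 equivalents = 5

On that simplified arithmetic, a single BTEC taught in three periods a week could do the headline work of two GCSEs – precisely the worth-to-the-institution, rather than worth-to-the-pupil, calculation described above.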

But the impact on individual pupils in the chase for better results – in terms of denying them precisely the kind of curriculum Mr Gove claims now to want for all schools – can be large.

I should say here – and some will no doubt challenge this – that I don’t want to criticise non-GCSE qualifications. The idea that schools, pupils and parents should be free to choose the courses they think are right for the pupil, with a full understanding of the likely benefits to the individual in the long run, is powerful. My concern is that our current system – with league table statistics that have placed a surely-too-high weight on some non-academic courses, and little scrutiny of what has been going on behind those statistics – has pushed schools to take decisions based on the worth of a course to the institution, rather than to the individual.

It was revealing, I think, that in a recent exchange I had on Twitter on this subject with Sam Freedman, Mr Gove’s adviser, Sam predicted that academies’ results would improve quickly on the EBacc indicator now that it has been introduced. But this only seemed to confirm, in my mind, that academies are exceptionally focused on league table metrics, ie on results for the institution. Mr Freedman may suggest that this is OK now, since this government has sorted things out so that the metrics are better aligned with pupils’ interests. I think that is a very optimistic reading. It also implies a lack of interest in what happened under the old system which, if you follow Mr Gove’s logic, resulted in disadvantaged pupils being wrongly pushed towards courses that were not in their long-term interests.

In investigating school results and the impact of non-GCSE qualifications on league table rankings, I have been in contact for several years now with Roger Titcombe, the former head of a community comprehensive whose school eventually was turned into an academy.

Roger’s argument throughout was that he passionately believed in what he saw as an important strand of the comprehensive ideal. This was the right of all pupils – from whatever background – to pursue a broad liberal education, in which all would have access to a range of academic subjects.

He saw that as coming under threat from the academies movement, because these new schools were so desperate for better results, some would sacrifice that ideal by pushing children towards qualifications mainly because they would help the school’s data.

Not just through academies, but throughout the schools system, a new class divide was at risk of emerging, he thought, with those from better-off families concentrating on academic courses and the rest pushed towards non-GCSEs in their options. But he believed, and the evidence presented above would suggest, that there is an “academies effect”, which makes them particularly susceptible to the type of behaviour described here.

I think, actually, that Roger’s ideals in this respect are very similar to those put forward by Mr Gove. However, in the absence of any other explanation, it seems the Education Secretary’s desire never to be seen to criticise the actions of academies overwhelms his stated commitment to speaking out for the options of those from disadvantaged backgrounds. Ideology, then, in this new government, is king.

* Remember, also, that no school can have complete control over the results a child achieves after sitting at a desk to complete exam papers, so institutions under pressure seem to me to be particularly incentivised to go in for strategic approaches that afford them greater control.

4 Comments
posted on March 30th, 2011

Friday, March 25th, 2011

Warwick Mansell

I have been interested in two debates in English education for several years now.

One starts along the lines: “Standards are not high enough. We need to hold our schools to account properly so that they improve exam results for all young people, who so desperately need better grades. We also need to use results data to target our efforts to help pupils do better.”

The other says: “Schooling driven by performance indicators is creating a whole host of negative consequences, which go to the heart of pupils’ educational experiences.”

I often feel like these debates take place almost in parallel, with very little acknowledgement of how they are inextricably linked, and little attempt at communication between the two. Frequently, it seems as if people accept the force of either argument, without realising that they are so closely related as to be almost two sides of the same coin.

This has been going through my head again in recent weeks, as I come across yet more evidence supporting the second of those statements. As always, it’s not necessary to look very hard for this stuff; it just keeps coming.

Exhibit one was a report on history teaching in English schools by Ofsted.  

Now, it’s important to get the context right here: the Ofsted report went out under a press release headline: “History a successful subject”, and offered plenty of support for the way the subject is taught.

For example, it said “history teaching was good or better in most primary schools” among those inspected in this programme from 2007 to 2010, and that “history was successful in most of the secondary schools visited because it was well taught, notably in examination classes at GCSE and A level”.

In secondary schools, it said: “the large majority of…history teachers were very well-qualified. In the large majority of the schools visited, the quality of the provision also reflected the strong leadership of the history departments”.

It added: “The subject knowledge of the specialist history teachers in the secondary schools visited was almost always good, often it was outstanding and, occasionally, it was encyclopaedic. Inspectors found so much good and outstanding teaching because the teachers knew their subject well.”

These central findings might have been lost on readers of some media coverage when the report came out.

So we are talking about a generally successful subject, taught by enthusiastic professionals. However, within that generally positive context, there were several instances offering more evidence of the dangers of “Education by Numbers” – ie of exam-results-oriented practice being adopted which, the report suggests, is bad for underlying learning.

First, probably the starkest negative finding in the report related to exam-endorsed textbooks. Ofsted said:

“In recent years, more textbooks have been written specifically for the examination course specification, both at GCSE and A level. The textbooks, often written by the chief examiners for the courses, are generally endorsed by examination boards and gain the status of a ‘set text’. The history teachers in the schools visited were well practised in supplementing these with additional materials as necessary. However, it was clear that, at A level, the mushrooming of course-endorsed and linked textbooks was having a negative impact. They stultified teachers’ thinking and restricted students’ progress. The weaker students relied on the textbook as being sufficient preparation for the external examinations and were less willing to read beyond the ‘set textbook’. Their written and oral work revealed how their understanding of the topics they studied was narrowed. It also meant that students were not as well prepared to meet the challenges of higher education where independent learning and extensive reading were required.”

Damning stuff, I thought. To put this in almost-punchy soundbite terms: “Exam-endorsed textbooks – in the way they are sometimes used – are stultifying teachers’ thinking; restricting students’ progress; narrowing their understanding”.

Second, the report offered more evidence of pupils being steered away from history because of “league table” (I put it in quotes here because this is often a shorthand, I think, for wider hyper-accountability/results pressures) concerns. It said: “In some of the schools visited the students were restricted in their subject options at GCSE and some had been steered towards subjects which were seen to be less demanding than history.”

It continued: “Entry level…is intended for students who find GCSE too demanding. However, the declining number of students taking this examination reflects not only a lack of confidence that entry level meets the needs of those for whom it was intended, but also decisions by curriculum leaders to avoid a course that does not contribute significantly towards their school’s attainment profile”. [my italics].

Ah, OK, so this could be summarised: “Pupils steered towards certain subjects because of a school’s need to improve its figures”.

Third, the report offers criticisms of the move towards completing key stage 3 in two years, rather than three. I think this is related to the themes of “Education by Numbers” because I believe part of schools’ calculation in making this move is to increase the time they have to focus on raising GCSE performance. That, of course, can be seen as a rational move from the individual pupil’s point of view, if it helps secure better grades. But hyper-accountability/“league table” calculations are also likely to be a factor. It means, of course, that a child not going on to study history GCSE will stop at 13, rather than 14. And Ofsted’s criticism of this is on educational grounds, although admittedly from a subject-specific point of view: that of a specialist history inspector.

Among the quotes in the report on this are: “The national curriculum orders and programmes of study in Key Stage 3 have led to much high-quality teaching and learning in history. However, in one in five of the secondary schools visited, curriculum changes, such as the introduction of a two-year Key Stage 3 that allowed some students to give up history before the age of 14, and thematic approaches to the curriculum, were associated with teaching and learning that was no more than satisfactory.” [I know the use of “no more than” satisfactory will jar, at least to a teacher audience, since my dictionary defines satisfactory as “fulfilling expectations or needs”, but you get the point.]

“In 14 of the 58 secondary schools visited…whole-school curriculum changes [including a two-year KS3] were having a negative impact on teaching and learning in history at Key Stage 3.”

It goes on: “In England, history is currently not compulsory for students beyond the age of 14 and those in schools offering a two-year Key Stage 3 course can stop studying history at the age of 13. England is unique in Europe in this respect. In almost all the countries of the European Union, it is compulsory to study history in some form in school until at least the ages of 15 or 16.”

The report also points out that children do not get access to specialist history teachers in primary school, meaning that some will only have specialist teaching in the subject for two years of their school careers.

So that would be: “some pupils are only getting two years’ history teaching in secondary school, and this is not a good thing”.

Fourth, there were problems with teachers’ professional development – because of an over-reliance on exam-specific training – in some schools. “Access to training for history was an increasing concern for all teachers…In 28 of the 64 secondary schools visited in which this aspect was specifically inspected, access to subject training was only satisfactory and in 10 of the schools it was inadequate. In one in every five schools visited, training by the examination board was, and had been for several years, the only type of out-of-school subject-specific professional development for history teachers.” [my italics]

It goes on to say that this training, in schools doing it well, was only one of a number of approaches.

So that would be: “In one in five schools, teachers’ only professional development is geared to teaching towards particular exams”.

Fifth, the inspectors found that, in “a minority” of primary schools, foundation subjects such as history had been “squeezed”. “In year 6 in particular, teachers said to inspectors that the foundation subjects were ‘not a priority’”. Year 6, of course, embraces the run-up to key stage 2 tests.

So I think it’s fair to infer the following statement from this: “Some primary schools neglecting subjects such as history in the drive to raise test scores in English, maths and science.” Ofsted points out, of course, that some schools don’t do that and still get good results, but I don’t think the statement above is unfair.

Ok, that’s probably enough for now, from this document.

Exhibit two is discussion at last week’s conference of the Advisory Committee on Mathematics Education. Andrew Hall, director general of the AQA exam board (England’s largest GCSE and A-level board), worried about a sixth issue which I think is related to “Education by Numbers”: pupils being entered early for GCSE subjects.

This is an issue on which Mr Hall has spoken of having concerns before. At the conference, he documented a rise in the number of maths GCSEs taken by “15-year-olds and younger” [I take it this means those in years 10 and below] from 32,908 in 2008 to 83,179 in 2010. The latter figure represented some 11 per cent of the total entry for maths last year, he said.

He said: “That’s an almost three-fold increase in three years. I absolutely expect, from what I’m seeing in entry patterns this year, to see a significant increase again in 2011. Not just in maths; English has seen the same pattern.”

“I think there are some serious causes for concern here…For some students, early entry may be a really good thing. Those students who are particularly strong performers, and who will continue to get good maths education, may benefit. But my question is: what about the others?

“Are the pressures of league tables pressurising teachers to enter students early to bank a grade C, in order then to focus on those who did not get there? Is this going to impact on students post-16? I venture to suggest it will, but we need to get the evidence.”

He added: “This is not just about maths. Across a whole range of specifications, we are seeing students entered early. It may be the result of modularisation.” [More GCSEs have become modular in the last few years].

One audience member, who said he worked with “gifted mathematicians”, said some were put off persisting with the subject if they took it early and then had a gap without maths before entering the sixth form.

To be clear, I don’t think I’m qualified to judge whether early entry is always a good, or a bad, thing for a pupil. I took maths exams a year early myself throughout the latter years of secondary school, but that was a judgement made by my school on its merits, for me, since league tables and other institutional results pressures did not exist in the 1980s.

My concern, which clearly is shared by Mr Hall, is that results pressures for the institution, rather than the long-term learning needs of the pupil, may be playing a large part in decisions. This calls to mind an article I wrote last year in which there was evidence of a school entering pupils for GCSE maths and English early in order to attempt to “bank” a C grade, then removing them from that class if they achieved it in order to concentrate on achieving a C in other subjects. This, a former member of staff at the school told me, simply neglected to consider any need to give the pupil a chance to score higher than the C in the summer exam, since the C grade was all that mattered to the school’s headline statistics.

I also remember a head telling me, a few years back, that modular exams were far better because they afforded the school greater control over the eventual result. She could not, she said, take the risk of any surprises in the form of children underperforming on the big day and thus dragging the school’s published numbers down. And this was from someone highly sceptical about results-driven hyper-accountability. Again, this is not an argument for or against modules, but a suggestion that results considerations for the school are influencing decision-making.

So, Mr Hall’s concern could be summarised: “Results pressures on schools may be helping to push pupils towards being entered for exams early. I’m worried it’s not in all of their long-term interests.”

Mr Hall was also pressed at the conference, by an academic from Southampton University, on two points also mentioned in Ofsted’s history teaching report. Specifically, was he concerned about exam board-endorsed textbooks, and what about teachers only getting professional development through courses run by exam boards, targeted at improving pupil performance in particular exams?

On textbooks, Mr Hall seemed, I think, to be acknowledging the issue and suggesting there might be ways of addressing it by seeking to employ senior examiners in-house at the awarding body. (Currently, all examiners, I think, are employed on a freelance basis. Mr Hall seemed to be suggesting that part of the issue was that they needed to supplement this income by publishing textbooks).

He said: “There is this thing called restraint of trade. Can we prevent someone who is an examiner, off their own bat, writing a textbook?”

He added: “I think there are some issues around the ways in which awarding bodies choose to engage the people who work as examiners. We are looking at what’s the right mix for us. Do we want to move some of those people into the organisation so that we can reward them more appropriately so that they do not need to do those things?”

On the training courses, he said: “One of the things I believe very strongly, as an organisation, is that we should not just offer help to teachers for our own specifications. We are using our charitable status to try to offer developmental opportunities across a broader spectrum.”

One final exhibit would be last month’s Wolf review of 14-19 vocational education, which suggested that schools were being pushed towards non-GCSE exams because of the worth of some of these qualifications to the institution for “league table” purposes (ie league tables and the rest of the results apparatus). I’ve blogged about that here.

A summary of one of Wolf’s concerns might be: “Pupils pushed towards GCSE-equivalent qualifications which might not help them in the long term because of the weight these qualifications are given in league table indicators.”

Right, to sum up, then, I would come back to the original two debates, mentioned at the top of the piece. Because these two competing priorities aren’t often enough expressed directly against each other, I think it would be interesting to imagine a hypothetical conversation between two people – each supporting one of the viewpoints mentioned at the top – and using the evidence uncovered here.

The conversation, I think, might go something like this:

First person: “You know, we really need results pressures on schools of the current type because, without them, teachers would just let down pupils with poor teaching.”

Second person: “But look, you can see evidence here that teaching to exam-endorsed textbooks is damaging at least some pupils’ history lessons and leaving them underprepared for higher education.”

1st person: “I know, but we need these results pressures in schools.”

2nd person: “But we have evidence that some pupils are being steered away from taking history, even though they want to take the subject, because of results pressures in schools.”

1: “I know, but we need these results pressures in schools.”

2: “But schools are moving towards a two-year KS3, partly because of the pressure on them to improve their GCSE results, and inspectors of history, [and the Historical Association, as it happens], think this is limiting many pupils’ experience of the subject in a way that does not happen in other countries.”

1: “I know, but we need these results pressures in schools.”

2: “But we know that in one in five schools visited, teachers’ only professional development is offered by exam boards in relation to particular exams.”

1: “I know, but we need these results pressures in schools.”

2: “But we know that in some primary schools, subjects such as history and geography are marginalised in the year leading up to the national tests.”

1: “I know, but we need these results pressures in schools.”

2: “But we know that hundreds of thousands of pupils are being entered early for GCSEs, and there are worries that this might be educationally less than ideal, but informed by results calculations on the part of schools.”

1: “I know, but we need these results pressures in schools.”

Results pressures seem to be the rock on which our education system is now built, with any other consideration seemingly having to negotiate its way around them.

OK, I caricature this debate a bit, though not that much. Ministers of both this and the previous government will say that actions proposed through, for example, the Wolf review, or Labour’s 2009 assessment inquiry which scrapped KS2 science tests, have offered a case-by-case approach to mitigating the problems, and Mr Hall clearly appears to be taking them seriously.

Some will say that the three “exhibits” reported on above are not the only perspectives worth considering, either. (I want to look a bit more closely at what I think are contradictions in Ofsted’s own approaches on some of these issues, but that will have to wait a bit).

As ever, I’m not interested in blaming schools for what might be perceived as the more negative aspects highlighted by Ofsted, Mr Hall and the Wolf review, but just questioning whether this system as a whole – including the signals it sends to teachers about the overwhelming emphasis to be placed upon exam success on the measured indicators – is supporting good learning, or not.

Given the number of issues documented in just a few weeks, I wonder why we still aren’t linking these two debates properly, and looking more fundamentally at the real impact, for good or ill, of statistics-driven schooling.

No Comments
posted on March 25th, 2011

Wednesday, March 15th, 2011

England’s secondary maths curriculum is likely to become “more challenging” for pupils from 2013, one of the government’s leading civil servants said today.

Jon Coles, who has a key role in the national curriculum review which was launched in January, suggested that while the primary maths curriculum in this country was quite similar to that of “top-performing” countries internationally, this was not the case from the age of 11 onwards.

He set out the thinking behind the review and – perhaps boldly, given that the review is only just over six weeks old – offered a taste of what some conclusions with regard to maths might be during a talk to the annual conference of the Advisory Committee on Mathematics Education at the Royal Society in London this morning.

Mr Coles, director general for education standards at the Department for Education, said: “What I think [the review] will mean, from the early evidence beginning to come through in maths, is that it will probably mean some increasing challenge, especially in the secondary phase.

“There’s a great deal of commonality between the national curriculum in primary schools in this country and in the highest-performing jurisdictions.

“There are some differences, in primary, in timing and sequencing [of when things are taught], and we do have one area where we do a great deal more than other countries, which is data handling, in which we are quite unusual in this country.”

However, in general, he said, there were not huge differences between what was taught before the age of 11 here and good practice elsewhere.

But he added: “At secondary level, we will see a pushing up of challenge and expectation. That would be my guess on the basis of what the review has seen so far.”

On the wider thinking behind the review of the curriculum for 5- to 16-year-olds, for first teaching in 2013, Mr Coles set out the idea behind it of trying to learn from what happened in other countries which do well in international studies such as the OECD’s “PISA” tests and TIMSS, the Trends in International Mathematics and Science Study.

He said: “The review team are looking extremely systematically at what happens in the top-performing jurisdictions in the world.

“Specifically, they are interested in what is put in the curriculum at what age, what is the sequencing, what leads to progression and high performance in these systems, what can we learn from them and what should we transfer into our system?

“The overall aim of the review is to be much less prescriptive…to reassert the balance between the national curriculum and the school curriculum.”

He implied that the idea was to strip down the amount of time taken up by the national curriculum in schools.

He said: “The national curriculum…should be a specification of the core knowledge and principles needed to progress, not a complete specification of everything that schools teach.”

The government would not be specifying how teachers should teach (a move back to the days before Labour entered into the world of prescription over pedagogy with the national literacy and numeracy strategies). And he suggested, I think, that schools would need help adapting to this new world, saying the government would “need to support schools” in doing so.

Summing up, he said: “The task that the review team are undertaking is to come up with a pretty spare, pretty knowledge-focused national curriculum, based on the best international evidence.”

The timescale looks challenging, I think, with curriculum materials due in schools by September next year.  

But Mr Coles added:  “I think what we will see in this review is trying to get draft Programmes of Study out much earlier in the process than has been the case in previous reviews.

“That’s a good and important thing to do.”

Mr Coles also had some interesting things to say about funding. Asked a question about funding for a particular initiative, he said: “Our budgets are under a great deal of pressure.

“It’s true that the DfE has done rather better than many other departments. But we are experiencing a significant change [from spending under the previous regime, I guess] and what we are trying to do is prioritise front-line budgets.

“The most important thing to do is to prioritise schools and colleges and early years budgets. Doing that, when this was already 80 per cent of our budget, means that it will become 90 per cent of our budget.

“So the rest of our budget has halved. Given that within [that part of] the budget are some very big things, like initial teacher training – which I suspect many of you in this room would advise us not to cut – we do not have lots of pots of money.

“That’s a direct result of what’s being done to reduce the budget deficit.

“In the next few years, do not expect us to come up with pots of money for good new ideas. We will have to prioritise, and respond to good ideas, but I suspect not with new money. That’s the situation that the whole of the public sector is going to face.”

Ok, this has largely been a blog without comment from me, because I thought readers might be interested in these words as they stood. I will just add a final comment on Mr Coles’s speech, however, in relation to what he had to say about progress in maths in this country over the past 10 years or so.

Actually, I wondered what Mr Coles would say on this, as he has been at the department a while. Under Labour, he led the 14-19 programme which introduced the now beleaguered diploma qualification. Could his assessment of how things stand on school standards possibly be as bleak as that of his political boss, Michael Gove, who often seems reluctant to offer any sense that things might have improved in any way since 1997, I wondered.

Well, actually Mr Coles, perhaps unsurprisingly but interestingly, offered a more balanced view of matters with regard to maths education.

He told the conference: “Over the last 10 years, just looking back at the figures, we have an awful lot to look at that suggests progress, and that’s good and positive.”

The number of people coming into maths teaching over the last 10 years had doubled, he said, while the number passing* the subject at GCSE had grown very significantly, the numbers taking maths A-level had grown by 50 per cent since a low-point of participation was reached in 2002, and the take-up of further maths A-level had also increased markedly in recent years.

He said international evidence presented challenges. But even here – and I almost choked on hearing the next bit – there were some chinks of light.

He said: “There are some positives. We are the most improved nation in TIMSS, in the international comparisons.”

I nearly choked because, of course, as I have written here, somehow Mr Gove never seems to find the time to mention England’s TIMSS results in major speeches setting out why he thinks our schools need radical reform. Last November’s white paper is also free of the statistics on maths which Mr Coles presented.

Of course, he did go on to set out the agenda which has been put forward by the Government, saying: “Actually, PISA does present some very large challenges.  Our 15- and 16-year-olds are doing significantly less well than they are in some other countries.

“Shanghai’s performance on the last PISA tests put it two whole years on the PISA scale ahead of us, and that gives us a real challenge.”

He also highlighted a recent report for the Nuffield Foundation which documented the low proportions of young people in England, Wales and Northern Ireland persisting with maths after the age of 16.

But, overall, Mr Coles offered a more balanced view of the evidence than was presented either by Mr Gove, in launching the recent education bill (Gove: “I would love to be able to celebrate a greater level of achievement, but I am afraid that this is the dreadful inheritance that our children face”), or in the white paper on which it was based. I wonder whether those TIMSS figures will ever get a look-in in official documents and government speeches in future. I’m not holding my breath.

This conference also had plenty to debate about the influence of results pressures in schools, which as you would expect I will be writing about in the coming days.

*[I note that there was no reference to this equating to pupils achieving a C grade or better at GCSE. A C grade is not, of course, formally, the cut-off for a “pass”: GCSEs were introduced with a passing scale of grades A-G. It is surprising that even officials are now saying a C grade is a pass, when this is not how the grading system works. I heard the conference chair, Professor Dame Julia Higgins, also equate a C grade with a “pass”, and it featured in the Wolf review earlier this month. I understand why it’s happening, but I still find it strange that we have a technical language which defines a pass in one way, with everyone else now seeming to define it in another.]

1 Comment
posted on March 15th, 2011

 

Wednesday, March 9th, 2011

Ok, I’ve decided to do something slightly different here, in the form of a blog largely not written by me, but based on two emails I’ve received in recent months on the vexed and often technical issue of data analysis systems and target-setting.

This may be overly technical for some non-teacher readers of this blog, but I thought I’d put it up here as I get occasional inquiries about the Fischer Family Trust system in particular, and am interested in the implications of how these systems work in the classroom.

What follows are the more-or-less verbatim contents of two emails (reproduced here anonymously but with the authors’ consent) I received about data analysis systems, one from a teacher who seems reasonably positive/pragmatic about the whole experience, and the second from a teacher who, as you will see, has concerns.

So here is the first teacher, who is a senior leader.

“I have always liked my schools to use two data sources, past performance and CEM Centre (MiDYIS, YELLIS and ALIS), although my current school uses CATs. 

“Raw data informs me as the teacher, but I adjust the targets that I give to students (no student in my GCSE classes is told that they will achieve less than a C, because all can easily achieve that and most can surpass it).  Data, as I tell staff, only provides questions and never answers.  It informs good teaching, but doesn’t make a good teacher.

“I then also tell staff the most important analysis of exam performance is comparing how students did in your class in comparison to other subjects in school. Did they do better with you or elsewhere? Then if they are below the data targets you need to take the mirror test. Do you feel that you did everything to help that student do better (look yourself in the mirror). If you are happy with what you did, move on, but ask ‘can we make adjustments to next years interventions?’

“I do recognise that these sets of data are not perfect and they can only ever be an indicator.  For FFT the worry is because of the inflation or deflation of scores at KS2 because of brilliant or poor teaching. In CEM and CATs students can do worse than they are capable of because of all the factors that can suppress test performance.  However, overall they do produce part of a useful guide and highlight possible underperformance to all staff.

“I’m happy to discuss any of this.  I am no way a zealot, just want all my students to progress so constantly looking for things to improve what I do. I think teachers’ fear of data comes from poor leadership as to how to use it.”

Here is the second email, reproduced verbatim from the start:

“Dear Mr Mansell,

“Thank you very much for your work on testing, hyper-accountability and the many problems in education today. I found your book Education By Numbers to be very thought-provoking, my copy is full of highlights where I was almost shouting out in agreement with many of the points you made.

“I have been teaching maths in the same high school for thirty years, and I find the current obsession with getting the best results for the school very dispiriting.

“I have tried to talk to my head of department and Head Master about improving learning and understanding, but it is a waste of time. They want to meet the targets, so pressure staff and pupils, force pupils to attend extra classes after school or instead of attending morning tutor meetings, but do not consider real educational improvements; too risky?

“Also, the pupils who will never achieve a C are effectively written off by the RAP (raising achievement process), which only targets D to C or C to B, and some troublesome pupils who the FFT say should achieve seem to move on elsewhere so they do not drag the results down.

“The RAP system is being extended to years 7, 8 and 9, not to improve education, but to achieve the magical FFT 5th percentile. Regular testing, split levels i.e. 3a, 3b, 3c etc, when it is doubtful if any teacher can reliably say ‘Jonnie is working at level 3 in algebra’. Levels may be estimated plus or minus one, sublevels are a nonsense, also the use of numbers as labels for ‘levels’ is misleading as the levels are descriptive, categorical data not measurements on a scale.

“One thing that I want to say to you is that the message that is regularly given about 5 or more grade Cs at GCSE being ‘good’ is a disaster for some bright pupils. I have had a few say ‘as long as I get 5 Cs I am doing well’.

“For able pupils C is poor, to get the message across to year 10 and 11 pupils I have bluntly said that for them ‘C means crap’, what they should be getting are 7 or more A*, A, B grades.

“The obsession with grades and levels for the benefit of the institution, instead of a focus on helping pupils to achieve the best for themselves, is a cancer in the education system”.

2 Comments
posted on March 9th, 2011

 

Wednesday, March 2nd

I had an interesting chat yesterday with the Ofsted press office. A press officer called me after I wrote an article for the Financial Times*, which was published on Saturday, on the effects of results pressures in schools.

This included the following paragraph:

“Ofsted inspections have, in recent years, focused heavily on statistical indicators of school quality that are largely based on exam performance.”

Ofsted’s argument was that inspections aren’t now as dependent on test/exam data as is commonly perceived. Particularly since the introduction of the latest version of the Ofsted framework, in September 2009, more emphasis is being placed on lesson observation, it was stressed to me. It is also not the case, as is sometimes thought, that schools are being pre-judged, before inspection visits, on the basis of their results.

I have been promised more information on this from Ofsted, and will update this blog on this subject when I receive it. In the meantime, anyone with thoughts or experiences on the current inspection process is welcome, as ever, to leave a comment or get in touch with me at

By the way, I wrote the FT article for a supplement which included league tables of school results. Another article in the supplement included some comments from a university admissions tutor on familiar themes, to readers of Education by Numbers and this website.

The piece, by Liz Lightfoot, included a quotation from Richard Austen-Baker, a law admissions tutor at the University of Lancaster. He said: “The exam boards compete for customers – teachers and students – and what they want are the best possible grades, especially with the pressure from school league tables.”

Dr Austen-Baker added, of exam preparation in schools: “I have been told by teachers that they discourage students from wider reading because there is a danger it might introduce them to material which is not in the syllabus and if they use that in their exams instead of material from the exam specification, they will lose marks.”

Anecdotal stuff, of course, from one individual, but I thought I’d mention it as this website is supposed to be documenting views alongside research evidence of the effects of the current system.

*I think you’ll need to register with the FT site to read this piece, and the one by Liz Lightfoot.

No Comments
posted on March 2nd, 2011

Friday, February 25th

Just a quick blog now on two interesting stories in this morning’s TES.

First, Helen Ward wrote a piece about the Government abandoning plans billed as “league tables for five-year-olds”. This proposal, spotted by Helen in the small print of the Department for Education’s draft Business Plan last autumn, said data would have been published on the “achievements of children at the end of the Early Years Foundation Stage Profile, by school”.

Today’s story reveals that the move is being abandoned, following serious opposition including a petition which garnered nearly 1,000 signatures. Those quoted in the piece were all opposed to the Government’s move, with objections including that it would load too much pressure on to young children.

This is very welcome news, of course. Government ideology would say that requiring institutions to publish results, and then encouraging them to compete to raise those scores, inevitably pushes up standards.

However, the widespread concern seems to have won the day: that the consequence of this would be to push early years providers to over-concentrate on early years foundation stage profile data, ratcheting up pressure on children, with the government using the profiles for a purpose for which they were not designed.

Many will also have observed that other countries – including the oft-quoted success story that is Finland – don’t try to push children towards formal goals at an early age at all.

A generalised sense – at least in most of the Government’s rhetoric – that transparency and data production is always a good thing seems to have been trumped, then, by concerns about the implications of this in the early years.

The petition organiser also fears, however, that change along the original Government lines might come back at some stage, so I will be watching for developments.

One thing I wonder about, actually, looking back at the business plan, is a section earlier in the document where the DfE pledges to: “Work with local authorities to develop a plan to increase voluntary and community sector involvement within Sure Start Children’s Centres, improve accountability arrangements, increase the use of evidence-based interventions, and introduce greater payment by results.”

Does “payment by results” mean payment by assessment results, I wonder? I will try to get some more information on this. For a longer blog I wrote on payment-by-result thinking at the top level of the coalition, see this piece.

The second story, by William Stewart, related to a suggestion by Isabel Nisbet, the outgoing chief executive of Ofqual, that computers should replace pen and paper in all exams, with GCSEs and A-levels taken in the traditional manner running the risk of becoming “invalid” for today’s pupils.

These were very interesting comments, and were seized upon enthusiastically by two of England’s three main exam boards (the other, OCR, sounded more cautious), as well as being followed up elsewhere in the media.

Many would agree with the sentiments behind these comments – it will strike plenty of people as anachronistic that teenagers still spend up to three hours hunched over a desk scribbling away, when longhand writing has next to no place in today’s workplace.

But the aspirations voiced by Ms Nisbet, whom I respect, by the way, have been around for years now. The question is not whether computerisation in this way would be a good thing in an ideal world, but how detailed practical problems facing anyone who wants to move the system in this way can be overcome.

As the article mentions, back in 2004 Ken Boston as head of the Qualifications and Curriculum Authority set out a series of detailed milestones which would have seen the system largely computerised by 2009. But none of these were achieved. (See an article I wrote on this here).

I can’t help wondering how much has changed in the intervening two years since I wrote that last piece. I asked the Ofqual press office if Ms Nisbet, or anyone at Ofqual, had any detailed plan as to how her objectives could be achieved, wondering also how exam boards might be helped with computerisation. I was told: “There is not [a plan] as such. It was just Isabel setting out what she thinks should happen in the future.”

I wonder how long we will be waiting.

1 Comment
posted on February 25th, 2011

Friday, February 16th

I found last Wednesday’s Second Reading debate on the new Education Bill so hard to watch that I had to switch off in the end. The politicised, partial and sometimes dismissive nature of the leadership being given to our education system, by the individual who now seems to be accruing huge powers to shape its future, really struck me as astonishing.

This is especially the case when one is aware of a fuller picture with regard to evidence than was presented at the dispatch box.

I just about got to the end of Michael Gove’s speech, but not beyond, having grown increasingly annoyed about a number of statements he made about various aspects of the evidence behind the claimed need for education reform, on which much of the change set out in the bill seems to be based.

Having caught up with the written record of the debate now on Hansard, I wanted to examine a few highly contestable aspects of Mr Gove’s speech quite closely. This is going to involve a fair bit of detail.

- First, there was the suggestion, which Mr Gove and other members of government have made frequently in recent weeks, that change is essential because this country is slipping down the international league tables of education performance.

Mr Gove began by saying that one of the three challenges facing “our country” (it was never specified if this meant England, which is Mr Gove’s responsibility as Education Secretary, or the UK), was “educational decline, relative to competitor nations”.

He then quoted a set of statistics which showed, he said, that “all our children were failed by Labour”. (What: every single one of them? I wondered. That’s quite a remarkable reach, for any political party). Quoting from the well-known OECD Programme for International Student Assessment (PISA) tests, the latest results of which came out in December, he said “we moved from fourth to 14th in the world rankings for science, seventh to 17th in literacy and eighth to 24th in mathematics by 2007”.

He added: “By 2010, we had moved from fourth to 16th, from seventh to 25th and from eighth to 28th in those subjects.”

These rankings are all correct, the first set relating to tests taken in 2000 and reported in 2001, Mr Gove’s 2007 figures relating to tests taken in 2006 and his 2010 stats based on assessments taken in 2009.

He added: “The only way that we will generate sustainable economic growth is by reforming our education system so that we can keep pace with our economic competitors.”

Ok, well, leaving aside the fact that the notion of a direct link between performance in international education tests and a country’s economic output is highly contested – Mr Gove’s adviser on vocational qualifications reform, Alison Wolf, devoted an entire book to criticising the link between investment in education and economic output – the Education Secretary left out a large chunk of the evidence on how England actually fares in international comparisons.

I will return to PISA shortly. But first we have to consider that there is another major international testing study, the results from which Mr Gove did not mention and which presents an entirely different picture, for England, than PISA currently does.

The Trends in International Mathematics and Science Study (better known as TIMSS) is based at Boston College in the US and has been going longer than PISA. While not quite as large in terms of the number of countries taking part, it is still very substantial: the last round of TIMSS, in 2007, held the record for the largest number of pupils taking any international test (such tests have been running since the 1960s) until it was itself surpassed by the PISA tests of 2009.

The last TIMSS study produced what looked like unalloyed good news for England. TIMSS tests are given in maths and science, to 10- and 14-year-olds. Between 1995 and the last tests in 2007, England’s primary maths performance improved by a greater margin than that of any of the other 15 nations which had pupils taking tests in the two years, including Singapore, Japan, the Netherlands, the United States, Australia, New Zealand and Norway.

Its score went from below the international average to comfortably above it in that time, while its ranking improved from 12th out of 16 countries in 1995 to 7th out of 36 in 2007.

The other tests in the last round of TIMSS also brought good news. In secondary maths, England was the joint third most improved of 20 countries over the 1995-2007 period, rising from 11th out of 20 to 7th out of 49 in the rankings.

In science – which is traditionally England’s strongest subject in international tests – the country was seventh most improved out of 16 in primary (its ranking moving from 6th out of 20 countries in 1995 to 7th out of 36 in 2007) and fifth most improved out of 19 in secondary (its ranking improving from seventh to fifth between these two years, even though the number of countries taking part increased from 19 to 49). In these science tests in 2007, English pupils finished ahead of, in primary, countries including the United States, Germany, Australia and Sweden; and in secondary, ahead of these countries plus Russia, Hong Kong and Norway.

It was therefore surprising to hear Mr Gove telling Parliament that “the statistics produced by the OECD [ie PISA] are ungainsayable. I would love to be able to celebrate a greater level of achievement, but I am afraid that this is the dreadful inheritance that our children face”.

Well, if he was looking for some figures to celebrate, he really did not look very hard.

Remarkably, Mr Gove actually mentioned the existence of TIMSS – though not any of the results it has recently generated – towards the end of his speech, in highlighting plans to force schools selected by the tests’ sampling systems to take part in future rounds. So omitting to mention England’s results looks doubly serious.

Some may want to dismiss this omission as to be expected in a political debate. But it goes further than a non-mention in one debate. Actually, international evidence is being used as the justification for the government’s entire reform programme. And, again, ministers and civil servants are simply ignoring the findings of TIMSS.

Last November’s white paper, setting out the Government’s plans for the education system, began with the following statement, in a foreword written jointly by David Cameron and Nick Clegg. It said: “So much of the education debate in this country is backward looking: have standards fallen? Have exams got easier? These debates will continue, but what really matters is how we’re doing compared with our international competitors.”

It is clear that the coalition has not been looking very hard, or very thoroughly, at what it says is a vital question, as it goes on to say: “The truth is, at the moment we are standing still while others race past.”

This selective reading of the international evidence also formed the basis for the Government’s “Impact Assessment” of the education bill, published earlier this month, which sets out the rationale for ministers intervening in the schools system in this way.

It said, on page one: “The Schools White Paper set out how we are falling behind in the international league table of educational performance compared to competitor countries. The most recent PISA survey – the international league tables of school performance – reported that since 2000 we have fallen from fourth to sixteenth in science, from seventh to twenty-fifth in literacy, and from eighth to 28th in maths.”

Again, there was no mention of the alternative picture reflected by TIMSS. This is also ironic given that TIMSS is a closer test of pure curricular knowledge of the sort about which Mr Gove often enthuses – ie the problems could be seen as more “traditional” – than is PISA, which tests application of reading, maths and science understanding in “real world” scenarios.

To seek to base your reform strategy on international testing evidence, and yet ignore the conclusions of the world’s second largest testing study because they don’t fit the political picture you want to paint is, to sum up, ludicrous.

Returning to the PISA figures themselves, they certainly are not good news for the last Labour government. However, David Blunkett, the former education secretary, rightly pointed out in the bill debate that it was obviously misleading for Mr Gove to quote out of context the UK’s sliding raw rankings between 2000 and 2009, when the number of countries taking the PISA tests expanded dramatically over that time. (A country which stayed, say, seventh while the field doubled from 20 entrants to 40 would actually have improved in relative terms; by the same logic, a slide down the raw rankings is much less alarming when the field has grown substantially.)

Indeed, the OECD itself has said that comparing the UK’s results directly between 2000 and 2009 is not statistically valid, because of problems with the sampling in 2000, as this blog by Full Fact points out – although some general, unofficial comparisons can still be made, I think, by looking at the underlying data for the UK from the two years. There is also a strong case to be made that the differences in average test scores between western countries, over which nations obsess in both PISA and TIMSS, are relatively small, and that using such comparisons as your main basis for reform is therefore unwise. If you are going to use them, though, it would be wise to use all the evidence, rather than just some of it.

Finally, Mr Gove said: “How can a country that is now 28th in the world for mathematics [in PISA] expect to be the home of the Microsofts, Googles and the Facebooks of the future?”

As my former TES colleague Helen Ward has pointed out, this might not have been the best example to choose. For the United States, the home of the Microsofts, Googles and Facebooks of the present, actually finished just below the UK in the latest PISA maths tests. (And has a fairly mediocre record in previous PISA rounds, as measured by average test scores).

- Second, Mr Gove told the House of Commons that “Inequality worsened under Labour and the education system exacerbated it”.

I think I have heard the second part of this claim from the Education Secretary before, and it is very fishy.

He went on: “If we look at the gap between children eligible for free school meals and their more fortunate and privileged counterparts, we can see that as those children moved through the education system and progressed under Labour the gap between rich and poor widened.

“At age seven, the gap in reading scores between those children who were eligible for free school meals and those who were not was 16 points. At age 11, the gap was 21 points in English and maths. At age 16, the gap was 28 points at GCSE.”

The argument is pretty clear, then. These statistics show that the achievement gap between those children eligible for free school meals and the rest increases over time, as they get older and move through the school system (or at least it did under Labour). And therefore the schools system “exacerbates” the problem of social inequality in achievement.

Leaving aside the question of whether test scores achieved at different ages are directly comparable in the way suggested here, the assertion that schools are actually making the problem worse is highly questionable, and probably very insulting to those working within them.

For these figures offer no conclusive evidence to back Mr Gove’s claim. If pupils’ achievements really are moving apart in this way, schools may be to blame in part. Or they may not. To blame them entirely for that situation – as Mr Gove does here – is simply to write off the huge advantages that some better-off children will have at home over their peers eligible for free school meals. (All other things being equal – and this is, of course, a big assumption – one would expect a child who had more resources at home to pull away, educationally, from one who had fewer.)

To put it another way, it may be that the “schools system” is doing all it can to make up for huge differences in parental or cultural support for education, but is not succeeding to the point of wiping those differences out entirely by the time they show up in testing statistics. (Which, by implication, is what it would have to be doing for the test score gaps not to widen over the school years.) That is very far from showing that schools are “exacerbating” inequalities: making them worse, rather than having some effect in counteracting them.

It may be I am wrong, and it really is as bad as Mr Gove makes out. State schools, even though they largely educate children according to the same curricula and with teachers largely trained and inspected to a common template, may actively be making inequality worse, for all teachers’ best efforts.

But the statistics he presents are unconvincing as evidence one way or another.

- Third, Mr Gove talked about the bill enhancing teachers’ professional freedom. He talked about this in relation to giving them more powers over how to discipline pupils but also, more interestingly, in relation to the curriculum.

He said: “I am happy to reassure my honourable friend [the Conservative MP Edward Leigh, who had asked a question expressing concern that Labour would seek to introduce compulsory sex education in primary schools] that I will not accept amendments in Committee [the next stage of the bill] that seek to make the curriculum any more prescriptive or intrusive.

“The Bill will enhance professional freedom and autonomy, because we recognise that it is only by doing [so] that we can ensure that our economy and education system are fit for the 21st century.”

Yet Mr Gove’s curriculum reforms are, rightly or wrongly, certainly not only about enhancing teachers’ freedoms. Indeed, their defining idea is probably that teachers’ latitude over what to teach needs to be reduced, at least in areas deemed by Mr Gove and his advisers to be central. The clear implication of what Mr Gove has said on other occasions, and again here, is that the curriculum of Labour’s later years gave teachers too much freedom over what to teach, because it was not specific enough in its requirements.

This much is clear from another section of Mr Gove’s bill speech. In this, he was attacking staffing arrangements at the Qualifications and Curriculum Development Agency, which I will go on to talk about. But first, we should just focus on what he said about the curriculum.

He said: “Let us take the QCDA… which has 393 employees. Can any Member of the House tell me how many of those work in the QCDA communications department? … The answer is 76 out of 393. How can it possibly be an effective use of public money to have 76 people involved in communications at a curriculum quango, when that quango has been responsible for a secondary curriculum that mentions not a single figure in world history apart from William Wilberforce and Olaudah Equiano? How can it be right that we have spent money – so much money – on that curriculum authority, when its geography curriculum mentions not a single country other than the UK, and not a single river, ocean, mountain or city, but finds time to mention the European Union? How can it be right that we can find money to employ 76 people in communications – 76 spin doctors – when our music curriculum does not mention a single composer, a single musician, a single conductor or a single piece of music?”

The contradiction should be obvious. If Mr Gove really believed in enhancing teachers’ professional autonomy in every aspect of their working lives, he would not say this. A geography teacher trusted to exercise professional discretion would not need to be told what countries pupils needed to be taught about. Nor would a history teacher need to be told that coverage of World War 2, for example, which is mentioned in Labour’s current secondary curriculum, is likely to include references to Churchill (alongside Hitler and Stalin, of course).

It would be perfectly coherent, in my view, for the Secretary of State to say he thinks teachers should be given professional freedoms in some areas, such as over discipline, or over specific areas of the curriculum, but that in others – particular areas of the national curriculum – there is a public interest in being more prescriptive. This, after all, was largely the rationale behind the introduction of the national curriculum in the first place: that some structure needed to be in place to ensure that pupils in different schools had a common experience in terms of what they were taught. That view has not been uncontested, but it is not illogical.

It is, I think, the true rationale behind Mr Gove’s current curriculum review, which wants to be more prescriptive about the teaching of what it will define as core knowledge and concepts in selected subjects.

However, the party political suggestion that one party is on the side of “freedom” and the other is not, when a central plank of the coalition’s reforms depends to a large extent on this reduction of freedom, just makes the government’s position look ridiculous.

- Fourth, Mr Gove suggested there was no alternative but to side with the government on its reform programme, as the rest of the world was heading down this track and not to follow was therefore to risk being left behind.

He said: “We must all recognise that the reforms we are talking about, including the creation of free schools, are the sorts of reforms that we are seeing across the developed world.

“Ministers such as Arne Duncan [US Education Secretary] and John Key in New Zealand and Julia Gillard in Australia, and countries such as Sweden, Singapore, Finland, Hong Kong, Alberta and South Korea all recognise the need to reform their education systems, and we cannot afford to be left behind.”

Well, it’s probably best to take a few of these countries in turn, beginning with the reform movement being led by Mr Duncan for Barack Obama in the US.

It is true that several elements of Mr Gove’s plans have similarities with what is going on across the Atlantic; some of the English reforms borrow explicitly from American policies.

Mr Duncan is building on the work of George W Bush’s administration, which launched the controversial test-based No Child Left Behind school accountability programme, and he seems likely to receive support this year from the Republicans in Congress.

But despite this bipartisan support, the reforms being spearheaded by Mr Duncan sit at the head of a hugely polarised debate in the US. The reform effort centres largely on viewing value-added test scores as the final word on teacher quality, backs test-based performance pay for teachers and sees changes to school structures – charter schools – as a panacea for America’s education problems, which are not, by the way, the same as England’s. Failing teachers and failing schools are the dominant notes in reformers’ criticism of state education in the US at the moment, and the idea is largely that schools would be run better if they operated according to the model used in corporate America.

For a powerful critique of this position, see the book by the former assistant Education Secretary under George Bush (the elder), Diane Ravitch, which I reviewed here. A recent article by three well-known US educationists who are supportive of the reform agenda also makes the point that its advocates need to show a bit more “humility” in the face of mixed evidence for the success of the changes they back, including the seemingly not-much-liked-by-the-US-public No Child Left Behind act.

Turning to the other countries: Sweden’s recent lack of progress in the PISA tests Mr Gove now lauds (as well as in TIMSS) is well known, despite the country having allowed the creation of the type of independent state school on which Mr Gove’s “free schools” policy is based. Hong Kong has indeed reportedly launched reforms, but according to a recent TES report these run in the opposite direction from what is due to happen here.

There are elements of the English model in the Australian and New Zealand reform programmes currently taking place. But politicians in both main Australian parties have said they do not support league tables being used to rank schools, while in New Zealand, as I understand it, a minister could never simply order exactly how a school’s structure should change – a power Mr Gove is suggesting in the bill should be given to him.

Finally, any attempt to link the English system with what happens in Finland is… well, highly disingenuous. Whatever one thinks of the two systems, they are hugely different. Finland has fully “bog standard” comprehensive schools before upper secondary begins, with absolutely no setting or streaming. Formal schooling does not start until the age of seven. And there is no English-style accountability regime: no inspection system and no published national test data.

The only area in which Mr Gove might realistically be said to be borrowing from the Finnish experience, from my understanding of it, is – ironically, given his statements about enhancing professional freedoms – the prescriptiveness of the Finnish national curriculum about what should be taught.

Different countries, then, take differing approaches to reform, and there is no inevitability about any one model. Each needs to be debated on its individual merits; suggesting that there is a single, inevitable reform model is simply an attempt to close that debate down.

- Finally, I want to turn to Mr Gove’s claim about staffing levels at the Qualifications and Curriculum Development Agency.

You will remember that he presented figures purportedly showing that the QCDA had 393 employees, of whom 76 worked in the communications department. “How can it possibly be an effective use of public money to have 76 people involved in communications at a curriculum quango… How can it be right that we can find money to employ 76 people in communications – 76 spin doctors – but…” etc.

On hearing this, I immediately smelt a rat. From my dealings with the QCDA and its predecessor, the Qualifications and Curriculum Authority, over the years, I suspected it was stretching things, to put it mildly, to suggest that all the communications staff in an agency such as this were “spin doctors”. The implication of that label is that they were all concerned mainly with bolstering the organisation’s image in the media, and thus, as far as a government keen to safeguard the public interest was concerned, were clearly a waste of money.

In fact, there were only ever a handful of people working in the press office at the QCA/QCDA, I remembered. I thought some others were likely to have worked in internal communications, ie helping the organisation to communicate with its staff. But how did Mr Gove, or whoever gave him the information, arrive at the larger figure?

In reality, the “communications” department at the QCDA embraces a far larger number of jobs than could ever be classed as “spin doctor” roles. On Wednesday, I received a statement to this effect from the organisation’s chief executive, Lin Hinnigan.

According to Ms Hinnigan, the figures for people working in the QCDA’s communications department relate not just to those in the press office, but also to people who could not be called “spin doctors” at all, including those staffing its helpline services that communicate with schools. They also seem to relate to more than one organisation.

Ms Hinnigan’s statement reads: “The figures [used by Mr Gove] show QCDA’s spend and staffing between 2008 and 2010, during which time the organisation was funding the set-up of Ofqual as well as relocating both organisations from London to Coventry.  So the figures represent the total costs of two organisations running across dual sites, recruiting a new workforce and establishing new systems.”

She added: “This figure (76) [used by Mr Gove under the ‘communications’ heading] covered all staff in the communications and QCDA and Ofqual customer services departments, including switchboard and helpline operators; web and publishing editors; people who support schools and local authorities in delivering national curriculum tests, and those who deliver communications to employees. In April 2010, prior to the announced closure of QCDA, there were 15 staff at QCDA dealing directly with communications, including three in the press office and one in internal communications. The remaining 11 were people liaising with schools, colleges and employers to support them in delivering the Diploma, general qualifications and National Curriculum Tests, as well as the consultation around the primary curriculum.”

In other words, many of these QCDA employees have indeed been exercising a “communications” function – but communicating directly with, and helping, schools and others, rather than dealing with the media. There were, erm, three people working in the QCDA press office at the time Labour handed responsibility for the organisation to Mr Gove.

I know that people at the QCDA have been dismayed, if not enraged, by his comments. To Mr Gove, this may seem like an intellectual exercise in culling needless “back office” functions, or in making party-political points, but these are real people who are losing their jobs. (It has been pointed out to me that, before the general election, Mr Gove was reported to have told head teachers that he would have a new piece of paper for QCDA staff: a P45. That also went down a treat, I understand.) They should have been shown more respect.

By the way, I have searched for a comparison between the QCDA and Mr Gove’s own department in terms of how much they spend on “back office” functions. In a sense these comparisons are slightly moot, as you could term all of the work carried out by both organisations “back office”: none of it is frontline in the sense of involving direct interaction with pupils, and so all of it might be said to be vulnerable to politicians seeking cuts. Trying to label certain functions within the Department for Education or the QCDA as “back office”, and certain others as not, therefore seems slightly perilous.

However, the Labour government did do this just before the election last year, the Cabinet Office producing a document called “Benchmarking the Back Office”. This found that the QCDA actually spent considerably less than the Department for Children, Schools and Families on the strictly “back office” functions it charted for all organisations: finance, human resources and procurement. The QCDA also lost an average of 3.2 days per employee per year to sickness absence, compared with a figure of 7.9 days for the DCSF. There was only one category listed as non-frontline in this document on which the QCDA was more expensive than the DCSF: the cost per square metre of its office space, which was just over double that of the DCSF. It is unclear from the document whether this related to the QCA/QCDA’s former home in Green Park, west London, or to its current base in Coventry.

That is it, on the specific points. It should also be said that, during the session in the House of Commons, Mr Gove on occasion responded dismissively to questions about the bill from his political opponents. In some ways this is surprising, as he is known to be courteous in private.

Some might observe all of this and see it as the natural rough-and-tumble of the Parliamentary process. Politicians being not entirely straight with evidence is hardly news, you might say.

But the Secretary of State’s approach to such evidence, and to presiding over this debate with a sense of fairness, does matter, particularly at this time.

It is true that, in some aspects of education, this government is taking a more hands-off position than was the case in the most controlling of the Labour years. For example, there are now no national teaching strategies, much of the targets regime is being dismantled, and the inspections system is being loosened dramatically for schools said to be doing well.

However, in other areas this is a very centralising bill, vesting yet more powers in the Secretary of State. Examples of powers given to the Secretary of State in this and recent bills include the ability simply to order schools to become academies, and to act as the ultimate arbiter of whether a local community would benefit from having a free school or academy set up in its midst, whatever local people think.

Labour has protested that the bill gives Mr Gove 50 more powers, but the reality is that power over what happens in English education has been becoming increasingly centralised over a period dating back 30 years, under both parties. For an example of how it increased under New Labour, see this article I wrote a couple of years ago here. It often seems, from the way successive education bills have been written in recent times, that those drafting legislation regard “the Secretary of State” as synonymous with “the guardian of the public interest”. Because he has some kind of very indirect democratic legitimacy – through being appointed by the leader of the largest party following a general election – the Education Secretary is treated as the holder of the public will, nationally and locally, in relation to many aspects of our schools system.

The demise of agencies such as the QCDA – for all its problematic history – will also concentrate power more directly in the hands of the person who is head of the Department for Education, which will take on its work: Mr Gove.

This degree of unfettered power is exceedingly rare in other countries.

Especially given the huge power he now wields, the Secretary of State needs to act fairly, in the national interest, rather than simply pushing forward a particular party political agenda which often seems to have the prime aim of making the other side look bad.

Schools, parents and pupils deserve much, much better than this.
