Friday, June 17th, 2011
A study apparently demonstrating the benefits of academy status has been highly influential in recent weeks.
The research, by academics at the London School of Economics, was published in April. It has been picked up not only by Blairite commentators who backed the original academies policy, but now by the Department for Education in its push to encourage all schools to become academies.
I would also hazard a guess that it was in the mind of the Today programme presenter Sarah Montague when she asked a sceptical head teacher yesterday morning to accept the statement that academies improve schools’ results.
The research, by Stephen Machin and James Vernoit of the London School of Economics, produced some conclusions which look very positive for academies. As the Financial Times reported when the research was published, the study found that “turning a school into an academy improves its performance – and that of neighbouring schools”. The study was based on an analysis of pupil-by-pupil results of schools turned into academies under Labour, in the years 2002-9, when most of the institutions converting had low GCSE results. It includes a caveat that it does not relate to academies which have converted since the coalition came to power.
Having now looked at this research in detail, I am very impressed by a number of aspects of its methodology. Specifically, it performs statistical checks on institutional results which seem far more robust than similar exercises carried out in widely-cited analyses of the academies policy in the past.
However, there is a gap in this research: the absence of any qualitative investigation into how academies opened under Labour have managed to produce their apparently impressive statistics.
This is an obvious question to ask: though academies’ benefits are often cited in broad-brush, quasi-ideological terms (such as allowing schools to break away from local authority influence, encouraging innovation through a sponsor, or simply promoting an often undefined quality called autonomy), why, in detail, would simply changing the structure of a school’s governance make a difference? What precisely have academies done to drive these results improvements? If they have greater independence, how have they used it, and what has been the connection with results?
And once you look into that, as this blog and other research by the Civitas think tank have done, you start to have doubts over whether this policy is quite the panacea that is now widely being claimed.
OK, first the impressive bits. For me, if you want to know whether schools can improve their results by being turned into academies, and you want your research to have any claim to credibility, you have to do at least two things, neither of which seems to have loomed large in claims made about academy results in the past.
First, you have to compare like with like. Over the past few years, governments have looked at the GCSE (or equivalent, of which more below) results of academies, and compared them to those of the schools these academies replaced. On average, at least on the headline published figures, they have tended to find academy results improving at a faster rate than those of the predecessor schools. Therefore, the argument goes, here is evidence that the academies policy is a success.
There are a couple of serious objections to any conclusions based on these calculations, though, including the following. What if the pupil clientele changed between the time before the school was an academy and now? The schools converting to academy status under Labour generally had relatively large numbers of disadvantaged pupils. If the replacement of such schools by academies tended to draw in pupils from slightly less disadvantaged backgrounds – drawn, perhaps, by the huge extra investment in new buildings that went with academies under Labour – with better results from their primary schools, this would be to the advantage of the academy. It might mean that, when results rose, this was more to do with changing pupil intakes than anything the academy had done itself.
The Machin and Vernoit research tackles this issue by comparing the key stage 2 test results of pupils who attended schools that went on to become academies during the period under study with those of children who joined the schools after they had become academies.
And the study finds that the pupil intake of academies did indeed “improve”. In other words, the academies under study were taking in pupils with better key stage 2 results than had been achieved by pupils entering the schools the academies replaced.
But here is the impressive bit: the researchers found that even after taking this pupil intake factor into account, the results achieved in the academies were better than those achieved by a control group of schools.
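The logic of that like-with-like check can be illustrated with a toy sketch. To be clear, the pupil data below is entirely invented and this is not the authors’ actual model, which uses far more sophisticated econometrics; the point is only the principle of comparing academy and control-school pupils within the same key stage 2 prior-attainment band, so that a changed intake cannot masquerade as school improvement.

```python
from collections import defaultdict

# Invented pupils: (school_type, ks2_band, gcse_points).
# "ks2_band" stands in for prior attainment at the end of primary school.
pupils = [
    ("academy", "low", 38), ("academy", "low", 42),
    ("academy", "high", 58), ("academy", "high", 62),
    ("control", "low", 34), ("control", "low", 36),
    ("control", "high", 55), ("control", "high", 57),
]

# Group GCSE outcomes by prior-attainment band, then by school type.
scores = defaultdict(lambda: defaultdict(list))
for school_type, band, points in pupils:
    scores[band][school_type].append(points)

# Within each band, compare the academy mean with the control mean. A raw
# academy-vs-control comparison would be flattered by a better intake;
# the within-band gaps are not.
for band in ("low", "high"):
    academy_mean = sum(scores[band]["academy"]) / len(scores[band]["academy"])
    control_mean = sum(scores[band]["control"]) / len(scores[band]["control"])
    print(f"{band}-intake pupils: academy ahead by {academy_mean - control_mean:.1f} points")
```

If the academy leads within every band, a changed intake alone cannot explain its headline advantage, which is, in crude form, the shape of the LSE finding.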
The second impressive aspect of the study was that it sought to take into account the effect on neighbouring schools. This has always seemed to me to be important, since the success or failure of a policy should not be judged only in terms of the effect on an individual institution but in terms of its impact on an entire area: if an academy – which under Labour usually came with new buildings worth eight-figure sums – succeeded only by drawing in more “educable” pupils from neighbouring schools, while those around it suffered and their results declined, this would raise questions about the policy.
But the Machin/Vernoit research looked at this issue, too. It found that neighbouring schools did suffer (to put it crudely) from the introduction of an academy nearby, in that the average achievement level of the pupils they recruited in year seven, as measured by their primary test results, fell. In other words, some of the higher-achieving pupils moved, at the end of primary school, to the academy whereas in previous years they might have attended its neighbouring school. However, despite their intake getting “tougher” in this way, the results in these neighbouring schools at GCSE also improved. The paper suggests that this was probably the result of greater competition from an academy nearby spurring improvement, on the main results metrics, by the neighbouring schools.
OK, that’s the good news. Now I come to my beef with this study. And I should say first that I am not trying to hit academies over the head for the sake of it with observations about strategies some schools might use to boost results. (The other day I met, as it happens, the principal of an academy with a very tough, non-selective intake in an area with grammar schools, now under pressure from the Government’s new GCSE floor targets, and thought what a challenging, important job that must be.) But neither do I think that we should just abandon detailed scrutiny of whether academies are quite the answer to all educational problems that they are being made out to be, and what their results really tell us.
So back to the research. The trouble is, for all the statistical expertise and checking that has gone into this study, it is still based on the assumption that you can use a set of exam results formulae – on one or two performance indicators – to answer definitively the question of whether these schools are actually providing a better education than their predecessors. In other words: the implicit view is that this question can be answered entirely statistically, without any reference to any qualitative understanding of what has actually happened to make these schools “better”.
Yet there are some fairly big alternative explanations. The obvious one is that academies have, in the main, simply been more results-focused than other types of school, and have thus sought to do whatever it takes to boost grades on the Government’s published indicators. That would mean that while the central indicators improved, other indications – statistical or otherwise – might give cause for concern: a wider sense of what might be felt to matter in education would give a different picture. Academies might, to put the hypothesis more crudely, have paid more attention to gaming the results indicator system than other schools.
You could say it is unfair to single out academies in this way, and for newcomers to this blog, this might sound hyper-cynical. But, as I’ve written before, academies under Labour seem to me to have been under more pressure to raise results than other schools. Most of these schools were specifically created to address the claimed underperformance of a predecessor school. They came, often, with tens of millions of pounds of extra funding for new buildings. Their results were subject to extra scrutiny in the media, not just at the school level but at the level of the national politicians overseeing the academies policy, whose reputations were staked on headline scores improving. They might – though I am guessing here – also often come with a business mentality, reinforced by their sponsor, which incentivised senior leaders to get results up, come what may, through bonuses linked to GCSE exam performance. It would be surprising, then, if one or all of these factors did not produce a very strong focus on those headline measures.
So, how to check whether any other explanations lie behind those improvements cited in the study than just a general sense that education has improved in the academies under investigation?
Well, I have to confess here that I have no killer line, or proof that this study is wrong in its conclusions. But I do think we should be wary of them. I want to come at this first statistically, and then anecdotally.
First, on the statistics, another impressive aspect of this research is that it does attempt to address, through the data of course, the most obvious way in which results could have been boosted artificially, if you like. This is through the use of non-GCSE qualifications.
Under the system in operation in recent years, other courses are counted as “equivalent” to GCSEs, for league table and results purposes. This is the case for the main measure used in this study: the proportion of pupils in each school achieving five A*-C grades at GCSE or vocational equivalent, including maths and English. Yet the fact that some of the GCSE-equivalent courses have been given high weightings in the results formulae – worth up to four GCSEs – and have high pass rates means that they can have a heavy influence on the overall published results. Schools encouraging high numbers of pupils to take these courses – whether they are doing so because of their own need to boost results, because of students’ needs or a bit of both – are therefore likely to get a results improvement out of doing so. Might not academies, then, under greater pressure to produce results gains, simply be turning to these courses to a greater degree than other schools?
So, back to the research. I was surprised to find that Machin and Vernoit not only addressed this possible alternative explanation for the better results of academies, but found, when they did so, that the use of non-GCSE “equivalent” qualifications did not explain the results improvements academies seemed to show. The success, then, stood even after taking this possible alternative explanation into account.
The way they calculated this was fairly straightforward: simply to perform their calculations using GCSE qualifications alone as the measure of success in each school, rather than GCSEs “or equivalent”.
This, they say, represents their check on this idea – that I refer to above – “that the performance improvements [in academies] are largely driven by performance improvements in unconventional subjects”.
So, they conclude that putting pupils on “unconventional” GCSE-equivalent courses does not explain the academies’ results success. I should say here that I lack both the professional statistical expertise of these researchers and the time they no doubt spent on their study. But I would say that it is a slightly odd conclusion, given some other things we know about academy results, as revealed in more recent data sets.
First, I have performed a very crude version of a similar type of test to the one they used in their study, simply by looking at the latest published GCSE results of academies (all of them academies set up under Labour, and therefore the group from which the LSE study schools were taken) with “equivalents” and without. I have then compared these figures to those of non-academy schools.
I did this using Department for Education spreadsheets, adding up the number of pupils in academies in 2010 who achieved five A*-Cs including English and maths in GCSE or vocational equivalent, and comparing that to the total number of pupils in the academies they attended. The same calculation was performed to total up the number of pupils in academies achieving five or more GCSE A*-Cs when these were not allowed to include “equivalents”.
The figure for academy results – the proportion of pupils achieving five or more A*-Cs including English and maths with vocational equivalents counted, which was the main published measure used in league tables under Labour and continues to be the main target for schools under the coalition – comes out at 43.3 per cent. Without equivalents, it drops to 33.0 per cent, a fall of 10.3 percentage points.
Now, a similar comparison for non-academy schools reveals a far smaller gap. With equivalents, non-academies end up on a figure of 57.0 per cent. Without equivalents, they finish on 52.5 per cent. This is a gap of 4.5 percentage points.
So, on the 2010 figures, “GCSE-equivalent” courses have contributed far more to academies’ headline results than they have at non-academy schools.
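For what it’s worth, the pupil-weighted aggregation I describe above can be sketched as follows. The per-school rows are invented stand-ins for the real Department for Education spreadsheet; only the method is the point – adding up pupils across schools before dividing, rather than averaging each school’s percentage.

```python
# Invented rows standing in for the DfE spreadsheet: each tuple is one school's
# (cohort size, pupils with 5 A*-C incl. English/maths counting equivalents,
#  pupils with 5 A*-C incl. English/maths on GCSEs alone).
schools = [
    (180, 80, 60),
    (150, 62, 48),
    (210, 95, 71),
]

# Pupil-weighted rates: total up pupils across all schools first, then divide.
# Averaging each school's percentage instead would over-weight small cohorts.
total = sum(cohort for cohort, _, _ in schools)
with_equiv = 100 * sum(n for _, n, _ in schools) / total
gcse_only = 100 * sum(n for _, _, n in schools) / total

print(f"with equivalents: {with_equiv:.1f}%")
print(f"GCSEs alone:      {gcse_only:.1f}%")
print(f"gap:              {with_equiv - gcse_only:.1f} percentage points")
```

The gap between the two printed rates is the quantity compared above: on the real 2010 data it came out at 10.3 points for academies against 4.5 points for non-academies.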
Second, there is evidence from the Government’s much-debated new English Baccalaureate measure. These figures showed, as I blogged about here, that nearly a third of academies with results to report had a score of zero per cent on the English Bacc, which records the proportion of pupils in each school with A*-Cs in English, maths, two sciences, a language and history/geography. Furthermore, the proportion of academies with that zero score on the EBacc was twice as high as that of a comparison group of schools with similar intakes.
This data would suggest, then, that if academies were improving their results, they were not doing it exclusively on the narrowly “conventional” subjects that Michael Gove has chosen to highlight through the EBacc. Yet the LSE study says its figures do not show the improved results at academies are the product of gains in “unconventional” subjects. So, to repeat, it is strange how this evidence contrasts with the LSE research.
Other than the GCSE “equivalents” move, there are other strategies which can be used to boost school performance if schools of any kind are particularly desperate to see their statistics improve. These include entering pupils multiple times for GCSEs in English and maths in particular, with schools knowing that these are crucial to their published rates. The Advisory Committee on Mathematics Education documented this practice in relation to maths last month, pointing out that sometimes pupils would be removed from the subject by their school if they achieved a C grade earlier than the end of their course, to give them time to focus on other subjects important to the school’s results, even though the pupil might be chasing a grade higher than a C in maths (not important to the school’s published indicator). I have no evidence that this has happened to any greater degree in academies though, as I say, I think the pressures on most of them to improve results have been great. But any study should be aware that headline results indicators will often not present the whole picture of what has been going on in schools.
My final detailed response to the study is anecdotal. And here, I just want to refer back to my original blog on academies’ EBacc results, a couple of months ago, for evidence.
This made several points in relation to studies and anecdotes on the subject of history.
Academies were more likely than other types of school to have few students studying history to GCSE, according to research by the Historical Association. Academies were also more likely to have a two-year Key Stage 3, which gives pupils more time to prepare for GCSE but concerned the HA because it meant many pupils were likely to lose one of the only three years in which they would study history at secondary school.
The report also quotes a teacher, from an academy, saying: “History is seen to be too academic! …Students who are predicted lower than a B are not allowed to study the course…We are also not allowed to run ‘entry level’ courses for students with specific needs, as that is not thought to be meeting the attainment targets for the academy.”
An Ofsted report on history teaching in primary and secondary schools, published earlier this year, also documented lower numbers taking history in academies. It found: “Entries for GCSE history from academies were significantly lower than for maintained schools overall.”
One online comment after a 2009 TES story documenting another academic report on the pressures facing history, as schools sought to boost their results in league tables, ran as follows:
“I used to work in an academy in London, and as I was leaving I had to rank every pupil in year 8 as an A, B or a C. A means that they could get an A or a B at GCSE. Therefore history appeared in their option forms. The B category were pupils who were borderline C/D. The C meant that they were predicted grades G to D. Neither categories B or C had history on their option forms! They were encouraged to take other less rigorous subjects.
“Even though I had known students previously predicted Ds and Es get outstanding results, who went on to do exceptionally well at A-level, and some even went on to do history at university.
“What was most upsetting was the case of one student, with a range of learning difficulties. He loved history, and orally he was phenomenal. He was put in category C, and was therefore being guided down a different pathway. He was devastated that he would not be able to take history in year 9-11. His mother rang the school, and explained that it was likely whatever course he was entered into, he would be unlikely to either pass or do very well in, so why couldn’t he at least take a subject he enjoyed?
“The plea fell on deaf ears and the boy was placed in some random BTEC or GNVQ course taught by some bland paper pushing academy drone who was being shipped in to ‘sort’ the school out of failing pupils and failing teachers.”
If you look back to my earlier blog, you will find reference to the parent of a pupil at a school taken over by the Harris chain of academies, who told me (and the local paper) that her daughter had been forced to take a BTEC sports course (worth two GCSEs to the school), at the expense of French GCSE, despite her daughter having no interest in sport. This was a clear case, said the parent, of the needs of the school to boost its published results taking precedence over those of her daughter.
So in response to this LSE study, I have put forward some statistics that run contrary to one of its more important findings, and also some anecdotes.
Not much, you might think. But there is a bigger point here: there should be more to the evaluation of a policy than simple results statistics, however clever the methodology and however robust the statistical cross-checks, especially in a system as complex as secondary school results calculations, which offers plenty of opportunities for schools to take tactical decisions to boost results. This risks encouraging behaviour that is less than ideal, from a pupil’s point of view, within particular subjects.
And is all that matters the number that appears at the end of the educative process? Or do we care about what happens along the way, and how the numbers are generated? If particular subjects have been affected in the drive for higher results, should an influential study like this not be investigating and having something to say on this? Or should such a perspective just be ignored: the idea is that we lay down the statistical rules for success, check whether the statistics have been raised and that, apart from some clever checking of data, is pretty much it?
To sum up, how do we know that academies under Labour did not simply pursue a more relentlessly focused version of “Education by Numbers”?
I think if researchers are going to make claims which are going to be used, whatever the caveats in the original research, by others to say categorically that a policy “works” and by implication that the education on offer in academies is better in a general sense than in other schools, they are going to have to be prepared to dig a little deeper – and not just statistically – into what has been going on behind the figures. Economists who do not do this will never be able to see or pronounce on the whole picture, I believe. Their research will therefore always be incomplete.
So it is a shame that statistics are simply being held up as conclusive evidence, one way or the other. This really is not, I think, for all the complicated formulae and technical expertise on display in this paper, a very sophisticated way of understanding what has really been going on in our schools.