OK, I mentioned a while back on this blog, I think, that I was keen to set up a section on this site called “Data watch: Ofsted”, intended to feature observations on the use of exam statistics in Ofsted inspections. This is something I’ve written about before, notably in a front-page story in the TES last year about the connection between schools’ Ofsted gradings for their test and exam results and their overall inspection verdicts.

I’ve recently been trying to do some more work on whether patterns can be found linking the details of schools’ exam results to the outcomes of their inspections. Specifically, I wanted to investigate claims that schools’ contextual value added (CVA) scores play a large part in how effective they are adjudged to be by inspectors.

I thought there might be a strong link here, as CVA could be seen as the most sophisticated (it is certainly the most complicated…) statistical measure of school performance there is. It compares a pupil’s overall test or exam performance at a certain point with what they might have been expected to achieve, given their previous results and a bewildering array of other characteristics thought to have an effect on performance. The figures for each child are then combined to give an overall score for the school, which might be seen, if the system were infallible, as the last word on whether or not it “adds value” to its pupils’ education.
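For anyone who likes to see the arithmetic spelled out, here is a toy sketch of the idea in Python. It is emphatically not the official CVA model (the pupil figures, the prediction line and its coefficient are all invented for illustration), but it shows the shape of the calculation: actual minus expected for each pupil, averaged across the school, with published scores centred on 1000.

```python
# Toy illustration of the CVA calculation: NOT the official model.
# The pupil figures and the prediction line below are invented.
from statistics import mean

# Hypothetical pupils: (prior-attainment score at KS2, actual GCSE points)
pupils = [
    (27.0, 340.0),
    (29.5, 390.0),
    (24.0, 310.0),
    (31.0, 420.0),
]

def predicted_points(ks2_score: float) -> float:
    """Stand-in prediction; the real model adjusts for many pupil characteristics."""
    return 12.5 * ks2_score  # illustrative coefficient only

# Pupil-level value added: actual performance minus expected performance
gaps = [actual - predicted_points(ks2) for ks2, actual in pupils]

# School-level figure: average the pupil gaps, centred on 1000
school_cva = 1000 + mean(gaps)
print(round(school_cva, 1))  # a shade above 1000, i.e. slightly better than expected
```

The real model, of course, folds in all those extra pupil characteristics rather than a single prior-attainment score, which is what this sketch deliberately leaves out.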

As part of my investigation, which centred on secondary schools, I got hold of a spreadsheet which Ofsted now produces of all schools’ inspection judgements over a given period, this one relating to the 2007-08 academic year. I then matched these judgements against league-table results for 2007 for each of the inspected schools. This allowed me to compare each school’s overall inspection verdict with its CVA score for 2007, which is likely to have been the set of CVA results inspectors would have had access to when visiting the school.
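For the technically minded, the matching step looks roughly like this in Python, assuming the two datasets have been saved as CSV files. The file names and column names here are hypothetical; in practice the sensible join key is the school’s unique reference number (URN), which appears in both sources.

```python
# Sketch of matching Ofsted judgements to 2007 league-table figures.
# File names and column names are hypothetical.
import pandas as pd

inspections = pd.read_csv("ofsted_judgements_2007_08.csv")  # e.g. URN, overall_verdict
league_tables = pd.read_csv("league_tables_2007.csv")       # e.g. URN, cva_score, pct_5ac, pct_5ac_em

# Keep only the schools that appear in both files,
# joining on the unique reference number
merged = inspections.merge(league_tables, on="URN", how="inner")

print(merged[["URN", "overall_verdict", "cva_score"]].head())
```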

And the results were…well, to be honest I was surprised that there was not a closer association between CVA and overall verdict. Although there was a clear trend for schools with higher CVAs to receive better Ofsted verdicts, the relationship was not as strong as I thought it might have been. (I would like to show you an Excel graph of the relationship, but alas this has defeated my computing abilities at this stage.) I did find schools which were adjudged to be “inadequate” by Ofsted but which had above-average CVA figures: the highest I found among “inadequate” schools was 1013, at a school which also, puzzlingly, had above-national-average figures both for the percentage of pupils achieving five GCSEs at A*-C and for the percentage achieving five GCSEs at A*-C including English and maths. And I found that 18 of the 174 schools which were said by inspectors to be outstanding had below-average CVA scores. The same sort of relationship held, overall, when I compared schools’ figures for five-plus A*-Cs with their inspection verdicts, and for five-plus A*-Cs including English and maths with inspection verdicts.
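Since the Excel graph defeated me, here is roughly how the comparison could be run in code instead, with a few made-up rows standing in for the real merged spreadsheet (the column names and values are assumptions, not my actual data): summarise CVA within each verdict band, then count the mismatches, such as “outstanding” schools whose CVA sits below the 1000 benchmark.

```python
# Sketch of the comparison, with made-up rows standing in for the merged data.
import pandas as pd

merged = pd.DataFrame({
    "overall_verdict": ["Outstanding", "Good", "Inadequate", "Outstanding", "Satisfactory"],
    "cva_score":       [1021.3,        1004.8,  1013.0,       996.2,         989.5],
})

# How CVA is distributed within each verdict band
print(merged.groupby("overall_verdict")["cva_score"].describe())

# Schools judged outstanding despite a below-average (sub-1000) CVA
outstanding = merged[merged["overall_verdict"] == "Outstanding"]
print((outstanding["cva_score"] < 1000).sum(), "of", len(outstanding),
      "outstanding schools had a CVA below 1000")
```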

This doesn’t seem to me to be proof that CVA is not important in the inspection process: it may be tricky, for example, for inspectors to reach a verdict which is strongly at odds with what CVA says. But the relationship is not as clear as perhaps I thought it might be.

If anyone out there would like more on this, or a look at the spreadsheets and graphs on which this posting is based, please email me at [email protected]
