This is my speech at a seminar at the 20th anniversary conference for Keele University’s excellent Centre for Successful Schools. I gave it yesterday (September 22nd, 2009).

Thanks very much for having me here. I just wanted to start with this first slide.

“My satire is against those who see figures and averages, and nothing else.”

(First slide)

Any views on who might have said this?

Charles Dickens

(Second slide).

Yes, as you can see, this is Charles Dickens, in a letter to a friend explaining his intention in writing Hard Times, which introduced to the world the character of Mr Gradgrind.

I want to argue in this talk that the current approach to schools policy has a strong sense of the utilitarianism against which Dickens railed in Hard Times. Statistics have been elevated to become ends in themselves, often on the back of very questionable, politically driven assumptions born in part, I think, of a needlessly low opinion of teachers. These assumptions, implicit in the accountability system which directs so much of what now goes on in schools, have quietly moved education towards becoming increasingly synonymous with exam preparation, with very little debate about whether this is really what the country wants.

But more of that later. I wanted to start on a positive note. I think part of the reason I’ve been invited to speak at this conference is that I have worked with the CSS in the past, reporting on its findings that the overwhelming majority of parents like the school to which they send their children. In November 2007, I reported how analysis of the centre’s survey results in the years 2004 to 2007 found 88 per cent of parents would recommend their child’s school to friends. This is in line with other survey findings, both in this country and in the United States.

In nine years as a reporter at the TES, I came across many examples which were striking in terms of the quality of the teaching on view. I am thinking, particularly, of attending the conferences of subject associations over the years, when admittedly very committed teachers shared ideas about how they would develop their lessons and spark pupils’ interests. A couple of examples stand out: I can remember seeing a lesson in which five-year-olds were being encouraged to grapple with early understanding of algebra in an incredibly inventive way, and visiting a school in Slough to see the bespoke work it was doing with recent asylum seeker children.

I’ve also got, I think, personal experience of the value of state secondary schools. Although I attended schools in both the state (primary and comprehensive) and independent sectors myself, ending up in a fee-paying school, most of my friends went to local state secondaries in the 1980s. I think these people, largely very successful and happy now, are a tribute to the education system of the time. It was far from perfect, clearly, but I think sometimes the “failures”, in quotes, can be overdone.

I also believe that this government should be commended for some important education policies, not least the transformation in the numbers of support staff now working in schools, its long-overdue move to improve school buildings and teaching initiatives such as the numeracy strategy. I believe, too, that school leaders have become increasingly innovative over the past decade or so, while there is no doubt that some of the data analysis work which has grown up over the last 20 years has been useful, if handled sensitively.

However, I was driven to write a book about the problems with one major aspect of school policy having come across, in my day job at the TES, mountains of evidence that all of the good work I have just described has been produced against the background of a policy agenda which I might describe as broadly hostile to it. That is, good teaching occurs, where it does, in spite of, rather than because of, the standards agenda.

I should say at the outset that I am an outsider in all of this. I am not a teacher, and I do not have children. I just speak as someone who has looked at evidence, for at least five years now, on the effects of exam results-led schooling and has come increasingly to question it.

I want, now, to outline why I think the statistics-based model through which schools’ behaviour is now directed is dysfunctional.

First, what is this model? Well, I have called it hyper-accountability. It could also be seen as synonymous with the standards agenda. The idea, over the past decade or so, has been that elected politicians lay down the goals for the education system in terms of a series of targets, most of them measured in terms of exam and test results, for which they themselves are held to account.

They then use a panoply of mechanisms to try to impress on schools that their central goal is to improve these statistics. These include: league tables which make it very clear which institution has the best figures; Ofsted inspections which have increasingly focused on test and exam statistics; performance pay and performance management in which results can be central to teacher success; statistical analysis systems such as the Fischer Family Trust apparatus which have the unintended effect of helping reach a statistically-based view on which are the “good” and “bad” teachers; threats from ministers to use direct powers and those of local authorities to intervene in, or close, “underperforming” schools; and visits from, for example, local authority staff and School Improvement Partners impressing on school leaders that results improvements are vital. As the statistics then improve, education as a whole will get better, is the argument, and the undoubted extra investment in schools will have been justified to the electorate.

I believe this model is, to use a word beloved of critics of our schools system, failing, for at least four reasons.

First, there is a huge conceptual problem at the heart of hyper-accountability. The idea behind it is that, by using the accountability system to remind schools that their first thought should always be to improve results, as measured by the Government’s statistical indicators, their interests will be aligned with those of their pupils. For, it is contended, pupils need to do well in the exams around which the indicators centre, so hyper-accountability ensures that schools serve pupils’ interests.

In fact, what I found in covering this subject for the TES, and in subsequent research, was that hyper-accountability very often sets up a conflict of interest between the needs of the school, in delivering a set of short-term exam results, and the needs of building long-term understanding and, hopefully, lifelong engagement in education for all pupils. When this happens, too often the needs of the school, understandably, take precedence, because so much is riding on its results. That is, hyper-accountability is dysfunctional because it encourages decisions to be taken on the basis of the needs of the school, rather than those of the pupil. That this has happened despite, I believe, many teachers wanting to help their pupils in the long run – of which I will say more later – makes it doubly damaging.

What is the evidence of this conflict of interest? Well, my book is an attempt to present the evidence on the side-effects of hyper-accountability, for pupils. I should say, here, that in it I was not seeking to blame teachers and school leaders for some of the actions which this strange system encourages them to take. And politicians, who have control over this regime and could change it to reduce such behaviour if they wanted to, deserve little respect if they then turn around and blame the profession for following its logic.

As I say, my book is an attempt to chart these effects in detail. They include: schools directing pupils towards some GCSE-equivalent courses which, though carrying a high weighting in league tables, often have questionable currency in the jobs market; the tendency, prevalent at least before coursework rules were tightened up, for teachers to give too much assistance to pupils in their assignments, in the teacher’s desperation for results, thus changing the student-teacher relationship and sending a message to children that hard work is non-essential; and the well-known targeting of level 3/4 and GCSE C/D borderline pupils for extra attention, driven at least in large part by the push to improve a school’s published results, thus, for me, sacrificing an ideal of state education that values equity. There is also copious evidence that the months, and, perhaps, years that both primary and secondary schools have been forced to devote to teaching to the test can be against pupils’ long-term interests.

Again, the evidence on this is weighty – I also have a website which attempts to chart it – but I will just share some more of it, mainly focusing on evidence I have come across in recent months.

This includes a recent investigation for the Qualifications and Curriculum Development Agency by academics at Nottingham University, which found that some schools under pressure to raise the proportion of their pupils gaining a C grade in maths GCSE were reacting by entering pupils for the exam in year 10 or early in year 11, in the hope they would “bank the C”, and then allowing them to move on to other subjects later in year 11. The academics were concerned this could lead to pupils leaving compulsory education with a less firm grasp of the subject than they might otherwise have had.

(Third slide) They concluded: “One of the most significant challenges to improving learner experiences in mathematics classrooms is the effect of high-stakes external assessment on the experienced curriculum, particularly the ways teachers are compelled to behave in response to performative pressures.”

(Fourth slide) This backs up a conclusion by Ofsted last autumn on maths teaching in primary and secondary schools. It said: “Evidence suggests that strategies to improve test and examination performance, including ‘booster’ lessons, revision classes and extensive intervention, coupled with a heavy emphasis on ‘teaching to the test’, succeed in preparing pupils to gain the qualifications but are not equipping them well enough mathematically for their futures”.

Evidence continues to pile up more or less every week on this subject, in studies from academics, position papers from teachers’ subject associations and so on. Recently, a study by academics at East Anglia and Southampton universities, based on focus group work with history teachers, found that less academic pupils were being discouraged from taking the subject, because this might affect their school’s league table position.  

(Fifth slide) And focus group work with 250 university staff, included in this year’s Nuffield Review of 14-19 education, concluded: “Narrow accountability based on exam success and league tables needs to be avoided. This leads to spoon feeding rather than the fostering of independence and critical engagement with subject material.”

As I say, I could go on and on and on with this stuff, but you get the picture. The long-term learning needs of all pupils are often not well-served by this crude results-for-the-school-are-everything system.

This is particularly important, though, I think, bearing in mind the second conceptual problem with results-based accountability in its current form. The model as it stands is dysfunctional because improved national exam results are not, actually, in themselves, of much use to pupils. Therefore to gear so much effort to the production of them, as ends in themselves, is, to my mind, folly.

This may sound counter-intuitive. But I’m pretty sure it’s right. Exam results are treated by politicians as if they are an absolute good. An increase in the supply of good grades is seen as unreservedly good news, since it means that more pupils will go on to the next phase of their education or employment with the qualifications they need, and also because they are taken to indicate rising underlying standards of teaching and learning.

But I am afraid it is far from clear to me that this stands up to scrutiny.

For an increase in the national supply of good grades is not, by itself, a good thing for the pupil. Simple laws of supply and demand say that, as more young people achieve good grades, the value of those grades, in terms of their use in securing finite places in employment and further and higher education, becomes less for the individual. This is not to make a judgement about whether that is fair or not. It is a fact of life. An increase in A-level A grades will not improve the prospects of students unless universities make more places available or more jobs are created. In the absence of this happening, as we have seen in recent years, an increase in national exam results simply leads to those using results to make selection decisions – universities and employers – raising the bar as to the grades they expect of students. To put it another way, if a certain set of Government policies helps improve a pupil’s grades, but all other pupils improve nationally to the same extent, there may be no net benefit to the pupil.

People might see this argument as elitist, as implying that I believe exam success should be restricted in some way. But this is far from the case. I think it would be fantastic, and potentially transformative for our society, if the rises in the number of good grades reflected true underlying gains in teaching and learning. The assumption is that the national statistics reflect this. Unfortunately, and I make no apologies for being sceptical here, the current system does not work in such a way that we can be confident this is happening. Instead, the accountability system of “improve your results or else” simply encourages schools to pursue grades as ends in themselves. And, crucially, there are no robust checks on what these improved results actually mean. Essentially, the system does not care; it just says they must be generated.

I can pull back and expand on this. A diagram may be helpful at this stage.

I think the business of turning good teaching and learning into good end results, in terms of Government indicators for a secondary school, could be represented as a series of stages, from A to E.

(Sixth slide)

A Good general teaching of a subject

B Hard work by pupils

C Exam-specific preparation/teaching to a particular test. This includes the selection of exam-specific textbooks, for example.

D The production and calculation of results in particular subjects. This is the process by which results are produced, including individual boards’ determination of grade boundaries.

E The production and calculation of results for the school. This is the final computation of an institution’s overall statistics.

Now, I think that, in an ideal world, if this system were working effectively, stage E: the production of good grades for the school, would simply reflect the quality going on in stages A and B. In other words, good grades for the school would be a direct product of good subject teaching and hard work by pupils.

In fact, because the structure I have described is relatively complex, there are many opportunities for schools to improve their results by what I would call “gaming the system” – paying attention to what happens at the right-hand end of the screen (stages C, D and E), in terms of the calculation of results formulae, rather than the left (A and B).

So, schools can spend a lot of time thinking about tactical decisions which might help them improve results in an individual subject, such as which pupils are entered, the level of difficulty at which they are entered, whether the course is modular or linear, which exam board is favoured, or which exam-focused textbooks they buy.

I reported on what I thought was a classic example of a tactical approach to within-subject improvement in the Guardian a few weeks ago. A history teacher had documented, in a website discussion, the measures he had taken to boost his pupils’ success, including opting for an exam board whose papers he thought would be helpfully “predictable” for his students, and then analysing questions endlessly to help ensure that his pupils could get a C even if their underlying grasp of the subject was shaky. Is that your definition of a good teacher? To me, this is someone who is very attentive to the rules of this game. But the game itself is at fault. On the subject of history, I attended a course in 2006 in which a senior GCSE examiner told teachers they need not bother teaching the hardest material, as pupils did not need to know it for an A*. This might not be ideal, he admitted, but it was a case of “realpolitik”, or “the ends [of better results] justify the means.”

Crucially, schools can also look at the mechanism by which results are calculated for the school and then decide how to focus their attention. The current dominant indicator, of the proportion of a school’s pupils achieving 5 A*-Cs at GCSE, including English and maths, has led some schools to encourage pupils to take courses worth four GCSEs; to take some pupils out of studying other subjects in order to focus on maths and English; and to focus on C/D borderline candidates.

The following interesting letter in the Guardian in May this year, from a teacher in Tower Hamlets, illustrates what can happen. It said: “The 30% cut-off for the percentage of pupils gaining English and maths GCSE, below which schools become National Challenge schools facing changed status and possible removal of the head, does not include [English] literature. Therefore, in my school, any pupil in danger of missing a grade C in English has had their entry for English literature withdrawn in order to receive extra coaching to ensure the school’s all-important benchmark is reached.”

I believe that, in the final analysis for this country as a whole, it is only the quality of A and B which really matters. Unfortunately, too much of schools’ attention gets focused on C, D and E. This means that, when results improve, it is difficult to be confident whether this is due to any underlying, non-exam-specific improvement in the quality of teaching and learning, or simply to schools getting smarter at playing the statistical system with which they find themselves confronted. And this is the case even though exam boards take great care in trying to set grade boundaries in individual exams which they hope will preserve standards, in that exam, in line with the previous year.

Furthermore, it may be that the gains which the results improvements imply would not be replicated if pupils’ underlying understanding were measured in a different way.

There are some interesting studies which appear to bear these fears out. One, presented this month at the British Educational Research Association’s annual conference, in which secondary pupils sat maths tests identical to those given to a similar cohort in the 1970s, found little difference in scores between the two eras, despite the transformation in GCSE results. There is also the research by Michael Shayer at King’s College London, which found that some key underlying aspects of scientific understanding had declined in 11-year-olds over the same period, despite the improvement in test results since the 1990s. Recent international studies have had mixed findings, with English pupils appearing to have made gains in one set of assessments given to young people around the globe, but not in another.

I have tried, and perhaps failed, not to labour this point too much, but there is another issue here. Details of the way the GCSE and A-level system works mean it is not, in itself, a good indicator of underlying changes in education standards. Put briefly, too much about this system changes from year to year for anyone to be certain, when results go up, whether teaching and learning has improved or something else has happened. Among the reasons are that exam papers have to be changed every year – making direct comparisons between pupils’ performance in identical papers from year to year impossible; the rise and rise of exam-focused support from awarding bodies, which may or may not be a good thing but certainly makes comparisons with years when it was less available more difficult; changes to the structures of GCSEs, including the move to modularisation; changes in the options available to pupils, including the relaxation of the rule that all had to study languages at key stage 4; the availability of exams in many sessions throughout a course; and the increased use of data analysis to target pupils for extra support, which may be a good thing in itself but is not the same as underlying teaching quality improving overall. As the old adage goes, if you want to measure change, don’t change the measure. But GCSEs and A-levels, and the support available to generate good results, are changing almost constantly.

One of the main functions of GCSEs and A-levels now is to act as gauges of the nation’s education standards. This was not their original purpose, which was to measure the comparative achievements of pupils in a particular year. In this, I think they do a pretty good job.

But because they are not very good as indicators of national standards, for the reasons I have outlined, this just feeds public scepticism about state schooling. Scepticism about what the results really mean is undoubtedly encouraged by the media. But there are sound reasons why it will not go away, as I hope I have indicated. If this system were to have a chance of building confidence, we would need reliable, objective measures as to whether or not schools are improving. We have not got them. We need to try to design them.

Indeed, it could be argued that the production of national results statistics, several times a year, just gives critics frequent chances to attack state schools. For, however good the results are, they are never good enough. And even to try to speak up about success in the figures might be construed as not having high enough expectations of what can be achieved. And yet, the paradox, that most parents support the school their child attends, struggles for attention. This is another reason why exam statistics-driven accountability may be self-defeating, in the end, in contributing to a fall-off in public confidence in education.

It also could be argued that the overwhelming focus on results does serve a social justice function, in that it helps to ensure that pupils, particularly those from poorer backgrounds, achieve grades that will help them do well later in life. Without the pressure on schools to produce results for these pupils, is the argument, they might be let down. But recent evidence on social mobility is hardly encouraging for exam-driven schooling. Indeed, recent A-level results have shown independent and grammar schools pulling away in the production of A grades, which will allow their pupils to gain access to highly selective universities.

I should, probably, pull back from this slightly now, and say that I wouldn’t disagree with any head teacher who said that it is right, from their point of view and bearing their pupils’ needs in mind, to pursue improved exam results, almost as ends in themselves, as a goal for their students. There clearly is a benefit, from the point of view of the individual school, in pursuing this aim, particularly if the outcome is that a certain school’s students end up with better grades than those at the school or college down the road.

I just don’t think that, if this is going on across the country, this grades race – the pursuit of results as ends in themselves – is doing us much good as a whole.

The third reason why hyper-accountability is dysfunctional and damaging is that it is, I believe, based on a purely dogmatic view of what matters in the public services. This has been imported from outside of education and simply adopted into schools policy without proper thought as to its implications.

This dogma is the widely-held theory that in education, as in other areas of public life and in the corporate world, “outcomes” are what matters and the “inputs” – the processes by which those outcomes are secured – are insignificant.

In education, “outcomes” have been translated, unthinkingly in my view, into “examination results”. Thus, we have a system which is run as if all that matters during a child’s years of learning are the “outcome” measures generated along the way. This is probably by default: exam results are the most obvious indicator to hand, since they are easily measurable.

The thinking behind this is understandable, on one level. The nightmare for a politician is that investment and energy are pumped into a public service and directed at changes which might be seen to be valuable, such as cutting class sizes, increasing teacher pay or promoting “creativity” in lessons, but without any demonstrable, measurable benefit to users of those services as a result. Without such evidence, the public might turn around and say that this money has been wasted. Moreover, there is clearly something to be said for reminding teachers and other public and private sector workers that all this investment has to come with an end product for those they are serving.

The problem is, in education this ideology has gone far too far. Do the public really believe that a good set of exam results is all that we seek from education? This is, at least, surely, contestable. It is completely reductionist. Yet the reality is that this assumption underlies the accountability system by which schools are now regulated, and their behaviour influenced. They are encouraged to act in ways suggesting a good education is synonymous with a good set of exam results. This has happened largely on the quiet, without debate.

Last year, the School Teachers’ Review Body, which advises on teachers’ pay, came up with the following statement in a report.

It said (Seventh slide): “Our strongly-held view is that teachers are accountable for outcomes, not inputs or activities.” School Teachers’ Review Body, April 2008.

In one fell swoop, then, it was deciding that teaching, the obvious “input” of education, had no intrinsic value whatsoever. Pupils’ actual experience in the classroom was simply written off. The STRB, of course, advises ministers on teachers’ performance pay. This is a model of school regulation which crudely sets up schools as factories, using “inputs” to churn out “exam outcomes” as if they were widgets on a production line.

Another example is Ofsted, whose inspection regime has increasingly been founded on the view that a good school is synonymous with a good set of exam results. The statement in square brackets on the slide [below] is my own, but I think this is the clear implication of what Ofsted was saying.

(Eighth slide): “No school can be judged to be good unless learners are judged to make good progress” [As measured by test and exam results].

Ofsted guidance to inspectors, February 2009.

This is supporting an entirely reductionist view of teaching. To put it another way, the ends of better statistics justify any means of achieving them.

(Ninth slide) As the Nuffield Review of 14- to 19 education concluded in May, the “means” (typically, how much pupils become interested in the particulars of a subject) are vital. “There may well be spin-offs from the teaching of Macbeth (the meeting of externally imposed targets and the passing of exams),” it says, “but the educational value lies in the engagement with a valuable text.”

Furthermore, the fact that the Government is so obsessed with outcome indicators means that it has, I believe, missed a trick: it could have done more to help schools with some of the problems which may be inhibiting their chances of success. For example, I was staggered to find out that, as of a few years ago, the department, which compiles a vast array of statistics on test performance, had very little data on the distribution of maths and science teachers, particularly to schools in challenging circumstances. It would have done far more to help those schools by focusing on supplying these high-quality “inputs”, good teachers, than by simply issuing more threats that they had to improve their results or face closure.

There are examples of the Government making an effort to improve the “inputs” – providing memorable learning experiences for pupils. I recently saw an advert for “Film Club”, the initiative which enables pupils to experience films after school. But we need more of them, and they often struggle in the face of performativity pressures: I am thinking particularly about moves to support subjects such as music and PE in primary schools, which can lose out to the pressure on schools to raise their Sats indicators.

The fact that this system does less than it should to support schools in providing good learning experiences to pupils, and spends far too much energy telling them to improve statistical outcomes or else, leads on to the fourth and final reason that this system is dysfunctional. It is based, I believe, on an essentially negative view of human nature, and of the motivations of public sector professionals. The idea is that all that institutions and individuals who are “under-performing” (in quotes) need is a little more monitoring through the Whitehall-orientated accountability system, a little more carrot and stick, and they will then do a better job. Other reasons why success might be more elusive for them are simply, implicitly, dismissed as excuses.

This philosophy has been well enunciated by a Blairite thinker, Professor Julian Le Grand of the London School of Economics. He was an adviser on health reform to Tony Blair in 2003-5, and has written extensively about bringing the concept of choice to the public sector as a whole.

(10th slide): Teachers: “knights” or “knaves”?

 

In a 2003 book on public sector reform, Le Grand used the terms “knights” and “knaves” to describe the way public servants could be viewed. Before the 1980s, he said, public servants tended to be thought of as “knights”, naturally inclined to serve the public interest. Margaret Thatcher changed that, viewing public servants as essentially self-serving, or “knaves”. There was a tendency for them to serve their own – the “producer” – interests, rather than those of the people they were meant to serve – the “consumers”.

Le Grand said this latter position was based on a view of human nature as essentially self-interested, which dates back to the days of the economist Adam Smith and the philosopher David Hume, and which also, I am told, embraces the 19th-century utilitarians against whom Dickens argued. Basically, Le Grand sides with it.

Staggeringly, to my mind, Sir Michael Barber, in his 2007 book “Instruction to Deliver”, quotes Le Grand approvingly, agreeing with him that the theory of people as essentially rational, self-interested individuals translates, in the sphere of public services, as (11th slide): “However committed the professionals are, they can never have the degree of concern for users that the users have for themselves.” But this statement, made in support of an argument for the use of quasi-markets in public services, is as evidence-free as it is insulting. It is particularly mind-boggling when you consider that teaching should be, almost by definition, an altruistic profession.

But by Le Grand’s thinking, you need a system of economics-style incentives and influence over the motives of public service professionals, so that they will not simply revert to self-interest and fail those they are meant to be serving. In many ways, the hyper-accountability system I have described is top-down and Stalinist, but in some ways it is also Thatcherite, in the sense that the only way to get public sector professionals to stop serving the “producer”, rather than the “consumer”, interest is to give them self-interested reasons to perform, ideally by getting them to compete statistically against their fellow professionals.

(12th slide): “Not everyone in public services likes league tables, but I love them.”

The top-down, authoritarian nature of this regime can be deduced further from comments in Sir Michael Barber’s book. In it, he expresses regret over the departure of Chris Woodhead as head of Ofsted in 2000, saying that when Woodhead left “a key lever in the strategy for school improvement – Ofsted – had been weakened”, and approvingly says the introduction of “naming and shaming” some poor secondary schools on Labour’s assuming office in 1997 was a signal to the electorate that the party would be “hard as nails” with underperformance. He talks of “restless, sleepless nights worrying about where the next percentage point [in test result improvements] was coming from”, even though the system under which national test results are produced is almost certainly not accurate enough for changes of one or two percentage points to have any real meaning. He also professes a love of league tables – the whole book is, in a sense, a hymn to the ability of league tables to hand power to those at the political centre who design and compile them. Sir Michael says: “Not everyone in public services likes league tables, but I love them.”

Sir Michael has also talked about short-term results improvements being important to politicians and those who run public services, for without them, you cannot take the public with you in the direction you want to go. “A long-term strategy must be protected through the delivery of short-term results,” he argued in 2007. This ends-justify-the-means argument could be taken to underscore the need to take a pragmatic, non-principled approach to targeting statistical gains. The trouble, though, is that in my observation, those running public services – like football managers – are never allowed to escape the search for short-term results. In the worst cases, heads with one bad set of test scores can face losing their jobs if they are unfortunate enough to be visited by Ofsted in the aftermath. This unceasing pressure is, arguably, the defining feature of hyper-accountability, and must implicitly be viewed as a good thing by its advocates.

 

To me, the accountability regime, in its current form, only makes sense, given its many and demonstrable downsides for pupils, if most teachers actually would fail their pupils if left less regulated. But that seems counterintuitive. What evidence is there on this, I asked Le Grand, in an interview? Shockingly, he told me that there was no such evidence, since such a study would be difficult to construct. In the absence of it, he effectively writes off any public service motivations in teachers and others as if they did not exist.

If many teachers actually are altruists, wanting to help their pupils but often impelled to take action against their better judgement because of the demands of the accountability system, I think this is an exceedingly high price to pay.

That’s almost my conclusion, except that I would say that I do believe accountability is important. There clearly have to be safeguards on schools’ and teachers’ behaviour to stop things going wrong. I am just totally unconvinced that the present system is helping education in this country. Yes, pupils have become better prepared to take exams, and results have gone up, but too much of the attention has been directed at the boosting of performance indicators which have no value in themselves. I have said nothing about the excessive four years of exams now facing the majority of students at the end of secondary school, or about the rigid system of teaching to assessment objectives on which the exams are based.

I could say more about this system – the undertones of bullying in Ofsted judgements, in which someone’s decades-long career is allowed to be summed up in the word “inadequate”; the unthinking belief, implicit in the statistical monitoring systems, that all pupils will progress at a constant rate throughout their school careers, despite the fact that, as I understand it from psychology, this is not how learning works; and the anxiety that the system feeds, not just in pupils, but in parents who are led by the figures to believe that there are great differences between state schools when in reality they often share more characteristics than not.

People in the audience might point to changes in the offing, such as the recent revisions of Sats policy and the introduction of the school report card, as a replacement for conventional league tables. Although I clearly welcome any attempt to question whether school results “outcomes” should be seen only in terms of exam success, I’m not convinced by these reforms, which will replace one set of league tables with another. So many of the difficulties with hyper-accountability run at a far deeper level than can be addressed simply by changing the statistical indicators.

I do, as mentioned, believe that many teachers are doing a great job despite this agenda. I just wish it wasn’t in itself doing so much damage.

A better accountability system would, I believe, place more emphasis on human judgement, rather than statistics, as the main way in which schools are judged. For this reason, and I know this is far from perfect, I would go back to some more supportive version of old-style Ofsted inspections as the main measure of school success, rather than league tables. I would try to foster informal accountability between parents and schools – that is, the ability of parents to build up good relationships with teachers but to have greater rights if things go wrong – and try to stop accountability being routed via Whitehall. On that note, I would greatly change the way politicians and civil servants exert influence in this system, as their effects – though sometimes well-intentioned – are so often malign. And I say this as someone who has very much been a supporter of the concept of government intervention to create a better society; you just get very cynical after observing the way it operates, close up, where the public interest so often takes a back seat to realpolitik. That word, I think, could come to define New Labour. And its dubious implications, for a public sector over which this government has exerted unprecedented control, need some serious thought.

The Government should be about offering the support mechanisms by which schools can provide excellent quality education for pupils and their parents, not forever getting in the faces of professionals who very often are trying to do the right thing by those they serve.
