Archive for September, 2009

Wednesday, September 23rd, 2009

I gave a talk yesterday on why, in my view, the accountability system now in place in schools is dysfunctional, counterproductive, based on highly questionable assumptions about what motivates teachers, and frequently damaging to pupils’ long-term interests. The link is here: 

Apologies, as ever, for some slight glitches in the text.

I also should have put up, a while back, a link to a pamphlet on assessment that I worked on in the early months of this year. “Assessment in schools. Fit for purpose?” is a commentary on the assessment system which I co-authored with the Assessment Reform Group. The pamphlet was part of the Teaching and Learning Research Programme, funded by the Economic and Social Research Council. The link for it is here.

- Warwick Mansell

No Comments
posted on September 23rd, 2009

Thursday, September 17th, 2009

Well, not one but two stories today taking issue with statistics-driven schooling.

Here is one in the Daily Telegraph, reporting on a document from the AQA exam board setting out the malign impact of league tables on the teaching experience for pupils:

And another, also from the Telegraph but reported elsewhere too: this sees Mick Waters, former head of curriculum at the Qualifications and Curriculum Authority, criticising the effects of test-driven schooling in primary schools.

- Warwick Mansell

No Comments
posted on September 17th, 2009

Monday, September 14th, 2009

The Guardian published an article over the weekend on students being pushed out of some schools and colleges half-way through their A-level courses.

Read it here.

- Warwick Mansell

No Comments
posted on September 14th, 2009

Monday, September 7th, 2009

Anyone who wants to investigate whether the apparently impressive rise in national curriculum test and GCSE results over the past 20 years is genuine is always on the look-out for alternative measures of education standards.

In other words, while the official data might point to seemingly staggering improvements, there could be other explanations than that children are simply becoming better educated, not least the phenomenon of teaching to the test, whereby instruction becomes very narrowly focused on a particular exam. If the gains suggested by the improvements in official results are truly useful to the pupil, they should be capable of being measured through other tests.

A study which has just been presented at the British Educational Research Association’s annual conference in Manchester was particularly useful in this sense, as 3,000 secondary school pupils were presented last year with questions almost identical to those a similar sample of children had faced back in 1976 and 1977.

And the results uncovered by this team of highly experienced academics from King’s College London and Durham University were astonishing, when one considers the transformation of secondary exam results over the same period: there has been little change.

The questions, taken by 11- to 14-year-olds in both eras, were divided into three sets, testing algebra, ratio, and mastery of the manipulation of decimals. In the last of these three categories, pupils appeared to have improved since the 1970s, perhaps reflecting the greater use of calculators and computers, where decimal notation is to the fore, suggested the researchers.

But the 2008 pupils fared roughly the same on the algebra questions as their counterparts had in the 1970s. And on the ratio questions, they came off slightly worse than their predecessors of a generation ago.

Looking at the results as a whole, lower ability pupils appeared overall to have performed worse in 2008 than they did in 1976/7. That is, the tests suggested a longer tail of underachievement. Today’s higher-achieving pupils, though, fared slightly better than their forebears.

Overall, though, the message of these test results was largely “no change”. Yet the proportion of pupils achieving O-level grade C or better was only 22 per cent in the early 1980s, compared to more than 55 per cent last year. This is potentially devastating stuff for a government which has made exam results the key indicator of national education performance.

GCSE results, like national tests, suffer as measures of national standards because they do not retain questions from year to year. Hence it is impossible to do the sort of direct comparisons which are made possible by the King’s and Durham study*.

The latest research carries a slight caveat, in that it is not until a further set of tests is carried out this year that the sample of modern-day pupils can be said to be truly representative. However, the indication so far is that the 2008 pupils slightly over-represented the higher end of the ability range, suggesting that, if anything, the findings so far make today’s pupils look slightly better at maths than they actually are.

The authors conclude:

“There is no evidence for significant improvement, or significant deterioration, of standards between 1976/7 and 2008.

“Although performance in some areas has improved it looks as if, when all the results are analysed, there will be little evidence for the sort of step-change in mathematical attainment which might be suggested by the claimed improvements in examination results.”

“Secondary students’ understanding of mathematics 30 years on”, by Jeremy Hodgen, Dietmar Küchemann, Margaret Brown (all King’s College London) and Robert Coe (University of Durham), was presented at the British Educational Research Association conference today (September 5).

* Because the GCSE papers have to change every year, examiners have to decide where to set grade boundaries in order to maintain standards from year to year. And there have been some suggestions, by very experienced people in this field, that this can lead to a gradual reduction in standards over the years. (For more on this, see chapter 13 of Education by Numbers).

- Warwick Mansell

No Comments
posted on September 7th, 2009