One of the functions of money, according to economists, is
to act as a unit of measurement. Countless hours of economists’ time are wasted
on constant adjustments to measurements to take account of the changing value of
money – as a result of inflation or deflation. On the other hand, endless hours
of family amusement have been provided over the years by reflections that ‘I used
to be able to buy a Mars Bar for less than 10p’, and so on. On balance, most
would agree that an absence of inflation, and the consistency of measurement
it brings, would be a good thing.
Similarly, in the UK at least, the examination of school
pupils and the grading of their efforts have been subject to a form of
inflation. Are grades in this year’s exams consistent with the same grades in
the same exams in previous years? This kind of longitudinal consistency is
vital if exams are to be meaningful – for example as a guide to the
probability of success in other academic courses, or as a means of comparing
the abilities of those who take their exams in different years.
It has clearly been a mistake to allow a larger and larger
percentage of pupils to gain higher grades since GCSEs were first
taken in 1988. The change in the proportion has been greater than any rise in ‘standards’
in schools, and so a B grade in English Language GCSE in 2011 means something
different from a B grade in the same exam twenty years earlier. Of course, this
has to be balanced against greater accountability in schools, an inspection
regime with teeth, and increased scrutiny of what teachers do in the
classroom. It is possible that more pupils deserve their B grade in English
Language – just because more people have climbed Everest in the last twenty
years does not mean that doing so has got easier, or that Everest has got
shorter.
It is even possible that what is meant by proficiency in
English Language has changed over a twenty-year period – for cultural,
technological and practical reasons. But this is not the point: a B grade (or
any other) should mark a level of proficiency which is intelligible to
those who will interpret it – in education, or in the workplace.
So the solution to two decades of grade inflation is not to
reverse the trend and gently reduce the percentage of pupils gaining higher
grades. To do so would introduce a second layer of complexity into interpreting the
results. Two decades from now, who will remember in which year grade inflation
was reversed, or by how much? Such changes make the nature of the signal given
by the grade even less clear – the measurement becomes even more ‘noisy’. It would
have been much more helpful to freeze the value of the GCSE at its 2011 level,
and to insist that any deviation from that point could only come about because there had
been a significant change in the capability of the educational infrastructure.
This would be the examining equivalent of joining the Euro – or, for historians,
of joining the Gold Standard.
If one does this, how does one discriminate at the top
level? By introducing – once and for all – a new grade. Perhaps an ‘S’ level at
GCSE, for the top 2–3% of candidates, would accomplish this, although it is
only worthwhile if the examinations themselves reward the most able pupils – but
that is a whole new can of worms. The only tenable alternative is to shift to a
whole new grading system on a numerical scale, as Ofqual proposes. On
this score, Ofqual is absolutely right. If the new system is to have longitudinal
integrity, however, there will have to be clear regulatory protection of the
standard required for each grade, so that grade inflation – or its more recent
cousin, grade deflation – becomes a topic in history rather than in politics.