What if we are not the accurate judges that we think ourselves to be? What if our natural biases get the better of us, even when we don’t expect them to do so? Does it mean we are not really very good at assessing the work of our own students?

Over these last few weeks, thousands of teachers across the country have been beset by the albatross of coursework, ensuring controlled assessments are complete, and generally being ground down in the heavy mill of internal assessment. It can be gruelling, with inspirational moments of students surging to heights they never thought they could achieve, alongside other students plumbing depths you hoped they wouldn’t live down to, but still do. Mostly it is plain gruelling!

Don’t get me wrong. I think internal assessment has an important place in our curriculum and I think we should aim to improve it rather than cast it out with criticism of its validity. Not only that, there is ample evidence that external assessment is equally flawed and subject to gross human error.

I do think that with a greater awareness of our biases, and the adjustments that awareness makes possible, we can improve our assessment judgements. Knowing about our natural inclination to apply the ‘halo effect’ can be useful. The ‘halo effect’ is one of a veritable army of cognitive biases that attend our thinking, most often despite ourselves. It describes our natural inclination to let our overall impression of someone, their likability, their attractiveness or their success, colour our judgement of everything else they do.

We typically attribute the ‘halo effect’ to celebrities or politicians, but we also apply it to our colleagues and even our students. In his superb book, ‘Thinking, Fast and Slow’, Daniel Kahneman describes how the sequence of his marking as a professor was influenced, and biased, by the first essay a student completed. Each student’s subsequent essays were pegged to the success of their first. Surely, if their first essay was so good, their ambition in subsequent essays was well placed (even if it wasn’t); they should probably be given the benefit of the doubt; they were likely to get a better grade than a peer who had developed a poor reputation.

Kahneman’s remedy was not to mark an individual student’s essays in one continuous sequence, but to mark the first essay from every student, making a brief note of each mark, before moving on to the next essay. His sense of certainty in assessing subsequent essays was lost. Kahneman recognised that he had developed an uncomfortable degree of uncertainty, but that this deliberate difficulty made for better, and more objective, assessments.
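For readers who like to see the mechanics, Kahneman’s reordering amounts to marking question by question rather than student by student. A minimal sketch, with entirely invented student names and scripts, might look like this:

```python
# Hypothetical scripts: each student has answered three essay questions.
# Names and essay labels are invented purely for illustration.
scripts = {
    "Student A": ["A: essay 1", "A: essay 2", "A: essay 3"],
    "Student B": ["B: essay 1", "B: essay 2", "B: essay 3"],
}

# Halo-prone order: all of one student's essays in a single sequence,
# so the first mark colours every mark that follows.
by_student = [essay for essays in scripts.values() for essay in essays]

# Kahneman's order: every first essay, then every second, and so on,
# noting each mark before moving on to the next question.
by_question = [essays[q] for q in range(3) for essays in scripts.values()]

print(by_question)
# ['A: essay 1', 'B: essay 1', 'A: essay 2', 'B: essay 2', 'A: essay 3', 'B: essay 3']
```

The content being marked is identical either way; only the order changes, and with it the opportunity for one strong or weak answer to set expectations for the rest.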

What if we could more consistently randomise our internal assessments? I considered that A level students could type essays and then create a short code that they could recognise as theirs, rather than state their name, and I could mark in truly randomised fashion. I wouldn’t be able to rest on my laurels of lazy expectation. It would be harder, but more accurate. Guess what? I didn’t do it. I didn’t find the time and I didn’t fancy losing face either. But the mere consideration got me thinking about the grades or marks I attribute and my biases when assessing the work of my students.
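The scheme I had in mind is simple enough to sketch in a few lines. The names, code format and seed below are hypothetical, a minimal illustration rather than a tested workflow:

```python
import random

# Hypothetical class list; names are invented for illustration.
students = ["Aisha", "Ben", "Chloe", "Dev", "Ella"]

random.seed(42)  # fixed seed only so the example is reproducible

# Each student is issued a short, unique code they can recognise as theirs;
# the teacher keeps the name-to-code mapping sealed until marking is done.
unique_numbers = random.sample(range(100, 1000), len(students))
codes = {name: f"S{n}" for name, n in zip(students, unique_numbers)}

# Essays are submitted under the code alone, with no name attached...
essays = [(code, f"essay submitted under {code}") for code in codes.values()]

# ...and marked in a truly randomised order.
random.shuffle(essays)
for code, essay in essays:
    print(code, "->", essay)  # the marker sees only the anonymous code
```

Nothing about the marking itself changes; the point is simply that neither the student’s identity nor a predictable order is available to prop up lazy expectation.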

We are, of course, under pressure for our students to perform and succeed. This places us under a sometimes intolerable stress that inevitably biases our thinking. Add to that our assumptions about students based on their targets. What teacher has not felt the pressure to give an A grade student at least a B when they were realistically languishing at a C grade? Factor in our natural ranking of students in our classes, our all too human liking for industrious students who deserve to do well, and the opportunities for error are legion.

What to do? Perhaps randomised marking is a step too far, but I may well trial it next year. Good quality departmental moderation helps of course. Doing our level best to take the ‘halo effect’ out of our assessment methodology would be useful. Foregrounding the daily biases that cling to our thinking should give us pause.

Related Reading:

Take a look at this excellent post by Christina Milos on assessment bias – it is a great list! See here.