Press

In the UK, lockdown prevented end-of-year exams, and so governments across the UK decided to award grades based on an algorithm. The result? To cut a long story short: computer said no.

The extent of the issue first became apparent when the Scottish Qualifications Authority (SQA) released its results. Its ‘normalisation’ algorithm took into account factors like teachers’ predictions, mock exam results and schools’ previous performance. 124,000 students had their predicted results downgraded, and this hit students in more deprived areas hardest, particularly hardworking and talented outliers who had been predicted to excel despite difficult circumstances. When A-level results came out across the rest of the UK a week or so later, the same pattern was observed, and the media was flooded with tales of talented youngsters losing prestigious scholarships, industry training places and university places. Both the Scottish and UK governments have since agreed to walk back the algorithmically decided grades and to award the grades estimated by teachers instead. As a result, universities are oversubscribed, and youngsters and parents are still trying to figure out the next steps.

It’s a sorry state of affairs, but an instructive one too. It shows how data, poorly used, can entrench and reinforce systemic bias. It shows how ‘the power of prediction’ can fail exceptional outliers. And it shows how blind faith in ‘an algorithm’ reveals a lack of sophistication. Most pointedly, it reminds us that behind every data point is a human being, struggling through a crisis as well as they can.
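To make the outlier problem concrete, here is a toy sketch in Python. The rule, the 0–100 scale, the function name and the numbers are all invented for illustration; this is not Ofqual’s or the SQA’s actual model, which was considerably more complex. The point is simply that any moderation rule anchored to a school’s past results will, by construction, pull down predicted outliers at historically low-scoring schools:

```python
# Toy illustration only -- NOT Ofqual's or the SQA's actual model.
# It shows how anchoring individual predictions to a school's historical
# results can systematically downgrade high-achieving outliers.

def moderate(predicted_grade: float, school_history: list[float]) -> float:
    """Cap a teacher-predicted grade (0-100 scale, invented for this
    sketch) at the best result the school has achieved historically."""
    historical_ceiling = max(school_history)
    return min(predicted_grade, historical_ceiling)

# A talented outlier at a school that has never scored above 70:
print(moderate(92, [55, 61, 70, 48]))  # -> 70: the prediction is downgraded
# The same prediction at a historically high-scoring school survives:
print(moderate(92, [88, 95, 91]))      # -> 92
```

The same student, with the same teacher prediction, gets a different grade depending on where they went to school, which is exactly the pattern the results revealed.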

Nic Pietersma, Director of Analytics at Ebiquity, said:

“Algorithms are getting a lot of bad PR at the moment, but an algorithm is just a set of instructions or a mathematical routine that needs to be followed. Algorithms aren’t intrinsically good or bad – they should be judged by their usefulness.

In this case, Ofqual seems to have misjudged the legal and political ramifications of downgrading results to the extent that they have. Accepting teacher assessments may have been the lesser of two evils, but it would no doubt also have repercussions elsewhere in the university selection process.

In programmatic marketing we often trust algorithms too much, without anyone in the room having a full end-to-end understanding of what they do with our investment. Our advice to clients is to have some form of validation to regularly ‘kick the tyres’ on the algorithm – we recommend transparent test and control methods.”
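As an illustration of what such a ‘test and control’ check might look like in practice, here is a minimal sketch in Python. It uses a standard two-proportion z-test rather than any specific Ebiquity method, and the group sizes and conversion counts are invented: a held-out control group receives no algorithmic optimisation, and the measured lift tells you whether the algorithm is actually earning its keep.

```python
# A minimal 'test and control' sketch: did the algorithm-managed test
# group convert better than the held-out control group? Uses a standard
# two-proportion z-test; all figures below are made up for illustration.
from statistics import NormalDist

def lift_check(conv_test, n_test, conv_control, n_control, alpha=0.05):
    """Return (lift, p-value, significant) for test vs control."""
    p_test, p_control = conv_test / n_test, conv_control / n_control
    p_pool = (conv_test + conv_control) / (n_test + n_control)
    se = (p_pool * (1 - p_pool) * (1 / n_test + 1 / n_control)) ** 0.5
    z = (p_test - p_control) / se
    p_value = 1 - NormalDist().cdf(z)  # one-sided: test > control
    return p_test - p_control, p_value, p_value < alpha

lift, p, significant = lift_check(conv_test=460, n_test=10_000,
                                  conv_control=400, n_control=10_000)
print(f"lift={lift:.2%}, p={p:.3f}, significant={significant}")
```

Run regularly, a check like this is one way to ‘kick the tyres’: if the test group stops outperforming the control, the algorithm’s value is in question.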


To read the article in full on LBBonline, click here. 

First featured 20/08/2020.
