The head of Ofqual, the quango which drew up the controversial system for downgrading A-level results, led a study last year which warned that algorithms had the potential to cause ‘real harm’.
Reliance on algorithms – using computers rather than humans to make decisions – could be dangerous when they make ‘important decisions about people’s lives’, it said.
The study was carried out by the Centre for Data Ethics and Innovation (CDEI), which is run by Roger Taylor, who is also chairman of Ofqual.
The CDEI warned there were particular concerns when algorithms were used to ‘make decisions which reinforce pre-existing social inequalities’.
Critics have said that the algorithm Ofqual used to determine last week’s A-level results was biased in favour of private schools and those with smaller class sizes.
Tens of thousands of pupils had their results downgraded because the computer decided their teachers’ predicted grades were too high, compared with their schools’ performance in previous years.
This led to accusations that the algorithm benefited pupils in wealthy areas over those at more disadvantaged schools.
A-level results at private schools improved far more than at state comprehensives.
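The mechanism described above – teacher predictions overridden by a school’s historical results, with small classes exempt – can be sketched in heavily simplified form. Everything here is an assumption for illustration (the function names, the cohort threshold of 15, and the rank-matching rule are all hypothetical); Ofqual’s actual model was considerably more complex.

```python
# Hypothetical, simplified sketch of the moderation critics described.
# Assumption: large cohorts are forced onto the school's historical
# grade distribution, while small cohorts keep teacher predictions --
# the feature said to favour schools with smaller class sizes.

SMALL_COHORT_THRESHOLD = 15  # assumed cut-off, for illustration only


def moderate_grades(teacher_predictions, historical_distribution):
    """Return final grades for one subject at one school.

    teacher_predictions: predicted grades, best pupil first, e.g. ["A*", "A", "B"].
    historical_distribution: grades the school achieved in previous years,
        best first.
    """
    cohort_size = len(teacher_predictions)
    if cohort_size < SMALL_COHORT_THRESHOLD:
        # Small classes keep their teachers' predictions unchanged.
        return list(teacher_predictions)
    # Larger cohorts: the i-th ranked pupil receives the i-th best
    # grade from the school's past results, regardless of prediction.
    return [
        historical_distribution[min(i, len(historical_distribution) - 1)]
        for i in range(cohort_size)
    ]
```

Under these assumptions, a class of 20 predicted straight As at a school that historically achieved Bs would all be downgraded to B, while a class of five with the same predictions would keep their As – the asymmetry at the heart of the criticism.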
On Saturday, Ofqual published guidance for pupils in England planning to appeal against their grades being marked down.
But hours later the guidance, setting out the criteria for youngsters to make appeals on the basis of their mock exam results, was suddenly withdrawn without explanation.
In a brief statement, Ofqual said the policy was ‘being reviewed’ by its board and further information would be released ‘in due course’.
The Mail can reveal that last year the CDEI launched an inquiry into biases in algorithmic decision-making.
In an interim report, Mr Taylor wrote: ‘Artificial intelligence and algorithmic systems can now operate vehicles, decide on loan applications and screen candidates for jobs.
‘The technology has the potential to improve lives and benefit society, but it also brings ethical challenges which need to be carefully navigated if we are to make full use of it.’
The centre’s inquiry considered potential biases in policing, financial services, recruitment and local government – but did not look at schools or exam grading in particular.
The document went on to say: ‘Concerns are growing that without proper oversight, algorithms risk entrenching and potentially worsening bias.
‘Algorithms can be supportive of good decision-making, reduce human error and combat existing systemic biases.
‘But issues can arise if, instead, algorithms begin to reinforce problematic biases, for example because of errors in design or because of biases in the underlying data sets.
‘When these algorithms are then used to support important decisions about people’s lives, for example determining whether they are invited to a job interview, they have the potential to cause serious harm.’
The centre’s final report was due to be published in March, but it has not yet appeared.
Ofqual did not respond to requests for comment last night.