Ethicist warns universities against using AI in admissions

Algorithms may simply lead to ‘self-fulfilling prophecies’ and do not give reasons for their decisions, Oxford researcher warns

September 20, 2019

Using artificial intelligence to decide which students to admit or researchers to hire risks creating “self-fulfilling prophecies” that could simply entrench which kinds of people win opportunities, an expert in digital ethics has warned universities.

AI in admissions has recently climbed universities’ agendas: the president of Imperial College London, Alice Gast, said last year that she expected AI to “augment” the process, while several Hong Kong universities have said that they are using the technology to identify the student characteristics that predict future success.

But speaking at a major conference on AI hosted by the University of Oxford on 18 September, Carissa Véliz, a research fellow at Oxford’s Uehiro Centre for Practical Ethics, said that she had several worries about the technology being deployed in education.

It is a “big concern” that AI is being used to “assess people, whether it’s professors, teachers, students, and to filter candidates”, she told Times Higher Education.


For example, an AI system might analyse data on researcher career trajectories and find that people who did a PhD at certain universities had more success in the future.

On this basis, the AI might conclude that it made sense to award grants to applicants from those universities over others.


But this risked simply reinforcing patterns that already exist, Dr Véliz warned.

“We will never know whether the postdoc who did not receive a grant might have become a successful academic,” she argued. “As long as predictions are used to allocate resources and opportunities, the risk of self-fulfilling prophecies seems inevitable.”

Another concern is that algorithms use proxy data – which could be as arbitrary as where people live, or their Facebook friends – to predict how well someone will fare in the future, she said.

Advocates for using AI in admissions argue that machines are less prone to the cognitive biases that could unfairly sway the decision of a human admissions officer.

But some types of AI, such as those based on so-called neural networks that mimic the human brain, are seen as “black boxes” because it can be unclear why they have made a particular decision. “When the algorithm recommends someone, or brands someone as risky, we may not know why that is,” Dr Véliz said.

Humans have to justify their decisions with reasons, she pointed out, but with algorithms, “reasons are missing”.

Just as with new drugs, there should be randomised controlled trials to see what impact AI systems have on the distribution of opportunities, she argued, before they are “let loose on the world”.

Dr Véliz also took aim at universities introducing what in some cases might be “tech for the sake of it”.


She asked: “When we introduce tech into universities and education, are we doing it for the benefit of students, and are the benefits really worth the risk? And what are the alternatives?”

“Sometimes, low tech is surprisingly robust, and cheaper, and safer. If you think about books as a technology, they are incredibly robust, and much more so than any kind of digital tech that is glitchy, and has security issues and so on,” Dr Véliz said.


For example, the filming and recording of lectures is a form of “surveillance” that “diminishes creativity and independent thinking”, she added. “When I lecture in university classrooms where there are cameras and microphones, there is typically less debate on sensitive issues, for instance. No one likes to be on record exploring tentative ideas.”

david.matthews@timeshighereducation.com

POSTSCRIPT:

Print headline: AI in admissions is a ‘big concern’



Reader's comments (3)

The use of AI in recruitment of any type needs to be done with caution. Machine learning code and algorithms are written by individuals or groups of humans, so some form of "bias" will still exist. The same is true of psychometric tests. The results need to be evaluated by humans to see whether they are producing the "right results" in terms of the people recruited. Take a look at dating apps, which are based on similar technology. Matching people for relationships, or candidates for jobs, is a difficult task for any technology.
I feel transparency is needed in the admission processes everywhere. Institutions must decide how.
Algorithms aren’t just helping to orchestrate our digital experiences but increasingly entering sectors that were historically the province of humans—hiring, lending, and flagging suspicious individuals at national borders. Now a growing number of companies, including Salesforce, are selling or building AI-backed systems that schools can use to track potential and current students, much in the way companies keep track of customers. Increasingly, the software is helping admissions officers decide who gets in.
