Chinese facial recognition scholar ‘ignored questions, went home’

AI expert re-emerges at top Chinese university as former employer finds Uighur study breached Australian research code

September 15, 2021

A Chinese academic at the centre of concerns about the use of Australian research has returned to his homeland, amid findings that he failed to obtain ethical approval for research into facial recognition of Chinese minorities.

Liu Wan Quan, an artificial intelligence expert who taught at Perth’s Curtin University for more than two decades, featured in a 2019 ABC Four Corners exploration of Australian research contributions to China’s surveillance of Uighurs.

Dr Liu co-authored a 2018 paper on “Facial feature discovery for ethnicity recognition”, published in the journal Data Mining and Knowledge Discovery. The paper suggests that analyses of T-shaped regions of people’s faces – the eyebrows, eyes and nose, for example – are “quite effective” for distinguishing ethnicity but unsuitable for general face recognition.

The study was based on facial images of 300 Uighur, Tibetan and Korean students at Dalian Minzu University in northern China. The paper does not explain the purpose of the research, but says that racial analysis based on facial images is a “popular topic” with potential applications in border control and public security.

Curtin reviewed the approval procedures for the study and said Dr Liu had not responded to some of its questions. It has now found that he breached the Australian Code for the Responsible Conduct of Research by failing to provide evidence that he had obtained ethical approval or the students’ informed consent for the use of the images.

In a letter to University of Leuven bioinformatics professor Yves Moreau, who has expressed concerns about the study, Curtin said that Dr Liu had also breached the code by claiming co-authorship for the paper even though he “was only involved in the research technically”.

The letter says that Dr Liu has “resigned” from Curtin and is now a professor at Sun Yat-sen University in Guangzhou. Dr Liu’s LinkedIn page says he left Curtin in May and joined Sun Yat-sen the same month.

The university said that its concerns were not limited to approval procedures. “Curtin University unequivocally condemns the use of artificial intelligence, including facial recognition technology, for any form of ethnic profiling to negatively impact or persecute any person or group,” the institution said.

IPVM, a Pennsylvania-based organisation that reports on video surveillance, said “Uighur-recognition technology” was widely used in China. “The People’s Republic of China also harshly represses Tibetans and regularly tracks and deports North Korean refugees,” it added.

Professor Moreau, who campaigns against the “creeping development of mass surveillance technology”, said that computer scientists should think about the parameters of “acceptable” research. “Do we really need…models that [track] groups like Tibetans and Uighurs?” he blogged.

But the journal’s publisher, Wiley, defended the paper after concerns were raised in 2019. “This article is about a specific technology and not an application of that technology. It bridges artificial intelligence and physical anthropology, and contributes to this specific body of scientific literature,” Wiley said.

Curtin said that it had asked Wiley to retract the 2018 paper “multiple times”. A different paper based on the same data set has been retracted by the journal IEEE Access.

The episode could add to Canberra’s jitters about Australian research contributing to offshore repression. New guidelines on overseas partnerships are expected to be released this month, while the sector is also awaiting advice on agreements to be vetoed under the Foreign Relations Act.

john.ross@timeshighereducation.com


Reader's comments (1)

“Curtin University unequivocally condemns the use of artificial intelligence, including facial recognition technology, for any form of ethnic profiling to negatively impact or persecute any person or group,” the institution said. They must know full well that AI research will be used for this type of application and probably even more dubious ones. With advanced robotics someone will create a terminator and all the ethical "controls" in the world will not stop this.