‘Use of AI in research widespread but distrust still high’ – OUP

Poll finds most researchers are using AI but only a tiny number trust technology companies on data privacy

May 23, 2024

Three-quarters of researchers routinely use artificial intelligence but only a tiny proportion trust technology companies not to reuse their data without permission, a major poll of scholars has revealed.

New research by Oxford University Press (OUP), which surveyed more than 2,300 researchers, found that 76 per cent use some form of AI tool in their research, with machine translations and chatbots cited as the most popular tools, followed by AI-powered search engines or research tools. AI is most used for discovering, editing and summarising existing research, the report found.

However, only 8 per cent of researchers trust that AI companies will not use their research data without permission, while just 6 per cent believe AI companies will meet their data privacy and security needs, according to the study, published on 23 May.

The study comes amid widespread concern in the publishing industry that AI tools are lifting and reproducing academic texts in a different format without proper attribution – a practice that publishers fear will undermine long-established copyright and intellectual property norms for journals and scholars.

According to the OUP study, about three in five researchers feel that the use of AI in research could undermine intellectual property and result in authors not being recognised appropriately for the use of their work.

The poll, which drew responses from different disciplines, career stages and from countries across the world, found that 25 per cent of researchers believe AI will reduce the need for critical thinking and could damage the development of these fundamental skills for the future.

David Clark, managing director of OUP’s academic division, said the research will help the university publisher to “understand how researchers are thinking about generative AI and its use in their work”.

“As these technologies continue to rapidly develop, our priority is in working with research authors and the broader research community to set clear standards for how that evolution should take place,” said Mr Clark on what he called a “fast-moving, complex area”.

"We are actively working with companies developing LLMs [large language models], exploring options for both responsible development and usage that will not only improve research outcomes, but also recognise the vital role that researchers have – and must continue to have – in an AI-enabled world,” he added.

The poll also asked researchers about institutional attitudes towards AI, with almost half (46 per cent) of researchers saying that the institution they work at has no AI policy.

jack.grove@timeshighereducation.com

