Media Releases

Wednesday, 18 Sep, 2024

Duke-NUS spearheads global initiative to standardise generative AI ethics assessments in healthcare

An international consortium led by Duke-NUS Medical School introduces an ethics checklist for the systematic and standardised assessment of research involving generative AI technologies in healthcare settings, enhancing accountability.

 

In just two months after its launch in 2022, ChatGPT garnered millions of users worldwide [1], signalling a surge in the uptake of generative artificial intelligence (GenAI) globally [2]. Users in the healthcare sector recognise the potential of the rapid rise, evolution and adoption of such tools, but they also face a critical need for a standardised ethical framework. Led by Duke-NUS Medical School, an international team of researchers has reviewed the existing ethical discourse on healthcare GenAI applications and crafted a pioneering ethics checklist to standardise and streamline ethical decision-making.

Senior Research Fellow Ning Yilin and Associate Professor Liu Nan from Duke-NUS Medical School’s Centre for Quantitative Medicine, first and senior authors of the study published in The Lancet Digital Health, discuss their team’s findings and explain why and how the Transparent Reporting of Ethics for Generative AI (TREGAI) checklist is an essential tool for all those working in the field. 

Senior Research Fellow Ning Yilin (left) and Associate Professor Liu Nan highlight the gaps in the existing ethical discourse on GenAI applications in healthcare that informed their new checklist // Credit: Norfaezah Abdullah, Duke-NUS

Q: Why was this review on ethical discourse on GenAI application in healthcare necessary?

A: While GenAI has exciting potential, ethical concerns have arisen because of its capability to generate realistic text-based and visual content, such as medical reports and images. Our motivation was simple—to find out if there were any existing ethical guidelines or suggestions on GenAI application in healthcare and present the available information systematically.  


Q: How did you conduct the scoping review? What kinds of material were you interested in?

A: We searched for English articles in databases using search terms associated with the concepts “AI ethics”, “generative AI” and “healthcare”. We were interested in peer-reviewed and full-length research papers that were healthcare-specific and discussed ethics in GenAI application in the healthcare sector.

While analysing the articles, we looked at whether GenAI was the cause of ethical issues, if solutions to problems were proposed, and whether they had in-depth discussions on ethics or just brief or general descriptions. We then categorised ethical issues into nine overarching ethical principles that were relevant to AI ethical guidelines and AI use in healthcare.

 
Q: What were some key findings of your review?

A: We identified four gaps in current discourse on ethics and GenAI implementation in healthcare settings: 

  • There is a lack of solutions for the ethical problems that come with GenAI use. Regulations and guidelines are insufficient, as interpreting a broad ethical principle can be challenging. The complexity of the methods and technology, and their rapid development, can also make it difficult to mitigate ethical issues even when regulations are in place.  
  • There is not enough discourse on ethical problems stemming from GenAI methods beyond large language models (LLMs) such as ChatGPT. An example would be generative adversarial networks (GANs), which are used to generate medical research data such as medical images.  
  • A common reference for ethical discussions in GenAI research is missing. Most of the articles we reviewed were only interested in selected issues, such as privacy. We also found that authors may have varying definitions of ethical keywords and may exclude certain keywords without explanation. 
  • There is insufficient discussion on multimodal GenAI, such as GANs that can generate both X-ray images and radiology reports. Due to their complexity and extended capabilities, multimodal GenAI methods can potentially cause more ethical problems should they be widely implemented.  

Having identified these gaps, we felt that it was necessary to have a tool, the TREGAI checklist, that would allow for consistent and efficient ethical discussions when implementing GenAI.


Q: So tell us more about the checklist. How does it address the four gaps mentioned earlier? 

A: Designed to help users conduct systematic and standardised ethical assessments in research involving GenAI, the checklist is based on nine ethical principles: eight widely accepted ones (accountability, autonomy, equity, integrity, privacy, security, transparency and trust) plus beneficence, a principle we also deemed important. The checklist encourages transparent documentation of ethical issues and facilitates reviews by ethicists, whom we encourage users to work with for deeper ethical assessments.

As far as we are aware, our checklist is the first attempt at creating a practical solution to the ethical issues raised in the papers included in our review. While the checklist will not close all these gaps, it is a tool that can mitigate ethical concerns by guiding users in making comprehensive ethical assessments. 


Q: Who will find the checklist useful?

A: We developed the checklist primarily for members of the scientific community, such as scientific journal publishers, Institutional Review Boards (IRBs), funders and regulators.

For example, researchers using LLMs can use it to check whether they have fully considered and addressed the relevant ethical implications of their project. Journal publishers, in turn, can use it to efficiently check submissions for ethical issues that have not been addressed.


Q: Is the checklist only applicable in research-related settings or can it also be used elsewhere?

A: The checklist can also potentially be used to evaluate the benefits, limitations and risks of other kinds of GenAI-generated content, including social media posts and teaching materials. However, adjustments will have to be made before it can be used in other settings.  

As GenAI adoption becomes more widespread, the checklist may also increase the general public’s awareness of ethical issues that have arisen because of it.

Q: As GenAI is expected to continue evolving, how can we be sure that the checklist will remain relevant in the future?

A: We maintain the TREGAI checklist as a live online document so that we can roll out timely updates when there are new ethical principles to include, changes to recommended courses of action, or developments in GenAI guidelines and regulations.


For media enquiries, please contact Duke-NUS Communications.

References:

[1] This is based on a UBS study cited in Reuters: https://www.reuters.com/technology/chatgpt-sets-record-fastest-growing-user-base-analyst-note-2023-02-01
[2] This is based on a 2024 survey by McKinsey: https://www.mckinsey.com/capabilities/quantumblack/our-insights/the-state-of-ai
