AI + Ethics and Governance


AI Ethics for Responsible Healthcare Research & Clinical Implementation

Yilin Ning, Centre for Quantitative Medicine, Duke-NUS
Julian Savulescu, Centre for Biomedical Ethics, NUS
Michael Dunn, Centre for Biomedical Ethics, NUS
Gary Collins, Centre for Statistics in Medicine, University of Oxford, UK
Daniel Shu Wei Ting, Singapore Eye Research Institute
Roger Vaughan, Centre for Quantitative Medicine, Duke-NUS
Marcus Eng Hock Ong, Singapore General Hospital
Eric J Topol, Scripps Research Translational Institute, CA, USA
Liu Nan, Centre for Quantitative Medicine, Duke-NUS

Despite the potential of AI for healthcare applications and the growing body of AI research and innovation, the implementation of AI technology in healthcare practice is often limited by ethical concerns, e.g., the consequences of wrong predictions and whom to hold accountable in such situations. However, there is currently no recommended best practice for the systematic assessment and subsequent mitigation of ethical issues in AI implementation for healthcare. The need for comprehensive ethics assessment becomes more urgent with emerging generative AI (GenAI), where powerful technology such as ChatGPT is easily accessible to the general public and the healthcare industry to create health-related content, perform administrative tasks, and even inform health-related decisions. Unlike the advanced technological developments, ethical investigation of GenAI is still at an early stage, with more discussion of emerging issues than actionable solutions. Our research aims to identify gaps in current AI and GenAI research for healthcare, and to develop actionable solutions, such as ethics checklists and assessment tools, that foster responsible AI research and clinical implementation.

AI Fairness in Healthcare: Perspectives, Methodology and Applications

Mingxuan Liu, Centre for Quantitative Medicine, Duke-NUS
Yilin Ning, Centre for Quantitative Medicine, Duke-NUS
Mayli Mertens, Dept of Philosophy, University of Antwerp, Belgium
Fei Wang, Weill Cornell Medicine, New York, NY, USA
Daniel Shu Wei Ting, Singapore Eye Research Institute
Leo Celi, Institute for Medical Engineering and Science, MIT, USA
Liu Nan, Centre for Quantitative Medicine, Duke-NUS

The escalating integration of artificial intelligence (AI) into high-stakes fields such as healthcare raises substantial concerns about model fairness. AI biases, which run counter to the principles of fairness and health equity, arise when variables such as age, gender, race, or socio-economic status unjustly influence model-based decision-making. Without a commitment to fairness, AI applications can exacerbate, rather than diminish, health inequalities. Despite extensive methodological developments, however, concerns about AI fairness in clinical contexts have not been adequately addressed. Clinical AI fairness, in particular, requires a high level of contextualization due to the complexity of healthcare settings, meriting multidisciplinary collaboration between AI researchers, clinicians, and ethicists to bridge the gap and translate AI fairness into real-life benefits [1]. The goal of our research is to identify gaps in current clinical AI fairness research, develop interpretable methods that are fair and accurate in decision-making, and implement these models fairly in real-world healthcare.
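As a minimal illustration of how such biases can be measured, the sketch below computes one common fairness notion, demographic parity (equal positive-prediction rates across groups), on binary model outputs. The data, group labels, and function name are all hypothetical and purely illustrative; real clinical fairness assessment involves many more metrics and heavy contextualization, as argued above.

```python
# Hypothetical sketch: quantifying demographic parity on binary predictions.
# All predictions and group labels below are made up for illustration.

def demographic_parity_difference(predictions, groups):
    """Absolute difference in positive-prediction rates between two groups."""
    rates = {}
    for g in set(groups):
        members = [p for p, m in zip(predictions, groups) if m == g]
        rates[g] = sum(members) / len(members)
    a, b = rates.values()
    return abs(a - b)

# Toy model flags 3 of 4 patients in group "A" but only 1 of 4 in group "B".
preds = [1, 1, 1, 0, 0, 0, 0, 1]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]
print(demographic_parity_difference(preds, groups))  # 0.5
```

A value of 0 would indicate equal positive-prediction rates across the two groups; larger values flag a potential disparity that would then need clinical and ethical interpretation in context.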

[1] Liu M, Ning Y, Teixayavong S, Mertens M, Xu J, Ting DS, Cheng LT, Ong JC, Teo ZL, Tan TF, Narrendar RC, Wang F, Celi LA, Ong MEH, and Liu N. A translational perspective towards clinical AI fairness. npj Digital Medicine (2023), 6:172.