Two healthcare AI innovators delve into the ethical, technological and personal dimensions of integrating AI into medicine
 
In conversation with

Credit: Courtesy of Xiao Liu; Norfaezah Abdullah, Duke-NUS


It is hard to imagine that less than ten years ago, pursuing a career in healthcare AI demanded as much courage as curiosity, and in some cases a steely resolve. It was definitely not for the faint of heart.

Fast forward to today, and the healthcare ecosystem is on the cusp of a revolution as AI’s role has matured from mere data handling to becoming a trusted cornerstone in personalised care.

To be ready for that, Daniel Ting, an associate professor with Duke-NUS, chief data and digital officer, Singapore National Eye Centre, and director of SingHealth’s AI Office; and Xiaoxuan (Xiao) Liu, an associate professor in AI and Digital Health Technologies and honorary clinician-scientist at the University of Birmingham and University Hospitals Birmingham NHS Foundation Trust, have joined an international effort to develop a consensus around the ethical use of AI in healthcare, called Collaborative Assessment for Responsible and Ethical AI Implementation (CARE-AI).

Nicole Lim, MEDICUS’ senior editor, speaks to them to find out more.

 

MEDICUS: Thank you for joining us. Most of us grew up in a time before AI, so what drew you to this field?

Xiao Liu: I kind of stumbled into it. I was doing a PhD at the time looking at a diagnostic test on eye imaging. And around that time, which was 2017, 2018, there were tonnes of really big papers being published on AI in big journals like Nature and JAMA. I noticed that they had lots of methodological issues, because those issues were the focus of my own work. I started looking through the papers, scrutinising them a bit more, speaking to others, and then ended up doing a big piece of work looking at the diagnostic accuracy of AI in medical imaging. Because it was the first meta-analysis of diagnostic accuracy, it ended up getting a lot of attention. Our main message was: Look at all these methodological flaws, we can’t trust these results. That generated a lot of subsequent work that I went on to do around reporting standards, including developing the SPIRIT-AI and CONSORT-AI guidelines for AI clinical trials, and so on.

Even though my route of entry was around noticing problems, I was really excited about the technology and what it could do. And on a personal level, working on AI was a big step in terms of moving into a whole other discipline, with its own language, concepts and methods, and also its own culture. What I loved the most, and still love the most now, is that interdisciplinary collaboration.

Dan Ting: It really was by chance. First, when pursuing my PhD in Australia, I had a great mentor who had a strong focus on ophthalmic innovation, and I decided to explore a project in which we combined portable retina cameras with the necessary software and basic medical imaging analytics to help screen for diabetic retinopathy. Then, I came to Singapore in 2012, and I was tasked with leading the AI lab in late 2013 or so, prior to the pixel-based deep learning era for health.

With older feature-based AI techniques, the AI algorithm’s performance was sub-optimal for several years. In 2014, we saw how the advent of AlexNet revolutionised the computer vision domain and we decided to use convolutional neural networks to train our AI model. We did a first iteration, using VGGNet, and it shocked us. The performance increased from the high 70s to the low 90s. Just to be sure, we re-did the experiments multiple times and the performance was quite consistent. We started to prepare our manuscript around the same time that Google was working on a similar paper. Google, with all their resources and speed, got their first paper published in December 2016 in JAMA, while ours was under review at another journal. And sure enough, the journal rejected us. We’d lost the novelty. That was after we’d addressed some 400 reviewer comments in a week. That was probably the worst moment of my academic career, with many sleepless nights spent working on the rebuttals. And it didn’t end there. I received ten consecutive rejections over the next six months because of the Google paper. I went from thinking we had the world’s greatest technology to multiple failures.

Despite all that, I had to continue because I believed that someone needs to sit between the technical team and the clinical team so that nothing gets lost in translation. Take Taipei 101 as an example. It’s more than just a skyscraper. The architect had to work out how to build a hundred-storey-high tower that can withstand typhoons and earthquakes. And that’s just as true for healthcare AI. For AI to work, you can’t just build an AI algorithm using medical datasets; you need the clinicians to construct the clinical and digital workflows to implement it in real-world settings.



MEDICUS: Dan, how did you pick yourself up?

Dan Ting: Having great mentors helps. During the worst moment of my AI career, I was supported immensely by my mentor, Professor Wong Tien Yin. If he had not been supportive, I think I would not have been able to get to where I am today. To get our paper published in the end, I had to grade more than 200,000 retina images for three different conditions. We compared the AI performance against my grading and the current standard of care. The AI showed human-like performance for all three conditions. So we wrote to Dr Bauchner, JAMA’s then editor-in-chief, highlighting that this paper doubled the sample size and used real-world samples from a national screening programme. We even tested our model against 10 different independent testing datasets from around the world, and it maintained its accuracy.

 

MEDICUS: Can you describe what a healthcare system that uses AI ethically but extensively would look like?

Xiao Liu: Going blue skies, my vision would be to have an AI companion that knows the status of my health, what I value and the kind of healthy life I want to lead. It would understand what my data represents in the context of what we know about human biology, physiology and pharmacology, and all the things we interact with in the environment. And it could generate information that I can use to, for example, modify my lifestyle or seek a certain type of treatment or medical attention, so that I can get closer to that ideal health status.

Dan Ting: I always felt that AI should not be artificial intelligence but augmented intelligence.

For example, if your mom is sick and you bring her to a hospital, do you want her to be seen by an AI robot or a human? Of course, a human. The next question is: can we use the AI robot to augment the patient journey without replacing the human? Is there a use for AI before the consultation and after the patient steps out of the clinic? I think with the advent of generative AI, we’re going to transform how healthcare can be delivered in a very significant way. I see this impacting patient education and even how we communicate with patients, not only in Singapore, but the rest of the world.

 

MEDICUS: AI consumes a lot of energy. Can it be justified?

Dan Ting: I always advise AI researchers not to consume too many GPU resources if you don’t need to. Less is sometimes more. Look at the task that you wish to achieve and select the AI techniques appropriately. If you don’t need to use the large models, don’t. Always use the smallest possible model to achieve your task. That is the only way to use AI responsibly and minimise the carbon footprint worldwide.


Dan Ting’s minimalist philosophy extends from the digital to the real world with a minimalist office // Credit: Norfaezah Abdullah, Duke-NUS


Xiao Liu: We know that AI, and particularly some of the bigger, generative models, costs a huge amount of energy to train and use. What I’m not seeing is mature conversations around the environmental trade-offs versus the potential health benefits. That’s a conversation that needs to start very soon because lots of hospitals are deploying large language models, which means that we will soon be contributing substantially to that environmental burden.

 

MEDICUS: AI is already being used to optimise service delivery, diagnostics and many other aspects of care. But how can it enhance the patient experience beyond offering an efficient service?

Xiao Liu: We’re only beginning to scratch the surface of this question because we haven’t done a tonne of engagement with patients and the public about how they would like to see AI transform their care. We know that patients prefer not to wait, and to have access to more immediate care. We know that that improves outcomes and reduces anxiety, so reducing waiting lists seems like a good goal. But I think we could do broader engagement around reimagining our health system. I would love to see more public engagement work on that.

 

MEDICUS: What kind of application of AI in healthcare do you foresee will have the biggest impact?

Dan Ting: Worldwide, the number of healthcare practitioners cannot keep up with demand, so how can we use AI to close the gap? I think in resource-rich countries, healthcare practitioners need AI solutions that will increase work productivity. The painstaking, repetitive tasks, like organisation and summarisation of clinical notes, can be done by generative AI. Patient communication is another area. Training an AI agent to be a clinic manager to plan the resources, manpower and financial projections—there is a lot of work being done in this space because the return on investment is huge.

Whereas if you look at low- to middle-income countries, there, the benefit comes from autonomous AI that increases access to care. But the ‘big but’ is that these countries need to have enough resources to cater and care for patients who are picked out by the AI for further care. Otherwise, we go against the basic principle of do no harm.

 

MEDICUS: Will it eliminate global health disparities?

Xiao Liu: If we do it intentionally, yes. But it’s not an automatic yes. If we only deploy AI in the most digitally mature hospitals where there are large investments or capabilities in terms of manpower, and technology, then we will do the opposite. If we stay cognizant of the ways in which technology like AI can help rebalance health inequalities and access issues, then we can kind of strategise in that way. But it has to be intentional, it’s not an automatic thing inherent to the technology.

Dan Ting: To close the gap, we can’t just invest in AI. We need to invest in the ecosystem so that the ecosystem is AI-ready and its implementation makes care more accessible, affordable and more efficient.

 


MEDICUS: Can you share a specific example where AI has made a significant, positive impact on healthcare?

Xiao Liu: One of my favourite examples is the big diabetic eye screening programme in Singapore. That huge programme has been so thoughtfully done, with a tonne of evidence to support it around accuracy, outcomes and health economics. There aren’t many other programmes elsewhere in the world with that level of evidence, as far as I’m aware.

Dan Ting: Selena+ was developed to transform the workflow in the screening centre. Before Selena+, patients with diabetes would go to a polyclinic for their annual diabetes health check, including an eye screen. For this, the patient sits in front of the retina camera, which will take a picture. This is then sent to a reading centre, where a person trained to grade these pictures evaluates the image. If the image is deemed abnormal, a second, more experienced grader reviews the image to confirm the assessment.

So it took two humans in the loop to decide if a patient should be referred to a specialist centre like the Singapore National Eye Centre.

With Selena+, we’re getting the AI to perform the first grading. If the AI deems the image abnormal, a senior grader reviews the AI’s assessment before deciding whether this patient should be referred. After assessing the health economics of Selena+, we chose a semi-automated approach.

Given that we don’t want AI to miss stuff, we accept a possible increase in false-positive assessments. Nevertheless, the human eye is very good at spotting what’s normal and what’s not. When we combined the strengths of the human intelligence with our AI solution, that yielded the most cost-effective implementation.

This also offers better career opportunities for our graders as their job scopes are adapted. Now, the junior graders are trained to review those images flagged by AI as abnormal, while the senior graders are upskilled to perform more managerial tasks. 
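The semi-automated workflow Ting describes (the AI performs the first grade, and a senior human grader reviews only the images the AI flags) can be sketched as a simple triage rule. This is a hypothetical illustration only: the function names, scores and thresholds below are assumptions for the sake of the sketch, not Selena+’s actual implementation.

```python
# Hypothetical sketch of the semi-automated triage described above.
# The AI performs the first grading of every image; a senior human
# grader reviews only the images the AI flags as abnormal.

def ai_grade(score: float, threshold: float = 0.3) -> str:
    """First-pass AI grading of an image's abnormality score.
    A deliberately low threshold favours sensitivity: a false
    positive is acceptable, a missed case is not."""
    return "abnormal" if score >= threshold else "normal"

def triage(score: float, senior_grader) -> str:
    """AI screens everything; a human confirms AI-flagged cases."""
    if ai_grade(score) == "normal":
        return "no referral"  # cleared by the AI alone
    # AI flagged the image: the senior grader makes the referral call
    return "refer" if senior_grader(score) else "no referral"

# Example with a stand-in senior grader who refers clear-cut cases
decision = triage(0.8, senior_grader=lambda s: s >= 0.5)
print(decision)  # prints "refer"
```

The division of labour mirrors the cost-effectiveness argument above: the AI trades some specificity for sensitivity so nothing is missed, and the human grader restores specificity on the much smaller flagged subset.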



If AI were to take over performing most tasks, Dan Ting says that driving would be the one thing he would miss the most // Credit: istock.com / vesnaandjic

MEDICUS: You’re part of a new group that’s trying to put together a consensus around the ethical use of AI in healthcare. Can you tell us more about your motivation to be a part of that?

Xiao Liu: The thing that I love about the CARE-AI concept is that it has a focus on implementation. A huge amount of attention over the last decade has been on developing the best tool, achieving the best benchmark, having the biggest dataset, the biggest models; and meanwhile, once these tools are deployed into a somewhat imperfect health system, we can’t achieve the results that we were hoping for.

So what I love about CARE-AI is that it focuses on that distal end of the problem. And it's about how we can ethically use prediction models at that implementation phase, so that we can ensure we achieve the outcome that the tool theoretically is setting out to achieve.

Dan Ting: CARE-AI brings together physicians, ethicists, regulators and technical experts to see how we can shape a very safe environment within the healthcare space, so that any AI healthcare algorithm can be assessed using our recommendations as a reference framework to make at least a minimally viable product. 

Leading the CARE-AI consensus

The Duke-NUS AI + Medical Sciences Initiative, or DAISI, which brings together deep expertise in developing ethical AI for trustworthy healthcare applications as well as quantum computing for drug discovery, will provide the institutional support for the CARE-AI consensus study. Writing in Nature Medicine, the CARE-AI consensus team set out their aim: to develop a new assessment tool to “provide recommendations to promote the implementation of fair, trustworthy and ethically responsible AI prediction models to improve health outcomes”.

Senior author Liu Nan, director of DAISI, said: “Given the scope of the study, forming the consensus will likely take about a year, as it requires significant effort and collaboration across the globe. Once developed, the CARE-AI consensus will serve as a guide for real-world implementation, integrating ethical considerations into the use of AI in clinical settings. It will address various stages of medical decision-making and account for diverse factors such as disease conditions and geo-economic contexts.”

MEDICUS: Why now?

Dan Ting: ChatGPT 3.5 wowed the world 18 months ago. Fast forward to now, there’s a tsunami’s worth of gen AI algorithms and something has to be out there to provide initial guidance and a framework. This first version is important because we are addressing the technology that is already out there, but these guidelines will have to continue evolving with the technology, so that we can be comprehensive and ensure the safe and responsible use of AI.

Xiao Liu: The more that we can embed ethics into best practices at every stage of the life cycle, the better. Too often, ethical considerations are an add-on to everything else. Embedding these considerations into each stage of the life cycle means we are considering them from the get-go.

Xiao Liu likes to use ChatGPT to construct recipes using whatever she has in the fridge to help her reduce food waste. // Credit: istockphoto.com / bluecinema

MEDICUS: We’ve talked about some weighty considerations and great opportunities for the healthcare system. But on a more personal note, what’s your go-to AI tool?

Dan Ting: ChatGPT and Meta AI. I like to countercheck their answers.

Xiao Liu: I also really like using Suno AI, a music generator. You can prompt it to generate music in a mixture of genres, give it a topic or some lyrics to start with, and it will generate an entire song. When my team hold events, I have previously prompted it to write a song about the event, and then I put it on as people are walking in.

 

MEDICUS: If you had an AI sidekick, what would you get it to do?

Xiao Liu: I would love to have an AI sidekick to help me reduce waste in every part of my life. Help me reduce energy wastage, work out what I can recycle, which journeys I can avoid, and even maybe help me optimise my sleep.

 

MEDICUS: What’s your favourite sci-fi portrayal of AI?

Dan Ting: The movie “Her”, where AI is a social companion.

Xiao Liu: Westworld’s Maeve Millay. She’s probably one of the most emotionally complex characters as she starts to develop awareness and begins to work herself out of these loops that are created within her narrative.


MEDICUS: What’s the biggest misconception about AI?

Dan Ting: AI is still not the solution for most things.

Xiao Liu: I hope this will date soon because it won’t be a misconception anymore, but I still hear that we should be worried that AI is continuously learning once it’s deployed. As it stands, there are no examples of AI as a medical device that is constantly learning live and changing in a way that has no human oversight. Yes, we can update AI software, but as it stands it is still being done in a monitored way where we can test it, re-test it, and revalidate it. This is also done in a controlled way through regulation, such as predetermined change control plans.

 

Have a question? Send it in and it may be answered in the next issue of MEDICUS!

ASK MEDICUS

MEDICUS: How do you relax?

Dan Ting: Spending time with my wife and two kids, aged seven and two, on weekend outdoor adventures. For the past two and a half years, I have also put a lot of thought into fitness, healthy longevity and blue zone activities. On two or three weekdays, I protect an hour at lunchtime to hit the gym. I find that keeps my mind sharp and removes my tiredness (it also helps with jet lag when I travel).

Xiao Liu: This year, I started dancing, taking up salsa and bachata, two Latin dance styles. That takes my mind off AI and work entirely, which is wonderful. It’s lots of improvisation and really good fun, and super social.

 

MEDICUS: What advice do you have for medical students?

Dan Ting: I think they are learning medicine in an exciting era with AI. The sky is the limit with AI, although they all need to be trained to use AI safely and responsibly in patient care. Keep your eyes on the stars, and your feet on the ground – a quote by Theodore Roosevelt that I always like to share with them.

Xiao Liu: They’ll have to learn new skills. But these are great skills to have, and will ultimately make us better clinicians. As Eric Topol says, it will free up more time for us to do the things that clinicians value: interacting with patients and caring for the human side of medicine.

 

MEDICUS: Thank you both so much for sharing your insights and personal experiences with us.



Get the latest news and features delivered to your inbox.
SUBSCRIBE TO MEDICUS