Credit: Courtesy of Xiao Liu; Norfaezah Abdullah, Duke-NUS
It is hard to imagine that, less than ten years ago, pursuing a career in healthcare AI required as much courage as curiosity, in some cases even a steely resolve, and was definitely not for the faint of heart.
Fast forward to today, and the healthcare ecosystem is on the cusp of a revolution, with AI's role maturing from mere data handling into a trusted cornerstone of personalised care.
To be ready for that, Daniel Ting, an associate professor with Duke-NUS, chief data and digital officer at the Singapore National Eye Centre and director of SingHealth's AI Office, and Xiaoxuan (Xiao) Liu, an associate professor in AI and Digital Health Technologies and honorary clinician-scientist at the University of Birmingham and University Hospitals Birmingham NHS Foundation Trust, have joined an international effort to develop a consensus on the ethical use of AI in healthcare, called the Collaborative Assessment for Responsible and Ethical AI Implementation (CARE-AI).
Nicole Lim, MEDICUS’ senior editor, speaks to them to find out more.
MEDICUS: Thank you for joining us. Most of us grew up in a time before AI, so what drew you to this field?
Xiao Liu: I kind of stumbled into it. I was doing a PhD at the time, looking at a diagnostic test based on eye imaging. Around then, 2017 to 2018, there were tonnes of really big papers on AI coming out in major journals like Nature and JAMA. I noticed that they had lots of methodological issues, because I was grappling with exactly those issues in my own work. I started looking through the papers, scrutinising them a bit more, speaking to others, and ended up doing a big piece of work looking at the diagnostic accuracy of AI in medical imaging. Because it was the first meta-analysis of its kind, it got a lot of attention. Our main message was: look at all these methodological flaws; we can't trust these results. That generated a lot of the subsequent work I went on to do around reporting standards, including developing the SPIRIT-AI and CONSORT-AI guidelines for AI clinical trials, and so on.
Even though my route in was through noticing problems, I was really excited about the technology and what it could do. And on a personal level, working on AI was a big step in terms of moving into a whole other discipline, with its own language, concepts and methods, and its own culture. What I loved most, and still love most now, is that interdisciplinary collaboration.
Dan Ting: It really was by chance. First, while pursuing my PhD in Australia, I had a great mentor with a strong focus on ophthalmic innovation, and I decided to explore a project in which we combined portable retinal cameras with the necessary software and basic medical imaging analytics to help screen for diabetic retinopathy. Then I came to Singapore in 2012, and in late 2013 or so I was tasked with leading the AI lab, before the pixel-based deep learning era in healthcare.
With older feature-based AI techniques, our algorithm's performance was sub-optimal for several years. In 2014, we saw how the advent of AlexNet had revolutionised the computer vision domain, and we decided to use convolutional neural networks to train our AI model. We did a first iteration using VGGNet, and it shocked us: the performance jumped from the high 70s to the low 90s. Just to be sure, we re-did the experiments multiple times, and the performance was quite consistent. We started preparing our manuscript around the same time that Google was working on a similar paper. Google, with all their resources and speed, got their paper published in JAMA in December 2016, while ours was under review at another journal. And sure enough, the journal rejected us. We'd lost the novelty. That was after we'd addressed some 400 reviewer comments in a week. It was probably the worst moment in my academic career, with many sleepless nights spent working on the rebuttals. And it didn't end there: I received ten consecutive rejections over the next six months because of the Google paper. I went from thinking we had the world's greatest technology to facing failure after failure.
Despite all that, I had to continue, because I believe someone needs to sit between the technical team and the clinical team so that nothing gets lost in translation. Take Taipei 101 as an example. It's more than just a skyscraper: the architect had to work out how to build a 101-storey tower that can withstand typhoons and earthquakes. The same is true for healthcare AI. For AI to work, you can't just build an algorithm on medical datasets; you need clinicians to construct the clinical and digital workflows that implement it in real-world settings.
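For readers curious what the transfer-learning step Ting describes above might look like in practice, here is a minimal sketch of fine-tuning an ImageNet-pretrained VGG16 to classify retinal photographs. The PyTorch framework, folder paths, two-class set-up and hyperparameters are assumptions made for the sketch, not the team's actual code.

# Minimal sketch: fine-tuning a VGG-style CNN for diabetic retinopathy
# screening. Uses PyTorch/torchvision; the dataset folders, two-class
# labels and hyperparameters are illustrative assumptions.
import torch
import torch.nn as nn
from torchvision import datasets, models, transforms

# Standard ImageNet preprocessing, matching VGG16's pretraining
preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

# Hypothetical folder of fundus photographs sorted into two classes,
# e.g. retina/train/no_referable_dr and retina/train/referable_dr
train_set = datasets.ImageFolder("retina/train", transform=preprocess)
loader = torch.utils.data.DataLoader(train_set, batch_size=32, shuffle=True)

# Start from ImageNet weights; replace only the final classifier layer
model = models.vgg16(weights=models.VGG16_Weights.IMAGENET1K_V1)
model.classifier[6] = nn.Linear(model.classifier[6].in_features, 2)

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=1e-3, momentum=0.9)

model.train()
for images, labels in loader:  # one epoch shown for brevity
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()

The shift Ting describes, from hand-engineered retinal features to pixel-based deep learning, is what a set-up like this captures: the pretrained convolutional layers supply generic visual features, and only the final layer has to be adapted to the screening task.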