AI in healthcare needs more than speed
And it is up to us to decide what else matters
By Nicole Lim, Senior Editor
 
A collage of images generated by an AI that was given prompts from the story
 

As a child of the 80s, I grew up during the brief revival of The Jetsons, a 60s cartoon TV show about an American family whose lives epitomised the Space Age dream. The Jetsons, living in the latter half of the 21st century, travel by flying car and their housekeeping needs are tended to by Rosey the Robot. In Orbit City, everything is better because of technology, including healthcare: check-ups are done by swallowing a ‘pillcam’ that sends a live feed to the physician’s screen. And the family doctor can be summoned with the push of a button for an instantaneous video consult.

Today—almost 40 years ahead of the show’s imagined 2062 reality—capsule endoscopy is a licensed procedure, with capsules even capable of performing biopsies and delivering drugs. Video consultations, too, have been warmly embraced since the pandemic.

“Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.”

But far from seeing these advances as signs that humanity is on its way to living next door to the Jetsons, leading AI experts believe that we are heading for extinction, and that the time left to adjust course shrinks with the launch of each new AI tool.

With a diminishing workforce under mounting pressure to care for rapidly ageing societies, healthcare is ripe for disruption by AI innovations that promise to maximise efficiency, tailor treatments and accelerate diagnoses.

But are speed, efficiency and convenience what we value the most when we need care?


From stick shift to automatic to self-driving cars: why medicine has to chart a different path

Public discourse on AI—as the statement by technology experts reveals—centres on the assumption that AI will displace humans, as is already happening with cars. And that the displacement happens by stealth, with an increasing number of day-to-day decisions and skills, such as changing gears while driving, being outsourced to an AI.

This kind of outsourcing, even in medicine, can be tempting, admitted Dr Pamela Basilio-Razon, an emergency medicine physician working at the Singapore General Hospital. A relative newcomer to AI, she currently leads a clinical trial to determine the impact that an AI-powered triage tool, called aiTriage™, has on patients who present to the emergency room with chest pain.

Dr Pamela Basilio-Razon, an emergency medicine physician, is enthusiastic about the support AI can offer busy clinicians like her

While she’s enthusiastic about the potential benefit this can bring, particularly to patients whose chest pain is not a symptom of imminent danger, Basilio-Razon is acutely aware of the risk of outsourcing the decision-making: “There’s a danger that you could develop this cognitive laziness and start really relying on the AI tool. I think that’s a real concern.”

For ethicist and moral philosopher Professor Julian Savulescu, this is precisely the point. In writing AI programmes, cut-offs and fixed parameters are essential. A self-driving car will use a universal map to plot the most efficient route to deliver its passengers to their desired destination. Providing good care, on the other hand, requires not only re-drawing the map but adjusting the route around the patient’s wishes and values, while stewarding the resources available.

“AI can’t tell you what you should do. It can compute the probability of that individual dying or the probability of you hitting the truck, but it can’t tell you whether to prefer your life or the pedestrian’s life,” said Savulescu, who heads the Centre for Biomedical Ethics at the National University of Singapore (NUS). “It can’t tell you what the right decision is.”

Reflecting on her experience of using aiTriage™, Basilio-Razon agrees. While it has lent additional confidence to her decision-making, she doesn’t rely on its assessment alone. Particularly for elderly patients, who may have multiple other ailments, a holistic assessment by the clinician is still key, she emphasised.

“At the end of the day, because I am the clinician, I am still responsible and accountable for my patients,” said Basilio-Razon. “So I have to be confident in the decisions I make. And that accountability, I would never outsource to an AI.” 


Another device into which AI has been widely incorporated is the automated external defibrillator, or AED. These devices use AI-powered assessments to tell users when to shock a person in cardiac arrest and when to continue performing cardiopulmonary resuscitation, or CPR.

Thinking a step into the future, Professor Marcus Ong, a fellow emergency physician at the Singapore General Hospital and Director of Duke-NUS’ Health Services and Systems Research Programme, said: “It’s not too far to extend that it could tell you when CPR’s futile. But is that a good thing or a bad thing?”

 

Professor Marcus Ong sees AI as an aid or a decision-support tool for physicians, who should remain at the heart of decision-making. What concerns him more are areas of consumer healthcare that fall outside the regulatory framework developed for medical applications
  

“AI in healthcare is meant to be an aid or decision support. It should never be a replacement for the role of the physician… the human always has to be… at the heart of decision-making.”

Prof Marcus Ong

Stopping aggressive resuscitation when the victim will not regain any brain function may make a death easier to accept for those left behind.

“But then again, if you are making a premature decision to stop when someone has a chance to survive…” he trailed off. “So that’s where the research is interesting and where more data, not less, helps us make better decisions.”

But more data demands ever more processing power.

Dr Nicholas Shannon, a clinician-scientist in the public sector, remembers a project he completed in 2012, during his post-doctoral training in bioinformatics at the Cancer Research UK Cambridge Institute, that included just 80 patients.

“Now, I am analysing datasets of 800,000 patients,” he said. And each patient comes with thousands of data points that are needed to predict whether their tumour will respond to a particular chemotherapy. Computer assistance is essential to spot meaningful patterns in this ocean of information and arrive at a timely diagnosis.

“You’re getting big data but at the end of the day, you’re trying to get something that you can apply to one single patient,” observed Shannon, a Duke-NUS MD programme alumnus (Class of 2017) and skin cancer surgeon.
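As a flavour of what that pattern-spotting involves, here is a minimal sketch, built on an entirely synthetic dataset and an illustrative model choice, not Shannon’s actual pipeline: a classifier is trained on many patients, each described by thousands of data points, and what it produces for a new case is a probability that can be applied to one single patient.

```python
# Minimal, illustrative sketch of prediction over high-dimensional patient data.
# The data are synthetic and the model choice is an assumption for illustration.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n_patients, n_features = 5_000, 2_000          # many patients, thousands of data points each
X = rng.normal(size=(n_patients, n_features))
signal = X[:, :10].sum(axis=1)                 # assume a few features drive the outcome
y = (signal + rng.normal(size=n_patients) > 0).astype(int)  # e.g. tumour response: yes/no

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = LogisticRegression(max_iter=1_000).fit(X_train, y_train)

# What gets applied to one single patient is a probability, not a verdict.
proba = model.predict_proba(X_test)[:, 1]
print(f"Held-out AUC: {roc_auc_score(y_test, proba):.2f}")
```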


Dr Nicholas Shannon, a clinician-scientist in public practice, welcomes a helping hand from AI so that he can focus on what matters: his patients


Charmed by our imagination

And despite the human-like intelligence its name alludes to, current AI technology remains limited by the data it processes. And data scientist Associate Professor Liu Nan from Duke-NUS’ Centre for Quantitative Medicine likes it that way.

“At the moment, most AI algorithms are data-driven,” said Liu, who specialises in applying AI, machine learning and data science in various clinical domains. “And whenever we do the modelling, we do not want the computer to think by itself.”

But that limitation can be hard for the end-user to discern. Ask a large language model like ChatGPT or Bard a question, and it will give an answer.

“The problem with AI is that when it gives you an answer, it sounds very authoritative,” said Shannon.

Before a surgeon like Shannon makes any treatment suggestion, he takes a thorough history, reviews the patient’s existing medical records and tries to determine the individual’s values, wishes and needs.

But ChatGPT or Bard will simply generate an answer from an aggregation of what thousands of other humans have done. And when the model lacks information, it resorts to filling the gaps with plausible-sounding or made-up responses that look like the real deal.

Being able to discern when an AI is accurate can require subject matter expertise. As to how the AI derived that particular answer in the first place, that can be impossible to fully retrace, even for data experts like Shannon and Liu.

“If a clinician asks me how come a patient with cardiac problems and abnormal lab results is labelled as low risk—say a five per cent risk—by aiTriage™,” said Liu, who developed the device with Ong, “I cannot explain the exact mechanisms behind the risk calculation.”

A robot takes the blood pressure of a patient

Credit: iStock.com / miriam-doerr


For Liu, that low risk is not unreasonable because aiTriage™ calculates only “the probability of very bad outcomes”, such as an imminent stroke or heart attack. It doesn’t assess the risk of other complications that may still require some medical intervention.

Defining how an AI intervention can be best used and how its results should be interpreted will be key, the clinicians agree.

“It has to be viewed like someone else giving you that information: would you take their word for it, and does that transfer responsibility to you?” said Shannon. “So, something that needs to be built into these models is the ability to explain themselves.”

“This is an area where what we really don’t want is AI just being rolled out. In medicine, we need to do slightly better than that and avoid innovation without assessment and ethical justification.”

Prof Julian Savulescu

And that explanation should cover not just how the AI programme arrived at the decision but also how reliable its sources are. This could mean an ability to differentiate between lab reports, medical information documented by a clinician and a patient’s self-reported symptoms, which can be very unreliable.

“I can be asking a patient with a scar down his abdomen if he had surgery before, and he says no,” said Shannon. “But AI will take that at face value.”

To help bridge part of this trust gap, Liu created a transparent, universal clinical scoring tool he calls AutoScore. The software can be trained on any dataset with a binary, survival or ordinal outcome to generate tailored clinical scores.

And as with any clinical score in use today, the clinician knows exactly what the factors are and how they are weighted.

“The reason we developed AutoScore is to meet the needs of clinicians for clinical scores. It is a hybrid, so we use AI to automatically determine the cut-offs and parameters, but we allow the clinician to fine-tune them.

“So, we combine clinicians’ knowledge and experience with a data-driven AI solution,” said Liu, who hopes that the software, which is already being trialled by hospitals from New York to Seoul, can become a new standard for clinical score development around the world.
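To illustrate the hybrid idea Liu describes, here is a minimal Python sketch of a point-based score, with invented variables, synthetic data and made-up cut-offs; it is not the actual AutoScore implementation, only a flavour of how data-driven cut-offs and integer weights can stay fully inspectable, and tunable, by a clinician.

```python
# Illustrative point-based clinical score: data-driven cut-offs and integer
# points that a clinician can read, audit and fine-tune. Everything here
# (variables, data, cut-offs) is invented for the sketch.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 10_000
age = rng.integers(20, 95, n).astype(float)
heart_rate = rng.normal(80, 15, n)
latent = 0.04 * (age - 20) + 0.03 * (heart_rate - 80) + rng.normal(0, 1, n)
outcome = (latent > np.quantile(latent, 0.9)).astype(int)   # synthetic "bad outcome"

# Step 1: AI proposes cut-offs (here, simple quantiles) to bin each variable.
age_cuts = np.quantile(age, [0.25, 0.5, 0.75])
hr_cuts = np.quantile(heart_rate, [0.25, 0.5, 0.75])
X = np.column_stack([np.digitize(age, age_cuts), np.digitize(heart_rate, hr_cuts)])

# Step 2: fit a simple model on the binned variables, then rescale its
# coefficients to small integers so the score stays human-readable.
model = LogisticRegression().fit(X, outcome)
points = np.round(model.coef_[0] / np.abs(model.coef_[0]).min()).astype(int)

# Step 3: the clinician can inspect, and fine-tune, both cut-offs and points.
print("Age cut-offs:", age_cuts, "| points per bin step:", points[0])
print("Heart-rate cut-offs:", hr_cuts, "| points per bin step:", points[1])

def score(patient_age, patient_hr):
    """Total score: points earned per variable's bin, summed across variables."""
    return int(points[0] * np.digitize(patient_age, age_cuts)
               + points[1] * np.digitize(patient_hr, hr_cuts))

print("Score for an 82-year-old with a heart rate of 110:", score(82, 110))
```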

While explainability and transparency may help in promoting trust, Savulescu argues that what ultimately matters is justifiability: “Because you have to be able to justify what is being done on the basis of AI. It is the justifiability of the operation of the AI and the pattern of outcomes that it creates that are the most important from an ethical perspective.”


Associate Professor Liu Nan is expanding his focus from AI modelling to include regulatory and ethical considerations

 

The clinician-in-the-loop future

For the clinicians interviewed, the end goal of AI in healthcare is not replacement. AI is a tool, and the justification of the outcomes remains a burden shared by clinicians and their patients.

Being assisted by an AI can help free the clinician to focus on the patient. That’s why, for Shannon, the prospect of patients first being seen by an AI-powered healthcare assistant akin to Star Trek: Voyager’s Emergency Medical Hologram (a program modelled on its fictional creator, Dr Zimmerman) is a welcome thought.

“You can start the consult already having the information you need at baseline. So it can fast-track some of that process, including—importantly—the amount of time you spend looking at a computer screen to gather that information,” he said.

To ensure that safeguards are in place to manage the widespread use of AI, countries are racing to draw up regulations and frameworks. Singapore, for example, has added a new licensing category specifically for AI-powered tools to its list of healthcare interventions that must be tested and approved.

But as with any intervention, post-marketing studies and regular reviews of side effects need to be incorporated here too. And for Savulescu, these require society as a whole to agree on the values that go into AI programmes.

“We need to be assessing how AI is performing over time, and ensuring that the pattern of the distribution of benefits and burdens is ethical,” he said. “So, we need ethics to set up AI, set the values and because we can never be sure—particularly with black box AI and deep learning—that we’re maximising those values, we also need ethics to evaluate the outcomes.”

This view is shared by many clinicians and has led AI advocates like Liu to review their role as creators: “Many efforts focus on building a model without considering the negative impact on society. So, I’ve started to look beyond developing a score or model to look into the ethics and the governance of AI.”

Agreeing, Ong observed, “It has a lot of potential to address some of the problems we are facing like the shortage of trained healthcare manpower. We just must do it thoughtfully.”

He added: “When elevators were first introduced, people worried that we would forget how to climb stairs. But you are not going to climb 100 floors to get home, are you?”

Professor Julian Savulescu (right) hopes to see discussions around the use of AI in healthcare that examine the justifiability of its outcomes // Credit: NUS Centre for Biomedical Ethics

***

AI innovations promising to transform the way we live are waiting in the wings. But look back at the technicolour future of The Jetsons, and the value that seems to lie at the heart of the family’s bliss is progress through convenience.

“There’s always this question, whether it is progress or regress and what things should be preserved across generations and what should be rightfully gotten rid of. And now more than ever, we need this kind of active inquiry,” said Savulescu.

“This is an area where what we really don’t want is AI just being rolled out. No one knows what the effects of social media have been. In medicine, we need to do slightly better than that and avoid innovation without assessment and ethical justification.”

Picture of scales weighing drugs versus a heart in front of a healthcare professional in PPE

Credit: iStock.com / JadeThaiCatwalk

The physics of morality

“The world is divided into two categories,” explained ethicist and moral philosopher Professor Julian Savulescu, who heads the NUS Centre for Biomedical Ethics.

On the one hand exists the factual world, which can be observed and explained through science. “On the other hand, you’ve got values. And ethics is the way in which we make better, or worse, value judgements,” said Savulescu.

Often these decisions require the balancing of different factors to arrive at a “right” or “good” outcome.

“In that way, ethics is like physics,” he added. “In physics, you have vectors of force and if you want to work out which way the ball is going to go, you have to weigh all the vectors to work it out.”

And just as the laws of physics govern all matter, ethics is woven into the smallest day-to-day decisions, such as whether to take a taxi or public transport to work. In making these decisions, the “vectors” that might be considered are speed, cost and the amount of greenhouse gases emitted as a result of the journey. In determining the “right” decision, the decision-maker will, even if only subconsciously, draw on their values, whether frugality, sustainability or convenience.

“The strength of each vector will differ based on the circumstances,” said Savulescu.   
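To make the metaphor concrete, here is a toy worked example, with weights and ratings invented purely for illustration: a commuter who weighs speed at 3, cost at 2 and emissions at 1 might tally the options like this, tipping the “right” decision toward public transport.

```latex
% Toy "ethical vector sum": each option scores a weighted sum of values.
% All weights and ratings below are invented for illustration.
\[
\mathrm{score}(o) \;=\; w_{\mathrm{speed}}\, s(o) \;+\; w_{\mathrm{cost}}\, c(o) \;+\; w_{\mathrm{green}}\, g(o)
\]
\[
\text{taxi: } 3(0.9) + 2(0.2) + 1(0.1) = 3.2
\qquad
\text{transit: } 3(0.5) + 2(0.9) + 1(0.9) = 4.2
\]
```

Change the weights, that is, the circumstances, and the balance, and hence the “right” answer, shifts.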

And like physics, humans cannot escape ethics.

“People live under this illusion that you can avoid ethical decisions. But you can’t. Even if you do nothing, that’s an ethical decision.”
