In conversation with Duke-NUS medical students

Three medical students share their hopes, fears and dreams for an AI-enabled healthcare future 
Profiles of three students with their AI-generated avatars

Our three guests for this issue's column and their AI-generated avatars. The avatars were generated by Vance AI using the portraits featured here


New technologies by their very nature precipitate change. From electricity to the Internet and social media, these tools have shaped our lives in ways both good and bad. Electricity brought light, transport and better storage of food, but in producing it, humanity fell in love with coal; while the price for having information at our fingertips is a polarisation of views and a pressure-cooker mental health environment for many.

So it is not surprising to hear talk of AI destroying our world as often as it is hailed as the beginning of a new utopia. Amid all the hype and horror, the discourse on AI has conflated its capabilities and potential with our vivid imaginations. So is this just another stage of the Information Age, or have we entered the Age of Intelligence?

In this issue, we speak with three Duke-NUS medical students—all digital natives who will potentially be the first generation of clinicians working in an AI-augmented healthcare world—about their hopes, worries and dreams for an AI-enabled future.

 

MEDICUS: Since ChatGPT was opened to the public, AI has taken over the spotlight, triggering alarm and hope in equal measure. What is your reaction to large language models like ChatGPT and the coverage they and AI in general have had?

LOH DE RONG: ChatGPT is a very powerful language model that uses natural language processing. You can chat with it almost as though it’s human. And that shows how far we have come and what more is to come in the AI space. The widespread coverage of it brings to attention how AI can impact our lives, and raises awareness about AI and its implications. Until ChatGPT came along, we hadn’t thought much about some of these issues. We have to embrace it and adapt to it.

DHAKSHENYA DHINAGARAN: When I first tried it out, I was quite pleasantly surprised. It’s got this feature where it can hypothesise with you, where you can imagine a scenario which doesn’t even exist and ChatGPT can have a conversation with you about that. So that felt like a major advancement. But at the same time, I wonder with education, for example, how are we going to determine plagiarism in essays and work? So that’s my challenge with AI: every time it makes things more convenient for us, it also generates a new problem we need to catch up with.

ANDRE ANG: I think ChatGPT is an incredible tool. As an engineer, I may be biased, but I think that there are a lot of things that we can change. We can dictate the course, what AI can do for us versus what harm AI can do to us. I think that engineers have a responsibility to deploy such technologies responsibly.

A composite that introduces Andre Ang, one of the students interviewed for this feature

MEDICUS: How do you think AI will affect our world, jobs and life?

LOH DE RONG: AI has the potential to automate certain tasks and roles, especially those that involve very routine and repetitive work. We have already seen cashier jobs being replaced by self-checkout machines, and those are not even AI. So, AI is a new wave of revolution that we must all learn to adapt to. It can also create a lot of new jobs and augment our work so that we can focus on more critical tasks. It requires a combination of upskilling, reskilling and thoughtful workforce re-planning to ensure that we maximise the benefits of AI while addressing some of the challenges that will come with it.

DHAKSHENYA DHINAGARAN: I can see it playing different roles for different people. For example, it can help reduce the burden on caregivers by keeping track of medical appointments or groceries and alerting them when they’re running low on items. That could make a big difference. Other examples include biosensors and wearables that can track the health of individuals in our ageing population, as well as sensors in the home that can help monitor elderly people living alone who are at risk of falls. Such applications could also be beneficial.

A composite that introduces Dhakshenya, one of the students interviewed for this feature

MEDICUS: What do you think the future of healthcare will look like and what is the goal of using AI in healthcare?

DHAKSHENYA DHINAGARAN: I see AI as being safest and most helpful when used in a supportive role, such as in executing administrative tasks. It is insane how much admin house officers do. That is something AI can certainly help with. For example, after a consult with a patient, AI can check in on them or send them a report and reminders before the next appointment. Or before a consult, doctors could get a quick one-liner from the AI to brief them about the next patient. But right now, I still feel a bit conservative and wouldn’t want too much to change at the moment in terms of AI. We still have internet separation, so on a practical level, I’m not sure how much we can allow AI to intrude into the healthcare environment as such.

LOH DE RONG: It’s really about improving health outcomes for everybody. For example, my current project is about using AI to enable early diagnosis of ADHD and autism. Right now, kids are being diagnosed as late as five years old. But we have data suggesting that we can diagnose them even before they turn one. If we can do that, it could benefit a lot of these kids. AI can also assist radiologists, even in assessing normal films, which take up most of their time. Personalised treatment is another area. And it is not just medicine, but also manpower and efficiency where AI can play a role, automating tasks and reducing errors. During one of my shadowing stints at Duke Hospital’s emergency room, doctors spoke into a mic and their speech was transcribed into notes. That speeds things up. These are great tools, and it’s probably just a matter of time before they become part of our daily life. But I also foresee, at least in the near future, a last-mile problem. Ensuring responsible and ethical use of AI, addressing some of the biases that models may have, and setting clear standards are all things that we need to look at before we just go ahead and use these technologies.

Thinking forward to being a practising clinician, I do hope that I will be part of this integration where I get to see some of the AI tools being used in our daily lives. With my background in computer science and medicine, I am quite excited to see how AI is integrated into clinical practices and the significant improvements that we can bring to patient care.

ANDRE ANG: If we can offload some processes from clinicians’ shoulders, then they can concentrate on the patient, interacting with and treating the patient as a person, while AI or other technologies assist in the background. That is something that I look forward to and am cautiously excited about.

As a researcher, if I can have massive amounts of data that I can feed into AI which comes up with a conclusion or a result very quickly, that’s going to be very powerful and can advance medicine very quickly.

A composite that introduces Loh De Rong, one of the students interviewed for this feature


MEDICUS: So, what is your biggest worry when it comes to AI?

DHAKSHENYA DHINAGARAN: Security and surveillance. In healthcare, we’re dealing with very sensitive information and very vulnerable people; people who are sick, may be elderly or not of sound mind. If we’re looking to AI to reduce the burden on healthcare professionals by letting patients have conversations or engage with AI instead, what do we do when something goes wrong? You can’t blame the machine, right? And if we then need someone checking up on what exactly is happening in those interactions, aren’t we just giving ourselves a new job? We’re also still quite hazy on who takes responsibility for these agents.

LOH DE RONG: On a personal level, I am concerned about losing some of my critical thinking, becoming overly trusting of and reliant on AI. Most of the time, it’s a black box, and we don’t know what it’s doing. And even when it can tell you what it’s doing, like how ChatGPT generates sensible-sounding replies, it is not always right. So, you have to be discerning about the information it gives you.

On a societal level, I am concerned about issues like bias and fairness, how you train your data, as well as security vulnerabilities, which have serious implications in fields like autonomous vehicles and cybersecurity. And then there are the ethical dilemmas that come with these advances. These are things that we have to grapple with and need answers to before we just go ahead and use these technologies.

ANDRE ANG: Using AI has many risks, but they are not what you see in the movies. AI’s not about to take over the world. It’s a model that takes in data and generates data, just like a calculator. The calculator is not going to reinvent math. Math is never going to change even with the calculator. You’re not going to be treated by a computer. But the risks that AI can influence certain medical judgments and healthcare decisions are very real.

The most obvious risk would be the bias that AI has. It depends on the data that the AI is fed. As of now, I don’t believe that AI is fed enough data for us to use this sort of tool in Singapore. It depends on the people who are working on the model and the data that they have, which will dictate the outputs that the AI can generate. So the biggest risk is that we get misinformation or false information that the AI believes to be completely true based on the information we give it. AI is only as reliable as we make it.

And of course, privacy is very important in terms of patient information and data. If and when we implement AI into our healthcare processes, that has to be a very important consideration: how information is shared, how the data is managed and how patients are protected.

When the first x-rays came out, people were afraid of the radiation and how much information it could give, but nowadays, it is quite commonplace. I’m hoping that AI can be as commonplace and safe as that in the future.

So, yes, there are things to be worried about, but they are different from what most people make them out to be.


MEDICUS: How can we mitigate the risks of AI?

ANDRE ANG: Education is of paramount importance. Because at the end of the day, AI is a tool that takes in large amounts of input and gives you an output. So, people must be educated in the ways in which AI is limited but also in the ways that it can help the practice of medicine. So, understanding the limitations, the benefits and how far we can push this model before a human has to intervene, that’s important.

DHAKSHENYA DHINAGARAN: AI is often seen as a silver bullet that can do amazing things and we want to do the amazing things right away. But we need to do it in a stepwise manner. If it goes well, then we add a bit more responsibility or allow it to be a bit more intrusive and see how that goes. If it doesn’t work, then we scale back. I feel we do need to be cautious because we follow the rules for a reason—to keep patients safe and to make sure it fulfils its purpose.


LOH DE RONG: Number one would be the responsible use of AI. There are objective metrics for publishing models so that everyone understands each other’s work better. Another big part is explainability, and that’s especially important in the context of healthcare. If we want a model to be deployed in a healthcare setting, physicians, or anyone in the public, would want to understand why the model arrives at certain decisions. AI will also need some oversight, some form of human-in-the-loop, to make sure that we’re not missing dangerous things.


MEDICUS: What is the one message you want to send to regulators as they draw up the rules of engagement?

ANDRE ANG: While it is good to be careful about significant shifts in technology, AI is an incredible tool that can help in different areas of society, particularly in medicine. So we must recognise its use in healthcare and capitalise on it as soon as possible to aid us in our practice.

LOH DE RONG: Strike a balance between innovation and safeguarding against potential risks and ethical concerns. And I say this because I believe that we should be moving forward with AI and embracing it. But of course, safely and responsibly.

DHAKSHENYA DHINAGARAN: We need an institution like the FDA. We need that level of regulation. So, I would urge them to be cautious and conservative; not to get caught up in what the possibilities are, but to be realistic. See what AI can do, know that it’s safe before allowing it to move to the next stage. It’s all about taking baby steps, instead of allowing too much because then it’s difficult to scale back.


The interviews were conducted and edited by Nicole Lim, Senior Editor.

Get the latest news and features delivered to your inbox.
SUBSCRIBE TO MEDICUS