MEDICUS: So, what is your biggest worry when it comes to AI?
DHAKSHENYA DHINAGARAN: Security and surveillance. In healthcare, we’re dealing with very sensitive information and very vulnerable people; people who are sick, who may be elderly or not of sound mind. If we’re looking to AI to reduce the burden on healthcare professionals and allow patients to have conversations or engage with AI instead, what do we do when something goes wrong? You can’t blame the machine, right? And if we then need someone checking up on what exactly is happening in those interactions, aren’t we just giving ourselves a new job? We’re also still quite hazy on who takes responsibility for these agents.
LOH DE RONG: On a personal level, I am concerned about losing some of my critical thinking, about becoming overly trusting of and reliant on AI. Most of the time, it’s a black box, and we don’t know what it’s doing. And even when it can tell you what it’s doing, the way ChatGPT can generate sensible replies, it is not always right. So you have to be discerning about the information it gives you.
On a societal level, I am concerned about issues like bias and fairness, about how models are trained on data, and about security vulnerabilities, which have serious implications in fields like autonomous vehicles and cybersecurity. And then there are the ethical dilemmas that come with these advances. These are things we have to grapple with, and need answers to, before we just go ahead and use these technologies.
ANDRE ANG: Using AI has many risks, but they are not what you see in the movies. AI is not about to take over the world. It’s a model that takes in data and generates data, just like a calculator. The calculator is not going to reinvent math, and math is never going to change because of the calculator. You’re not going to be treated by a computer. But there are very real risks that AI can influence certain medical judgments and healthcare decisions.
The most obvious risk would be the bias that AI has. It depends on the data that the AI is fed. As of now, I don’t believe that AI is fed enough data for us to use this sort of tool in Singapore. It depends on the people working on the model and the data they have, which will dictate the outputs the AI can generate. So that is the biggest risk: that we get misinformation, false outputs the AI treats as completely true based on the information we give it. AI is only as reliable as we make it.
And of course, privacy is very important in terms of patient information and data. If and when we implement AI in our healthcare processes, that has to be a very important consideration: how information is shared, how the data is managed and how patients are protected.
When the first X-rays came out, people were afraid of the radiation and of how much information they could reveal, but nowadays, they are quite commonplace. I’m hoping that AI can become as commonplace and safe as that in the future.
So, yes, there are things to be worried about, but they are different from what most people make them out to be.
MEDICUS: How can we mitigate the risks of AI?
ANDRE ANG: Education is of paramount importance. Because at the end of the day, AI is a tool that takes in large amounts of input and gives you an output. So, people must be educated in the ways in which AI is limited but also in the ways that it can help the practice of medicine. So, understanding the limitations, the benefits and how far we can push this model before a human has to intervene, that’s important.
DHAKSHENYA DHINAGARAN: AI is often seen as a silver bullet that can do amazing things and we want to do the amazing things right away. But we need to do it in a stepwise manner. If it goes well, then we add a bit more responsibility or allow it to be a bit more intrusive and see how that goes. If it doesn’t work, then we scale back. I feel we do need to be cautious because we follow the rules for a reason—to keep patients safe and to make sure it fulfils its purpose.