Friday, 13 Sep, 2024

Duke-NUS data scientists develop framework that achieves accuracy with fairness

Accurate AI healthcare models—like all AI tools—are trained on vast sets of data to spot the hidden patterns that can help predict someone’s response to a treatment or their need for hospital admission. These models may be accurate across populations, but when it comes to predictions for individuals, AI tools often fall short of the first principle doctors swear to uphold: to do no harm.

That’s why a team of researchers from Duke-NUS set about developing a mathematical framework that generates accurate and fair predictions for clinical outcomes, such as whether an emergency patient will need to be hospitalised. The Fairness-Aware Interpretable Modelling, or FAIM, framework was published in Patterns, a Cell Press journal, on 12 September 2024.

Ms Liu Mingxuan, a candidate in the Duke-NUS Quantitative Biology and Medicine PhD programme and first author of the study, said:

“Ensuring equity in healthcare delivery is crucial because biased AI models can inadvertently perpetuate or even amplify existing disparities. In clinical scenarios, fairness is not just a technical requirement but a moral imperative as it directly impacts patient outcomes and access to care. Developing methods to mitigate these biases rather than exacerbate them is essential to achieving trustworthy and effective AI solutions in healthcare.”

Duke-NUS PhD candidate Ms Liu Mingxuan (right) developed the Fairness-Aware Interpretable Modelling or FAIM framework under the supervision of Associate Professor Liu Nan (left) // Credit: Norfaezah Abdullah, Duke-NUS
One of the biggest challenges she and her supervisor, Associate Professor Liu Nan, encountered was how to capture the inherently abstract concept of “fairness” in a comprehensive and mathematically expressible manner.


“There are numerous fairness metrics, yet none fully captures the complete picture,” said Ms Liu. It is little wonder, then, that the project took some 12 months to complete as they refined and tested their framework using two datasets from emergency departments in Singapore and the US.

“We had to work through many iterations to refine the framework, particularly the models,” she added. The current version of FAIM can be applied to scenarios that involve a binary outcome such as whether a patient needs to be admitted to hospital or not.
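To make the idea of competing fairness metrics concrete: for a binary outcome like hospital admission, fairness is often quantified by comparing a model's behaviour across demographic groups. The sketch below is purely illustrative and is not the FAIM implementation; it uses synthetic data and two widely used metrics, demographic parity difference and equal opportunity difference, to show why no single number tells the whole story.

```python
# Illustrative only (not the published FAIM method): two common fairness
# metrics for a binary "admit to hospital" classifier, compared across a
# sensitive attribute with two groups. All data here is synthetic.
import numpy as np

def demographic_parity_diff(y_pred, group):
    # |P(pred=1 | group A) - P(pred=1 | group B)|:
    # gap in predicted admission rates between the two groups
    rates = [y_pred[group == g].mean() for g in np.unique(group)]
    return abs(rates[0] - rates[1])

def equal_opportunity_diff(y_true, y_pred, group):
    # |TPR_A - TPR_B|: gap in true-positive rates, i.e. how often
    # genuinely admission-needing patients are flagged, per group
    tprs = []
    for g in np.unique(group):
        mask = (group == g) & (y_true == 1)
        tprs.append(y_pred[mask].mean())
    return abs(tprs[0] - tprs[1])

rng = np.random.default_rng(0)
group = rng.integers(0, 2, 1000)   # synthetic demographic group labels
y_true = rng.integers(0, 2, 1000)  # synthetic actual admission outcomes
y_pred = rng.integers(0, 2, 1000)  # synthetic model predictions

print("Demographic parity diff:", demographic_parity_diff(y_pred, group))
print("Equal opportunity diff:", equal_opportunity_diff(y_true, y_pred, group))
```

A model can score well on one of these metrics while scoring poorly on the other, which is the tension Ms Liu describes: each metric captures a different slice of "fairness", and frameworks like FAIM must balance them against predictive accuracy.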

Seeing their efforts published at last has brought a sense of accomplishment for Ms Liu, who was thrilled when the acceptance landed in her inbox: “The revision process was quite challenging, but it made the acceptance more rewarding. Now, I am excited to further develop and refine the FAIM framework to implement models that enhance equity in healthcare. It is vital that minority groups receive equitable care, and I hope our method can contribute to that goal.”

Next, Ms Liu plans to adapt the FAIM framework to other clinical outcomes and non-binary model types.

Assoc Prof Liu Nan, who leads Duke-NUS’ AI and Medical Sciences Initiative, or DAISI, hopes that frameworks like FAIM will eventually lead to equity in patient outcomes:

“Right now, we’re focused on making sure that we have equality in model performance for all patients. Once we achieve this, we can then harness these tools to deliver true equity for everyone.”
