Causal AI in healthcare: a black box revelation?


Artificial intelligence (AI) and machine learning have great potential to improve people’s lives, from supporting data analysis in research to providing quicker and more accurate diagnostic tools. But their inner workings are questioned by many and understood by few. New models are needed to solve current shortcomings, and causal AI might be our way out. By offering a peek inside the black box, it creates opportunities to implement AI in high-risk settings such as healthcare. But how far along are we, and where is this journey taking us?

Would you trust AI to make healthcare decisions for you? Its possibilities are undeniably vast but also daunting, especially in healthcare, where data is sensitive and lives are at stake. But this environment also creates opportunities for AI to greatly benefit people’s lives. And yet, AI’s effective and safe application in healthcare is currently hampered by low data availability, ethical considerations, and incompatible legislation and IT infrastructure. Skepticism is also prevalent among patients and physicians, many of whom don’t trust AI to make the best decisions about their health. Current AI models are black boxes, providing no information about how their predictions are reached. Jarne Verhaeghe, PhD Researcher in Hybrid and Causal AI at Ghent University and imec, goes beyond simple data-driven models to find solutions. “Traditional models often have poor validation results when applied in the clinic or in the overall healthcare sector. Results are often poor because of biases and improper training. If you want to apply these models for treatment suggestions and decision-making, you want them to be very secure. New models need to come to solve these problems and causal AI might be the way out.”

Causality mimics the human thought process

As the name suggests, causal AI finds direct cause-and-effect relationships between features, providing not only a prediction such as a treatment effect, but also an answer to questions like ‘why does it give this prediction?’ and ‘what if no treatment is given?’. “In that way, it tries to answer questions in a way that humans also think,” says Verhaeghe. To accomplish this, it’s important to include the right features and exclude the wrong ones. Verhaeghe explains, “This requires critical thinking about your model. You get all this data and you need to look at the domain knowledge by collaborating with physicians and experts. You need to learn what the important factors are.” For example, if you build a causal AI model that predicts the effect of blood pressure medication for the whole Belgian population and you forget to take age into account, it will predict that blood pressure medication kills people, since many older people take it. This would lead to incorrect conclusions.
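The blood-pressure example above can be made concrete with a small simulation. All numbers below are invented for illustration: the drug has zero true effect, but because older people both take it more often and die more often, a naive comparison makes the medication look deadly, while comparing within age groups removes the illusion.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Toy population (all numbers invented): age is a confounder that drives
# both medication use and mortality; the drug itself has ZERO true effect.
old = rng.random(n) < 0.5
medicated = rng.random(n) < np.where(old, 0.8, 0.1)  # older -> more often medicated
died = rng.random(n) < np.where(old, 0.30, 0.05)     # older -> higher mortality

# Naive comparison ignores age, so the medicated group is mostly old
# and the drug looks deadly.
naive = died[medicated].mean() - died[~medicated].mean()

# Age-adjusted comparison: compare like with like inside each age stratum.
adjusted = np.mean([
    died[medicated & (old == g)].mean() - died[~medicated & (old == g)].mean()
    for g in (True, False)
])

print(f"naive 'effect':  {naive:+.3f}")     # strongly positive: misleading
print(f"adjusted effect: {adjusted:+.3f}")  # close to zero: the truth
```

This is exactly the kind of incorrect conclusion that domain knowledge, here the knowledge that age matters, is needed to prevent.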

A peek inside the black box

Including knowledge-based features and finding a causal relationship between them is what makes causal AI an interpretable and explainable model. “Machine learning is just a way to predict things. With causal AI, you go beyond. You go a step higher allowing to interact with more questions and more solutions,” emphasizes Verhaeghe. A physician understands why certain predictions are made and what happens if other interventions are chosen, which enables clear communication with the patient. Complementing your causal AI model with uncertainty quantification adds even more to its interpretability. Every prediction is accompanied by a confidence interval, providing an answer to the question ‘How certain am I?’. “This is an important part of an AI system in high-risk applications such as healthcare. You need to have this explainability together with this uncertainty aspect in order to use it correctly and make correct decisions,” states Verhaeghe.
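As a rough sketch of what uncertainty quantification adds, here is a minimal bootstrap example on made-up outcome data (the scores and sample sizes are invented): instead of reporting a single effect estimate, the model reports it together with a 95% confidence interval.

```python
import numpy as np

rng = np.random.default_rng(1)

# Invented recovery scores for 200 treated and 200 control patients.
treated = rng.normal(1.0, 1.0, 200)
control = rng.normal(0.2, 1.0, 200)

# Point estimate of the treatment effect.
point = treated.mean() - control.mean()

# Bootstrap: resample patients many times to see how much the estimate
# would wobble, and report a 95% confidence interval instead of one number.
boots = [
    rng.choice(treated, 200).mean() - rng.choice(control, 200).mean()
    for _ in range(2000)
]
lo, hi = np.percentile(boots, [2.5, 97.5])

print(f"estimated effect: {point:.2f}  (95% CI: {lo:.2f} to {hi:.2f})")
```

A physician reading such an output learns not just what the model predicts, but how much to trust that prediction, which is the point Verhaeghe makes about high-risk applications.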

Are we there yet?

Although very promising, there are some obstacles to overcome before causal AI can be implemented in healthcare. Because it’s a relatively new technology, there are still few scientists with expertise in the field. Furthermore, causal AI rests on strong assumptions, which makes it difficult to work with real-world data (RWD). Firstly, when ‘feeding’ data to the model, the individuals’ features must overlap. If patients receiving treatment are too different from those receiving a placebo, the model can’t compare them, the treatment effect cannot be quantified, and features predicting a good treatment response can’t be extracted. Secondly, bias in treatment assignment should be avoided. This is difficult in real-world settings, where treatment is not allocated randomly. Therefore, all factors explaining why a treatment is given to a certain patient should be included in the model. “This requires a lot of expertise and trust in your own,” says Verhaeghe. Lastly, the outcome of one patient cannot influence that of another, which is problematic in settings like intensive care units (ICUs) or pandemics: a spreading infectious disease can influence other patients’ outcomes, creating bias. Techniques to compensate for this already exist, but they are still in their infancy.
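The overlap assumption can be checked in practice. A minimal sketch on simulated data (the age-driven treatment pattern is invented for illustration): bin patients by a covariate and verify that both treated and control patients appear in every bin; where one group is nearly absent, the two groups cannot be compared.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 1_000

# Invented cohort in which treatment was mostly given to older patients,
# i.e. assignment was NOT random.
age = rng.uniform(20, 90, n)
treated = rng.random(n) < np.clip((age - 20) / 70, 0.05, 0.95)

# Crude overlap check: inside every age bin we want enough patients
# from BOTH groups, otherwise the model cannot compare them there.
bins = np.linspace(20, 90, 8)
poor_overlap = []
for lo, hi in zip(bins[:-1], bins[1:]):
    in_bin = (age >= lo) & (age < hi)
    n_treated = int((treated & in_bin).sum())
    n_control = int((~treated & in_bin).sum())
    if min(n_treated, n_control) < 20:
        poor_overlap.append((int(lo), int(hi)))
    print(f"age {lo:3.0f}-{hi:3.0f}: treated={n_treated:4d}, control={n_control:4d}")

print("bins with poor overlap:", poor_overlap)
```

In this toy cohort the youngest patients are almost never treated and the oldest almost always are, so the extreme age bins fail the check: exactly the situation where a causal model’s treatment-effect estimates should not be trusted.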

Clinical trials are the causal playground

Luckily, we can already reap the rewards of causal AI in controlled settings such as clinical trials. It helps decide which patients to recruit, makes trials more efficient and cost-effective, and improves the analysis of the data. Clinical trials are randomized and exclude confounding factors (features which create bias), benefiting causality and the quantification of treatment effects. You can even go further and stratify patients based on their treatment outcome, extracting features that correspond to a good or bad treatment response. In this way, it comes very close to personalized healthcare, whereas conventional clinical trial analyses usually report only the average treatment response across all included patients. Verhaeghe agrees: “With normal AI models, it’s much harder to achieve personalized healthcare. If you want to know the effect of specific actions on a specific person, you really need to know the intricacies of your treatment.” Its clinical application is, however, still a long way off. Creating a causal AI model for personalized healthcare requires a lot of data, more than a traditional model does. “Because you need to learn the function and secrets of both treatment and placebo,” says Verhaeghe.
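The step from an average treatment effect to a per-patient effect can be sketched with a toy randomized trial (all numbers invented): fit one simple model per treatment arm, a so-called T-learner, and take the difference between the two fits as the estimated effect for an individual patient.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 2_000

# Invented randomized trial: the treatment only helps patients with a
# high biomarker value x (true effect = 2*x - 0.5).
x = rng.uniform(0, 1, n)                                 # baseline biomarker
t = rng.random(n) < 0.5                                  # randomized assignment
y = 0.5 * x + t * (2 * x - 0.5) + rng.normal(0, 0.2, n)  # observed outcome

# T-learner: fit one simple model per arm; their difference estimates the
# treatment effect for an individual patient with biomarker xi.
slope_t, intercept_t = np.polyfit(x[t], y[t], 1)
slope_c, intercept_c = np.polyfit(x[~t], y[~t], 1)

def effect(xi):
    return (slope_t - slope_c) * xi + (intercept_t - intercept_c)

print(f"effect at x=0.1: {effect(0.1):+.2f}  (true value: -0.30)")
print(f"effect at x=0.9: {effect(0.9):+.2f}  (true value: +1.30)")
print(f"average effect:  {effect(x).mean():+.2f}  (true value: +0.50)")
```

A conventional analysis would report only the average (here a modest benefit) and miss that the treatment harms low-biomarker patients while strongly helping high-biomarker ones, which is precisely the personalization Verhaeghe describes.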

Read this article to learn more about ‘building’ towards personalized healthcare with the ATHENA project consortium.

Causal AI is on the ‘slope of hope’

Although there’s still a long way to go, the possibilities of causal AI are undeniable. Verhaeghe explains its journey along the Gartner hype cycle, a graph representing the phases new innovations go through: “Following the Gartner hype cycle, we have the peak of excitement, the fall of disappointment, the ‘slope of hope’, and then we have maturing. Causal AI is right in the middle of the slope of hope. People are getting to know the technique and see the potential.” This potential is reflected in the ongoing research conducted by Verhaeghe and the PreDiCT team of IDLab, Department of Information Technology at Ghent University-imec. The HEROI2C project tries to battle antimicrobial resistance by optimizing antibiotic treatment for patients in intensive care units (ICUs). By analyzing patient data, it tries to predict which dose should be given to which patient in order to stay within the therapeutic margin. Another project uses causal AI to identify which factors increase or decrease the risk of atrial fibrillation in ICU patients. Given that this study is based on RWD, the findings come with more caveats. “Just having the notion that certain factors could increase the risk is also something that is very important,” states Verhaeghe. This and other ongoing research form a stepping stone for the future development and implementation of causal AI, building trust in its capability to improve people’s lives.