In AI we trust? Not without finding its clinical puzzle piece.

Generative AI offers the healthcare industry real opportunities to improve care across multiple dimensions and to deliver accessible, engaging user experiences. It makes sense that healthcare organizations want to adopt these tools quickly to capture the promised benefits and avoid falling behind competitors. But the technology can still fall short of the hype, with shortcomings significant enough to outweigh the advantages for many potential customers.

GenAI’s medical hurdles

Right now, a doctor or other healthcare professional who uses conversational AI to assess patient symptoms would likely receive clinical recommendations that make sense in most cases. 

But with an enormous quantity of clinical data and evidence informing AI-generated recommendations, it is a gamble whether the output is clinically accurate, captures edge cases, and accounts for a patient's unique history and characteristics. When providers receive reasonable-seeming guidance, they should keep in mind that generative AI output comes from a statistical model that selects the most likely next word in a sequence; it does not rely on clinical experience or reasoning. And even when the output offers spot-on guidance, the "black box" nature of generative AI models means that doctors have no way to review how the model constructed its answer or to reproduce its logic and test its conclusions.
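
To make the "most likely next word" point concrete, here is a minimal, purely illustrative sketch of how a language model picks its next token: candidate words get numeric scores, a softmax turns the scores into probabilities, and the highest-probability word is emitted. The prompt, candidate words, and scores are invented for the example; the point is only that the selection step is statistical, which is why fluent output can still be clinically wrong.

```python
import math

# Toy illustration of next-word selection. A language model assigns a score
# (logit) to each candidate continuation of a prompt such as
# "The most likely diagnosis is ..." and converts those scores to
# probabilities. All tokens and scores here are invented for the example.
logits = {
    "pneumonia": 2.1,
    "bronchitis": 1.7,
    "pulmonary embolism": 0.4,
}

# Softmax: each probability is exp(score) divided by the sum over all scores.
total = sum(math.exp(score) for score in logits.values())
probs = {token: math.exp(score) / total for token, score in logits.items()}

# The model emits the statistically most likely continuation. Nothing in this
# step consults the patient's history, lab values, or clinical guidelines.
next_token = max(probs, key=probs.get)
print(probs)       # roughly {'pneumonia': 0.54, 'bronchitis': 0.36, 'pulmonary embolism': 0.10}
print(next_token)  # 'pneumonia'
```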

Simply put: While GenAI models can produce understandable and accessible content about almost every subject, they lack the capacity to reason like a human or understand clinical context. Without these crucial capabilities, they are not well-suited for clinical settings. 

Without a way to determine or understand how a black box model arrived at its conclusions (regardless of their accuracy), physicians and other healthcare personnel are less likely to trust the technology in a clinical setting. If providers must build on a clinical report without any insight into the AI-generated recommendations behind it, they end up spending more of their limited time confirming the output themselves. As a result, the much-touted benefits of GenAI solutions, like reduced cognitive burden and greater clinical efficiency, are lost.

Before GenAI models can be put to work in healthcare, a sector that demands deep knowledge and clinical understanding, additional tools and layers will have to be built on top of them to anchor their output to industry knowledge that can be reviewed and verified. Until those tools reach the market, GenAI solutions will continue to fall short of what providers need and expect from new technologies.

Still, GenAI holds enormous potential to drastically transform clinical work if we can find ways to address these challenges. Deployed appropriately, AI tools and systems can streamline care with reliable clinical reports and real-time guidance. But that is only possible with AI models that deliver transparent, evidence-based recommendations rather than another black box. Tools that draw on the best available evidence and show their reasoning can help providers work more efficiently, standardize care across the board, and promote equity across entire enterprises.

The role of explainable clinical AI

No one should have to pick between verified, evidence-based output and fluent output. In fact, we must harness the text fluency of large-scale generative AI models to produce the most powerful and effective tools possible.

A clinical reasoning AI engine that thinks like a doctor while leveraging the text fluency of generative AI takes both needs into account. Kahun's own clinical AI tool, for instance, runs on an explainable AI engine that bases its output on peer-reviewed medical literature and other evidence-based sources. Grounding the engine in tangible research allows it to give doctors evidence-based clinical insights before, during, or after a patient visit.

But explainable AI doesn't start and end with anchoring models to research; it also requires building clinical reasoning into the way the AI operates. Beyond providing the transparency that healthcare organizations need from generative AI solutions, this approach helps models generate consistent output that aligns with how a doctor would assess a patient.

Innovations in explainable AI models that combine transparency and literature-backed conclusions will open the floodgates even wider for AI chatbots that assist medical professionals and reduce care variability. The need for a true AI-powered and research-based clinical assistant is especially pressing as professional burnout continues to climb across almost every medical sector. And when doctors and other healthcare professionals are drowning in work, the standard of care naturally fluctuates.

Generative and conversational AI already power foundational tools that help ease burnout and make the most of the time doctors have with their patients, and the industry is clearly receptive to their benefits. Expanding their reach in healthcare doesn't require starting from scratch; it's a matter of adapting AI models and generative AI technology with clinical reasoning capabilities so they can offer meaningful clinical decision support in real time. When AI solutions can provide true clinical insights, generative AI can live up to its transformative potential.

Michal Tzuchman-Katz, MD, is the Co-Founder and CEO of Kahun Medical, a company that built an evidence-based clinical reasoning tool for physicians. Before co-founding Kahun in 2018, she worked as a pediatrician at Ichilov Sourasky Medical Center, where she also completed her residency. She continues to practice pediatric medicine at Clalit Health Services, Israel’s largest HMO. Additionally, she has a background in software engineering and led a tech development team at Live Person.