As a practicing physician, I, like many others, can easily become overwhelmed by administrative burden and information overload. Seeing many patients in a matter of minutes means I have limited time to digest relevant information from the patient chart that could positively impact the way I deliver personalized care. Every day, clinicians search through numerous tabs in the EHR to gather data before seeing patients, reviewing new labs, notes, and telephone encounters. This leaves less time for patient encounters and even less time to research evidence-based care tailored to each patient's unique circumstances. This gap between best practice and care delivered, coupled with significant wasted physician time, leads to suboptimal patient outcomes.
It’s no secret that physicians spend way too much time in the EHR. According to a study by the American College of Physicians (ACP), physicians spend 49.2% of their time on EHRs, compared to just 27.0% on direct patient interactions. As a result, they often begin clinical encounters trying to piece together disparate information on the patient, which can lead to missing critical information or making the patient feel like the physician is uninformed. Many health systems and clinics have turned to nurses or PAs to take on pre-encounter summarization; however, this often takes up valuable time and leads to a disjointed patient experience. New developments in AI and strides in the interoperability landscape have made it increasingly possible to surface patient information to clinicians in a way that is relevant, actionable, and personalized.
To address physician challenges with pre-encounter workflows, Avo created an all-in-one solution called "Care Guide" that not only summarizes patient data but also provides actionable insights on care gaps and automates relevant clinical tasks. Care Guide automatically pulls in patient data from across the chart and couples that information with relevant society guidelines and internal system protocols to produce a pre-encounter chart synopsis and care gap identifier for the physician. The physician can also see a summary of best next steps prior to the visit, based on the latest evidence and institutional protocols. By drawing on large datasets to identify patterns a busy physician might miss, the tool can also support diagnosis and treatment selection. This not only helps physicians feel more confident walking into the patient's room but, more importantly, helps improve patient outcomes.
Recently, researchers have observed that while AI models like GPT excel at tasks such as summarization, conversation, and translation, they still face significant challenges with deep clinical reasoning, particularly when required to read and interpret recommendations in real time. This gap becomes evident when applying complex guidelines and making nuanced clinical decisions. However, emerging research suggests that tasks traditionally difficult for a single AI model can be managed more effectively by splitting them into smaller, more manageable components. Each AI agent focuses on a specific, achievable task, and when these agents collaborate, they can collectively deliver higher accuracy and better performance on complex tasks.
Applying guidelines to patients requires more than simple summarization; it involves deep clinical reasoning. AI must navigate hundreds of guidelines, each containing numerous recommendations, making the task exponentially more complex than typical AI functions. While AI excels at basic summarization, it often struggles with intricate clinical reasoning tasks, such as identifying the correct billing codes from a list. To overcome these challenges, we developed a network-like AI architecture in which over a dozen AI agents collaborate to capture care gaps and provide recommendations. In this "Mixture of Agents" (MOA) model, each AI takes on a specific role: data interpretation, guideline searching, recommendation aggregation, and even automation, such as finding the correct orders for the patient. First, a "Data Interpreter AI" reads patient data to discern potential conditions (e.g., hypertension). Then a "Guideline AI" searches existing medical guidelines and aggregates relevant recommendations. Next, a "Guideline Mapper AI" applies these recommendations to the patient, and so on.
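To make the agent relay concrete, here is a minimal Python sketch of the pattern: a shared context passed through specialized agents in sequence. The class names, the toy rule standing in for an LLM call, and the guideline entries are illustrative assumptions for this sketch, not Avo's actual implementation.

```python
from dataclasses import dataclass, field


@dataclass
class PatientContext:
    """Carries the chart and each agent's output through the pipeline."""
    chart_text: str
    conditions: list = field(default_factory=list)
    recommendations: list = field(default_factory=list)


class DataInterpreterAgent:
    """Reads patient data and flags potential conditions."""

    def run(self, ctx: PatientContext) -> PatientContext:
        # Toy rule standing in for a model call that interprets the chart.
        if "150/95" in ctx.chart_text:
            ctx.conditions.append("hypertension")
        return ctx


class GuidelineAgent:
    """Searches a guideline store and gathers recommendations per condition."""

    GUIDELINES = {
        "hypertension": [
            "Confirm diagnosis with out-of-office blood pressure measurement",
            "Assess cardiovascular risk before selecting therapy",
        ],
    }

    def run(self, ctx: PatientContext) -> PatientContext:
        for condition in ctx.conditions:
            ctx.recommendations.extend(self.GUIDELINES.get(condition, []))
        return ctx


def run_pipeline(chart_text: str, agents) -> PatientContext:
    """Pass the shared context through each specialized agent in order."""
    ctx = PatientContext(chart_text)
    for agent in agents:
        ctx = agent.run(ctx)
    return ctx
```

The design choice worth noting is that each agent does one small, verifiable job and communicates only through the shared context, which is what lets a "Guideline Mapper AI" or an automation agent be appended to the chain without modifying the others.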
Clinicians can verify the source of each recommendation in Care Guide at a granular level, down to the sentence rather than merely a reference link, which mitigates concerns about AI hallucinations or reasoning failures. Since the mapped guidelines can be extensive and potentially overwhelming, another agent, the "Recommendation Aggregator," condenses them into a more readable, user-friendly format, so users can easily verify, edit, and copy the information directly into the medical record. Finally, the "Automation Aggregator" processes the recommendations, searching for all relevant medication orders and other automation tasks so the user can implement them efficiently. For example, if Care Guide recommends a medication class like SGLT2 inhibitors, a dedicated agent aggregates the relevant orders, allowing clinicians to pend them with a single click instead of navigating multiple EHR tabs.
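The two aggregation steps above can be sketched in a few lines: one function that deduplicates and groups recommendations for readability, and one that matches medication-class mentions to concrete, pendable orders. The order catalog and matching rule here are simplified assumptions for illustration; a real system would query the EHR's order catalog.

```python
# Hypothetical catalog mapping medication classes to pendable EHR orders.
ORDER_CATALOG = {
    "sglt2 inhibitor": ["empagliflozin 10 mg PO daily",
                        "dapagliflozin 10 mg PO daily"],
    "ace inhibitor": ["lisinopril 10 mg PO daily"],
}


def aggregate_recommendations(recs):
    """Deduplicate (condition, text) recommendations and group by condition."""
    grouped = {}
    for condition, text in recs:
        bucket = grouped.setdefault(condition, [])
        if text not in bucket:
            bucket.append(text)
    return grouped


def find_pendable_orders(grouped):
    """Match medication-class mentions in recommendation text to orders."""
    orders = []
    for texts in grouped.values():
        for text in texts:
            for med_class, catalog_orders in ORDER_CATALOG.items():
                if med_class in text.lower():
                    orders.extend(catalog_orders)
    return orders
```

Running the aggregator before the order search means duplicate recommendations from different guideline agents produce each order only once, which is one simple way to keep the final list clickable rather than noisy.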
Medical-specific MOA is very complex: each AI unit must be assigned a specific role, with prompts built by people who combine deep clinical experience with prompt engineering expertise. For this reason, we built a team of clinical informaticians, all board-certified physicians with technical programming experience.
The opportunities for streamlining clinical workflows and improving patient care with AI are enormous. The key is ensuring solutions reduce the risk of alert fatigue and genuinely relieve clinician burnout while keeping patient outcomes front and center. Chart summaries are not enough. AI solutions must go beyond summarization and stop relying on physicians to piece together the full picture of the patient at hand; instead, they must proactively learn everything there is to know about the patient and the latest evidence without the physician ever needing to ask or search for the answer. While this approach pushes the boundaries of AI's clinical capacity, it also offers massive potential for physicians like myself to focus on what's important: patient-provider interactions.
About Avo
Avo’s clinician support platform empowers healthcare organizations to standardize care by effortlessly incorporating guidelines and protocols into the clinical workflow. By centralizing the latest information and transforming it into actionable tools in the EHR (or outside of it), Avo simplifies everyday tasks like documentation (with ambient listening), pre-charting, ordering, and decision-making for clinicians. At Avo, we improve quality of care with love, not alerts. Contact us to learn more.
Dr. Joongheum Park is Founder, Chairman, and Head of Product at Avo. He is a practicing board-certified internal medicine physician and clinical informatician. He is a member of the Associated Harvard Medical Faculty (AHMF) at Beth Israel Deaconess Medical Center. He was an internist and clinical informatics fellow at New York Presbyterian-Columbia University Medical Center ("CUMC") and served on the executive board of the National Association of Clinical Informatics Fellows. He is also a professional developer and has mentored computer science classes in healthcare software at Georgia Tech.
Additionally, Dr. Park leads multiple clinical AI projects in collaboration with Columbia University, Albert Einstein College of Medicine, Rutgers University, and the University of Pittsburgh.