AI surpasses doctors in emergency triage diagnoses

AI triage – A Harvard trial found AI more accurate than emergency doctors for triage and some treatment planning, while researchers stress human oversight remains vital.
A new kind of decision support is moving fast into emergency medicine, and the latest Harvard trial suggests AI may make more accurate triage diagnoses than doctors when information is limited and time is short.
In the study, Misryoum reports that large language model systems were tested against human clinicians using the same kind of electronic patient summaries that arrive in emergency departments. On triage diagnosis, the AI showed higher accuracy overall at identifying the correct diagnosis, or one very close to it, outperforming doctors in the most urgent, first-glance situations where every minute matters.
The trial examined how an AI reasoning model performed when it received standard clinical text, including vital-sign data, basic patient details, and brief notes about why the patient was brought in. Across a group of patients evaluated in one experiment, the system's diagnostic results were consistently stronger than those of human doctors working from the same records, with an advantage most noticeable when triage required rapid judgments with minimal context.
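The study does not publish its pipeline, but the setup described above, serializing a patient's vitals, demographics, and intake notes into plain text for a language model, might look roughly like the sketch below. All field names and the prompt wording here are illustrative assumptions, not details from the trial.

```python
# Hypothetical sketch: turning an emergency-department record into a
# text prompt for a language model. Field names ("vitals",
# "chief_complaint", etc.) are assumptions for illustration only.

def build_triage_prompt(record: dict) -> str:
    """Format vital signs, demographics, and intake notes as plain text."""
    vitals = ", ".join(f"{name}: {value}" for name, value in record["vitals"].items())
    return (
        f"Patient: {record['age']}-year-old {record['sex']}\n"
        f"Vitals: {vitals}\n"
        f"Reason for arrival: {record['chief_complaint']}\n"
        "Task: give the single most likely diagnosis and a triage level."
    )

example = {
    "age": 67,
    "sex": "male",
    "vitals": {"HR": 118, "BP": "88/54", "SpO2": "91%"},
    "chief_complaint": "sudden chest pain and shortness of breath",
}
print(build_triage_prompt(example))
```

A real system would send this text to a model and parse the reply; the point of the sketch is only that the model sees the same terse record a clinician would, with no bedside context.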
A key insight here is that emergency triage is not just about knowing medicine, but about making sense of incomplete, high-stakes information quickly. If AI can reliably improve that first pass, it could change how clinical teams prioritize patients before more data arrives.
Misryoum also highlights that the study explored whether the AI's performance held up when more detail was available. In those cases, the AI continued to perform well, while human clinicians' accuracy also varied depending on what was included in the record. The researchers further tested longer-term planning tasks, such as recommending treatment approaches or outlining end-of-life processes, where the AI again scored higher than doctors who were using conventional tools for additional information.
Still, the findings come with important limits. The evaluation did not test how the AI would interpret non-text signals like a patient's visible distress. In practice, that means the system functioned more like a second reader of the paperwork than a full replacement for the bedside assessment clinicians deliver.
That is why, Misryoum notes, the study's message is measured in tone: this is not an argument to sideline emergency physicians. Instead, researchers describe a shift in technology that could reshape medicine, potentially embedding AI into care workflows where doctors remain responsible for judgment, communication, and ethical decisions.
At the end of the day, Misryoum says, the central question is not only whether AI can be accurate, but whether it can be deployed safely and transparently in real-world care. Systems that outperform humans in controlled comparisons could still struggle with edge cases, and accountability frameworks for clinical responsibility remain a critical gap.