Research suggests Epic Sepsis Model is lacking in predictive power

A retrospective study in JAMA Internal Medicine finds that the model did not identify two-thirds of sepsis patients and frequently issued false alarms.
By Kat Jercich

A new study in JAMA Internal Medicine found that the sepsis prediction model included as part of Epic's electronic health record may be a poor predictor of the condition.

Using retrospective data, University of Michigan Medical School researchers found that the predictor did not identify two-thirds of sepsis patients.  

"In this external validation study, we found the ESM to have poor discrimination and calibration in predicting the onset of sepsis at the hospitalization level," UM researchers wrote.   

Epic disputed the study's findings, saying that the authors used a hypothetical approach that did not account for the analysis and tuning required before real-world deployment to achieve optimal results.

"In their hypothetical configuration, the authors picked a low threshold value that would be appropriate for a rapid response team that wants to cast a wide net to assess more patients," said a statement provided by the company.  

"A higher threshold value, reducing false positives, would be appropriate for attending physicians and nurses," it continued.  

WHY IT MATTERS

As the researchers note, early detection and treatment of sepsis have been associated with lower mortality in hospitalized patients.

One of the most widely implemented early warning systems for sepsis in U.S. hospitals is the ESM, a penalized logistic regression model included in Epic's EHR.   
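
For readers unfamiliar with the term, a penalized logistic regression is a standard logistic regression whose coefficients are shrunk toward zero by a regularization term. The sketch below is a generic, hypothetical example of that model class in scikit-learn; the features, data and settings are invented for illustration and do not reflect Epic's actual model.

```python
# Hypothetical sketch of a penalized (L2-regularized) logistic regression,
# the general model class the article attributes to the ESM.
# All features, data and settings below are invented; they are not Epic's.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(42)
X = rng.normal(size=(5_000, 20))             # 20 made-up vital-sign/lab features
y = (rng.random(5_000) < 0.07).astype(int)   # ~7% positive class, as a stand-in

# C is the inverse regularization strength; smaller C means a stronger penalty.
model = LogisticRegression(penalty="l2", C=1.0, max_iter=1000)
model.fit(X, y)
risk = model.predict_proba(X)[:, 1]          # predicted probability of sepsis
print(risk[:5])
```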

Although Epic developed and validated the model based on data from 405,000 patient encounters, the researchers raised concerns about its opacity as a proprietary model.  

"An improved understanding of how well the ESM performs has the potential to inform care for the several hundred thousand patients hospitalized for sepsis in the U.S. each year," wrote the researchers.

Using the data of all patients older than 18 admitted to Michigan Medicine between December 6, 2018, and October 20, 2019, researchers found that sepsis occurred in 7% of the hospitalizations. The ESM had a hospitalization-level area under the receiver operating characteristic curve, or AUC, of 0.63, which they called "substantially worse" than that reported by Epic.
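
A hospitalization-level AUC summarizes how well a score separates hospitalizations that developed sepsis from those that did not, with 0.5 meaning no better than chance and 1.0 meaning perfect separation. The sketch below shows how such a figure might be computed on simulated data with invented score distributions; it is not the study's code or data.

```python
# Hypothetical illustration of a hospitalization-level AUC calculation.
# Each hospitalization is reduced to its maximum risk score and a binary
# sepsis label; the simulated numbers below are not the study's data.
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n = 10_000
sepsis = rng.random(n) < 0.07                          # ~7% prevalence, as reported
# Simulated maximum risk score per hospitalization, with only modest
# separation between septic and non-septic patients.
max_score = rng.normal(loc=np.where(sepsis, 13.0, 9.0), scale=8.0)

print(f"Hospitalization-level AUC: {roc_auc_score(sepsis, max_score):.2f}")
```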

When alerting at a score threshold of 6 or higher, which is within Epic's recommended range, the model identified only 7% of patients with sepsis who were missed by a clinician.  

It did not identify two-thirds of patients with sepsis – despite generating alerts on 18% of all hospitalized patients, creating a large burden of alert fatigue.  
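
The numbers above reflect a basic trade-off: a single alerting threshold fixes both how many septic patients are caught and how many total alerts clinicians must triage. The hypothetical sketch below, again on simulated data, illustrates that arithmetic; the thresholds and score distributions are invented and do not reproduce the study's figures.

```python
# Hypothetical illustration of the trade-off a single alerting threshold
# creates between sensitivity and alert burden; simulated data, not the study's.
import numpy as np

def threshold_summary(scores, sepsis, threshold):
    """Sensitivity and overall alert rate at a given score threshold."""
    alerts = scores >= threshold
    sensitivity = alerts[sepsis].mean()   # share of septic patients flagged
    alert_rate = alerts.mean()            # share of all patients flagged
    return sensitivity, alert_rate

rng = np.random.default_rng(1)
n = 10_000
sepsis = rng.random(n) < 0.07
scores = rng.normal(loc=np.where(sepsis, 13.0, 9.0), scale=8.0)

for t in (6, 10, 15):                     # lower threshold -> more alerts
    sens, rate = threshold_summary(scores, sepsis, t)
    print(f"threshold {t:>2}: sensitivity {sens:.0%}, alert rate {rate:.0%}")
```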

In its statement, Epic argued that the purpose of the model is to identify harder-to-recognize patients who otherwise might have been missed. It pointed to previous research that found the model could accurately predict sepsis, and said customers have "complete transparency" into the model.  

According to Epic: "Each health system needs to set thresholds to balance false negatives against false positives for each type of user. When set to reduce false positives, it may miss some patients who will become septic. If set to reduce false negatives, it will catch more septic patients, however it will require extra work from the health system, because it will also catch some patients who are deteriorating, but not becoming septic.  

"In the example given in this paper, if the Epic model was used in real time, it would likely have identified 183 patients who otherwise might have been missed," the statement added.  

THE LARGER TREND  

Health systems have increasingly turned to machine learning and predictive analytics to detect sepsis in patients in an effort to decrease mortality.  

In 2019, researchers from Geisinger and IBM developed a new predictive algorithm to detect sepsis risk, aimed at helping clinicians create more personalized care plans for at-risk patients.  

But the JAMA study underscores that such models bring their own challenges, from alert fatigue to the opposite risk of treating computer-generated assessments as infallible.  

ON THE RECORD  

"Medical professional organizations constructing national guidelines should be cognizant of the broad use of these algorithms and make formal recommendations about their use," wrote researchers.

 

Kat Jercich is senior editor of Healthcare IT News.
Twitter: @kjercich
Email: kjercich@himss.org
Healthcare IT News is a HIMSS Media publication.
