A conceptual framework for improved analyses of 72-hour return cases
Abstract
For more than 25 years, emergency medicine researchers have examined 72-hour return visits as a marker for high-risk patient visits and as a surrogate measure for quality of care. Individual emergency departments frequently use 72-hour returns as a screening tool to identify deficits in care, although comprehensive departmental reviews of this nature may consume considerable resources. We discuss the lack of published data supporting the use of 72-hour return frequency as an overall performance measure and examine why this is not a valid use, describe a conceptual framework for reviewing 72-hour return cases as a screening tool, and call for future studies to test various models for conducting such quality assurance reviews of patients who return to the emergency department within 72 hours.
Background
Since as early as 1987, emergency medicine (EM) researchers have examined the concept of 72-hour returns, defined as visits by patients who return to the emergency department (ED) for a second visit within a 72-hour period [1,2]. Intuitively, the initial “index” patient visits may involve lapses in care that resulted in second visits. Indeed, a recent systematic review of ED performance metrics noted that early return visits are often promoted as a performance measure that may “indirectly reflect a missed diagnosis or inadequate treatment” [3]. The frequency of 72-hour ED returns may also reflect a lack of alternative, non-ED health care options for a certain segment of the patient population. Distinct from the use of the frequency of 72-hour ED returns as a quality metric, many EDs use 72-hour returns data as a screening tool to identify high-risk cases as part of their ongoing quality assurance (QA) processes [4].
The earliest published article studying 72-hour return visits makes no reference to why this time interval was chosen, so it is unclear how a 3-day period became the standard duration for ED return visit tracking [2]. Although a handful of author groups have used 2-day or 7-day time intervals, the large majority have chosen to evaluate 3-day (72-hour) return visits [2,4-11]. Anecdotally, many departments conduct similar 72-hour return reviews; yet no studies exist that record the prevalence of this practice.
* Corresponding author. Department of Emergency Medicine, Icahn School of Medicine at Mount Sinai, One Gustave L Levy Place-Box 1620, New York, NY 10029- 6574. Tel.: +1 917 232 4915; fax: +1 212 659 1660.
E-mail address: brad.shy@gmail.com (B.D. Shy).
Contrasting 2 applications of 72-hour returns: performance measure vs screening tool
Substantial EM research has questioned the use of 72-hour return frequency as an overall performance measure. Most fundamentally, many articles have reported associations between the demographics of the local ED population and propensity for patients to return within 72 hours [8,12]. Other associations may be more obscure: one pediatric ED switched from free-text to templated discharge instructions and reported an immediate 30% increase in return visits, without any measurable decrease in patient safety [13]. Another pediatric study found physician-specific 72-hour return frequency to be the surrogate marker least correlated with clinical performance of the 7 markers studied [14].
Two recent articles conclude that ED return visit frequency, as an overall performance measure, does not correlate with deficits in care. A large retrospective, cross-sectional analysis using data from the National Hospital Ambulatory Medical Care Survey demonstrated that patients returning within 72 hours for a second ED visit were not sicker or more likely to be admitted than other ED patients [15]. A smaller, 3-hospital analysis from 2013 found that 95% of 72-hour return visits resulting in admission had no deficit of care seen on review of the initial visit; this author group suggested that 72-hour return data would not be useful for measuring overall quality of care without a subsequent, manual chart review [5].
Notwithstanding the lack of evidence supporting 72-hour returns as an overall performance measure, there is considerable value in the use of this measure as a screening tool for ongoing QA programs that seek to identify system-wide and provider-level quality issues and correct them. To use 72-hour returns as a screening tool, some degree of chart abstraction of both the initial and the repeat visit must take place.
http://dx.doi.org/10.1016/j.ajem.2014.08.005
B.D. Shy et al. / American Journal of Emergency Medicine 33 (2015) 104–107
Despite the abundance of articles evaluating 72-hour returns to the ED, we found no published conceptual frameworks or best practices for abstracting charts or evaluating these cases. This represents an important gap in the EM literature, as we believe that the review of 72-hour returns provides an opportunity to identify those cases that actually do result from lapses in care. Ideally, 72-hour returns would be used as a screening tool to identify individual cases that may benefit from a programmatic continuous quality improvement intervention. In this article, we describe the potential importance and limitations of reviewing 72-hour returns and outline a novel framework for carrying out this practice. Within this framework, we describe the selection, training, and monitoring of QA reviewers; the attention reviewers should give to returning ED patients who are subsequently admitted; the importance of abnormal vital signs to returning patient cases; the potential benefits of electronic health records (EHRs) to these analyses; and finally the resource allocation of conducting these reviews.
Novel conceptual model for conducting 72-hour return analyses
Selection of reviewers
The ideal reviewer would be clinically experienced in EM, be trained in QA, and have dedicated time to conduct thorough case reviews. The number of reviewers would be great enough to ensure that this work could be conducted in an efficient and equitable manner. Until 3 years ago, our department’s 72-hour return QA was conducted by 3 attending physicians on the departmental QA team who reviewed monthly reports of all returning patients. This design had several inherent problems: the workload was substantial, and thus individual chart review was conducted for only a small sample of cases.
For the last 3 years, we have instead used a bilevel 72-hour return review process. The 60 EM residents at our institution conduct the primary review of 72-hour returns, referring cases of potential concern to the faculty QA group for further investigation. We have found reports of similar programs using solitary residents to conduct all 72-hour return reviews but have not found references to similar comprehensive 72-hour return programs using the entire residency [16]. Based on the second-level review by our QA faculty, a variety of outcomes occur, including referral of the case to hospital risk management, systems-based changes to the ED to prevent similar suboptimal care, e-mail reminders to specific providers about minor lapses in care, and a monthly list of teaching points that is distributed to providers via e-mail and 2 social media applications (Twitter and Facebook).
This resident-driven model for conducting 72-hour return review has several advantages. It allows for review of all 72-hour returns while reducing the burden on the QA faculty and introduces residents to the QA process while giving them experience in chart review. This model also has several important limitations. Like other current 72-hour return models, its ability to identify correctable deficits in care has not been tested. The reviewers’ knowledge and experience are variable, limiting their ability to identify cases containing a deficit in care, and important cases may be missed with this system. Finally, residents with busier schedules or less enthusiasm may neglect thorough evaluation of their cases. This model would clearly not be feasible in nonacademic settings, although in these departments, other ED staff such as attendings, nurses, or physician assistants may fulfill a similar reviewer role.
QA training modules and continuous reviewer sampling validation
Regardless of the type of reviewers used, training modules should be developed when introducing the 72-hour return QA process to new or veteran reviewers. These materials should explain the value of reviewing these patient visits to provide reviewers with a context for devoting time to this task. Common types of 72-hour return cases should be described, including expected returns such as scheduled revisits to evaluate minor traumatic wounds, as well as serious lapses such as
misdiagnoses resulting in admission on the second visit. The training modules should highlight examples of past process improvements made to the ED because of reviews of previous 72-hour return cases. This training should motivate reviewers to perform accurate reviews by explaining the value that these reviews provide for future patient safety and by describing the reviewer validation process. The 72-hour return QA process should be subject to validation testing and ongoing monitoring of the quality of the reviews themselves. Ideally, this could be done by using 2 reviewers for every chart and then calculating interrater reliability, although this would require significant resources. Another approach is to choose a random sample of cases every month to be reviewed by a second reviewer and then to calculate interrater reliability for this sample. Where issues are identified for a particular reviewer, additional training should be used.
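The sampling-based validation step above can be sketched in a few lines of Python. This is an illustrative fragment, not part of any published protocol; the function names and the binary coding of each review (1 = deficit of care identified, 0 = none) are our assumptions:

```python
import random

def cohens_kappa(r1, r2):
    """Cohen's kappa for two reviewers' judgments on the same charts
    (eg, 1 = deficit of care identified, 0 = none)."""
    n = len(r1)
    assert n == len(r2) and n > 0
    po = sum(a == b for a, b in zip(r1, r2)) / n  # observed agreement
    categories = set(r1) | set(r2)
    # chance agreement, from each reviewer's marginal frequencies
    pe = sum((r1.count(c) / n) * (r2.count(c) / n) for c in categories)
    return 1.0 if pe == 1 else (po - pe) / (1 - pe)

def monthly_sample(case_ids, k, seed=None):
    """Random sample of k cases to route to a second reviewer."""
    return random.Random(seed).sample(case_ids, k)
```

A kappa near 1 indicates strong agreement between the primary and second reviewer; persistently low values for a particular reviewer would trigger the additional training described in the text.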
Admitted patients
We suggest that EDs focus on returning patients who are admitted on their second visit, as reviewing these visits is intuitively more likely to yield clinically important adverse outcomes and errors. In reviewing 72-hour returns, 2 author groups have used this strategy by reviewing exclusively those patients admitted on the second visit [5,6]. However, neither of these author groups measured outcomes of the nonadmitted return patients. Although this narrow review strategy seems prudent because important adverse outcomes should logically result in hospital admission, this inference has not been validated. If this strategy is used, it is important to also include returning patients who die in the ED or are transferred to another hospital, as these may also represent high-risk subsets of returning patients.
Abnormal vital signs
Attention to abnormal vital signs can be an important part of the review of patients who return to the ED within 72 hours. Several studies have highlighted the frequency with which abnormal vital signs are associated with 72-hour return cases, although the evidence for this association is weak [17,18]. Indeed, a recent analysis of more than 200,000 ED visits using a large national database suggested that 72-hour return cases were no more likely to include abnormal vital signs than other ED visits [19]. Although abnormal vital signs only occasionally identify high-risk revisit cases that would have been missed by our resident screeners, we have still found it useful to have our reviewers record whether unexpected abnormal vital signs were addressed (via an explanation of the vital signs in the physician notes or documentation that the aberrant vital sign normalized). Of note, many benign diagnoses (eg, pharyngitis) typically present with abnormal vital signs; thus, we instruct our reviewers to focus only on unexpected abnormal vital signs.
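A reviewer aid for flagging unexpected abnormal vital signs might look like the following sketch. The reference ranges are hypothetical adult examples chosen purely for illustration, not clinical guidance, and the ability to exempt vital signs expected for the discharge diagnosis (eg, fever in pharyngitis) mirrors the instruction we give our reviewers:

```python
# Hypothetical adult reference ranges, for illustration only (not clinical guidance).
REFERENCE_RANGES = {
    "heart_rate": (50, 100),   # beats/min
    "systolic_bp": (90, 180),  # mm Hg
    "resp_rate": (10, 20),     # breaths/min
    "temp_c": (35.0, 38.0),    # degrees Celsius
}

def unexpected_abnormal_vitals(vitals, expected_for_diagnosis=()):
    """Return the names of vital signs outside their reference range,
    skipping any that are expected for the discharge diagnosis."""
    flagged = []
    for name, value in vitals.items():
        lo, hi = REFERENCE_RANGES[name]
        if not (lo <= value <= hi) and name not in expected_for_diagnosis:
            flagged.append(name)
    return flagged
```

The reviewer would then check only the flagged values against the chart for an explanation or documented normalization.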
Standardized reviewer template
Departments should create a standardized template for reviewers to record concerning cases of patients returning to the ED. In addition to standard patient and ED visit identifiers, we suggest including data fields for abnormal vital signs, the presence of physician reexamination documentation, and any communication between the chart reviewers and the specific providers for each case. The 72-hour return process could be further standardized by the inclusion of disease-specific QA templates (eg, asthma or gastroenteritis), which may specify a series of appropriate clinical interventions that should have occurred during the initial visit.
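As a sketch of how such a template might be captured in an electronic form, the data fields named in the text can be expressed as a simple record type; the field names and types here are illustrative assumptions, not a published standard:

```python
from dataclasses import dataclass

@dataclass
class ReturnVisitReview:
    # Standard patient and ED visit identifiers
    patient_id: str
    index_visit_id: str
    return_visit_id: str
    # Screening fields suggested in the text
    unexpected_abnormal_vitals: bool = False
    reexamination_documented: bool = False
    provider_communication: str = ""     # summary of reviewer-provider contact, if any
    disease_specific_template: str = ""  # eg, "asthma" or "gastroenteritis"
    referred_to_faculty_qa: bool = False
```

A structured record of this kind also makes it straightforward to aggregate review findings across months or reviewers.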
The benefits of electronic health record systems in 72-hour return reviews
Now that most US hospitals are adopting EHRs as part of the Meaningful Use program funded through the American Recovery and Reinvestment Act, the efficiency of 72-hour return analyses will likely increase [20,21], as EHR queries to produce a report of return visits can be easily automated and distributed. Still, the conceptual framework we describe here can also be used in EDs with handwritten charts. Many earlier studies have investigated ED 72-hour returns in just this manner, retrospectively collecting handwritten data before EHR systems were prevalent [1,2,7,11]. Even in hospitals where ED medical records are paper based, billing systems are almost exclusively computerized and can often be used to quickly identify 72-hour return patients and facilitate the retrieval of paper charts for manual review [21].
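Whether the source is an EHR or a computerized billing extract, the core query is the same: pair each discharge with any later arrival by the same patient within 72 hours. A minimal Python sketch, assuming visit records of (patient_id, arrival, discharge); measuring the window from discharge rather than arrival is a local convention, not a published standard:

```python
from datetime import timedelta
from itertools import groupby

def find_72h_returns(visits, window=timedelta(hours=72)):
    """Return (patient_id, index_visit, return_visit) pairs in which the
    return arrival falls within the window after the index discharge.
    Each visit is a (patient_id, arrival, discharge) tuple."""
    pairs = []
    ordered = sorted(visits, key=lambda v: (v[0], v[1]))
    for pid, group in groupby(ordered, key=lambda v: v[0]):
        group = list(group)
        for index_visit, return_visit in zip(group, group[1:]):
            if return_visit[1] - index_visit[2] <= window:
                pairs.append((pid, index_visit, return_visit))
    return pairs
```

In practice, the equivalent logic would usually run as a scheduled query against the EHR or billing database, with the resulting report distributed to reviewers.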
Even with EHRs, it can take considerable time to open and read the medical record for both the initial and subsequent visits of each case. We found that this process could be streamlined by including the attending’s free-text note among the fields in the EHR report of 72-hour return visits. This EHR note type is a short free-text narrative, documented by the attending physician, which describes the case and medical decision making in sufficient detail, allowing reviewers to quickly evaluate many aspects of the cases. When the second-visit attending note clearly explains why there is no concern for the patient’s initial quality of care, the reviewer can presumptively pass over the case without scrutinizing the chart and focus instead on cases where the notes suggest deficiencies in the initial visit’s care.
Resource utilization
When choosing how to customize a 72-hour return QA program for an individual ED’s needs, it is important to consider the resources required. Though our particular department relies on a combination of resident and attending physicians as reviewers, other EDs may instead use less costly, nonphysician chart reviewers such as nurses or physician assistants. However, because of the complexity in critiquing
the provider actions that led to the initial discharge, attending physician review may yield different results compared with using nonphysician or resident-physician reviewers. Also, some attending physicians may be resistant to review of their work by nonattending evaluators. Regardless of the available resources, there is likely some value in any amount of QA screening of return visits that can be implemented.
Many studies have reported that roughly 3% of discharged ED patients will return within 72 hours [1,7-10]. Even after using the timesaving measures described in this framework, reviewing this fraction of charts may be a considerable burden. For some departments, limited resources will necessitate that reviewers examine only a random sample of ED return visits, although this compromise may overlook important and correctable errors.
We present, in the Figure, a summarized representation of this novel conceptual model for performing 72-hour return analyses.
Discussion
We believe that using a well-run 72-hour return QA program as a screening tool can uncover important, correctable systems-wide and provider-level errors, which can improve the quality of care for future patients. We found little evidence, however, that 72-hour ED return frequency can be used as a valid overall performance measure. It is important not to conflate the merits of these 2 uses of 72-hour return analysis: that 72-hour return monitoring may improve patient safety by identifying quality improvement areas does not mean that the number of 72-hour returns is a reliable surrogate for ED quality. As we have described, the abundance of EM literature on the latter use suggests that the data collected are heterogeneous and may not lend themselves to valid comparisons across sites.
Figure. Proposed bilevel conceptual model for QA screening of 72-hour return visits.
Notably, in a 2011 advisory report, the National Quality Forum, a nongovernmental organization that endorses health care quality metrics for federal agencies such as the Centers for Medicare & Medicaid Services and the Agency for Healthcare Research and Quality, proposed that 72-hour ED return frequency be used as a future quality metric [22]. In this report, the National Quality Forum assigned this metric a score of “medium” or “high” for each of the 6 criteria of an ideal performance measure [22]. As noted earlier in this article, the literature does not support this assessment, and we do not believe that 72-hour return frequency correlates with overall quality of care. Furthermore, we note that increased attention, incentivization, and censure related to return visits may lead to the unintended consequence of ED physicians simply admitting more patients (or, more ominously, discouraging discharged patients from re-presenting to the ED) to improve this metric. A similar phenomenon of unintended consequences from a performance target occurred in 2004 with the implementation of the 4-hour length-of-stay goal in England; this program was ultimately rolled back after concerns that ED providers were gaming the system by focusing on the median instead of mean wait times and neglecting other patient safety duties [23-25].
It is unclear why 72 hours has become the standard metric for tracking ED return visits. Although at first glance this interval would seem to capture many of the patients inappropriately discharged from the ED, we believe that this measure was arrived at arbitrarily. Other author groups have previously studied different metrics, such as 2- and 7-day return intervals [18,26]. One ED that expanded the time of its return visit analysis to 7 days found cases of appendicitis, myocardial infarction, and meningitis that would have been missed with a shorter, 48-hour review period [11], although the evidence presented was qualitative. An earlier study noted that patients returning within 48 hours may be slightly more likely to have had their return visit deemed “avoidable” than those returning within 72 hours (35% vs 32%); however, no formal statistical comparison was made in this case [1]. With the widespread adoption of automated return visit reports through EHRs and a standardized approach to evaluating these cases, an optimal review period may be determined through additional study. Finally, we wish to note that any single-center review of 72-hour returns will not capture patients who return, circumstantially or by patient choice, to an ED different from that of the initial visit. Especially in urban centers, where ED density and patient choice are greater, patients may use more than one hospital ED. Regional health information exchange systems may be useful to more comprehensively track and study 72-hour return data in these settings [27].
Conclusion
We believe that reviewing ED return visits is valuable to QA in EM, although using return visit frequency as a marker for overall quality of care is not evidence based and may create unintended negative consequences. Despite the common practice of 72-hour return QA reviews, there is a lack of data demonstrating the best way to carry out these programs. We hope that this conceptual framework provides a basis for improving QA in EM and highlights 72-hour returns as a vital and underappreciated area for future study.
Jason Shapiro was supported in part by the Agency for Healthcare Research and Quality (Grant No. 5R01HS021261). The content of this article is solely the responsibility of the authors and does not
necessarily represent the official views of the Agency for Healthcare Research and Quality.
References
[1] Keith KD, Bocka JJ, Kobernick MS, Krome RL, Ross MA. Emergency department revisits. Ann Emerg Med 1989;18(9):964-8.
[2] Lerman B, Kobernick MS. Return visits to the emergency department. J Emerg Med
[3] Sorup CM, Jacobsen P, Forberg JL. Evaluation of emergency department performance–a systematic review on recommended performance and quality-in-care measures. Scand J Trauma Resusc Emerg Med 2013;21:62-75.
[4] Graff L, Stevens C, Spaite D, Foody J. Measuring and improving quality in emergency medicine. Acad Emerg Med 2002;9(11):1091-107.
[5] Abualenain J, Frohna WJ, Smith M, Pipkin M, Webb C, Milzman D, et al. The prevalence of quality issues and adverse outcomes among 72-hour return admissions in the emergency department. J Emerg Med 2013;45(2):281-8.
[6] Gordon JA, An LC, Hayward RA, Williams BC. Initial emergency department diagnosis and return visits: risk versus perception. Ann Emerg Med 1998;32:569.
[7] Pierce JM, Kellerman AL, Oster C. “Bounces”: an analysis of short-term return visits to a public hospital emergency department. Ann Emerg Med 1990;19(7):752-7.
[8] Gabayan GZ, Asch SM, Hsia RY, Zingmond D, Liang LJ, Han W, et al. Factors associated with short-term bounce-back admissions after emergency department discharge. Ann Emerg Med 2013;62(2):136-144.e1.
[9] Cho CS, Shapiro DJ, Cabana MD, Maselli JH, Hersh AL. A national depiction of children with return visits to the emergency department within 72 hours, 2001-2007. Pediatr Emerg Care 2012;28(7):606-10.
[10] Alessandrini EA, Lavelle JM, Grenfell SM, Jacobstein CR, Shaw KN. Return visits to a pediatric emergency department. Pediatr Emerg Care 2004;20(3):166-71.
[11] Mariani PJ. Auditing emergency department return visits. Ann Emerg Med 1990;19(8):952.
[12] Lee EK, Yuan F, Hirsh DA, Mallory MD, Simon HK. A clinical decision tool for predicting patient care characteristics: patients returning within 72 hours in the emergency department. AMIA Annu Symp Proc 2012;2012:495-504.
[13] Lawrence LM, Jenkins CA, Zhou C, Givens TG. The effect of diagnosis-specific computerized discharge instructions on 72-hour return visits to the pediatric emergency department. Pediatr Emerg Care 2009;25(11):733-8.
[14] Mittal MK, Zorc JJ, Garcia-Espana JF, Shaw KN. An assessment of clinical performance measures for pediatric emergency physicians. Am J Med Qual 2013;28(1):33-9.
[15] Pham JC, Kirsch TD, Hill PM, DeRuggerio K, Hoffmann B. The prevalence of quality issues and adverse outcomes among 72-hour return admissions in the emergency department. J Emerg Med 2013;45(2):281-8.
[16] Platt MA, Coleman R. Integration of the 72-hour return into resident education [abstract]. Ann Emerg Med 2012;60(5):S172.
[17] Depiero AD, Ochsenschlager DW, Chamberlain JM. Analysis of pediatric hospitalizations after emergency department release as quality improvement tool. Ann Emerg Med 2002;39(2):159-63.
[18] Sklar DP, Crandall CS, Loeliger E, Edmunds K, Paul I, Helitzer DL. Unanticipated death after discharge home from the emergency department. Ann Emerg Med 2007;49:735-45.
[19] Pham JC, Kirsch TD, Hill PM, DeRuggerio K, Hoffmann B. Seventy-two-hour returns may not be a good indicator of safety in the emergency department: a national study. Acad Emerg Med 2011;18(4):390-7.
[20] Centers for Medicare & Medicaid Services. EHR incentive programs–meaningful use. Retrieved January 14, 2014 from https://www.cms.gov/Regulations-and-Guidance/Legislation/EHRIncentivePrograms/Meaningful_Use.html.
[21] Jha AK, DesRoches CM, Campbell EG, Donelan K, Rao SR, Ferris TG, et al. Use of electronic health records in U.S. hospitals. N Engl J Med 2009;360(16):1628-38.
[22] National Quality Forum. Identification of potential 2013 e-quality measures. Published February 2011; retrieved March 10, 2014 from http://www.qualityforum.org/Publications/2011/02/Meaningful_Use_Final_Report.aspx/.
[23] Weber EJ, Mason S, Carter A, Hew RL. Emptying the corridors of shame: organizational lessons from England’s four-hour emergency throughput target. Ann Emerg Med 2011;57:79-88.
[24] Gunal MM, Pidd M. Understanding target-driven action in emergency department performance using simulation. Emerg Med J 2009;26(10):724-7.
[25] Mason S, Weber EJ, Coster J, Freeman J, Locker T. Time patients spend in the emergency department: England’s 4-hour rule–a case of hitting the target but missing the point? Ann Emerg Med 2012;59(5):341-9.
[26] Welch SJ, Asplin BR, Stone-Griffith S, Davidson SJ, Augustine J, Schuur J, Emergency Department Benchmarking Alliance. Emergency department operational metrics, measures and definitions: results of the second performance measures and benchmarking summit. Ann Emerg Med 2011;58(1):33-40.
[27] Shapiro JS, Onyile A, Patel V, Strayer RJ, Kuperman G. Enabling 72-hour ED returns measurement with regional data from a health information exchange. Ann Emerg Med 2011;58(4 Suppl):S295.