
Improving emergency physician performance using audit and feedback: a systematic review

Abstract

Background: Audit and feedback can decrease variation and improve the quality of care in a variety of health care settings. There is a growing literature on audit and feedback in the emergency department (ED) setting. Because most studies have been small and not focused on a single clinical process, systematic assessment could determine the effectiveness of audit and feedback interventions in the ED and which specific characteristics improve the quality of emergency care.

Objective: The objective of the study is to assess the effect of audit and feedback on emergency physician performance and identify features critical to success.

Methods: We adhered to the PRISMA statement to conduct a systematic review of the literature from January 1994 to January 2014 related to audit and feedback of physicians in the ED. We searched Medline, EMBASE, PsycINFO, and PubMed databases. We included studies that were conducted in the ED and reported quantitative outcomes with interventions using both audit and feedback. For included studies, 2 reviewers independently assessed methodological quality using the validated Downs and Black checklist for nonrandomized studies. Treatment effect and heterogeneity were to be reported via meta-analysis and the I² inconsistency index.

Results: The search yielded 4332 articles, all of which underwent title review; 780 abstracts and 131 full-text articles were reviewed. Of these, 24 studies met inclusion criteria with an average Downs and Black score of 15.6 of 30 (range, 6-22). Improved performance was reported in 23 of the 24 studies. Six studies reported sufficient outcome data to conduct summary analysis. Pooled data from studies that included 41 124 patients yielded an average treatment effect among physicians of 36% (SD, 16%) with high heterogeneity (I² = 83%).

Conclusion: The literature on audit and feedback in the ED reports positive results for interventions across numerous clinical conditions but without standardized reporting sufficient for meta-analysis. Characteristics of audit and feedback interventions used in a majority of studies were feedback that targeted errors of omission and that was explicit, with measurable instruction and a plan for change, delivered in the clinical setting more than 1 week after the audited performance, using a combination of media and types at both the individual and group levels. Future work should use standardized reporting to identify the specific aspects of audit or feedback that drive effectiveness in the ED.


  1. Introduction

Support: Dr Venkatesh is supported by the Emergency Medicine Foundation Health Policy Scholar Award. He also works under contract with the Centers for Medicare and Medicaid Services in the development of hospital outcome and efficiency measures. Dr Melnick is supported, in part, by grant number K08HS021271 from the Agency for Healthcare Research and Quality. The content is solely the responsibility of the authors and does not necessarily represent the official views of the National Institutes of Health or the Agency for Healthcare Research and Quality.

Meetings: SAEM New England Regional Meeting, April 2015; SAEM Annual Meeting, May 2015.

* Corresponding author at: Department of Emergency Medicine, Yale School of Medicine, 464 Congress Ave, Suite 260, New Haven, CT 06519.

Practice variation between health care providers can substantially impact the quality, efficiency, effectiveness, and safety of health care services [1,2]. Approaches to improve quality of care and reduce variation in the emergency department (ED) have included guideline implementation, pay-for-performance, computerized clinical decision support, and clinical decision rules [3-5]. Audit and feedback interventions have received increasing attention as a strategy to reduce variation and improve the quality of care [6]. A recent Cochrane review of audit and feedback interventions in diverse practice settings across multiple

http://dx.doi.org/10.1016/j.ajem.2015.07.039


Table 1

MEDLINE search strategy

Intervention subject terms and key words
1. (audit? adj3 feedback).tw.
2. clinical audit/
3. exp management audit/
4. exp "commission on professional and hospital activities"/
5. feedback/
6. feedback, psychological/
7. exp "utilization review"/
8. peer review, health care/
9. (audit or audits or auditing).tw.
10. feedback.tw.
11. (review adj3 record?).tw.
12. (practice data or hospital* data).tw.
13. benchmark.tw.
14. exp professional practice/
15. physician's practice patterns/
16. (practice pattern? or pattern of practice).tw.
17. (performance adj2 (health* personnel or health care personnel or physician? or doctor? or clinician? or provider? or practioner? or resident? or professional? or clinical)).tw.
18. ((influence* or chang*) adj3 (behaviour* or behavior*)).tw.
19. 14 or 15 or 16 or 17 or 18

Outcomes of the intervention subject terms and key words
20. quality assurance, health care/
21. "quality of health care"/
22. (quality adj (assurance or improvement or control)).tw.
23. (health care quality or healthcare quality or quality of healthcare or quality of health care or quality of care).tw.
24. exp "outcome and process assessment (health care)"/
25. exp patient admission/
26. 20 or 21 or 22 or 23 or 24 or 25

Setting subject terms and key words
27. emergency medical services/
28. emergency service, hospital/ or trauma centers/
29. (emergency adj2 (care or healthcare or department? or unit or units or room? or treatment?)).ti,ab.
30. (emergency adj service?).ti,ab.
31. (urgent adj2 (care or healthcare or health care)).ti,ab.
32. emergency medicine/
33. 27 or 28 or 29 or 30 or 31 or 32

Personnel involved in intervention subject terms and key words
34. physicians/
35. (physician$ or doctor$ or specialist$).tw.
36. "internship and residency"/
37. exp health personnel/
38. 34 or 35 or 36 or 37

Articles or study types
39. (clinical adj2 polic?).tw.
40. (clinical adj2 guideline?).tw.
41. (practice adj2 guideline?).tw.
42. exp Practice Guideline/
43. 1 or 2 or 3 or 4 or 5 or 6 or 7 or 8 or 9 or 10 or 11 or 12 or 13 or 39 or 40 or 41 or 42
44. 19 or 26 or 38
45. 33 and 43 and 44
(See Appendix A for full search strategy including EMBASE, PsycINFO, and PubMed search strategies.)
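The staged logic of the strategy in Table 1 (terms joined with OR within each concept block, and the concept blocks then joined with AND) can be sketched as follows; the abbreviated term lists and function names here are illustrative, not part of the published strategy:

```python
def combine_or(terms):
    """Join one concept block's terms with OR, mirroring the combination
    lines within a block (e.g., MEDLINE lines 19, 26, 33, and 38)."""
    return "(" + " OR ".join(terms) + ")"

def combine_and(blocks):
    """AND the OR-ed concept blocks together, mirroring the final
    combination line of the staged strategy."""
    return " AND ".join(combine_or(block) for block in blocks)

# Abbreviated stand-ins for the intervention, setting, and outcome concepts.
intervention = ["audit", "feedback", "benchmark"]
setting = ["emergency department", "emergency medicine"]
outcomes = ["quality improvement", "practice pattern"]

query = combine_and([intervention, setting, outcomes])
print(query)
```

Each concept block is deliberately broad (high recall); the final AND step is what narrows the result to articles at the intersection of all concepts.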

specialties concluded that utilization of audit and feedback can improve clinician performance if baseline performance was low, the source of feedback was a supervisor or colleague, and audit and feedback was provided more than once, delivered in both verbal and written formats, and included both explicit targets and an action plan [7].

Currently, the specific features of audit and feedback interventions that can improve emergency care are not well understood [2]. The Cochrane review focused only on randomized controlled trials and did not examine the effectiveness of audit and feedback specific to the ED setting [7]. There is a rapidly growing literature on audit and feedback in the ED setting that has not been systematically reviewed in 6 years [8].

The ED environment is unique, highly complex, and particularly vulnerable to variation in quality of care and patient safety [9,10]. Furthermore, distinctive features of the ED environment such as high-volume throughput, unpredictable patient acuity, frequent handoffs, overcrowding, noise, and interruptions may present barriers to implementation of audit and feedback, necessitating specifically tailored solutions [10].

The aim of this systematic review was to assess the effect of audit and feedback on emergency physician performance and to identify features of audit and feedback implementations associated with improvements in clinical performance. We hypothesized that these features and their effectiveness in improving emergency physician practice would be sensitive to the ED environment.

2. Methods

2.1. Study design

We conducted a systematic review of the medical literature from January 1994 to January 2014 related to audit and feedback in the ED. The study protocol was prospectively registered in the PROSPERO international prospective register of systematic reviews [11]. This protocol guided the review in adherence with the Preferred Reporting Items for Systematic Reviews and Meta-Analyses statement [12].

2.2. Search strategy

Our main objective was to assess the effect of audit and feedback on emergency physician performance and to identify features associated with improvements in clinical performance. In collaboration with our institution's librarians (DH and LS), we developed a comprehensive search strategy using medical subject headings and text words for the general concepts of emergency medicine and feedback (Table 1 for the MEDLINE search strategy and Appendix A for the full search strategy). We used this strategy to search MEDLINE (OvidSP 1946 to April Week 3 2014), EMBASE (OvidSP 1974 to 2014 April 21), PsycINFO (OvidSP 1967 to April Week 4 2014), and PubMed (unindexed references). All searches were performed by a professional librarian and conducted on April 21, 2014, except for PubMed, which was conducted on December 11, 2013.

2.3. Study selection

Studies were only included if they met the following eligibility/inclusion criteria: (1) the study was conducted in the ED or was conducted in multiple clinical environments with distinct ED outcomes reported, (2) the intervention used both audit and feedback, and (3) the study reported quantitative outcome measures of the impact of the intervention. Study eligibility was confined to articles published in English and to published studies only (abstracts or full text). We also reviewed the reference lists of selected articles to identify additional studies for inclusion. The results of these searches were entered into EndNote citation management software (version 7.0; Thomson Reuters, New York, NY), and duplicates were removed.

2.4. Data collection and validity assessment

Fig. 1 illustrates the identification, screening, and inclusion processes. Studies were screened for inclusion criteria at the title, abstract, and full-text levels. Title and abstract review were individually performed.

Fig. 1. Flow of information through the different phases of the systematic review: 4332 articles identified through a search of Medline, EMBASE, PsycINFO, and PubMed performed by a professional librarian; 780 abstracts screened for eligibility; 131 full-text articles assessed for eligibility; 24 full-text articles included in qualitative synthesis.

The full-text review was performed by 2 independent reviewers who independently screened for inclusion criteria, assessed methodological quality using the validated Downs and Black checklist for nonrandomized studies (Appendix B), and extracted data elements to describe included articles and perform summary assessment [13]. Using a standardized data collection form, 2 authors independently extracted data on performance (baseline [low 25%, moderate 50%, high 75%], goal, and treatment effect), intervention (type, medium, and instruction), feedback (format, setting, source, and timing), and the treatment effect and its SD. If they were reported, the number of patients and physicians in the control and intervention groups was also collected. The subsequent Downs and Black checklist score and data were reviewed and reconciled for agreement by a third independent, blinded reviewer. Any discrepancy arising at any stage of the study selection process was resolved via consensus among the authors at both levels of screening. Screening reviews and full-text reviews were not blinded to authorship, journal, or year.

2.5. Data processing and analysis

Treatment effect was calculated as a percentage change in the targeted process before and after the intervention using aggregate group data. When time was the primary outcome, the treatment effect was reported as a percentage change calculated as 1 - (postintervention time/preintervention time). When more than 1 outcome was reported postintervention (n = 3 studies), the treatment effect was reported across outcomes. Statistical heterogeneity was assessed using the I² inconsistency index, which describes the percentage of variability in the effect that is due to underlying differences between the studies rather than chance, with values greater than 75% indicating substantial heterogeneity [14]. After formally reviewing the included articles and the selective information available on outcomes, sample sizes, and SEs, it was determined that the included studies were too dissimilar and heterogeneous to perform the originally intended meta-analysis. Summary statistics are reported for the 6 studies with clinically similar results.
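The two calculations described above, the time-based treatment effect and the I² inconsistency index, can be sketched as follows. This is a minimal illustration assuming per-study effect estimates and standard errors are available; the function names are ours, not the authors', and the I² here is derived from a standard fixed-effect Cochran's Q:

```python
def time_treatment_effect(pre_time: float, post_time: float) -> float:
    """Percentage change for a time-based outcome: 1 - (post/pre)."""
    return 1.0 - post_time / pre_time

def i_squared(effects, standard_errors) -> float:
    """I^2 = max(0, (Q - df) / Q) * 100, where Q is Cochran's Q
    computed around the inverse-variance (fixed-effect) pooled estimate."""
    weights = [1.0 / se ** 2 for se in standard_errors]
    pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
    q = sum(w * (e - pooled) ** 2 for w, e in zip(weights, effects))
    df = len(effects) - 1
    if q <= 0:
        return 0.0
    return max(0.0, (q - df) / q) * 100.0

# A door-to-treatment time falling from 60 to 40 minutes is a 33% improvement.
print(round(time_treatment_effect(60, 40), 2))  # 0.33
```

Under this formulation, identical study effects give I² = 0%, while widely dispersed, precisely estimated effects push I² toward 100%, which is why the 83% reported below signals substantial heterogeneity.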

3. Results

The combined searches yielded 4332 articles, including 324 duplicate articles, all of which underwent title review. Of these, 780 abstracts and 131 full texts were reviewed, with 24 studies meeting inclusion criteria. The evidentiary table (Table 2) outlines the characteristics of the included studies as well as the clinical behaviors and practice variations targeted by the interventions: most commonly time to treatment for acute myocardial infarction and pneumonia, guideline compliance (particularly in sepsis), and rates of a range of behaviors such as testing, return visits, documentation errors, handoff errors, patient complaints, adverse events, and discharge [15-38]. All of the studies but 1 used a preintervention/postintervention study design. Metlay et al [22] performed a multicenter cluster-randomized trial. Ten of the 24 articles reported multicenter studies [15,20-22,25,26,30,31,36,38].

There was substantial heterogeneity among the included studies, with an I² index of 83%. The included studies had an average Downs and Black score of 15.6 of 30 (range, 6-22). None of the included studies met the checklist criteria of (1) blinding those measuring the main outcomes of the intervention or (2) concealing the intervention assignment from both the subjects and the investigators. Only 1 study made an attempt to blind the study subjects to the intervention they received [18]. Similarly, only 1 study randomized study subjects to intervention groups [22]. Furthermore, Metlay et al [22] was the only study to account for loss to follow-up. The only checklist criterion met by all the studies was that the staff, places, and facilities were representative of the treatment the majority of subjects received.

Distribution of included articles published per year is provided in Fig. 2. Most of the studies were conducted in the United States; 4 were conducted in Australia, 2 in China, 1 in Canada, 1 in Denmark, 1 in Switzerland, and 1 in the United Kingdom.

3.1. Intervention characteristics

Three types of interventions were used: reminders, educational outreach, and organizational interventions. Six of the 24 studies used reminders exclusively as the intervention [17,27,32,34,37,38]. Four of the 24 studies used the educational outreach approach [15,18,19,22]. Only 2 of the 24 studies used an organizational intervention [26,28]. Twelve of the 24 studies used a combination of the different types of interventions [16,20,21,23-25,29-31,33,35,36].

Table 2
Evidentiary table

Author year [Ref]: targeted behavior | D&B score | In summary analysis | Baseline | Goal | Treatment effect (%) | Type | Medium | Measurable instructions with plan | Group or individual | Clinical setting | Source | Timing (d)
Krall 1995 [15]: Triage to AMI treatment | 16 | Yes | Moderate (50%) | Yes | 44.4 | Educational | Combo | Yes | Both | Yes | External review | >7
Guidry 1998 [16]: Door-to-lytics time in AMI | 14 | - | Moderate (50%) | Yes | 21.7 | Multiple | Combo | Yes | Group | - | Supervisor | >7
Ramoska 1998 [17]: Lab testing rate | 6 | - | UTD | Yes | 17.8 | Reminders | Written | No | Group | - | Peer/colleague | >7
Hadjianastassiou 2001 [18]: Adequacy of documentation | 16 | - | Moderate (50%) | Yes | UTD | Educational | Combo | Yes | Both | UTD | Supervisor | >7
Chern 2005 [19]: Return visit/adverse event rates | 17 | - | Low (25%) | Yes | 2.6 | Educational | Verbal | Yes | 1-on-1 | Yes | Investigators | 1-7
Katz 2006 [20]: ACS decision-making | 18 | - | Moderate (50%) | Mixed | 2 | Multiple | Combo | Yes | Group | - | Investigators | >7
Doherty 2007 [21]: Asthma guideline compliance | 17 | - | Low (25%) | Yes | 41.1 | Multiple | Combo | Yes | Both | Yes | Investigators | >7
Metlay 2007 [22]: URI antibiotic use rate | 22 | - | Moderate (50%) | Yes | 10 | Educational | Combo | Yes | Both | Yes | Investigators | >7
Nguyen 2007 [23]: Bundle compliance | 17 | - | Low (25%) | Yes | 51.2 | Multiple | Combo | Yes | Both | - | UTD | >7
Buising 2008 [24]: CAP guideline compliance | 18 | - | Moderate (50%) | Yes | 6.8 | Multiple | Electronic | Yes | Both | Yes | UTD | <1
Bessen 2009 [25]: Ottawa Ankle Rule use | 16 | - | Moderate (50%) | Yes | O1: 37.2; O2: 29.2 | Multiple | Combo | Yes | Both | Yes | UTD | >7
Burstin 2009 [26]: Departmental guideline compliance | 17 | Yes | Moderate (50%) | Yes | 4.5 | Organizational | Combo | Yes | Group | - | Supervisor | >7
Weiner 2009 [27]: PNA time to antibiotics | 18 | - | High (75%) | Yes | 8.6 | Reminders | Electronic | Measurable without plan | Group | - | UTD | 1-7
Rudiger-Sturchler 2010 [28]: Handoff error and time | 18 | Yes | Moderate (50%) | Yes | 26 | Organizational | Verbal | Yes | Both | Yes | Supervisor | <1
Hill 2011 [29]: PNA time to antibiotics | 15 | - | High (75%) | Yes | 18 | Multiple | Combo | Yes | Both | UTD | UTD | 1-7
McIntosh 2011 [30]: CAP guideline compliance | 18 | Yes | Low (25%) | Yes | O1: 10; O2: 19 | Multiple | Combo | Yes | Both | - | Supervisor | >7
Welch 2011 [31]: Door-to-doc time | 9 | - | Moderate (50%) | Yes | O1: 39.2; O2: 42.5 | Multiple | Combo | Yes | Both | Yes | UTD | <1
Capraro 2012 [32]: Documentation error rate | 17 | Yes | Low (25%) | Yes | 37.3 | Reminders | Electronic | Yes | Both | - | Supervisor | >7
Plambech 2012 [33]: Sepsis guideline compliance | 11 | - | Moderate (50%) | Yes | 28 | Multiple | Combo | Yes | Group | Yes | Peer/colleague | >7
Ratnapalan 2012 [34]: Documentation error rate | 15 | - | Low (25%) | Yes | 6.5 | Reminders | Combo | Measurable without plan | Both | Yes | Supervisor | >7
Volpe 2012 [35]: Peds fever time to antibiotics | 18 | Yes | Moderate (50%) | Yes | 50.5 | Multiple | Combo | Yes | Both | Yes | Peer/colleague | >7
Pichert 2013 [36]: Patient complaint rate | 12 | - | UTD | Yes | 50 | Multiple | Combo | Yes | Both | - | Peer/colleague | >7
Wu 2013 [37]: Discharge rate | 17 | - | Moderate (50%) | Yes | 5.3 | Reminders | Electronic | Measurable without plan | Group | Yes | Peer/colleague | >7
Nguyen 2014 [38]: Documentation of stool heme | 12 | - | Moderate (50%) | Yes | 53 | Reminders | Electronic | Yes | 1-on-1 | - | Investigators | 1-7

Abbreviations: AMI, acute myocardial infarction; ACS, acute coronary syndrome; URI, upper respiratory tract infection; CAP, community-acquired pneumonia; PNA, pneumonia; O1, outcome 1; O2, outcome 2; UTD, unable to determine; Combo, combination; D&B, Downs and Black.

The medium of the intervention was identified in each of the 24 studies and categorized as (1) written, (2) verbal, (3) electronic, or (4) a combination of media. In 2 of the 24 studies, verbal feedback was provided [19,28]. In 5 of the 24 studies, electronic feedback was provided [24,27,32,37,38]. The remaining 16 studies used a combination of verbal, written, and electronic feedback [15,16,18,20-23,25,26,29-31,33-36,38]. Twenty of 24 provided feedback with explicit, measurable instructions and with a plan for change [15,16,18-26,28-33,35,36,38]. In 3 of the 24 studies, feedback instructions were explicit and measurable but without a plan [27,34,37].

3.2. Feedback characteristics

In the 24 studies, feedback was given one-on-one, as a group, or in both manners. Only 2 studies used one-on-one feedback alone [19,38]. Seven of the 24 studies used the group method to provide feedback [16,17,20,26,27,33,37]. Fifteen of the 24 studies used both the one-on-one and group methods to provide feedback [15,18,21-25,28-32,34-36].

In 7 studies, feedback was provided by a supervisor [16,18,26,28,30,32,34]. In 5 studies, feedback was provided by a peer or colleague [17,33,35-37]. Six studies did not explicitly identify a source of feedback [23-25,27,29,31]. In 5 studies, feedback was provided by the investigators [19-22,38]. One study described feedback given by a professional standards review organization [15].

The timing of feedback in relation to when the original audit was performed differed among the studies. Seventeen of 24 studies had an interval greater than 1 week [15-18,20-23,25,26,30,32-37]. In 4 of the 24 studies, the interval between audit and feedback ranged from 1 day to 1 week [19,27,29,38]. Three studies described an interval less than 24 hours between audit and feedback [24,28,31].

In 12 of the studies, feedback was given in the clinical setting [15,19,21,22,24,25,28,31,33-35,37]. Two studies did not specify the setting for feedback [18,29]. The remaining 10 studies described a nonclinical setting for feedback [16,17,20,21,23-25,29,37,38].

3.3. Performance characteristics

Fifteen studies encouraged health care providers to increase or improve upon their baseline behaviors. Eight studies required providers to decrease current behaviors [15,17,22,28,31,32,34,36]. Only 1 study, Katz et al [20], required the increase of particular physician behaviors and the decrease of others.

All studies but one measured treatment effect based on provider performance (as opposed to patient-related outcomes) [19]. Six studies reported sufficient outcome data to conduct summary analysis [15,26,28,30,32,35]. Pooled data from these 6 studies included 41 124 patients and yielded an average treatment effect among physicians of 36% (SD, 16%). Fourteen of the 24 studies reported a moderate baseline performance [15,16,18,20,22,24-26,28,31,33,35,37,38]. Six studies reported a low baseline performance [19,21,23,30,32,34]. In addition, 2 reported high baseline performance [27,29]. Baseline performance was undetermined for 2 studies [17,36]. Notably, 23 of the 24 studies resulted in improvement of the measured outcomes [15-19,21-38].

4. Discussion

This systematic review sought to evaluate the effectiveness and characteristics of audit and feedback interventions used to improve quality, safety, efficiency, and overall performance in emergency care. Drawing upon the framework of Ivers et al [7] but focusing on the specialty of emergency medicine, this review identified specific forms of audit and feedback currently in use in emergency care that were broadly effective across numerous clinical conditions despite use of different forms of intervention. Characteristics of audit and feedback interventions that were used in a majority of studies were (1) feedback that is explicit with measurable instruction and a plan for change (in 20 of 24 studies), (2) timing of feedback greater than 1 week after audited performance (in 17 of 24 studies), (3) combined use of feedback media (eg, some combination of verbal, electronic, or written; in 16 of 24 studies), (4) an intervention intended to increase or improve current behavior (ie, targeting errors of omission as opposed to errors of commission; in 15 of 24 studies), (5) feedback given at both the individual and group levels (in 15 of 24 studies), (6) feedback given in the clinical setting (in 15 of 24 studies), and (7) a combination of feedback types (eg, some combination of reminders, education, or organizational-level intervention; in 15 of 24 studies). Even when a majority of studies used particular features, treatment effects were highly variable and overlapping across groups, limiting the ability to differentiate which features of audit and feedback are most effective in the ED environment.

Ivers et al [7] found audit and feedback to be most effective when baseline performance was low, the source was a supervisor or colleague, and the intervention was provided more than once, delivered in both verbal and written formats, and included explicit targets and action plans. Distinctive features of audit and feedback that are critical to success in the ED may differ from those in the clinical settings reported by Ivers et al, whether because of the different clinical environment, insufficient research in this area, or a lack of integration between operational quality improvement and academic research efforts.

This review also suggested that audit and feedback was most effective when baseline performance was low or moderate and when feedback was delivered in verbal and written formats and included explicit targets and action plans. Of note, the large treatment effects (17.8%-50.5%) identified in this review when colleagues were used as a source of feedback suggest that, in the ED, feedback from a peer or colleague may be better than supervisor feedback. It is difficult to conclude whether these differences are specialty specific, a product of limited evidence, or an observation limited by our inability to conduct meta-analysis. Future research on audit and feedback in the ED could study this effect by seeking to identify the specific aspects of audit or feedback that drive effectiveness.

Fig. 2. Distribution of included articles published per year, 1994-2013. Asterisk (*) notes a 13-month year through January 2014.

Given the growing literature on audit and feedback in the ED, the broad effectiveness of the studies included in this review, and the relative paucity of randomized studies on the topic, the included studies' methodological rigor was assessed systematically so that the findings of nonrandomized studies could be interpreted reliably. The Downs and Black scores reported here (as well as the fact that only 1 study was randomized and the remainder used historical controls without clear evidence of controlling for confounding) are suggestive of low methodological quality for audit and feedback studies in emergency medicine. In particular, lack of blinding is concerning for performance and detection bias; lack of randomization of study subjects to intervention groups is concerning for selection bias. However, because all the studies attempted to be generalizable, there may be some external validity to their findings.

Appendix A. Full search strategy

Ovid MEDLINE(R) 1946 to February Week 1 2014 (finished 11/26/13)

# | Search | Results
1 | (audit? adj3 feedback).tw. | 506
2 | clinical audit/ | 721
3 | medical audit/ | 14512
4 | exp management audit/ | 12152
5 | exp "commission on professional and hospital activities"/ | 222
6 | feedback/ | 25995
7 | feedback, psychological/ | 2133
8 | exp "utilization review"/ | 10114
9 | peer review, health care/ | 1274
10 | (audit or audits or auditing).tw. | 23526
11 | feedback.tw. | 71992
12 | (review adj3 record?).tw. | 9411
13 | chart review.tw. | 18916
14 | (practice data or hospital* data).tw. | 3012
15 | benchmark.tw. | 8079
16 | exp evidence-based medicine/ | 52292
17 | exp professional practice/ | 214666
18 | physician's practice patterns/ | 39907
19 | (practice pattern? or pattern of practice).tw. | 4319
20 | (performance adj2 (health* personnel or health care personnel or physician? or doctor? or clinician? or provider? or practioner? or resident? or professional? or clinical)).tw. | 8723
21 | ((influence* or chang*) adj3 (behaviour* or behavior*)).tw. | 43322
22 | 17 or 18 or 19 or 20 or 21 | 302263
23 | quality assurance, health care/ | 48130
24 | "quality of health care"/ | 55779
25 | (quality adj (assurance or improvement or control)).tw. | 53364
26 | (health care quality or healthcare quality or quality of healthcare or quality of health care or quality of care).tw. | 3235
27 | exp "outcome and process assessment (health care)"/ | 68942
28 | "Length of Stay"/ | 56493
29 | Treatment outcome/ | 602036
30 | patient discharge/ | 18307
31 | exp patient admission/ | 18111
32 | 23 or 24 or 25 or 26 or 27 or 28 or 29 or 30 or 31 | 894212
33 | emergency medical services/ | 31114
34 | emergency service, hospital/ or trauma centers/ | 47896
35 | (emergency adj2 (care or healthcare or department? or unit or units or room? or treatment?)).ti,ab. | 63924
36 | (emergency adj service?).ti,ab. | 3763
37 | (urgent adj2 (care or healthcare or health care)).ti,ab. | 1108
38 | emergency medicine/ | 9650
39 | 33 or 34 or 35 or 36 or 37 or 38 | 118185
40 | physicians/ | 59587
41 | (physician$ or doctor$ or specialist$).tw. | 365969
42 | "internship and residency"/ | 34103
43 | exp health personnel/ | 362270
44 | 40 or 41 or 42 or 43 | 679264
45 | (clinical adj2 polic?).tw. | 787
46 | (clinical adj2 guideline?).tw. | 14005
47 | (practice adj2 guideline?).tw. | 13510
48 | exp Practice Guideline/ | 18527
49 | 1 or 2 or 3 or 4 or 5 or 6 or 7 or 8 or 9 or 10 or 11 or 12 or 13 or 14 or 15 or 16 or 45 or 46 or 47 or 48 | 253191
50 | 22 or 32 or 44 | 1704295
51 | 39 and 49 and 50 | 4319
52 | limit 51 to (english language and yr = "1994-Current") | 3814

(Search type: Advanced for all rows.)

Audit and feedback is increasingly being used by organizational and ED leaders to effect change in physician behavior in the ED. Differences in the physical environment, practice setting, patient population, and treatment goals between the ED and other clinical settings may contribute to differences in the utility of specific interventions. To differentiate which features of audit and feedback are most effective in the ED environment, future research should report both positive and negative findings, follow the highest standards of methodological rigor (eg, blinding and randomizing subjects), and compare the effectiveness of promising strategies (eg, combination vs single types, media, or timing of feedback).

4.1. Limitations

This systematic review enabled synthesis of many nonrandomized studies previously not included or assessed by systematic review. However, inclusion of this broad literature across a variety of clinical conditions likely precluded the ability to conduct meta-analysis. Research in quality and safety, performance improvement, and audit and feedback is often qualitative and descriptive. In fact, only 6 of the 24 included studies reported clinically similar data allowing for comparison. The remaining 18 yielded heterogeneous data too dissimilar to incorporate into the previously planned meta-analysis. Thus, this review focused on the identification and synthesis of specific characteristics. Furthermore, quality improvement interventions like audit and feedback are targeted toward physician behavior to improve patient outcomes. However, studies rarely reported the number of physicians being studied or any patient-specific outcomes, making the assessment of the effectiveness of these audit and feedback interventions on meaningful outcomes challenging.

The fact that 23 of the 24 studies resulted in improvement of measured outcomes suggests a strong positive-results publication bias in the literature on ED audit and feedback. As a result of this publication bias, it is unclear which audit and feedback characteristics are unsuccessful or may even negatively impact care. This bias also limits interpretation of which characteristics of the interventions in the available literature are actually contributing to the improvement in outcomes. In short, the literature on audit and feedback in the ED has positive-results publication bias and low methodological quality with substantial heterogeneity, making it difficult to draw conclusions about the efficacy of specific intervention characteristics.

5. Conclusion

This systematic review describes numerous publications demonstrating the effectiveness of a broad range of audit and feedback interventions in the ED setting. Interventions appear to be most efficacious for improving time to treatment, guideline compliance, and rates of behaviors such as testing, errors, and discharge when feedback targets errors of omission and is explicit, with measurable instruction and a plan for change, delivered in the clinical setting greater than 1 week after the audited performance, using a combination of media and types at both the individual and group levels. More research in this area, with standardized reporting methods and comparative effectiveness evaluation of different intervention characteristics, is necessary to optimize the future use of audit and feedback to improve emergency physician performance and patient outcomes.

Appendix A. Full search strategy

PubMed (finished 12/12/13)

(((audit* AND (feedback OR clinical OR medical OR management)) OR feedback OR audit* OR Benchmark* OR (assess*) OR (chart AND review))) AND ((outcome* OR (quality AND improvement) OR (practice AND pattern*) OR (influence* OR (change* OR changing))) AND (behavior* OR behaviour* OR competence)) AND (emergency AND (care OR medicine OR healthcare OR department* OR service* OR unit OR units OR room* OR treatment*)) AND ((publisher[sb] NOT pubstatusnihms NOT pubstatuspmcsd NOT pmcbook) OR inprocess[sb] OR pubmednotmedline[sb] OR oldmedline[sb] OR ((pubstatusnihms OR pubstatuspmcsd) AND publisher[sb]))

Embase 11-22

      1. (audit* adj3 feedback).tw.
      2. exp medical audit/
      3. exp feedback system/
      4. exp negative feedback/
      5. exp positive feedback/
      6. exp “utilization review”/
      7. exp “medical record review”/
      8. audit*.tw.
      9. feedback.tw.
      10. (record adj3 review).tw.
      11. chart review.tw.
      12. benchmark*.tw.
      13. exp Evidence based medicine/
      14. or/1-13
      15. exp professional practice/
      16. exp professional competence/
      17. exp clinical competence/
      18. exp diagnostic accuracy/
      19. exp diagnostic error/
      20. (practice pattern? or pattern of practice).tw.
      21. ((clinical or medical or professional or hospital?) adj practice?).tw.
      22. ((physician? or doctor? or clinician? or nurse or resident? or professional? or nursing or clinical) adj3 performance).tw.
      23. exp “length of stay”/
      24. exp follow up/
      25. exp health care quality/
      26. exp quality control/
      27. exp total Quality management/
      28. exp health care planning/
      29. ((clinical or medical or professional or hospital?) adj practice?).tw.
      30. (quality adj (assurance or improve$ or control)).tw.
      31. exp treatment outcome/
      32. exp hospital discharge/
      33. exp hospital admission/
      34. ((physician? or doctor? or clinician? or nurse or residen? or professional? or nursing or clinical) adj3 (skill? or behavior or behaviour or competence)).tw.
      35. ((influenc* or chang*) adj3 (behavior* or behaviour*)).tw.
      36. exp emergency ward/
      37. exp emergency medicine/
      38. exp emergency physician/
      39. exp emergency health service/
      40. exp physician/
      41. exp resident/
      42. or/40-41
      43. or/36-39
      44. or/23-28,30-35
      45. 14 and 42 and 44 and 43
      46. or/15-22
      47. 14 and 46 and 44 and 43
      48. 14 and 42 and 44 and 43
      49. 47 or 48
      50. limit 49 to (english language and yr = “1994 -Current”)

PsycInfo search (all lines run as Advanced searches; result counts in parentheses)

1. (audit* adj3 feedback).tw. (1248)
2. clinical audits/ (239)
3. feedback.tw. (43589)
4. feedback.ti. (9539)
5. feedback/ (12177)
6. exp Utilization Reviews/ (103)
7. audit*.tw. (53246)
8. (record adj3 review$).tw. (6102)
9. chart review.tw. (2375)
10. benchmark*.tw. (4428)
11. exp Evidence Based Practice/ (10075)
12. exp Clinical Practice/ (11193)
13. exp Professional Competence/ (4498)
14. clinical competence.mp. (473)
15. diagnos$ accura$.tw. (1436)
16. diagnos$ error.tw. (110)
17. (practice pattern* or pattern of practice).tw. (594)
18. ((clinical or medical or professional or hospital?) adj practice?).tw. (37203)
19. ((physician? or doctor? or clinician? or nurse or residen? or professional? or nursing or clinical) adj3 (skill? or behavior or behaviour or competence)).tw. (12055)
20. exp Treatment Duration/ (6325)
21. exp Followup Studies/ (12300)
22. exp “Quality of Care”/ (8316)
23. exp Quality Control/ (1162)
24. (quality adj (assurance or improve$ or control)).tw. (5432)
25. exp Hospital Discharge/ (2433)
26. exp Hospital Admission/ (3806)
27. ((influenc* or chang*) adj3 (behavior* or behaviour*)).tw. (46196)
28. exp Health Care Services/ (76565)
29. exp Professional Standards/ (7851)
30. decision support.mp. (3316)
31. exp Evaluation Criteria/ (862)
32. emergency services/ (5247)
33. emergency medicine.mp. (323)
34. (emergency adj2 (physician$ or residen$)).mp. [mp = title, abstract, heading word, table of contents, key concepts, original title, tests & measures] (343)
35. or/1-11 (117642)
36. or/12-31 (209909)
37. or/32-34 (5567)
38. 35 and 36 and 37 (71)
39. limit 38 to english language (71)
40. limit 39 to yr = “1994 -Current” (64)
41. (practice adj3 pattern$).tw. (893)
42. exp treatment guidelines/ (4161)
43. (clinical adj3 guideline$).tw. (3045)
44. (practice adj3 guideline$).tw. (3323)
45. (clinical adj2 polic$).tw. (609)
46. or/12-16,18-31,41 (210117)
47. 35 and 46 and 37 (71)
48. or/42-45 (8912)
49. 48 and 46 and 37 (40)
50. 47 or 49 (106)
51. limit 50 to (english language and yr = “1994 -Current”) (96)

Appendix B. Downs and Black score

Is the hypothesis/aim/objective of the study clearly described? Must be explicit

Are the main outcomes to be measured clearly described in the Introduction or Methods section?

If the main outcomes are first mentioned in the Results section, the question should be answered no. ALL primary outcomes should be described for YES

Are the characteristics of the study subjects included in the study clearly described?

In cohort studies and trials, inclusion and/or exclusion criteria should be given. In Case-control studies, a case definition and the source for controls should be given. Single-case studies must state source of patient

Are the interventions of interest clearly described? Treatments and placebo (where relevant) that are to be compared should be clearly described.

Are the distributions of principal confounders in each group of subjects to be compared clearly described? A list of principal confounders is provided. YES = age, severity

Are the main findings of the study clearly described? Simple outcome data (including denominators and numerators) should be reported for all major findings so that the reader can check the major analyses and conclusions.

Does the study provide estimates of the random variability in the data for the main outcomes? In nonnormally distributed data, the interquartile range of results should be reported. In normally distributed data, the standard error, standard deviation, or confidence intervals should be reported

Have the characteristics of study subjects lost to follow-up been described? If not explicit = NO. RETROSPECTIVE - if not described = UTD; if not explicit re: numbers agreeing to participate = NO. Needs to be >85%

Have actual probability values been reported (eg, 0.035 rather than <0.05) for the main outcomes except where the probability value is less than 0.001?

Were the subjects asked to participate in the study representative of the entire population from which they were recruited?

The study must identify the source population for patients and describe how the patients were selected.

Were those subjects who were prepared to participate representative of the entire population from which they were recruited?

The proportion of those asked who agreed should be stated.

Were the staff, places, and facilities where the study subjects practiced, representative of the majority of practice environments? For the question to be answered yes, the study should demonstrate that the intervention was representative of that in use in the source population. Must state type of hospital and country for YES.

Was an attempt made to blind study subjects to the intervention they have received? For studies where the patients would have no way of knowing which intervention they received, this should be answered yes. Retrospective, single group = NO; UTD if >1 group and blinding not explicitly stated

Was an attempt made to blind those measuring the main outcomes of the intervention? Must be explicit

If any of the results of the study were based on “data dredging”, was this made clear?

Any analyses that had not been planned at the outset of the study should be clearly indicated. Retrospective = NO. Prospective = YES

In trials and cohort studies, do the analyses adjust for different lengths of follow-up of study subjects, or in case-control studies, is the time period between the intervention and outcome the same for cases and controls? Where follow-up was the same for all study patients, the answer should be yes. Studies where differences in follow-up are ignored should be answered no. Acceptable range: 1-year follow-up = 1 month each way; 2-year follow-up = 2 months; 3-year follow-up = 3 months … 10-year follow-up = 10 months

Were the statistical tests used to assess the main outcomes appropriate? The statistical techniques used must be appropriate to the data. If no tests done, but would have been appropriate to do = NO

Was compliance with the intervention/s reliable? Where there was noncompliance with the allocated treatment or where there was contamination of one group, the question should be answered no. Surgical studies will be YES unless procedure not completed.

Were the main outcome measures used accurate (valid and reliable)? Where outcome measures are clearly described, refer to other work, or are demonstrated to be accurate = YES. ALL primary outcomes must be valid and reliable for YES

Were the study subjects in different intervention groups (trials and cohort studies) or were the cases and controls (case-control studies) recruited from the same population? Patients for all comparison groups should be selected from the same hospital. The question should be answered UTD for cohort and case-control studies where there is no information concerning the source of patients

Were study subjects in different intervention groups (trials and cohort studies) or were the cases and controls (case-control studies) recruited over the same time? For a study which does not specify the time period over which patients were recruited, the question should be answered as UTD. Surgical studies must be <10 years for YES; if >10 years, then NO

Were study subjects randomized to intervention groups? Studies which state that subjects were randomized should be answered yes except where method of randomization would not ensure random allocation.

Was the randomized intervention assignment concealed from both patients and health care staff until recruitment was complete and irrevocable? All nonrandomized studies should be answered no. If assignment was concealed from patients but not from staff, it should be answered no.

Was there adequate adjustment for confounding in the analyses from which the main findings were drawn? In nonrandomized studies if the effect of the main confounders was not investigated or no adjustment was made in the final analyses the question should be answered as no.

If no significant difference between groups shown then YES

Were losses of study subjects to follow-up taken into account? If the numbers of patients lost to follow-up are not reported = unable to determine.

References

  1. McGlynn EA, Asch SM, Adams J, Keesey J, Hicks J, DeCristofaro A, et al. The quality of health care delivered to adults in the United States. N Engl J Med 2003;348:2635-45.
  2. Lang ES, Wyer PC, Haynes RB. Knowledge translation: closing the evidence-to- practice gap. Ann Emerg Med 2007;49:355-63.
  3. Lindenauer PK, Remus D, Roman S, Rothberg MB, Benjamin EM, Ma A, et al. Public reporting and pay for performance in hospital quality improvement. N Engl J Med 2007;356:486-96.
  4. Kawamoto K, Houlihan CA, Balas EA, Lobach DF. Improving clinical practice using clinical decision support systems: a systematic review of trials to identify features critical to success. BMJ 2005;330(7494):1-8 [765, Epub ahead of print].
  5. Stiell IG, Wells GA. Methodologic standards for the development of clinical decision rules in emergency medicine. Ann Emerg Med 1999;33:437-47.
  6. Croskerry P. The feedback sanction. Acad Emerg Med 2000;7:1232-8.
7. Ivers N, Jamtvedt G, Flottorp S, Young JM, Odgaard-Jensen J, French SD, et al. Audit and feedback: effects on professional practice and healthcare outcomes. Cochrane Database Syst Rev 2012;6:1-227 [Cd000259].
  8. Lavoie CF, Schachter H, Stewart AT, McGowan J. Does outcome feedback make you a better emergency physician? A systematic review and research framework proposal. Can J Emerg Med 2009;11:545-52.
9. Bernstein SL, Aronsky D, Duseja R, Epstein S, Handel D, Hwang U, et al. The effect of emergency department crowding on clinically oriented outcomes. Acad Emerg Med 2009;16:1-10.
  10. Westbrook JI, Coiera E, Dunsmuir WT, Brown BM, Kelk N, Paoloni R, et al. The impact of interruptions on clinical task completion. Qual Saf Health Care 2010;19:284-9.
11. Improving emergency physician performance using audit and feedback: a systematic review of trials to identify features critical to success. http://www.crd.york.ac.uk/PROSPERO/display_record.asp?ID=CRD42014009385; 2014. [Accessed March 18, 2015].
  12. Moher D, Liberati A, Tetzlaff J, Altman DG. Preferred reporting items for systematic reviews and meta-analyses: the PRISMA statement. Int J Surg 2010;8:336-41.
  13. Downs SH, Black N. The feasibility of creating a checklist for the assessment of the methodological quality both of randomised and non-randomised studies of health care interventions. J Epidemiol Community Health 1998;52:377-84.
  14. Higgins JP, Thompson SG, Deeks JJ, Altman DG. Measuring inconsistency in meta- analyses. BMJ 2003;327:557-60.
15. Krall SP, Reese 4th CL, Donahue L. Effect of continuous quality improvement methods on reducing triage to thrombolytic interval for acute myocardial infarction. Acad Emerg Med 1995;2:603-9.
16. Guidry UA, Paul SD, Vega J, Harris C, Chaturvedi R, O’Gara PT, et al. Impact of a simple inexpensive quality assurance effort on physician’s choice of thrombolytic agents and door-to-needle time: implication for costs of management. J Thromb Thrombolysis 1998;5:151-7.
  17. Ramoska EA. Information sharing can reduce laboratory use by emergency physi- cians. Am J Emerg Med 1998;16:34-6.
  18. Hadjianastassiou VG, Karadaglis D, Gavalas M. A comparison between different for- mats of educational feedback to junior doctors: a prospective pilot intervention study. J R Coll Surg Edinb 2001;46:354-7.
  19. Chern CH, How CK, Wang LM, Lee CH, Graff L. Decreasing clinically significant ad- verse events using feedback to emergency physicians of telephone follow-up out- comes. Ann Emerg Med 2005;45:15-23.
  20. Katz DA, Aufderheide TP, Bogner M, Rahko PR, Brown RL, Brown LM, et al. The im- pact of unstable angina guidelines in the triage of emergency department patients with possible acute coronary syndrome. Med Decis Mak 2006;26:606-16.
  21. Doherty S, Jones P, Stevens H, Davis L, Ryan N, Treeve V. “Evidence-based implemen- tation” of paediatric asthma guidelines in a rural emergency department. J Paediatr Child Health 2007;43:611-6.
  22. Metlay JP, Camargo Jr CA, MacKenzie T, McCulloch C, Maselli J, Levin SK, et al. Cluster-randomized trial to improve antibiotic use for adults with acute respiratory infections treated in emergency departments. Ann Emerg Med 2007;50:221-30.
  23. Nguyen HB, Corbett SW, Steele R, Banta J, Clark RT, Hayes SR, et al. Implementation of a bundle of quality indicators for the early management of severe sepsis and sep- tic shock is associated with decreased mortality. Crit Care Med 2007;35:1105-12.
24. Buising KL, Thursky KA, Black JF, MacGregor L, Street AC, Kennedy MP, et al. Improving antibiotic prescribing for adults with community acquired pneumonia: does a computerised decision support system achieve more than academic detailing alone? A time series analysis. BMC Med Inform Decis Mak 2008;8:1-10 [35].
  25. Bessen T, Clark R, Shakib S, Hughes G. A multifaceted strategy for implementation of the Ottawa Ankle Rules in two emergency departments. BMJ 2009;339:b3056.
  26. Burstin HR, Conn A, Setnik G, Rucker DW, Cleary PD, O’Neil AC, et al. Benchmarking and quality improvement: the Harvard Emergency Department Quality Study. Am J Med 1999;107:437-49.
  27. Weiner SG, Brown SF, Goetz JD, Webber CA. Weekly E-mail reminders influence emergency physician behavior: a case study using the Joint Commission and Centers for Medicare and Medicaid Services Pneumonia Guidelines. Acad Emerg Med 2009;16:626-31.
  28. Rudiger-Sturchler M, Keller DI, Bingisser R. Emergency physician intershift handover-can a dINAMO checklist speed it up and improve quality? Swiss Med Wkly 2010;140:1-5 [w13085].
  29. Hill PM, Rothman R, Saheed M, Deruggiero K, Hsieh YH, Kelen GD. A comprehensive approach to achieving near 100% compliance with the Joint Commission Core Measures for pneumonia antibiotic timing. Am J Emerg Med 2011;29:989-98.
  30. McIntosh KA, Maxwell DJ, Pulver LK, Horn F, Robertson MB, Kaye KI, et al. A quality improvement initiative to improve adherence to national guidelines for empiric management of community-acquired pneumonia in emergency departments. Int J Qual Health Care 2011;23:142-50.
  31. Welch S, Dalto J. Improving door-to-physician times in 2 community hospital emer- gency departments. Am J Med Qual 2011;26:138-44.
  32. Capraro A, Stack A, Harper MB, Kimia A. Detecting unapproved abbreviations in the electronic medical record. Jt Comm J Qual Patient Saf 2012;38:178-83.
  33. Plambech MZ, Lurie AI, Ipsen HL. Initial, successful implementation of sepsis guide- lines in an emergency department. Dan Med J 2012;59:1-5 [A4545].
  34. Ratnapalan S, Brown K, Cieslak P, Cohen-Silver J, Jarvis A, Mounstephen W. Charting errors in a teaching hospital. Pediatr Emerg Care 2012;28:268-71.
  35. Volpe D, Harrison S, Damian F, Rachh P, Kahlon PS, Morrissey L, et al. Improving timeliness of antibiotic delivery for patients with fever and suspected neutropenia in a pediatric emergency department. Pediatrics 2012;130:e201-10.
  36. Pichert JW, Moore IN, Karrass J, Jay JS, Westlake MW, Catron FT, et al. An interven- tion model that promotes accountability: peer messengers and patient/family com- plaints. Jt Comm J Qual Patient Saf 2013;39:435-46.
37. Wu KH, Cheng FJ, Li CJ, Cheng HH, Lee WH, Lee CW. Evaluation of the effectiveness of peer pressure to change disposition decisions and patient throughput by emergency physician. Am J Emerg Med 2013;31:535-9.
  38. Nguyen MC, Richardson DM, Hardy SG, Cookson RM, MacKenzie RS, Greenberg MR, et al. Computer-based reminder system effectively impacts physician documentation. Am J Emerg Med 2014;32:104-6.


American Journal of Emergency Medicine 33 (2015) 1505-1514


Review

Improving emergency physician performance using audit and feedback: a systematic review☆,☆☆

R. Le Grand Rogers, MD a, Yizza Narvaez, MD a, Arjun K. Venkatesh, MD, MBA a,b, William Fleischman, MD a,c,

M. Kennedy Hall, MD a, R. Andrew Taylor, MD a, Denise Hersey, MLS, MA d,

Lynn Sette, MLS, AHIP d, Edward R. Melnick, MD, MHS a,*

a Department of Emergency Medicine, Yale School of Medicine, New Haven, CT

b Center for Outcomes Research and Evaluation, Yale School of Medicine, New Haven, CT

c Robert Wood Johnson Clinical Scholar Program, Yale School of Medicine, New Haven, CT

d Harvey Cushing/John Hay Whitney Medical Library, Yale School of Medicine, New Haven, CT

Article info

Article history:

Received 11 May 2015

Received in revised form 20 July 2015

Accepted 22 July 2015

Abstract

Background: Audit and feedback can decrease variation and improve the quality of care in a variety of health care settings. There is a growing literature on audit and feedback in the emergency department (ED) setting. Because most studies have been small and not focused on a single clinical process, systematic assessment could determine the effectiveness of audit and feedback interventions in the ED and which specific characteristics improve the quality of emergency care.

Objective: The objective of the study is to assess the effect of audit and feedback on emergency physician performance and identify features critical to success.

Methods: We adhered to the PRISMA statement to conduct a systematic review of the literature from January 1994 to January 2014 related to audit and feedback of physicians in the ED. We searched Medline, EMBASE, PsycINFO, and PubMed databases. We included studies that were conducted in the ED and reported quantitative outcomes with interventions using both audit and feedback. For included studies, 2 reviewers independently assessed methodological quality using the validated Downs and Black checklist for nonrandomized studies. Treatment effect and heterogeneity were to be reported via meta-analysis and the I2 inconsistency index.

Results: The search yielded 4332 articles, all of which underwent title review; 780 abstracts and 131 full-text articles were reviewed. Of these, 24 studies met inclusion criteria with an average Downs and Black score of 15.6 of 30 (range, 6-22). Improved performance was reported in 23 of the 24 studies. Six studies reported sufficient outcome data to conduct summary analysis. Pooled data from studies that included 41,124 patients yielded an average treatment effect among physicians of 36% (SD, 16%) with high heterogeneity (I2 = 83%). Conclusion: The literature on audit and feedback in the ED reports positive results for interventions across numerous clinical conditions but without standardized reporting sufficient for meta-analysis. Characteristics of audit and feedback interventions that were used in a majority of studies were feedback that targeted errors of omission and that was explicit with measurable instruction and a plan for change delivered in the clinical setting greater than 1 week after the audited performance using a combination of media and types at both the individual and group levels. Future work should use standardized reporting to identify the specific aspects of audit or feedback that drive effectiveness in the ED.


  1. Introduction

☆ Support: Dr Venkatesh is supported by the Emergency Medicine Foundation Health Policy Scholar Award. He also works under contract with the Centers for Medicare and Medicaid Services in the development of hospital outcome and efficiency measures. Dr Melnick is supported, in part, by grant number K08HS021271 from the Agency for Healthcare Research and Quality. The content is solely the responsibility of the authors and does not necessarily represent the official views of the National Institutes of Health or the Agency for Healthcare Research and Quality.

☆☆ Meetings: SAEM New England Regional Meeting, April 2015; SAEM Annual Meeting, May 2015.

* Corresponding author at: Department of Emergency Medicine, Yale School of Medicine, 464 Congress Ave, Suite 260, New Haven, CT 06519.

Practice variation between health care providers can substantially impact the quality, efficiency, effectiveness, and safety of health care services [1,2]. Approaches to improve quality of care and reduce variation in the emergency department (ED) have included guideline implementation, pay-for-performance, computerized clinical decision support, and clinical decision rules [3-5]. Audit and feedback interventions have received increasing attention as a strategy to reduce variation and improve the quality of care [6]. A recent Cochrane review of audit and feedback interventions in diverse practice settings across multiple

http://dx.doi.org/10.1016/j.ajem.2015.07.039


Table 1
MEDLINE search strategy

Intervention subject terms and key words
1. (audit? adj3 feedback).tw.
2. clinical audit/
3. exp management audit/
4. exp “commission on professional and hospital activities”/
5. feedback/
6. feedback, psychological/
7. exp “utilization review”/
8. peer review, health care/
9. (audit or audits or auditing).tw.
10. feedback.tw.
11. (review adj3 record?).tw.
12. (practice data or hospital* data).tw.
13. benchmark.tw.
14. exp professional practice/
15. physician’s practice patterns/
16. (practice pattern? or pattern of practice).tw.
17. (performance adj2 (health* personnel or health care personnel or physician? or doctor? or clinician? or provider? or practioner? or resident? or professional? or clinical)).tw.
18. ((influence* or chang*) adj3 (behaviour* or behavior*)).tw.
19. 14 or 15 or 16 or 17 or 18

Outcomes of the intervention subject terms and key words
20. quality assurance, health care/
21. “quality of health care”/
22. (quality adj (assurance or improvement or control)).tw.
23. (health care quality or healthcare quality or quality of healthcare or quality of health care or quality of care).tw.
24. exp “outcome and process assessment (health care)”/
25. exp patient admission/
26. 20 or 21 or 22 or 23 or 24 or 25

Setting subject terms and key words
27. emergency medical services/
28. emergency service, hospital/ or trauma centers/
29. (emergency adj2 (care or healthcare or department? or unit or units or room? or treatment?)).ti,ab.
30. (emergency adj service?).ti,ab.
31. (urgent adj2 (care or healthcare or health care)).ti,ab.
32. emergency medicine/
33. 27 or 28 or 29 or 30 or 31 or 32

Personnel involved in intervention subject terms and key words
34. physicians/
35. (physician$ or doctor$ or specialist$).tw.
36. “internship and residency”/
37. exp health personnel/
38. 34 or 35 or 36 or 37

Articles or study types
39. (clinical adj2 polic?).tw.
40. (clinical adj2 guideline?).tw.
41. (practice adj2 guideline?).tw.
42. exp Practice Guideline/
43. 1 or 2 or 3 or 4 or 5 or 6 or 7 or 8 or 9 or 10 or 11 or 12 or 13 or 39 or 40 or 41 or 42
44. 19 or 26 or 38
45. 33 and 43 and 44

(See Appendix A for full search strategy including EMBASE, PsycINFO, and PubMed search strategies.)

specialties concluded that utilization of audit and feedback can improve clinician performance if baseline performance was low, the source of feedback was a supervisor or colleague, and audit and feedback was provided more than once, delivered in both verbal and written formats, and included both explicit targets and an action plan [7].

Currently, the specific features of audit and feedback interventions that can improve emergency care are not well understood [2]. The Cochrane review focused only on randomized controlled trials and did not examine the effectiveness of audit and feedback specific to the ED setting [7]. There is a rapidly growing literature on audit and feedback in the ED setting that has not been systematically reviewed in 6 years [8].

The ED environment is unique, highly complex, and particularly vulnerable to variation in quality of care and patient safety [9,10]. Furthermore, distinctive features of the ED environment such as high-volume throughput, unpredictable patient acuity, frequent handoffs, overcrowding, noise, and interruptions may present barriers to the implementation of audit and feedback, necessitating specifically tailored solutions [10].

The aim of this systematic review was to assess the effect of audit and feedback on emergency physician performance and to identify features of audit and feedback implementations associated with improvements in clinical performance. We hypothesized that these features and their effectiveness to improve emergency physician practice would be sensitive to the ED environment.

2. Methods

2.1. Study design

We conducted a systematic review of the medical literature from January 1994 to January 2014 related to audit and feedback in the ED. The study protocol was prospectively registered in the PROSPERO international prospective register of systematic reviews [11]. This protocol guided the review in adherence with the Preferred Reporting Items for Systematic Reviews and Meta-analyses statement [12].

2.2. Search strategy

Our main objective was to assess the effect of audit and feedback on emergency physician performance and to identify features associated with improvements in clinical performance. In collaboration with our institution’s librarians (DH and LS), we developed a comprehensive search strategy using medical subject headings and text words for the general concepts of emergency medicine and feedback (Table 1 for the MEDLINE search strategy and Appendix A for the full search strategy). We used this strategy to search MEDLINE (OvidSP 1946 to April Week 3 2014), EMBASE (OvidSP 1974 to 2014 April 21), PsycINFO (OvidSP 1967 to April Week 4 2014), and PubMed (unindexed references). All searches were performed by a professional librarian and conducted on April 21, 2014, except for PubMed, which was conducted on December 11, 2013.

2.3. Study selection

Studies were only included if they met the following eligibility/inclusion criteria: (1) the study was conducted in the ED or was conducted in multiple clinical environments with distinct ED outcomes reported, (2) the intervention used both audit and feedback, and (3) the study reported quantitative outcome measures of the impact of the intervention. Study eligibility was confined to published studies (abstracts or full text) in the English language. We also reviewed the reference lists of selected articles to identify additional studies for inclusion. The results of these searches were entered into EndNote citation management software (version 7.0; Thompson Reuters, New York, NY), and duplicates were removed.

2.4. Data collection and validity assessment

Fig. 1 illustrates the identification, screening, and inclusion processes. Studies were screened for inclusion criteria at the title, abstract, and full-text levels. Title and abstract review were individually performed.

Fig. 1. Flow of information through the different phases of the systematic review. (Identification: 4,332 articles identified through search of Medline, EMBASE, PsycINFO, and PubMed performed by a professional librarian; Screening: 780 abstracts screened for eligibility; Eligibility: 131 full-text articles assessed for eligibility; Included: 24 full-text articles included in qualitative synthesis.)

The full-text review was performed by 2 independent reviewers who independently screened for inclusion criteria, assessed methodological quality using the validated Downs and Black checklist for nonrandomized studies (Appendix B), and extracted data elements to describe included articles and perform summary assessment [13]. Using a standardized data collection form, 2 authors independently extracted data on performance (baseline [low 25%, moderate 50%, high 75%], goal, and treatment effect), intervention (type, medium, and instruction), feedback (format, setting, source, and timing), and the treatment effect and its SD. If they were reported, the number of patients and physicians in the control and intervention groups was also collected. The subsequent Downs and Black checklist score and data were reviewed and reconciled for agreement by a third independent, blinded reviewer. Any discrepancy arising at any stage throughout the study selection process was resolved via consensus among the authors at both levels of screening. Screening reviews and full-text reviews were not blinded to authorship, journal, or year.

2.4. Data processing and analysis

Treatment effect was calculated as a percentage change in the targeted process before and after the intervention using aggregate group data. When time was the primary outcome, the treatment effect was reported as a percentage change calculated as 1 - (postintervention time/preintervention time). When more than 1 outcome was reported postintervention (n = 3 studies), the treatment effect was reported across outcomes. Statistical heterogeneity was assessed using the I2 inconsistency index, which describes the percentage of variability in the effect that is due to underlying differences between the studies rather than chance, with values greater than 75% indicating substantial heterogeneity [14]. After formally reviewing the included articles and the selective information available on outcomes, sample sizes, and SEs, it was determined that the included studies were too dissimilar and heterogeneous to perform the originally intended meta-analysis. Summary statistics are reported for the 6 studies with clinically similar results.
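As an illustration only (the review did not publish analysis code, and all numeric values below are hypothetical), the two calculations described above — the percentage-change treatment effect for a time-based outcome, and the I2 inconsistency index computed from Cochran's Q — can be sketched as:

```python
import numpy as np

def treatment_effect_time(pre_time, post_time):
    """Percentage change for a time outcome: 1 - (postintervention/preintervention)."""
    return (1 - post_time / pre_time) * 100

def i_squared(effects, std_errs):
    """I^2 inconsistency index (%) from study effect estimates and standard errors."""
    effects = np.asarray(effects, dtype=float)
    weights = 1.0 / np.asarray(std_errs, dtype=float) ** 2   # inverse-variance weights
    pooled = np.sum(weights * effects) / np.sum(weights)     # fixed-effect pooled estimate
    q = np.sum(weights * (effects - pooled) ** 2)            # Cochran's Q
    df = len(effects) - 1
    return max(0.0, (q - df) / q) * 100 if q > 0 else 0.0

# Hypothetical example: door-to-treatment time falls from 60 to 45 minutes.
print(treatment_effect_time(pre_time=60, post_time=45))  # 25.0 (% improvement)
```

With this convention, widely scattered study effects relative to their standard errors push I2 toward 100%, matching the review's threshold of greater than 75% for substantial heterogeneity.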

3. Results

The combined searches yielded 4332 articles, including 324 duplicates, all of which underwent title review. Of these, 780 abstracts and 131 full texts were reviewed, with 24 studies meeting inclusion criteria. The evidentiary table (Table 2) outlines the characteristics of the included studies as well as the clinical behaviors and practice variations targeted by the interventions: most commonly, time to treatment for acute myocardial infarction and pneumonia, guideline compliance (particularly in sepsis), and rates of a range of behaviors such as testing, return visits, documentation errors, handoff errors, patient complaints, adverse events, and discharge [15-38]. All but 1 of the studies used a preintervention/postintervention study design; Metlay et al [22] performed a multicenter cluster-randomized trial. Ten of the 24 articles reported multicenter studies [15,20-22,25,26,30,31,36,38].

There was substantial heterogeneity among the included studies, with an I2 index of 83%. The included studies had an average Downs and Black score of 15.6 of 30 (range, 6-22). None of the included studies met the checklist criteria of (1) blinding those measuring the main outcomes of the intervention or (2) concealing the intervention assignment from both the subjects and the investigators. Only 1 study attempted to blind the study subjects to the intervention they received [18]. Similarly, only 1 study randomized study subjects to intervention groups [22]. Furthermore, Metlay et al [22] was the only study to account for loss to follow-up. The only checklist criterion met by all the studies was that the staff, places, and facilities were representative of the treatment the majority of subjects received.

The distribution of included articles published per year is provided in Fig. 2. The majority of the studies were conducted in the United States; 4 were conducted in Australia, 2 in China, 1 in Canada, 1 in Denmark, 1 in Switzerland, and 1 in the United Kingdom.

3.1. Intervention characteristics

Three types of interventions were used: reminders, educational outreach, and organizational interventions. Six of the 24 studies used reminders exclusively [17,27,32,34,37,38]. Four of the 24 studies used the educational outreach approach [15,18,19,22]. Only 2 of the 24 studies used an organizational intervention [26,28]. Twelve of the 24 studies used a combination of intervention types [16,20,21,23-25,29-31,33,35,36].

Table 2

Evidentiary table

| Author year [Ref] | Targeted behavior | Downs and Black | In summary analysis | Baseline | Goal | Treatment effect (%) | Type | Medium | Measurable instructions with plan | Group or individual | Clinical setting | Source | Timing (d) |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| Krall 1995 [15] | Triage to AMI treatment | 16 | Yes | Moderate (50%) | Decrease | 44.4 | Educational | Combo | Yes | Both | ? | External review | >7 |
| Guidry 1998 [16] | Door to lytics time in AMI | 14 | No | Moderate (50%) | Increase/improve | 21.7 | Multiple | Combo | Yes | Group | ? | Supervisor | >7 |
| Ramoska 1998 [17] | Lab testing rate | 6 | No | UTD | Decrease | 17.8 | Reminders | Written | No | Group | ? | Peer/colleague | >7 |
| Hadjianastassiou 2001 [18] | Adequacy of documentation | 16 | No | Moderate (50%) | Increase/improve | UTD | Educational | Combo | Yes | Both | UTD | Supervisor | >7 |
| Chern 2005 [19] | Return visit/adverse event rates | 17 | No | Low (25%) | Increase/improve | 2.6 | Educational | Verbal | Yes | 1 on 1 | ? | Investigators | 1-7 |
| Katz 2006 [20] | ACS decision-making | 18 | No | Moderate (50%) | Mixed | 2 | Multiple | Combo | Yes | Group | ? | Investigators | >7 |
| Doherty 2007 [21] | Asthma guideline compliance | 17 | No | Low (25%) | Increase/improve | 41.1 | Multiple | Combo | Yes | Both | ? | Investigators | >7 |
| Metlay 2007 [22] | URI antibiotic use rate | 22 | No | Moderate (50%) | Decrease | 10 | Educational | Combo | Yes | Both | ? | Investigators | >7 |
| Nguyen 2007 [23] | Bundle compliance | 17 | No | Low (25%) | Increase/improve | 51.2 | Multiple | Combo | Yes | Both | ? | UTD | >7 |
| Buising 2008 [24] | CAP guideline compliance | 18 | No | Moderate (50%) | Increase/improve | 6.8 | Multiple | Electronic | Yes | Both | ? | UTD | <1 |
| Bessen 2009 [25] | Ottawa ankle rule use | 16 | No | Moderate (50%) | Increase/improve | O1: 37.2; O2: 29.2 | Multiple | Combo | Yes | Both | ? | UTD | >7 |
| Burstin 2009 [26] | Departmental guideline compliance | 17 | Yes | Moderate (50%) | Increase/improve | 4.5 | Organizational | Combo | Yes | Group | ? | Supervisor | >7 |
| Weiner 2009 [27] | PNA time to antibiotics | 18 | No | High (75%) | Increase/improve | 8.6 | Reminders | Electronic | Measurable without plan | Group | ? | UTD | 1-7 |
| Rudiger-Sturchler 2010 [28] | Handoff error and time | 18 | Yes | Moderate (50%) | Decrease | 26 | Organizational | Verbal | Yes | Both | ? | Supervisor | <1 |
| Hill 2011 [29] | PNA time to antibiotics | 15 | No | High (75%) | Increase/improve | 18 | Multiple | Combo | Yes | Both | UTD | UTD | 1-7 |
| McIntosh 2011 [30] | CAP guideline compliance | 18 | Yes | Low (25%) | Increase/improve | O1: 10; O2: 19 | Multiple | Combo | Yes | Both | ? | Supervisor | >7 |
| Welch 2011 [31] | Door-to-doc time | 9 | No | Moderate (50%) | Decrease | O1: 39.2; O2: 42.5 | Multiple | Combo | Yes | Both | ? | UTD | <1 |
| Capraro 2012 [32] | Documentation error rate | 17 | Yes | Low (25%) | Decrease | 37.3 | Reminders | Electronic | Yes | Both | ? | Supervisor | >7 |
| Plambech 2012 [33] | Sepsis guideline compliance | 11 | No | Moderate (50%) | Increase/improve | 28 | Multiple | Combo | Yes | Group | ? | Peer/colleague | >7 |
| Ratnapalan 2012 [34] | Documentation error rate | 15 | No | Low (25%) | Decrease | 6.5 | Reminders | Combo | Measurable without plan | Both | ? | Supervisor | >7 |
| Volpe 2012 [35] | Peds fever time to antibiotics | 18 | Yes | Moderate (50%) | Increase/improve | 50.5 | Multiple | Combo | Yes | Both | ? | Peer/colleague | >7 |
| Pichert 2013 [36] | Patient complaint rate | 12 | No | UTD | Decrease | 50 | Multiple | Combo | Yes | Both | ? | Peer/colleague | >7 |
| Wu 2013 [37] | Discharge rate | 17 | No | Moderate (50%) | Increase/improve | 5.3 | Reminders | Electronic | Measurable without plan | Group | ? | Peer/colleague | >7 |
| Nguyen 2014 [38] | Documentation of stool heme | 12 | No | Moderate (50%) | Increase/improve | 53 | Reminders | Electronic | Yes | 1 on 1 | ? | Investigators | 1-7 |

Abbreviations: AMI, acute myocardial infarction; ACS, acute coronary syndrome; URI, upper respiratory tract infection; CAP, community-acquired pneumonia; PNA, pneumonia; O1, outcome 1; O2, outcome 2; UTD, unable to determine; Combo, combination; ?, not recoverable from the published table.

The medium of the intervention was identified in each of the 24 studies and categorized as (1) written, (2) verbal, (3) electronic, or (4) a combination of media. In 2 of the 24 studies, verbal feedback was provided [19,28]. In 5 of the 24 studies, electronic feedback was provided [24,27,32,37,38]. The remaining studies used a combination of verbal, written, and electronic feedback [15,16,18,20-23,25,26,29-31,33-36,38]. Twenty of the 24 studies provided feedback with explicit, measurable instructions and with a plan for change [15,16,18-26,28-33,35,36,38]. In 3 of the 24 studies, feedback instructions were explicit and measurable but without a plan [27,34,37].

3.2. Feedback characteristics

In the 24 studies, feedback was given one-on-one, as a group, or in both manners. Only 2 studies used one-on-one feedback alone [19,38]. Seven of the 24 studies provided feedback to groups [16,17,20,26,27,33,37]. Fifteen of the 24 studies used both the one-on-one and group methods to provide feedback [15,18,21-25,28-32,34-36]. In 7 studies, feedback was provided by a supervisor [16,18,26,28,30,32,34].

In 5 studies, feedback was provided by a peer or colleague [17,33,35-37]. Six studies did not explicitly identify a source of feedback [23-25,27,29,31]. In 5 studies, feedback was provided by the investigators [19-22,38]. One study described feedback given by a professional standards review organization [15]. The timing of feedback relative to when the original audit was performed differed among the studies. Seventeen of the 24 studies had an interval greater than 1 week [15-18,20-23,25,26,30,32-37]. In 4 of the 24 studies, the interval between audit and feedback ranged from 1 day to 1 week [19,27,29,38]. Three studies described an interval of less than 24 hours between audit and feedback [24,28,31].

In 12 of the studies, feedback was given in the clinical setting [15,19,21,22,24,25,28,31,33-35,37]. Two studies did not specify the setting for feedback [18,29]. The remaining 10 studies described a nonclinical setting for feedback [16,17,20,21,23-25,29,37,38].

3.3. Performance characteristics

Fifteen studies encouraged health care providers to increase or improve upon their baseline behaviors. Eight studies required providers to decrease current behaviors [15,17,22,28,31,32,34,36]. Only 1 study, Katz et al [20], required an increase in certain physician behaviors and a decrease in others.

All studies but one measured treatment effect based on provider performance (as opposed to patient-related outcomes) [19]. Six studies reported sufficient outcome data to conduct summary analysis [15,26,28,30,32,35]. Pooled data from these 6 studies included 41 124 patients and yielded an average treatment effect among physicians of 36% (SD, 16%). Fourteen of the 24 studies reported a moderate baseline performance [15,16,18,20,22,24-26,28,31,33,35,37,38]. Six studies reported a low baseline performance [19,21,23,30,32,34]. In addition, 2 reported a high baseline performance [27,29]. Baseline performance was undetermined for 2 studies [17,36]. Notably, 23 of the 24 studies resulted in improvement of the measured outcomes [15-19,21-38].

4. Discussion

This systematic review sought to evaluate the effectiveness and characteristics of audit and feedback interventions used to improve quality, safety, efficiency, and overall performance in emergency care. Drawing upon the framework of Ivers et al [7] but focusing on the specialty of emergency medicine, this review identified specific forms of audit and feedback currently in use in emergency care that were broadly effective across numerous clinical conditions despite the use of different forms of intervention. Characteristics of audit and feedback interventions that were used in a majority of studies were (1) feedback that is explicit with measurable instruction and a plan for change (in 20 of 24 studies), (2) timing of feedback greater than 1 week after audited performance (in 17 of 24 studies), (3) combined use of feedback media (eg, some combination of verbal, electronic, or written, in 16 of 24 studies), (4) an intervention intended to increase or improve current behavior (ie, error of omission as opposed to error of commission, in 15 of 24 studies), (5) feedback given at both the individual and group levels (in 15 of 24 studies), (6) feedback given in the clinical setting (in 15 of 24 studies), and (7) a combination of feedback types (eg, some combination of reminders, education, or organizational-level intervention, in 15 of 24 studies). Even when a majority of studies used particular features, treatment effects were highly variable and overlapping across groups, limiting the ability to differentiate which features of audit and feedback are most effective in the ED environment.

Ivers et al [7] found audit and feedback to be most effective when baseline performance was low, the source was a supervisor or colleague, the intervention was provided more than once, it was delivered in both verbal and written formats, and it included explicit targets and action plans. The distinctive features of audit and feedback that are critical to success in the ED may differ from those in the clinical settings reported by Ivers et al due to the different clinical environment, insufficient research in this area, or a lack of integration between operational quality improvement and academic research efforts.

This review also suggests that audit and feedback was most effective when baseline performance was low or moderate and when feedback was delivered in verbal and written formats and included explicit targets and action plans. Of note, the large treatment effects (17.8%-50.5%) identified in this review when colleagues were used as a source of feedback suggest that, in the ED, feedback from a peer or colleague may be better than supervisor feedback. It is difficult to conclude whether these differences are specialty specific, a product of limited evidence, or an observation limited by our inability to conduct meta-analysis. Future research on audit and feedback in the ED could study this effect by seeking to identify the specific aspects of audit or feedback that drive effectiveness.

Fig. 2. Distribution of included articles published per year. Asterisk (*) notes a 13-month year through January 2014.

Given the growing literature on audit and feedback in the ED, the broad effectiveness of the studies included in this review, and the relative paucity of randomized studies on the topic, the included studies' methodological rigor was assessed systematically so that the findings of nonrandomized studies could be interpreted reliably. The Downs and Black scores reported here (as well as the fact that only 1 study was randomized and the remainder used historical controls without clear evidence of controlling for confounding) are suggestive of low methodological quality for audit and feedback studies in emergency medicine. In particular, the lack of blinding is concerning for performance and detection bias, and the lack of randomization of study subjects to intervention groups is concerning for selection bias. However, because all the studies attempted to be generalizable, there may be some external validity to their findings.

Appendix A. Full search strategy

Ovid MEDLINE(R) 1946 to February Week 1 2014 (finished 11/26/13)

| # | Search | Results | Search type |
|---|---|---|---|
| 1 | (audit? adj3 feedback).tw. | 506 | Advanced |
| 2 | clinical audit/ | 721 | Advanced |
| 3 | medical audit/ | 14512 | Advanced |
| 4 | exp management audit/ | 12152 | Advanced |
| 5 | exp "commission on professional and hospital activities"/ | 222 | Advanced |
| 6 | feedback/ | 25995 | Advanced |
| 7 | feedback, psychological/ | 2133 | Advanced |
| 8 | exp "utilization review"/ | 10114 | Advanced |
| 9 | peer review, health care/ | 1274 | Advanced |
| 10 | (audit or audits or auditing).tw. | 23526 | Advanced |
| 11 | feedback.tw. | 71992 | Advanced |
| 12 | (review adj3 record?).tw. | 9411 | Advanced |
| 13 | chart review.tw. | 18916 | Advanced |
| 14 | (practice data or hospital* data).tw. | 3012 | Advanced |
| 15 | benchmark.tw. | 8079 | Advanced |
| 16 | exp evidence-based medicine/ | 52292 | Advanced |
| 17 | exp professional practice/ | 214666 | Advanced |
| 18 | physician's practice patterns/ | 39907 | Advanced |
| 19 | (practice pattern? or pattern of practice).tw. | 4319 | Advanced |
| 20 | (performance adj2 (health* personnel or health care personnel or physician? or doctor? or clinician? or provider? or practioner? or resident? or professional? or clinical)).tw. | 8723 | Advanced |
| 21 | ((influence* or chang*) adj3 (behaviour* or behavior*)).tw. | 43322 | Advanced |
| 22 | 17 or 18 or 19 or 20 or 21 | 302263 | Advanced |
| 23 | quality assurance, health care/ | 48130 | Advanced |
| 24 | "quality of health care"/ | 55779 | Advanced |
| 25 | (quality adj (assurance or improvement or control)).tw. | 53364 | Advanced |
| 26 | (health care quality or healthcare quality or quality of healthcare or quality of health care or quality of care).tw. | 3235 | Advanced |
| 27 | exp "outcome and process assessment (health care)"/ | 68942 | Advanced |
| 28 | "Length of Stay"/ | 56493 | Advanced |
| 29 | treatment outcome/ | 602036 | Advanced |
| 30 | patient discharge/ | 18307 | Advanced |
| 31 | exp patient admission/ | 18111 | Advanced |
| 32 | 23 or 24 or 25 or 26 or 27 or 28 or 29 or 30 or 31 | 894212 | Advanced |
| 33 | emergency medical services/ | 31114 | Advanced |
| 34 | emergency service, hospital/ or trauma centers/ | 47896 | Advanced |
| 35 | (emergency adj2 (care or healthcare or department? or unit or units or room? or treatment?)).ti,ab. | 63924 | Advanced |
| 36 | (emergency adj service?).ti,ab. | 3763 | Advanced |
| 37 | (urgent adj2 (care or healthcare or health care)).ti,ab. | 1108 | Advanced |
| 38 | emergency medicine/ | 9650 | Advanced |
| 39 | 33 or 34 or 35 or 36 or 37 or 38 | 118185 | Advanced |
| 40 | physicians/ | 59587 | Advanced |
| 41 | (physician$ or doctor$ or specialist$).tw. | 365969 | Advanced |
| 42 | "internship and residency"/ | 34103 | Advanced |
| 43 | exp health personnel/ | 362270 | Advanced |
| 44 | 40 or 41 or 42 or 43 | 679264 | Advanced |
| 45 | (clinical adj2 polic?).tw. | 787 | Advanced |
| 46 | (clinical adj2 guideline?).tw. | 14005 | Advanced |
| 47 | (practice adj2 guideline?).tw. | 13510 | Advanced |
| 48 | exp Practice Guideline/ | 18527 | Advanced |
| 49 | 1 or 2 or 3 or 4 or 5 or 6 or 7 or 8 or 9 or 10 or 11 or 12 or 13 or 14 or 15 or 16 or 45 or 46 or 47 or 48 | 253191 | Advanced |
| 50 | 22 or 32 or 44 | 1704295 | Advanced |
| 51 | 39 and 49 and 50 | 4319 | Advanced |
| 52 | limit 51 to (english language and yr = "1994 -Current") | 3814 | Advanced |

Audit and feedback is increasingly being used by organizational and ED leaders to effect change in physician behavior in the ED. Differences in the physical environment, practice setting, patient population, and treatment goals between the ED and other clinical settings may contribute to differences in the utility of specific interventions. To differentiate which features of audit and feedback are most effective in the ED environment, future research should report both positive and negative findings, follow the highest standards of methodological rigor (eg, blinding and randomizing subjects), and compare the effectiveness of promising strategies (eg, combination vs single types, media, or timing of feedback).

4.1. Limitations

This systematic review enabled synthesis of many nonrandomized studies previously not included or assessed by systematic review. However, inclusion of this broad literature across a variety of clinical conditions likely precluded the ability to conduct meta-analysis. Research in quality and safety, performance improvement, and audit and feedback is often qualitative and descriptive. In fact, only 6 of the 24 included studies reported clinically similar data allowing for comparison. The remaining 18 yielded heterogeneous data too dissimilar to incorporate into the previously planned meta-analysis. Thus, this review focused on the identification and synthesis of specific characteristics. Furthermore, quality improvement interventions like audit and feedback target physician behavior in order to improve patient outcomes, yet studies rarely reported the number of physicians being studied or any patient-specific outcomes, making it challenging to assess the effectiveness of these audit and feedback interventions on meaningful outcomes.

The fact that 23 of the 24 studies resulted in improvement of measured outcomes suggests a strong positive-results publication bias in the literature on ED audit and feedback. As a result of this publication bias, it is unclear which audit and feedback characteristics are unsuccessful or may even negatively impact care. This bias also limits interpretation of which characteristics of the interventions in the available literature are actually contributing to the improvement in outcomes. The literature on audit and feedback in the ED has positive-results publication bias and low methodological quality with substantial heterogeneity, making it difficult to draw conclusions about the efficacy of specific intervention characteristics.

5. Conclusion

This systematic review describes numerous publications demonstrating the effectiveness of a broad range of audit and feedback interventions in the ED setting. Interventions appear to be most efficacious at improving time to treatment, guideline compliance, and rates of behaviors such as testing, errors, and discharge when feedback targets errors of omission and is explicit with measurable instruction and a plan for change delivered in the clinical setting greater than 1 week after the audited performance, using a combination of media and types, at both the individual and group levels. More research in this area, with more standardized reporting methods and comparative effectiveness assessment of different intervention characteristics, is necessary to optimize the future use of audit and feedback to improve emergency physician performance and patient outcomes.

PubMed (finished 12/12/13)

(((audit* AND (feedback OR clinical OR medical OR management)) OR feedback OR audit* OR Benchmark* OR (assess*) OR (chart AND review))) AND ((outcome* OR (quality AND improvement) OR (practice AND pattern*) OR (influence* OR (change* OR changing))) AND (behavior* OR behaviour* OR competence)) AND (emergency AND (care OR medicine OR healthcare OR department* OR service* OR unit OR units OR room* OR treatment*)) AND ((publisher[sb] NOT pubstatusnihms NOT pubstatuspmcsd NOT pmcbook) OR inprocess[sb] OR pubmednotmedline[sb] OR oldmedline[sb] OR ((pubstatusnihms OR pubstatuspmcsd) AND publisher[sb]))

Appendix B. Downs and Black score

Embase 11-22

      1. (audit* adj3 feedback).tw.
      2. exp medical audit/
      3. exp feedback system/
      4. exp negative feedback/
      5. exp positive feedback/
      6. exp “utilization review”/
      7. exp “medical record review”/
      8. audit*.tw.
      9. feedback.tw.
      10. (record adj3 review).tw.
      11. chart review.tw.
      12. benchmark*.tw.
      13. exp evidence based medicine/
      14. or/1-13
      15. exp professional practice/
      16. exp professional competence/
      17. exp clinical competence/
      18. exp diagnostic accuracy/
      19. exp diagnostic error/
      20. (practice pattern? or pattern of practice).tw.
      21. ((clinical or medical or professional or hospital?) adj practice?).tw.
      22. ((physician? or doctor? or clinician? or nurse or resident? or professional? or nursing or clinical) adj3 performance).tw.
      23. exp “length of stay”/
      24. exp follow up/
      25. exp health care quality/
      26. exp quality control/
      27. exp total quality management/
      28. exp health care planning/
      29. ((clinical or medical or professional or hospital?) adj practice?).tw.
      30. (quality adj (assurance or improve$ or control)).tw.
      31. exp treatment outcome/
      32. exp hospital discharge/
      33. exp hospital admission/
      34. ((physician? or doctor? or clinician? or nurse or residen? or professional? or nursing or clinical) adj3 (skill? or behavior or behaviour or competence)).tw.
      35. ((influenc* or chang*) adj3 (behavior* or behaviour*)).tw.
      36. exp emergency ward/
      37. exp emergency medicine/
      38. exp emergency physician/
      39. exp emergency health service/
      40. exp physician/
      41. exp resident/
      42. or/40-41
      43. or/36-39
      44. or/23-28,30-35
      45. 14 and 42 and 44 and 43
      46. or/15-22
      47. 14 and 46 and 44 and 43
      48. 14 and 42 and 44 and 43
      49. 47 or 48
      50. limit 49 to (english language and yr = “1994 -Current”)

| # | PsycInfo Search | Results | Search Type |
|---|---|---|---|
| 1 | (audit* adj3 feedback).tw. | 1248 | Advanced |
| 2 | clinical audits/ | 239 | Advanced |
| 3 | feedback.tw. | 43589 | Advanced |
| 4 | feedback.ti. | 9539 | Advanced |
| 5 | feedback/ | 12177 | Advanced |
| 6 | exp Utilization Reviews/ | 103 | Advanced |
| 7 | audit*.tw. | 53246 | Advanced |
| 8 | (record adj3 review$).tw. | 6102 | Advanced |
| 9 | chart review.tw. | 2375 | Advanced |
| 10 | benchmark*.tw. | 4428 | Advanced |
| 11 | exp Evidence Based Practice/ | 10075 | Advanced |
| 12 | exp Clinical Practice/ | 11193 | Advanced |
| 13 | exp Professional Competence/ | 4498 | Advanced |
| 14 | clinical competence.mp. | 473 | Advanced |
| 15 | diagnos$ accura$.tw. | 1436 | Advanced |
| 16 | diagnos$ error.tw. | 110 | Advanced |
| 17 | (practice pattern* or pattern of practice).tw. | 594 | Advanced |
| 18 | ((clinical or medical or professional or hospital?) adj practice?).tw. | 37203 | Advanced |
| 19 | ((physician? or doctor? or clinician? or nurse or residen? or professional? or nursing or clinical) adj3 (skill? or behavior or behaviour or competence)).tw. | 12055 | Advanced |
| 20 | exp Treatment Duration/ | 6325 | Advanced |
| 21 | exp Followup Studies/ | 12300 | Advanced |
| 22 | exp "Quality of Care"/ | 8316 | Advanced |
| 23 | exp Quality Control/ | 1162 | Advanced |
| 24 | (quality adj (assurance or improve$ or control)).tw. | 5432 | Advanced |
| 25 | exp Hospital Discharge/ | 2433 | Advanced |
| 26 | exp Hospital Admission/ | 3806 | Advanced |
| 27 | ((influenc* or chang*) adj3 (behavior* or behaviour*)).tw. | 46196 | Advanced |
| 28 | exp Health Care Services/ | 76565 | Advanced |
| 29 | exp Professional Standards/ | 7851 | Advanced |
| 30 | decision support.mp. | 3316 | Advanced |
| 31 | exp Evaluation Criteria/ | 862 | Advanced |
| 32 | emergency services/ | 5247 | Advanced |
| 33 | emergency medicine.mp. | 323 | Advanced |
| 34 | (emergency adj2 (physician$ or residen$)).mp. [mp = title, abstract, heading word, table of contents, key concepts, original title, tests & measures] | 343 | Advanced |
| 35 | or/1-11 | 117642 | Advanced |
| 36 | or/12-31 | 209909 | Advanced |
| 37 | or/32-34 | 5567 | Advanced |
| 38 | 35 and 36 and 37 | 71 | Advanced |
| 39 | limit 38 to english language | 71 | Advanced |
| 40 | limit 39 to yr = "1994 -Current" | 64 | Advanced |
| 41 | (practice adj3 pattern$).tw. | 893 | Advanced |
| 42 | exp Treatment Guidelines/ | 4161 | Advanced |
| 43 | (clinical adj3 guideline$).tw. | 3045 | Advanced |
| 44 | (practice adj3 guideline$).tw. | 3323 | Advanced |
| 45 | (clinical adj2 polic$).tw. | 609 | Advanced |
| 46 | or/12-16,18-31,41 | 210117 | Advanced |
| 47 | 35 and 46 and 37 | 71 | Advanced |
| 48 | or/42-45 | 8912 | Advanced |
| 49 | 48 and 46 and 37 | 40 | Advanced |
| 50 | 47 or 49 | 106 | Advanced |
| 51 | limit 50 to (english language and yr = "1994 -Current") | 96 | Advanced |

Is the hypothesis/aim/objective of the study clearly described? Must be explicit

Are the main outcomes to be measured clearly described in the Introduction or Methods section?

If the main outcomes are first mentioned in the Results section, the question should be answered no. ALL primary outcomes should be described for YES

Are the characteristics of the study subjects included in the study clearly described?

In cohort studies and trials, inclusion and/or exclusion criteria should be given. In case-control studies, a case definition and the source for controls should be given. Single-case studies must state source of patient

Are the interventions of interest clearly described? Treatments and placebo (where relevant) that are to be compared should be clearly described.

Are the distributions of principal confounders in each group of subjects to be compared clearly described? A list of principal confounders is provided. YES = age, severity

Are the main findings of the study clearly described? Simple outcome data (including denominators and numerators) should be reported for all major findings so that the reader can check the major analyses and conclusions.

Does the study provide estimates of the random variability in the data for the main outcomes? In nonnormally distributed data, the interquartile range of results should be reported. In normally distributed data, the standard error, standard deviation, or confidence intervals should be reported

Have the characteristics of study subjects lost to follow-up been described? If not explicit = NO. RETROSPECTIVE - if not described = UTD; if not explicit re: numbers agreeing to participate = NO. Needs to be >85%

Have actual probability values been reported (eg, 0.035 rather than <0.05) for the main outcomes except where the probability value is less than 0.001?

Were the subjects asked to participate in the study representative of the entire population from which they were recruited? The study must identify the source population for patients and describe how the patients were selected.

Were those subjects who were prepared to participate representative of the entire population from which they were recruited?

The proportion of those asked who agreed should be stated.

Were the staff, places, and facilities where the study subjects practiced representative of the majority of practice environments? For the question to be answered yes, the study should demonstrate that the intervention was representative of that in use in the source population. Must state type of hospital and country for YES.

Was an attempt made to blind study subjects to the intervention they have received? For studies where the patients would have no way of knowing which intervention they received, this should be answered yes. Retrospective, single group = NO; UTD if >1 group and blinding not explicitly stated

Was an attempt made to blind those measuring the main outcomes of the intervention? Must be explicit

If any of the results of the study were based on “data dredging”, was this made clear?

Any analyses that had not been planned at the outset of the study should be clearly indicated. Retrospective = NO. Prospective = YES

In trials and cohort studies, do the analyses adjust for different lengths of follow-up of study subjects, or in case-control studies, is the time period between the intervention and outcome the same for cases and controls? Where follow-up was the same for all study patients, the answer should be yes. Studies where differences in follow-up are ignored should be answered no. Acceptable range: 1-year follow-up = 1 month each way; 2-year follow-up = 2 months; 3-year follow-up = 3 months … 10-year follow-up = 10 months

Were the statistical tests used to assess the main outcomes appropriate? The statistical techniques used must be appropriate to the data. If no tests done, but would have been appropriate to do = NO

Was compliance with the intervention/s reliable? Where there was noncompliance with the allocated treatment or where there was contamination of one group, the question should be answered no. Surgical studies will be YES unless procedure not completed.

Were the main outcome measures used accurate (valid and reliable)? Where outcome measures are clearly described, which refer to other work or that demonstrates the outcome measures are accurate = YES. ALL primary outcomes valid and reliable for YES

Were the study subjects in different intervention groups (trials and cohort studies) or were the cases and controls (case-control studies) recruited from the same population? Patients for all comparison groups should be selected from the same hospital. The question should be answered UTD for cohort and case-control studies where there is no information concerning the source of patients

Were study subjects in different intervention groups (trials and cohort studies) or were the cases and controls (case-control studies) recruited over the same time? For a study which does not specify the time period over which patients were recruited, the question should be answered as UTD. Surgical studies must be <10 years for YES; if >10 years, then NO

Were study subjects randomized to intervention groups? Studies which state that subjects were randomized should be answered yes except where method of randomization would not ensure random allocation.

Was the randomized intervention assignment concealed from both patients and health care staff until recruitment was complete and irrevocable? All nonrandomized studies should be answered no. If assignment was concealed from patients but not from staff, it should be answered no.

Was there adequate adjustment for confounding in the analyses from which the main findings were drawn? In nonrandomized studies, if the effect of the main confounders was not investigated or no adjustment was made in the final analyses, the question should be answered no. If no significant difference between groups is shown, then YES

Were losses of study subjects to follow-up taken into account? If the numbers of patients lost to follow-up are not reported = unable to determine.
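Each checklist item above is answered YES, NO, or UTD (unable to determine), and the answers are tallied into the per-study quality score reported in the Results. A simplified illustration of that tally (one point per YES; the real instrument weights a few items differently, so this is a sketch, not the scoring manual):

```python
from typing import Mapping


def downs_black_score(answers: Mapping[str, str]) -> int:
    """Sum one point per 'yes' answer; 'no' and 'utd' (unable to
    determine) score zero. One-point-per-item is a simplification of
    the Downs and Black instrument, which scores some items on a
    wider scale."""
    return sum(1 for answer in answers.values() if answer.strip().lower() == "yes")
```

For instance, a study answered YES on randomization, NO on concealment, and UTD on compliance would contribute one point from these three items toward its total.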

References

  1. McGlynn EA, Asch SM, Adams J, Keesey J, Hicks J, DeCristofaro A, et al. The quality of health care delivered to adults in the United States. N Engl J Med 2003;348:2635-45.
  2. Lang ES, Wyer PC, Haynes RB. Knowledge translation: closing the evidence-to- practice gap. Ann Emerg Med 2007;49:355-63.
  3. Lindenauer PK, Remus D, Roman S, Rothberg MB, Benjamin EM, Ma A, et al. Public reporting and pay for performance in hospital quality improvement. N Engl J Med 2007;356:486-96.
  4. Kawamoto K, Houlihan CA, Balas EA, Lobach DF. Improving clinical practice using clinical decision support systems: a systematic review of trials to identify features critical to success. BMJ 2005;330(7494):1-8 [765, Epub ahead of print].
  5. Stiell IG, Wells GA. Methodologic standards for the development of clinical decision rules in emergency medicine. Ann Emerg Med 1999;33:437-47.
  6. Croskerry P. The feedback sanction. Acad Emerg Med 2000;7:1232-8.
  7. Ivers N, Jamtvedt G, Flottorp S, Young JM, Odgaard-Jensen J, French SD, et al. Audit and feedback: effects on professional practice and healthcare outcomes. Cochrane Database Syst Rev 2012;6:1-227 [CD000259].
  8. Lavoie CF, Schachter H, Stewart AT, McGowan J. Does outcome feedback make you a better emergency physician? A systematic review and research framework proposal. Can J Emerg Med 2009;11:545-52.
  9. Bernstein SL, Aronsky D, Duseja R, Epstein S, Handel D, Hwang U, et al. The effect of emergency department crowding on clinically oriented outcomes. Acad Emerg Med 2009;16:1-10.
  10. Westbrook JI, Coiera E, Dunsmuir WT, Brown BM, Kelk N, Paoloni R, et al. The impact of interruptions on clinical task completion. Qual Saf Health Care 2010;19:284-9.
  11. Improving emergency physician performance using audit and feedback: a systematic review of trials to identify features critical to success. http://www.crd.york.ac.uk/PROSPERO/display_record.asp?ID=CRD42014009385; 2014. [Accessed March 18, 2015].
  12. Moher D, Liberati A, Tetzlaff J, Altman DG. Preferred reporting items for systematic reviews and meta-analyses: the PRISMA statement. Int J Surg 2010;8:336-41.
  13. Downs SH, Black N. The feasibility of creating a checklist for the assessment of the methodological quality both of randomised and non-randomised studies of health care interventions. J Epidemiol Community Health 1998;52:377-84.
  14. Higgins JP, Thompson SG, Deeks JJ, Altman DG. Measuring inconsistency in meta- analyses. BMJ 2003;327:557-60.
  15. Krall SP, Reese CL 4th, Donahue L. Effect of continuous quality improvement methods on reducing triage to thrombolytic interval for acute myocardial infarction. Acad Emerg Med 1995;2:603-9.
  16. Guidry UA, Paul SD, Vega J, Harris C, Chaturvedi R, O’Gara PT, et al. Impact of a simple inexpensive quality assurance effort on physician’s choice of thrombolytic agents and door-to-needle time: implication for costs of management. J Thromb Thrombolysis 1998;5:151-7.
  17. Ramoska EA. Information sharing can reduce laboratory use by emergency physicians. Am J Emerg Med 1998;16:34-6.
  18. Hadjianastassiou VG, Karadaglis D, Gavalas M. A comparison between different formats of educational feedback to junior doctors: a prospective pilot intervention study. J R Coll Surg Edinb 2001;46:354-7.
  19. Chern CH, How CK, Wang LM, Lee CH, Graff L. Decreasing clinically significant adverse events using feedback to emergency physicians of telephone follow-up outcomes. Ann Emerg Med 2005;45:15-23.
  20. Katz DA, Aufderheide TP, Bogner M, Rahko PR, Brown RL, Brown LM, et al. The impact of unstable angina guidelines in the triage of emergency department patients with possible acute coronary syndrome. Med Decis Mak 2006;26:606-16.
  21. Doherty S, Jones P, Stevens H, Davis L, Ryan N, Treeve V. “Evidence-based implementation” of paediatric asthma guidelines in a rural emergency department. J Paediatr Child Health 2007;43:611-6.
  22. Metlay JP, Camargo Jr CA, MacKenzie T, McCulloch C, Maselli J, Levin SK, et al. Cluster-randomized trial to improve antibiotic use for adults with acute respiratory infections treated in emergency departments. Ann Emerg Med 2007;50:221-30.
  23. Nguyen HB, Corbett SW, Steele R, Banta J, Clark RT, Hayes SR, et al. Implementation of a bundle of quality indicators for the early management of severe sepsis and septic shock is associated with decreased mortality. Crit Care Med 2007;35:1105-12.
  24. Buising KL, Thursky KA, Black JF, MacGregor L, Street AC, Kennedy MP, et al. Improving antibiotic prescribing for adults with community acquired pneumonia: does a computerised decision support system achieve more than academic detailing alone?-a time series analysis. BMC Med Inform Decis Mak 2008;8:1-10 [35].
  25. Bessen T, Clark R, Shakib S, Hughes G. A multifaceted strategy for implementation of the Ottawa ankle rules in two emergency departments. BMJ 2009;339:b3056.
  26. Burstin HR, Conn A, Setnik G, Rucker DW, Cleary PD, O’Neil AC, et al. Benchmarking and quality improvement: the Harvard Emergency Department Quality Study. Am J Med 1999;107:437-49.
  27. Weiner SG, Brown SF, Goetz JD, Webber CA. Weekly E-mail reminders influence emergency physician behavior: a case study using the Joint Commission and Centers for Medicare and Medicaid Services Pneumonia Guidelines. Acad Emerg Med 2009;16:626-31.
  28. Rudiger-Sturchler M, Keller DI, Bingisser R. Emergency physician intershift handover-can a dINAMO checklist speed it up and improve quality? Swiss Med Wkly 2010;140:1-5 [w13085].
  29. Hill PM, Rothman R, Saheed M, Deruggiero K, Hsieh YH, Kelen GD. A comprehensive approach to achieving near 100% compliance with the Joint Commission Core Measures for pneumonia antibiotic timing. Am J Emerg Med 2011;29:989-98.
  30. McIntosh KA, Maxwell DJ, Pulver LK, Horn F, Robertson MB, Kaye KI, et al. A quality improvement initiative to improve adherence to national guidelines for empiric management of community-acquired pneumonia in emergency departments. Int J Qual Health Care 2011;23:142-50.
  31. Welch S, Dalto J. Improving door-to-physician times in 2 community hospital emergency departments. Am J Med Qual 2011;26:138-44.
  32. Capraro A, Stack A, Harper MB, Kimia A. Detecting unapproved abbreviations in the electronic medical record. Jt Comm J Qual Patient Saf 2012;38:178-83.
  33. Plambech MZ, Lurie AI, Ipsen HL. Initial, successful implementation of sepsis guidelines in an emergency department. Dan Med J 2012;59:1-5 [A4545].
  34. Ratnapalan S, Brown K, Cieslak P, Cohen-Silver J, Jarvis A, Mounstephen W. Charting errors in a teaching hospital. Pediatr Emerg Care 2012;28:268-71.
  35. Volpe D, Harrison S, Damian F, Rachh P, Kahlon PS, Morrissey L, et al. Improving timeliness of antibiotic delivery for patients with fever and suspected neutropenia in a pediatric emergency department. Pediatrics 2012;130:e201-10.
  36. Pichert JW, Moore IN, Karrass J, Jay JS, Westlake MW, Catron FT, et al. An intervention model that promotes accountability: peer messengers and patient/family complaints. Jt Comm J Qual Patient Saf 2013;39:435-46.
  37. Wu KH, Cheng FJ, Li CJ, Cheng HH, Lee WH, Lee CW. Evaluation of the effectiveness of peer pressure to change disposition decisions and patient throughput by emergency physician. Am J Emerg Med 2013;31:535-9.
  38. Nguyen MC, Richardson DM, Hardy SG, Cookson RM, MacKenzie RS, Greenberg MR, et al. Computer-based reminder system effectively impacts physician documentation. Am J Emerg Med 2014;32:104-6.