Article, Cardiology

Variable methodological quality and use found in systematic reviews referenced in STEMI clinical practice guidelines

Abstract

Background: The objective of this study was to assess the methodological quality and clarity of reporting of the systematic reviews (SRs) supporting clinical practice guideline (CPG) recommendations for the management of ST-elevation myocardial infarction (STEMI) across international CPGs.

Methods: We searched 13 guideline clearinghouses, including the National Guideline Clearinghouse and the Guidelines International Network (GIN). To meet inclusion criteria, CPGs had to be pertinent to the management of STEMI, endorsed by a governing body or national organization, and written in English. We retrieved SRs from the reference sections using a combination of keyword and hand searching. Two investigators scored eligible SRs using AMSTAR and PRISMA.

Results: We included four CPGs, from which we extracted 71 unique SRs. These SRs received AMSTAR scores ranging from 1 (low) to 9 (high) on an 11-point scale. According to AMSTAR, the SRs cited in all CPGs consistently underperformed in areas including disclosure of funding sources, risk of bias, and publication bias. PRISMA checklist completeness ranged from 44% to 96%. The PRISMA scores indicated that SRs often did not provide a full search strategy, a study protocol and registration, an assessment of publication bias, or funding sources. Only one SR was referenced in all four CPGs. All CPGs omitted a large subset of available SRs cited by other guidelines.

Conclusions: Our study demonstrates the variable quality of SRs used to establish recommendations within the guidelines included in our sample. Although guideline developers have acknowledged this variability, it remains a significant finding that needs to be addressed further.

Funding: This research did not receive any specific grant from funding agencies in the public, commercial, or not- for-profit sectors.


  1. Introduction

Clinical practice guidelines (CPGs) have influenced clinical practice for several decades and, despite their near-universal use, questions are continually raised concerning the quality of the evidence underpinning these guidelines. In particular, several recently published randomized controlled trials (RCTs) have called into question the validity of some recommendations in the American College of Cardiology
(ACC)/American Heart Association (AHA) and European Society of Cardiology (ESC) ST-elevation myocardial infarction (STEMI) clinical practice guidelines. The CvLPRIT, PRAMI, and DANAMI3-PRIMULTI trials sparked debate on current recommendations for percutaneous coronary intervention [1,2]. Results from these trials (Level B evidence) support multivessel revascularization over the currently recommended culprit-only and staged multivessel revascularization because of a reduction in the incidence of adverse events. Multivessel revascularization is currently an ACC/AHA and ESC Class IIb (may be considered) recommendation, but in light of more current evidence, this recommendation may need to be re-evaluated [1,2]. Furthermore, low levels of evidence and low-quality evidence are used to form many recommendations [3,4]. Oxygen therapy, recommended by the ACC/AHA and ESC guidelines, is supported by Level C evidence, suggesting that large RCTs and meta-analyses are needed to evaluate the effects of oxygen and form an appropriate recommendation for its use [4]. Although a statement of directives published in 2013 by the ACC/AHA [5] emphasizes formulating recommendations that are supported by
higher-quality evidence, clear shortcomings are evident in this process moving forward. While much debate centers on lower levels of evidence, no attempt has been made to evaluate the quality of the highest level of evidence, Level A, formed by systematic reviews (SRs) and meta-analyses [5]. Our study aims to identify the methodological quality and transparency of reporting of the SRs used in the development of CPGs for STEMI, assess the variation in quality of the SRs included in international STEMI guidelines, and evaluate the use of SRs referenced in STEMI CPGs.

2. Methods

2.1. Protocol development and registration

Search strategies, eligibility criteria, and data abstraction were prespecified in the research protocol, which was developed and piloted a priori. This study did not meet the regulatory definition of human subject research as defined in 45 CFR 46.102(d) and (f) of the Department of Health and Human Services' Code of Federal Regulations [6] and therefore was not subject to Institutional Review Board oversight. PRISMA (Preferred Reporting Items for Systematic Reviews and Meta-Analyses) guidelines [7] for SRs and SAMPL (Statistical Analyses and Methods in the Published Literature) guidelines [8] for descriptive statistics were applied when relevant. This study is registered in the University Hospital Medical Information Network Clinical Trials Registry (UMIN-CTR, UMIN000023003), and data from this study are publicly available on figshare (https://figshare.com/projects/STEMI_Guidelines_Quality/17279).

2.2. Identification of eligible clinical practice guidelines

One author (J.S.) searched the National Guideline Clearinghouse, the Scottish Intercollegiate Guidelines Network (SIGN), the Guidelines International Network (GIN), the National Library of Health (UK), the Academy of Medicine Malaysia, and other countries' national healthcare organizations to retrieve CPGs relevant to the study topic. We defined the term clinical practice guideline a priori using the National Academies of Science, Engineering, and Medicine's definition [9]. To be included in the study, CPGs must have been specifically about STEMI (no guidelines on acute coronary syndrome or non-ST-elevation myocardial infarction were included) and recognized by a national or governmental body and/or professional organization. For CPGs with multiple versions, the most recent version was included. To reduce translation errors, only guidelines published in English were included. The documented flow of the search strategy can be found in Fig. 1.

2.3. Study selection

Following retrieval of relevant CPGs, a team of 2 reviewers (J.S., B.H.) searched the reference section of each guideline using a combination of keyword and hand searching to identify all articles likely to represent SRs in full text. Studies that could not be identified by title alone were located, and their abstracts were reviewed for potential inclusion. Disagreements were resolved by consensus, including another review of the title and abstract by both authors. A priori, we used the National Academies of Science, Engineering, and Medicine's definition of an SR [9]. To be considered for inclusion, an SR must have been referenced in a guideline. If the version of the SR used in a guideline had since been updated, or was otherwise unavailable, that SR was excluded.
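For illustration only (this is not the screening software used in the study, and the keyword list is an assumption), a keyword filter of the kind described above could look like the following Python sketch; hand searching and abstract review would still be needed for titles the filter misses.

# Illustrative sketch of the keyword-screening step described above.
# The keyword list and the reference format are assumptions, not the authors' tooling.

KEYWORDS = ("systematic review", "meta-analysis", "meta analysis", "metaanalysis")

def likely_systematic_review(title: str) -> bool:
    """Flag a reference whose title suggests a systematic review or meta-analysis."""
    lowered = title.lower()
    return any(keyword in lowered for keyword in KEYWORDS)

references = [
    "Role of adjunctive thrombectomy and embolic protection devices in acute "
    "myocardial infarction: a comprehensive meta-analysis of randomized trials",
    "A randomized trial of intensive lipid lowering after acute coronary syndromes",
]

candidates = [ref for ref in references if likely_systematic_review(ref)]
print(f"{len(candidates)} of {len(references)} titles flagged for full-text screening")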

2.4. Data abstraction and scoring

Two authors (J.S., B.H.) independently completed abstraction and scoring on a subset of SRs, using piloted abstraction forms. These abstraction forms required each author to locate and directly quote the excerpt of the manuscript that contained the information needed to satisfy each element of the PRISMA and AMSTAR checklists. Following scoring, each score was verified by a second author, who carefully reviewed the assigned score and the associated manuscript excerpts that satisfied the criteria for that score. Disagreements were resolved by consensus between the pair, either by reinterpreting the previously quoted excerpts or by locating a more revealing excerpt that strengthened or weakened the case for a given score. A third-party adjudication process was established in the protocol, but it was not needed. For each SR, the authors abstracted the following study characteristics: year of publication, participant population, intervention, number of primary studies, sample size of primary studies, and study design of primary studies. Authors then independently scored each SR using the PRISMA checklist and the AMSTAR (A Measurement Tool to Assess the Methodological Quality of Systematic Reviews) tool, described in the following sections. Upon completion of scoring the individual SRs, we synthesized the scores using parametric statistics to evaluate the PRISMA and AMSTAR data in accordance with previous analyses [10-12].
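As an illustration of this synthesis step (not the analysis code used in the study; the scores shown are hypothetical), per-guideline summary values of the kind reported in Tables 2 and 3 can be computed as simple means:

# Sketch of the synthesis step: averaging per-SR scores within each guideline.
# The score lists below are hypothetical, not the extracted study data.
from statistics import mean

prisma_completeness = {           # per-SR PRISMA completeness (items met / 27)
    "ACC/AHA": [0.93, 0.72, 0.85],
    "ESC":     [0.93, 0.80, 0.91],
}
amstar_totals = {                 # per-SR AMSTAR totals (0-11)
    "ACC/AHA": [7, 6, 8],
    "ESC":     [7, 7, 8],
}

for guideline in prisma_completeness:
    print(guideline,
          f"mean PRISMA completeness = {mean(prisma_completeness[guideline]):.2f}",
          f"mean AMSTAR total = {mean(amstar_totals[guideline]):.2f}")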

2.5. AMSTAR tool

We used the 11-item AMSTAR tool to assess the methodological quality of the SRs [13]. AMSTAR has been acknowledged as a valid and reliable tool with high interrater reliability, construct validity, and feasibility [13-15]. We used AMSTAR, rather than R-AMSTAR, because AMSTAR is more easily applied [16]. We applied the revisions recommended by Burda et al. [17] to our AMSTAR tool. These changes focus on improving validity, reliability, and usability in assessing methodological quality and include changes to the order of items, the wording of item titles and instructions, and item scope. These recommendations also address aspects noted to be problematic in numerous studies [17-19] and improve specificity to methodological quality over quality of reporting or risk of bias [16,17]. However, an additional item recommended by Burda et al. was not included. This item proposed including subgroup analyses as a quality measure. Subgroup analyses are neither necessary nor sufficient to define an SR, with or without a meta-analysis. While a subgroup analysis is important to consider if it was performed, it has no bearing on methodological quality, and an SR cannot be discounted for not performing one. Scoring of the AMSTAR tool was performed according to the method described by Sharif et al. [20]. Each item was answered with "criteria met," "criteria not met," "criteria partially met," or "not applicable." The answer "not applicable" was available only for item 10 (concerning publication bias) and was selected if the SR included fewer than 10 primary studies, because funnel plot methods lack the power to detect true asymmetry when the number of primary studies is fewer than 10. Points were then awarded for each answer as follows: 1 point for criteria met and 0 points for all other answers. The specific items assessed with the AMSTAR tool are listed in Table 3.
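A minimal sketch of this scoring rule, as we interpret it (the example item answers are hypothetical), follows:

# Sketch of the AMSTAR scoring rule described above: 1 point for "criteria met",
# 0 for any other answer, with "not applicable" reserved for item 10 when the
# review pools fewer than 10 primary studies. Example answers are hypothetical.

def score_amstar(answers: dict[int, str], n_primary_studies: int) -> int:
    total = 0
    for item in range(1, 12):                       # AMSTAR items 1-11
        answer = answers[item]
        if answer == "not applicable":
            # Permitted only for item 10 (publication bias) with < 10 studies,
            # because funnel-plot methods lack power below that threshold.
            assert item == 10 and n_primary_studies < 10
        total += 1 if answer == "criteria met" else 0
    return total

example = {i: "criteria met" for i in range(1, 12)}
example[5] = "criteria not met"           # no list of included/excluded studies
example[10] = "not applicable"            # fewer than 10 primary studies pooled
print(score_amstar(example, n_primary_studies=7))   # -> 9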

2.6. PRISMA checklist

We assessed the clarity of reporting in eligible SRs using the 27-item PRISMA checklist. The checklist has been acknowledged for its usefulness in critically appraising SRs and meta-analyses, even though it was originally developed to help authors improve the quality of their reviews [21-25]. The scoring protocol for this tool was based on guidelines presented in Liberati et al. [21]. Each checklist item was answered with "criteria met," "criteria partially met," or "criteria not met" based on the completeness of reporting. Points were then awarded for each answer as follows: 1 point for criteria met and 0 points for all other answers. The specific items assessed with the PRISMA checklist are listed in Table 2.

Fig. 1. Search strategy and study inclusion.

2.7. Analysis of systematic review use across guidelines

We evaluated the frequency of citation of each SR in the included guidelines (Table 4). For all SRs, we extracted information regarding the topic of the SR, the total sample size of all patients, and the study designs of the primary studies constituting the SR (Supplemental Table 1).
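The citation-frequency tally can be expressed as a simple cross-tabulation; the sketch below uses a hypothetical fragment of the guideline-to-SR mapping rather than the full dataset.

# Illustrative tally of how often each SR is cited across the four guidelines.
# The mapping here is a hypothetical fragment, not the complete extraction.
from collections import Counter

citations = {
    "ACC/AHA":  {"Bavry 2008", "Morrison 2000", "D'Souza 2010"},
    "Malaysia": {"Bavry 2008", "D'Souza 2010"},
    "ESC":      {"Bavry 2008", "Morrison 2000", "D'Souza 2010"},
    "NICE":     {"Bavry 2008", "Morrison 2000"},
}

frequency = Counter(sr for cited in citations.values() for sr in cited)
for sr, count in frequency.most_common():
    print(f"{sr}: cited by {count} of {len(citations)} guidelines")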

3. Results

3.1. Search results

Our search for CPGs for STEMI yielded 30 guidelines: 9 from the National Guideline Clearinghouse, 9 from GIN, 9 from the National Electronic Library of Health (UK), 2 from the Academy of Medicine of Malaysia, and 1 from SIGN. Searches of other countries' health care institutions produced no additional guidelines. Of the 30 guidelines located during our search, several were excluded: 12 were duplicates, 7 focused on nonspecific acute coronary syndrome, 4 focused on NSTEMI, and 3 were not retrievable in English. Our final sample contained 4 guidelines. After obtaining full texts of the 4 included guidelines, 1698 references were extracted from the reference sections of these guidelines. These references were reviewed by title, and when necessary, by title and abstract to identify SRs. We excluded 1589 references that were not SRs. A total of 109 SRs were retained for full-text screening. Of these, 2 SRs were not retrievable (Cucherat M, Bonnefoy E, Tremeau G. Primary angioplasty vs. intravenous thrombolysis for acute myocardial infarction. Cochrane Database Syst Rev. 2003;(3):CD001560; and Sethi A, Bajaj A, Bahekar A, Bhuriya R, Singh P, Singh M, et al. Glycoprotein IIb/IIIa inhibitors with upstream or downstream thienopyridine use improve outcomes after primary percutaneous coronary intervention in high risk patients with ST-elevation myocardial infarction - a meta-regression of randomized controlled trials. Journal of the American College of Cardiology. 2012;59(13 Suppl 1):E260), and 12 were individual patient data analyses. Seventy-one unique SRs were included in the final quality analysis (Fig. 1). Twenty-four of the retained citations were duplicates of SRs used in more than 1 guideline; these shared SRs were retained for our analysis of SR use across guidelines.

3.2. Characteristics of the included guidelines and systematic reviews

Guidelines were published by the ACC/AHA, the ESC, NICE, and Malaysia between 2012 and 2014 (Table 1). These guidelines contained between 340 and 656 individual references and between 9 and 31 SRs. SRs primarily focused on pharmaceutical and procedural interventions, followed by behavioral or lifestyle modifications (Supplemental Table 1, Appendix 1). The number of primary studies analyzed in the included SRs ranged from 3 to 1496 (median = 14; IQR = 17.5), and the number of participants in the included SRs ranged from 100 to 100,000,000 (median = 7414; IQR = 23,786). The majority of SRs were composed of RCTs.

3.3. Results from the PRISMA analysis

Mean PRISMA scores for the SRs were calculated across guidelines, and all means exceeded 0.70 (Table 2). There were several notable deficiencies in adherence to PRISMA checklist items. SRs often failed to report a repeatable, full search strategy for at least 1 database (PRISMA Item 8); across guidelines, there was 8% adherence, and none of the reviews in the ACC/AHA guideline reported a detailed search strategy meeting PRISMA specifications. SRs also failed to report the use of a prespecified protocol or provide a registration number (PRISMA Item 5); across guidelines, there was 10% adherence, and SRs in the ESC guideline had 4% adherence. Funding sources (PRISMA Item 27), information sources (PRISMA Item 7), and risk of bias across studies in the Methods (PRISMA Item 15) and Results (PRISMA Item 22) sections were all below 60% adherence.

Several PRISMA items were more consistently reported across guidelines. Eleven items on the PRISMA checklist had 90% or greater adherence (Table 2). All SRs included the study design in the title (PRISMA Item 1), reported a list of study characteristics of the primary studies (PRISMA Item 18), and provided a rationale for their SR in the introduction (PRISMA Item 3).

3.4. Results from the AMSTAR analysis

Total AMSTAR scores for the SRs were calculated across guidelines, and all totals exceeded 5.8 of 11 criteria (Table 3). As with PRISMA, there were several notable deficiencies. Reviewers often failed to use a standardized tool and approach to assess the quality of the body of evidence (AMSTAR Item 8); across guidelines, there was 1.75% adherence, and none of the reviews in the ACC/AHA, ESC, and NICE guidelines met sufficient criteria for this item. SRs also often failed to disclose conflicts of interest and the funding source for the review and all primary studies (AMSTAR Item 11); across guidelines, there was 2.75% adherence, and none of the reviews in the NICE guideline met sufficient criteria for this item. Finally, reviews often failed to provide a list of included and excluded studies (AMSTAR Item 5); across guidelines, there was 3.75% adherence, and none of the reviews in the ACC/AHA and ESC guidelines met sufficient criteria for this item. Duplicate study selection and data extraction (AMSTAR Item 4) and assessment of publication bias (AMSTAR Item 10) also had adherence rates of 60% or less.

Only AMSTAR Item 6, providing characteristics of the included studies, had complete adherence across guidelines. Providing prespecified review questions or inclusion and exclusion criteria (AMSTAR Item 1), performing a comprehensive literature search (AMSTAR Item 2), including gray literature in the search strategy (AMSTAR Item 3), and using appropriate data synthesis (AMSTAR Item 9) had greater than 80% adherence.

The PRISMA and AMSTAR scores of the 71 SRs included in the final analysis, as well as a summary of SR use across guidelines, are provided in Table 4. A visual representation of all 71 SRs' PRISMA and AMSTAR scores and their associated frequency of citation in the four included guidelines is provided in Fig. 2. Only 1 SR, Bavry et al. [26] (PRISMA = 0.93, AMSTAR = 7/11), was included in all 4 guidelines. Bavry et al. [26] is an SR of 30 RCTs detailing the use of adjunctive thrombectomy and embolic protection devices in 6415 patients with STEMI. Three SRs were included across the ACC/AHA, Malaysian, and ESC guidelines. D'Souza et al. [27] (PRISMA = 0.72, AMSTAR = 6/11) is an SR of 8 RCTs comparing routine early angioplasty with ischemia-guided angioplasty after thrombolysis in 3195 acute STEMI patients. Taylor et al. [28] (PRISMA = 0.85, AMSTAR = 8/11) is an SR of 48 RCTs examining the effects of exercise-based rehabilitation in 8940 patients with MI or post-revascularization status. Borgia et al. [29] (PRISMA = 0.91, AMSTAR = 8/11) is an SR of 7 RCTs examining the effects of early routine percutaneous intervention in 2961 STEMI patients. One SR was included across the ACC/AHA, ESC, and NICE guidelines: Morrison et al. [30] (PRISMA = 0.80, AMSTAR = 7/11), a systematic review of 6 RCTs reporting the effects of prehospital thrombolysis in 6434 acute MI patients. Thirteen SRs were used in exactly 2 guidelines. There were 18 SRs unique to the ACC/AHA guideline, 14 to the Malaysian guideline, 14 to the ESC guideline, and 7 to the NICE guideline.

4. Discussion

Guidelines are a significant component of clinical practice, patient care, and board certification for physicians. These guidelines are often assumed to be the definitive source of evidence-based medicine and the authority on providing patient care. Numerous countries have different guidelines and recommendations, and some of the most heavily researched guidelines are in the cardiovascular field [31,32]. The body of evidence for these guidelines is quite large, suggesting that a very selective process is required to find the best information and incorporate it into recommendations. Since guidelines were first established, there have been critiques of their writers, their reliance on expert opinion, their evidence, and the quality of the underlying data [33]. Among the SRs assessed, the lack of transparency with regard to funding or conflicts of interest, the risk of bias across studies, and the variable overall quality of the SRs between the different CPGs raise important questions regarding the nature of the evidence underpinning STEMI guidelines.

The PRISMA and AMSTAR criteria for disclosure of funding sources and conflicts of interest garnered some of the lowest scores. To receive credit in our review, simple acknowledgment of the funding source or conflicts of interest was required, yet nearly all of the SRs used in these guidelines failed to provide it. The acknowledgment of funding has a significant impact on readers, outcomes, and prescribers [34].

Table 1
Characteristics of the included STEMI CPGs.

Guideline organization | Year of publication | Geographical area of impact | References per guideline | Systematic review frequency | SRs as a proportion of all studies cited by the CPG (%)
American College of Cardiology & American Heart Association (ACC/AHA) | 2013 | United States of America | 656 | 31 | 4.73
European Society of Cardiology (ESC) | 2012 | Europe | 346 | 27 | 7.80
National Institute for Health and Care Excellence (NICE) | 2013 | United Kingdom | 340 | 9 | 2.65
Ministry of Health Malaysia, National Heart Association Malaysia, Academy of Medicine Malaysia (Malaysia) | 2014 | Malaysia | 356 | 28 | 7.87

STEMI = ST-elevation myocardial infarction; CPGs = clinical practice guidelines.

Table 2
Summary of PRISMA completeness scores across the four included guidelines.

PRISMA item | ACC/AHA (n = 31) | ESC (n = 27) | NICE (n = 9) | Malaysia (n = 28) | Mean
1. Title: systematic review, meta-analysis, both? | 1 | 1 | 1 | 1 | 1
2. Abstract: structured summary? | 0.92 | 0.93 | 0.94 | 0.98 | 0.9425
3. Introduction: rationale for review? | 1 | 1 | 1 | 1 | 1
4. Introduction: explicit statement of objectives? | 1 | 0.97 | 1 | 0.91 | 0.97
5. Methods: protocol and registration? | 0.05 | 0.04 | 0.22 | 0.09 | 0.1
6. Methods: eligibility criteria? | 0.95 | 0.94 | 1 | 0.93 | 0.955
7. Methods: information sources? | 0.55 | 0.56 | 0.56 | 0.54 | 0.5525
8. Methods: full search strategy? | 0 | 0.07 | 0.11 | 0.14 | 0.08
9. Methods: study selection process? | 0.97 | 1 | 1 | 0.95 | 0.98
10. Methods: data collection process? | 0.66 | 0.93 | 0.72 | 0.75 | 0.765
11. Methods: data items to be extracted? | 0.9 | 0.96 | 0.89 | 0.89 | 0.91
12. Methods: risk of bias in individual studies? | 0.66 | 0.74 | 0.78 | 0.77 | 0.7375
13. Methods: summary measures? | 0.97 | 1 | 1 | 1 | 0.9925
14. Methods: synthesis of results? | 0.94 | 0.94 | 0.78 | 0.95 | 0.9025
15. Methods: risk of bias across studies? | 0.58 | 0.63 | 0.33 | 0.71 | 0.5625
16. Methods: additional analyses? | 0.68 | 0.74 | 0.83 | 0.7 | 0.7375
17. Results: study selection? | 0.63 | 0.85 | 0.83 | 0.73 | 0.76
18. Results: study characteristics? | 1 | 1 | 1 | 1 | 1
19. Results: risk of bias within studies? | 0.52 | 0.65 | 0.67 | 0.59 | 0.6075
20. Results: results of individual studies? | 0.98 | 1 | 1 | 1 | 0.995
21. Results: synthesis of results? | 1 | 1 | 0.89 | 1 | 0.9725
22. Results: risk of bias across studies? | 0.53 | 0.63 | 0.56 | 0.66 | 0.595
23. Results: additional analyses? | 0.74 | 0.67 | 0.89 | 0.79 | 0.7725
24. Discussion: summary of evidence? | 0.63 | 0.94 | 0.89 | 0.77 | 0.8075
25. Discussion: study limitations? | 0.87 | 0.85 | 0.78 | 0.91 | 0.8525
26. Discussion: conclusions? | 0.94 | 0.83 | 0.83 | 0.91 | 0.8775
27. Funding: funding sources and roles of funders? | 0.34 | 0.61 | 0.44 | 0.5 | 0.4725
Overall PRISMA completeness | 0.74 | 0.80 | 0.78 | 0.78 | 0.77

ACC/AHA = American College of Cardiology & American Heart Association; ESC = European Society of Cardiology; NICE = National Institute for Health and Care Excellence; Malaysia = Ministry of Health Malaysia, National Heart Association Malaysia, Academy of Medicine Malaysia.

In 17 CPGs for cardiovascular medicine published between 2004 and 2008, over half of the authors involved reported some form of conflict of interest with commercial entities [35]. This level of participation can affect readers' opinions regarding the integrity of these CPGs, and some studies have even documented that disclosure of industry funding may cause readers to downgrade the quality of research and question the outcomes [34-36]. Research has also demonstrated that pharmaceutical industry-funded research was more likely to have outcomes favoring the sponsor, but was occasionally less likely to be published [33,36]. The lack of transparency in CPGs and their writers has generated a response from the ACC to help encourage the reporting of conflicts of interest and promote transparency. An example of this measure is the chart outlining the financial disclosures of the authors in the 2015 STEMI update guidelines, but the SRs utilized in earlier CPGs were much less complete [37-40]. Our review of the SRs and resources previously utilized for CPGs (during our study period) failed to show disclosure of financial support, whether from the pharmaceutical industry or not, based on the specific grading system.

The acknowledgment of SRs as high-level evidence is well documented, and the limited number of these studies used to establish guidelines was notable. In the 4 guidelines assessed, SRs accounted for 2.65% to 7.87% of cited references. Even more interesting was the lack of overlap among the guidelines in their use of SRs. Only 1 study, Bavry et al., was utilized by all 4 guidelines [26]. This meta-analysis was used by all 4 guidelines, including the original 2013 AHA/ACC guidelines for STEMI along with their 2015 update, in which a class recommendation was changed [40,41]. A well-reviewed SR by Kumbhani et al., with a PRISMA score of 0.72 and an AMSTAR rating of 7/11, was used in the Malaysian guidelines but not in the AHA/ACC, ESC, or NICE guidelines [42].

Table 3
Summary of AMSTAR completeness scores across the four included guidelines.

AMSTAR item | ACC/AHA (n = 31) | ESC (n = 27) | NICE (n = 9) | Malaysia (n = 28) | Mean
1. Were the review questions and inclusion/exclusion criteria clearly delineated prior to executing the search strategy? | 0.97 | 0.96 | 0.78 | 0.96 | 0.9175
2. Was a comprehensive literature search performed? | 0.81 | 0.85 | 1 | 0.82 | 0.87
3. Was relevant gray literature included in the review? | 0.84 | 0.74 | 0.89 | 0.75 | 0.805
4. Was there duplicate study selection and data extraction? | 0.42 | 0.52 | 0.33 | 0.54 | 0.4525
5. Was a list of studies (included and excluded) provided? | 0 | 0 | 0.11 | 0.04 | 0.0375
6. Were the characteristics of the included studies provided? | 1 | 1 | 1 | 1 | 1
7. Was the risk of bias assessed for each included study, taking into account important potential confounders and other sources of bias relevant to the review question? | 0.55 | 0.7 | 0.67 | 0.64 | 0.64
8. Was the quality of the body of evidence appropriately assessed and considered in formulating the conclusions of the review? | 0 | 0 | 0 | 0.07 | 0.0175
9. Were the data appropriately synthesized in a qualitative manner and, if applicable, was heterogeneity assessed? If a meta-analysis was performed, was it appropriate? | 0.81 | 0.89 | 0.78 | 0.96 | 0.86
10. Was the likelihood of publication bias assessed? | 0.58 | 0.59 | 0.33 | 0.64 | 0.535
11. Were conflicts of interest disclosed for all of the review authors and were the funding sources of the review and of each study within the review reported? | 0.03 | 0.04 | 0 | 0.04 | 0.0275
Total score | 6.01 | 6.29 | 5.89 | 6.46 | 6.1625
AMSTAR methodological quality level | Moderate | Moderate | Moderate | Moderate | Moderate

ACC/AHA = American College of Cardiology & American Heart Association; ESC = European Society of Cardiology; NICE = National Institute for Health and Care Excellence; Malaysia = Ministry of Health Malaysia, National Heart Association Malaysia, Academy of Medicine Malaysia; AMSTAR = A Measurement Tool to Assess Systematic Reviews.

Table 4
Study inclusion across the guidelines included in our analysis [full citations in Appendix 1].

Study author | PRISMA total score | AMSTAR total score | Cited by (guidelines)
Bavry et al. [1] | 0.93 | 7/11 | ACC/AHA, Malaysia, ESC, NICE
D'Souza et al. [2] | 0.72 | 6/11 | ACC/AHA, Malaysia, ESC
Taylor et al. [3] | 0.85 | 8/11 | ACC/AHA, Malaysia, ESC
Borgia et al. [4] | 0.91 | 8/11 | ACC/AHA, Malaysia, ESC
Capes et al. [5] | 0.54 | 4/11 | Two of the four guidelines
Collet et al. [6] | 0.57 | 6/11 | Two of the four guidelines
Dalby et al. [7] | 0.70 | 5/11 | Two of the four guidelines
Freemantle et al. [8] | 0.70 | 5/11 | Two of the four guidelines
Wijeysundera et al. [9] | 0.76 | 6/10 | Two of the four guidelines
Morrison et al. [10] | 0.80 | 7/11 | ACC/AHA, ESC, NICE
Sjauw et al. [11] | 0.80 | 8/11 | Two of the four guidelines
Nordmann et al. [12] | 0.91 | 8/11 | Two of the four guidelines
Vlaar et al. 2011 [13] | 0.91 | 8/11 | ACC/AHA, ESC
Honan et al. [14] | 0.54 | 2/11 | ACC/AHA
Desai et al. [15] | 0.56 | 4/11 | ACC/AHA
Zhu et al. [16] | 0.59 | 4/10 | ACC/AHA
Kearney et al. [17] | 0.59 | 4/11 | ACC/AHA
Cannon et al. [18] | 0.63 | 2/10 | ACC/AHA
Montalescot et al. [19] | 0.69 | 7/11 | ACC/AHA
Bangalore et al. [20] | 0.70 | 6/11 | ACC/AHA
Patti et al. [21] | 0.72 | 5/11 | ACC/AHA
Shimada et al. [22] | 0.72 | 6/11 | ACC/AHA
Dolovich et al. [23] | 0.72 | 7/11 | ACC/AHA
Wilson et al. [24] | 0.72 | 7/11 | ACC/AHA
Agostoni et al. [25] | 0.74 | 6/11 | ACC/AHA
Nijjer et al. [26] | 0.76 | 6/11 | ACC/AHA
Andreotti et al. [27] | 0.80 | 7/11 | ACC/AHA
Appleton et al. [28] | 0.83 | 6/11 | ACC/AHA
Koreny et al. [29] | 0.85 | 7/11 | ACC/AHA
Navarese et al. [30] | 0.85 | 7/11 | ACC/AHA
Palmerini et al. [31] | 0.85 | 7/11 | ACC/AHA
Friedland et al. [32] | 0.63 | 4/11 | Two of the four guidelines
De Luca et al. [33] | 0.81 | 6/11 | Two of the four guidelines
Lawler et al. [34] | 0.87 | 5/11 | Two of the four guidelines
Cabello et al. [35] | 0.91 | 7/10 | Two of the four guidelines
Silvain et al. [36] | 0.91 | 8/11 | Two of the four guidelines
Domanski et al. [37] | 0.57 | 3/11 | Malaysia
Afilalo et al. [38] | 0.69 | 6/11 | Malaysia
Pasceri et al. [39] | 0.69 | 6/11 | Malaysia
Fortmann et al. [40] | 0.70 | 5/11 | Malaysia
Kumbhani et al. [41] | 0.72 | 7/11 | Malaysia
Coventry et al. [42] | 0.78 | 6/11 | Malaysia
Zhao et al. [43] | 0.81 | 8/11 | Malaysia
Kwak et al. [44] | 0.83 | 7/11 | Malaysia
Brar et al. [45] | 0.85 | 7/11 | Malaysia
Critchley et al. [46] | 0.85 | 8/11 | Malaysia
Briel et al. [47] | 0.87 | 8/11 | Malaysia
Palmer et al. [48] | 0.89 | 8/11 | Malaysia
Palmer et al. [49] | 0.91 | 9/11 | Malaysia
Upadhyay et al. [50] | 0.96 | 8/11 | Malaysia
Schmitt et al. [51] | 0.44 | 1/11 | ESC
Hine et al. [52] | 0.54 | 2/11 | ESC
Thackray et al. [53] | 0.63 | 5/11 | ESC
Bahekar et al. [54] | 0.67 | 4/11 | ESC
McAlister et al. [55] | 0.67 | 7/11 | ESC
Cheng et al. [56] | 0.69 | 7/11 | ESC
Lee et al. [57] | 0.76 | 5/11 | ESC
Shah et al. [58] | 0.83 | 5/11 | ESC
Dalal et al. [59] | 0.85 | 7/11 | ESC
De Luca et al. [60] | 0.85 | 7/11 | ESC
Jabre et al. [61] | 0.89 | 7/11 | ESC
Navarese et al. [62] | 0.89 | 8/11 | ESC
Valgimigli et al. [63] | 0.91 | 8/11 | ESC
Piccolo et al. [64] | 0.93 | 7/11 | ESC
Berdowski et al. [65] | 0.59 | 3/11 | NICE
Antithrombotic Trialists' Collaboration [66] | 0.61 | 4/11 | NICE
Sethi et al. [67] | 0.63 | 5/11 | NICE
Jolly et al. [68] | 0.78 | 6/11 | NICE
Sethi et al. [69] | 0.83 | 8/11 | NICE
Vorobcsuk et al. [70] | 0.87 | 6/11 | NICE
Hartwell et al. [71] | 0.94 | 7/11 | NICE

Average PRISMA completeness of all included SRs: 0.76
Average AMSTAR total score of all included SRs: 6.07
AMSTAR averaged total score by guideline: ACC/AHA 6.01; Malaysia 6.46; ESC 6.29; NICE 5.89
Overall PRISMA completeness by guideline: ACC/AHA 0.74; Malaysia 0.78; ESC 0.80; NICE 0.78
AMSTAR methodological quality level: Moderate for all four guidelines

ACC/AHA = American College of Cardiology & American Heart Association; ESC = European Society of Cardiology; NICE = National Institute for Health and Care Excellence; Malaysia = Ministry of Health Malaysia, National Heart Association Malaysia, Academy of Medicine Malaysia; AMSTAR = A Measurement Tool to Assess Systematic Reviews.

Fig. 2. PRISMA & AMSTAR completeness and use across guidelines.

A reader or health care provider might expect an SR of this quality, as measured by AMSTAR and PRISMA, to be used in other CPGs when class recommendations are being adjusted. The 2015 update from the AHA/ACC changed the recommendation regarding thrombectomy or embolic protection from class IIa to class III (relative to the 2013 guidelines), with a level of evidence ranging from C-LD to A.

The observation that only a few of these well-scored SRs were applied across different guidelines is of interest. Only 9 SRs were shared between the Malaysian and AHA/ACC guidelines, while 8 were shared between the ESC and ACC/AHA guidelines. Even less overlap was seen in the NICE guidelines, with only 2 of its SRs used in other guidelines. This finding raises the question of why different guidelines and countries would not use such high-quality SRs, especially those covering controversial topics. Non-culprit intervention during STEMI has been another topic with significant controversy and a recent class guideline change from the AHA/ACC. This topic carries a wide range of recommendations, levels of evidence, and specific situations. The AHA/ACC was the only group to use the well-scored Navarese et al. meta-analysis in the 2013 guidelines, but it did not use this analysis in the 2015 update focusing on non-culprit lesions in STEMI, nor did the ESC in their 2015 update [43]. The Vlaar et al. review also received a high score of 8 in the AMSTAR rating and 0.91 in the PRISMA [44]. This meta-analysis was used in both the AHA/ACC and ESC guidelines in our sample but, among the 2015 updates, only in the AHA/ACC STEMI update. One would expect that a controversial topic such as this, with variable class indications and levels of evidence, should draw on all available data to help guide the different recommendations and support the level of evidence.

4.1. Study limitations

This study has some methodological limitations. First, methodological quality and completeness of reporting are not mutually exclusive when scored from information in the body of a report or its supplementary material. That is, scoring methodological quality from a publication depends on the completeness and clarity of the reporting present. Although it may be possible to grade methodological quality when clarity of reporting is high, as reporting quality decreases, it becomes increasingly difficult to detect even sound methodology, because the indicators of good methodological quality (as represented by AMSTAR items) may not have been properly reported [25]. Second, the AMSTAR tool was specifically designed without guidance on summary measures for deriving a total AMSTAR score [13,17,20]. Our use of a summary quality scale with ratings of low, moderate, and high is thus relatively arbitrary. Lastly, although AMSTAR is a validated tool for assessing methodological quality, our use of modified AMSTAR items based on recent recommendations has yet to be empirically validated [17]. The recommended changes theoretically improve specificity to methodological quality and address known issues in the original AMSTAR tool, but the degree to which the modified tool measures methodological quality is as yet unknown.

Because of the nature of the search performed, only SRs included in CPGs were scored. If articles did not clearly identify themselves as SRs or meta-analyses, they may have been missed despite measures taken to limit such omissions. In addition, these reviews were identified from the reference section of each CPG rather than from specific recommendations. Therefore, results are generalizable only to the CPG as a whole and cannot be extrapolated to specific recommendations unless otherwise noted. It should also be noted that our study was restricted to SRs used within CPGs. This study did not include RCTs, which are often tiered as level A or level B evidence in guideline systems. Likewise, this study did not evaluate included observational studies, although these studies may be the only available evidence to support guideline recommendations. Furthermore, it is important to recognize the limitations of SRs themselves, which are at high risk of bias because of a host of factors, as we have addressed. SRs have contributed greatly to scientific knowledge, and as primary studies continue to be conducted at increasingly higher rates, there will be a need to synthesize findings from these studies in an informed manner.

5. Conclusions

The limited use of quality SRs, both overall and between different guidelines, was an interesting and concerning finding. That CPG writers from different countries would use lower-quality SRs for their own guidelines rather than the higher-quality evidence used in other countries is something that should be addressed in the future. The seemingly individualized use of specific SRs for different CPGs should be carefully evaluated and considered in future CPGs. Many guidelines are developed with limited supporting evidence, drawing on lower levels of evidence or expert opinion, as outlined by Tricoci et al. [33], who showed that only 11% of class indications had level of evidence A while nearly 50% had level of evidence C. Our findings, and those of studies such as Tricoci et al. [33], demonstrate the continued need for improvement in the processes of study design at the primary level and of search strategies for published works at the guideline creation level.

Funding

This research did not receive any specific grant from funding agen- cies in the public, commercial, or not-for-profit sectors.

Acknowledgments

N/A

Appendix A. Supplementary data

Supplementary material.

References

1. Kern M. Limitations of FFR (or any physiologic measurement) during STEMI: implications for FFR-guided revascularization in the ACS patient. Cath Lab Digest April 1 2015;23(4) [Internet]. [cited 24 Oct 2016]. [Available from] http://www.cathlabdigest.com/article/Limitations-FFR-or-any-Physiologic-Measurement-During-STEMI-Implications-FFR-Guided.
2. Dalton K. Trials prompt interventionalists to reconsider complete revascularization for STEMI. TCTMD - the source for interventional cardiovascular news and education [Internet]. Mar 9 2015 [cited 24 Oct 2016]. [Available from]: https://www.tctmd.com/news/trials-prompt-interventionalists-reconsider-complete-revascularization-stemi.
3. Terkelsen CJ, Pinto D, Thiele H, Clemmensen P, Nikus K, Lassen JF, et al. The divergence between European STEMI guidelines and evidence: a potential threat to optimising reperfusion therapy for patients with ST-elevation myocardial infarction. Heart 2013. http://dx.doi.org/10.1136/heartjnl-2013-304117.
4. Shuvy M, Atar D, Gabriel Steg P, Halvorsen S, Jolly S, Yusuf S, et al. Oxygen therapy in acute coronary syndrome: are the benefits worth the risk? Eur Heart J 2013 Jun;34(22):1630-5. http://dx.doi.org/10.1093/eurheartj/eht110.
5. Jacobs AK, Kushner FG, Ettinger SM, Guyton RA, Anderson JL, Ohman EM, et al. ACCF/AHA clinical practice guideline methodology summit report: a report of the American College of Cardiology Foundation/American Heart Association task force on practice guidelines. J Am Coll Cardiol Jan 15 2013;61(2):213-65. http://dx.doi.org/10.1016/j.jacc.2012.09.025.
6. 45 CFR 46.102(d) and (f). Department of Health and Human Services' Code of Federal Regulations [Internet]. Revised January 15, 2009. Accessed on July 7, 2016. [Available from] http://www.hhs.gov/ohrp/regulations-and-policy/regulations/45-cfr-46/#46.102.
7. Moher D, Liberati A, Tetzlaff J, Altman DG, The PRISMA Group. Preferred reporting items for systematic reviews and meta-analyses: the PRISMA statement. PLoS Med 2009;6(7):e1000097. http://dx.doi.org/10.1371/journal.pmed1000097.
8. Lang TA, Altman DG. Basic statistical reporting for articles published in biomedical journals: the SAMPL guidelines. In: Smart P, Maisonneuve H, Polderman A, editors. Science editors' handbook. European Association of Science Editors; 2013.
9. Institute of Medicine (US) Committee on Standards for Developing Trustworthy Clinical Practice Guidelines, Graham R, Mancher M, Miller Wolman D, et al. Clinical practice guidelines we can trust. Washington, DC: National Academies Press (US); 2011 [1, Introduction]. [Available from] https://www.ncbi.nlm.nih.gov/books/NBK209546/.
10. Pollock M, Fernandes RM, Hartling L. Evaluation of AMSTAR to assess the methodological quality of systematic reviews in overviews of reviews of healthcare interventions. BMC Med Res Methodol 2017;17:48. http://dx.doi.org/10.1186/s12874-017-0325-5.
11. Tian J, Zhang J, Ge L, Yang K, Song F. The methodological and reporting quality of systematic reviews from China and the USA are similar. J Clin Epidemiol Jan 4 2017 [pii: S0895-4356(16)30816-2]. http://dx.doi.org/10.1016/j.jclinepi.2016.12.004.
12. Ge L, Wang J, Li J, et al. In: Thombs B, editor. The assessment of the quality of reporting of systematic reviews/meta-analyses in diagnostic tests published by authors in China, 9(1). PLoS ONE; 2014:e85908. http://dx.doi.org/10.1371/journal.pone.0085908.
13. Shea BJ, Grimshaw JM, Wells GA, Boers M, Andersson N, Hamel C, et al. Development of AMSTAR: a measurement tool to assess the methodological quality of systematic reviews. BMC Med Res Methodol Feb 15 2007;7:10. http://dx.doi.org/10.1186/1471-2288-7-10.
14. Shea BJ, Hamel C, Wells GA, Bouter LM, Kristjansson E, Grimshaw J, et al. AMSTAR is a reliable and valid measurement tool to assess the methodological quality of systematic reviews. J Clin Epidemiol Oct 2008;62(10):1013-20. http://dx.doi.org/10.1016/j.jclinepi.2008.10.009.
15. Pieper D, Buechter RB, Li L, Prediger B, Eikermann M. Systematic review found AMSTAR, but not Revised-AMSTAR, to have good measurement properties. J Clin Epidemiol May 2015;68(5):574-83. http://dx.doi.org/10.1016/j.jclinepi.2014.12.009.
16. Popovich I, Windsor B, Jordan V, Showell M, Shea B, Farquhar CM. Methodological quality of systematic reviews in subfertility: a comparison of two different approaches. PLoS One 2012;7(12):e50403. http://dx.doi.org/10.1371/journal.pone.0050403.
17. Burda B, Holmer H, Norris SL. Limitations of a measurement tool to assess systematic reviews (AMSTAR) and suggestions for improvement. Syst Rev Apr 12 2016;5:58. http://dx.doi.org/10.1186/s13643-016-0237-1.
18. Faggion Jr CM. Critical appraisal of AMSTAR: challenges, limitations, and potential solutions from the perspective of an assessor. BMC Med Res Methodol Aug 13 2015;15:63. http://dx.doi.org/10.1186/s12874-015-0062-6.
19. Fleming PS, Koletsi D, Seehra J, Pandis N. Systematic reviews published in higher impact clinical journals were of higher quality. J Clin Epidemiol Jul 2014;67(7):754-9. http://dx.doi.org/10.1016/j.jclinepi.2014.01.002.
20. Sharif MO, Janjua-Sharif FN, Sharif FNJ, Ali H, Ahmed F. Systematic reviews explained: AMSTAR-how to tell the good from the bad and the ugly. Oral Health Dent Manag Mar 2013;12:9-16.
21. Liberati A, Altman DG, Tetzlaff J, Mulrow C, Gotzsche PC, Ioannidis JPA, et al. The PRISMA statement for reporting systematic reviews and meta-analyses of studies that evaluate healthcare interventions: explanation and elaboration. BMJ Jul 21 2009;339:b2700. http://dx.doi.org/10.1136/bmj.b2700.
22. Moher D, Liberati A, Tetzlaff J, Altman DG; the PRISMA Group. Preferred reporting items for systematic reviews and meta-analyses: the PRISMA statement. Ann Intern Med Aug 18 2009;151:264-9. http://dx.doi.org/10.7326/0003-4819-151-4-200908180-00135.

23. Moher D, Shamseer L, Clarke M, Ghersi D, Liberati A, Petticrew M, et al.; PRISMA-P Group. Preferred reporting items for systematic review and meta-analysis protocols (PRISMA-P) 2015 statement. Syst Rev Jan 1 2015;4(1). http://dx.doi.org/10.1186/2046-4053-4-1.

24. Moher D, Tsertsvadze A, Tricco AC, Eccles M, Grimshaw J, Sampson M, et al. When and how to update systematic reviews. Cochrane Database Syst Rev Jan 23 2008;1:MR000023. http://dx.doi.org/10.1002/14651858.MR000023.pub3.
25. Bryce S, Sloan E, Lee S, Ponsford J, Rossell S. Cognitive remediation in schizophrenia: a methodological appraisal of systematic reviews and meta-analyses. J Psychiatr Res Apr 2016;75:91-106. http://dx.doi.org/10.1016/j.jpsychires.2016.01.004.
26. Bavry AA, Kumbhani DJ, Bhatt DL. Role of adjunctive thrombectomy and embolic protection devices in acute myocardial infarction: a comprehensive meta-analysis of randomized trials. Eur Heart J 2008;29(24):2989-3001.
27. D'Souza SP, Mamas MA, Fraser DG, Fath-Ordoubadi F. Routine early coronary angioplasty versus ischaemia-guided angioplasty after thrombolysis in acute ST-elevation myocardial infarction: a meta-analysis. Eur Heart J 2010:ehq398.
28. Taylor RS, Brown A, Ebrahim S, Jolliffe J, Noorani H, Rees K, et al. Exercise-based rehabilitation for patients with coronary heart disease: systematic review and meta-analysis of randomized controlled trials. Am J Med 2004;116(10):682-92.
29. Borgia F, Goodman SG, Halvorsen S, Cantor WJ, Piscione F, Le May MR, et al. Early routine percutaneous coronary intervention after fibrinolysis vs. standard therapy in ST-segment elevation myocardial infarction: a meta-analysis. Eur Heart J 2010 Sep;31(17):2156-69.
30. Morrison LJ, Verbeek PR, McDonald AC, Sawadsky BV, Cook DJ. Mortality and prehospital thrombolysis for acute myocardial infarction: a meta-analysis. JAMA 2000;283(20):2686-92.
31. Huber K, Lip GYH. Differences between ACC/AHA and ESC guidelines on antiplatelet therapy in patients with acute coronary syndromes. Thromb Haemost Jul 2013;110(1):11-3. http://dx.doi.org/10.1160/TH13-06-0453.
32. Office of Extramural Research, National Institutes of Health [Internet]. Estimates of funding for various research, condition, and disease categories (RCDC). Feb 10 2016 [cited 29 Jul 2016]. [Available from]: https://report.nih.gov/categorical_spending.aspx.
33. Tricoci P, Allen JM, Kramer JM, Califf RM, Smith SC. Scientific evidence underlying the ACC/AHA clinical practice guidelines. JAMA Feb 25 2009;301(8):831-41. http://dx.doi.org/10.1001/jama.2009.205.
34. Kesselheim AS, Robertson CT, Myers JA, Rose SL, Gillet V, Ross KM, et al. A randomized study of how physicians interpret research funding disclosures. N Engl J Med Sep 20 2012;367:1119-27. http://dx.doi.org/10.1056/NEJMsa1202397.
35. Mendelson TB, Meltzer M, Campbell EG, Caplan AL, Kirkpatrick JN. Conflicts of interest in cardiovascular clinical practice guidelines. Arch Intern Med Mar 28 2011;171(6):577-84. http://dx.doi.org/10.1001/archinternmed.2011.96.
36. Lexchin J, Bero LA, Djulbegovic B, Clark O. Pharmaceutical industry sponsorship and research outcome and quality: systematic review. BMJ May 31 2003;326(7400):1167-70.
37. American College of Cardiology [Internet]. ACC signs on to strict code of ethics; cardiology society reaffirms its commitment to transparency and firewalls. Apr 22 2010 [cited 29 Jul 2016]. [Available from]: http://www.acc.org/about-acc/press-releases/2010/05/05/11/12/code-of-ethics.
38. Council of Medical Specialty Societies [Internet]. Code for interactions with companies. [cited 29 Jul 2016]. http://cmss.org/wp-content/uploads/2016/02/CMSS-Code-for-Interactions-with-Companies-Approved-Revised-Version-4.13.15-with-Annotations.pdf; April 2015.
39. American College of Cardiology [Internet]. Relationship with industry and other entities policy (ACC/AHA guidelines, performance measures and data standards). May 17 2010 [updated: 29 Jan 2016; cited 29 Jul 2016]. [Available from]: http://www.acc.org/guidelines/about-guidelines-and-clinical-documents/relationships-with-industry-policy.
40. Levine GN, Bates ER, Blankenship JC, Bailey SR, Bittl JA, Cercek B, et al. 2015 ACC/AHA/SCAI focused update on primary percutaneous coronary intervention for patients with ST-elevation myocardial infarction: an update of the 2011 ACCF/AHA/SCAI guideline for percutaneous coronary intervention and the 2013 ACCF/AHA guideline for the management of ST-elevation myocardial infarction. J Am Coll Cardiol Mar 29 2016;67(10):1235-50. http://dx.doi.org/10.1016/j.jacc.2015.10.005.
41. American College of Emergency Physicians, Society for Cardiovascular Angiography and Interventions, O'Gara PT, Kushner FG, Ascheim DD, Casey Jr DE, et al. 2013 ACCF/AHA guideline for the management of ST-elevation myocardial infarction: a report of the American College of Cardiology Foundation/American Heart Association task force on practice guidelines. J Am Coll Cardiol Jan 29 2013;61(4):e78-140. http://dx.doi.org/10.1016/j.jacc.2012.11.019.
42. Kumbhani DJ, Bavry AA, Desai MY, Bangalore S, Bhatt DL. Role of aspiration and mechanical thrombectomy in patients with acute myocardial infarction undergoing primary angioplasty: an updated meta-analysis of randomized trials. J Am Coll Cardiol 2013;62(16):1409-18.
43. Navarese EP, De Servi S, Buffon A, Suryapranata H, De Luca G. Clinical impact of simultaneous complete revascularization vs. culprit only primary angioplasty in patients with ST-elevation myocardial infarction and multivessel disease: a meta-analysis. J Thromb Thrombolysis 2011;31(2):217-25.
44. Vlaar PJ, Mahmoud KD, Holmes DR, van Valkenhoef G, Hillege HL, van der Horst IC, et al. Culprit vessel only versus multivessel and staged percutaneous coronary intervention for multivessel disease in patients presenting with ST-segment elevation myocardial infarction: a pairwise and network meta-analysis. J Am Coll Cardiol 2011;58(7):692-703.
