- Open Access
Use of existing systematic reviews for evidence assessments in infectious disease prevention: a comparative case study
Systematic Reviews volume 5, Article number: 171 (2016)
Given limited resources and time constraints, the use of existing systematic reviews (SRs) for the development of evidence-based public health recommendations has become increasingly important. Recently, a five-step approach for identifying, analyzing, appraising, and using existing SRs, based on recent guidance by the US Agency for Healthcare Research and Quality (AHRQ), was proposed within the Project on a Framework for Rating Evidence in Public Health (PRECEPT). However, case studies are needed to test whether this approach is useful, what challenges arise, and how problems can be solved.
In two case studies, the five-step approach was applied to integrate existing SRs in the development of evidence-based public health recommendations. Case study A focused on the role of neonatal sepsis as a risk factor for adverse neurodevelopmental outcome. Case study B examined the efficacy, effectiveness and safety of influenza vaccination during pregnancy. For each step, we report the approach of the review team, discuss challenges and describe solutions.
For case study A, one existing SR was identified, while in case study B four SRs were eligible for analysis. We found that comparison of inclusion criteria alone was sufficient to judge the relevance of SRs in case study A, but not in case study B. Although the methodological quality of all identified SRs was acceptable, risk of bias assessments of the individual studies included in the SRs had to be repeated in both case studies. Particular challenges appeared in case study B, where multiple SRs addressed the same research question. With the help of spreadsheets comparing the characteristics of the existing SRs, we decided to use the most comprehensive one for our evidence synthesis and supplemented the results with those from the other SRs.
In both case studies, using the complete SR was not possible. The five-step approach provided useful and structured guidance and should be routinely applied when using existing SRs as a basis for evidence-based recommendations in public health. In situations where more than one SR has to be considered, the development of spreadsheets comparing characteristics, inclusion criteria, risk of bias, included studies, and outcomes seems useful.
During recent years, the number of published systematic reviews (SRs) has increased nearly exponentially. Scientists therefore increasingly encounter situations in which one or more SRs on their guideline topic already exist, raising the question of whether and how these reviews can be used for guideline development. Moreover, limited resources and time constraints are important factors favoring the use of existing SRs over performing new ones. This raises questions regarding the methodological quality and comprehensiveness of the existing reviews, as well as the potential need for updating. Accordingly, the use of existing SRs for new reviews or guidelines has raised a number of methodological concerns.
Current evidence assessment frameworks try to provide guidance on how to integrate existing SRs into new reviews. The Project on a Framework for Rating Evidence in Public Health (PRECEPT), which has been initiated and funded by the European Centre for Disease Prevention and Control (ECDC), has developed methodological guidance for evaluating and grading evidence in public health, with a particular focus on infectious disease prevention and control . PRECEPT explicitly recommends the use of existing SRs for evidence assessments, referring to the five-step methodology developed by the US Agency for Healthcare Research and Quality (AHRQ) and published by Robinson et al. . Recently, additional guidance for this approach was published by the same group .
Case studies are needed to test whether this approach is useful and to identify challenges and workable solutions during the application process. Therefore, we used two recent evidence assessment projects conducted in our institute to test the applicability of the approach. These projects were related to two domains of studies comprised by PRECEPT, namely those on “risk factors” and “interventions.” In the first project, we aimed at assessing the role of neonatal sepsis as a risk factor for adverse neurodevelopmental outcome (case study A). This assessment was part of a project on the burden of healthcare-associated infections initiated and funded by ECDC. The second project comprised an assessment of the efficacy, effectiveness, and safety of influenza vaccination during pregnancy (case study B) which has been performed to support the decision-making process of the German Standing Committee on Vaccination (STIKO).
The objective of the study is to describe approaches, challenges, and solutions regarding the use of existing SRs for evidence assessments in infectious disease prevention and control, using a comparative case study that included two examples from different domains of research (risk factors, interventions).
We performed a comparative case study on the use of SRs for the development of new SRs. For both case studies, systematic searches and analyses were performed by two teams of reviewers. For case study A (neonatal sepsis), SH and TH performed the searches, analyzed the data and finalized the SR, while TE acted as supervisor. For case study B (influenza vaccination), CR and TH performed the searches, analyzed the data, and finalized the SR, while OW acted as supervisor.
For the conduct of the SR of SRs, we used the approach proposed by Robinson et al. which has been incorporated in the PRECEPT framework . This approach comprises five steps which guide the user through the process of identifying, analyzing, appraising, and using existing SRs. In step 1, it is suggested to locate the existing SRs, using a defined and reproducible approach. In step 2, it is proposed to assess the relevance of the identified existing reviews with regard to the new review/evidence assessment, by comparing the inclusion criteria to those of the new SR. In step 3, the assessment of the methodological quality of the identified SR(s) is suggested. Step 4 proposes four different but not mutually exclusive approaches to determine appropriate use and incorporation of existing SRs into new reviews. This includes
(i) using the included studies as a quality check for one's own literature search
(ii) using the existing review to provide the body of included studies
(iii) using the data abstraction, risk of bias assessment, and/or analyses from existing SRs
(iv) using the complete existing review
Step 5 suggests reporting methods and results from the existing SRs. The complete approach is illustrated in Fig. 1.
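The five steps and the four non-exclusive incorporation options of step 4 can be sketched as a simple checklist structure. This is purely illustrative; all identifiers below are hypothetical and not part of the published approach:

```python
# Illustrative sketch of the five-step approach for using existing
# systematic reviews (SRs); all names are hypothetical.

STEPS = [
    "1. Locate existing SRs with a defined, reproducible search",
    "2. Assess relevance by comparing PI(E)CO inclusion criteria",
    "3. Assess methodological quality (e.g., with AMSTAR)",
    "4. Determine appropriate use and incorporation",
    "5. Report methods and results from the existing SRs",
]

# The four non-mutually-exclusive incorporation options of step 4
STEP4_OPTIONS = {
    "i": "use included studies as a quality check for one's own search",
    "ii": "use the existing review to provide the body of included studies",
    "iii": "use data abstraction, risk of bias assessment, and/or analyses",
    "iv": "use the complete existing review",
}

def describe_option(key: str) -> str:
    """Return a readable label for a step-4 incorporation option."""
    return f"Option ({key}): {STEP4_OPTIONS[key]}"
```

Labeling the options (i)–(iv) makes later references such as "option (ii)" unambiguous.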
For each of the five steps of the approach, details on procedures performed for case studies A and B were documented. We considered challenges arising from the application of the approach for each step separately and how they were handled. Finally, a table was constructed that compares the major challenges experienced between both case studies A and B.
Case study A: neurological sequelae of healthcare-associated sepsis in very-low-birth weight infants
Sepsis is suspected to be a frequent cause of neurological impairment in very low birth weight (VLBW) infants. However, the risk of sequelae attributable to sepsis, particularly in the context of healthcare-associated infections, is not known. Therefore, we aimed at systematically assessing the published evidence on neurological sequelae of healthcare-associated sepsis by using existing SRs. The respective paper reporting the results of this evidence assessment has been published recently .
Step 1: locate existing systematic reviews
To identify SRs on neurological sequelae of healthcare-associated sepsis in VLBW infants, we performed a systematic literature search in MEDLINE, EMBASE, and the Cochrane library. The search was limited to SRs published after 1 January 2000. We applied a modification of the health-evidence.ca search filter  which included the following terms: “Medline” OR “systematic review” OR “meta-analysis” OR “intervention.” The search led to the identification of 207 entries. After screening of titles, abstracts, and full publications, one potentially relevant SR remained for analysis (Alshaik et al. 2013 ) (see Appendix 1 and Additional file 1: Figure S1 for details).
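The modified filter can be thought of as a boolean OR over the quoted terms. The sketch below is only schematic; actual MEDLINE/EMBASE query syntax differs by interface, and the quoting convention shown is an assumption for illustration:

```python
# Schematic construction of a review-identification search filter,
# modelled loosely on the filter terms quoted above. Real database
# syntax (field tags, truncation, etc.) is interface-specific.
FILTER_TERMS = ["Medline", "systematic review", "meta-analysis", "intervention"]

def build_filter(terms):
    """Join filter terms with OR, quoting multi-word phrases."""
    quoted = [f'"{t}"' if " " in t else t for t in terms]
    return " OR ".join(quoted)

query = build_filter(FILTER_TERMS)
# e.g.: Medline OR "systematic review" OR meta-analysis OR intervention
```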
Challenges and solutions
To ensure that all existing SRs were identified by a transparent and reproducible approach, a systematic search was performed. A particular challenge was to develop a search strategy that was not only highly sensitive but also sufficiently specific to separate SRs from original studies. Therefore, we decided to apply specific search filters to narrow the results of the search.
Although the application of search filters yielded an acceptable (and plausible) number of entries, it was often not possible to decide from the abstract alone whether a publication was an SR. Therefore, in many cases, the full text had to be read to decide on eligibility.
Step 2: assess relevance of identified systematic reviews
To assess the relevance of the identified SR , we compared its inclusion criteria to a priori defined inclusion criteria of our new review. As suggested in PRECEPT, we used the PI(E)CO (population, intervention (exposure), comparison, outcome) format . As shown in Table 1, this comparison revealed that differences regarding PICO were rather small. In general, the review by Alshaik et al. was more restrictive in each of the PICO criteria, with the exception of the comparator which was not explicitly defined by the authors but implicitly the same as in our review.
Challenges and solutions
Compared to our predefined criteria, Alshaik et al. did not place any restrictions on the study design of the included studies, although they focused on cohort and case–control studies. Since we aimed at calculating risk differences in our SR, we had to limit our analysis to cohort studies. Therefore, we excluded three case–control studies, as well as a further three studies that did not report appropriate data for the calculation of risk differences. In addition, one study did not report relevant outcomes and was therefore excluded.
Among the 17 studies identified and reported to be eligible by Alshaik et al., there was one study that we were not able to retrieve in databases or via the Internet. After internal discussions, we decided to report this finding in the publication and proceed with the remaining studies.
Step 3: assess quality of existing systematic reviews
As suggested in the guidance paper, we applied a commonly used tool to assess the quality of the existing SR, namely the Assessment of Multiple Systematic Reviews (AMSTAR) tool. Two independent observers assessed the quality of the SR by Alshaik et al. Four of the 11 domains of AMSTAR were found to be not adequately addressed in this SR: a list of excluded publications was not provided, no risk of bias assessment was performed, risk of bias could consequently not be used in formulating conclusions, and conflict of interest in the included studies was not investigated. Both independent researchers agreed on an AMSTAR summary score of 7/11 (Table 2).
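The AMSTAR summary score is simply the count of the 11 domains judged "yes". A minimal sketch, with abbreviated and partly hypothetical domain labels (the four "no" judgements mirror the assessment described above):

```python
# Minimal sketch: AMSTAR summary score as the number of "yes" judgements
# across the 11 domains. Domain labels are abbreviated/hypothetical.
amstar_judgements = {
    "a_priori_design": "yes",
    "duplicate_selection_extraction": "yes",
    "comprehensive_search": "yes",
    "publication_status_criterion": "yes",
    "list_of_included_excluded_studies": "no",  # excluded list not provided
    "study_characteristics_provided": "yes",
    "quality_assessed": "no",                   # no risk of bias assessment
    "quality_used_in_conclusions": "no",        # hence not used in conclusions
    "appropriate_synthesis_methods": "yes",
    "publication_bias_assessed": "yes",
    "conflict_of_interest_stated": "no",        # not investigated
}

def amstar_score(judgements):
    """Count domains judged 'yes' out of the 11 AMSTAR domains."""
    return sum(1 for v in judgements.values() if v == "yes")

score = amstar_score(amstar_judgements)  # 7 of 11, as for the Alshaik review
```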
Challenges and solutions
Since a threshold for an AMSTAR score indicating “high methodological quality” has not yet been defined, uncertainties remain about how to interpret the results of the AMSTAR assessment. We chose a rather conservative approach and judged the review to be of “acceptable methodological quality.” However, it remains uncertain below which AMSTAR score researchers should refuse to use an existing SR due to low methodological quality.
One major AMSTAR domain that led to downgrading of the AMSTAR score was the lack of a risk of bias assessment of the included studies in the review by Alshaik et al. However, we considered this issue of minor importance since we performed our own risk of bias assessment for the new review (see below).
Step 4: determine appropriate use and incorporate existing systematic reviews
Since we aimed at calculating attributable risk for each study, it was necessary to re-analyze the cohort studies included in the SR by Alshaik et al., corresponding to option (ii) in the paper by Robinson et al.  (see above). We performed data extraction using standardized data extraction sheets and assessed risk of bias of the original cohort studies.
Challenges and solutions
In contrast to reusing the analyses of the existing review, re-analysis of the entire body of evidence requires considerably more resources. However, for our purposes, a re-analysis was necessary. From this point onwards, the approach does not differ from the procedures of a conventional SR regarding data extraction, checking, and data synthesis.
We used the Newcastle Ottawa Scale to perform risk of bias assessment of all original studies included in that review . Like data extraction, risk of bias assessment was performed by two independent investigators. Of the nine included studies, three had a high risk of bias, while the remaining six had a low risk of bias (for details, see ).
Step 5: report methods and results from existing systematic reviews
Since we extracted and re-analyzed data of original studies, reporting of methods and results did not differ from the approaches used when a new SR is performed. In the respective publication, we used a table to report location of the study, birth year, population characteristics, definition of sepsis, and duration of follow-up for each included study, along with the results of the risk of bias assessment. This was accompanied by forest plots showing the risk differences from the individual studies and the pooled estimate for each outcome (cerebral palsy, vision impairment, hearing impairment, impaired neurodevelopment) (see Haller et al.  for details).
Challenges and solutions
The approach used to report the results of the assessment was very similar to that of a conventional SR, implying that considerably more time and resources were needed than in a situation where data extraction and analysis from the existing SR could be reused. However, reusing the results of the existing literature search still appears to be a time- and resource-saving strategy compared to performing a completely new review.
We decided to use the original primary studies, rather than the information reported in the SR, as the data source to avoid perpetuating possible extraction errors.
Case study B: efficacy, effectiveness and safety of influenza vaccination during pregnancy
Influenza vaccination during pregnancy is recommended by the World Health Organization and many national health authorities to ensure protection of the pregnant woman and her newborn against influenza and associated complications [11, 12]. However, immunization during pregnancy warrants particular attention regarding adverse events. We aimed at systematically assessing the published evidence on the efficacy/effectiveness and safety of vaccination against influenza during pregnancy, using existing SRs.
Step 1: locate existing systematic reviews
To identify eligible SRs, we performed a systematic literature search in MEDLINE, in EMBASE, and in the Cochrane Database of Systematic Reviews. The initial search identified a total of 45 entries. After screening of titles, abstracts, and full texts, four potentially relevant SRs remained. All of them reported syntheses of data on either efficacy/effectiveness or safety of influenza vaccination during pregnancy, or both [13–16] (see Appendix 1 and Additional file 1: Figure S1 for details).
Challenges and solutions
Since pregnant women are part of the healthy adult population, we expected to find relevant information also within SRs on influenza vaccination in healthy adults. Therefore, we extended our search, which, however, reduced the specificity of the search strategy. In fact, one of the identified SRs (Jefferson et al.) was a Cochrane review on vaccines for preventing influenza in healthy adults which contained a separate chapter on pregnant women and their newborns. Our case study thereby supports the idea that some topics necessitate broadening the search to capture all relevant SRs: a search limited to pregnant women would not have identified the Cochrane review.
Step 2: assess relevance of identified systematic reviews
As in case study A, we began by assessing the relevance of the identified SRs, comparing their inclusion criteria to those of our own review using the PICO format. Generally, the comparison of PICOs revealed only small differences between the existing SRs, with one notable exception (Table 3). The SR by Fell et al. focused on preterm birth and fetal death but investigated neither maternal outcomes nor those related to the efficacy or effectiveness of vaccination in preventing influenza in the infant.
Challenges and solutions
A more detailed analysis revealed that the comparison of PICOs alone would not suffice to assess the relevance of the identified SRs. In particular, by screening the SRs, we detected considerable differences regarding the study designs of the included studies, the numbers of included studies, the inclusion of pandemic and/or seasonal influenza vaccines, etc. Therefore, we decided to construct an additional spreadsheet comparing the characteristics of the SRs (Table 4). These comparisons revealed differences with regard to the study designs defined a priori for inclusion. While the Cochrane review and the SR by McMillan et al. did not place any restrictions on the design of the studies to be included, Galvao et al. decided to include only RCTs and cohort studies, whereas Fell et al. additionally included cross-sectional and case–control studies. Three of the SRs included studies on seasonal as well as pandemic influenza vaccines, while the remaining SR (Galvao et al.) focused on vaccination against seasonal influenza only. In three SRs, a meta-analysis was performed. Remarkably, considerable differences between the SRs were observed with regard to the number of included studies, which ranged between 8 and 46.
Furthermore, the identified SRs differed considerably regarding the included maternal and infant outcomes. To assess these differences in detail, we constructed a further table comparing the included outcomes with respect to the original studies in which they were reported (Table 5). This comparison revealed that a total of 14 outcomes were reported across the four SRs. The most comprehensive database was provided by the SR by McMillan et al., while the SR by Fell et al., due to its inclusion criteria, reported the smallest number of outcomes.
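The logic of such a comparison spreadsheet can be mimicked as an outcome-coverage matrix from which the most comprehensive SR is selected and the remaining gaps are identified. The review names below are real, but the outcome labels and coverage sets are invented placeholders, not the actual 14 outcomes of Table 5:

```python
# Sketch of an outcome-coverage comparison across SRs. Outcome labels
# and coverage sets are placeholders for illustration only.
coverage = {
    "McMillan":  {"outcome_A", "outcome_B", "outcome_C", "outcome_D"},
    "Fell":      {"outcome_C", "outcome_E"},
    "Galvao":    {"outcome_A", "outcome_C"},
    "Jefferson": {"outcome_A", "outcome_B"},
}

def most_comprehensive(cov):
    """Return the SR covering the largest number of outcomes."""
    return max(cov, key=lambda sr: len(cov[sr]))

def supplements(cov, base):
    """Outcomes missing from the base SR but covered by the others."""
    others = set().union(*(v for k, v in cov.items() if k != base))
    return others - cov[base]

base = most_comprehensive(coverage)    # here: "McMillan"
missing = supplements(coverage, base)  # outcomes to fill from the other SRs
```

This mirrors the strategy used in case study B: take the most comprehensive review as the basis and supplement only what it lacks.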
Step 3: assess methodological quality of existing systematic reviews
As shown in Table 2, all four potentially relevant reviews received high AMSTAR summary scores, ranging between 9/11 and 11/11 points. Therefore, we concluded that regarding methodological quality, the SRs were nearly equal. Differences were observed regarding single domains: Two SRs did not consider “status of publication” adequately, one did not investigate publication bias, and only two SRs considered conflict of interest in the included studies.
Challenges and solutions
As discussed in case study A, it is unclear what “high methodological quality” means in terms of AMSTAR summary scores. Furthermore, empirical studies are lacking that investigate whether an SR with an AMSTAR summary score of 11 is superior to one with a score of 9. Rather, it appears important to consider the domains in which differences were observed. We judged all four SRs as being of adequate methodological quality.
Step 4: determine appropriate use and incorporate existing systematic reviews
Since in step 2 we had identified the SR by McMillan et al. as the most comprehensive one, we decided to use this SR as the basis for our evidence synthesis. In those cases where single studies or outcomes were missing from the review by McMillan et al., we supplemented the results with those from the other SRs, using the spreadsheet shown in Table 5 as guidance.
Challenges and solutions
As shown in Table 4, a total of four different risk of bias tools were applied across the four SRs to assess the internal validity of RCTs and observational studies. We therefore decided to re-assess risk of bias in the original primary studies, using the Cochrane risk of bias tool for randomized controlled trials and the Newcastle Ottawa Scale for cohort studies. As with data extraction, risk of bias assessment was performed by two independent investigators (for results, see Table 4).
Step 5: report methods and results from existing systematic reviews
Using the approach described above, we performed a new evidence synthesis for all outcomes reported in the 20 original studies contributing to the four SRs. We aggregated the study estimates in tables, separately for RCTs and observational studies.
Challenges and solutions
Data from the original studies were very heterogeneous regarding study populations, vaccines used, and outcome definitions. Therefore, although meta-analyses were performed in three of the SRs, we decided not to perform a quantitative data synthesis but to report the non-aggregated study results in tables.
Comparison of major challenges in case studies A and B
In Table 6, we summarize the major challenges and solutions along steps 1 to 5 for both case studies. The challenges and respective solutions at each of the five steps clearly differed between the two case studies. Only in step 3 (assessing quality) did similar challenges occur, namely regarding the interpretation of the AMSTAR score. Many challenges in case study A were related to the need to perform a completely new data extraction and analysis. In case study B, the majority of challenges were caused by the partial overlap of the studies included in the different SRs and the resulting question of how to deal with this situation.
The major aim of our work was to test the application of the AHRQ approach in the context of using systematic reviews for developing evidence-based recommendations in the field of infectious disease prevention and control. Importantly, this may also include research questions outside the field of comparative effectiveness, such as those on risk factors or incidence. Nevertheless, we conclude that the approach is applicable to these scenarios and provides excellent guidance for the use of systematic reviews in this context as well.
The use of existing SRs for the development of new SRs has been suggested to be an efficient way to develop evidence-based recommendations in public health, offering a time- and resource-saving alternative to the conduct of a new SR. Our comparative case study shows that applying the five-step approach by Robinson et al.  which is integrated in the PRECEPT evidence assessment framework  leads to a transparent evidence assessment based on existing SRs. However, depending on the quantity and quality of existing SRs for the given research question, different challenges have to be managed during the evidence assessment process. Furthermore, we experienced that in cases when extensive re-analysis was needed because the existing SRs did not report appropriate data from the primary studies, the advantages of using existing SRs over performing a new SR are likely to be small.
In our experience of conducting systematic reviews “from scratch,” roughly one third of the time is spent on the literature search, one third on data extraction, and one third on data analysis and interpretation. We think that using existing reviews is worthwhile if at least one of these three “steps” can be omitted. We estimate that a full systematic review takes six to 24 months, depending on the complexity of the question, the number of relevant outcomes, relevant publication dates, etc. In our experience, using existing systematic reviews instead of starting a new one can shorten this process by about one to two thirds, depending on the topic, comprehensiveness, and quality of the existing reviews.
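The back-of-the-envelope arithmetic behind this estimate can be made explicit. The figures are the authors' ballpark values quoted above, not measurements, and the function name is hypothetical:

```python
# Rough time estimate for reusing existing SRs, applying the
# one-third/two-thirds saving described above to the 6-24 month range.
FULL_REVIEW_MONTHS = (6, 24)  # typical range for a review "from scratch"

def shortened_range(saving_fraction):
    """Apply a fractional time saving to the full-review duration range."""
    lo, hi = FULL_REVIEW_MONTHS
    return (lo * (1 - saving_fraction), hi * (1 - saving_fraction))

# Saving one third (e.g., the literature search can be reused)
one_third = shortened_range(1 / 3)   # about 4 to 16 months
# Saving two thirds (search and data extraction both reusable)
two_thirds = shortened_range(2 / 3)  # about 2 to 8 months
```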
Overviews of SRs are a relatively new approach to evidence synthesis that is becoming increasingly popular. However, it has been recognized that a number of methodological challenges are related to these approaches. So-called umbrella reviews can be considered a specific form of SRs of SRs, focusing on the meta-synthesis of the results of comparable meta-analyses. However, our case study illustrates that it may be of equal importance to develop methodological guidance for those cases where detailed comparison and dissection of the existing SRs, rather than synthesis, are needed. For the development of the PRECEPT framework, we decided to use the approach proposed by Robinson et al., which has recently been further elaborated by the same group.
Our case studies A and B showed large differences regarding the evidence base. While case study A had to build on only one SR, four SRs were identified for case study B. At first glance, these SRs appeared very similar regarding methodological quality and thematic relevance. However, further analysis revealed several challenges resulting from differences, e.g., in the included studies, the considered outcomes, and the risk of bias assessments. All reviews included in both case studies had very broad PICO questions, particularly with regard to the outcomes. The systematic reviews on influenza vaccination during pregnancy, for example, did not focus on one or two outcomes but used groups of outcomes such as “adverse events during pregnancy,” which were presented in aggregated as well as non-aggregated form. Comparing the systematic reviews on influenza vaccination during pregnancy, we realized that the authors transformed the data differently, e.g., regarding outcomes such as “fetal death,” “stillbirth,” “abortion,” or “intrauterine death.”
Interestingly, problems that may arise when SRs on the same topic come to discordant results or conclusions were already discussed by Jadad et al. 20 years ago. In our study, one particular challenge arose from the fact that the four SRs had partially overlapping studies. A recent study by Siontis et al. suggested that this problem is not uncommon: of 73 meta-analyses analyzed in their article, 67 % had at least one other overlapping meta-analysis. Siontis et al. proposed three possible solutions for such a scenario: (i) to select the most appropriate SR, (ii) to perform a new review, or (iii) to synthesize the results of the reviews. Similar challenges in analyzing and using existing SRs with overlapping topics have recently been described by Bolland and Grey. As in our study, these authors observed important differences in trial selection between seven meta-analyses on vitamin D supplements. Other researchers have developed graphical tools to assist in assessing the appropriateness of each meta-analysis for a given research question. In our study, we decided to apply the concept of comparative spreadsheets suggested by Whitlock et al. to compare certain features of the included SRs and to assess their value for a new evidence synthesis. We thereby identified the most comprehensive review and supplemented this study base with the results of the other three SRs. However, it has to be considered that this approach of selecting the most comprehensive review and supplementing it with studies identified in other reviews does not always work: the most comprehensive review may not be of the highest quality, the selection criteria of the review authors may not be entirely transparent, the need for risk of bias assessment is not eliminated, and evidence assessments might still be required. Furthermore, AMSTAR does not completely separate assessment of bias from reporting quality. It remains to be investigated whether the recently developed ROBIS tool performs better in capturing risk of bias in systematic reviews.
Step 4 of the approach suggests either using the results of the risk of bias assessments from the existing SRs for the new review or performing a new assessment. In case study A, the latter was ultimately necessary, since the existing review did not report risk of bias in the included studies. By contrast, in case study B, all included SRs reported their own risk of bias assessments. However, these were based on four different tools. In all SRs, assessments of RCTs were performed using the Cochrane risk of bias tool, whereas for observational studies either the Newcastle Ottawa Scale, the Downs and Black Checklist, or the JBI-MASTARI tool was used. The authors of these SRs came to differing conclusions regarding the methodological quality of the included original studies. Although a comparison of risk of bias assessments between the reviews would be interesting, the reporting quality of these results showed large differences, making a direct comparison impossible: the SR by Fell et al. reported numerical quality scores per study, but not per outcome, using the Newcastle Ottawa Scale and the Downs and Black Checklist. The Methods section of the review by Galvao et al. stated that the Cochrane risk of bias tool was used, but the authors did not report the results of this assessment. In the SR by McMillan et al., the JBI-MASTARI tool was used, but the results were reported in aggregated form only, rather than for individual studies and outcomes. One also has to take into account that even the use of identical risk of bias tools can lead to different judgements on methodological quality by different authors, since many risk of bias tools have been demonstrated to show a considerable degree of inter-rater variability [10, 27].
Taking all these uncertainties into account, we decided to perform a new risk of bias assessment for our review, and compared the results to those by the other investigators.
Remarkably, quality of the evidence was not assessed in the original SR by Alshaik et al., which served as the basis for our case study A. Regarding the SRs used in case study B, only one of the four reviews (Galvao et al.) reported a GRADE quality assessment. However, due to differences in the number of included studies, it is difficult to compare the results of that assessment to our GRADE assessments. Furthermore, updating of existing systematic reviews constitutes an important step which should be considered whenever the use of existing reviews is discussed as an approach to developing evidence assessments. Importantly, methods have been developed to evaluate the need for updating, such as the surveillance system proposed by AHRQ.
Our comparative case study supports the applicability of the five-step approach included in the PRECEPT framework for the use of existing SRs. However, the effort required and the challenges encountered depend on the quality and quantity of the evidence in the SRs. Exclusion of non-systematic (or narrative) reviews constitutes a particular challenge, since they often cannot be identified without reading the full text of the respective article. Important progress in literature searches could be made if such reviews were adequately labeled and indexed in electronic databases. Adherence of all SR authors to the PRISMA guideline would strongly improve the quality of published reviews. Authors of systematic reviews should provide a database (e.g., an EndNote file) alongside the paper to support replication, re-analysis, and updating of the respective reviews. In situations where more than one SR has to be considered, the development of spreadsheets comparing characteristics, inclusion criteria, risk of bias, included studies, and outcomes can support the development of evidence-based guidance. For such situations, we used the experiences documented in this paper to develop a recommendation on how to proceed (Table 7). Furthermore, our study identified a number of open questions which should be addressed in future studies, such as:
- What is a “good” AMSTAR score?
- How should overlapping studies be handled when using existing systematic reviews?
- When is it more efficient (in terms of time and resources) to perform a new systematic review than to try to use an existing one?
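The study-by-review comparison spreadsheet described above can be generated programmatically once the included studies of each SR have been extracted. The following minimal Python sketch builds such an inclusion matrix as a CSV file; the review labels and the assignment of studies to reviews are invented for illustration and do not reproduce the actual data of case study B.

```python
import csv

# Illustrative example: which primary studies are included in which
# systematic review (assignments are hypothetical).
included = {
    "Review A": {"Zaman 2008", "Black 2004", "Eick 2011"},
    "Review B": {"Zaman 2008", "Eick 2011", "Benowitz 2010"},
    "Review C": {"Zaman 2008", "Black 2004"},
    "Review D": {"Black 2004", "Benowitz 2010", "Eick 2011"},
}

studies = sorted(set().union(*included.values()))
reviews = sorted(included)

# Write a study-by-review inclusion matrix, the core of the
# comparison spreadsheet: one row per primary study, one column
# per systematic review, "x" marking inclusion.
with open("overlap_matrix.csv", "w", newline="") as fh:
    writer = csv.writer(fh)
    writer.writerow(["Study"] + reviews)
    for study in studies:
        writer.writerow(
            [study] + ["x" if study in included[r] else "" for r in reviews]
        )
```

The same matrix makes overlap between reviews immediately visible and can be extended with further columns (risk of bias, outcomes reported), supporting the choice of the most comprehensive SR as the basis for evidence synthesis.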
The use of a complete SR is often not possible and intensive re-analyses are necessary. Still, even when such reanalyses are needed, the use of existing SRs is a promising alternative to the conduct of a new SR and might support fast and efficient development of evidence-based recommendations in public health.
Abbreviations
AHRQ: Agency for Healthcare Research and Quality
AMSTAR: Assessment of Multiple Systematic Reviews
PRECEPT: Project on a Framework for Rating Evidence in Public Health
References
Bastian H, Glasziou P, Chalmers I. Seventy-five trials and eleven systematic reviews a day: how will we ever keep up? PLoS Med. 2010;7(9):e1000326.
Whitlock EP, Lin JS, Chou R, Shekelle P, Robinson KA. Using existing systematic reviews in complex systematic reviews. Ann Intern Med. 2008;148(10):776–82.
Harder T, Abu Sin M, Bosch-Capblanch X, Bruno C, de Carvalho GH, Duclos P, Eckmanns T, Elder R, Ellis S, Forland F, et al. Towards a framework for evaluating and grading evidence in public health. Health Policy. 2015;119(6):732–6.
Robinson KA, Whitlock EP, Oneil ME, Anderson JK, Hartling L, Dryden DM, Butler M, Newberry SJ, McPheeters M, Berkman ND, et al. Integration of existing systematic reviews into new reviews: identification of guidance needs. Syst Rev. 2014;3:60.
Robinson KA, Chou R, Berkman ND, Newberry SJ, Fu R, Hartling L, Dryden D, Butler M, Foisy M, Anderson J, et al. Twelve recommendations for integrating existing systematic reviews into new reviews: EPC guidance. J Clin Epidemiol. 2015.
Haller S, Deindl P, Cassini A, Suetens C, Zingg W, Abu Sin M, Velasco E, Weiss B, Ducomble T, Sixtensson M, et al. Neurological sequelae of healthcare-associated sepsis in very-low-birthweight infants: umbrella review and evidence-based outcome tree. Euro Surveill. 2016;21(8).
Lee E, Dobbins M, Decorby K, McRae L, Tirilis D, Husson H. An optimal search filter for retrieving systematic reviews and meta-analyses. BMC Med Res Methodol. 2012;12:51.
Alshaikh B, Yusuf K, Sauve R. Neurodevelopmental outcomes of very low birth weight infants with neonatal sepsis: systematic review and meta-analysis. J Perinatol. 2013;33(7):558–64.
Shea BJ, Grimshaw JM, Wells GA, Boers M, Andersson N, Hamel C, Porter AC, Tugwell P, Moher D, Bouter LM. Development of AMSTAR: a measurement tool to assess the methodological quality of systematic reviews. BMC Med Res Methodol. 2007;7:10.
Hartling L, Milne A, Hamm MP, Vandermeer B, Ansari M, Tsertsvadze A, Dryden DM. Testing the Newcastle Ottawa Scale showed low reliability between individual reviewers. J Clin Epidemiol. 2013;66(9):982–93.
World Health Organization. Influenza (seasonal). Fact sheet No. 211. March 2014. Available at: http://www.who.int/mediacentre/factsheets/fs211/en/.
Mak TK, Mangtani P, Leese J, Watson JM, Pfeifer D. Influenza vaccination in pregnancy: current evidence and selected national policies. Lancet Infect Dis. 2008;8(1):44–52.
Fell DB, Platt RW, Lanes A, Wilson K, Kaufman JS, Basso O, Buckeridge D. Fetal death and preterm birth associated with maternal influenza vaccination: systematic review. BJOG. 2015;122(1):17–26.
Galvao TF, Silva MT, Zimmermann IR, Lopes LA, Bernardo EF, Pereira MG. Influenza vaccination in pregnant women: a systematic review. ISRN Prev Med. 2013;2013:879493.
Jefferson T, Di Pietrantonj C, Rivetti A, Bawazeer GA, Al-Ansary LA, Ferroni E. Vaccines for preventing influenza in healthy adults. Cochrane Database Syst Rev. 2014;3:CD001269.
McMillan M, Kralik D, Porritt K, Marshall H. Influenza vaccination during pregnancy: a systematic review of effectiveness and adverse events. The JBI Database of Systematic Reviews and Implementation Reports. 2014;12(6).
Higgins JP, Altman DG, Gotzsche PC, Juni P, Moher D, Oxman AD, Savovic J, Schulz KF, Weeks L, Sterne JA, et al. The Cochrane Collaboration's tool for assessing risk of bias in randomised trials. BMJ. 2011;343:d5928.
Hartling L, Chisholm A, Thomson D, Dryden DM. A descriptive analysis of overviews of reviews published between 2000 and 2011. PLoS One. 2012;7(11):e49667.
Smith V, Devane D, Begley CM, Clarke M. Methodology in conducting a systematic review of systematic reviews of healthcare interventions. BMC Med Res Methodol. 2011;11(1):15.
Ioannidis JP, Zhou Y, Chang CQ, Schully SD, Khoury MJ, Freedman AN. Potential increased risk of cancer from commonly used medications: an umbrella review of meta-analyses. Ann Oncol. 2014;25(1):16–23.
Jadad AR, Cook DJ, Browman GP. A guide to interpreting discordant systematic reviews. CMAJ. 1997;156(10):1411–6.
Siontis KC, Hernandez-Boussard T, Ioannidis JP. Overlapping meta-analyses on the same topic: survey of published studies. BMJ. 2013;347:f4501.
Bolland MJ, Grey A. A case study of discordant overlapping meta-analyses: vitamin d supplements and fracture. PLoS One. 2014;9(12):e115934.
Panesar SS, Rao C, Vecht JA, Mirza SB, Netuveli G, Morris R, Rosenthal J, Darzi A, Athanasiou T. Development of the Veritas plot and its application in cardiac surgery: an evidence-synthesis graphic tool for the clinician to assess multiple meta-analyses reporting on a common outcome. Can J Surg. 2009;52(5):E137–145.
Whiting P, Savovic J, Higgins JP, Caldwell DM, Reeves BC, Shea B, Davies P, Kleijnen J, Churchill R; ROBIS group. ROBIS: a new tool to assess risk of bias in systematic reviews was developed. J Clin Epidemiol. 2016;69:225–34.
Downs SH, Black N. The feasibility of creating a checklist for the assessment of the methodological quality both of randomised and non-randomised studies of health care interventions. J Epidemiol Community Health. 1998;52(6):377–84.
Hartling L, Hamm MP, Milne A, Vandermeer B, Santaguida PL, Ansari M, Tsertsvadze A, Hempel S, Shekelle P, Dryden DM. Testing the risk of bias tool showed low reliability between individual reviewers and across consensus assessments of reviewer pairs. J Clin Epidemiol. 2013;66(9):973–81.
Shekelle PG, Motala A, Johnsen B, Newberry SJ. Assessment of a method to detect signals for updating systematic reviews. Syst Rev. 2014;3:13.
Moher D, Liberati A, Tetzlaff J, Altman DG; PRISMA Group. Preferred reporting items for systematic reviews and meta-analyses: the PRISMA statement. Ann Intern Med. 2009;151(4):264–9, W264.
Thompson MG, Li DK, Shifflett P, Sokolow LZ, Ferber JR, Kurosky S, Bozeman S, Reynolds SB, Odouli R, Henninger ML, et al. Effectiveness of seasonal trivalent influenza vaccine for preventing influenza virus illness among pregnant women: a population-based case–control study during the 2010–2011 and 2011–2012 influenza seasons. Clin Infect Dis. 2014;58(4):449–57.
Zaman K, Roy E, Arifeen SE, Rahman M, Raqib R, Wilson E, Omer SB, Shahid NS, Breiman RF, Steinhoff MC. Effectiveness of maternal influenza immunization in mothers and infants. N Engl J Med. 2008;359(15):1555–64.
Black SB, Shinefield HR, France EK, Fireman BH, Platt ST, Shay D; Vaccine Safety Datalink Workgroup. Effectiveness of influenza vaccine during pregnancy in preventing hospitalizations and outpatient visits for respiratory illness in pregnant women and their infants. Am J Perinatol. 2004;21(6):333–9.
Hulka JF. Effectiveness of polyvalent influenza vaccine in pregnancy. Report of a controlled study during an outbreak of Asian influenza. Obstet Gynecol. 1964;23:830–7.
Munoz FM, Greisinger AJ, Wehmanen OA, Mouzoon ME, Hoyle JC, Smith FA, Glezen WP. Safety of influenza vaccination during pregnancy. Am J Obstet Gynecol. 2005;192(4):1098–106.
Yeager DP, Toy EC, Baker B 3rd. Influenza vaccination in pregnancy. Am J Perinatol. 1999;16(6):283–6.
Englund JA, Mbawuike IN, Hammill H, Holleman MC, Baxter BD, Glezen WP. Maternal immunization with influenza or tetanus toxoid vaccine for passive antibody protection in young infants. J Infect Dis. 1993;168(3):647–56.
Lin SY, Wu ET, Lin CH, Shyu MK, Lee CN. The safety and immunogenicity of trivalent inactivated influenza vaccination: a study of maternal-cord blood pairs in Taiwan. PLoS One. 2013;8(6):e62983.
Nordin JD, Kharbanda EO, Benitez GV, Nichol K, Lipkind H, Naleway A, Lee GM, Hambidge S, Shi W, Olsen A. Maternal safety of trivalent inactivated influenza vaccine in pregnant women. Obstet Gynecol. 2013;121(3):519–25.
Benowitz I, Esposito DB, Gracey KD, Shapiro ED, Vazquez M. Influenza vaccine given to pregnant women reduces hospitalization due to influenza in their infants. Clin Infect Dis. 2010;51(12):1355–61.
Eick AA, Uyeki TM, Klimov A, Hall H, Reid R, Santosham M, O'Brien KL. Maternal influenza vaccination and effect on influenza virus infection in young infants. Arch Pediatr Adolesc Med. 2011;165(2):104–11.
Poehling KA, Szilagyi PG, Staat MA, Snively BM, Payne DC, Bridges CB, Chu SY, Light LS, Prill MM, Finelli L, et al. Impact of maternal immunization on influenza hospitalizations in infants. Am J Obstet Gynecol. 2011;204(6 Suppl 1):S141–148.
France EK, Smith-Ray R, McClure D, Hambidge S, Xu S, Yamasaki K, Shay D, Weintraub E, Fry AM, Black SB, et al. Impact of maternal influenza vaccination during pregnancy on the incidence of acute respiratory illness visits among infants. Arch Pediatr Adolesc Med. 2006;160(12):1277–83.
Chambers CD, Johnson D, Xu R, Luo Y, Louik C, Mitchell AA, Schatz M, Jones KL; OTIS Collaborative Research Group. Risks and safety of pandemic H1N1 influenza vaccine in pregnancy: birth defects, spontaneous abortion, preterm delivery, and small for gestational age infants. Vaccine. 2013;31(44):5026–32.
Dodds L, Macdonald N, Scott J, Spencer A, Allen VM, McNeil S. The association between influenza vaccine in pregnancy and adverse neonatal outcomes. J Obstet Gynaecol Can. 2012;34(8):714–20.
Legge A, Dodds L, MacDonald NE, Scott J, McNeil S. Rates and determinants of seasonal influenza vaccination in pregnancy and association with neonatal outcomes. CMAJ. 2014;186(4):E157–164.
Louik C, Ahrens K, Kerr S, Pyo J, Chambers C, Jones KL, Schatz M, Mitchell AA. Risks and safety of pandemic H1N1 influenza vaccine in pregnancy: exposure prevalence, preterm delivery, and specific birth defects. Vaccine. 2013;31(44):5033–40.
Omer SB, Goodman D, Steinhoff MC, Rochat R, Klugman KP, Stoll BJ, Ramakrishnan U. Maternal influenza immunization and reduced likelihood of prematurity and small for gestational age births: a retrospective cohort study. PLoS Med. 2011;8(5):e1000441.
Sheffield JS, Greer LG, Rogers VL, Roberts SW, Lytle H, McIntire DD, Wendel Jr GD. Effect of influenza vaccination in the first trimester of pregnancy. Obstet Gynecol. 2012;120(3):532–7.
Steinhoff MC, Omer SB, Roy E, El Arifeen S, Raqib R, Dodd C, Breiman RF, Zaman K. Neonatal outcomes after influenza immunization during pregnancy: a randomized controlled trial. CMAJ. 2012;184(6):645–53.
Irving SA, Kieke BA, Donahue JG, Mascola MA, Baggs J, DeStefano F, Cheetham TC, Jackson LA, Naleway AL, Glanz JM, et al. Trivalent inactivated influenza vaccine and spontaneous abortion. Obstet Gynecol. 2013;121(1):159–65.
This work was performed as a pilot of the Project on a Framework for Rating Evidence in Public Health (PRECEPT). PRECEPT is funded by the European Centre for Disease Prevention and Control (ECDC; tenders no. 2012/040; 2014/008). The systematic review on neurological sequelae of neonatal sepsis was conducted within the project “Evidence-based research to develop outcome trees for healthcare-associated infections,” funded by the European Centre for Disease Prevention and Control (contract no. 4226).
For case study A, SH and TH performed the searches, analyzed the data, and finalized the SR. For case study B, CR and TH performed the searches, analyzed the data, and finalized the SR. TE and OW acted as supervisors of the SRs. TH wrote the first draft of the manuscript. All authors revised the manuscript draft and contributed to the interpretation and discussion of the findings. All authors read and approved the final manuscript.
All authors declare that they have no competing interests.
Search strategy for case study A (Databases searched: MEDLINE, EMBASE, and the Cochrane Central Register of Controlled Trials, date of last search: July 2, 2014).
#6 “cerebral palsy”
#8 “necrotizing enterocolitis”
#9 “bronchopulmonary dysplasia”
#10 “neurodevelopmental impairment”
#11 “periventricular leukomalacia”
#12 “intraventricular haemorrhage”
#16 (#1 OR #2 OR #3 OR #4 OR #5 OR #6 OR #7 OR #8 OR #9 OR #10 OR #11 OR #12) AND (#13 OR #14) AND #15
Filters: publication date from 2000/01/01; humans; “Medline” OR “systematic review” OR “meta-analysis” OR “intervention”
Search strategy for case study B. (Databases searched: MEDLINE, EMBASE and the Cochrane Database of Systematic Reviews, date of last search: October 8, 2014)
#6 “systematic review”
#8 (#1 OR #2)
#9 (#4 OR #5)
#10 (#6 OR #7)
#11 (#3 AND #8 AND #9 AND #10)
Additional file 1: Case study A: flow chart for the systematic literature search and study selection. (DOCX 87 kb)
Rights and permissions
Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The Creative Commons Public Domain Dedication waiver (http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated.
Cite this article
Harder, T., Remschmidt, C., Haller, S. et al. Use of existing systematic reviews for evidence assessments in infectious disease prevention: a comparative case study. Syst Rev 5, 171 (2016). https://doi.org/10.1186/s13643-016-0347-9
Keywords
- Overview of reviews
- Systematic review
- Case study
- Infectious disease prevention
- Public health