How to conduct systematic reviews more expeditiously?

Abstract

Healthcare consumers, researchers, patients and policy makers increasingly use systematic reviews (SRs) to aid their decision-making process. However, the conduct of SRs can be a time-consuming and resource-intensive task. Often, clinical practice guideline developers or other decision-makers need to make informed decisions in a timely fashion (e.g. outbreaks of infection, hospital-based health technology assessments). Possible approaches to address the issue of timeliness in the production of SRs are to (a) implement process parallelisation, (b) adapt and apply innovative technologies, and/or (c) modify SR processes (e.g. study eligibility criteria, search sources, data extraction or quality assessment). Highly parallelised systematic reviewing requires substantial resources to support a team of experienced information specialists, reviewers and methodologists working alongside clinical content experts to minimise the time for completing individual review steps while maximising the parallel progression of multiple steps. Effective coordination and management within the team and across external stakeholders are essential elements of this process. Emerging innovative technologies have great potential for reducing workload and improving the efficiency of SR production. The most promising areas of application are those allowing automation of specific SR tasks, in particular where these tasks are time-consuming and resource-intensive (e.g. language translation, study selection, data extraction). Modification of SR processes involves restricting, truncating and/or bypassing one or more SR steps, which may risk introducing bias into the review findings. Although the growing experience in producing various types of rapid reviews (RRs) and the accumulation of empirical studies exploring potential bias associated with specific SR tasks have contributed to the methodological development for expediting SR production, there is still a dearth of research examining the actual impact of methodological modifications and comparing the findings of RRs and SRs. Such evidence would help inform which SR tasks can be accelerated or truncated, and to what degree, while maintaining the validity of review findings. SRs delivered in a timely manner can be of value in informing healthcare decisions and recommendations, especially when there is practical urgency and no other relevant synthesised evidence is available.

Background

Role of systematic reviews

Systematic reviews (SRs) are transparent and succinct syntheses of the empirical results of primary research studies addressing one or more questions regarding any given health problem, intervention(s) or policy decision [1, 2]. The proper conduct of SRs entails the application of predefined, explicit and systematic approaches to the formulation of research question(s), study eligibility criteria, the search strategy (literature sources and identification of primary studies), study selection, data extraction, assessment of the methodological quality (or risk of bias) of included studies, data synthesis and analysis, and grading of the overall quality of evidence (e.g. the GRADE approach). These approaches have been shown to minimise bias and improve the precision of review findings [3]. Over the past two decades, SRs have become an important source of evidence placed high in the evidence hierarchy. Healthcare consumers, researchers, patients and policy makers increasingly utilise SRs to aid their decision-making process.

Problems of timeliness and cost

The conduct of SRs can be a time-consuming, cost- and resource-intensive task, which may take on average from 6 months to several years [4–6]. This issue becomes especially problematic when clinical practice guideline developers, healthcare agencies or other decision-makers need to make informed decisions and recommendations expeditiously. For example, scientists working in the field of infectious diseases often deal with time-sensitive circumstances dictated by a clinical or public health emergency. In such situations, timeliness is of the essence at both the evidence synthesis and recommendation development stages. Recent work to support the management of the Ebola outbreak in West Africa offers an extreme example where the need for evidence to guide hand hygiene measures was met by accelerated SRs [7], while in other areas requiring an evidence base, expert opinion was used without any dedicated form of SR [8]. Likewise, for academically based hospitals producing hospital-based health technology assessments (HTAs) of new or emerging technologies, both the timeliness and costs of producing reviews may be critical, in particular when deadlines for the conduct and delivery of HTAs are driven by the interests of manufacturers, physicians and/or patients [9].

Main text

Three approaches, taken alone or in conjunction, may be considered as possible solutions to the issue of timeliness in the production of SRs: (1) implement process parallelisation, (2) adapt and apply innovative technologies allowing automation and (3) modify some SR processes. Although the latter two approaches are also expected to reduce review production costs, both may introduce some form of bias into the review. Implementing process parallelisation will not reduce costs, but neither will it increase the risk of bias (see Table 1).

Table 1 The interrelationship between the three approaches to the conduct of reviews with expected impacts on speed, costs and risk of bias

Process parallelisation

Although the different steps of a SR can be carried out by two reviewers in a linear fashion, where resources permit, many tasks such as study selection, data extraction and quality assessment can be divided amongst several reviewers who perform them in parallel (at least in part), thereby reducing the time needed to complete a SR. Parallelisation of SR tasks is analogous to parallel computing [10], the method used in computer technology whereby a large computing task is divided into many smaller tasks that are computed simultaneously rather than sequentially. One example of the parallelisation of SR tasks is the prioritisation of screening, during which potentially relevant titles/abstracts are placed at the top and less relevant ones at the bottom of a screening list [11]. This approach enables one team of reviewers to identify most of the relevant citations quickly, while another team screens the remaining, mostly irrelevant, citations. Other SR processes, such as the retrieval of full texts, data extraction and evidence synthesis, can then begin and be completed earlier, i.e. in parallel with SR steps initiated chronologically earlier (e.g. screening). Simultaneous implementation of some SR processes can be a time-saving approach whether or not the total workload is reduced. Effective parallelisation of SR processes needs to be supported by purposefully adapted computer technology [11, 12].
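To make the idea concrete, the relevance ranking that underpins screening prioritisation can be sketched in a few lines of Python. This is a minimal sketch under stated assumptions: the classifier choice, the seed-labelling workflow and all citation text below are illustrative inventions, not the implementation of any tool cited in this article.

```python
# Minimal sketch of screening prioritisation: rank unscreened citations by a
# model relevance score so that likely includes surface first, letting one
# team screen high-ranked records while downstream SR steps begin in parallel.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

# A small seed set already screened manually by the review team (1 = include).
seed_texts = [
    "RCT of hand hygiene intervention in hospital wards",
    "Editorial on conference logistics",
    "Randomised trial of alcohol-based hand rub vs soap",
    "Narrative essay on hospital architecture",
]
seed_labels = [1, 0, 1, 0]

# Citations still awaiting screening (title + abstract concatenated).
unscreened = [
    "Cluster randomised trial of hand disinfection compliance",
    "Letter to the editor about journal policy",
]

vectoriser = TfidfVectorizer(ngram_range=(1, 2))
model = LogisticRegression().fit(vectoriser.fit_transform(seed_texts), seed_labels)

# Probability of relevance for each unscreened citation.
scores = model.predict_proba(vectoriser.transform(unscreened))[:, 1]

# Highest-scoring citations go to the 'relevant' screening team first;
# full-text retrieval and data extraction can start on these straight away.
for score, citation in sorted(zip(scores, unscreened), reverse=True):
    print(f"{score:.2f}  {citation}")
```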

Highly parallelised systematic reviewing requires a team experienced in literature searching, clinical epidemiology and research methodology, often working alongside advisors with clinical, statistical and economic expertise. Effective coordination and management within the review team and across the network of external experts and stakeholders are essential to successful process parallelisation. Well-managed parallelisation should not affect the quality of the review produced. However, the resources required to maintain such a model of reviewing can be considerable. The assessment groups undertaking health technology assessments for the National Institute for Health and Care Excellence in the UK and the Evidence-based Practice Centers carrying out comparative effectiveness reviews for the Agency for Healthcare Research and Quality in the US are good examples of this type of management.

Application of innovative technologies

Current developments in innovative technologies (automated or semi-automated) applicable to the production of SRs offer a promising armamentarium for reducing the cost and workload of the SR process [13]. Of course, all such emerging technologies need to be evaluated for their accuracy, reliability, practicality and cost. The Systematic Review (SR) Toolbox, an online catalogue, provides a downloadable list of tools to support SRs (e.g. software, assessment checklists and reporting guidelines) [14].

The most efficient application of machine-learning technologies would be in areas allowing automation of specific SR processes, in particular those involving time-consuming and resource-intensive tasks such as language translation [15], study selection [11, 16–18], data extraction [19] and risk of bias assessment [20]. Some of these technologies have already been evaluated. For example, Balk and colleagues [15] tested a free web-based application (Google Translate) for the accuracy of translation from five languages (Chinese, Japanese, Spanish, French and German) into English by comparing data extracted from publications translated into English by Google Translate with data extracted from the original-language publications by native speakers. The authors found that the accuracy of translation depended on the extraction item (study design and intervention yielded higher accuracy scores) and the language (most of the incorrectly extracted items occurred in articles translated from Chinese). For the task of study selection, a new semi-automated algorithmic strategy reduced the screening workload by 50 % without missing any relevant bibliographic citation [16]. Marshall et al. developed RobotReviewer, an automated machine-learning system for assessing risk of bias (RoB) for the domains included in the Cochrane RoB tool for randomised trials. The system assigns a low, high or unclear RoB rating to each domain and identifies text supporting these RoB judgements. The authors observed only a 10 % difference in overall accuracy between the RoB assessments made by the machine-learning system and those in the published reviews (71.0 % vs. 78.3 %) [20]. The review by Tsafnat et al. surveyed the available tools applicable to the automation of various SR processes (e.g. review question formulation, search strategy, study selection, data extraction, data synthesis and write-up of the review report) [12]. The authors illustrated that not all SR tasks are equally amenable to automation.
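One common way to quantify screening-workload savings of the kind reported above is to ask how far down a model-ranked citation list a reviewer must read before every relevant record has been seen. The sketch below illustrates this generic calculation; it is a simplification offered for illustration, not the evaluation protocol used in the cited studies.

```python
# Hedged illustration: given citations ranked by descending model score,
# compute the fraction of the list that need not be screened manually while
# still capturing every relevant citation (i.e. at 100 % recall).

def workload_saved_at_full_recall(ranked_relevance):
    """ranked_relevance: 0/1 relevance labels in descending model-score order.
    Assumes at least one relevant citation is present."""
    last_relevant = max(i for i, rel in enumerate(ranked_relevance) if rel == 1)
    screened = last_relevant + 1  # items read down to the last true include
    return 1.0 - screened / len(ranked_relevance)

# Example: 10 citations, all 3 relevant ones ranked within the top 4.
print(workload_saved_at_full_recall([1, 0, 1, 1, 0, 0, 0, 0, 0, 0]))  # 0.6
```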

Although fully automated SRs may remain an aspiration for the near future, current achievements in machine-learning technologies are promising steps towards the automation of several SR tasks, which in turn will help expedite the production and dissemination of SRs. Collaboration between SR practitioners and experts in informatics, computer science and linguistics will become increasingly important in harnessing the potential of automation and artificial intelligence to increase the efficiency of systematic reviewing.

Methodological modifications

An alternative approach to synthesising evidence more expeditiously lies in modifying the SR methodology by restricting, curtailing or bypassing one or more SR steps (e.g. study eligibility criteria, search strategy, data extraction, quality assessment, data analysis), while maintaining the same degree of transparency as in traditional SRs. Although cost saving, these modifications may pose a threat to the validity of the review findings. Therefore, empirical evidence informing which traditional SR steps can be accelerated or curtailed, and to what degree, without gravely compromising the validity of findings would be very useful.

In response to the challenge of timeliness, a growing number of ‘rapid reviews’ (RRs) have emerged, described as ‘literature reviews that use methods to accelerate or streamline traditional systematic review processes’ [4, 5, 21–23]. RRs are better suited to narrowly defined research questions where one or more SR steps may be reduced or omitted [4, 6, 21, 22, 24, 25].

The term ‘rapid review’ incorporates an array of products that vary greatly in their purpose, methodological rigour, comprehensiveness, resources used, transparency and production time, which ranges from 1 to 32 weeks [24, 26]. Placing these products under the same term ‘rapid review’ may be misleading and could contribute to a lack of conceptual clarity. Some authors have provided taxonomies and descriptions of types of RRs. For example, Hartling et al. categorised RRs by level of synthesis into four groups: evidence inventories, rapid responses, true RRs (those using reduced forms of SR methodology) and automated approaches [24]. Polisena and colleagues divided RRs into six groups: accelerated, condensed, focused, form of evidence synthesis, modified and tailored RRs [26]. The wide spectrum of RR products reflects differences in how the agencies (e.g. governmental, non-profit, academic research groups) and other relevant stakeholders commissioning and producing evidence synthesis reports view, define and customise the timelines, conduct, production and dissemination of RRs [6, 26]. Understandably, there is no single accepted definition of what constitutes a RR [22, 26], nor is there any formally established methodological guidance on how to conduct RRs (or any type of RR) [4].

Thus, is there sufficient evidence to reliably guide us on how best to expedite SRs without compromising their validity? The majority of RR methodology overviews are surveys that either describe or compare the methods and processes used for conducting RRs and SRs [4, 6, 21, 22, 24–26]. In contrast, empirical evidence from studies comparing the findings of RRs and SRs is insufficient [5, 24, 26]. Such evidence would be useful in informing which traditional SR steps can be accelerated or curtailed and to what degree, while maintaining the validity of review findings.

Over the last two decades, empirical evidence has accumulated from studies investigating different sources of bias related to specific SR tasks. For example, several authors evaluated study location strategies [27, 28], study inclusion criteria [29–33], study selection [34, 35], data extraction [36] and study quality or risk of bias assessment [37–39] as sources of bias in SRs. Notably, more recent evidence has focused on evaluating time- and resource-efficient techniques for performing specific SR tasks. For example, Sampson et al. showed that an Embase search in addition to Medline resulted in only a 6 % change in the pooled effect estimate [40]. Similarly, Royle and Milne found that searches in databases additional to the Cochrane Controlled Trials Register (CCTR), Medline and Embase identified only 2.4 % more studies [41]. These findings were corroborated by Cameron et al., who suggested that comprehensive literature searches may have little impact on the conclusions of a review [42]. Another study demonstrated only a slight change in the pooled effect estimates of Cochrane reviews after excluding intervention trials not found in Medline; the authors concluded that searching sources additional to Medline, particularly Embase, resulted in small incremental gains [43]. Preston and colleagues examined 302 citations included in 9 SRs of diagnostic test accuracy studies and found that 93 % of all included citations had been retrieved by searching Medline, Embase and the reference lists [44]. Some researchers agree that when timeliness is important, hand searching of reference lists and contacting experts can be more effective than comprehensive bibliographic database searches [45, 46].
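Incremental-yield figures of this kind reduce to simple set arithmetic over the citations ultimately included in a review. A toy calculation in the spirit of these retrospective analyses (all citation sets below are invented for illustration):

```python
# Back-of-envelope calculation of incremental search yield. Citation IDs are
# fabricated; in practice each set would come from a retrospective audit of
# where a review's included studies were indexed.
included = set(range(1, 101))    # 100 citations included in the final SR
medline = set(range(1, 81))      # retrievable via Medline
embase = set(range(10, 91))      # retrievable via Embase
ref_lists = set(range(85, 96))   # found by checking reference lists

core = medline | embase | ref_lists
coverage = len(core & included) / len(included)
print(f"Coverage by Medline + Embase + reference lists: {coverage:.0%}")

# Incremental yield of further databases = included studies NOT in the core.
missed = included - core
print(f"Studies requiring additional sources: {len(missed)} "
      f"({len(missed) / len(included):.0%})")
```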

Another area worthy of consideration is the restriction of inclusion criteria by language of publication. The inclusion of studies regardless of the language of publication provides more complete coverage and greater precision of the effect estimate. However, the evidence on whether the exclusion of non-English language publications of studies of conventional healthcare interventions introduces bias has been inconsistent, with some authors showing that meta-analyses of only English-language studies yielded more conservative estimates [29] and others demonstrating no difference [30, 32, 47]. Some authors suggested that the impact of excluding non-English language studies may depend on the topic of the review and the quality of the non-English language studies [29, 31, 32]. For example, Moher and colleagues found that in SRs of conventional interventions, language restriction did not alter the review results, whereas such restrictions resulted in a substantial change in the results of reviews of complementary and alternative medicine interventions [31]. In general, given the recent trend towards increased rates of publication in English, language bias may not have as strong an effect as before [48].
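Whether language restriction matters for a particular review can be probed with a sensitivity analysis that compares pooled estimates with and without the non-English studies. The sketch below assumes a fixed-effect inverse-variance meta-analysis of log odds ratios; the study data are fabricated solely for illustration.

```python
# Sensitivity analysis of language restriction under a fixed-effect
# inverse-variance model. Each study is (log odds ratio, standard error,
# publication language); all values are invented.
import math

studies = [
    (-0.40, 0.20, "en"),
    (-0.25, 0.15, "en"),
    (-0.60, 0.30, "de"),
    (-0.55, 0.25, "zh"),
]

def pooled_or(subset):
    """Inverse-variance pooled odds ratio for a list of studies."""
    weights = [1 / se ** 2 for _, se, _ in subset]
    pooled_log = sum(w * lor for w, (lor, _, _) in zip(weights, subset)) / sum(weights)
    return math.exp(pooled_log)

print(f"Pooled OR, all languages: {pooled_or(studies):.2f}")
print(f"Pooled OR, English only:  "
      f"{pooled_or([s for s in studies if s[2] == 'en']):.2f}")
# Comparing the two estimates indicates whether restricting by language
# would have materially changed the review's conclusion.
```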

The evidence regarding the need for quality assessment of the studies included in SRs is more consistent, indicating that bypassing this important step may lead to substantial bias in the review estimates [37–39, 49, 50]. A clear illustration of this phenomenon was given in the study by Moher and colleagues, where the pooled estimate from low-quality trials, compared to high-quality trials, demonstrated a 34 % greater benefit in the treatment effect [38].
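Read as a ratio of odds ratios (ROR), the 34 % figure can be illustrated with a short worked calculation; the individual pooled ORs below are invented, and only the resulting ROR matches the magnitude reported above.

```latex
% Worked illustration with invented pooled estimates:
% low-quality trials OR = 0.53, high-quality trials OR = 0.80.
\[
\mathrm{ROR} \;=\; \frac{\mathrm{OR}_{\text{low}}}{\mathrm{OR}_{\text{high}}}
             \;=\; \frac{0.53}{0.80} \;\approx\; 0.66,
\]
% i.e. the low-quality trials suggest roughly $1 - 0.66 = 34\%$ greater
% apparent benefit than the high-quality trials.
```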

Much of the above evidence has focused on SRs of randomised trials of health interventions. While these studies have been crucial in guiding current approaches to undertaking full- or reduced-methodology SRs, more empirical evidence is needed as the uptake of SR methodology expands into the evaluation of questions beyond clinical effectiveness (e.g. aetiology, epidemiology or genetic associations).

Conclusion

Future research and perspectives

In situations of clinical urgency (e.g. outbreaks and epidemics of life-threatening infections), when there is no relevant systematically reviewed evidence, SRs delivered in a timely manner can be of great value in informing healthcare decisions and recommendations. SRs conducted expeditiously may also be relevant when an existing SR needs updating or when the available resources are limited [51, 52]. For example, Elliott and colleagues [18] proposed an alternative solution to the problem of keeping SRs and their conclusions up-to-date and accurate: living systematic reviews, high-quality online evidence summaries that are continuously updated as new relevant evidence becomes available. Living systematic reviews are dynamic, constantly changing online-only evidence summaries that demand less intensive work over time than conventional SRs, which remain static between sporadic, more resource-intensive updates. The production and publication of living systematic reviews call for modifications in author team management and in the statistical methods used (to minimise the rate of false positive findings due to the repeated testing associated with updates).
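The repeated-testing problem mentioned above can be seen with a back-of-envelope calculation. The snippet below treats successive updates as independent significance tests, which is a deliberate simplification (cumulative meta-analyses are correlated, and living reviews would draw on sequential meta-analysis methods rather than the crude Bonferroni stand-in shown here).

```python
# Why repeated testing in a living systematic review inflates false positives:
# with no true effect, testing at alpha = 0.05 after each of k updates gives a
# family-wise error rate of 1 - (1 - alpha)^k under the (simplifying)
# assumption of independent looks.
alpha, k = 0.05, 10
naive_fwer = 1 - (1 - alpha) ** k
print(f"Chance of >=1 spurious 'significant' update over {k} looks: "
      f"{naive_fwer:.0%}")  # about 40 %

# A crude Bonferroni correction restores the intended overall error rate at
# the cost of a stricter per-update threshold.
print(f"Per-update threshold to keep overall error near {alpha:.0%}: "
      f"{alpha / k}")
```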

Future empirical evidence comparing RRs with SRs, and comprehensive syntheses of methodological studies exploring the magnitude of bias arising from the modification of any given SR step, are needed to provide the foundations for an evidence-based methodology for conducting SRs in a more timely manner. This evidence could also highlight specific SR steps or subtasks that are either of critical importance or redundant. Assessing the validity of RRs by comparing their findings with those of SRs rests on the crucial assumption that current SR methodology is the gold standard reflecting the best available approach. This may be true regarding transparency and the theoretical justification for instigating various standard procedures to minimise different forms of potential bias in the review process. But to what extent is current SR methodology supported by empirical evidence to guide practice, taking into account the efficiency of SR production?

Some of the new initiatives and developments in the field are likely to address the above-mentioned gaps in knowledge. For example, Cochrane Innovations has initiated the Rapid Response Review programme, which is designed to produce expedited reviews using ‘abbreviated’ and ‘accelerated’ SR methods while maintaining the methodological rigour and transparency of traditional SRs. This process implies iterative interactions between commissioners and reviewers in formulating and refining the research question and scope of the review, thereby streamlining the review process and expediting delivery of the response to any given research question [53].

The January 2015 issue of Systematic Reviews published a thematic collection of articles highlighting important developments in RR methodology which will likely help address the issues related to the timely production of SRs [6, 26, 54]. In February 2015, the Canadian Agency for Drugs and Technologies in Health (CADTH) hosted the Rapid Review Summit (‘Then, Now, and in the Future’) in Vancouver, British Columbia (Canada), where about 150 participants from Canada and other countries discussed the role of RRs in informing healthcare policy and clinical decision-making. The main objectives of this summit included the following: (a) to exchange information amongst stakeholders interested in RRs, (b) to promote knowledge exchange on the applications and production of RRs and (c) to elaborate and prioritise a future research agenda for the development of RR methodology [55].

In addition to making use of automation to expedite the conduct of individual SRs, collective efforts are needed to improve the platforms for the retrieval and synthesis of research information. This can be achieved through the standardisation of data collection, reporting and archiving. The best examples are clinical trial registries, the EMBASE Screening Project [56] and the Systematic Review Data Repository (SRDR) [57].

We hope that ongoing and future research initiatives will generate further relevant empirical data to better inform how best to conduct and deliver SRs in a timely manner. This evidence may also indicate the contexts and/or content areas in which reduced SR methodology could become a standard approach.

Abbreviations

CADTH:

Canadian Agency for Drugs and Technologies in Health

GRADE:

Grading of Recommendations Assessment, Development and Evaluation

HTA:

Health technology assessment

RoB:

Risk of bias

RR:

Rapid review

SR:

Systematic review

CCTR:

Cochrane Controlled Trials Register

References

  1. Cook DJ, Mulrow CD, Haynes RB. Systematic reviews: synthesis of best evidence for clinical decisions. Ann Intern Med. 1997;126(5):376–80.

  2. Mulrow CD. Rationale for systematic reviews. BMJ. 1994;309(6954):597–9.

  3. Higgins JP, Green S. Cochrane handbook for systematic reviews of interventions. Wiley Online Library. 2008.

  4. Khangura S, Konnyu K, Cushman R, Grimshaw J, Moher D. Evidence summaries: the evolution of a rapid review approach. Syst Rev. 2012;1:10. doi:10.1186/2046-4053-1-10.

  5. Ganann R, Ciliska D, Thomas H. Expediting systematic reviews: methods and implications of rapid reviews. Implement Sci. 2010;5:56. doi:10.1186/1748-5908-5-56.

  6. Featherstone RM, Dryden DM, Foisy M, Guise JM, Mitchell MD, Paynter RA, et al. Advancing knowledge of rapid reviews: an analysis of results, conclusions and recommendations from published review articles examining rapid reviews. Syst Rev. 2015;4:50. doi:10.1186/s13643-015-0040-4.

  7. Hopman J, Kubilay Z, Edrees H, Allen T, Allegranzi B. WHO Guideline and systematic review on hand hygiene and the use of chlorine in the context of Ebola. Geneva: World Health Organization; 2015.

  8. WHO/UNICEF/WFP. Nutritional care of children and adults with Ebola virus disease in treatment centres. Interim guideline. Geneva: World Health Organization; 2014.

  9. Gagnon MP. Hospital-based health technology assessment: developments to date. Pharmacoeconomics. 2014;32(9):819–24. doi:10.1007/s40273-014-0185-3.

  10. Grama A, Gupta A, Karypis G, Kumar V. Introduction to parallel computing (2nd edition). Essex: Pearson; 2003.

  11. O’Mara-Eves A, Thomas J, McNaught J, Miwa M, Ananiadou S. Using text mining for study identification in systematic reviews: a systematic review of current approaches. Syst Rev. 2015;4:5. doi:10.1186/2046-4053-4-5.

  12. Tsafnat G, Glasziou P, Choong MK, Dunn A, Galgani F, Coiera E. Systematic review automation technologies. Syst Rev. 2014;3:74. doi:10.1186/2046-4053-3-74.

  13. Elliott J, Sim I, Thomas J, Owens N, Dooley G, Riis J, et al. #CochraneTech: technology and the future of systematic reviews. Cochrane Database Syst Rev. 2014;9:Ed000091. doi:10.1002/14651858.ed000091.

  14. Marshall C. Systematic Review (SR) Toolbox. 2015. http://systematicreviewtools.com/. Accessed on 5 November 2015.

  15. Balk EM, Chung M, Chen ML, Trikalinos TA, Kong Win Chang L. Assessing the accuracy of Google Translate to allow data extraction from trials published in non-English languages. Rockville (MD). 2013. http://effectivehealthcare.ahrq.gov/index.cfm/search-for-guides-reviews-and-reports/?productid=1386&pageaction=displayproduct. Accessed on 5 November 2015.

  16. Wallace BC, Trikalinos TA, Lau J, Brodley C, Schmid CH. Semi-automated screening of biomedical citations for systematic reviews. BMC Bioinformatics. 2010;11:55. doi:10.1186/1471-2105-11-55.

  17. Miwa M, Thomas J, O’Mara-Eves A, Ananiadou S. Reducing systematic review workload through certainty-based screening. J Biomed Inform. 2014;51:242–53. doi:10.1016/j.jbi.2014.06.005.

  18. Elliott JH, Turner T, Clavisi O, Thomas J, Higgins JP, Mavergames C, et al. Living systematic reviews: an emerging opportunity to narrow the evidence-practice gap. PLoS Med. 2014;11(2):e1001603. doi:10.1371/journal.pmed.1001603.

  19. Jonnalagadda SR, Goyal P, Huffman MD. Automating data extraction in systematic reviews: a systematic review. Syst Rev. 2015;4(1):78. doi:10.1186/s13643-015-0066-7.

  20. Marshall IJ, Kuiper J, Wallace BC. RobotReviewer: evaluation of a system for automatically assessing bias in clinical trials. JAMIA. 2015. doi:10.1093/jamia/ocv044.

  21. Khangura S, Polisena J, Clifford TJ, Farrah K, Kamel C. Rapid review: an emerging approach to evidence synthesis in health technology assessment. Int J Technol Assess Health Care. 2014;30(1):20–7. doi:10.1017/S0266462313000664.

  22. Harker J, Kleijnen J. What is a rapid review? A methodological exploration of rapid reviews in Health Technology Assessments. Int J Evid Based Healthc. 2012;10(4):397–410. doi:10.1111/j.1744-1609.2012.00290.x.

  23. Schunemann HJ, Moja L. Reviews: rapid! rapid! rapid! …and systematic. Syst Rev. 2015;4:4. doi:10.1186/2046-4053-4-4.

  24. Hartling L, Guise JM, Kato E, Anderson J, Aronson N, Belinson S, et al. EPC methods: an exploration of methods and context for the production of rapid reviews. Rockville (MD). 2015. http://effectivehealthcare.ahrq.gov/index.cfm/search-for-guides-reviews-and-reports/?productid=2047&pageaction=displayproduct. Accessed on 5 November 2015.

  25. Watt A, Cameron A, Sturm L, Lathlean T, Babidge W, Blamey S, et al. Rapid reviews versus full systematic reviews: an inventory of current methods and practice in health technology assessment. Int J Technol Assess Health Care. 2008;24(2):133–9. doi:10.1017/s0266462308080185.

  26. Polisena J, Garritty C, Kamel C, Stevens A, Abou-Setta AM. Rapid review programs to support health care and policy decision making: a descriptive analysis of processes and methods. Syst Rev. 2015;4:26. doi:10.1186/s13643-015-0022-6.

  27. Hopewell S, McDonald S, Clarke M, Egger M. Grey literature in meta-analyses of randomized trials of health care interventions. Cochrane Database Syst Rev. 2007;2:MR000010. doi:10.1002/14651858.MR000010.pub3.

  28. Egger M, Smith GD. Bias in location and selection of studies. BMJ. 1998;316(7124):61–6.

  29. Juni P, Holenstein F, Sterne J, Bartlett C, Egger M. Direction and impact of language bias in meta-analyses of controlled trials: empirical study. Int J Epidemiol. 2002;31(1):115–23.

  30. Moher D, Pham B, Klassen TP, Schulz KF, Berlin JA, Jadad AR, et al. What contributions do languages other than English make on the results of meta-analyses? J Clin Epidemiol. 2000;53(9):964–72.

  31. Moher D, Pham B, Lawson ML, Klassen TP. The inclusion of reports of randomised trials published in languages other than English in systematic reviews. Health Technol Assess. 2003;7(41):1–90.

  32. Pham B, Klassen TP, Lawson ML, Moher D. Language of publication restrictions in systematic reviews gave different results depending on whether the intervention was conventional or complementary. J Clin Epidemiol. 2005;58(8):769–76.

  33. Counsell C. Formulating questions and locating primary studies for inclusion in systematic reviews. Ann Intern Med. 1997;127(5):380–7.

  34. Berlin JA. Does blinding of readers affect the results of meta-analyses? University of Pennsylvania Meta-analysis Blinding Study Group. Lancet. 1997;350(9072):185–6.

  35. Edwards P, Clarke M, DiGuiseppi C, Pratap S, Roberts I, Wentz R. Identification of randomized controlled trials in systematic reviews: accuracy and reliability of screening records. Stat Med. 2002;21(11):1635–40. doi:10.1002/sim.1190.

  36. Buscemi N, Hartling L, Vandermeer B, Tjosvold L, Klassen TP. Single data extraction generated more errors than double data extraction in systematic reviews. J Clin Epidemiol. 2006;59(7):697–703. doi:10.1016/j.jclinepi.2005.11.010.

  37. Egger M, Juni P, Bartlett C, Holenstein F, Sterne J. How important are comprehensive literature searches and the assessment of trial quality in systematic reviews? Empirical study. Health Technol Assess. 2003;7(1):1–76.

  38. Moher D, Pham B, Jones A, Cook DJ, Jadad AR, Moher M, et al. Does quality of reports of randomised trials affect estimates of intervention efficacy reported in meta-analyses? Lancet. 1998;352(9128):609–13. doi:10.1016/S0140-6736(98)01085-X.

  39. Juni P, Altman DG, Egger M. Systematic reviews in health care: assessing the quality of controlled clinical trials. BMJ. 2001;323(7303):42–6.

  40. Sampson M, Barrowman NJ, Moher D, Klassen TP, Pham B, Platt R, et al. Should meta-analysts search Embase in addition to Medline? J Clin Epidemiol. 2003;56(10):943–55.

  41. Royle P, Milne R. Literature searching for randomized controlled trials used in Cochrane reviews: rapid versus exhaustive searches. Int J Technol Assess Health Care. 2003;19(4):591–603.

  42. Cameron A, Watt A, Lathlean T, Sturm L. Rapid versus full systematic reviews: an inventory of current methods and practice in Health Technology Assessment: ASERNIP-S Report No. 60. Adelaide, South Australia: 2007. https://www.surgeons.org/media/297941/rapidvsfull2007_systematicreview.pdf. Accessed on 5 November 2015.

  43. Halladay CW, Trikalinos TA, Schmid IT, Schmid CH, Dahabreh IJ. Using data sources beyond PubMed has a modest impact on the results of systematic reviews of therapeutic interventions. J Clin Epidemiol. 2015;68(9):1076–84. doi:10.1016/j.jclinepi.2014.12.017.

  44. Preston L, Carroll C, Gardois P, Paisley S, Kaltenthaler E. Improving search efficiency for systematic reviews of diagnostic test accuracy: an exploratory study to assess the viability of limiting to MEDLINE, EMBASE and reference checking. Syst Rev. 2015;4(1):82. doi:10.1186/s13643-015-0074-7.

  45. Royle P, Waugh N. Literature searching for clinical and cost-effectiveness studies used in health technology assessment reports carried out for the National Institute for Clinical Excellence appraisal system. Health Technol Assess. 2003;7(34):iii, ix-x, 1–51.

  46. Oxman AD, Schunemann HJ, Fretheim A. Improving the use of research evidence in guideline development: 14. Reporting guidelines. Health Res Policy Syst. 2006;4:26. doi:10.1186/1478-4505-4-26.

  47. Morrison A, Polisena J, Husereau D, Moulton K, Clark M, Fiander M, et al. The effect of English-language restriction on systematic review-based meta-analyses: a systematic review of empirical studies. Int J Technol Assess Health Care. 2012;28(2):138–44. doi:10.1017/S0266462312000086.

  48. Galandi D, Schwarzer G, Antes G. The demise of the randomised controlled trial: bibliometric study of the German-language health care literature, 1948 to 2004. BMC Med Res Methodol. 2006;6:30. doi:10.1186/1471-2288-6-30.

  49. Grant MJ, Booth A. A typology of reviews: an analysis of 14 review types and associated methodologies. Health Info Libr J. 2009;26(2):91–108. doi:10.1111/j.1471-1842.2009.00848.x.

  50. Scott A, Harstall C. Utilizing diverse HTA products in the Alberta Health Technologies Decision Process: Work in Progress. 2012. http://www.ihe.ca/publications/utilizing-diverse-hta-products-in-the-alberta-health-technologies-decision-process-work-in-progress. Accessed on 5 November 2015.

  51. Moher D, Tsertsvadze A. Systematic reviews: when is an update an update? Lancet. 2006;367(9514):881–3. doi:10.1016/S0140-6736(06)68358-X.

  52. Ahmadzai N, Newberry SJ, Maglione MA, Tsertsvadze A, Ansari MT, Hempel S, et al. A surveillance system to assess the need for updating systematic reviews. Syst Rev. 2013;2:104. doi:10.1186/2046-4053-2-104.

  53. Cochrane Innovations. Cochrane response—the new rapid review from Cochrane. 2014. http://innovations.cochrane.org/response. Accessed on 5 November 2015.

  54. Wilson MG, Lavis JN, Gauvin FP. Developing a rapid-response program for health system decision-makers in Canada: findings from an issue brief and stakeholder dialogue. Syst Rev. 2015;4:25. doi:10.1186/s13643-015-0009-3.

  55. Webber J. Rapid Review Summit: Then, Now and in the Future. Vancouver, British Columbia, February 3 and 4, 2015. https://www.cadth.ca/sites/default/files/pdf/RR%20Summit_FINAL_Report.pdf. Accessed on 5 November 2015.

  56. Cochrane Collaboration. The EMBASE screening project: six months old and going strong. 2014. http://community.cochrane.org/news/news-events/current-news/embase-screening-project-six-months-old-and-going-strong. Accessed on 5 November 2015.

  57. Agency for HealthCare Research and Quality (AHRQ). Systematic Review Data Repository (SRDR). 2015. http://srdr.ahrq.gov/. Accessed on 5 November 2015.


Acknowledgements

We would like to thank Dr Pam Royle for her contributions to search strategies and data management processes.

Author information

Corresponding author

Correspondence to Alexander Tsertsvadze.

Additional information

Competing interests

Noel McCarthy has received institutional funding from the National Institute for Health Research (NIHR) and the UK Food Standards Agency and Defra. Yen-Fu Chen is partly supported by the National Institute for Health Research (NIHR) Collaboration for Leadership in Applied Health Research and Care West Midlands. David Moher is one of the editors of Systematic Reviews. Author(s) have no other competing interest(s) to declare (financial or non-financial).

Authors’ contributions

AT designed and drafted the commentary. YFC, DM, PS and NM provided substantial contributions through their insights, feedback, edits and conceptual suggestions. All authors read and approved the final manuscript.

Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The Creative Commons Public Domain Dedication waiver (http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated.

About this article

Cite this article

Tsertsvadze, A., Chen, YF., Moher, D. et al. How to conduct systematic reviews more expeditiously?. Syst Rev 4, 160 (2015). https://doi.org/10.1186/s13643-015-0147-7
