User survey finds rapid evidence reviews increased uptake of evidence by Veterans Health Administration leadership to inform fast-paced health-system decision-making
© The Author(s). 2016
Received: 9 December 2015
Accepted: 20 June 2016
Published: 5 August 2016
To provide evidence synthesis for faster-paced healthcare decision-making, rapid reviews have emerged as a streamlined alternative to standard systematic reviews. In 2012, the Veterans Affairs Evidence-based Synthesis Program (VA ESP) added rapid reviews to support Veterans Health Administration (VHA) operational partners’ more urgent decision-making needs. VHA operational partners play a substantial role in dissemination of ESP rapid reviews through a variety of routes, including posting on the VA ESP’s public website (http://www.hsrd.research.va.gov/publications/esp/). As demand for rapid reviews rises, much progress has been made in characterizing methods and practices. However, evidence synthesis organizations still seek to better understand how and when rapid reviews are being used.
The VA ESP administered an online survey to rapid review operational partners. The survey assessed the nature of decision-making needs, overall perception of review content, resulting actions, and implementation timeframe. We used descriptive statistics and narrative methods to summarize findings.
Between October 2011 and April 2015, we completed 12 rapid reviews for 35 operational partners. Operational partners were primarily non-academic subject matter experts with VA operations’ decision-making authority. The most common topic categories reviewed were policy or system (50 %) or process of care (42 %) initiatives. Median report completion time was 14.5 weeks. Survey response rate was 46 %, with at least one operational partner responding for 92 % of reports. Reviews served multiple purposes including policy directive or regulation (72 %), supporting program development and evaluation (55 %), identifying future research needs (45 %), and determining implementation strategy (45 %). Overall, operational partners’ perception of report content was positive. A majority of rapid reviews were used immediately and informed actions ranking high on the Institute of Medicine’s Degrees of Impact framework: 45.4 % effected change, 18.2 % inspired action, 18.2 % informed the field, 9.1 % received recognition, and 9.1 % spread a message.
VA ESP rapid reviews have increased the VHA’s uptake of evidence to inform time-sensitive system-level decision-making. Key areas of interest for future evaluation include assessing user perception of our streamlined methods and the quality of our efforts to inform users of these methods, as well as comparing the usability and impact of our rapid and standard systematic reviews.
Keywords: Rapid review; Evidence synthesis; Decision-making; Implementation; Program impact
To meet time-sensitive demands for quality evidence, rapid reviews have emerged as a streamlined alternative to standard systematic reviews [1–7]. Rapid reviews are used by a variety of health organizations such as Kaiser Permanente, Blue Cross Blue Shield Association, and University of Pennsylvania Health System, and the demand for them only continues to increase [1, 3, 4, 8]. Rapid review products use various approaches to abbreviate the systematic review process. Overall, the two main sources of variation are timeframe (days to months) and extent of synthesis (none to both qualitative and quantitative). Hartling et al. grouped rapid review products into the following four categories based on the extent of synthesis: (1) “inventories” provide a listing of the available evidence, within 3 days to 6 months; (2) “rapid responses” present the best available evidence with no formal synthesis, within 5 days to 3 months, and often rely on secondary sources; (3) “rapid reviews” synthesize the quality of and findings from the evidence, generally within 2 to 4 months; and (4) “automated approaches” generate meta-analyses in response to user-defined queries.
Other common methods of streamlining the systematic review process include limiting the literature search, the extent of data abstraction and quality appraisal, and the use of dual independent review. Although rapid reviews carry the promise of increasing the uptake of evidence in healthcare decision-making where the alternative is no evidence at all [1, 2, 5, 10], uncertainty remains about their potential trade-offs. Concern has been raised that streamlining may compromise the quality of the work and increase the risk of missing evidence or errors in the synthesis, ultimately decreasing utility to end users [3, 4].
Interviews with rapid review producers identified the “continuous intimate relationship with a specific end user” and the nature of the decision as key drivers of rapid review approaches. Rapid reviews also usually require reaching a consensus quickly, which promotes involving stakeholders from different backgrounds early in the process and invites closer attention throughout the revisions. While much progress has been made over the past several years in characterizing rapid review methods and current practices [2, 4–6], less is known about the users of rapid reviews, their knowledge and acceptance of the streamlined methods used to produce rapid reviews, and the impact rapid reviews are having on health system decision-making.
To gain insight into users’ acceptance of methods used to streamline the systematic review process, an Agency for Healthcare Research and Quality (AHRQ) Evidence-based Practice Center (EPC) Method Workgroup conducted qualitative interviews of eight frequent and known users (“Key Informants”) of EPC standard systematic reviews. Key Informants evaluated three sample rapid products on venous thromboembolism and gave their impressions of the streamlining approaches and how they might use such products. In exchange for shorter review timelines, the majority of Key Informants were willing to accept shortcuts in the literature search (such as limiting databases, journals, or years) and in the abstract and full-text review process (such as using a single reviewer rather than dual independent review). However, Key Informants also noted that because potential users may not be aware of the potential ramifications of streamlining standard systematic review methods, rapid review producers have a responsibility to help educate users about the process. Finally, Key Informants identified the credibility of the review producer and strength of evidence assessments as critical components of a rapid review. Less-frequent users of reviews or more varied audiences, however, may have different perspectives than the eight Key Informants interviewed.
Previous evaluations of the impact of rapid reviews have largely focused on health technology topics used to inform Canadian provincial healthcare system coverage and acquisition decisions [13–15]. These evaluations found that rapid health technology assessments (HTAs) have consistently influenced policy decisions, including use as reference material and incorporation of the assessment’s recommendations and conclusions [13, 14]. Similarly, the University of Pennsylvania Health System’s Center for Evidence-based Practice reported that the majority of their rapid technology reviews informed users’ final clinical practice, policy, purchasing, and formulary decisions. Additionally, in Quebec, the budget impact of rapid HTAs developed on-site in collaboration with end users was estimated at approximately $3 million in savings per year. Although these rapid HTAs were seen as useful, some authors acknowledged that they were typically considered only interim products that should be followed up with full assessments, because the short timeframes increased the chance of providing inappropriate advice and typically restricted the scopes to questions of efficacy or effectiveness.
These studies provide preliminary information on the use and influence of rapid HTAs in a few specific settings. As healthcare decision-makers are increasingly demanding accelerated forms of evidence synthesis, rapid reviews are meeting an important need within health systems. It is important, however, to better understand when and in what capacity rapid reviews are used, as well as the mechanisms that help or hinder their implementation from the user’s perspective, across a broader range of topics and settings.
The Veterans Affairs’ Evidence-based Synthesis Program (VA ESP) was established in 2007 to provide the Veterans Health Administration (VHA) with timely and accurate evidence synthesis on important topics to meet their healthcare decision-making needs and to improve Veterans’ health and healthcare. The VA Quality Enhancement Research Initiative (QUERI) provides funding for four ESP Centers, and each Center has an active university affiliation with close ties to the AHRQ Evidence-based Practice Center Program. The Centers are located at the Durham and West Los Angeles VA Medical Centers, the Minneapolis VA Health Care System, and the VA Portland Health Care System. The ESP Coordinating Center (ESP CC), also located in Portland, oversees national ESP program operations, program development and evaluation, and dissemination efforts. Each Center is led by a VA clinician investigator and staffed with 2–3 FTE research assistants/associates. The Centers rely heavily on fellows and residents to round out review teams, and each produces 3 standard systematic reviews annually.
In 2012, in response to VHA operational partner feedback, the VA ESP added rapid reviews to support the VHA’s more urgent decision-making needs. The ESP CC increased capacity to provide this product, adding dedicated research staff with extensive systematic review expertise. The rapid review team is led by a VA clinician researcher (0.10 FTE) and consists of 1.6 FTE research associates, a 0.50 FTE librarian, and a 0.50 FTE research assistant, and it draws on existing ESP CC infrastructure, including a full-time Associate Director charged with program management and an editorial coordinator. The ESP CC conducts 3 to 5 rapid reviews each fiscal year. Consequently, rapid reviews are reserved for topics that (1) are identified as top priority by senior management, (2) would potentially have important consequences if delayed, and (3) have a mechanism in place that will allow for rapid implementation of findings. VA ESP rapid review products are completed within 4 months, include primarily qualitative syntheses and conclusions that rely on internal validity and strength of evidence assessments, and are subjected to external peer review; they best resemble the “rapid review” category from the taxonomy described above. Our primary means of gaining efficiency is tailoring the scope to focus on the parameters that would drive the operational partners’ decision-making (for example, health outcomes vs intermediate outcomes). Depending on the volume of evidence and the time allowed, other steps may be taken to abbreviate the review process, including substituting second-reviewer verification of study selection, data abstraction, quality assessment, and strength of evidence ratings for dual independent review. VA ESP rapid reviews are led by experienced systematic reviewers who draw on the core systematic review values of focusing on the highest-quality evidence, minimizing bias, and maximizing transparency when deciding how to abbreviate processes.
ESP rapid reviews have primarily addressed process of care, access topics, and systems policy initiative needs.
Operational partners play a substantial role in dissemination of ESP rapid reviews through a variety of routes. All rapid reviews are posted on the VA ESP’s public program website (http://www.hsrd.research.va.gov/publications/esp/) and indexed in PubMed and may be submitted for publication in peer-reviewed journals where appropriate. The ESP CC consults with operational partners to develop a tailored plan for each report, identifying appropriate strategies that are topic-specific and targeted to optimize uptake by the health system. Dissemination efforts may include (1) VA Cyberseminars (i.e., national, online, free, video-archived presentations of report findings), which are augmented by policy and clinical work in order to make the presentations relevant and applicable to clinicians, administrators, and researchers and (2) presentation of findings at leadership briefings, program/committee meetings, or conferences. Operational partners also frequently recommend dissemination strategies and targets for “Management eBriefs,” an electronic publication to provide VHA management with a concise summary of report findings, including implications for VHA policy or practice.
In early 2015, the VA ESP initiated a quality improvement effort aimed at understanding the utility of its evidence products and their impact on decision-making in the VHA. The project involves surveying operational partners—the high-level VHA leaders who request and use the evidence products—regarding (1) the nature of their decision-making needs, (2) actions resulting from the report’s findings, (3) implementation timeframe, and (4) overall perception of report content. These objectives were inspired by the VHA’s and QUERI’s goals of rapidly translating research findings and evidence-based treatments into clinical practice, increasing the impact of VA research findings through evaluation, and promoting the VHA as a learning healthcare organization through innovative implementation science. In this article, we report retrospective survey results for 11 of the 12 rapid reviews completed between 2011 and 2015. Our survey findings extend knowledge of users’ perspectives on how and when they use rapid reviews to types of users, settings, and report topics different from those previously evaluated.
The VA ESP CC drafted the initial survey instrument based on the QUERI Strategic Plan, the VHA Strategic Plan (“Blueprint for Excellence”), and their linkage to the goals of the ESP. The ESP CC refined the survey based on feedback from the Directors of the ESP Centers as well as VA research and implementation leadership. The final survey assessed the following: (1) nature of decision-making needs, (2) actions resulting from the report’s findings, (3) implementation timeframe, and (4) overall perception of report content. The survey comprised both open- and closed-ended questions to encourage respondents to provide in-depth detail regarding the quality of the review’s content and the actions taken as a result of the report findings (see Additional file 1). We administered the survey using SurveyMonkey (SurveyMonkey Inc., Palo Alto, CA), an online, cloud-based survey creation and administration tool. The survey was reviewed and approved as quality improvement based on VHA policy.
Study participants were operational partners, defined as leaders of a VHA national program office or business line who are responsible for national clinical programs or policies in the deployment of VHA health services. We surveyed all 35 operational partners who requested all 12 VA ESP rapid reviews we produced from 2011 to 2015. We recruited operational partners via an email that included a link to the online survey. In the recruitment email, we notified operational partners that we would keep their identities confidential. We sent the surveys out in four groups between July and October of 2015. We gave operational partners 4 weeks to respond. We sent nonrespondents a reminder email at 14 days. We compared survey respondents and nonrespondents with respect to their organizational role: (1) Academic Researchers charged with leading system-wide health/quality improvement efforts (no VA operation decision-making authority), (2) non-academic Subject-Matter Experts (SME) with VA operation decision-making authority, including National Program Offices, Central Office, and Chief Consultants, or (3) non-academic Health System Managers with VA operation decision-making authority, such as VISN Directors or Chief Medical Officers.
We imported survey results into Microsoft Excel (Microsoft Corp, Redmond, WA) and used StatsDirect Version 2.8.0 (CamCode, UK) for analysis. We conducted statistical comparisons using χ² and Fisher’s exact tests. Narrative methods were used to analyze open-ended responses. We organized the open-ended responses about actions resulting from the report based on the Institute of Medicine’s (IOM) Degrees of Impact—a scale intended to gauge impact made in health systems. This scale provides metrics for assessing five levels of impact: (1) effecting change (e.g., revision of guidelines, legislation enacted), (2) inspiring action (e.g., legislation introduced, advocacy initiatives), (3) informing the field (e.g., subject of meeting or hearing), (4) receiving recognition (e.g., formal response by stakeholders), and (5) spreading the message (e.g., published article). We categorized open-ended responses about how ESP reports compared with other evidence sources as (1) compares equally/similar, (2) prefers ESP for VA focus, (3) no opinion, or (4) other. Open-ended responses were initially coded by one reviewer and verified by one or two other reviewers. Disagreements were resolved by consensus. Closed-ended responses were evaluated using descriptive statistics. We planned to explore heterogeneity in operational partners’ perception of content as a potential source of variability in report impact.
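As a rough illustration of the respondent-versus-nonrespondent comparison described above, a two-sided Fisher’s exact test on a 2×2 table can be computed from first principles. The sketch below is a minimal stdlib-only Python implementation; the actual analysis used StatsDirect, and the counts in the example are hypothetical, not the survey’s data.

```python
from math import comb

def fisher_exact_2x2(a, b, c, d):
    """Two-sided Fisher's exact test for the 2x2 table [[a, b], [c, d]].

    Sums the hypergeometric probabilities of every table with the same
    margins that is no more likely than the observed one (the convention
    used by most statistical packages)."""
    row1, row2, col1 = a + b, c + d, a + c
    n = row1 + row2

    def prob(x):
        # P(top-left cell = x) under fixed margins (hypergeometric)
        return comb(row1, x) * comb(row2, col1 - x) / comb(n, col1)

    p_obs = prob(a)
    lo, hi = max(0, col1 - row2), min(row1, col1)
    # small tolerance guards against floating-point ties
    return sum(prob(x) for x in range(lo, hi + 1)
               if prob(x) <= p_obs * (1 + 1e-9))

# Hypothetical counts: rows = responded / did not respond,
# columns = subject-matter experts / other organizational roles
p = fisher_exact_2x2(3, 1, 1, 3)
print(f"two-sided p = {p:.4f}")  # prints "two-sided p = 0.4857"
```

Fisher’s exact test is preferred over χ² here because the cell counts in a survey of 35 partners are small enough that the χ² approximation becomes unreliable.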
Summary of review topic categories, methodology, timeframe, and dissemination by fiscal year (FY)

|                                                  | Overall (N = 12) | FY12 (N = 2) | FY13 (N = 5) | FY14 (N = 4) | FY15 (N = 1) |
|--------------------------------------------------|------------------|--------------|--------------|--------------|--------------|
| Median report completion time (in weeks)         | 14.5             |              |              |              |              |
| Report topic category                            |                  |              |              |              |              |
| — Policy or organizational/managerial system^a   | 6 (50 %)         |              | 4 (80 %)     | 2 (50 %)     |              |
| — Process of care^b                              | 5 (42 %)         | 2 (100 %)    | 1 (20 %)     | 1 (25 %)     | 1 (100 %)    |
| — Access                                         | 1 (8 %)          |              |              | 1 (25 %)     |              |
| Performance of original meta-analyses            | 2 (17 %)         |              |              | 1 (25 %)     | 1 (100 %)    |
| Performance of strength of evidence assessments  | 8 (67 %)         | 1 (50 %)     | 3 (60 %)     | 3 (75 %)     | 1 (100 %)    |
| Publicly available on VA website                 | 12 (100 %)       | 2 (100 %)    | 5 (100 %)    | 4 (100 %)    | 1 (100 %)    |
| …                                                | 3 (25 %)         | 1 (50 %)     |              | 1 (25 %)     | 1 (100 %)    |
| …                                                | 1 (8 %)          | 1 (50 %)     |              |              |              |
| Peer-reviewed journal submission in process      | 3 (25 %)         |              | 1 (20 %)     | 1 (25 %)     | 1 (100 %)    |
| Presentation of findings at leadership briefings, program/committee meetings, or conferences | 5 (42 %) | 1 (50 %) | 3 (60 %) |  | 1 (100 %) |
Survey response rate, operational partners’ description of report purpose, and key finding, by report

Role of the annual physical examination in the asymptomatic adult
- Survey response rate: 0/1 (0 %)
- Report purpose: No response obtained
- Key finding: Comprehensive routine physical examinations are not recommended for the asymptomatic adult.

Effect of geriatricians on outcomes of inpatient and outpatient care
- Survey response rate: 2/3 (67 %)
- Report purpose: Determine implementation strategy; guideline or directive; support resource allocation decisions; clinical guidance
- Key finding: The impact of geriatrician involvement on patient function and healthcare utilization varies across the different models of care that include geriatricians in different roles.

Effectiveness of intensive primary care programs
- Survey response rate: 2/9 (22 %)
- Report purpose: Clinical guidance; identify future research needs; support program development and evaluation activities
- Key finding: Inconsistent findings on whether these models of care reduced hospitalizations.

Developing a threshold for small VA hospitals
- Survey response rate: 1/4 (25 %)
- Report purpose: Guideline or directive; identify future research needs; determine implementation strategy
- Key finding: A relationship between hospital size and quality measures was either not found (for adverse events) or was inconsistent (for other measures).

Effects of small hospital closure on patient outcomes
- Survey response rate: 1/2 (50 %)
- Report purpose: Resource allocation decisions
- Key finding: Low-strength evidence that hospital closures leading to increased distance and/or time to nearest hospital may increase mortality for time-sensitive conditions.

Relationship between time delay to colonoscopy and colorectal cancer outcomes
- Survey response rate: 3/5 (60 %)
- Report purpose: Guideline or directive; clinical guidance; determine implementation strategy
- Key finding: No evidence to support current policy requiring follow-up colonoscopy within 60 days of positive screening fecal occult blood tests.

Review of reviews on specialty care topics
- Survey response rate: 1/3 (33 %)
- Report purpose: Program development and evaluation activities
- Key finding: Provided inventory of main findings from systematic reviews on the topics of shared decision-making in palliative care, oncology, and nephrology; interventions that reduce hospitalizations/emergency room (ER) visits for heart failure and chronic obstructive pulmonary disease (COPD); and interdisciplinary specialty care platforms/teams/neighborhood approaches for reducing hospitalizations/ER visits.

Effectiveness of mandatory computer trainings on ethical, workplace, and security topics
- Survey response rate: 1/1 (100 %)
- Report purpose: Performance measure; update existing review; determine implementation strategy; support program development and evaluation activities
- Key finding: No studies identified.

Primary care initial appointment wait times threshold
- Survey response rate: 1/1 (100 %)
- Report purpose: Guideline or directive
- Key finding: No clear support for broad use of any specific wait time standard for new patients in accessing their first primary care or mental health appointment. Offered potential options for selecting a wait time target.

Factors that optimize therapy with repetitive transcranial magnetic stimulation for treatment-resistant depression
- Survey response rate: 1/3 (33 %)
- Key finding: High-frequency rTMS applied to the left dorsolateral prefrontal cortex is the best-studied approach, and it includes an FDA-cleared protocol that has been shown to improve quality of life.

Quality of care provided by advanced practice nurses
- Survey response rate: 1/2 (50 %)
- Report purpose: Inform proposed regulation
- Key finding: Low-strength evidence suggesting no difference in health status, quality of life, mortality, or hospitalizations favoring either APRN or physician care in primary or urgent care settings.

Updates on the prevalence of and interventions to reduce racial and ethnic disparities
- Survey response rate: 2/2 (100 %)
- Report purpose: Guideline or directive; identify future research needs; support program development and evaluation activities; resource allocation decisions
- Key finding: Moderate- and low-strength evidence of worse morbidity and mortality outcomes for some racial minority Veterans groups compared with white Veterans.
Perceptions of the content

Operational partners’ perceptions of report content were assessed with the following items:
- How would you describe the scope of the report?
- To what extent do you agree or disagree with the findings of the report?
- How do the ESP reports you’ve read compare with other evidence sources? (response options included “Prefer ESP for VA focus” and “Other (e.g., acknowledge benefits of different products)”)
- Do characteristics of the rapid review limit the usefulness of the report?^a
- Without the rapid review, how would you have addressed your research need?^a,b (response options included “Nothing; would have had to make the decision without an evidence review” and “Used other evidence source”)

Resulting actions and implementation
Our survey of VHA leadership has improved our understanding of how and when VA ESP rapid reviews are being used to inform time-sensitive healthcare decision-making within the VA healthcare system. These findings also extend knowledge of users’ perspectives on how and when they use rapid reviews to types of users, settings, and report topics different from those previously evaluated. Overall, operational partner feedback was positive. During its first 3 years of offering rapid reviews, the ESP program increased the uptake of evidence to inform the VHA’s time-sensitive decision-making needs, particularly on occasions where the alternative was no review of the evidence at all. The majority of ESP rapid reviews were used immediately and informed actions that ranked high on the IOM’s Degrees of Impact framework: 45.4 % effected change, 18.2 % inspired action, 18.2 % informed the field, 9.1 % received recognition, and 9.1 % spread a message. This specifically addressed the VHA’s strategic goal of rapidly translating research findings and evidence-based treatments into clinical practice. Although VA rapid review topics are carefully prioritized based on a clear demonstration of urgency and the presence of a mechanism for implementation, given the challenges and uncertainty of conducting rapid reviews, it is reassuring to confirm that they are being used as intended.
Our findings are consistent with previous evaluations of the impact of rapid HTAs, which all found them to be valuable products. Timely access to evidence and collaboration between researchers and policymakers—which are both key characteristics of rapid reviews—have frequently been reported as facilitators of implementation of evidence [4, 22, 23]. Previous research on the impact of rapid reviews has primarily focused on their use for clinical practice, policy, purchasing, and formulary decisions, primarily in non-US settings [13–15]. It is useful to learn that the value of rapid reviews extends to a large US healthcare setting, such as the VA health system, for the types of process of care, access, and systems policy initiative topics addressed by the ESP rapid reviews.
It is important to note that the implementation of evidence depends not only on the content and purpose of the evidence but also on the complex environment around the topic, user, and agency. Operational partners indicated that there were on average 2.75 additional factors influencing their decisions, including other stakeholders (69 %), other VA offices (69 %), clinical/expert opinion (69 %), Veterans input (18.75 %), political pressure (18.75 %), economic pressure (12.5 %), and other evidence sources (12.5 %). This suggests that our rapid reviews served as only one tool from a variety of inputs within a complex decision-making process. Learning more about the VHA decision-makers’ processes for weighing the relative contribution of rapid reviews among these different inputs, and how that may differ for standard systematic reviews, may improve our understanding of the consequences of our rapid reviews’ potential limitations.
These initial results have some limitations that we plan to address in future quality improvement efforts. First, although our operational partners’ feedback was very positive overall, it needs to be taken in context with our low response rate. However, the similarity between nonresponders and responders in their organizational roles does not suggest any obvious differences in their perceptions of the reviews. It is also possible that our low response rate may be due in part to our minimal efforts to remind participants to respond to the survey. For example, although we discovered that 16 % of our nonresponders were not reachable because they had retired or were no longer with the VA, we made no further attempts to contact them. Further, we only reminded participants once via email, fewer than the three reminders used in the recent University of Pennsylvania Health System’s Center for Evidence-based Practice survey, which had a higher response rate (72 %). We also did not employ telephone reminders, which are associated with increased response rates (77 vs 53 %, P < .001). However, the possibility of nonresponse bias remains, as other unknown differences between nonresponders and responders could exist.
Second, the retrospective nature of our preliminary data collection may raise the risk of recall bias for some of the survey items. As actions resulting from the rapid reviews are objective and a matter of record, survey items measuring impact likely have the lowest risk of recall bias. However, for survey items measuring the VHA leadership’s perception of report content, the risk of recall bias may be greater, particularly for the older reports. We attempted to reduce this risk by providing copies of the reports along with the survey; however, it is ultimately unknown how familiar respondents were with the reports’ contents. For all future rapid reviews, we plan to address this issue by routinely surveying users 6 months after the review’s completion. Third, although we made progress in assessing our operational partners’ acceptance of some of the potentially important trade-offs of rapid reviews (i.e., restricted scopes and syntheses), we have not yet addressed their perceptions of other specific methods used to streamline the systematic review process. Empiric evidence is sparse and mixed on whether rapid reviews have less-accurate findings than systematic reviews because they often do not meet all the accepted methodological standards of standard systematic reviews. For this reason, further investigation of the consequences of various methodological shortcuts continues to be among the top three key areas of interest for future rapid review research. Fourth, our survey did not specifically assess how well we educated operational partners about, and reported on, the specific methodological alterations we made to gain efficiency and their potential ramifications. In the general interest of transparency and reporting guideline adherence, and because user education was a theme that emerged from the AHRQ EPC Program’s interviews of potential rapid review users, this also warrants further consideration.
Finally, our findings should be interpreted as preliminary as our small sample size may have limited the reliability of our findings. We plan to continue surveying a larger number of users over the next several years, which will increase confidence in our findings and allow a more thorough evaluation of potential sources of variation in use and impact.
Retrospective survey results preliminarily suggest that VA ESP rapid reviews have increased the VHA’s uptake of evidence for time-sensitive healthcare decision-making. The majority of ESP rapid reviews were used immediately and informed high-impact VHA decision-making. Key areas of interest for future evaluation include further assessment of users’ perceptions of specific methods we used to streamline the systematic review process and the quality of our efforts to educate about and report on such methods. Another important next step is to compare the usability and impact of VA ESP rapid and standard systematic reviews in meeting VHA leadership operational partner needs.
AHRQ, Agency for Healthcare Research and Quality; ESP CC, Evidence-based Synthesis Program Coordinating Center; ESP, Evidence-based Synthesis Program; HTA, health technology assessment; IOM, Institute of Medicine; QUERI, Quality Enhancement Research Initiative; SME, Subject-Matter Experts; VA, Veterans Affairs; VHA, Veterans Health Administration
We would like to thank Amy Kilbourne, PhD MPH, and Julia Haskin, MA, for their feedback on manuscript drafts. This material is based upon work funded by the Department of Veterans Affairs, Veterans Health Administration, Office of Research and Development, Quality Enhancement Research Initiative, and Evidence-Based Synthesis Program (Project # 09-199). The views expressed in this article are those of the authors and do not necessarily reflect the position or policy of the Department of Veterans Affairs or the US government. The funders reviewed the manuscript but had no role in conducting the work or writing the manuscript. Any comments received from the funders were addressed at the authors’ independent discretion.
KP, NF, VC, and MH conceptualized the quality improvement project. KP, NF, VC, and LF designed the measures for the survey. LF collected, analyzed, and interpreted the quantitative data under the supervision of KP and NF. KP, NF, VC, and LF analyzed and interpreted the qualitative data together. KP and LF developed the initial drafts of this manuscript. KP, NF, and VC provided the critical revisions for important intellectual content on subsequent drafts. All authors agreed to be publicly accountable for all aspects of the work in ensuring that questions related to the accuracy or integrity of any part of the work are appropriately investigated and resolved. All authors read and approved the final manuscript.
The authors declare that they have no competing interests.
Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The Creative Commons Public Domain Dedication waiver (http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated.
- Khangura S, Polisena J, Clifford TJ, Farrah K, Kamel C. Rapid review: an emerging approach to evidence synthesis in health technology assessment. Int J Technol Assess Health Care. 2014;30(1):20–7.
- Khangura S, Konnyu K, Cushman R, Grimshaw J, Moher D. Evidence summaries: the evolution of a rapid review approach. Syst Rev. 2012;1:10.
- Schünemann H, Moja L. Reviews: rapid! rapid! rapid! …and systematic. Syst Rev. 2015;4:4. doi:10.1186/2046-4053-4-4.
- Hartling L, Guise J-M, Kato E, Anderson J, Aronson N, Belinson S, Berliner E, Dryden D, Featherstone R, Foisy M, Mitchell M, Motu’apuaka M, Noorani H, Paynter R, Robinson K, Schoelles K, Umscheid C, Whitlock E. A taxonomy of rapid reviews links report types and methods to specific decision-making contexts. J Clin Epidemiol. 2015. doi:10.1016/j.jclinepi.2015.05.036.
- Tricco AC, Tetzlaff J, Moher D. The art and science of knowledge synthesis. J Clin Epidemiol. 2011;64(1):11–20. doi:10.1016/j.jclinepi.2009.11.007.
- Ganann R, Ciliska D, Thomas H. Expediting systematic reviews: methods and implications of rapid reviews. Implement Sci. 2010;5:56.
- Hartling L, Guise J-M, Kato E, Anderson J, Aronson N, Belinson S, Berliner E, Dryden D, Featherstone R, Foisy M, Mitchell M, Motu’apuaka M, Noorani H, Paynter R, Robinson K, Schoelles K, Umscheid C, Whitlock E. EPC methods: an exploration of methods and context for the production of rapid reviews. Portland, OR: Agency for Healthcare Research and Quality Scientific Resource Center; 2015.
- Jayakumar KL, Lavenberg JA, Mitchell MD, Doshi JA, Leas B, Goldmann DR, et al. Evidence synthesis activities of a hospital evidence-based practice center and impact on hospital decision making. J Hosp Med. 2016;11(3):185–92.
- Tricco AC, Antony J, Zarin W, Strifler L, Ghassemi M, Ivory J, et al. A scoping review of rapid review methods. BMC Med. 2015;13:224. doi:10.1186/s12916-015-0465-6.
- Perrier L, Persaud N, Thorpe KE, Straus SE. Using a systematic review in clinical decision making: a pilot parallel randomized controlled trial. Implement Sci. 2015;10:118. doi:10.1186/s13012-015-0303-4.
- Polisena J, Garritty C, Umscheid CA, Kamel C, Smith J, Samra K, et al. Rapid review summit: an overview and initiation of a research agenda. Syst Rev. 2015;4:111. doi:10.1186/s13643-015-0111-6.
- Hartling L, Guise J-M, Hempel S, Featherstone R, Mitchell MD, Motu’apuaka ML, et al. EPC methods: AHRQ end-user perspectives of rapid reviews. Agency for Healthcare Research and Quality; 2016.
- Hailey D. A preliminary survey on the influence of rapid health technology assessments. Int J Technol Assess Health Care. 2009;25(3):415–8.
- Hailey D, Corabian P, Harstall C, Schneider W. The use and impact of rapid health technology assessments. Int J Technol Assess Health Care. 2000;16(2):651–6.
- McGregor M, Brophy JM. End-user involvement in health technology assessment (HTA) development: a way to increase impact. Int J Technol Assess Health Care. 2005;21(2):263–7.
- Evidence-based Synthesis Program: about the ESP. Health Services Research and Development. http://hsrd.research.va.gov/publications/esp/. Accessed 26 Jul 2016.
- Mohr D, Cohen A, Chan J, Marsella S, Charns M. Evaluation of the HSR&D Evidence-based Synthesis Program. HSR&D Center for Organization, Leadership, and Management Research (COLMR); 2012.
- VA Health Services Research and Development Program. VA Quality Enhancement Research Initiative (QUERI) Strategic Plan 2016–2020. Washington, DC: VHA Office of Research and Development; 2015. http://www.queri.research.va.gov/about/strategic_plans/QUERIStrategicPlan.pdf. Accessed 26 Jul 2016.
- Veterans Health Administration. VHA operations activities that may constitute research. In: VHA Handbook 1058.05. Washington, DC: US Dept of Veterans Affairs; 2011. http://www.va.gov/vhapublications/ViewPublication.asp?pub_ID=2456. Accessed 26 Jul 2016.
- President’s Report Supplement: Program Listing and View of IOM Finances, 2012 Edition. Washington, DC: Institute of Medicine of the National Academies; 2012. https://www.nationalacademies.org/hmd/~/media/Files/About%20the%20IOM/President-Supplement-2012.pdf. Accessed 26 Jul 2016.
- Veterans Health Administration. Blueprint for Excellence. Washington, DC: US Dept of Veterans Affairs; 2014. http://www.va.gov/HEALTH/docs/VHA_Blueprint_for_Excellence.pdf. Accessed 26 Jul 2016.
- Oliver K, Innvar S, Lorenc T, Woodman J, Thomas J. A systematic review of barriers to and facilitators of the use of evidence by policymakers. BMC Health Serv Res. 2014;14:2. doi:10.1186/1472-6963-14-2.
- Lavis JN. How can we support the use of systematic reviews in policymaking? PLoS Med. 2009;6(11):e1000141. doi:10.1371/journal.pmed.1000141.
- Asch DA, Jedrziewski MK, Christakis N. Response rates to mail surveys published in medical journals. J Clin Epidemiol. 1997;50(10):1129–36.