
Rapid evidence synthesis to enable innovation and adoption in health and social care

Abstract

Background

The rapid identification and adoption of effective innovations in healthcare is a known challenge. The strongest evidence base for innovations can be provided by evidence synthesis, but this is frequently a lengthy process, and even rapid versions can be time-consuming and complex. In the UK, the Accelerated Access Review and Academic Health Science Network (AHSN) have provided the impetus to develop a consistently rapid process to support the identification and adoption of high-value innovations in the English NHS.

Methods

The Greater Manchester Applied Research Collaboration (ARC-GM) developed a framework for a rapid evidence synthesis (RES) approach, which is highly integrated within the innovation process of the Greater Manchester AHSN and the associated healthcare and research ecosystem. The RES uses evidence synthesis approaches and draws on the GRADE Evidence to Decision framework to provide rapid assessments of the existing evidence and its relevance to specific decision problems. We implemented this in a real-time context of decision-making around adoption of innovative health technologies.

Results

Key stakeholders in the Greater Manchester decision-making process for healthcare innovations have found that our approach is both timely and flexible; it is valued for its combination of rigour and speed.

Our RES approach rapidly and systematically identifies, appraises and contextualises relevant evidence, which can then be transparently incorporated into decisions about the wider adoption of innovations. The RES also identifies limitations in existing evidence for innovations and this can inform subsequent evaluations. There is substantial interest from other ARCs and AHSNs in implementing a similar process. We are currently exploring methods to make completed RES publicly available. We are also exploring methods to evaluate the impact of using RES as more implementation decisions are made.

Conclusions

The RES framework we have implemented combines transparency and consistency with flexibility and rapidity. It therefore maximises utility in a real-time decision-making context for healthcare innovations.


Introduction

Rapid evidence synthesis

Whilst evidence synthesis can represent the strongest evidence base for innovations, conventional systematic reviews often take up to 2 years to produce [1, 2], and even rapid reviews may take up to a year [3], with the extent to which their methods differ from those of systematic reviews varying widely [4]. Evidence summaries or evidence briefings are a form of rapid evidence synthesis usually produced on a shorter timescale driven by decision-makers’ needs [5]; they have been found useful in informing decision-making, including by sub-national healthcare administrations [6]. Rapid evidence synthesis (RES) has been used to inform the commissioning of research [7] and services [8] and to inform policy-making [9, 10]. However, a standardised process of rapid evidence synthesis to inform decision-making around accelerating the adoption of beneficial innovations, where timescales are often short, has not yet been presented. We present the process we have developed to enable integrated evidence synthesis within such a decision-making context.

Background and context

Challenges in getting proven innovations rapidly adopted into systems, policies or practice have long been recognised. In the UK, the Accelerated Access Review provided fresh policy impetus to efforts to develop a faster pathway to identify and adopt high-value innovations in the National Health Service (NHS) in England [11]. The Accelerated Access Review set out recommendations to improve efficiency and outcomes for NHS patients by increasing the speed of access to beneficial innovative healthcare methods and technologies, including digital products. A key part of increasing access has been the development of infrastructure including the Academic Health Science Networks (AHSNs) and the NHS Accelerated Access Collaborative (AAC). AHSNs are the agencies charged with supporting the introduction and diffusion of innovative products across the NHS.

In Greater Manchester, Health Innovation Manchester (HInM) is the AHSN with the remit to implement beneficial health innovations across the region [12, 13]. Our definition of innovation includes any technology, device, procedure, set of behaviours, routine or way of working that is new to the Greater Manchester context [14].

Decisions about innovation adoption may need to be made rapidly but this does not negate the need for evidence-informed decision-making, although for some novel technologies the available evidence may be limited. It remains important that there should be transparency about the evidence used, its reliability and relevance to the decision problem and what, if any, further evaluation may be warranted.

Rapid evidence synthesis in Greater Manchester

To ensure that decisions in Greater Manchester on innovation adoption and roll-out are informed by evidence, we developed a framework for the production of RES for innovations being considered for implementation. This framework has been made publicly available and registered [15] and is provided here, together with a representative example (see Supplementary material). The framework builds on earlier work and experience in developing frameworks for evidence briefings including those to directly inform decision-making by healthcare organisations [16,17,18]. It also has some similarities to other evidence briefing approaches which were identified in previous work [19]. However, the approach we have developed is unique in several important respects, which are described here, and represents a RES development in combining the key considerations of speed, transparency, dual emphasis on robustness and relevance of evidence [20, 21] and usability for stakeholders in the context of decisions on innovation adoption.

Speed and flexibility

The RES we produce are designed to be requested, undertaken and delivered within 2 weeks. We use a systematic but streamlined process to enable replicable, transparent, yet consistently very rapid delivery compared with other rapid evidence synthesis processes [6,7,8,9]. Our RES approach is also explicitly designed to take account of the fact that the evidence for innovations may be sparse or of limited relevance, and it incorporates protocols for dealing with innovations which are complex interventions [22, 23]. Flexible question sets provide for both category-level appraisal of the evidence and component analysis of innovations [24]. This means that even where there is very limited evidence for an innovation per se, the findings of a RES using our approach can still inform implementation decisions. We explore examples of this in the section below on “Flexibility in structuring the rapid evidence synthesis”.

Integration

The production of RES is embedded within, and integral to, the Greater Manchester innovation adoption decision-making process, rather than representing an input from a separate organisation. This integration means that key stakeholders and reviewers can work closely together to co-produce the RES. HInM uses a high-level, broad framework which involves identifying and prioritising innovations which are assessed and selected via an ongoing “pipeline” process [25], whereby support is provided for the roll-out of innovations across the region, where those innovations have been identified as a priority by stakeholders [26]. Innovations may come via academia or industry but may also be flagged as being a high priority from a local, regional or national perspective by local or national decision-making bodies. We refer here to the organisation which has identified or developed the intervention as “the sponsor”. The process draws on ARC and AHSN expertise in implementation science, healthcare decision-making and lived experience, as well as evidence synthesis and evaluation. The innovations for which we conduct RES are those which are submitted to this process and meet initial criteria as potentially relevant innovations.

The RES team attends relevant meetings where innovations are discussed, and the need for RES is explored on a case-by-case basis. Where a RES is requested, the review plan is developed iteratively between reviewers and decision-makers. This enables resolution of queries at each stage, maximises the relevance of the RES and supports integration of the evidence appraisal into decision-making.

RES is recognised as one of the necessary components of the early stage of the decision-making process around innovation adoption, alongside public patient involvement and engagement (PPIE) input, business case assessment and consultation with local health and social care stakeholders. The assessment of evidence, its relevance and certainty, is integrated into the considerations before decisions are made as to whether to proceed with the development of the implementation plans. This consideration of evidence may include wider evidence about the impact of interventions for similar decision problems. The RES does not produce recommendations, and decisions are not determined solely by the findings of the RES, but the RES ensures that decisions are evidence informed. In a small number of instances, the RES proves critical to the decision, usually where a decision is made not to proceed. For example, one RES identified that the innovation was addressed by a national guideline that recommended that a particular intervention should not be offered. Usually the contribution is more nuanced, however.

Transparency and consistency

The principles of the GRADE evidence to decision framework are central to our approach to RES, which makes the RES, and the decision process to which it contributes, more transparent, consistent and reproducible [20, 21]. GRADE provides a clear set of considerations for the formation of judgements about the strength of the evidence base for each question addressed and is central to the consideration of the certainty and relevance of the evidence, which are inter-related. The available GRADE frameworks have undergone substantive developments since our previous work on evidence briefings [16].

Relevance

The RES uses GRADE approaches to consider the applicability of the evidence to the Greater Manchester context as well as its reliability. Context, both broadly and narrowly considered, can be key to the impact of introducing an intervention [27]. In the cases of complex or service-level interventions, in particular, it can be difficult to determine the boundary between the intervention and the context [28]. For example, a test for poor prognosis in heart failure [29] has potential implications for an entire treatment pathway [30], emphasising the importance of adopting a test-and-treat approach [31]. We are also mindful of the fact that many innovations may look superficially simple but are being considered for introduction into complex systems such as primary care [32].

Although in the first instance our RES consider relevance in terms of a UK-wide NHS context, we also consider the local Greater Manchester context. We do not generally undertake RES where an innovation is already a nationally mandated priority, so local context is potentially important to all the innovations assessed. Local context includes the existing service models, relevant infrastructure, and area and population characteristics including urbanicity, relative deprivation, etc. GRADE helps to ensure that relevance can be given equal prominence to certainty in evidence summaries. An example of identified limited relevance at the national level is where an RES of an innovation in asthma care identified only evidence from US trials. This is not directly relevant to asthma control in people with asthma in the UK, who at the population level have higher baseline control and use of preventative medication [33]. At the local level, an RES for an innovation improving connections between healthcare staff included evidence from a pilot project where one site was very rural [34, 35]. The evidence from that site was considered likely to be only indirectly relevant to Greater Manchester.

Process and stages of RES

We present a full example of an RES in the Supplementary material.

The key elements of the completed RES are shown in Table 1.

Table 1 Structure of RES

Timeframe and personnel

The RES is designed to produce a “good-enough” (rather than perfect) summary of the evidence as a contribution to decision-making in a short timeframe. “Good-enough” is considered to be an up-to-date evidence summary which is based on the most relevant and methodologically rigorous evidence available, with uncertainties clearly articulated and acknowledged. The methods described typically require up to 2 days (median) of time from an experienced researcher with a background in evidence synthesis, spread over an approximately 2-week period. More complex innovations may entail more resources for RES production and may involve more input from an information specialist. More details are provided in the framework (see Supplementary information) [15].

Describing the innovation

The first stage is to briefly describe the innovation in terms of its nature and purpose (Table 2). This establishes the type of innovation (e.g. intervention/test/service delivery mechanism), the population or system that is targeted and the outcomes which should be considered. A comparator, which is usually the current standard of care, is typically also identified through this process. This stage involves assessment and clarification of the information supplied by the sponsor through engagement with HInM and/or other relevant stakeholders. This includes determination of the key characteristics of the innovation which will inform the decision problem.

Table 2 Example: innovation description

Developing the questions

Using the innovation description, we formulate a series of questions (Table 3). These begin with the most narrowly focused and move to wider category-based questions, which consider innovations in the same category and are key to producing a useful RES where evidence for the innovation itself is limited. Questions use the PICO(S) approach: defining the Population, Intervention (Innovation), Comparator, key Outcomes and Setting (where relevant) [36]. The eligible study designs will always include existing evidence syntheses or, in their absence, the most relevant primary research design.

Table 3 Example: key questions
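As a purely illustrative sketch (not part of the published framework), the PICO(S) structure of a question could be represented as a simple data structure; all field names and values here are hypothetical examples:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class PICOQuestion:
    """One RES question in PICO(S) form."""
    population: str                # P: who the innovation targets
    intervention: str              # I: the innovation under assessment
    comparator: str                # C: usually current standard of care
    outcomes: list[str]            # O: key outcomes to consider
    setting: Optional[str] = None  # S: included only where relevant
    level: int = 1                 # 1 = specific innovation, 2 = category, 3 = wider category

# Hypothetical level-1 question for an inhaler-support innovation
q1 = PICOQuestion(
    population="adults with asthma",
    intervention="digital inhaler-support tool",
    comparator="usual care",
    outcomes=["asthma control", "adverse events"],
)
```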

If the innovation is an intervention, then the questions will be ones of effectiveness and safety; where the innovation is (for example) a diagnostic test or screening tool, we consider accuracy as well as the impact on participants and health systems of implementing the technology. For complex interventions, each core feature is described and underpins the formulation of several questions. When evaluating evidence for particular components of a complex intervention, we are mindful that effectiveness may derive not solely from the additive effect of components but from their interaction with each other, as well as with the (often complex) system context [37]. We would therefore consider evidence relating to an individual component to be indirectly relevant to the innovation as a whole. There may be multiple questions of this form, reflecting different populations or comparators. An intervention designed to support inhaler use, for instance, required separate questions for the populations of people with asthma and people with chronic obstructive pulmonary disease.

Whilst we first focus on effectiveness evidence for (1) the specific innovation being assessed, where this is limited, we will explore evidence for (2) the category of innovations (“innovations like this”) and then (3) wider categories of relevant innovation (e.g. “innovations with a similar aim”). Categories are sometimes not obvious, particularly where the innovation is complex [22,23,24, 28, 38]. In the case of wider categories, we may ultimately be looking at any intervention with a purpose similar to the index innovation, or at all interventions for the condition or issue under consideration (see Table 1).

These subsequent questions are designed to ensure that useful evidence can be provided where the evidence for the innovation itself is absent, limited or not directly relevant. Development of these questions involves consultation with key individuals to ensure the focus is relevant. For example, such consultation determined that the widest appropriate question in an appraisal of a particular digital care plan for dementia was “digital care plans for any condition” rather than “care plans for dementia”. These additional questions are designed to be addressed only where the evidence for the first question(s) is considered insufficient. Implementation of the sequential question set is then flexible and sensitive to the nature of the identified evidence. Additional questions are also required when innovations are tests, where the evidence for available treatments should be considered as a whole in the absence of test-and-treat evaluations [31].

Types of evidence

Our focus is always on those study designs best able to answer the questions we have developed. We focus on the identification of existing evidence synthesis (systematic reviews) where possible; where this is not possible, we focus on the most informative primary evidence. In the case of most innovations, this is from comparative studies, giving priority to randomised controlled trials. Where appropriate to the questions, we also include diagnostic accuracy or prognostic studies. There are also questions, especially where the focus of an innovation is on patient experience, where mixed methods or qualitative studies will be the most appropriate form of primary evidence.

Identifying evidence

We adopt a pragmatic and iterative approach to identifying evidence. This uses an initially narrow focus to maximise relevance and progresses to a broader evidence base as necessary. We search key resources including NICE guidance [39], PubMed and the Cochrane Library, which includes both the Cochrane Database of Systematic Reviews and the Cochrane Central Register of Controlled Trials. We also use reference checking or forward citation searching of relevant evidence syntheses and primary studies.

Where relevant, we increasingly encourage those putting forward an innovation for consideration to provide research evidence, as would be the case with a single technology submission to NICE [40], and we routinely search a commercial sponsor’s website. In some cases, sponsors supply evidence which is unpublished or is published but not peer reviewed. We incorporate this material but are transparent about its status. Our own searches often identify grey literature which is treated in the same way.

Where appropriate, we use subject/domain-specific resources, such as the webpage of a particular Cochrane Group [41] or the ORCHA database of health apps [42]. Where required, we consult with an information specialist. Our decision to consult an information specialist is based on the complexity of the innovation and, where appropriate, the wider categories of interventions.

Critical appraisal

We use appropriate methods to critically appraise the different types of evidence we identify. Cochrane reviews are generally considered to represent reliable evidence [43], and we use their summaries and assessments of evidence certainty rather than re-appraising the evidence, unless there are issues around relevance. Where possible with other high-quality systematic reviews, we will also use the existing assessments of evidence from the review. This approach maximises the use of existing high-quality evidence whilst improving timeliness. We consider the quality of non-Cochrane systematic reviews, using the signalling questions from ROBIS as a guide [44]. We consider the possibility of duplication of evidence between multiple evidence syntheses [45].

Where there is no existing evidence synthesis, or we have concerns about the robustness or relevance of a systematic review, we consider primary evidence for the question. We also move to assessing primary research where the existing synthesis has only partially addressed a question, for example because eligibility criteria were narrower. Conversely, where a review has a broader remit, we may look at the included primary studies relevant to our question. In the example of the Phagenyx RES, we looked at the subgroup of RCTs assessing Phagenyx within the Cochrane review of interventions for dysphagia in stroke [46].

Assessment of primary studies considers both the capacity of the study design(s) to answer the question and the risk of bias in the identified studies, producing overall judgements of reliability. Because of our narrow timescale, we do not undertake full assessments but, as with ROBIS, are guided by the domains used. For example, for randomised controlled trials, we are guided by the criteria and considerations of the Cochrane Risk of Bias tool [47]; for other study designs, we consider questions posed by tools such as ROBINS-I, QUIPS, etc. [48, 49].

Relationship with GRADE

In forming judgements about the certainty of the evidence, we are guided by the principles of GRADE [21, 50]. GRADE assesses the certainty of evidence through evaluation of several domains in order to produce an assessment of high, moderate, low or very low certainty. The first domain is the risk of bias in the evidence, which we consider as outlined above. This is considered alongside questions of imprecision, inconsistency, direct relevance of the evidence, and publication bias. There are adaptations of GRADE for non-effectiveness questions [51, 52].
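As a purely illustrative sketch (GRADE judgements are qualitative rather than mechanical, and real assessments may downgrade by two levels for a very serious concern or upgrade observational evidence in some circumstances), the stepwise logic of the domains can be outlined as:

```python
GRADE_LEVELS = ["very low", "low", "moderate", "high"]

def grade_certainty(start, serious_concerns):
    """Simplified GRADE-style rating: begin at 'high' for randomised
    evidence (or 'low' for observational evidence), then step down one
    level for each domain with a serious concern."""
    idx = GRADE_LEVELS.index(start)
    for domain, is_serious in serious_concerns.items():
        if is_serious:
            idx = max(0, idx - 1)
    return GRADE_LEVELS[idx]

# Hypothetical example: randomised evidence with serious indirectness
# and imprecision, the two domains most often at issue in our RES
rating = grade_certainty("high", {
    "risk_of_bias": False,
    "inconsistency": False,
    "indirectness": True,   # e.g. evidence from a different health system
    "imprecision": True,    # e.g. small samples, wide confidence intervals
    "publication_bias": False,
})
```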

Apart from the risk of bias, the domains most relevant to our rapid evidence syntheses are usually imprecision (because of small sample sizes) and indirectness (often a function of context); there is often insufficient evidence to determine inconsistency between studies for the initial questions because there are usually only a small number of studies. The evidence for category-level questions is often of higher certainty than the evidence for the innovation itself; here, inconsistency and completeness of evidence (publication bias) are more likely to be considerations.

We bear in mind that where inconsistency is present (as at the wider category level), this may be a consequence of differences in the interventions or the systems in which they are evaluated, or both, as well as differences in participants or outcome measures. Whilst some interventions are clearly complex, even apparently simple interventions are frequently implemented into complex health systems, and this is especially true of those which would represent changes in patient management [32]. We therefore keep in mind that there may be non-obvious reasons for inconsistent evidence. This is especially true of diagnostic and prognostic tests, for which we always primarily address a key question about the effect of testing on the people involved and their management [31].

In our RES approach, we explore possible reasons for identified inconsistency and attempt to relate these to the Greater Manchester context. For example, in one RES, we identified a pilot study that reported between-site differences in the impact of an intervention on increasing staff participation in a specific form of mental health assessment. After considering the qualitative data in the case studies, this differential appeared likely to be due to one of the sites being part of a large urban area whilst the other was very rural [34]. As we noted above, this led to us reporting that the evidence from one site was more directly relevant to our decision problem. Sometimes, as here, the probable cause of inconsistency becomes clear; in other cases, we may identify several potential explanations. Where appropriate, we may consult with our RES end-users to establish more details about the implementation context, informing our assessment of the likely importance of identified inconsistency and its relationship to indirectness. Clearly, if inconsistency suggests that some of the evidence is not directly relevant, then this may have implications for the precision of the directly relevant evidence.

Imprecision is usually the consequence of small studies with insufficient participants; this results in wide confidence intervals and effect estimates which would be highly likely to change with further evidence. Indirectness is also often an issue for some or all of the evidence. This may be because of inconsistency (as discussed above). However, because innovations assessed are novel, there is also often only a partial evidence base, where the evidence may be only indirectly relevant to many of the people in question, although directly relevant to the group represented in the studies.

Our considerations of relevance (which GRADE considers as (in)directness) are key to our assessments. In addition to the consideration of indirectness which informs our assessment of the certainty of the evidence, we also consider the relevance of the evidence to the context and health system in which the innovation would be implemented—in this case, Greater Manchester in the UK.

Synthesis of the evidence

We use the identified evidence to produce narrative summaries of the evidence for the key questions in the RES. We always summarise evidence for core question(s) relating to the innovation, although we may identify little or no (useful) evidence. We provide a separate answer to each question addressed.

Where possible, we summarise existing evidence syntheses, together with either their existing GRADE assessment or, if these are not available, a judgement based on our assessment of the GRADE considerations. We also provide an assessment of how relevant the evidence from the existing synthesis is to the question.

Where we have been unable to identify a relevant existing evidence synthesis, we summarise the primary studies identified. We use a narrative summary to report effect estimates (with confidence intervals) and their certainty and relevance; only very rarely would we seek to undertake meta-analysis.

We outline the certainty and relevance of the evidence for each outcome in the question, distinguishing where appropriate the population or subgroup to whom it is directly relevant. In the Phagenyx example, for instance, the evidence is directly relevant to people who have dysphagia following stroke, a subgroup of people with dysphagia. We adopt the GRADE principle of assigning judgements around certainty to a particular outcome rather than at the study level. Where appropriate, we report the evidence for each component of an intervention or intervention bundle (where there is no or very limited evidence for the whole). We provide as nuanced a summary of the evidence as possible, clarifying where evidence has different levels of certainty for different populations, components or outcomes. An example of a full RES is provided in the Supplementary information.

Producing a summary

We provide two levels of summary information, written in non-technical language.

The first provides a single brief summary of the evidence picture and highlights its certainty and relevance (Table 4).

Table 4 Example: Headline summary

The second provides a bulleted summary of the certainty and relevance of the evidence for each key question, including (e.g.) nuances of the population to which the evidence is directly relevant (Table 5). This may include aspects of the evidence where relevance to the NHS, or to Greater Manchester, is limited. In both sections, summaries include questions for which we identified no evidence, very limited evidence or very uncertain evidence. The summary follows the approach of the whole evidence synthesis and does not make recommendations to the decision-makers.

Table 5 Example: bulleted summary

Flexibility in structuring the rapid evidence synthesis

As described above, our question series has three possible levels: these relate to (1) evidence for the specific innovation, (2) evidence for innovation category and (3) evidence for wider relevant innovations. Our process involves addressing these questions sequentially, stopping at the point at which we have identified evidence of sufficient certainty and relevance. For the Phagenyx example, suitable innovation-specific (level 1) evidence was identified and no further evidence was required [46, 53]. In another example, a chatbot for mental health [54], there was limited innovation-specific evidence so the search was extended to evidence for the innovation type (level 2) question [55, 56]. For novel innovations that are not part of a wider innovation group, only innovation-specific evidence will be relevant [34].
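The sequential, stop-early logic described above can be sketched as follows; the question texts, evidence labels and helper functions are hypothetical illustrations, not part of the framework itself:

```python
def address_questions(questions, find_evidence, is_sufficient):
    """Address RES questions in order of widening scope (level 1 -> 3),
    stopping once the identified evidence is of sufficient certainty
    and relevance for decision-making."""
    answered = []
    for question in sorted(questions, key=lambda q: q["level"]):
        evidence = find_evidence(question)
        answered.append((question["text"], evidence))
        if is_sufficient(evidence):
            break  # no need to widen the question further
    return answered

# Illustrative run mirroring the chatbot example: level-1 evidence is
# insufficient, level-2 evidence suffices, so level 3 is never addressed
questions = [
    {"level": 1, "text": "Evidence for this specific chatbot?"},
    {"level": 2, "text": "Evidence for mental-health chatbots generally?"},
    {"level": 3, "text": "Evidence for wider digital mental-health tools?"},
]
result = address_questions(
    questions,
    find_evidence=lambda q: "limited" if q["level"] == 1 else "adequate",
    is_sufficient=lambda e: e == "adequate",
)
```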

The use of these question sets allows us to be agile in our approach to RES. Where we consider a multicomponent or bundled innovation, we can rapidly review evidence for the innovation as a whole and, where required, evidence for the innovation components. An example of this is the RES we carried out for RESTORE-2, a tool for care home staff which consists of three key components: identification of “soft signs” of possible physical decline, an early warning score and a structured communication plan. We identified limited evidence for the intervention as a whole [57], so looked at level 1 and 2 questions, as required, for the different innovation components [58,59,60].

The relevance of evidence reported in the RES is considered during subsequent decision-making, with transparent and cautious extrapolation of indirect data where required. For example, in a RES for an innovation for both people with asthma and people with chronic obstructive pulmonary disease (COPD), we found only randomised evidence for people with asthma [61, 62], and this was extrapolated to people with COPD in the absence of other suitable evidence, but we also considered a level 2 question for people with COPD [63, 64]. This is one of the areas where experience suggests it is most useful for the researcher who produced the RES to be available for consultation during the subsequent decision process, as this allows the relationship between the identified evidence and the local context to be explored in detail.

Discussion

The need for evidence-informed decision-making is increasingly apparent for a wide range of healthcare organisations in a climate of increasing and competing demands for services. Decision-making is informed by multiple considerations, including costs and opportunity costs, acceptability and existing infrastructure. Whilst it is critical that decisions are informed by evidence, it is also important that this process is both transparent and consistent. As with all rapid evaluation work, there is a necessary trade-off between rigour, speed and available research resources [65].

The RES we undertake are not conducted as an alternative to full systematic reviews, or even more conventional rapid reviews. Rather they represent the introduction into decision-making of some synthesis of existing research evidence. Rapid evidence synthesis itself is widely used in the form of rapid reviews [4], and the use of rapid reviews to inform research, services and policy-making is becoming established [6,7,8,9,10]. However, the development of a clear framework for both the integration of the RES into a rapid local decision-making process and for the production of the RES is, to the best of our knowledge, unique in the field of innovation adoption.

The framework presented here is grounded in the GRADE Evidence to Decision approach as well as in previous work on evidence briefing services [16, 20]. It uses the principles of this approach to support researchers who need to rapidly identify, assess and synthesise evidence from existing evidence syntheses and other sources in order to support immediate, real-world healthcare decision-making processes. These processes are multidimensional, taking account of evidence alongside stakeholder views, system constraints and financial considerations. We have found that developing and using this framework provides improved transparency about the evidence base for innovations, including its limitations and gaps, to inform pragmatic decisions about implementation and future evaluation needs. It also provides transparency and consistency about the process used to generate the evidence synthesis, and about the limitations that are accepted in order to achieve this rapidity.

The framework is intended for use where there may be relatively limited evidence available for the innovation, as well as where more research is available. It is envisaged that this approach is used by researchers from organisations involved in decisions about innovation adoption, so that queries are rapidly resolved in the innovation description and question formulation stages. In our process, researchers attend relevant meetings, enabling them to answer queries and discuss issues with stakeholders and decision-makers which arise from the RES. The RES does not therefore exist only as a stand-alone document but as part of a broader integration of relevant research evidence in the decision-making process. This may distinguish the process we present from other evidence briefing services which have tended to be externally commissioned [6, 16].

Strengths and limitations of the process

The principal limitation of the process is the necessary trade-off between rapidity on the one hand and comprehensiveness and rigour on the other, which all rapid evaluation confronts [65]. This is the case both in evidence synthesis and in primary research [66, 67]. These trade-offs are sometimes made explicit, as in the NICE Digital Health Framework, which is particularly relevant to the significant proportion of our evaluations relating to digital health innovations [68]. In our case, we use a transparent and structured process to make explicit the trade-offs required for rapidity of evidence synthesis. However, rapid review processes in general are known to sometimes produce different results from full systematic reviews, even where those processes are more comprehensive than the one employed in this rapid evidence synthesis [66, 69, 70].

We have not formally compared the results of our RES with those of a systematic review: although the need for a full systematic review may be identified, there is often insufficient time or resource to commission a further piece of evidence synthesis for the current decision problem, and this limitation is flagged. Where a systematic review is identified as being a high priority, the scope for this can be explored within the wider ARC-GM. We are currently developing a full systematic review informed by the RES process, although with a narrower review question; we plan to compare the full review findings with those of the RES once it has been completed.

It is likely that we will not identify all relevant evidence for some questions, particularly broader category-level questions. This is especially likely where we do not identify any relevant evidence syntheses and are summarising primary studies. The iterative process we use, which includes citation searching and a saturation-based approach, mitigates this risk, as does the focus on existing evidence syntheses. The fact that the RES is produced by a single researcher also makes it necessarily vulnerable to bias and error. The potential for error may be mitigated by the researcher being relatively experienced in evidence synthesis [71]; a possible adaptation to the process would be to incorporate checks by a second researcher, with a concomitant cost in time and resources.

We are especially conscious of the difference that the involvement of an information specialist or medical librarian may make to a systematic review [72, 73], and we have been able to informally assess the impact of their input on the RES. In three instances, we conducted an initial search but, due to time constraints, were only able to consult the information specialist after we had assessed our search results. Across these assessments, their amendments to our searches resulted in the inclusion of one additional study in one RES, which did not make a substantive difference to the findings; in the other two, no changes were made as a result of their involvement. We think it is likely that the impact of involving an information specialist may be less critical because in many cases we are searching for existing systematic reviews, and these may be better indexed than other types of publications [74].

We also acknowledge that searching a limited selection of databases (with or without specialist input) can have an impact on the findings of an evidence synthesis [75]. We accept that we may miss some relevant evidence: because the RES is not a systematic review, it is unlikely to be exhaustive. However, we never search only a single database, and our database searches are supplemented with other methods such as citation searching. Our iterative process for wider questions incorporates elements of a saturation approach, which may also mitigate the risk of missing key pieces of evidence; again, the fact that we are searching for evidence syntheses at some stages of the RES may mitigate this through better indexing [74]. Where we are searching for primary evidence, the fact that we are considering specific innovations rather than broader interventions may also mean that the combination of limited database searches and the use of a single reviewer makes less difference to the outcome [76].

We have considered the potential for information supplied by commercial sponsors to have undue influence on the process. As is the case in our experience of the NICE single technology appraisal approach [40], only very rarely have commercial sponsors submitted published material eligible for inclusion which has not been identified by our own searches. Where newly published material is brought forward, we use reference checking and identification of key terms to augment our database searches. Material which is unpublished but relevant is handled in the way in which personal communications from authors would be within a systematic review and is clearly identified as such. Such material is usually interim or additional analyses from a study we have previously identified and is considered as additional material for that study. Where sponsors supply evidence which is not relevant to the decision problem, we very briefly summarise this, noting the reasons we have excluded it from our evidence synthesis.

The approach to rapid evidence synthesis outlined here has the advantage that it can be undertaken very rapidly: it is designed to be undertaken in the 2 weeks between the initial decision that an innovation has potential merit and a subsequent meeting, where an adoption decision will be made. This represents a very short timescale, even for evidence briefing production [7,8,9, 19], and a much shorter timeframe even than most rapid reviews [3]. In maximising the use of existing evidence synthesis wherever possible, it is an efficient process which minimises research waste.

The RES approach outlined is designed to be extremely flexible, both in terms of the questions which can be addressed and the process of answering those questions; it is an iterative and pragmatic process whereby researcher judgement can be used to refine the approach at every stage. This means it is easily adapted for the assessment of a wide range of innovations: those we have so far assessed include medical devices, screening and prognosis testing, bundled service process interventions and digital mHealth apps.

Implementation and evaluation

Our approach to rapid evidence synthesis has been developed and implemented in a real-time context of decision-making around the adoption of innovative health technologies. Key stakeholders in this decision-making process have found that it is sufficiently timely and flexible to be a useful input and have engaged actively with its production and interpretation. There is substantial interest from other ARCs and AHSNs in implementing a similar process; creating a common resource database of RES undertaken by any organisation would further minimise research waste and improve evidence-informed decision-making. We are exploring options to enable this, including making some RES publicly available. Although we consider local relevance, the RES first consider the relevance of evidence to the NHS in England, meaning that they are also relevant to other regional decision-makers.

We track the progress of innovations for which we have undertaken RES; decisions to date have included adoption, requests for further information from the sponsor, and decisions not to progress. For innovations now adopted for roll-out, the RES can inform subsequent evaluation questions. Published evaluations of the use of evidence briefings are limited [19], and we are considering possible approaches to evaluating our use of RES.

Availability of data and materials

The framework for this methodology is publicly available on OSF: https://osf.io/hsxk5/.

It is also cited in the bibliography and is available as supplementary information to this paper. Full copies of all rapid evidence syntheses produced are available on reasonable request to the corresponding author; an example is included in the supplementary information.

References

  1. Borah R, Brown AW, Capers PL, Kaiser KA. Analysis of the time and workers needed to conduct systematic reviews of medical interventions using data from the PROSPERO registry. BMJ Open. 2017;7:e012545.

  2. Cochrane Community. Proposing and registering new Cochrane Reviews. https://community.cochrane.org/review-production/production-resources/proposing-and-registering-new-cochrane-reviews. Accessed Nov 2020.

  3. Featherstone R, Dryden D, Foisy M, Guise J-M, Mitchell M, Paynter R, et al. Advancing knowledge of rapid reviews: an analysis of results, conclusions and recommendations from published review articles examining rapid reviews. Syst Rev. 2015;4:50.

  4. Garritty C, Gartlehner G, Nussbaumer-Streit B, King VJ, Hamel C, Kamel C, et al. Cochrane Rapid Reviews Methods Group offers evidence-informed guidance to conduct rapid reviews. J Clin Epidemiol. 2021;130:13–22.

  5. Khangura S, Konnyu K, Cushman R, Grimshaw J, Moher D. Evidence summaries: the evolution of a rapid review approach. Syst Rev. 2012;1:10.

  6. Hailey D, Corabian P, Harstall C, Schneider W. The use and impact of rapid health technology assessments. Int J Technol Assess Health Care. 2000;16(2):651–6.

  7. Chambers D, Booth A, Rodgers M, Preston L, Dalton J, Goyder E, et al. Evidence to support delivery of effective health services: a responsive programme of rapid evidence synthesis. Evid Policy. 2021;17(1):173–87.

  8. Wilson P, Farley K, Bickerdike L, Booth A, Chambers D, Lambert M, et al. Does access to a demand-led evidence briefing service improve uptake and use of research evidence by health service commissioners? A controlled before and after study. Implement Sci. 2017;12:20.

  9. Partridge A, Mansilla C, Randhawa H, Lavis J, El-Jardali F, Sewankambo N. Lessons learned from descriptions and evaluations of knowledge translation platforms supporting evidence-informed policy-making in low- and middle-income countries: a systematic review. Health Res Policy Syst. 2020;18(1):127.

  10. Polisena J, Garritty C, Kamel C, Stevens A, Abou-Setta AM. Rapid review programs to support health care and policy decision making: a descriptive analysis of processes and methods. Syst Rev. 2015;4(1):26.

  11. Wellcome Trust. Accelerated Access: review of innovative medicines and medical technologies supported by the Wellcome Trust. https://www.gov.uk/government/publications/accelerated-access-review-final-report; 2016.

  12. Rigby J, Chukwukelu G, Mendoza JP, Yeow J. Health Innovation Manchester as AHSS – the test of a hypothesis. Int J Integr Care. 2021;21(3):5.

  13. Rigby J, Yeow J, Chukwukelu G, Mendoza JP. Health Innovation Manchester origins, formalization, operation. Manchester: The University of Manchester; 2021. https://healthinnovationmanchester.com/wp-content/uploads/2021/08/Health-Innovation-Manchester-Origins-Formalization-Operation-Final-Report.pdf.

  14. Greenhalgh T, Robert G, MacFarlane F, Bate P, Kyriakidou O. Diffusion of innovations in service organizations: systematic review and recommendations. Milbank Q. 2004;82(4):581–629.

  15. Norman G. Rapid evidence synthesis to support health system decision making; 2020.

  16. Chambers D, Wilson P. A framework for production of systematic review based briefings to support evidence-informed decision-making. Syst Rev. 2012;1:32.

  17. CRD. EffectivenessMatters. York: Centre for Reviews and Dissemination, Alcuin College, University of York; 2017. https://www.york.ac.uk/crd/publications/effectiveness-matters.

  18. CRD. Effective Health Care. York: Centre for Reviews and Dissemination, Alcuin College, University of York; 2004. https://www.york.ac.uk/crd/publications/archive/.

  19. Chambers D, Wilson P, Thompson C, Hanbury A, Farley K, Light K. Maximising the impact of systematic review in health care decision making: a systematic scoping review of knowledge-translation resources. Milbank Q. 2011;89(1):131–56.

  20. Alonso-Coello P, Schunemann H, Moberg J, Brignardello-Petersen R, Akl E, Davoli M, et al. GRADE Evidence to Decision (EtD) frameworks: a systematic and transparent approach to making well informed healthcare choices. 1: Introduction. BMJ. 2016;353:i2016.

  21. Guyatt G, Oxman A, Akl E, Kunz R, Vist G, Brozek J, et al. GRADE guidelines: 1. Introduction-GRADE evidence profiles and summary of findings tables. J Clin Epidemiol. 2011;64(4):383–94.

  22. Petticrew M, Anderson L, Elder R, Grimshaw J, Hopkins D, Hahn R, et al. Complex interventions and their implications for systematic reviews: a pragmatic approach. J Clin Epidemiol. 2013;66:1209–14.

  23. Petticrew M, Knai C, Thomas J, Rehfuess E, Noyes J, Gerhardus A, et al. Implications of a complexity perspective for systematic reviews and guideline development in health decision making. BMJ Glob Health. 2019;4(S1):e000899.

  24. Sutcliffe K, Thomas J, Stokes G, Hinds K, Bangpan M. Intervention Component Analysis (ICA): a pragmatic approach for identifying the critical features of complex interventions. Syst Rev. 2015;4:140.

  25. Sibley A, Ziemann A, Robens S, Scarbrough H, Tuvey S. Review of spread and adoption approaches across the AHSN Network: The AHSN Network; 2021. https://www.ahsnnetwork.com/wp-content/uploads/2021/05/Spread-and-adoption-review-final.pdf.

  26. HInM. Innovation deployment: healthcare innovations into frontline care. Health Innovation Manchester. https://healthinnovationmanchester.com/partnerships/innovation-deployment/. Accessed Sept 2022.

  27. Robert G, Fulop N. The role of context in successful improvement. In: Perspectives on context. A selection of essays considering the role of context in successful quality improvement. London: The Health Foundation; 2014. https://www.health.org.uk/sites/default/files/PerspectivesOnContext_fullversion.pdf.

  28. Wells M, Williams B, Treweek S, Coyle J, Taylor J. Intervention description is not enough: evidence from an in-depth multiple case study on the untold role and impact of context in randomised controlled trials of seven complex interventions. Trials. 2012;13:95.

  29. Cuvelliez M, Vandewalle V, Brunin M, Beseme O, Hulot A, de Groote P, et al. Circulating proteomic signature of early death in heart failure patients with reduced ejection fraction. Sci Rep. 2019;9:19202.

  30. NICE. Chronic heart failure in adults: diagnosis and management. London: National Institute for Health and Care Excellence; 2018. https://www.nice.org.uk/guidance/ng106.

  31. Ferrante di Ruffano L, Hyde C, McCaffery K, Bossuyt P, Deeks J. Assessing the value of diagnostic tests: a framework for designing and evaluating trials. BMJ. 2012;344:e686.

  32. Shiel A, Hawe P, Gold L. Complex interventions or complex systems? Implications for health economic evaluation. BMJ. 2008;336(7656):1281–3.

  33. Fuhlbrigge A, Reed ML, Stempel DA, Ortega HO, Fanning K, Stanford RH. The status of asthma control in the U.S. adult population. Allergy Asthma Proc. 2009;30(5):29–33.

  34. NHS. Innovation: S12 Solutions. London: NHS Innovation Accelerator, c/o UCLPartners; 2020. https://nhsaccelerator.com/innovation/s12-solutions/.

  35. S12 Solutions. S12 solutions. 2020. https://www.s12solutions.com/about-us.

  36. NICE. Developing review questions and planning the systematic review. In: The guidelines manual Process and methods [PMG6]. London: National Institute for Health and Care Excellence; 2012. https://www.nice.org.uk/process/pmg6/chapter/developing-review-questions-and-planning-the-systematic-review. Accessed Apr 2021.

  37. Campbell N, Murray E, Darbyshire J, Emery J, Farmer A, Griffiths F, et al. Designing and evaluating complex interventions to improve health care. BMJ. 2007;334:445.

  38. Higgins JP, Lopez-Lopez J, Becker B, Davies S, Dawson S, Grimshaw J. Synthesising quantitative evidence in systematic reviews of complex health interventions. BMJ Glob Health. 2019;4:e000858.

  39. NICE. National Institute for Health and Care Excellence. https://www.nice.org.uk/guidance. Accessed July 2020.

  40. NICE. Guide to the single technology appraisal process. London: National Institute for Health and Care Excellence; 2009. https://www.nice.org.uk/media/default/about/what-we-do/nice-guidance/nice-technology-appraisals/guide-to-the-single-technology-appraisal-process.pdf.

  41. Cochrane. Cochrane Review Groups. https://www.cochranelibrary.com/about/cochrane-review-groups. Accessed July 2020.

  42. ORCHA. https://orchahealth.com/. Accessed July 2020.

  43. Goldkuhle M, Narayan VM, Weigl A, Dahm P, Skoetz N. A systematic assessment of Cochrane reviews and systematic reviews published in high-impact medical journals related to cancer. BMJ Open. 2018;8(3):e020869.

  44. Whiting P, Savović J, Higgins J, Caldwell D, Reeves B, Shea B, et al. ROBIS: a new tool to assess risk of bias in systematic reviews was developed. J Clin Epidemiol. 2016;69(9):225–34.

  45. Pollock M, Fernandes R, Newton A, Scott S, Hartling L. A decision tool to help researchers make decisions about including systematic reviews in overviews of reviews of healthcare interventions. Syst Rev. 2019;8:29.

  46. Bath PM, Lee H, Everton LF. Swallowing therapy for dysphagia in acute and subacute stroke. Cochrane Database Syst Rev. 2018;10(10):CD000323.

  47. Higgins J, Savović J, Page M, Elbers R, Sterne J. Chapter 8: assessing risk of bias in a randomized trial. In: Higgins JPT, Thomas J, Chandler J, Cumpston M, Li T, Page MJ, et al., editors. Cochrane handbook for systematic reviews of interventions version 6. https://training.cochrane.org/handbook/current/chapter-08; 2019.

  48. Sterne J, Hernan M, Reeves B, Savović J, Berkman N, Viswanathan M, et al. ROBINS-I: a tool for assessing risk of bias in non-randomised studies of interventions. BMJ. 2016;355:i4919.

  49. Hayden J, van der Windt D, Cartwright J, Côté P, Bombardier C. Assessing bias in studies of prognostic factors. Ann Intern Med. 2013;158(4):280–6.

  50. Schünemann H, Cuello C, Akl E, Mustafa R, Meerpohl J, Thayer K, et al. GRADE guidelines: 18. How ROBINS-I and other tools to assess risk of bias in nonrandomized studies should be used to rate the certainty of a body of evidence. J Clin Epidemiol. 2019;111:105–14.

  51. Iorio A, Spencer F, Falavigna M, Alba C, Lang E, Burnand B, et al. Use of GRADE for assessment of evidence about prognosis: rating confidence in estimates of event rates in broad categories of patients. BMJ. 2015;350:h870.

  52. Schunemann H, Mustafa R, Brozek J, Santesso N, Alonso-Coello P, Guyatt G. GRADE guidelines: 16. GRADE evidence to decision frameworks for tests in clinical practice and public health. J Clin Epidemiol. 2016;76:89–98.

  53. Scutt P, Lee HS, Hamdy S, Bath PM. Pharyngeal electrical stimulation for treatment of poststroke dysphagia: individual patient data meta-analysis of randomised controlled trials. Stroke Res Treat. 2015;2015:429053.

  54. Inkster B, Sarda S, Subramanian V. An empathy-driven, conversational artificial intelligence agent (Wysa) for digital mental well-being: real-world data evaluation mixed-methods study. JMIR Mhealth Uhealth. 2018;6(11):e12106.

  55. Fitzpatrick KK, Darcy A, Vierhile M. Delivering cognitive behavior therapy to young adults with symptoms of depression and anxiety using a fully automated conversational agent (Woebot): a randomized controlled trial. JMIR Mental Health. 2017;4(2):e19.

  56. Ly KH, Ly AM, Andersson G. A fully automated conversational agent for promoting mental well-being: a pilot RCT using mixed methods. Internet Interv. 2017;10:39–46.

  57. AHSN. Improving safety in care homes: a summary of Academic Health Science Network projects and innovations. https://www.ahsnnetwork.com/app/uploads/2019/09/Care_Homes_Report_WEB.pdf; 2019. Accessed May 2020.

  58. Müller M, Jürgens J, Redaélli M, et al. Impact of the communication and patient hand-off tool SBAR on patient safety: a systematic review. BMJ Open. 2018;8:e022202.

  59. Fang AHS, Lim WT, Balakrishnan T. Early warning score validation methodologies and performance metrics: a systematic review. BMC Med Inform Decis Mak. 2020;20(1):111.

  60. Douw G, Schoonhoven L, Holwerds T, et al. Nurses’ worry or concern and early recognition of deteriorating patients on general wards in acute care hospitals: a systematic review. Crit Care. 2015;19(1):230.

  61. Van Sickle D, Barrett M, Humblet O, Henderson K, Hogg C. Randomized, controlled study of the impact of a mobile health tool on asthma SABA use, control and adherence. Eur Respir J. 2016;48:PA1018.

  62. Merchant RK. Effectiveness of population health management using the Propeller health asthma platform: a randomized clinical trial. J Allergy Clin Immunol Pract. 2016;4(3):455–63.

  63. McCabe C, McCann M, Brady AM. Computer and mobile technology interventions for self-management in chronic obstructive pulmonary disease. Cochrane Database Syst Rev. 2017;5(5):CD011425.

  64. Zwerink M, Brusse-Keizer M, van der Valk PDLPM, Zielhuis GA, Monninkhof EM, van der Palen J, et al. Self management for patients with chronic obstructive pulmonary disease. Cochrane Database Syst Rev. 2014;2014(3):CD002990.

  65. BRACE. The trade off between rigour and real world evidence needs. Birmingham: Birmingham, RAND and Cambridge Evaluation (BRACE) Centre, Health Services Management Centre (HSMC) University of Birmingham; 2021. https://www.birmingham.ac.uk/research/brace/blogs/.aspx.

  66. Marshall IJ, Marshall R, Wallace BC, Brassey J, Thomas J. Rapid reviews may produce different results to systematic reviews: a meta-epidemiological study. J Clin Epidemiol. 2019;109:30–41.

  67. Vindrola-Padros C. Can we re-imagine research so it is timely, relevant and responsive? Comment on “Experience of Health Leadership in Partnering with University-Based Researchers in Canada: A Call to ‘Re-Imagine’ Research”. Int J Health Policy Manag. 2021;10(3):172–5.

  68. NICE. Evidence standards framework for digital health technologies: National Institute for Health and Care Excellence; 2021. https://www.nice.org.uk/about/what-we-do/our-programmes/evidence-standards-framework-for-digital-health-technologies

  69. Reynen E, Robson R, Ivory J, Hwee J, Straus SE, Pham B, et al. A retrospective comparison of systematic reviews with same-topic rapid reviews. J Clin Epidemiol. 2018;96:23–34.

  70. Taylor-Phillips S, Geppert J, Stinton C, Freeman K, Johnson S, Fraser H, et al. Comparison of a full systematic review versus rapid review approaches to assess a newborn screening test for tyrosinemia type 1. Res Synth Methods. 2017;8(4):475–84.

  71. Waffenschmidt S, Knelangen M, Sieben W, et al. Single screening versus conventional double screening for study selection in systematic reviews: a methodological systematic review. BMC Med Res Methodol. 2019;19:132.

  72. Rethlefsen ML, Farrell AM, Osterhaus Trzasko LC, Brigham TJ. Librarian co-authors correlated with higher quality reported search strategies in general internal medicine systematic reviews. J Clin Epidemiol. 2015;68(6):617–26.

  73. Schellinger J, Sewell K, Bloss JE, Ebron T, Forbes C. The effect of librarian involvement on the quality of systematic reviews in dental medicine. PLoS One. 2021;16(9):e0256833.

  74. Goossen K, Hess S, Lunny C, Pieper D. Database combinations to retrieve systematic reviews in overviews of reviews: a methodological study. BMC Med Res Methodol. 2020;20(1):138.

  75. Bramer WM, Rethlefsen ML, Kleijnen J, Franco OH. Optimal database combinations for literature searches in systematic reviews: a prospective exploratory study. Syst Rev. 2017;6(1):245.

  76. Affengruber L, Wagner G, Waffenschmidt S, Lhachimi SK, Nussbaumer-Streit B, Thaler K, et al. Combining abbreviated literature searches with single-reviewer screening: three case studies of rapid reviews. Syst Rev. 2020;9(1):162.

Acknowledgements

The authors are grateful to Sophie Bishop for information specialist support for some RES and for allowing us to assess the impact of including an information specialist in the process.

Funding

This research was funded by the National Institute for Health Research Applied Research Collaboration Greater Manchester. The views expressed in this publication are those of the authors and not necessarily those of the National Institute for Health Research or the Department of Health and Social Care.

Author information

Authors and Affiliations

Authors

Contributions

NC conceived the idea. GN, NC, JD, PW and PB developed the framework, which draws on previous work by PW. GN produced the rapid evidence syntheses with support from NC and JD. GN wrote the first draft of the paper with substantive contributions from PW, PB, JD and NC. All authors approved the submission for publication.

Corresponding author

Correspondence to Gill Norman.

Ethics declarations

Ethics approval and consent to participate

Not applicable: this is a methods paper.

Consent for publication

Not applicable: this is a methods paper.

Competing interests

The authors declare that they have no competing interests.

Additional information

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Supplementary Information

Additional file 1.

The NIHR ARC-GM and Health Innovation Manchester approach to rapid evidence synthesis to support health system decision making.

Additional file 2.

Rapid evidence synthesis: Phagenyx.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/. The Creative Commons Public Domain Dedication waiver (http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated in a credit line to the data.

About this article

Cite this article

Norman, G., Wilson, P., Dumville, J. et al. Rapid evidence synthesis to enable innovation and adoption in health and social care. Syst Rev 11, 250 (2022). https://doi.org/10.1186/s13643-022-02106-z
