
Lack of systematicity in research prioritisation processes — a scoping review of evidence syntheses



A systematically and transparently prepared research priority-setting process within a specific scientific area is essential in order to develop a comprehensive and progressive evidence-based approach that will have a substantial societal impact in the area of interest. On the basis of two consensus workshops, the authors suggest the following methods for all such processes: use of experts, stakeholder involvement, literature review, and ranking.


The identification, categorisation, and discussion of methods for preparing a research prioritisation process.


Eligibility criteria: Evidence syntheses that included original studies presenting a research prioritisation process and that listed the methods used to create it. Only evidence syntheses related to health research were included.

Data sources: We searched the following electronic databases, without limiting by date or language: MEDLINE Ovid, Embase Ovid, Epistemonikos, and CINAHL EBSCO.

Charting methods: The methods used were mapped and broken down into different elements, and the use of the elements was determined. To support the mapping, (A) all of the elements were collapsed into unique categories, and (B) four essential categories were selected as crucial to a successful research prioritisation process.


Twelve evidence syntheses were identified, including 416 original studies. The identification and categorisation of methods used resulted in 13 unique categories of methods used to prepare a research agenda.


None of the identified categories was used in all of the original studies. Surprisingly, all four of the essential categories were used in only one of the 416 original studies identified. There is seemingly no international consensus on which methods to use when preparing a research prioritisation process.

Protocol registration

The protocol was registered in the Open Science Framework.



The annual global investment of more than US $130 billion in health research makes it increasingly necessary to prioritise health research investment at all levels [1, 2]. Research prioritisation processes can strengthen national health research systems [3,4,5] and may contribute to better harmonisation of health research globally [3].

It is commonly accepted that health research priority-setting processes help researchers and policymakers to effectively target the research with the best potential to benefit public health [6, 7]. The advantages of such research prioritisation processes are the identification and resolution of clinical challenges, assistance in prioritising different research questions, the balancing of participant opinions, and the identification of knowledge gaps without influencing preferences among subgroups. However, health research prioritisation is difficult: many ideas compete for limited research resources, research outcomes are inherently uncertain, and it is difficult to predict and measure the exact impact of research [8]. An essential prerequisite to ensuring evidence-based practice is the conduct of the relevant and necessary research.

Previous attempts to describe research prioritisation processes have concluded that processes differ considerably in terms of methods used [1, 8, 9]. As a common standard may not be appropriate, one study prepared a checklist with nine common themes of good practice [3]. Another study highlighted four methods that combine different elements (the Essential National Health Research (ENHR), the Combined Approach Matrix (CAM), the James Lind Alliance method (JLA), and the Council on Health Research for Development (COHRED)) but emphasised that future research prioritisation processes should offer more transparency and replicability [8]. Finally, one study highlighted the waste of resources as a result of a lack of decision-making in the research prioritisation process [2].

Two systematic reviews evaluated some of the aspects of research prioritisation that we intended to identify and suggested methods to be included in the development of a research agenda [9, 10]. However, one of these reviews limited its search to studies published between 1990 and 2012 [10], and both were limited to a geographical area [9, 10]. Thus, there are no earlier studies that have systematically identified all studies conducting a research prioritisation process to determine the methods used.

Diverse methods have been used to develop a collaborative research prioritisation process, ranging from reports by expert groups to very complex processes involving hundreds of key stakeholder representatives. Thus, many different elements could potentially be included in the methods used in the research prioritisation process. At two workshops, the author team of the present study discussed the importance of these different elements and decided to produce a list of four elements that should be applied in all research prioritisation processes. The list produced included the following: (A) a systematic and transparent approach to identifying any research gaps (such as systematic reviews or scoping reviews of previous studies or a systematic search for potential earlier similar studies); (B) a systematic and transparent approach to gathering the concerns, values, preferences, experiences, and perspectives of end-users; (C) the involvement of persons with clinical and scientific expertise relevant to the planned research prioritisation process; and (D) a transparent prioritisation and consensus process among all stakeholders involved in preparing the research agenda. Systematic reviews are of vital importance. It has recently been suggested that whenever new research is planned, it should be justified on the basis of a systematic and transparent collection of current evidence from previous research, both clinical and pre-clinical. Furthermore, within a systematic and transparent approach to identifying end-users’ concerns, values, preferences, experiences, and perspectives, “end-users” are defined as those who will use and be affected by the planned research [2, 11,12,13,14,15].

The development of a research prioritisation process should also be based on its relevance and value for society (e.g. the burden of disease), its importance given current knowledge (e.g. evidence of the benefits of active intervention, synthesis of previous trials, and consultations with experts), the nature, scope, and severity of the problem, and a plausible explanation/rationale (preclinical research). Thus, a systematic and transparent approach that clarifies end-users’ concerns, values, preferences, experiences, and perspectives should always be part of the research prioritisation process. In addition, such a process should involve clinical and scientific experts so as to avoid impracticable, infeasible, and unrealisable project suggestions.

Finally, the suggested research prioritisation process should be the result of a transparent consensus among the stakeholders preparing it. Hence, we intended to evaluate the extent to which these four essential elements are included in research prioritisation processes. Our explicit preconception is that all research prioritisation processes should ideally include the four essential elements. We intend to develop a systematic and transparent approach to developing research agendas that includes all relevant sources within rehabilitation research. Before embarking on such an endeavour, we need to establish an overview of the various methods that currently inform the creation of research agendas.

This scoping review has aimed to identify, categorise, and discuss the methods used in research prioritisation processes. To accomplish this in a reasonable period of time, we decided to include evidence syntheses of studies preparing a research prioritisation process rather than original papers on developing a research prioritisation process. Therefore, on the basis of elements identified and reported in the included original studies, we were able to identify the methods used in the research prioritisation process.


Protocol and registration

A protocol for this scoping review (ScR) was registered in the Open Science Framework. The reporting of this scoping review follows the PRISMA extension for scoping reviews [16] (see Additional file 1).


The overall method was built around a series of workshops. First, a rehabilitation research group identified a need to look into research prioritisation processes, given an urgent need for an evidence-based research agenda within rehabilitation [17]. To accomplish this, the present study’s authors discussed the general issues related to the research prioritisation process at a 2-day workshop to prepare the current scoping review. At two further consensus workshops, the authors formulated the mapping of the identified methods and decided on the classification of the methods used. During these two workshops, the authors selected the four essential elements — experts (category no. 3), stakeholder involvement (category no. 7), review of literature (category no. 11), and ranking methods (category no. 12) (see Table 1) — for further use in interpreting the results of the scoping review.

Table 1 List of unique element categories and synonyms. For full reference to the included evidence syntheses, see Additional file 4

Eligibility criteria

Studies satisfying the eligibility criteria were evidence syntheses (i.e. studies combining information from multiple studies investigating similar questions to come to an overall understanding of their findings). For an evidence synthesis to be deemed trustworthy, it needed, as a minimum, to include a method section explaining how the studies were identified (a systematic search strategy) and selected and how the data were extracted and either mapped or synthesised to arrive at “[a]n accurate, concise and unbiased synthesis of the available evidence” [18]. The evidence synthesis also needed to include original studies presenting a research prioritisation process within health research. Finally, the process needed to specifically identify and describe the methods applied to create a research agenda in the included original studies. No date or language limitations were involved in the search process.

Information sources

On 4 December 2019, we searched the following electronic databases, without restricting by date or language: MEDLINE Ovid (from 1946 onwards), Embase Ovid (from 1947 onwards), Epistemonikos (which covers systematic reviews indexed in Cochrane, PubMed, Embase, CINAHL, PsycINFO, LILACS, the Campbell Collaboration, JBI, and the EPPI-Centre Evidence Library), and CINAHL EBSCO (from 1981 onwards). We used a version of MEDLINE Ovid that contains records with the following possible statuses in addition to MEDLINE: Publisher, In-Data-Review, In-Process, and PubMed-not-MEDLINE records from NLM.

In addition, reference lists of the included evidence syntheses were used to identify other evidence syntheses potentially relevant to this scoping review.


The following search terms were used: “health priorities”, “consensus development conference”, “research agenda”, “funding priorities”, “priority setting”, “agenda setting”, and “research priorities”. These were combined with search terms for systematic reviews (i.e. “systematic review” and “scoping review”) and limited to searches in the title field (see the detailed search strategy in the appendix). No limitations were used in the search strategy. The search was prepared by TP, HL, and a librarian with expertise in searching for evidence syntheses. TP performed the final search.
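For illustration, the structure of such a title-limited search can be sketched as follows. The Ovid-style field tag and the assembled string below are an approximation of the general pattern (OR within each concept, AND between concepts), not the published search strategy, which is given in full in the appendix.

```python
# Illustrative Ovid-style query builder; ".ti" limits a term to the title field.
priority_terms = [
    "health priorities", "consensus development conference", "research agenda",
    "funding priorities", "priority setting", "agenda setting", "research priorities",
]
review_terms = ["systematic review", "scoping review"]

# OR together the synonyms within each concept block
block_a = " OR ".join(f'"{t}".ti' for t in priority_terms)
block_b = " OR ".join(f'"{t}".ti' for t in review_terms)

# AND the two concept blocks together
query = f"({block_a}) AND ({block_b})"
```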

Study selection

After removing duplicates, two persons (TP, HL) independently screened the search results for inclusion and exclusion and retrieved all references selected by at least one person for further examination in full text. Title/abstract screening, full-text screening, and removal of duplicates were accomplished with the use of Rayyan [19]. The reviewers were not blinded to the journals or authors. Any discrepancies were resolved through discussion.

Data collection process and management

All of the authors and MBR independently extracted the data using a standard data extraction form developed for this scoping review (Microsoft Excel, version 16.46). We pilot-tested the data extraction form and modified it accordingly before use. Data were extracted independently by all authors in groups of two independent reviewers. The authors resolved any discrepancies by discussing these until they reached a consensus. TM and HL quality-checked all of the data and performed the analyses once the data had been extracted. We extracted the following data:

  • Review characteristics (author, year, journal, review type [systematic review, scoping review], number of studies included, table used for data extraction, aim/objective of the systematic review, conclusion)

  • Research prioritisation process method(s) identified for each original study in the included evidence syntheses


All of the methods in each evidence synthesis were listed and tagged to the evidence syntheses and the included original studies. The list of methods in the included evidence syntheses consisted of (1) one single method, (2) two or more methods, and/or (3) named methods (including one or more elements) (Fig. 1). A named method includes one or more elements and has a specific name, such as CAM [9]. The exact wording of the elements used in each method was extracted from each original study included in the evidence synthesis and tagged to the systematic review and the original study from which they originated. All of the named methods were replaced with all of the single elements used and tagged to the evidence syntheses and original studies from which they originated.

Fig. 1
figure 1

The mapping processes. SR, systematic review

The total list of elements included multiple terms for the various elements and was therefore collapsed into 13 unique element categories, where each category was exclusive and exhaustive and represented each term in that category every time the term was identified (Table 1). The categorisation was based upon a simple content analysis of the different methods referred to in the table and the text of the included evidence syntheses. This was done to ensure that the other terms could be meaningfully categorised, as in Table 1. All of the authors validated the 13 categories in two consensus workshops. Prior to each consensus workshop, HL and TM prepared suggestions for categorising all of the terms identified in the included evidence syntheses. During the workshop, all categories and review terms were discussed until consensus was reached. In some cases, debate among the co-authors led to a term being moved to another category. For the sake of clarity, all of the similar terms in each element category are presented in Table 1.
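The collapsing step described above amounts to a synonym-to-category lookup followed by frequency counting. A minimal sketch of that logic follows; the terms and category names here are hypothetical stand-ins for the actual synonyms and 13 categories listed in Table 1.

```python
from collections import Counter

# Hypothetical synonym map: each raw term extracted from an original study
# is assigned to exactly one unique element category (cf. Table 1).
SYNONYM_TO_CATEGORY = {
    "expert panel": "experts",
    "expert consultation": "experts",
    "questionnaire": "survey",
    "online survey": "survey",
    "delphi round": "Delphi",
}

def collapse(terms):
    """Map extracted raw terms to their unique element categories."""
    return [SYNONYM_TO_CATEGORY[t] for t in terms if t in SYNONYM_TO_CATEGORY]

# Terms extracted from hypothetical original studies
extracted = ["expert panel", "questionnaire", "online survey", "delphi round"]
frequency = Counter(collapse(extracted))
```

Counting category occurrences in this way is what permits statements such as how often each category was used across the 416 original studies.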

For the purposes of the study, we formulated the following questions/tasks:

  1. Which categories have been used to create a research prioritisation process?

  2. How often was only one category used?

  3. How often were two or more categories used?

  4. How often were the four essential categories combined in the same study?

  5. How often were the four essential categories combined with other categories?

  6. How often were the named methods used?

On the basis of the named methods referred to in similar earlier studies, the following named methods were used: the ENHR, the CAM, the James Lind Alliance method, the COHRED, and the Child Health and Nutrition Research Initiative (CHNRI).


The database searches were conducted on 4 December 2019 and yielded 2068 records. Following the removal of duplicates, 1541 unique records remained. After the title and abstract screening, we retrieved 152 unique evidence syntheses for full-text screening, from which we excluded 140 studies: 58 for wrong study design, 47 for wrong outcome, 11 for not describing the research prioritisation process, 10 for not being evidence syntheses, 6 for wrong aim, 6 for impossibility of extracting data, and 1 for being a background paper (Fig. 2). Twelve of the evidence syntheses met the inclusion criteria (see Table 2 and Fig. 2 for the PRISMA flowchart). A total of 416 original studies were included in these 12 evidence syntheses. One of the included reviews was referred to as a narrative review in the title [10]. As the term “narrative review” is often used as a synonym for a nonsystematic review, the study by Bryant et al. (2014) [10] should, in principle, have been excluded according to our eligibility criteria. However, Bryant et al. [10] included a method section explaining the search process (sources, search strategy, screening procedure, etc.), inclusion and exclusion criteria, and data extraction. We therefore included this study in our review.

Fig. 2
figure 2

PRISMA flowchart

Table 2 Characteristics of included studies. For full reference to the included evidence syntheses, see Additional file 4

The six questions

  • Question no. 1: Which categories have been used to create a research prioritisation process? Table 1 lists the 13 unique categories of elements, and Fig. 3 illustrates the distribution of the 13 unique categories of elements taken from the 12 evidence syntheses (Table 1 and Fig. 3). Table 3 presents how often a specific category was used in each of the 12 included reviews.

  • Question no. 2: How often was only one category used? On average, each of the 13 categories was used 51.29 times (12.3%) across the 416 original studies included in the 12 evidence syntheses. The most frequently used element was workshops (22.6%), and the least frequently used was observations (2.4%).

  • Question no. 3: How often were two or more categories used? Of the original studies, 42% used only one category, while 32% used two categories, and 25% used three categories or more.

  • Question no. 4: How often were the four essential categories combined in the same study? Table 4 presents the combinations of the four essential categories. No combinations of three categories were identified, and only one study combined experts (no. 3), stakeholder involvement (no. 7), review of literature (no. 11), and ranking methods (no. 12) [30].

  • Question no. 5: How often were the four essential categories combined with other categories? The combinations of each of the four essential categories with other categories are presented in Table 5. The most frequent combination was literature review with another category, found in 36 studies (8.65%), while 6 (1.49%) combined one of the essential categories with any other category.

  • Question no. 6: How often were the named methods used? A few named methods were identified in similar earlier studies and in the included reviews and original papers (Table 6).
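Once each original study is tagged with its set of category numbers, questions such as nos. 4 and 5 reduce to simple set operations over those tags. A minimal sketch, with hypothetical study tags in place of the extracted data:

```python
# Hypothetical per-study category tags; the review's real data set comprised
# 416 original studies. Category numbers follow Table 1: 3 = experts,
# 7 = stakeholder involvement, 11 = literature review, 12 = ranking methods.
studies = {
    "study_A": {3, 7, 11, 12},   # uses all four essential categories
    "study_B": {11, 2},          # literature review combined with a survey
    "study_C": {4},              # a workshop only
}
ESSENTIAL = {3, 7, 11, 12}

# Question no. 4: studies whose category set contains all four essentials
all_four = [s for s, cats in studies.items() if ESSENTIAL <= cats]

# Question no. 5 (literature review case): studies combining category 11
# with at least one other category
lit_combined = [s for s, cats in studies.items() if 11 in cats and len(cats) > 1]
```

As a check on the reported figures, 36 of 416 studies corresponds to approximately 8.65%.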

Fig. 3
figure 3

The distribution and relative use of the 13 unique categories. The orange columns indicate the four essential categories

Table 3 Overview of the categories in each of the included reviews. One category may be reported more than once in an original study; thus, the number of times a category is reported may exceed the total number of included original studies in a review
Table 4 Distribution and frequency of combinations of the four essential categories prepared by the authors
Table 5 The use of the four essential categories prepared by the authors alone or in combination with other categories (not specified). Several studies (percentage)
Table 6 The use of named methods in our material and in similar studies


The process yielded a list of 13 unique element categories for the methods used to prepare a research agenda. Initially, we hypothesised that four essential element categories should be included in all ideal research prioritisation process studies, while others could be added. These four essential categories were “experts” (no. 3), “stakeholder involvement” (no. 7), “literature review” (no. 11), and “ranking methods” (no. 12) (Table 1). Notably, only one study used all four of these categories [30] (included in the evidence synthesis by McGregor (2014) [9]). Furthermore, the four essential categories were used in other combinations in fewer than 4% of the included original studies. The most frequently used category (review of literature, no. 11) was used in 16% of the cases, while the other three categories were used in fewer than 8% of the cases. The 13 categories of elements identified covered a variety of elements used in the different methods reported; only two types of elements — surveys (no. 2, 21%) and workshops (no. 4, 23%) — were used in more than 20% of the cases.

We conducted this study in order to develop a framework for preparing research prioritisation processes within rehabilitation. However, a preliminary search did not identify any evidence syntheses that specifically targeted rehabilitation. As the methods used for preparing research prioritisation processes in other areas of health are likely to be similar to those used in rehabilitation, the scoping review was planned to identify and map methods across all areas of health.

Preparing a new study or a list of suggested studies (as in research agendas) without considering the existing evidence represents “a lack of scientific self-discipline that results in an inexcusable waste of public resources” (Sir Iain Chalmers’ comment [31]). Science has been regarded as a cumulative enterprise from its very beginning. As Lord Rayleigh stated in 1884, “Two processes are thus at work side by side, the reception of new material and the digestion and assimilation of the old; and as both are essential, we may spare ourselves the discussion of their relative importance” (here cited from [32]). Considering that the digital revolution has fundamentally improved our ability to “assimilate” old studies, this long-standing ideal should be apparent in any planning of new studies or research agendas. It is inexcusable that, according to our findings, only 16% of all research prioritisation processes include a literature review; one can only imagine how thrilled earlier scientists would have been by our present capabilities [33, 34]. Hence, a systematic and transparent synthesis of earlier studies relevant to the topic of a research prioritisation process should be mandatory in all such processes, regardless of context or theme. As stated in 1984, “for science to be cumulative, an intermediate step between past and future research is necessary: synthesis of existing evidence” [35].

Furthermore, not only should a research agenda be based upon the systematic identification of research gaps, it should also systematically include the end-users’ perspectives, values, preferences, experiences, and concerns. In this context, we define end-users as those who either use the research results and/or are affected by the results. Thus, every proposal to initiate a new study should be tested to ensure its relevance to or need among end-users [11, 13] and should therefore systematically identify the prepared research agenda’s value to society [36,37,38]. End-users’ perspectives should be considered whenever a research prioritisation process is initiated. Category no. 7, stakeholder involvement, goes beyond end-users (including consumers, public consultations). However, only 8% of the research prioritisation processes included these groups of stakeholders in the research prioritisation process. To ensure the consideration of all critical aspects of the prioritised list of research questions in the research prioritisation process, all key stakeholders must be involved and not just end-users. This includes involving scientific and clinical experts so as to avoid the implementation of impracticable, infeasible, and unrealisable project suggestions, even though only 8% of the research prioritisation processes did so.

The preparation of a useful research agenda within a specific scientific area should include a process for prioritising agenda items. While identified research questions need to be prioritised in view of limited resources, the question is how this prioritisation should be conducted. In the list of categories (Table 1), several elements directly or indirectly include a form of prioritisation (Delphi approaches (no. 6), consensus (no. 9), and ranking methods (no. 12)). However, in all cases, the prioritisation is based mainly on a hidden process rooted in the opinions and experiences of those invited to participate in the research prioritisation process (with the JLA as an exception). Within health research, an ethical criterion could be formulated as a cornerstone of the prioritisation process: “Every major code of ethics concerning research with human subjects, from the Nuremberg Code to the present, has recognised that for clinical research to be ethically justifiable it must satisfy at least the following requirements: value, validity, and protection of the rights and welfare of participants” [36].

We acknowledge that research prioritisation processes should periodically be reviewed and updated [39], and, thus, that a transparent and systematic strategy should be applied to repeat the process. Even though several named methods were identified in this scoping review and in an earlier study [8], none of the named methods included all four of the element categories argued for here. The named methods are the ENHR, the CAM, the James Lind Alliance method, and the COHRED [1, 8].

Earlier related studies

Earlier studies have identified several limitations in research prioritisation processes. These studies emphasised the considerable lack of documentation of the process [1] and even of its transparency [1, 2]. Two studies emphasised that most of the methods also lacked a systematic approach [1, 3], and one study showed that 78% of the processes lacked follow-up after the publication of the research agenda [9].

One study calculated how often a category was used in research priority-setting at the WHO [40]. In contrast to our results, Terry et al. found that 86% of the identified research prioritisation processes used experts (compared to 8.4% of the studies in our research), and 52% used literature reviews (compared to 16.1% in our research). However, the data from Terry et al. covered a very select group of studies from the WHO’s technical units [40]. Another study included all of our systematic reviews and ten other reviews but made no attempt to identify how often specific categories had been used [41]. Two earlier studies reported how often a named method and three element categories were used [8, 9] (see Table 6). Another study pointed out the lack of end-user/stakeholder involvement in research prioritisation processes and stressed that such involvement is crucial [1]. The involvement of end-users and other key stakeholders helps to answer questions related, for instance, to benefit, evidence, costs, efficiency, equity, equality, usefulness to a country’s economy, severity of disease, prevalence of disease, solidarity, protection of the vulnerable, and more [1]. An important topic is the relationship between the different categories. A recent study examined the importance of expert consensus versus the use of systematic reviews and showed that there is no clear answer as to which is more important [42].

Several studies argued against seeking a standard method for performing research prioritisation processes [1, 3]. According to one of these studies, “Because of the many different contexts for which priorities can be set, attempting to produce one best practice is not appropriate, as the optimal approach varies per exercise” [3]. This potentially lends support to our finding of many different methods being used. However, although context may change, certain key categories should always be included in any research prioritisation process. Hence, we suggest always including at least the four essential categories (no. 3, no. 7, no. 11, and no. 12 — see Table 1). Other items may also be considered relevant to any research prioritisation process regardless of context, such as the need to legitimise and document the process, procedures for revision and appeal, and leadership [1]. These items do not relate to identifying and prioritising research questions, but they are important prerequisites for the performance and dissemination of the final research agenda. Two studies made valuable recommendations for the entire process [3, 43], and, finally, one study argued that the whole research prioritisation process should, if possible, avoid the influence of political, economic, environmental, and idiosyncratic elements on the agenda [1]. Allowing options for appeal, hearing, or revision will permit change and adaptation in response to differing opinions [1].

Research prioritisation processes can strengthen national health research systems [3,4,5] and may help to better harmonise health research globally [3]. A systematic and transparent research prioritisation process is also essential for a more transparent distribution of public and private health research funding [2, 3].

Strengths and limitations

Almost half of the evidence syntheses limited their searches by date. However, as more than 50% of the included original studies either had no date-limited search or had a search dating back to 1990, we expect the results to be unaffected by these limitations. Our search ended in 2019, but as this study has sought to illustrate how research prioritisation processes have been performed, rather than to recommend how to treat patients or how to conduct research prioritisation, we have found no reason to carry out a new search. Furthermore, even though our search was performed in 2019, a later scoping review by Tan et al. (2022), based on a search conducted in May and June 2021, identified several reviews, including all of the reviews included in our review. Even in this later search, Tan et al. did not identify any recent evidence syntheses complying with our eligibility criteria that we could have included [41].

We indicated the category of ranking (no. 12) as one of four crucial categories that we argue should be used in all research prioritisation processes. However, the category of consensus (no. 10) could also include a prioritisation process; thus, 12% of the original studies had some form of ranking rather than the 3% stated (no. 12 only). A consensus process is less transparent, however, and thus may not prioritise as transparently and systematically as an explicit ranking process.

It was impossible to identify which end-users had been involved in the research prioritisation process, as only one review provided this information and only for two types of end-users (health professionals and patients) [25].

We did not search specifically for grey literature, as we intended to identify evidence syntheses that included all kinds of studies including grey literature. Among the 416 included original studies, reports were included that could be regarded as grey literature. For example, the only study that included all four of the essential categories prepared by the authors was a report from WHO [30].

For the sake of transparency in the collapsing of the many different elements, we have provided a detailed description of the basis for the categories in Table 1. Furthermore, as our search was comprehensive and we managed to include 416 unique original studies via earlier evidence syntheses, we have provided a realistic picture of how research prioritisation processes within health research have been conducted over the past 25 to 30 years. Our analyses also clearly show the variation and diversity in the performance of these prioritisation processes.

Implications for research

As we need a clearer understanding of how the different categories have been used and why research prioritisation processes are so diverse, further studies evaluating earlier prioritisation processes are needed to obtain in-depth knowledge of these critical processes. An evidence synthesis covering all of the original studies, together with an in-depth analysis of what occurred during each prioritisation process, is thus needed. In addition, surveys and qualitative studies, including in-depth text analyses and interviews with persons involved in research prioritisation processes, would be very beneficial. Most importantly, there is a great need for studies evaluating the impact of using the four essential categories defined by the authors. Finally, there is also a need for studies assessing the impact of including legitimisation and/or documentation of the process, procedures for revisions and appeals, and leadership in the research prioritisation process.


Before mapping and analysing the results of this scoping review, we defined four essential categories for an optimal research prioritisation process, regardless of the topic or context of the process. Our results show that very few studies used one or more of these four essential categories, and only one study used all four. Even though topic and context will change, these four categories should still be used; their use needs to be promoted, and their impact should be examined further.


Conclusions

We have aimed primarily to establish an overview of the methods used to perform research prioritisation processes for developing an evidence-based research agenda within a given topic. The many different methods were collapsed into 13 categories, four of which were defined as essential: use of experts, stakeholder involvement, literature review, and the ranking of strategies. The results indicate that none of the identified categories was used in all of the original studies. Surprisingly, all four of the essential categories were used in only one of the 416 original studies identified. Thus, we conclude that there is not yet an international consensus on how to prepare and conduct research prioritisation processes.

Availability of data and materials

The datasets used and analysed in the current study are available from the publication website.


  1. Tomlinson M, Chopra M, Hoosain N, Rudan I. A review of selected research priority setting processes at national level in low and middle income countries: towards fair and legitimate priority setting. Health Res Policy Syst. 2011;9:19.

  2. Chalmers I, Bracken MB, Djulbegovic B, Garattini S, Grant J, Gulmezoglu AM, et al. How to increase value and reduce waste when research priorities are set. Lancet. 2014;383(9912):156–65.

  3. Viergever RF, Olifson S, Ghaffar A, Terry RF. A checklist for health research priority setting: nine common themes of good practice. Health Res Policy Syst. 2010;8:36.

  4. Ranson MK, Bennett SC. Priority setting and health policy and systems research. Health Res Policy Syst. 2009;7:27.

  5. Conceicao C, Leandro A, McCarthy M. National support to public health research: a survey of European ministries. BMC Public Health. 2009;9:203.

  6. WHO. WHO’s role and responsibilities in health research: WHO; 2010.

  7. Rudan I, Kapiriri L, Tomlinson M, Balliet M, Cohen B, Chopra M. Evidence-based priority setting for health care and research: tools to support policy in maternal, neonatal, and child health in Africa. PLoS Med. 2010;7(7):e1000308.

  8. Yoshida S. Approaches, tools and methods used for setting priorities in health research in the 21st century. J Glob Health. 2016;6(1):010507.

  9. McGregor S, Henderson KJ, Kaldor JM. How are health research priorities set in low and middle income countries? A systematic review of published reports. PLoS One. 2014;9(9):e108787.

  10. Bryant J, Sanson-Fisher R, Walsh J, Stewart J. Health research priority setting in selected high income countries: a narrative review of methods used and recommendations for future practice. Cost Eff Resour Alloc. 2014;12:23.

  11. Lund H, Juhl CB, Norgaard B, Draborg E, Henriksen M, Andreasen J, et al. Evidence-based research series-paper 2: using an evidence-based research approach before a new study is conducted to ensure value. J Clin Epidemiol. 2021;129:158–66.

  12. Lund H, Juhl CB, Norgaard B, Draborg E, Henriksen M, Andreasen J, et al. Evidence-based research series-paper 3: using an evidence-based research approach to place your results into context after the study is performed to ensure usefulness of the conclusion. J Clin Epidemiol. 2021;129:167–71.

  13. Robinson KA, Brunnhuber K, Ciliska D, Juhl CB, Christensen R, Lund H, et al. Evidence-based research series-paper 1: what evidence-based research is and why is it important? J Clin Epidemiol. 2021;129:151–7.

  14. Lund H, Brunnhuber K, Juhl C, Robinson K, Leenaars M, Dorch BF, et al. Towards evidence based research. BMJ. 2016;355:i5440.

  15. Li T, Vedula SS, Scherer R, Dickersin K. What comparative effectiveness research is needed? A framework for using guidelines and systematic reviews to identify evidence gaps and research priorities. Ann Intern Med. 2012;156(5):367–77.

  16. Tricco AC, Lillie E, Zarin W, O’Brien KK, Colquhoun H, Levac D, et al. PRISMA extension for scoping reviews (PRISMA-ScR): checklist and explanation. Ann Intern Med. 2018;169(7):467–73.

  17. SEVERIN - network for rehabilitation research [cited 14 August 2022]. Available from:

  18. Donnelly CA, Boyd I, Campbell P, Craig C, Vallance P, Walport M, et al. Four principles to make evidence synthesis more useful for policy. Nature. 2018;558(7710):361–4.

  19. Ouzzani M, Hammady H, Fedorowicz Z, Elmagarmid A. Rayyan-a web and mobile app for systematic reviews. Syst Rev. 2016;5(1):210.

  20. Badakhshan A, Arab M, Rashidian A, Gholipour M, Mohebbi E, Zendehdel K. Systematic review of priority setting studies in health research in the Islamic Republic of Iran. East Mediterr Health J. 2018;24(8):753–69.

  21. Booth A, Maddison J, Wright K, Fraser L, Beresford B. Research prioritisation exercises related to the care of children and young people with life-limiting conditions, their parents and all those who care for them: a systematic scoping review. Palliat Med. 2018;32(10):1552–66.

  22. Erntoft S. Pharmaceutical priority setting and the use of health economic evaluations: a systematic literature review. Value Health. 2011;14(4):587–99.

  23. Garcia AB, Cassiani SH, Reveiz L. A systematic review of nursing research priorities on health system and services in the Americas. Rev Panam Salud Publica. 2015;37(3):162–71.

  24. Manafo E, Petermann L, Vandall-Walker V, Mason-Lai P. Patient and public engagement in priority setting: a systematic rapid review of the literature. PLoS One. 2018;13(3):e0193579.

  25. Pii KH, Schou LH, Piil K, Jarden M. Current trends in patient and public involvement in cancer research: a systematic review. Health Expect. 2019;22(1):3–20.

  26. Reveiz L, Elias V, Terry RF, Alger J, Becerra-Posada F. Comparison of national health research priority-setting methods and characteristics in Latin America and the Caribbean, 2002–2012. Rev Panam Salud Publica. 2013;34(1):1–13.

  27. Rylance J, Pai M, Lienhardt C, Garner P. Priorities for tuberculosis research: a systematic review. Lancet Infect Dis. 2010;10(12):886–92.

  28. Tong A, Sautenet B, Chapman JR, Harper C, MacDonald P, Shackel N, et al. Research priority setting in organ transplantation: a systematic review. Transpl Int. 2017;30(4):327–43.

  30. WHO. Research priorities for the environment, agriculture and infectious diseases of poverty: technical report of the TDR Thematic Reference Group on Environment, Agriculture and Infectious Diseases of Poverty. Geneva: World Health Organization; 2013.

  31. Fergusson D, Glass KC, Hutton B, Shapiro S. Randomized controlled trials of aprotinin in cardiac surgery: could clinical equipoise have stopped the bleeding? Clin Trials. 2005;2(3):218–29; discussion 29–32.

  32. Chalmers I, Hedges LV, Cooper H. A brief history of research synthesis. Eval Health Prof. 2002;25(1):12–37.

  33. Warren J. Remarks on angina pectoris. N Engl J Med. 1812;1(1):1–11.

  34. Clarke M. Partially systematic thoughts on the history of systematic reviews. Syst Rev. 2018;7(1):176.

  35. Light RJ, Pillemer DB. Summing up: the science of reviewing research. Boston: Harvard University Press; 1984.

  36. Grady C. Science in the service of healing. Hastings Cent Rep. 1998;28(6):34–8.

  37. Freedman B. Scientific value and validity as ethical requirements for research: a proposed explication. IRB: Ethics & Human Research. 1987;9(6):7–10.

  38. Emanuel EJ, Wendler D, Grady C. What makes clinical research ethical? JAMA. 2000;283(20):2701–11.

  39. Andrews J. Prioritization criteria methodology for future research needs proposals within the effective health care program: PiCMe-prioritization criteria methods. AHRQ methods for effective health care. Rockville: Agency for Healthcare Research and Quality (US); 2013.

  40. Terry RF, Charles E, Purdy B, Sanford A. An analysis of research priority-setting at the World Health Organization - how mapping to a standard template allows for comparison between research priority-setting approaches. Health Res Policy Syst. 2018;16(1):116.

  41. Tan A, Nagraj SK, Nasser M, Sharma T, Kuchenmüller T. What do we know about evidence-informed priority setting processes to set population-level health-research agendas: an overview of reviews. Bull Natl Res Cent. 2022;46(1):6.

  42. Uttley L, Indave BI, Hyde C, White V, Lokuhetty D, Cree I. Invited commentary-WHO classification of tumours: how should tumors be classified? Expert consensus, systematic reviews or both? Int J Cancer. 2020;146(12):3516–21.

  43. Nasser M, Welch V, Ueffing E, Crowe S, Oliver S, Carlo R. Evidence in agenda setting: new directions for the Cochrane Collaboration. J Clin Epidemiol. 2013;66(5):469–71.



Acknowledgements

Thank you to Thomas Potrebny (T. P.) for performing the search and participating in the screening of search hits. Thank you to Maiken Bay Ravn (M. B. R.) for assisting with the data extraction.


Funding

This study is part of the European COST Action “EVBRES” (CA 17117).

Author information

Authors and Affiliations



Contributions

All of the authors made substantial contributions to the design of the work, approved the submitted version (and any substantially modified version involving their contribution to the study), and agreed both to be personally accountable for their own contributions and to ensure that questions related to the accuracy or integrity of any part of the work, even parts in which the author was not personally involved, are appropriately investigated, resolved, and the resolution documented in the literature. The authors read and approved the final manuscript.

Corresponding author

Correspondence to Hans Lund.

Ethics declarations

Ethics approval and consent to participate

Not applicable.

Consent for publication

Not applicable.

Competing interests

The authors declare that they have no competing interests.

Additional information

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Supplementary Information

Additional file 1.

The filled-in PRISMA checklist.

Additional file 2.

The search strategy for MEDLINE OVID, Embase OVID, and CINAHL EBSCO.

Additional file 3.

Differences between protocol and paper.

Additional file 4.

Reference list of the 12 included evidence syntheses.

Additional file 5.

Reference list of all 140 excluded studies based on full-text screening.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. The Creative Commons Public Domain Dedication waiver applies to the data made available in this article, unless otherwise stated in a credit line to the data.

About this article

Cite this article

Lund, H., Tang, L., Poulsen, I. et al. Lack of systematicity in research prioritisation processes — a scoping review of evidence syntheses. Syst Rev 11, 277 (2022).
