The process yielded a list of 13 unique element categories for the methods used to prepare a research agenda. Initially, we hypothesised that four essential element categories should be included in all ideal research prioritisation process studies, while others could be added. These four essential categories were “experts” (no. 3), “stakeholder involvement” (no. 7), “literature review” (no. 11), and “ranking methods” (no. 12) (Table 1). Notably, only one study used all four of these categories [30] (included in the evidence synthesis by McGregor (2014) [9]). Furthermore, the four essential categories were used in other combinations in fewer than 4% of the included original studies. The most frequently used category (literature review, no. 11) was used in 16% of the cases, while the other three categories were each used in fewer than 8% of the cases. The 13 categories of elements identified covered a variety of elements used in the different methods reported; only two types of elements — surveys (no. 2, 21%) and workshops (no. 4, 23%) — were used in more than 20% of the cases.
We conducted this study to develop a framework for preparing research priority processes within rehabilitation. However, a preliminary search did not identify any evidence syntheses that specifically targeted rehabilitation. The methods used for preparing research priority processes in areas other than rehabilitation are thought to be similar to those used in rehabilitation. Hence, the scoping review was planned to identify and map methods across all areas of health.
Preparing a new study or a list of suggested studies (as in research agendas) without considering the existing evidence represents “a lack of scientific self-discipline that results in an inexcusable waste of public resources” (Sir Iain Chalmers [31]). Science has been regarded as a cumulative enterprise from its beginning. As Lord Rayleigh stated in 1884, “Two processes are thus at work side by side, the reception of new material and the digestion and assimilation of the old; and as both are essential, we may spare ourselves the discussion of their relative importance” (here cited from [32]). Considering that the digital revolution has fundamentally improved our present ability to “assimilate” earlier studies, the ideal already set out in the nineteenth century should be expected to be apparent in any planning of new studies or research agendas. One can imagine how thrilled earlier scientists would have been had they heard of our present capabilities [33, 34]; it is therefore inexcusable that, according to our findings, only 16% of all research prioritisation processes include a literature review. Hence, a systematic and transparent synthesis of earlier studies relevant to the topic of a research prioritisation process should be mandatory in all such processes, regardless of context or theme. As stated in 1984, “for science to be cumulative, an intermediate step between past and future research is necessary: synthesis of existing evidence” [35].
Furthermore, not only should a research agenda be based upon the systematic identification of research gaps, it should also systematically include the end-users’ perspectives, values, preferences, experiences, and concerns. In this context, we define end-users as those who use the research results and/or are affected by them. Thus, every proposal to initiate a new study should be tested for its relevance to, and need among, end-users [11, 13], thereby systematically establishing the prepared research agenda’s value to society [36,37,38]. End-users’ perspectives should be considered whenever a research prioritisation process is initiated. Category no. 7, stakeholder involvement, goes beyond end-users (including, for example, consumers and public consultations). However, only 8% of the research prioritisation processes included these groups of stakeholders. To ensure that all critical aspects of the prioritised list of research questions are considered, all key stakeholders must be involved, not just end-users. This includes involving scientific and clinical experts so as to avoid the implementation of impracticable, infeasible, and unrealisable project suggestions; yet only 8% of the research prioritisation processes did so.
The preparation of a useful research agenda within a specific scientific area should include a process for prioritising agenda items. While identified research questions need to be prioritised in view of limited resources, the question is how this prioritisation should be conducted. In the list of categories (Table 1), several elements directly or indirectly include a form of prioritisation (Delphi approaches (no. 6), consensus (no. 9), and ranking methods (no. 12)). However, in all cases, the prioritisation is based mainly on an opaque process rooted in the opinions and experiences of those invited to participate in the research prioritisation process (with the James Lind Alliance (JLA) as an exception). Within health research, an ethical criterion could be formulated as a cornerstone of the prioritisation process: “Every major code of ethics concerning research with human subjects, from the Nuremberg Code to the present, has recognised that for clinical research to be ethically justifiable it must satisfy at least the following requirements: value, validity, and protection of the rights and welfare of participants” [36].
We acknowledge that research prioritisation processes should periodically be reviewed and updated [39], and, thus, that a transparent and systematic strategy should be applied to repeat the process. Even though several named methods were identified in this scoping review and in an earlier study [8], none of the named methods included all four of the element categories argued for here. The named methods are the ENHR, the CAM, the James Lind Alliance method, and the COHRED [1, 8].
Earlier related studies
Earlier studies have identified several limitations in research prioritisation processes. These studies emphasised the considerable lack of documentation of the process [1] and even of the transparency of the process [1, 2]. Two studies emphasised that most of the methods also lacked a systematic approach [1, 3], and one study showed that 78% of the processes lacked follow-up after the publication of the research agenda [9].
One study calculated how often a category was used in research priority settings within the WHO [40]. In contrast to our results, Terry et al. found that 86% of the identified research prioritisation processes used experts (compared to 8.4% of the studies in our research), and 52% used literature reviews (compared to 16.1% of the studies in our research). However, the data from Terry et al. covered a highly selected group of studies from the WHO’s technical units [40]. Another study included all of our systematic reviews and ten other reviews but made no attempt to identify how often specific categories had been used [41]. Two earlier studies reported how often a named method and three element categories were used [8, 9] (see Table 6). Another study pointed out the lack of end-user/stakeholder involvement in research prioritisation processes, and that such involvement is crucial to the research prioritisation process [1]. The involvement of end-users and other key stakeholders helps to answer questions related, for instance, to benefit, evidence, costs, efficiency, equity, equality, usefulness to a country’s economy, severity of disease, prevalence of disease, solidarity, protection of the vulnerable, and more [1]. An important topic is the relationship between the different categories. A recent study examined the importance of expert consensus versus the use of systematic reviews and showed that there is no clear answer as to which is more important [42].
Several studies argued against seeking a standard method for performing research prioritisation processes [1, 3]. According to one of the studies, “Because of the many different contexts for which priorities can be set, attempting to produce one best practice is not appropriate, as the optimal approach varies per exercise” [3]. This potentially lends support to our finding of many different methods being used. However, although context may change, certain key categories should always be included in any research prioritisation process. Hence, we suggest always including at least the four essential categories (no. 3, no. 7, no. 11, and no. 12 — see Table 1). Other items may also be considered relevant to any research prioritisation process regardless of context, such as the need to legitimise and document the process, procedures for revision and appeal, and leadership [1]. These items do not relate to identifying and prioritising research questions, but they are important prerequisites for the performance and dissemination of the final research agenda. Two studies made valuable recommendations for the entire process [3, 43], and, finally, one study argued that the whole research prioritisation process should, if possible, avoid the influence of political, economic, environmental, and idiosyncratic elements on the agenda [1]. Options for appeal, hearing, or revision will allow for change and adaptation in response to differing opinions [1].
Research prioritisation processes can strengthen the national health research system [3,4,5] and may help to better harmonise health research globally [3]. A systematic, transparent approach to research prioritisation is essential for a more transparent distribution of public and private health research funding [2, 3].
Strengths and limitations
Almost half of the evidence syntheses applied date limits to their searches. However, as more than 50% of the included original studies either came from searches without date limits or from searches dating back to 1990, we expect the results to be unaffected by the limited searches. Our own search ended in 2019, but as this study sought to illustrate how research prioritisation processes have been performed, rather than to recommend how to treat patients or how to conduct research prioritisation, we found no reason to carry out a new search. Furthermore, a later scoping review by Tan et al. (2022), whose search was conducted in May and June 2021, identified several reviews, including all of the reviews included in our review. Even in this later search, Tan et al. did not identify any recent evidence syntheses complying with our eligibility criteria that we could have included [41].
We identified the category of ranking (no. 12) as one of the four essential categories that we argue should be used in all research prioritisation processes. However, the category of consensus (no. 10) could also include a prioritisation process; counting both, 12% of the original studies had some form of ranking, rather than the 3% stated for no. 12 alone. Consensus processes are typically less transparent, however, and thus may not prioritise as transparently and systematically as an explicit ranking process.
It was impossible to identify which end-users had been involved in the research prioritisation process, as only one review provided this information and only for two types of end-users (health professionals and patients) [25].
We did not search specifically for grey literature, as we intended to identify evidence syntheses that included all kinds of studies, including grey literature. Among the 416 included original studies were reports that could be regarded as grey literature. For example, the only study that included all four of the essential categories we defined was a report from the WHO [30].
For the sake of transparency in the collapsing of the many different elements, we have provided a detailed description of the basis for the categories in Table 1. Furthermore, as our search was comprehensive and we managed to include 416 unique original studies identified through earlier evidence syntheses, we have provided a realistic picture of how research prioritisation processes within health research have been conducted over the past 25 to 30 years. Our analyses also clearly show the variation and diversity in the performance of these prioritisation processes.
Implications for research
A clearer understanding is needed of how the different categories have been used and of the reasons for so much diversity in research prioritisation processes; further studies evaluating earlier prioritisation processes are therefore needed to obtain in-depth knowledge of these critical processes. An evidence synthesis covering all of the original studies, together with an in-depth analysis of what occurred during the prioritisation process, is thus needed. In addition, surveys and qualitative studies, including in-depth text analyses and interviews with persons involved in the research prioritisation process, would be very beneficial. Most importantly, there is a great need for studies evaluating the impact of using the four essential categories we defined. Finally, there is also a need for studies assessing the impact of including legitimisation and/or documentation of the process, procedures for revisions and appeals, and leadership in the research prioritisation process.
Perspectives
Before mapping and analysing the results of this scoping review, we defined four essential categories for an optimal research prioritisation process, regardless of the topic or context of the prioritisation process. Our results show that very few studies used one or more of these four essential categories, and only one study used all four. Even though topic and context will change, these four categories should still be used. This needs to be promoted, and the impact of using these four categories should be examined further.