Table 2 Qualitative analysis—themes, subthemes, and participants’ statements

From: Delphi survey on the most promising areas and methods to improve systematic reviews’ production and updating

Theme

Subthemes

Participants’ statements

The most important tools and methodological approaches

Existing tools and approaches that can improve efficiency

P9: Methods already exist (SUAMRI, What review is right for you, IEBHC Review Wizard)

P2: RevMan and most other organizations give templates to fill in details

P16: […] but there are a number of excellent tools being developed for this at the Institute for Evidence-Based Healthcare at Bond University in Australia—https://iebh.bond.edu.au/education-services/research-tools

P16: I know there are tools for this [De-duplicating], either standalone (e.g., Institute for Evidence-Based Practice at Bond University) or integrated into systematic review software—Covidence, EPPI-Reviewer

P11: [Screening full-texts] Already explored in some commercial platforms (e.g., Distiller, EPPI-Reviewer)

P16: This [Screening full-texts] is harder to automate at the moment, and is quite a manual task, so a productive area for research

P7: Using a program like Covidence or DistillerSR or EPPI-Reviewer to screen the search, perform a risk of bias [assessment], extract data—such a program will make sure that all parts of the process are accounted for

P32: This [Snowballing] is important, but great tools like Citation chaser already exist

P7: Use Google Translate whenever there is a non-English publication—Google Translate is only trustworthy from any language to English

P11: Several platforms for translation are already available (e.g., DeepL)

P16: Google Translate as a tool has increased dramatically in quality over the years, and guidance now recommends it can be used for study screening, if not final full-text translation for incorporation into the review

P14: Prioritisation of teaching how to use systematic review automation tools in training (e.g., JBI, Cochrane, other organisations)

P21: Regular and in-depth training is important

Tools and approaches that need to be developed

P2: Finding previous systematic reviews is straightforward (provided they are published or registered), hence I believe not much development and automation is required. However, if there is a platform wherein all the registered and published reviews are pulled together like a repository, it may benefit researchers

P16: Repositories for sharing translations might also be useful, reducing resource use through duplication of effort

P16: I have also heard discussions about possible platforms for sharing extracted data across reviews, which could reduce duplication

P2: Also, rather than searching in various databases separately, each database could be linked to one common search strategy, which I believe will save a lot of time

P32: One of the most time-consuming parts of a review is wading through irrelevant records made necessary by inconsistent indexing in databases and different syntax in different databases. Finding a way to run a search in one place that would automatically search all sources would be wonderful

P8: Machine learning is helping but even more could be done

P16: [Screening full-texts] Tools which make this easier, including summarising the results/reasons (e.g., Covidence, EPPI-Reviewer) are already useful

P27: This [Screening abstracts] is a very tedious and time-consuming task. Artificial intelligence or citizen science might help with that

P24: [Extracting data] More challenging to automate, but high value if it can be made to work

P13: I think that there are many nice tools already. But two areas which are very time-consuming and would benefit from new tools (in my opinion) are finding full text and extracting data

P2: Many researchers have to restrict the search to the English language because of resource constraints. If this area is developed, it will be very good

P12: This [Translating non-English studies] would be a game changer… If… it went beyond quantitative data, and qualitative researchers could be confident the 'meaning' of text was captured

Different areas and methods require a different level of automation

General

P24: I think the potential for automation to help is limited here [Project management]

P11: I am not sure if this step [Formulating the review question] could be automated

P27: Anything related to search obviously has a potential for automation, therefore worth researching

P12: [Screening abstracts] potentially the strongest area of contribution for smart automation

Full automation is not suitable for areas and methods that need complex human judgment

P32: The most time-consuming parts of the process are those that lend themselves best to some degree of automation—deduplication, screening, data extraction. The other parts, I feel, require expert human input especially where complex decisions need to be made

P22: […] the systematic review process is a multi-step process, nearly all of which require judgments. And you cannot automate judgments, and perhaps should not, even in the face of artificial intelligence (AI)

P6: [Formulating the review question] This is a piece of the endeavour that I can't see being done by a machine. Whether or not it is theoretically possible to find ways around this limitation, I doubt research on automation in this area will provide much

P5: [Writing the protocol] Some parts of the protocol follow rules and standards that can be supported by software and text templates, while other elements (e.g., study selection criteria, inclusion of non-randomized evidence) are complex questions, which have to be left to human minds

P5: [Writing up the review manuscript] Automatic links between text and key statistical findings are useful, but writing the introduction and discussion section requires a human mind

P32: [Constructing the search strategy] I don't know that automation will improve things; a more pressing issue is SRs conducted without input from an info specialist

P7: [Literature searching] Difficult to automate, always needs a human to make decisions, but crucial for a systematic review

P7: [Screening full-texts] I am afraid that human decisions are needed

P32: I am sceptical on automation here [Screening full-texts] especially in the social sciences where there is so much inconsistency on how studies are reported that it takes a human to find the relevant information

P7: [GRADE-ing] This process is dependent upon judgements; I can't imagine a meaningful automation or semi-automation

P8: [GRADE-ing] I do not think that GRADE decisions should be automatic as they require a lot of reflection and thought. There are already existing tools that are functional

P16: [Synthesizing data] I think this is a stage of the review where it's really productive and important for authors to spend time and energy to do this thoughtfully and appropriately—at this stage I think there are risks in trying to automate this, as judgement is needed before proceeding to synthesise data

P32: [Synthesizing data] With more complex data sets, having a human check statistics and conversions of different measures to a common effect estimate takes real skill and knowledge of stats, so I am sceptical about automating that process

P7: [Extracting data] My experience is that it is very demanding and includes a lot of decisions by humans, but a combination would be good

P3: [Extracting data] Semi-automation, perhaps?

Repetitive tasks and searching have strong potential for automation

P4: Researchers need to understand that automation should prevent them from having to do rote, complicated, repetitive tasks—thus freeing them up to do more interesting and critical tasks. That is, automation is a tool for them to make more of a difference, whether in evidence-based medicine or policy. It is not a replacement for them

P17: Searching and analysing relevance are the most likely places for automatization, perhaps also the data extraction

P23: AI helps reduce repetitive work

P27: Anything related to search obviously has a potential for automation, therefore worth researching. Especially in the area of qualitative evidence synthesis, we still have a lot to research and learn

Prioritization concerning future research of particular methods is crucial to improve efficiency

General

P13: I think that there are many nice tools already. But two areas which are very time-consuming and would benefit from new tools (in my opinion) are finding full text and extracting data

P3: [De-duplicating] Less important, as already fairly advanced

P27: [De-duplicating] Isn't that automated already?

Time-consuming methods require a higher priority in future research

P32: [Literature searching] One of the most time-consuming parts of a review is wading through irrelevant records made necessary by inconsistent indexing in databases and different syntax in different databases. Finding a way to run a search in one place that would automatically search all sources would be wonderful

P2: [Obtaining full-text] This step takes a lot of time, and thus, to save time, prioritization in future research is needed

P13: [Extracting data] This is the most time-consuming part where, to my knowledge, there are no good tools available

P16: [Extracting data] This is a very time-consuming process and existing tools can be challenging to use, especially for complex reviews with multi-component, highly variable interventions and a lot of variability in how outcomes are measured and reported. Ongoing improvement of data extraction tools would be great, as would semi-automation to assist in identifying and classifying relevant information and reduce author workload. I have also heard discussions about possible platforms for sharing extracted data across reviews, which could reduce duplication

Already available and reasonably developed methods

P32: [Writing the protocol] Guidance on protocols is widely available and experienced reviewers can write a quality protocol swiftly

P11: [Obtaining full-text] Already available in most of the commercial software for management of references

P6: While that [Translating non-English studies] could be extremely valuable in reviews, a lot of research and experimenting has already been done in this area, so prioritizing it in the context of reviews seems unlikely to produce additional benefits

Open-ended comments

General

P10: “We should develop methods to combine different study designs and generate evidence.”

P10: “Non-English databases are usually not included in the systematic reviews. For example, it is difficult to get access or translation facilities for Chinese databases. Thereby we are missing a huge chunk of information that could have an impact on the results of the systematic review.”

P24: “You cannot automate judgements, and perhaps should not, even in the face of artificial intelligence (AI)” “In a bit more ‘intellectual’ activities like RoB, synthesis and GRADE-ing I am sceptical towards the automatization” “I think one of the challenges when thinking about automation is that people tend to think (as in the case in the survey) of automation supplementing/replacing/assisting in existing human processes.”

P22: “The theoretical framework regarding the SR methodology is clear and valid. Available research shows inconsistent judgements on the risk of bias, methods around data synthesis that are not always appropriate, and we may question the assessment of certainty of evidence…We should start with this: identifying areas of SR methodology that have been shown to be inconsistent.”