Items of guidance | Subgroups of items & N of papers | Narrative summary of extracted data |
---|---|---|
Criteria/rationale for conducting LSR | Rationale (N = 10/17) | • High prevalence of the condition/RQ [13, 15] • Existing results are likely to change [3, 15] • Priority for decision-making [3, 20, 22, 26, 27, 29] • Low certainty of evidence or rapidly emerging evidence [3, 15, 18, 21, 26, 27, 29] |
Inclusion criteria | Emerging change (N = 1/17) | • Adaptation is needed if inclusion criteria change [3] |
| | Re-evaluate (N = 2/17) | • Based on the evolving quality of evidence and a new understanding of context, with the involvement of experts with different expertise [20] • Identify and re-define the most relevant RQs [13] |
Search | Frequency (N = 8/17) | • Set up auto alerts to provide a regular feed of new citations [14] (see the alert sketch after this table) • Continuous search (e.g., varying between weekly and monthly) [1, 3, 13, 14, 16, 19, 28, 29] |
| | Database (N = 2/17) | • Bibliographic databases, clinical trials registries, gray literature [3, 14] |
| | Who (N = 1/17) | • Information specialists or librarians, using technological enablers [3] |
| | Screening tool (N = 10/17) | • Computer-supported & automated [3, 13, 14, 15, 17, 19, 26, 27, 28, 29] • Continuous database search with push notification [25, 26] • Guidance on eligibility: machine-learning classifier, crowdsourced inclusion decisions [25] (see the screening sketch after this table) |
Data extraction | Frequency (N = 3/17) | • Continuous search (trigger-dependent) [1] • Immediately after study identification [22] • Once new evidence has been identified for inclusion, the update process including data extraction starts [29] |
| | Who (N = 1/17) | • Machine-learning information-extraction systems [25] • Linkage of existing structured data sources (e.g., clinical trials registries) [25] (see the registry sketch after this table) |
| | How (N = 6/17) | • AI, machine learning, and automated structured data [3, 13, 15, 26, 29] |
Quality & bias assessment | Frequency (N = 2/17) | • Regular updating, at a defined time interval [3] • Once new evidence has been identified for inclusion, the update process including RoB assessment starts [29] |
| | Who (N = 0/17)ᵃ | |
| | How (N = 2/17) | • Machine learning–assisted RoB tools (e.g., RobotReviewer) [25] • AI-assisted tools [26] |
Data synthesis with meta-analysis (if applicable) | Frequency (N = 5/17) | • Immediately after new study inclusion [22, 24] • When deciding to update [14], on a continuous basis [1] • Once new evidence has been identified for inclusion, the update process including data synthesis starts [29] |
| | Who (N = 1/17) | • People responsible for performing the initial evidence synthesis [21] |
| | How (N = 5/17) | • AI, e.g., automatic text generation tools [3] • Error controls, e.g., by trial sequential analysis [24, 29], sequential methods, or a Bayesian framework [1] (see the sequential-monitoring sketch after this table) • Follow the description of the planned statistical approach to update a meta-analysis [14] |
Certainty of the evidence assessment | Frequency (N = 1/17) | • Regular updating [3] |
| | Who (N = 0/17)ᵃ | |
Authorship changes | Authorship (N = 4/17) | • Authorship updated for each new review version, according to contribution [1, 3] • Each group member's contribution was assessed as sufficient for authorship (meeting ICMJE criteria) or not [14, 29] |
Ongoing method support | Method support (N = 2/17) | • Involvement of different methodological expertise [20] • Team of clinicians, researchers, and graduate students with SR expertise [29] |
Funding | Funding (N = 4/17) | • Funding impacts the ability to maintain an LSR [3] • Direct funding for personnel [19]; a consistent flow of funding to research groups is needed [13, 16] |
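The auto-alert guidance above [14] can be operationalized with a small scheduled script. Below is a minimal sketch, assuming the team polls PubMed through the NCBI E-utilities `esearch` endpoint; the query string, seven-day window, and scheduling mechanism are illustrative assumptions, not details taken from the cited papers.

```python
# Minimal sketch: weekly PubMed alert via NCBI E-utilities (assumed workflow).
import requests

EUTILS = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi"

def fetch_new_pmids(query: str, days_back: int = 7) -> list[str]:
    """Return PMIDs of records that entered PubMed in the last `days_back` days."""
    params = {
        "db": "pubmed",
        "term": query,          # the review's search strategy (placeholder here)
        "reldate": days_back,   # restrict to recently added records
        "datetype": "edat",     # filter on Entrez (database entry) date
        "retmax": 500,
        "retmode": "json",
    }
    resp = requests.get(EUTILS, params=params, timeout=30)
    resp.raise_for_status()
    return resp.json()["esearchresult"]["idlist"]

if __name__ == "__main__":
    # Hypothetical query; run from a weekly cron job or CI schedule.
    pmids = fetch_new_pmids('"living systematic review" AND methods')
    print(f"{len(pmids)} new citations to screen")
```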
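For the machine-learning screening classifier cited above [25], one common pattern is to train on prior human include/exclude decisions and rank newly retrieved records for review. A minimal sketch with scikit-learn, using hypothetical records and labels:

```python
# Minimal sketch: ranking new citations by predicted inclusion probability.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical previously screened records: title+abstract text and the
# human decision (1 = included at title/abstract stage, 0 = excluded).
screened_texts = [
    "Living systematic review of antiviral therapy for covid-19 ...",
    "Case report of a rare dermatological condition ...",
]
labels = [1, 0]

ranker = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2), min_df=1),
    LogisticRegression(max_iter=1000),
)
ranker.fit(screened_texts, labels)

# Rank the latest search-alert results so reviewers see likely includes first.
new_texts = ["Randomized trial of antiviral therapy in hospitalized adults ..."]
scores = ranker.predict_proba(new_texts)[:, 1]
for text, score in sorted(zip(new_texts, scores), key=lambda p: -p[1]):
    print(f"{score:.2f}  {text[:60]}")
```

Ranking rather than auto-excluding keeps a human in the loop, which is consistent with the crowdsourced inclusion decisions described in the same guidance.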
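The linkage to structured registry data mentioned above [25] might look like the following sketch against the ClinicalTrials.gov v2 API. The search term is invented, and the field paths (`protocolSection.identificationModule`) reflect my reading of the public v2 schema; verify them against the current documentation before use.

```python
# Minimal sketch: pulling structured trial records from ClinicalTrials.gov (v2).
import requests

API = "https://clinicaltrials.gov/api/v2/studies"

def fetch_trials(term: str, page_size: int = 20) -> list[dict]:
    """Return {nct_id, title} dicts for trials matching `term` (hypothetical query)."""
    resp = requests.get(
        API, params={"query.term": term, "pageSize": page_size}, timeout=30
    )
    resp.raise_for_status()
    records = []
    for study in resp.json().get("studies", []):
        # Assumed v2 field paths; .get() keeps the sketch robust to schema drift.
        ident = study.get("protocolSection", {}).get("identificationModule", {})
        records.append({"nct_id": ident.get("nctId"),
                        "title": ident.get("briefTitle")})
    return records

if __name__ == "__main__":
    for rec in fetch_trials("remdesivir covid-19"):
        print(rec["nct_id"], rec["title"])
```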
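For the sequential error controls cited above [1, 24, 29], a toy illustration: a cumulative inverse-variance meta-analysis whose z statistic is compared against a simplified O'Brien–Fleming-type boundary instead of a fixed 1.96. The effect sizes, standard errors, and required information size are invented; real trial sequential analysis uses formal alpha-spending functions and heterogeneity-adjusted information sizes.

```python
# Minimal sketch: sequential monitoring of a continuously updated meta-analysis.
import math

# Hypothetical studies in order of inclusion: (effect estimate, standard error).
studies = [(-0.30, 0.20), (-0.25, 0.15), (-0.40, 0.18), (-0.20, 0.12)]
REQUIRED_INFO = 150.0   # assumed required information size (sum of 1/SE^2)
Z_ALPHA = 1.96          # two-sided alpha = 0.05

weight_sum = 0.0
weighted_effect_sum = 0.0
for k, (effect, se) in enumerate(studies, start=1):
    w = 1.0 / se**2                    # inverse-variance weight
    weight_sum += w
    weighted_effect_sum += w * effect
    pooled = weighted_effect_sum / weight_sum          # fixed-effect pooled estimate
    z = pooled / math.sqrt(1.0 / weight_sum)           # cumulative z statistic
    frac = min(weight_sum / REQUIRED_INFO, 1.0)        # information fraction
    # Simplified O'Brien-Fleming-type shape: stricter threshold early on.
    boundary = Z_ALPHA / math.sqrt(frac)
    verdict = "crosses boundary" if abs(z) > boundary else "continue updating"
    print(f"update {k}: pooled={pooled:+.3f} z={z:+.2f} "
          f"boundary=+/-{boundary:.2f} -> {verdict}")
```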