Title
1. Identify the report as a systematic review, meta-analysis, or both [12]
2. Identify the report as a study of diagnostic accuracy using at least one measure of accuracy [28]
3. State whether the report is a comparative (one diagnostic test vs. another) or a non-comparative review [37, 38]

Introduction
4. State the scientific and clinical background, including the intended use and clinical role of the index test (e.g., triage test, add-on test, or replacement test) [39]
5. List the review objective(s) using PICO format (participant characteristics, intervention, comparison, outcome) [12]

Methods: protocol, eligibility, and search
6. Indicate if a review protocol exists, where it can be accessed, and, if available, the registration number [12]
7. Report deviations from the original protocol [31]
8. Report which outcomes are considered primary and secondary [31]
9. Describe all information sources and the date of the search [12]
10. Report restrictions to the search strategy (language, publication status, dates) [31]
11. Present the full electronic search strategy for at least one database, including any limits used, such that it could be repeated [12]
12. Report whether hand searching of reference lists was done [31]
13. Describe methods to ensure that overlapping patient populations were identified and accounted for [31]
14. List any search of the gray literature, including search of study registries [31]
15. Specify criteria for eligibility [12]

Methods: study selection and data collection
16. Report the process for selecting studies (i.e., screening, full-text eligibility) [12]
17. Provide an appendix with studies excluded during full-text screening, with reasons for exclusion [12]
18. Describe the method of data extraction from reports [12]
19. Report which data items were extracted from included studies [12]
20. Report how studies for which only a subgroup of participants is relevant to the review will be handled [31]
21. Report how “indeterminate” or “missing” results for either the index test or reference standard were dealt with in the analysis [40]
22. Report if and how any parameters beyond test accuracy will be evaluated (e.g., cost-effectiveness, mortality) [46]

Methods: primary study data items
23. For each included study, report the following data items [2, 12, 28]:
    (a) Patient demographic information (age, gender)
    (b) Target condition definition
    (c) Index test
    (d) Reference standard
    (e) Positivity thresholds
    (f) Blinding information
    (g) Clinical setting
    (h) Disease prevalence
    (i) Cross-tabulation of index test with reference standard (2 × 2 table; the standard layout is sketched after this section)
    (j) Funding sources

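As a companion to item 23(i), below is a minimal sketch of the standard 2 × 2 cross-tabulation and the accuracy measures derived from it; the cell counts TP, FP, FN, and TN are generic placeholders rather than data from any included study.

```latex
% Illustrative 2x2 cross-tabulation of index test vs. reference standard
\[
\begin{array}{l|cc}
 & \text{Reference standard } + & \text{Reference standard } - \\ \hline
\text{Index test } + & TP & FP \\
\text{Index test } - & FN & TN
\end{array}
\]
\[
\text{Sensitivity} = \frac{TP}{TP + FN}, \qquad
\text{Specificity} = \frac{TN}{TN + FP}, \qquad
\text{Prevalence} = \frac{TP + FN}{TP + FP + FN + TN}
\]
```
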
Methods: risk of bias and heterogeneity
24. Report how included individual studies will be assessed for methodological quality (e.g., QUADAS-2) [14]
25. Describe if and how “piloting” the risk of bias tool was done [14]
26. List criteria used for risk of bias ratings applied during the review [31]
27. Describe methods for study quality assessment [12]
28. Provide measures of consistency (e.g., tau²) for each meta-analysis (a computational sketch follows this section) [12]
29. Describe the test used to assess for publication bias [12]

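Item 28 asks for a measure of consistency (e.g., tau²) for each meta-analysis. Purely as an illustration, the sketch below computes a DerSimonian–Laird tau² and the companion I² statistic from per-study 2 × 2 counts, using the log diagnostic odds ratio as a univariate effect measure; the study counts and the choice of effect measure are assumptions for the example, and most DTA syntheses would instead estimate between-study variance within a hierarchical (bivariate) model (item 34).

```python
import math

def log_dor_and_var(tp, fp, fn, tn):
    """Log diagnostic odds ratio and its variance, with a 0.5
    continuity correction when any cell of the 2x2 table is zero."""
    if 0 in (tp, fp, fn, tn):
        tp, fp, fn, tn = (x + 0.5 for x in (tp, fp, fn, tn))
    log_dor = math.log((tp * tn) / (fp * fn))
    var = 1 / tp + 1 / fp + 1 / fn + 1 / tn
    return log_dor, var

def dersimonian_laird(studies):
    """Return (tau2, i2) for a list of (TP, FP, FN, TN) tuples."""
    effects, variances = zip(*(log_dor_and_var(*s) for s in studies))
    weights = [1 / v for v in variances]
    pooled = sum(w * y for w, y in zip(weights, effects)) / sum(weights)
    q = sum(w * (y - pooled) ** 2 for w, y in zip(weights, effects))
    df = len(studies) - 1
    c = sum(weights) - sum(w ** 2 for w in weights) / sum(weights)
    tau2 = max(0.0, (q - df) / c) if c > 0 else 0.0
    i2 = max(0.0, (q - df) / q) * 100 if q > 0 else 0.0
    return tau2, i2

# Hypothetical per-study 2x2 counts (TP, FP, FN, TN), for illustration only.
studies = [(45, 10, 5, 90), (30, 20, 10, 140), (60, 5, 15, 70)]
tau2, i2 = dersimonian_laird(studies)
print(f"tau^2 = {tau2:.3f}, I^2 = {i2:.1f}%")
```
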
Methods: summary measures and statistics
30. State the principal summary measures of diagnostic accuracy to be assessed (see the worked sketch after this section) [28]
31. Report whether summary measures were calculated on a per-patient or per-lesion basis [31]
32. Report pre-defined criteria for minimally acceptable test performance [42]
33. State how multiple readers of an index test were accounted for [17]
34. Report the statistical method used for meta-analysis (e.g., hierarchical model) [2]
35. State which software package and macros were used for meta-analysis [6]
36. Report any programming deviations made from published software packages [6]
37. If a comparative design, state the statistical methods used to compare test accuracy [28]
38. Describe methods of additional analyses (e.g., subgroup), indicating whether they were pre-specified [12]
39. Report how subgroup analyses were performed [31]
40. When performing meta-regression, report the form of the factors being explored (categorical vs. continuous) and the cut-off points used [41]

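To make items 30, 46, and 47 concrete, here is a minimal sketch, assuming hypothetical 2 × 2 counts, that turns per-study data into sensitivity and specificity with Wilson score confidence intervals; a published review would usually take its summary estimates and intervals from the meta-analytic model itself rather than from a calculation like this.

```python
from statistics import NormalDist

def wilson_ci(successes, total, level=0.95):
    """Wilson score interval for a binomial proportion."""
    z = NormalDist().inv_cdf(0.5 + level / 2)
    p = successes / total
    denom = 1 + z ** 2 / total
    centre = (p + z ** 2 / (2 * total)) / denom
    half = z * ((p * (1 - p) / total + z ** 2 / (4 * total ** 2)) ** 0.5) / denom
    return centre - half, centre + half

def accuracy_summary(tp, fp, fn, tn):
    """Per-study sensitivity and specificity with 95% Wilson CIs."""
    sens, sens_ci = tp / (tp + fn), wilson_ci(tp, tp + fn)
    spec, spec_ci = tn / (tn + fp), wilson_ci(tn, tn + fp)
    return sens, sens_ci, spec, spec_ci

# Hypothetical 2x2 counts (TP, FP, FN, TN) for one included study.
sens, sens_ci, spec, spec_ci = accuracy_summary(45, 10, 5, 90)
print(f"sensitivity {sens:.2f} (95% CI {sens_ci[0]:.2f}-{sens_ci[1]:.2f})")
print(f"specificity {spec:.2f} (95% CI {spec_ci[0]:.2f}-{spec_ci[1]:.2f})")
```
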
Results
41. Report studies from screening to inclusion, ideally with a flow diagram [12]
42. For each study, present characteristics for which data were extracted and provide the citations [12]
43. Present data on risk of bias of each study on a per-item or per-domain basis [12, 14, 35]
44. Present results of any assessment of publication bias [12]
45. Report any adverse events or harms from the index test or reference standard [31]
46. For each study, report 2 × 2 data (TP, FN, FP, TN) [43, 45]
47. For each study, report summary estimates of accuracy and confidence intervals [28]
48. Report each meta-analysis, including confidence intervals and measures of consistency (e.g., tau²) [12]
49. Graphically display results with an ROC curve or forest plots of sensitivity and specificity [44]
50. Report additional analyses (e.g., meta-regression) [12]
51. Report risk of bias in the synthesis (e.g., analyses stratified by risk of bias) [31]
52. Report a summary of findings table with main outcomes and issues regarding applicability of results [31]
53. Report “frequency” tables of 2 × 2 data demonstrating potential findings in a patient population based on the prevalence (see the worked example after this section) [45]

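Item 53's “frequency” table can be illustrated with a short sketch: given an assumed prevalence together with summary sensitivity and specificity, it reports the expected numbers of true and false positives and negatives in a notional cohort of 1,000 patients. The prevalence and accuracy values below are invented for the example.

```python
def frequency_table(prevalence, sensitivity, specificity, cohort=1000):
    """Expected TP/FN/TN/FP counts per `cohort` patients."""
    diseased = prevalence * cohort
    healthy = cohort - diseased
    return {
        "TP": round(sensitivity * diseased),
        "FN": round((1 - sensitivity) * diseased),
        "TN": round(specificity * healthy),
        "FP": round((1 - specificity) * healthy),
    }

# Hypothetical summary estimates: prevalence 20%, sensitivity 0.90, specificity 0.85.
for cell, n in frequency_table(0.20, 0.90, 0.85).items():
    print(f"{cell}: {n} per 1000 patients")
```
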
Discussion
54. Summarize findings, including implications for practice [12, 28]
55. Provide a general interpretation of the results in the context of other evidence and implications for future research [12]
56. For a comparative design, report whether conclusions were based on direct vs. indirect comparisons [37]
57. Discuss the implications of any missing data [31]
58. Discuss applicability concerns to different populations/settings [14, 45]
59. Discuss the quality of included studies when forming conclusions [36]
60. Account for any statistical heterogeneity when interpreting the results [31]
61. Discuss the potential impact of reporting biases [31]
62. Discuss the five GRADE considerations (study limitations, consistency of effect, imprecision, indirectness, and publication bias) [31]

Disclosure
63. Describe sources of funding for the review and the role of funders [12]
64. Report potential relevant conflicts of interest for review investigators [36]