Table 2 Proposed additional items for inclusion in a shared dataset for a classification experiment for automation of systematic review processes

From: A question of trust: can we build an evidence base to gain trust in systematic review automation technologies?

Column Item
1 Title of source—publication name, report name, etc.
2 Indexing data (e.g., PubMed identifier, ISBN, doi)
3 Author names
4 Publication venue (e.g., journal name)
5 Serial data (e.g., volume, issue, and page numbers)
6 A final classification field, i.e., the final category used in the systematic review. For example, if the dataset is designed for screening, this field might record inclusion status in the final systematic review (“yes” or “no”); if the classification task is bias assessment, it might record the bias judgement in the final systematic review (“low”, “high”, “unclear”).
7 Reviewer 1 classification, i.e., whether Reviewer 1 recommended inclusion of the article in the systematic review
8 Reviewer 1 notes field (free text), where notes were provided by the reviewer
9 Reviewer 1 supporting text from the manuscript, if extracted (optional)
10 Reviewer 2 classification, i.e., whether Reviewer 2 recommended inclusion of the article in the systematic review
11 Reviewer 2 notes field (free text), where notes were provided by the reviewer
12 Reviewer 2 supporting text from the manuscript, if extracted (optional)
13 Arbiter notes field (free text), where notes were provided by the arbiter
14 A training field (“yes” or “no”) indicating whether the entry was used to train human reviewers
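To make the proposed columns concrete, the table can be sketched as a single record type. This is an illustrative sketch only: the field names, types, and example values below are assumptions, since the table specifies conceptual columns rather than a machine-readable schema.

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class ReviewRecord:
    """One entry in a shared dataset for a classification experiment.

    Field names are hypothetical; comments map each field to the
    corresponding item in the table above.
    """
    title: str                        # title of source (publication/report name)
    index_id: str                     # indexing data (e.g., PMID, ISBN, DOI)
    authors: List[str]                # author names
    venue: str                        # publication venue (e.g., journal name)
    serial: str                       # serial data (volume, issue, pages)
    final_classification: str         # final category, e.g. "yes"/"no" for screening
    reviewer1_classification: str     # Reviewer 1 inclusion recommendation
    reviewer2_classification: str     # Reviewer 2 inclusion recommendation
    reviewer1_notes: Optional[str] = None            # free text, if provided
    reviewer1_supporting_text: Optional[str] = None  # optional extracted text
    reviewer2_notes: Optional[str] = None            # free text, if provided
    reviewer2_supporting_text: Optional[str] = None  # optional extracted text
    arbiter_notes: Optional[str] = None              # free text, if provided
    used_for_reviewer_training: bool = False         # training field ("yes"/"no")

# Hypothetical example record for a screening dataset.
record = ReviewRecord(
    title="Example trial report",
    index_id="PMID:12345678",
    authors=["Smith J", "Doe A"],
    venue="Journal of Examples",
    serial="12(3):45-56",
    final_classification="yes",
    reviewer1_classification="yes",
    reviewer2_classification="no",
    arbiter_notes="Included after discussion",
)
print(record.final_classification)
```

Making the optional reviewer notes and supporting-text fields nullable mirrors the table's distinction between mandatory classification columns and fields that are only populated when a reviewer or arbiter supplied them.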