Table 3 System Usability Scale responses for each item, per tool^a

From: Performance and usability of machine learning for screening in systematic reviews: a comparative evaluation of three tools

| Item | Abstrackr | DistillerSR | RobotAnalyst |
|---|---|---|---|
| I think that I would like to use the tool frequently | 3.5 (1) | 4 (0.5) | 1 (1) |
| I found the tool to be unnecessarily complex | 2 (1) | 3.5 (1.25) | 3 (0.5) |
| I thought the tool was easy to use | 4 (1.25) | 2.5 (2) | 2 (1.5) |
| I think that I would need the support of a technical person to be able to use the tool | 1 (1) | 2.5 (1.25) | 4 (1.25) |
| I found the various functions in the tool were well integrated | 4 (1.25) | 3.5 (2.25) | 3 (1.25) |
| I thought there was too much inconsistency in the tool | 2 (0.25) | 1 (1.25) | 4 (1.25) |
| I would imagine that most people would learn to use the tool very quickly | 4.5 (1) | 3 (1.25) | 3 (0.25) |
| I found the tool very cumbersome to use | 2 (0.5) | 3 (1.25) | 5 (0) |
| I felt very confident using the tool | 4 (1) | 3.5 (1.25) | 2 (2.25) |
| I needed to learn a lot of things before I could get going with the tool | 2 (0.25) | 3 (0.5) | 2.5 (1) |
| Overall score (/100) | 79 (23) | 64 (31) | 31 (8) |
^a Likert-like scale: 1 = strongly disagree, 3 = neutral, and 5 = strongly agree. Values represent the median (interquartile range) of responses.
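The overall score out of 100 follows the standard System Usability Scale scoring rule (Brooke, 1996): odd, positively worded items contribute (response − 1), even, negatively worded items contribute (5 − response), and the sum is multiplied by 2.5. The sketch below illustrates that rule for a single hypothetical respondent; the figures are not taken from the study, and because the table reports medians across respondents, the overall medians cannot be recomputed directly from the per-item medians.

```python
# Minimal sketch of the standard SUS scoring rule, applied to one
# respondent's ten 1-5 Likert answers (hypothetical example data).
def sus_score(responses):
    """Return a 0-100 SUS score from ten item responses, item 1 first."""
    if len(responses) != 10:
        raise ValueError("SUS requires exactly 10 item responses")
    total = 0
    for i, r in enumerate(responses, start=1):
        # Odd items are positively worded (score - 1);
        # even items are negatively worded (5 - score).
        total += (r - 1) if i % 2 == 1 else (5 - r)
    return total * 2.5

# Hypothetical respondent, answers in the same item order as the table.
print(sus_score([4, 2, 4, 1, 4, 2, 5, 2, 4, 2]))  # -> 80.0
```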