A comparison of two assessment tools used in overviews of systematic reviews: ROBIS versus AMSTAR-2

Background: AMSTAR-2 is a 16-item assessment tool used to check the quality of a systematic review and to establish whether the most important elements are reported. ROBIS is another assessment tool, designed to evaluate the level of bias present within a systematic review. Our objective was to compare and contrast both tools and to establish their inter-rater reliability and usability as part of two overviews of systematic reviews. Strictly speaking, one tool assesses methodological quality (AMSTAR-2) and the other assesses risk of bias (ROBIS), but there is considerable overlap between the tools in terms of the signalling questions.
Methods: Three reviewers independently assessed 31 systematic reviews using both tools. The inter-rater reliability of all sub-sections of each instrument (AMSTAR-2 and ROBIS) was calculated using Gwet's agreement coefficient (AC1 for unweighted analysis and AC2 for weighted analysis).
Results: Thirty-one systematic reviews were included. For AMSTAR-2, the median agreement for all questions was 0.61; eight of the 16 AMSTAR-2 questions had substantial agreement or higher (> 0.61). For ROBIS, the median agreement for all questions was also 0.61; eleven of the 24 ROBIS questions had substantial agreement or higher.
Conclusion: ROBIS is an effective tool for assessing risk of bias in systematic reviews, and AMSTAR-2 is an effective tool for assessing quality. The median agreement between raters for both tools was identical (0.61). Reviews that included a meta-analysis were easier to rate with ROBIS; however, further developmental work could improve its use in reviews without a formal synthesis. AMSTAR-2 was more straightforward to use; however, more response options would be beneficial.


Background
Systematic reviews have become a fundamental part of evidence-based medicine; they are considered the highest form of evidence as they synthesise all available evidence on a given topic [1]. Many will also combine data to give an overall effect estimate using a meta-analysis.
However, the quality and standard of reviews varies considerably. If this is not understood, or in some way established, the results of many reviews might be overstated. Quality assessment tools have been developed to assess such variation in standards.
One previously heavily cited tool is the Assessment of Multiple Systematic Reviews (AMSTAR) scale [2], which has been widely used since its development in 2007. This scale was shown to be both reliable and valid [3]. However, its design attracted criticism. Burda et al. [4] argued that AMSTAR lacked some key constructs, in particular confidence in the estimates of effect, and that it had no item to assess subgroup and sensitivity analyses. Further criticisms included the treatment of foreign-language papers as "grey literature" and the observation that items could often be partially but not fully met. In addition, items were not weighted evenly and there was no overall score, which was problematic when trying to compare scores. Thus, an upgraded version (AMSTAR-2) was developed in 2017. The new version promised to simplify the response categories; align the definition of research questions with the PICO (population, intervention, control group, outcome) framework; seek justification for the review authors' selection of study designs (randomised and non-randomised) for inclusion in systematic reviews; seek reasons for exclusion of studies from the review; and determine whether the review authors had made a sufficiently detailed assessment of risk of bias for the included studies and whether risk of bias was considered adequately during statistical pooling and when interpreting the results [5].
A second novel assessment tool that has undergone rigorous development was published in 2016 (Risk of Bias in Systematic reviews [ROBIS [6]]). It aimed to provide a thorough and robust assessment of the level of bias within the systematic review.

Description of the assessment tools

Assessment of multiple systematic reviews (AMSTAR-2)
The main aim of AMSTAR-2 is to assess the methodological quality of a review. It is made up of 16 items in total and has simpler response categories than the original AMSTAR. Some sections are considered by the authors to be critical domains, which can be used to determine an overall score (see Appendix, Table 12 for more information on the critical domains). AMSTAR-2 is intended for assessing reviews of effectiveness and can be applied to reviews of both randomised and non-randomised studies.

ROBIS tool
The main aim of the ROBIS tool is to evaluate the level of bias present within a systematic review. The tool is made up of three distinct phases. The first phase, which is optional, assesses the applicability of the review to the research question of interest. The second phase is made up of 20 items within four main domains: study eligibility criteria; identification and selection of studies; data collection and study appraisal; and synthesis and findings. This phase identifies concerns about the conduct of the review. Each domain has signalling questions and ends with a judgement of the level of concern for that domain (low, high or unclear). The third phase consists of three signalling questions that enable an overall risk of bias rating to be given. ROBIS has wide application and is intended for assessing reviews of effectiveness, diagnostic test accuracy, prognosis and aetiology [6].

Previous research
Due to the novelty of both tools, there is limited available literature comparing them; however, some work has been recently published.
One review team [7, 8] compared all three tools (AMSTAR, AMSTAR-2 and ROBIS), applying them to reviews that included both randomised and non-randomised trials. The inter-rater reliability between four raters across 30 systematic reviews was analysed. Minor differences were found between AMSTAR-2 and ROBIS in the assessment of systematic reviews including a mix of study types. On average, the inter-rater reliability (IRR) was higher for AMSTAR-2 than for ROBIS. The authors had assumed that scoring ROBIS would take more time in general, and it was always applied after AMSTAR-2, but in fact the mean time for scoring AMSTAR-2 was slightly higher than for ROBIS (18 vs. 16 min), with huge variation between the reviewers. They also reported that some signalling questions in ROBIS were judged to be very difficult to assess.

Aim
The overarching aim of our work is to add to the literature by making a further comparison of both assessment tools in two overviews of reviews. Our team had previously completed two overviews on complementary and alternative medicine (CAM) therapies for two hard-to-treat conditions. One overview evaluated systematic reviews of various CAM therapies for fibromyalgia (FM) [9], and the other evaluated systematic reviews of CAM therapies for infantile colic [10].

Objectives
Due to some of the challenges we experienced using both tools in our overview of reviews work, we planned a formal assessment of both tools by completing the following comparisons and evaluations:
1. To compare the content of the tools
2. To compare the percentage agreement (IRR)
3. To assess the usability/user experience of both tools

Methods
Two overviews of reviews were conducted by our team [9,10]. The first reviewed CAM for fibromyalgia and assessed the included reviews using both the original AMSTAR tool [2] and ROBIS [6]. This review was published in 2016, prior to the development and publication of AMSTAR-2 [5]. Here, we reported on 15 systematic reviews of CAM for fibromyalgia, published between 2003 and 2014 which assessed several CAM therapies. Eight of the reviews included a quantitative synthesis.
We subsequently completed a second overview of reviews, of CAM for infantile colic, published in 2019 [10]. Here, we used the new AMSTAR-2 tool alongside ROBIS. We reported on 16 systematic reviews of CAM for colic, published between 2011 and 2018. The reviews investigated several CAM therapies; 12 of the reviews included a quantitative synthesis.
We later returned to the fibromyalgia review papers and, for consistency, reassessed them all using the AMSTAR-2 scale. This resulted in a total of 31 reviews for comparison. The reviewers were not strict about the order in which the ratings were made.

Assessment of methodological quality/bias of the included reviews
Three reviewers (RP, VL, PD) independently assessed each systematic review using both tools. Any reported meta-analyses were checked by a statistician experienced in meta-analyses (CP). The final score was agreed after discussion between the authors.

Results
Our first objective was to compare the content of the tools (see Table 1), identifying any overlaps and discrepancies between the two scales. Overall, we found considerable overlap in the signalling questions. However, ROBIS does not assess whether there is a comprehensive list of studies (both included and excluded) or whether any conflicts of interest were declared (both at the individual trial level and for the review), as these are considered issues of methodological quality rather than bias; AMSTAR-2 does assess possible conflicts of interest, despite these being a potential source of bias. The section on synthesis, however, is given more in-depth consideration in the ROBIS tool.

Section 2: Comparison of the inter-rater reliability of the tools

AMSTAR-2
The consensus results for AMSTAR-2 of both the fibromyalgia and colic overviews can be found in Table 2. We report on 15 systematic reviews of CAM for fibromyalgia; all but one review [13] were rated as having critically low confidence in the results (see Appendix, Table 15 for scoring information). This was the only Cochrane review included in the FM overview. We also report on 16 systematic reviews of CAM for colic. Most were rated as having critically low confidence in the results, 4 were rated as low and 1 (a Cochrane review) was considered to have high confidence in the results. The comparison of the ratings for each review can be found in the Appendix (see Tables 9, 10, 13 and 14). There was a greater number of discrepancies between the overall risk of bias and quality ratings in the fibromyalgia reviews; the overall risk of bias/quality ratings were more consistent in the colic reviews.
Results of inter-rater reliability analysis for AMSTAR-2
A summary of the inter-rater reliability (IRR) for AMSTAR-2 can be found in Table 3. Seven questions that relate to critical domains were identified by Shea et al. [5]; more information about these domains can be found in the Appendix (Table 15).
Summary of the findings on inter-rater reliability
In total, 460 comparisons were included in the analysis for AMSTAR-2. The median agreement for all questions was 0.61. Eight of the 16 AMSTAR-2 questions had substantial agreement or higher. There was almost perfect agreement for questions 2 (did the report of the review contain an explicit statement that the review methods were established prior to the conduct of the review and did the report justify any significant deviations from the protocol?), 7 (did the review authors provide a list of excluded studies and justify the exclusions?) and 10 (did the review authors report on the sources of funding for the studies included in the review?). The lowest agreement was for question 14 (did the review authors provide a satisfactory explanation for, and discussion of, any heterogeneity observed in the results of the review?). Ratings were missing in 35 cases. The results are displayed in Fig. 1.
The AMSTAR-2 critical questions, in particular, seemed to have good agreement compared to the other questions. There was at least substantial agreement for all critical questions except question 13 which had moderate agreement. Questions 2 and 7 both had almost perfect agreement and had the highest agreement of all AMSTAR-2 questions.
Fig. 1 legend: Gwet's AC2 statistic was used for questions 2, 4, 7, 8 and 9, and Gwet's AC1 statistic for all other questions. The markers represent the Gwet's statistic and the error bars represent the 95% confidence intervals. The italicised data represent the median value for all questions.
Further information on the separate reviews can be found in the Appendix (Tables 7 and 11). The overall median IRR agreement for AMSTAR-2 questions for fibromyalgia is 0.65 and for colic is 0.60.

Summary of the ROBIS results
The consensus results for ROBIS for both the fibromyalgia and colic overviews can be found in Table 4. In domain 1 (which assessed any concerns regarding the specification of study eligibility criteria), 9 fibromyalgia reviews and 6 colic reviews achieved a low risk of bias rating. In domain 2 (which assessed concerns regarding the methods used to identify and/or select studies), 7 fibromyalgia reviews and 6 colic reviews achieved a low risk of bias rating.
Domain 3 assessed concerns regarding methods used to collect data and appraise studies; 7 fibromyalgia studies and 10 colic reviews achieved a low risk of bias rating overall.
With regard to domain 4 (which assessed concerns regarding the synthesis and findings), more variation in the fibromyalgia scores was found, whereas most colic reviews were rated as high risk of bias in this domain. The reviews that did not conduct a meta-analysis were hard to assess using ROBIS.
The final section provides a rating for the overall risk of bias of the reviews: 7 fibromyalgia reviews achieved a low rating, 6 a high rating and 2 were rated as unclear. Four colic reviews achieved a low rating, 4 an unclear rating and 8 a high rating.

Results of inter-rater reliability analysis for ROBIS
A summary of the inter-rater reliability for ROBIS can be found in Table 5. At least one rater said "no information" in 159 comparisons. Rater 1 used "no information" 73 times; rater 2, 50 times; and rater 3, 93 times. In 107 comparisons only one rater said "no information", and the raters all agreed in only 10 comparisons. "No information" was used most frequently for question 1.1 (did the review adhere to predefined objectives and eligibility criteria? 23 studies).
Figure legend: Gwet's AC2 statistic was used for the ROBIS questions (filled markers) and Gwet's AC1 statistic for the ROBIS domains (hollow markers). The error bars represent the 95% confidence intervals. The italicised data represent the median value for all ROBIS questions.
Further information on the separate reviews can be found in the appendix (Tables 8 and 12). The median IRR agreement for all ROBIS questions for FM is 0.55 and for colic is 0.63.

Section 3: Usability of the tools
All three raters felt AMSTAR-2 was more straightforward and user-friendly than ROBIS. This might be because completing AMSTAR-2 does not require expertise in systematic reviewing, just knowledge of trial design. Several issues arose from using the ROBIS tool, as it required more consideration to complete. Within each domain, each question had five possible responses (yes, probably yes, probably no, no, no information), although at times it was difficult to distinguish between yes/probably yes and no/probably no. It might also be more helpful to have a choice of "no concerns/minor concerns/major concerns/considerable concerns" instead of the "low/high/unclear" judgements currently used for the overall judgement of concerns at the end of each domain. Although there were perceived differences between reviewers in the answers to individual signalling questions, the overall ratings of the domains were more consistent. Overall, domains 1-3 were easier to follow and score.
The most difficult domain to score was domain 4, which covers "synthesis of evidence". This was reflected in the lowest agreement between raters (0.17). We found that this domain is currently better designed for a review with a meta-analysis than for one with a narrative synthesis. The ROBIS tool provides an overall sense of the risk of bias of the review. There is better coverage overall than AMSTAR-2 and more precision with the use of a final rating. From our observations only, higher quality reviews were quicker to appraise. In our analysis, the "no information" rating for ROBIS questions was treated as missing. The raters rarely agreed on when to use this rating; in most cases, when one rater reported "no information" for a ROBIS question, the other two raters gave a different rating.
Several issues arose from using AMSTAR-2. Sometimes the raters would have opted for a "partially yes" option when only a binary option (yes/no) was available (Q13, Q14, Q16). Also, some questions were ambiguous; in particular, Q3 asks if the authors explain their selection of study designs (e.g., use of RCTs/non-RCTs); some reviews merely report that they included RCTs rather than justifying this selection, which caused discrepancies between raters.
Also, some questions might elicit a different response depending on the outcome considered; e.g., for Q13 (whether risk of bias was discussed/interpreted within the results), the answer may vary depending on which of multiple outcomes is being referred to.
The raters also felt it would be helpful to have a formal space to add comments to justify their decision to help with discussions, as in the more ambiguous reviews; decisions were more open to interpretation. ROBIS, on the other hand, has a large section where the reviewer is expected to add selected text to support their decision.
Regarding completion timings, we were able to establish how long it took to complete both tools for one of the overviews (colic). There was little difference between raters 1 and 2 in the time taken to complete the two tools; in fact, it took rater 2 slightly longer to complete AMSTAR-2 than ROBIS, which is surprising considering the issues reported above. However, rater 3 took considerably longer to complete ROBIS than AMSTAR-2 (see Table 6).
Rater 3 was the most experienced reviewer and helped develop the ROBIS tool. They spent longer on bringing the evidence forward from the individual reviews into the ROBIS extraction form as recommended by the guidance document, whereas the other two raters only wrote cursory notes.
It is important to highlight that the ROBIS guidance document advises that the tool is aimed at experienced systematic reviewers and methodologists. We would agree with this recommendation but recognise that this is often not the case in many groups undertaking reviews.

Summary of findings
The median inter-rater reliability (IRR) agreement for both AMSTAR-2 and ROBIS questions was substantial: 50% of AMSTAR-2 questions and 46% of ROBIS questions had substantial agreement or higher. For AMSTAR-2, 460 comparisons were included in the analysis, and the median agreement for all questions was 0.61. For ROBIS, there were 734 comparisons considered across the 24 questions, and the median agreement for all questions was also 0.61. It is interesting that the median IRR agreement for both tools was 0.61, demonstrating a similar level of rating between the two scales.
Results were similar when conducting the analysis for fibromyalgia and colic reviews separately (see Appendix for the independent overview results). For fibromyalgia, the median IRR value was 0.65 for the AMSTAR-2 questions compared to 0.55 for the ROBIS questions. For the colic studies, AMSTAR-2 and ROBIS had a similar median (0.60 for AMSTAR-2 and 0.63 for ROBIS).
It must also be considered that the ROBIS questions include more categories than most of the AMSTAR-2 questions. Most AMSTAR-2 questions are binary. Interrater agreement tends to be lower when there are more categories, as there are more possibilities for disagreement. Similarly, ROBIS includes more questions than AMSTAR-2 which can also result in more disagreement. However, despite these differences, the median agreement was the same for the AMSTAR-2 and ROBIS questions.

Usability of the tools
Several issues arose when using the ROBIS tool as it required more consideration to complete, which could become problematic in a large review. All three raters felt AMSTAR-2 was more straightforward and user-friendly than ROBIS. This might be because completing AMSTAR-2 does not require expertise in systematic reviewing, just knowledge of trial design. AMSTAR-2 was considered quicker to work through than ROBIS, yet for two raters the median timings showed AMSTAR-2 actually taking slightly longer than ROBIS, although one rater did take considerably longer on ROBIS than on AMSTAR-2. All raters felt domain 4 of ROBIS was particularly difficult to complete if there was no meta-analysis. Domain 4 would benefit from further development in order to assess reviews without a meta-analysis, as in some ways it is biased against these types of reviews.
We were unable to directly compare our results with Pieper's work, as Fleiss' kappa ignores the order of the categories (when there are more than two categories); this is why we used Gwet's coefficients, which take the order into account and allow for "partial agreement". Also, Gwet's scores tend to be higher than Fleiss' scores in general, which makes comparisons difficult to conduct.
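To make the agreement statistic concrete, Gwet's unweighted AC1 for multiple raters can be sketched as below. This is an illustrative Python implementation based on the published formula, not the code used in our analysis; the function name and interface are our own. The weighted AC2 variant used for our ordinal questions additionally applies category weights to the observed-agreement and chance-agreement terms.

```python
from collections import Counter

def gwet_ac1(ratings, categories=None):
    """Gwet's AC1 chance-corrected agreement for multiple raters.

    `ratings` is a list of items; each item is a list of the categorical
    ratings given by the raters (missing ratings simply omitted).
    """
    if categories is None:
        categories = sorted({r for item in ratings for r in item})
    q = len(categories)
    pa_terms = []          # per-item pairwise agreement
    pi_sums = Counter()    # running sums of category proportions
    n = 0                  # items with at least one rating
    for item in ratings:
        ri = len(item)
        if ri == 0:
            continue
        n += 1
        counts = Counter(item)
        for k in categories:
            pi_sums[k] += counts[k] / ri
        if ri >= 2:
            # Proportion of agreeing rater pairs for this item.
            pa_terms.append(sum(c * (c - 1) for c in counts.values())
                            / (ri * (ri - 1)))
    pa = sum(pa_terms) / len(pa_terms)
    # Chance agreement from the average category prevalences.
    pe = sum((pi_sums[k] / n) * (1 - pi_sums[k] / n)
             for k in categories) / (q - 1)
    return (pa - pe) / (1 - pe)
```

For example, items on which all three raters give the same rating yield AC1 = 1, while systematic disagreement drives the coefficient towards (and below) zero.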
In Pieper et al.'s [7] study, ROBIS was always applied after AMSTAR-2, and the mean time for scoring AMSTAR-2 was slightly higher than for ROBIS (18 vs. 16 min), with huge variation between the reviewers, whereas in our study, the overall mean time (calculated for colic reviews only) was slightly higher for ROBIS.
Table 9 The risk of bias and study quality for each fibromyalgia review. When AMSTAR-2 is low, this should correspond to ROBIS being at high risk of bias. The italicised reviews show discrepancies between the overall rating of quality/bias.
Table 10 Comparison of the distribution of risk of bias and study quality for the fibromyalgia reviews.

Potential bias in the overview process
One author evaluated their own work using AMSTAR-2 and ROBIS (RP [9, 10]), although this work was also independently assessed by two other reviewers (VL, PD). In addition, one of the developers of ROBIS (PD) applied the ROBIS tool to assess the included reviews.
We had not planned to complete an IRR assessment of the two scales whilst completing these two overviews of reviews; therefore, we did not apply strict criteria to our assessment schedule, i.e., we did not apply the tools in any particular order. We also did not complete timings for some of our assessments in a systematic way.
Another issue is that we compared our ratings over time, i.e., a batch of five papers was discussed before the next batch was assessed; this is likely to have led to greater consistency between the raters over time, but our numbers were too small to check this.

Results of AMSTAR-2 and ROBIS: CAM for fibromyalgia
For the fibromyalgia reviews, the median agreement for all AMSTAR-2 questions was 0.65, and ratings from a reviewer were missing in 20 instances overall. Ten out of 24 (41.7%) questions for ROBIS had at least substantial agreement. Questions 3.1 (were efforts made to minimise error in data collection?) and 3.5 (were efforts made to minimise error in risk of bias assessment?) had almost perfect agreement. The median agreement for all ROBIS questions was 0.55. The agreement differed for each ROBIS domain, with substantial being the highest agreement; ratings were missing in 6 instances. The raters gave a rating of "no information" in 93 cases. In most of these cases (65), the other two raters gave a different rating. There were 5 instances where all reviewers reported "no information". The most common questions for "no information" were 1.1 (did the review adhere to predefined objectives and eligibility criteria? 13 times), 4.2 (were all pre-defined analyses reported or departures explained? 13 times) and 4.5 (were the findings robust, e.g. as demonstrated through funnel plot or sensitivity analyses? 11 times). See Tables 9 and 10.

Results of AMSTAR-2: CAM for colic
The inter-rater agreement between the three raters is shown in Table 11. Eight of 16 (50%) AMSTAR-2 questions had substantial agreement or higher. There was almost perfect agreement for questions 2 (did the report of the review contain an explicit statement that the review methods were established prior to the conduct of the review and did the report justify any significant deviations from the protocol?), 7 (did the review authors provide a list of excluded studies and justify the exclusions?) and 9 (did the review authors use a satisfactory technique for assessing the risk of bias (RoB) in individual studies that were included in the review?). The median score for all questions was 0.60. Ratings from a reviewer were missing in 15 instances overall.

Results of ROBIS: CAM for colic
The inter-rater agreement between the three raters is shown in Table 12. Thirteen of 24 (54.2%) ROBIS questions had substantial agreement or higher. There was almost perfect agreement for questions 3.1 (were efforts made to minimise error in data collection?) and 3.4 (was risk of bias (or methodological quality) formally assessed using appropriate criteria?). The median score for all questions was 0.63. The agreement differed for each ROBIS domain, with substantial being the highest agreement (for domain 3). The agreement for the overall risk of bias rating was moderate. Ratings from a reviewer were missing in 3 instances. There were 66 ratings of "no information"; in 3 instances the reviewers were all in agreement, and in 42 cases only one reviewer said "no information". The most common questions were 1.1 (did the review adhere to predefined objectives and eligibility criteria? 10 times), 3.5 (were efforts made to minimise error in risk of bias assessment? 8 times) and 4.2 (were all pre-defined analyses reported or departures explained? 9 times).
Abbreviations
CAM: Complementary and alternative medicine; AMSTAR: Assessment of Multiple Systematic Reviews; PICO: Population, intervention, control group, outcome; ROBIS: Risk of Bias in Systematic reviews; IRR: Inter-rater reliability.