
Critical appraisal in rapid systematic reviews of COVID-19 studies: implementation of the Quality Criteria Checklist (QCC)

Abstract

In this letter, we briefly describe how we selected and implemented the Quality Criteria Checklist (QCC) as a critical appraisal tool in rapid systematic reviews conducted to inform public health advice, guidance and policy during the COVID-19 pandemic. As these rapid reviews usually included a range of study designs, it was key to identify a single tool that would allow for reliable critical appraisal across most experimental and observational study designs and be applicable to a range of topics. After carefully considering a number of existing tools, the QCC was selected as it had good inter-rater agreement between three reviewers (Fleiss kappa coefficient 0.639) and was found to be easy and fast to apply once reviewers were familiar with the tool. The QCC consists of 10 questions, with sub-questions that specify how each should be applied to a specific study design. Four of these questions are considered critical (on selection bias, group comparability, intervention/exposure assessment and outcome assessment), and the rating of a study (high, medium or low methodological quality) depends on the responses to these four critical questions. Our results suggest that the QCC is an appropriate critical appraisal tool for assessing experimental and observational studies within COVID-19 rapid reviews. This study was done at pace during the COVID-19 pandemic; further reliability analyses should be conducted, and more research is needed to validate the QCC across a range of public health topics.


Introduction

The COVID-19 Rapid Evidence Service at the UK Health Security Agency (UKHSA; formerly Public Health England, PHE) was set up in April 2020 to support the use of evidence within UKHSA’s national COVID-19 response [1]. We have produced rapid reviews on a range of topics, including the effectiveness of face coverings in community settings, the effect of vaccination on transmission, the effectiveness of interventions to reduce transmission in adult and social care settings, and the risk of transmission from the deceased [2]. Our reviews focused mainly on evidence from the COVID-19 pandemic and, in the early stages, the available evidence came mostly from small descriptive epidemiological studies. As the pandemic progressed, evidence from other study designs, including cohort, case–control and interventional studies, emerged. As a result, most of our rapid reviews included a range of study designs.

In the first months of the pandemic, we critically appraised studies without formal tools, basing our assessments on study design and the main sources of bias, such as selection bias, confounding, and exposure and outcome assessment. However, as the body of evidence grew and our methods evolved, we moved to implement a formal critical appraisal tool. Given the pace at which our reviews were conducted, it was important to identify a single tool that could be used across a range of experimental and observational studies. In this letter, we describe the identification of existing critical appraisal tools, the assessment of the candidate tools, and the selection of the most relevant tool for use in our rapid systematic reviews. Finally, the characteristics of the selected tool are presented.

Identification of critical appraisal tools

We conducted electronic searches on 18 January 2021 in Ovid Medline and Embase using search terms such as ‘risk of bias tool’ and ‘critical appraisal tool’, with no date limits. Results were first screened on title for relevance by an Information Scientist, and 78 records reporting on critical appraisal were selected. These were then screened on title and abstract by a Reviewer; records were excluded if they reported on critical appraisal tools specific to one study design (such as the Cochrane risk-of-bias tool for randomised trials) or to one review question (for instance, one paper reported on tools for reviewing the acquired brain injury literature). Of the 78 records, 32 were screened on full text. We also screened the list of tools used by the National Institute for Health and Care Excellence (NICE) [3]. In addition, members of the COVID-19 Rapid Evidence Service (four Reviewers and one Information Scientist) identified any critical appraisal tools they considered relevant for this project.

Potentially relevant critical appraisal tools identified included checklists from the Joanna Briggs Institute (JBI) [4], the Critical Appraisal Skills Programme (CASP) [5] and the National Heart, Lung and Blood Institute (NHLBI) [6], as well as the Quality Criteria Checklist (QCC) [7] and the Mixed Methods Appraisal Tool (MMAT) [8]. The JBI [4], CASP [5] and NHLBI [6] tools are applicable to multiple study designs, including case series, but have one checklist per study design. They therefore did not meet our criterion of a single tool for assessing a range of study designs.

The MMAT [8, 9] divides studies into five main categories (qualitative research, randomised controlled trials, non-randomised studies, quantitative descriptive studies, and mixed methods studies), with one checklist for each category. The QCC has one unified checklist applicable to all study designs [10]. Both of these tools were therefore potentially of use within our rapid reviews.

Selection of a critical appraisal tool

The QCC is composed of 10 questions that require responses of ‘yes’, ‘no’, ‘unclear’ or ‘N/A’ (not applicable) [7, 10]. Each question includes sub-questions that identify important aspects to consider and specify how the question should be applied to a specific study design. The MMAT consists of five questions, which can be answered ‘yes’, ‘no’ or ‘can’t tell’ [8].

Three reviewers, with different levels of experience, independently assessed four individual studies with both the MMAT and the QCC. The four studies all reported on COVID-19 transmission but differed in design: one ecological study, one case report, one cross-sectional study and one retrospective cohort study. Agreement between the three reviewers was assessed using the Fleiss kappa coefficient, which showed good agreement for the QCC (k = 0.639) but only fair agreement for the MMAT (k = 0.261). Although the QCC contained more questions than the MMAT, its sub-questions led the reviewers to find it clearer and more comprehensive, and as a result they felt more confident using the QCC and found it easier to use. The time taken to complete an appraisal was similar for both tools, usually ranging from 30 min to 1 h depending on the complexity of the study, the reviewer’s previous knowledge of it (especially whether they had done the data extraction beforehand), and their overall experience.
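
For readers wishing to reproduce this type of agreement analysis, the sketch below shows one way to compute a Fleiss kappa coefficient for three raters in Python. The ratings matrix is hypothetical, and the use of the statsmodels library is our assumption for illustration, not a method reported in the original analysis.

```python
# Minimal sketch: Fleiss' kappa for three raters, using statsmodels.
# The ratings below are hypothetical (one row per appraisal item,
# one column per reviewer); they are not the study's data.
import numpy as np
from statsmodels.stats.inter_rater import aggregate_raters, fleiss_kappa

# Categories encoded as 0 = 'no', 1 = 'yes', 2 = 'unclear'.
ratings = np.array([
    [1, 1, 1],
    [1, 1, 2],
    [0, 0, 0],
    [1, 0, 1],
    [2, 2, 1],
    [1, 1, 1],
])

# Convert raw ratings (items x raters) into per-item category counts.
table, _ = aggregate_raters(ratings)
kappa = fleiss_kappa(table, method="fleiss")
# On Fleiss' own benchmarks, 0.40-0.75 is commonly read as fair-to-good agreement.
print(f"Fleiss' kappa: {kappa:.3f}")
```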

We agreed, based on these results, that the QCC tool would be used to critically appraise the studies included in all future COVID-19 rapid reviews. In a non-pandemic situation, it would have been preferable to compare agreement between tools using more studies, and to conduct qualitative research to record reviewers’ experience of using the tools in a more formal way. However, due to the speed of our work in this emergency pandemic context, this proved unfeasible. The clear difference in agreement and reviewers’ experience observed between tools enabled us to make the decision to use the QCC.

Characteristics of the QCC

The 10 questions of the QCC are presented in the supplementary material (see Box 1, Additional file 1). The QCC was developed by the Academy of Nutrition and Dietetics, and its questions are based on criteria and domains identified by the Agency for Healthcare Research and Quality (AHRQ) for assessing the methodological quality of a study [11]. The QCC was developed for nutrition-related studies, but the questions are also relevant to the COVID-19 studies assessed in our rapid reviews, as both fields rely mainly on observational epidemiological studies.

The rating of the methodological quality of a study is based on a system of critical questions rather than a numerical score. In the QCC, the four questions considered critical are question 2 on selection bias, question 3 on group comparability (not applicable to descriptive studies), question 6 on intervention/exposure assessment and question 7 on outcome assessment.

The rating terminology in the original QCC was ‘positive’, ‘neutral’ and ‘negative’, which we amended to ‘high’, ‘medium’ and ‘low’ (see Box 2, Additional file 1). A study is rated as ‘high’ methodological quality if the answer to all four critical questions is ‘yes’, plus ‘yes’ to at least one of the non-critical questions. A study is rated as ‘low’ methodological quality if 50% or more of the critical questions are answered ‘no’ (or if more than 50% of the non-critical questions are answered ‘no’). Otherwise, the study is rated as ‘medium’ methodological quality. For questions answered ‘unclear’, judgment on the rating is made on a case-by-case basis.
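
For illustration only, these rating rules can be expressed as a short function. The answer labels and question numbering follow Box 1 and Box 2 (Additional file 1), but the code is our sketch, not part of the QCC itself; in practice, ‘unclear’ answers are judged case by case by the reviewers rather than defaulted as here.

```python
# Minimal sketch of the QCC rating rules described above; illustrative only.
CRITICAL = {2, 3, 6, 7}  # selection bias, comparability, exposure, outcome

def qcc_rating(answers: dict) -> str:
    """Map QCC answers to 'high', 'medium' or 'low'.

    `answers` maps question number (1-10) to 'yes', 'no', 'unclear' or 'n/a';
    'n/a' answers (e.g. question 3 in descriptive studies) are ignored.
    """
    crit = [a for q, a in answers.items() if q in CRITICAL and a != "n/a"]
    noncrit = [a for q, a in answers.items() if q not in CRITICAL and a != "n/a"]

    # 'Low': >=50% of critical questions, or >50% of non-critical, answered 'no'.
    if crit and crit.count("no") / len(crit) >= 0.5:
        return "low"
    if noncrit and noncrit.count("no") / len(noncrit) > 0.5:
        return "low"
    # 'High': all critical questions 'yes', plus at least one non-critical 'yes'.
    if all(a == "yes" for a in crit) and "yes" in noncrit:
        return "high"
    # 'Unclear' answers are judged case by case; here we default to 'medium'.
    return "medium"

example = {1: "yes", 2: "yes", 3: "n/a", 4: "no", 5: "yes",
           6: "yes", 7: "yes", 8: "yes", 9: "unclear", 10: "yes"}
print(qcc_rating(example))  # -> 'high'
```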

As with any critical appraisal tool, the QCC tool should be used by reviewers with appropriate knowledge and training. In our rapid reviews, critical appraisal was usually conducted by one reviewer and checked by a second.

Discussion and conclusion

We started using the QCC in January 2021, and it has been used in all our rapid systematic reviews completed since then, including six published on our website and one in the BMJ [2, 12]. Although the tool was originally developed for assessing evidence in the nutrition and dietetics field, the reviewers who have used it in our COVID-19 rapid reviews have found it relevant and reliable. Once the reviewers were familiar with the tool, it was easy and fast to apply, an important consideration when conducting a rapid review. As anticipated, being able to use a single tool across a range of experimental and observational studies was paramount to conducting these reviews at pace.

Our results suggest that the QCC is an appropriate tool for appraising experimental and observational studies included within COVID-19 rapid reviews, although further reliability analyses should be conducted. More research is also needed to validate the QCC across a range of different public health issues.

Availability of data and materials

The dataset supporting the conclusions of this letter is available from the corresponding author on reasonable request.

Abbreviations

AHRQ:

Agency for Healthcare Research and Quality

CASP:

Critical Appraisal Skills Programme

JBI:

Joanna Briggs Institute

MMAT:

Mixed Methods Appraisal Tool

NHLBI:

National Heart, Lung and Blood Institute

NICE:

National Institute for Health and Care Excellence

N/A:

Not applicable

PHE:

Public Health England

QCC:

Quality Criteria Checklist

UKHSA:

UK Health Security Agency

References

  1. UKHSA COVID-19 Rapid Evidence Service. UKHSA COVID-19 rapid reviews. https://ukhsalibrary.koha-ptfs.co.uk/covid19rapidreviews/. Accessed 21 Oct 2022.

  2. UKHSA COVID-19 Rapid Evidence Service. Table of publications showing review questions, current stage of review and last search date. https://ukhsalibrary.koha-ptfs.co.uk/covid19rapidreviews/#Table. Accessed 21 Oct 2022.

  3. National Institute for Health and Care Excellence (NICE). Appendix H: appraisal checklists, evidence tables, GRADE and economic profiles. www.nice.org.uk/process/pmg20/resources/developing-nice-guidelines-the-manual-appendices-2549710189/chapter/appendix-h-appraisal-checklists-evidence-tables-grade-and-economic-profiles. Accessed 21 Oct 2022.

  4. Joanna Briggs Institute (JBI). Critical appraisal tools. https://jbi.global/critical-appraisal-tools. Accessed 21 Oct 2022.

  5. Critical Appraisal Skills Programme (CASP). CASP checklists. https://casp-uk.net/casp-tools-checklists/. Accessed 21 Oct 2022.

  6. National Heart, Lung, and Blood Institute (NHLBI). Study quality assessment tools. https://www.nhlbi.nih.gov/health-topics/study-quality-assessment-tools. Accessed 21 Oct 2022.

  7. Handu D, Moloney L, Wolfram T, Ziegler P, Acosta A, Steiber A. Academy of Nutrition and Dietetics methodology for conducting systematic reviews for the Evidence Analysis Library. J Acad Nutr Diet. 2016;116:311–8.


  8. Hong QN, Pluye P, Fàbregues S, Bartlett G, Boardman F, Cargo M, et al. Mixed Methods Appraisal Tool (MMAT) version 2018 - user guide 2018. http://mixedmethodsappraisaltoolpublic.pbworks.com/w/file/fetch/127916259/MMAT_2018_criteria-manual_2018-08-01_ENG.pdf. Accessed 21 Oct 2022.

  9. Hong QN, Fàbregues S, Bartlett G, Boardman F, Cargo M, Dagenais P, et al. The Mixed Methods Appraisal Tool (MMAT) version 2018 for information professionals and researchers. Educ Inf. 2018;34:285–91. https://doi.org/10.3233/EFI-180221.


  10. Academy of Nutrition and Dietetics. Evidence analysis manual: steps in the Academy evidence analysis process. 2016. www.andeal.org/evidence-analysis-manual. Accessed 21 Oct 2022.

  11. West S, King V, Carey TS, Lohr KN, McKoy N, Sutton SF, Lux L. Systems to rate the strength of scientific evidence. AHRQ Evidence Report Summaries. Agency for Healthcare Research and Quality (US). 2002.


  12. Duval D, Palmer JC, Tudge I, Pearce-Smith N, O’Connell E, Bennett A, Clark R. Long distance airborne transmission of SARS-CoV-2: rapid systematic review. BMJ. 2022;377:e068743.



Acknowledgements

The authors would like to thank Sean Harrison, Marialena Trivella and Caryl Beynon for their insights and comments. JCP is supported by a British Heart Foundation accelerator award (AA/18/7/34219) and works in a unit that is supported by the University of Bristol and UK Medical Research Council (MC_UU_00011/6) and was also supported by a secondment to the COVID-19 Rapid Evidence Service. JKSA (NIHR Academic Clinical Fellow, ACF-2017-25-007) is funded by the NIHR. The views expressed in this publication are those of the authors and not necessarily those of the NIHR, NHS, the UK Health Security Agency or the UK Department of Health and Social Care.

Funding

We did not receive any funding for this work.

Author information

Authors and Affiliations

Authors

Contributions

DD designed the study and drafted the manuscript, with input from RC and NPS; PR conducted the searches, with input from NPS; DD screened the results and selected the tools to assess, with input from NPS and JCP; DD, JCP and JKSA conducted the critical appraisal. All authors read and approved the final manuscript.

Corresponding author

Correspondence to Daphne Duval.

Ethics declarations

Ethics approval and consent to participate

Not applicable.

Consent for publication

Not applicable.

Competing interests

The authors declare that they have no competing interests.

Additional information

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Supplementary Information

Additional file 1.

The list of questions and the quality ratings of the QCC are provided.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/. The Creative Commons Public Domain Dedication waiver (http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated in a credit line to the data.


About this article


Cite this article

Duval, D., Pearce-Smith, N., Palmer, J.C. et al. Critical appraisal in rapid systematic reviews of COVID-19 studies: implementation of the Quality Criteria Checklist (QCC). Syst Rev 12, 55 (2023). https://doi.org/10.1186/s13643-023-02219-z

