Screening for Cervical Cancer: Protocol for Systematic Reviews to Inform Canadian Recommendations

Purpose. To inform recommendations by the Canadian Task Force on Preventive Health Care on cervical cancer screening by systematically reviewing evidence of: (a) the effectiveness; (b) test accuracy; (c) individuals' values and preferences; and (d) strategies aimed at improving screening rates. Methods. De novo reviews will be conducted to evaluate effectiveness and to assess values and preferences. For test accuracy and strategies to improve screening rates, we will integrate studies from existing systematic reviews with search updates to the present. Two Cochrane reviews will provide evidence of adverse pregnancy outcomes from the conservative management of cervical intraepithelial neoplasia. We will search Medline, Embase, and Cochrane Central (except for individuals' values and preferences, where Medline, Scopus, and EconLit will be searched) via peer-reviewed search strategies, and the reference lists of included studies and reviews. We will search ClinicalTrials.gov and the World Health Organization International Clinical Trials Registry Platform for ongoing trials. Two reviewers will screen potentially eligible studies and agree on those to include. Data will be extracted by one reviewer with verification by another. Two reviewers will independently assess risk of bias and reach consensus. Where possible and suitable, we will pool studies via meta-analysis. We will compare accuracy data per outcome and per comparison using the Rutter and Gatsonis hierarchical summary receiver operating characteristic model and report relative sensitivities and specificities. Findings on values and preferences will be synthesized using a narrative synthesis approach and thematic analysis, depending on study designs. Two reviewers will appraise the certainty of evidence for all outcomes using GRADE (Grading of Recommendations Assessment, Development and Evaluation) and come to consensus. Discussion.
The Task Force's 2013 guidance on cervical cancer screening focused on cytology. Since 2013, new studies using human papillomavirus tests for the detection of cervical cancer have been published that will improve our understanding of screening in primary care settings. This review will inform updated recommendations based on currently available studies and address key evidence gaps noted in our previous review. Systematic review registration: not registered.

The changing epidemiology of HPV infection following the introduction of prophylactic vaccines [31,32] has implications for the relative benefits and harms of screening, test accuracy, and patients' values and preferences [33]. In the context of high vaccine coverage over a sufficient period of time, the pre-cancerous lesions targeted by cervical screening tests may eventually become so rare that the harms from screening may outweigh its benefits [34]. Screening at the population level as the prevalence of cervical lesions diminishes may become very inefficient [33,34] and the predictive value of screening tests will be reduced [35]. Future screening guidelines will need to consider vaccine uptake and the prevalence of HPV infection. Depending on the local context, there may be interest in personalizing screening based on vaccination status [36].

Screening for Cervical Cancer
The initial test developed for use in cervical screening was the Papanicolaou (Pap) test. The test, which can detect precancerous abnormalities in cells collected from the cervix, was first introduced in Canadian centres as local trials in the late 1940s and 1950s [37]. More recently, in the 1990s, the strong causal association between persistent infection with high-risk HPV (hrHPV) types and cervical cancer led to the development of cervical screening tests that detect HPV DNA [38]. Randomized controlled trials (RCTs) have investigated the use of hrHPV tests, alone as the primary screening tool for cervical cancer [39][40][41][42], for co-testing with cytology [43][44][45][46], and followed by various forms of triage [47]. Long-term follow-up of women enrolled in these trials is ongoing; thus, evidence of the benefits and harms of hrHPV testing requires continuing review. Unlike the Pap test, cervical samples for the hrHPV test can be self-collected, which has the potential to reduce barriers to screening; however, in previous reviews authors have suggested that further evidence on the agreement of findings between self- and clinician-sampled testing is required before recommendations on the use of self-collected samples are made [48]. High-risk HPV tests may also have a lower specificity than cytology, resulting in higher rates of false positives and potentially unnecessary colposcopies [49]. Because hrHPV testing is not yet offered in many Canadian jurisdictions [30], its adoption within organized cervical screening programs would require changes to laboratory configuration, workflow, and human resources.
In Canada, all provinces except Quebec have organized cervical screening programs. In Quebec, as well as Nunavut, Yukon, and the Northwest Territories, opportunistic screening is offered by primary care providers (plans are underway to implement an organized screening program in the Yukon Territory) [30]. These programs screen eligible women who have no signs or symptoms of cervical cancer and are not at elevated risk for cervical cancer [50]. Five jurisdictions (Alberta, Saskatchewan, Manitoba, Ontario, and New Brunswick) use initial letters of invitation as a recruitment method for never-screened women [50]. The letters provide information on screening and eligibility and invite women to participate in screening [50]. In Newfoundland and Labrador, a letter of invitation is pending implementation, and other recruitment methods include generating a routine recall list for primary care providers [50]. In Nunavut, phone calls are used for recruitment. Other jurisdictions do not use standardized recruitment methods [50].
In eight jurisdictions (Nunavut, British Columbia, Alberta, Saskatchewan, Manitoba, Ontario, New Brunswick, and Newfoundland and Labrador), if the screening result is normal, participants and/or their primary care providers receive a recall telephone call and/or letter at a predefined interval [50]. In the case that a screening result is abnormal, the participant and/or their primary care provider is sent a letter of notification [50]. Individuals with abnormal results may have repeat cytology testing or HPV triage, or be referred directly for colposcopy for evaluation and biopsy; the exact pathway and algorithm vary by jurisdiction [50]. Those identified via follow-up testing as having precancerous lesions or ICC are referred for appropriate management.
Different guideline panels, and individual patients and providers, may make different choices for screening based in part on their values and preferences [51]. Preferences for or against a screening strategy are a consequence of the relative importance people place on the expected or experienced outcomes incurred [52]. Despite the anticipated benefits from cervical cancer screening, including the early detection and treatment of pre-cancerous lesions, potential harms exist, including frequent follow-up testing and invasive diagnostic procedures (e.g., biopsy, colposcopy), unnecessary treatment of false-positive results, and psychological harms associated with positive tests [53]. As many pre-cancerous lesions will never become clinically important over an individual's lifetime, overdiagnosis of such lesions is of concern for patients and providers as it can lead to unnecessary testing and treatment and the harms associated with these procedures.
In Canada, 74% of women aged 25 to 69 years receive a Pap test every three years [54]; however, some population subgroups, including Indigenous populations [55], individuals with very low socioeconomic status [56], individuals living in rural or remote communities, new immigrants, and other underserved groups [57], are more likely to be inadequately screened. Transgender individuals (e.g., female-to-male transgender men) have also been identified as a group at risk for inadequate cervical cancer screening [58]. There is a need to evaluate the effectiveness of interventions aimed at improving screening rates, especially among under- and never-screened populations.

Rationale and Scope of Systematic Review
At present, cervical cancer screening programs in provinces and territories use cytology-based screening methods using the Pap test. High-risk HPV testing is available in only six Canadian jurisdictions (Northwest Territories, British Columbia, Saskatchewan, Manitoba, Quebec, and Nova Scotia), and its use is often limited to special requests or specific clinics, or privately paid by individual patients [50]. In 2013, the Canadian Task Force on Preventive Health Care published a guideline on cervical cancer screening which recommended that women aged 25 to 69 years be screened every three years with Pap testing; women aged 24 years and younger not be routinely screened; and women aged 70 years or older, who have undergone adequate screening, not be screened [59]. Uptake of these recommendations across the country has been mixed, with most provinces and territories initiating screening at age 21 years, with the exception of British Columbia, Alberta, and Prince Edward Island, which have recently moved to follow the Task Force recommendation of starting at age 25 years [30,60].
The 2013 Task Force screening guidelines were limited to cytological screening for cervical cancer. At the time, the Task Force felt it was premature to make recommendations on the use of hrHPV testing due to the limited evidence identified; this was identified as a gap that should be addressed as more evidence became available. Since the release of the 2013 guideline, more recent international guidelines (including Australia, the United Kingdom, the Netherlands, and the United States) have provided recommendations on the use of hrHPV testing in cervical cancer screening [53,61-63]. New studies have also been published that are likely to improve our understanding of screening in primary care settings for cervical cancer. Thus, we will undertake several systematic reviews to inform an update of the 2013 Task Force guideline. Specifically, we aim to identify and synthesize evidence on: a. the effectiveness (benefits and harms) and comparative effectiveness of various cervical cancer screening strategies; b. the comparative accuracy of various screening tests and strategies; c. values and preferences for outcomes from cervical cancer screening; and d. the effectiveness of interventions aimed at improving screening rates in under-screened and never-screened individuals.

Systematic Review Conduct
The Evidence Review and Synthesis Centre (ERSC) at the University of Alberta (AG, JP, DK-L, BV, LH) will conduct the systematic reviews on behalf of the Task Force following the research methods outlined in the Task Force methods manual [64]. We will follow a pre-defined protocol, reported in accordance with current standards (Additional File 1) [65], as documented herein. During protocol development, a working group was formed consisting of Task Force members (DR, CK, AM, GTh, BDT), with input from clinical experts (JL, CP, DvN), and scientific support from the Global Health and Guidelines Division at the Public Health Agency of Canada (RS, GTr). The working group contributed to the development of the Key Questions (KQs) and PICOTS (population, intervention(s) or exposure(s), comparator(s), outcomes, timing, setting, and study design) elements. Work on the reviews has not yet begun.
Task Force members made the final decisions with regard to the KQs and PICOTS. Task Force members and clinical experts rated the proposed outcomes based on their importance for clinical decision-making, according to methods of Grading of Recommendations Assessment, Development and Evaluation (GRADE) [66]. Ratings by the clinical experts were solicited to ensure acceptable alignment with the views of Task Force working members (clinical decision-makers), but Task Force members determined the final ratings. Final critical outcomes (rated 7 or above on a 9-point scale) pertaining to the effectiveness and comparative effectiveness of screening included: the rate of ICC, cervical cancer mortality, all-cause mortality, the rate of CIN 2 and CIN 3, and overdiagnosis of CIN 2, CIN 3, and ICC. Final important outcomes (rated 4-6) for inclusion were: the number and rate of colposcopy and/or biopsy (or referral rate), adverse pregnancy-related outcomes from conservative management of CIN, and the false-positive rates for detecting CIN 2, CIN 3, and ICC. These outcomes are defined in Additional File 2. Other outcomes relevant to comparative accuracy, values, preferences, and the effectiveness of interventions to improve screening rates were selected by the Task Force working members in collaboration with the ERSC. The classification of benefit or harm for all outcomes will be based on the effects observed for different comparisons.
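The mapping from panel ratings on the 9-point GRADE importance scale to outcome categories can be sketched as follows (a minimal illustration; the function name is hypothetical, and the labelling of ratings below 4 as "limited importance" follows standard GRADE guidance rather than this protocol):

```python
def classify_outcome(median_rating: int) -> str:
    """Classify an outcome by its median panel rating on the 9-point
    GRADE importance scale: 7-9 critical, 4-6 important,
    1-3 of limited importance for decision-making."""
    if not 1 <= median_rating <= 9:
        raise ValueError("rating must be on the 1-9 scale")
    if median_rating >= 7:
        return "critical"
    if median_rating >= 4:
        return "important"
    return "limited importance"
```

Under this rule, cervical cancer mortality (rated 7 or above) is classified as critical, while the rate of colposcopy and/or biopsy (rated 4-6) is important.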
This version of the protocol was reviewed by the entire Task Force. Stakeholders (n = pending) reviewed a draft version of this protocol, and all comments were considered. Throughout the conduct of the systematic reviews, we will document any changes to the protocol (including timing), with justification. We will report on these within the final report. We will report our findings in accordance with available standards at the time of writing (i.e., the 2009 version [67] or an updated version of the PRISMA (Preferred Reporting Items for Systematic Reviews and Meta-analyses) Statement, should it become available prior to submission of the final report).

Key Questions and Analytical Framework
The Task Force has delineated five KQs to inform their recommendations, as follows:
KQ 1: What are the effectiveness (benefits and harms) and comparative effectiveness of different screening strategies for cervical cancer?
KQ 1a: Do the effectiveness and comparative effectiveness of different screening strategies for cervical cancer screening differ by age or by other population subgroups?
KQ 2: What is the comparative accuracy of screening tests for cervical cancer?
KQ 2a: Does the comparative accuracy of screening tests differ by age or by HPV vaccination status?
KQ 3: What are the adverse pregnancy outcomes associated with conservative management of CIN? (NB: will not require a new or updated systematic review)
KQ 4: What is the relative importance individuals place on the potential outcomes from screening for cervical cancer?
KQ 5: What is the effectiveness of primary care-based interventions to increase rates of cervical cancer screening for under- and never-screened individuals?
For the purpose of these reviews, we will consider effectiveness to include both benefits and harms. The analytical framework in Figure 2 shows the population (and population subgroups), KQs, and outcomes in the context of the screening, diagnosis, management, and treatment modalities under consideration.
The systematic reviews for KQs 1 and 2 focus on the effectiveness and comparative effectiveness (KQ 1) and the comparative accuracy (KQ 2) of various screening strategies. The intent for KQ 2 is to fill gaps for the outcomes from KQ 1. The main goal is to compare detection rates and harms (i.e., false positives, false negatives) between different screening strategies, and to provide indirect evidence for KQ 1 with respect to false-positive rates, as we expect evidence for this outcome to be of low or very low certainty from studies contributing to KQ 1. It may also provide information about the comparative accuracy of screening tests not studied in KQ 1 to help determine if these may be appropriate to use in practice in the absence of KQ 1 evidence. KQ 3 focuses on the adverse pregnancy outcomes (the only direct treatment or management harm rated as important by the working group) associated with conservative management of CIN 2 and CIN 3. The intent of this KQ is to fill gaps for adverse pregnancy outcomes identified in the studies for KQ 1. The rationale for a separate KQ is that adverse pregnancy outcomes are unlikely to be reported in studies focusing primarily on screening effectiveness. In the United States Preventive Services Task Force (USPSTF) 2018 review of screening for cervical cancer with hrHPV testing [49], none of the included screening trials (n = 8) [39][40][41][42][43][44][45][46][68][69][70][71][72][73][74][75][76][77][78] reported on adverse pregnancy outcomes.
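The accuracy quantities compared under KQ 2 derive from the standard 2×2 cross-classification of screening test results against the reference standard (e.g., histologically confirmed CIN 2+). A minimal sketch with hypothetical counts (the function name is illustrative):

```python
def accuracy_measures(tp: int, fp: int, fn: int, tn: int) -> dict:
    """Standard test-accuracy measures from a 2x2 table of screening
    results cross-classified against the reference standard."""
    sensitivity = tp / (tp + fn)   # proportion of true cases detected
    specificity = tn / (tn + fp)   # proportion of non-cases correctly negative
    return {
        "sensitivity": sensitivity,
        "specificity": specificity,
        # the false-positive rate drives unnecessary colposcopy referrals
        "false_positive_rate": 1 - specificity,
        "ppv": tp / (tp + fp),     # positive predictive value
        "npv": tn / (tn + fn),     # negative predictive value
    }
```

For example, with 90 true positives, 10 false negatives, 80 true negatives, and 20 false positives, sensitivity is 0.90 and specificity 0.80; the relative sensitivity of two tests is simply the ratio of such estimates, though the protocol's pooled comparisons will use the hierarchical Rutter and Gatsonis model rather than naive ratios.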
The ERSC will not undertake de novo searches or syntheses for KQ 3. During protocol development, a research librarian undertook a comprehensive search of existing systematic reviews published between 2014 and March 2019. These systematic reviews were scrutinized for suitability, with careful consideration for the comprehensiveness of their searches, scope (i.e., ability to capture the studies of interest), and reporting quality. We identified two Cochrane systematic reviews, published in 2015 [79] and 2017 [80], that answer our KQ 3. The Cochrane Review Group has confirmed that both reviews are presently being updated to incorporate the latest evidence, and these reviews will be used by the Task Force.
Of the two Cochrane reviews, the review by Kyrgiou et al. published in 2015 [79] synthesized evidence on fertility and early pregnancy outcomes (i.e., pregnancy rates, miscarriage rates, ectopic pregnancies) following conservative excisional or ablative management of CIN. Fifteen observational studies (>2 million participants) were included. The review by Kyrgiou et al. published in 2017 synthesized evidence on the obstetric outcomes (i.e., preterm birth, low birth weight, cervical cerclage) following conservative excisional or ablative management of CIN. Sixty-nine observational studies (>6 million participants) were included. Due to the observational study design of the available evidence, both reviews reported very low to low certainty evidence for the effects of the interventions for our outcomes of interest. Given that it would be unethical to conduct RCTs to address this question (i.e., where women with CIN would be randomized to a non-treatment control group), the probability of identifying a newly published trial that will improve the certainty of evidence for the outcomes of interest is assessed as virtually zero. Additional observational evidence is also unlikely to improve the certainty of evidence, but could impact the pooled effect estimates. As the two systematic reviews are presently undergoing updates, to avoid duplication of research effort the Task Force will rely on these two reviews to inform KQ 3. The ERSC will review, contextualize, and summarize the available evidence (i.e., in text, tables, and figures) from the two Cochrane systematic reviews to facilitate interpretation by the Task Force during guideline development.
The review for KQ 4 will synthesize evidence of the relative importance individuals place on the outcomes from cervical cancer screening [81,82], including all critical and important outcomes as defined for KQ 1 (Table 2). It will also provide information to the Task Force on whether there is important uncertainty about or variability in how much people value the main outcomes [81].
Given that certain Canadian sub-populations remain under-screened or never-screened despite recommendations for cervical screening, the review for KQ 5 will inform primary care interventions that may improve screening rates. Tables 2 to 5 show the PICOTS elements for KQs 1, 2, 4, and 5. These are described in detail in Additional File 3. Given that we will not undertake de novo synthesis for KQ 3, we have not included PICOTS elements for this KQ.

Efficiencies by Integrating or Using Existing Systematic Reviews
Where possible, we will either update one or more existing systematic reviews or, if we are not aware of systematic reviews that are good candidates for an update, integrate studies from existing systematic reviews [83]. When available, we may use existing high-quality, up-to-date systematic reviews as is, without de novo searches or syntheses, if they align well with the scope of our KQs and PICOTS elements (fully or in part, i.e., for one of multiple eligible comparisons, as noted above for KQ 3). In this case, we will contextualize and summarize the available evidence and perform certainty of evidence appraisals (based on information reported in the review) as needed to facilitate interpretation by the Task Force during guideline development. For the integration approach (detailed in Additional File 3), we will identify relevant studies in multiple previously published systematic reviews and develop and run update searches to the present to identify contemporary studies not included in earlier reviews. The existing reviews will be used primarily to locate primary studies; although we may rely on reporting by reviews for some data extraction or risk of bias assessments, in all cases we will re-analyze the data using the primary studies and assess the overall certainty of the evidence. To identify potential candidate reviews, we undertook a comprehensive search for relevant systematic reviews, published between 2014 and March 2019, and scrutinized each for suitability. Important considerations included the comprehensiveness of the original searches, the scope of the review (i.e., ability to capture the studies of interest), and the reporting quality. Details of the reviews that we will use as a source of studies are in Additional File 4.

Literature Searches
We developed all database searches in collaboration with our research librarian. The searches, available in Additional File 5 for KQs 1, 2, and 4, have been peer reviewed by an external librarian according to PRESS (Peer Review of Electronic Search Strategies) guidance [84]. The searches for KQ 5 will be updated from previous reviews [48,85,86], with adaptations as needed. Unless otherwise indicated, all searches will be limited to studies published in English or French. We will not apply geographic filters to any of the searches. For KQ 1, we will contact five content experts by e-mail to inquire about their knowledge of additional relevant studies. We will contact each expert twice, two weeks apart, before ceasing contact if we do not receive a reply. In all cases we will also search the reference lists of the included studies and of relevant systematic reviews identified during screening for additional records. We will search ClinicalTrials.gov and the World Health Organization International Clinical Trials Registry Platform for ongoing trials. Although we will exclude studies available only as conference proceedings, letters, or abstracts, we will contact the corresponding authors twice, two weeks apart, to ask about relevant full reports before ceasing contact if we do not receive a reply. The following are details of the strategies specific to each KQ. The results of the electronic database searches for all KQs will ultimately be combined into a single database (removing duplicates) to create efficiencies in screening (due to inevitable overlap across the searches).
For KQ 1, we will search Ovid Medline (1946-), Ovid Embase (1996-), and Cochrane Central (1996-) from 1995 onward using MeSH terms and keywords for cervical cancer and screening, and study design filters for RCTs and observational studies. We have chosen to develop and run de novo searches rather than updating the searches from the 2013 CTFPHC guideline review because that review did not include the incidence of CIN as an outcome, nor screening with hrHPV.
For KQ 2, we will integrate studies from the 2019 health technology assessment (HTA) on HPV testing for primary cervical cancer screening by the Canadian Agency for Drugs and Technologies in Health (CADTH) [48], and the 2018 systematic review by Arbyn et al. [85] on the comparative accuracy of self- versus clinician-sampled hrHPV tests. We will update the searches for the CADTH review in Ovid Medline (1946-), Ovid Embase (1996-), and Cochrane Central (1996-) from 2016 onward to identify studies published after the last date searched (March 2017 for the full search), undertaking edits to the searches as necessary (e.g., removing concepts that are not relevant to our KQ 2). We will update the searches for the Arbyn et al. (2018) review in the same databases from 2017 onward (last date searched, April 2018). We anticipate the possibility that an update to the systematic review by Arbyn et al. may become available before we undertake our review for KQ 2. If such is the case, we will use the updated review as is, without de novo searches or syntheses, for the comparison of self- and clinician-sampled hrHPV testing.
CADTH sought to include systematic reviews and subsequently searched for primary studies published after the most recent systematic review. The inclusion of systematic reviews is not consistent with standard Task Force procedures for evidence synthesis [64]. Thus, we will supplement the updated database searches by screening the reference lists of the systematic reviews included in the CADTH HTA to identify the primary studies published prior to 2016.
For KQ 5, we will integrate studies (eligible for our review) from the 2011 Cochrane systematic review by Everett et al. on interventions to encourage cervical cancer screening uptake [86], and the 2018 systematic review by Arbyn et al. on hrHPV self-screening compared with reminders to encourage cervical cancer screening rates [85]. The Cochrane review by Everett et al. included studies of interventions targeted at women to improve cervical cancer screening rates, compared with no intervention or routine care [86]. We will update the Ovid Medline (1946-), Ovid Embase (1996-), and Cochrane Central (1996-) searches from 2008 onward to identify contemporary studies not included in the Cochrane review, undertaking edits to the searches as necessary. We expect the update search to capture studies of hrHPV self-screening compared with reminders (as per Arbyn et al.'s (2018) review), and other effectiveness studies published since the last date searched in the review by Everett et al. As per KQ 2, we anticipate the possibility that an update to the systematic review by Arbyn et al. may become available before we undertake our review for KQ 5. If such is the case, we will use the updated review as is, without de novo searches or syntheses, for the comparison of self- and clinician-sampled hrHPV testing.

Study Selection
Electronic database searches
We will upload the results of the electronic searches to EndNote (v.X7, Clarivate Analytics, Philadelphia, Pennsylvania) and remove duplicates.
We will transfer the titles and abstracts to DistillerSR (Evidence Partners, Ottawa, Canada) for screening. Two reviewers will independently screen the studies for eligibility in two stages (titles and abstracts, then full texts) following the pre-defined selection criteria (Tables 2 to 5) and mark each as include/unsure or exclude. At the title and abstract screening stage, we will use the liberal-accelerated approach [87,88], whereby any record marked as include/unsure by either of two reviewers will be considered eligible for full text screening. Records excluded by one reviewer will be screened by a second reviewer to confirm or refute their exclusion. At the full text screening stage, the reviewers will agree upon the included studies, with arbitration by a third reviewer if necessary. We will record the reasons for excluding full texts and illustrate the study selection process via a flow diagram. We will append a detailed list of the excluded studies, with full text exclusion reasons, to the final report. Before each screening stage, we will undertake a pilot round of 200 titles and abstracts and 10 full texts, or as many as needed to achieve a mutual understanding of the selection criteria. To create efficiencies, we will screen for studies meeting the eligibility criteria for all KQs simultaneously (the searches for all KQs will ultimately be combined into one database).
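The liberal-accelerated rule at the title and abstract stage can be sketched as follows (a minimal illustration with hypothetical record identifiers and votes; the function name is ours, and the sketch assumes each excluded record has already received its confirming second vote):

```python
def liberal_accelerated(votes: dict) -> tuple:
    """Title/abstract screening under the liberal-accelerated approach:
    a record advances to full-text screening if EITHER reviewer marks it
    include/unsure; it is excluded only once both reviewers vote exclude.
    votes maps record id -> (reviewer1_vote, reviewer2_vote), where each
    vote is 'include' (covering include/unsure) or 'exclude'."""
    advance, excluded = [], []
    for record_id, (v1, v2) in votes.items():
        if v1 == "include" or v2 == "include":
            advance.append(record_id)   # a single include vote suffices
        else:
            excluded.append(record_id)  # both reviewers agreed to exclude
    return advance, excluded
```

The design choice here is asymmetry: inclusion needs only one vote while exclusion needs two, which trades a larger full-text workload for a lower risk of prematurely discarding eligible studies.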
When inadequate detail is reported in a study to confirm or refute its eligibility, we will contact the corresponding author by e-mail to request the additional information required. We will contact authors twice, two weeks apart, before ceasing contact if we do not receive a reply.

Studies identified via other sources
Studies identified via content experts and reference lists (i.e., of known systematic reviews that we are using as sources of studies, systematic reviews identified during screening, and included studies) will be uploaded to separate folders in EndNote for storage and management. These will be screened following the same procedures as described for those identified via the electronic database searches. The selection process for these studies will be incorporated into the aforementioned flow diagram.

Data Extraction
For all KQs we will develop standard forms in Excel v. 2016 (Microsoft Corporation, Redmond, Washington) to guide data extraction. We will pilot test the forms on a random sample of 3 to 5 included studies for each KQ to ensure the complete and accurate extraction of all relevant data. Additional File 6 outlines the data extraction items for each KQ.
Data for the studies included in the review for each KQ will be extracted by one reviewer with verification by another, with the exception of results data (i.e., findings for the outcomes of interest), which will be independently extracted by two reviewers with consensus. A third reviewer will arbitrate if agreement on the extracted data cannot be reached. For qualitative studies (KQ 4), one reviewer will copy the relevant 'Results' or 'Findings' texts and paste them into a Word (Microsoft Corporation, Redmond, Washington) document for analysis [89]. A second reviewer will verify the completeness of the extraction.
To create efficiencies, we will rely on the study characteristics and results data for primary studies reported in earlier systematic reviews, where feasible. In the case of reviews with high-quality conduct and reporting (e.g., Cochrane systematic reviews), one reviewer will perform a quality check of 10% of the data specific to the outcomes of interest, and unless substantial errors or omissions are noted, we will rely on the reported data without further re-extraction from the primary study. When the data of interest are incompletely reported, one reviewer will extract data from the primary study and compare data specific to the outcomes of interest (as previously described) to that reported in the earlier review(s) for consistency. A second reviewer will provide input only in cases where discrepancies between the extracted data and that reported in earlier systematic reviews cannot be resolved.
Specific to KQ 2, we expect heterogeneity in the criteria (thresholds) used to define a positive test result across studies. Differences in the criteria for test positivity across studies could affect whether and how we pool and interpret their results. We are not able to judge a priori the possible array of reported definitions. Thus, to inform our analyses we will extract the definition of a positive test reported in the individual studies and present the range of definitions (without further study details) to the clinical experts supporting the working group. Based on clinical expert judgment and improved familiarity with the range of definitions reported across studies, we will finalize our data analysis plan (i.e., which types of studies we may be able to pool). Only after the clinical experts have deliberated on the consistency and compatibility of available definitions and we have developed a suitable analysis plan will we move forward with the extraction of results data. Because we will finalize the analysis details prior to the extraction of data, and based on the input of clinicians who will not be aware of study details, the risk of biasing the analyses will be minimal.

Risk of Bias Assessment
Considering the array of available risk of bias tools [90][91][92], we will use study design-specific tools that we believe best account for potential sources of bias [81,[93][94][95][96][97]. The planned methods are described in detail in Additional File 3. For all KQs we will develop standard forms in Excel to guide risk of bias appraisal. We will pilot test the forms on 3 to 5 included studies for each KQ to ensure a mutual understanding of the requirements of each tool. We will report domain-specific risk of bias ratings for each included study, with justification for each rating, in an appendix to the final report. Two reviewers will independently appraise the risk of bias of each included study and reach consensus. A third reviewer will be consulted if an agreement cannot be reached. We will extract and use risk of bias appraisals reported in available systematic reviews where possible, to create efficiencies.

Key Question 1: Effectiveness and Comparative Effectiveness
Where appropriate, we will pool studies reporting on mortality from cervical cancer, all-cause mortality, the incidence of ICC, the incidence of CIN 2 and 3, the number and rate of colposcopy and/or biopsy, and/or adverse pregnancy outcomes, per outcome-comparison. The measure of effect will be the relative risk (RR) or odds ratio (OR) with 95% confidence intervals (CIs), where appropriate. These will be calculated in Review Manager version 5.3 (The Nordic Cochrane Centre, The Cochrane Collaboration, Copenhagen, Denmark) from raw data reported in the studies or, if not provided, we will use the reported relative measures. When available, we will use adjusted ORs from observational studies, as these usually reduce the impact of confounding [98]. We will pool data using DerSimonian and Laird random effects models [99] to account for expected clinical and methodological heterogeneity across studies [100]. For rare events, we will use the Peto one-step odds ratio method to provide a less biased effect estimate [101], unless control groups are of unequal sizes, a large magnitude of effect is observed, or when events become more frequent (5% to 10%). In these cases, the reciprocal of the opposite treatment arm size correction will be used [101]. We will pool data from RCTs and controlled clinical trials separately from observational studies. We will present separate analyses for each comparison. In some cases, we may also deem it appropriate to combine intervention groups (e.g., for the comparisons of any screening versus no screening) using standard methods to avoid unit of analysis issues [98]. We will transform the pooled RR for each outcome to the absolute risk reduction (ARR) via standard methods [102]. We will calculate the number needed to screen for an additional beneficial outcome for outcomes with statistically significant results.
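To illustrate the analysis steps named above, the following sketch implements DerSimonian and Laird random-effects pooling of relative risks and the conversion of a pooled RR to the number needed to screen. It is illustrative only: the protocol specifies Review Manager 5.3 for the actual analyses, and this sketch omits continuity corrections for zero-event cells.

```python
import math

def pooled_rr_dl(studies):
    """DerSimonian-Laird random-effects pooling of relative risks.

    Each study is a tuple (events_screened, n_screened, events_control,
    n_control). Returns the pooled RR and its 95% CI.
    """
    log_rr, weights = [], []
    for a, n1, c, n2 in studies:
        log_rr.append(math.log((a / n1) / (c / n2)))
        var = 1 / a - 1 / n1 + 1 / c - 1 / n2  # variance of log RR
        weights.append(1 / var)
    # Fixed-effect estimate and Cochran's Q, used to estimate tau^2
    fixed = sum(w * y for w, y in zip(weights, log_rr)) / sum(weights)
    q = sum(w * (y - fixed) ** 2 for w, y in zip(weights, log_rr))
    c_term = sum(weights) - sum(w * w for w in weights) / sum(weights)
    tau2 = max(0.0, (q - (len(studies) - 1)) / c_term)  # between-study variance
    # Random-effects weights incorporate tau^2
    w_star = [1 / (1 / w + tau2) for w in weights]
    pooled = sum(w * y for w, y in zip(w_star, log_rr)) / sum(w_star)
    se = math.sqrt(1 / sum(w_star))
    ci = (math.exp(pooled - 1.96 * se), math.exp(pooled + 1.96 * se))
    return math.exp(pooled), ci

def number_needed_to_screen(pooled_rr, control_risk):
    """NNS = 1 / ARR, where ARR = control_risk * (1 - pooled RR)."""
    return 1 / (control_risk * (1 - pooled_rr))
```

For example, with a pooled RR of 0.5 and a control-group risk of 2%, the ARR is 1% and the number needed to screen is 100.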
We will consider false-positives to be cervical screening tests that are positive (according to the primary testing strategy used in the individual studies, recognizing that definitions of test positivity will differ across studies) and lead to diagnostic follow-up testing, but that are not histologically confirmed as CIN 2, CIN 3, or more severe disease. We will calculate the false-positive rate using available data in the individual studies, as follows: (# of individuals with a positive screening test result who are not histologically diagnosed with the relevant condition / # of individuals not diagnosed with the relevant condition, regardless of screening test result). This calculation requires that all participants undergo histological examination for pre-cancerous lesions. Should this information not be available (in the published report and following attempts to contact the study authors), we will report the number of positive tests and the total number of tests, as reported by the authors.
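The false-positive rate formula above reduces to FP / (FP + TN) when the 2 x 2 counts are available; a minimal sketch, assuming histological verification of all participants:

```python
def false_positive_rate(n_pos_without_condition, n_without_condition):
    """False-positive rate per the protocol's formula: screen-positives not
    histologically confirmed as CIN 2+ (the numerator), divided by all
    participants without the condition regardless of screening result
    (the denominator), i.e., FP / (FP + TN)."""
    return n_pos_without_condition / n_without_condition
```

For instance, 30 unconfirmed screen-positives among 1,000 participants without the condition yields a false-positive rate of 3%.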
The range of false-positive rates across studies will be reported narratively and in tables, per test.
We are not aware of a standard formula for estimating overdiagnosis in the context of cervical cancer screening. Thus, we expect studies reporting on overdiagnosis to be highly methodologically heterogeneous. For this reason, we will synthesize data on this outcome narratively and in tables, including the method (formula) used to derive each estimate.

Key Question 2: Comparative Accuracy
We will populate 2 x 2 tables with the TP, FP, TN, and FN for each screening test used in each study. If we identify more than three studies that we deem suitable for statistical pooling, we will compare accuracy data per outcome and per comparison using the Rutter and Gatsonis hierarchical summary receiver operating characteristic (HSROC) model [103], as recommended by Cochrane [104]. This model allows for the exploration of heterogeneity in test positivity (threshold for a positive test), position of the HSROC curve (accuracy of the test), and the shape of the HSROC curve [104]. Compared to the binomial regression model, the HSROC model also more fully accounts for within- and between-study variability in TP and FP rates [103]. We will investigate whether test strategies are associated with the shape and position of the summary ROC curve by fitting a binary covariate to the model representing the type of test that informed each 2 x 2 table [104]. In the event that preliminary plots of the study level estimates of sensitivity and specificity in ROC space reveal substantial differences in heterogeneity between studies for the two tests being investigated, we will assess whether the assumption of equal variances of the random effects of the two tests is reasonable by comparing the fit of the alternative models (i.e., where variances do or do not depend on the covariate for test strategy) [104]. For each screening strategy, we will report the pooled relative sensitivity and specificity across studies, with 95% CIs. In the event that the data are not suitable for statistical pooling, we will report the findings narratively and in tables.
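The 2 x 2 bookkeeping underlying these comparisons can be sketched as below. The HSROC model itself would be fitted in specialized statistical software rather than by hand; this sketch only shows how per-study sensitivity, specificity, and the relative accuracy of two tests are derived from the extracted counts.

```python
def sensitivity_specificity(tp, fp, fn, tn):
    """Per-study sensitivity and specificity from a 2 x 2 table:
    sensitivity = TP / (TP + FN), specificity = TN / (TN + FP)."""
    return tp / (tp + fn), tn / (tn + fp)

def relative_accuracy(test_a, test_b):
    """Relative sensitivity and specificity of test A versus test B,
    each supplied as a (tp, fp, fn, tn) tuple. Values above 1 favour
    test A on that dimension."""
    sens_a, spec_a = sensitivity_specificity(*test_a)
    sens_b, spec_b = sensitivity_specificity(*test_b)
    return sens_a / sens_b, spec_a / spec_b
```

A test with 90 true positives and 10 false negatives, for example, has a sensitivity of 0.90; comparing it against a second test with sensitivity 0.80 gives a relative sensitivity of 1.125.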

Key Question 4: Relative Importance of Potential Outcomes from Screening
We will synthesize the quantitative data separately from the qualitative data. For the quantitative data, we expect to undertake a narrative synthesis given the likely heterogeneity in study designs, exposures, comparisons, and outcomes reported across studies. We will synthesize the included studies and draw conclusions based on the body of evidence using standard methods for narrative syntheses, as described by Popay et al. (2006) [105]. Adaptations to standard methodology may be necessary, as our review aims to investigate people's values and preferences, so the outcomes differ, to a certain extent, when compared with intervention or implementation reviews. We will first present an overall synthesis of each included study, including their characteristics and reported findings. We will then describe relationships within and between studies, focusing on our exposure subgroups and comparators of interest and other factors such as methodological quality. As much as possible, we intend to report a best estimate of values and preferences for various exposures, and potential moderating factors.
We will analyze the qualitative data following standard procedures for thematic analysis [89,106]. One reviewer will initially read through the data to familiarize themselves with the prevailing ideas. Next, the reviewer will use line-by-line coding in Microsoft Word to apply one or more codes to each line of text. The reviewer will then compare codes across the data, combine similar codes, categorize common codes into themes, and develop memos for each theme. To reduce the risk of interpretive biases, a second reviewer will review the codes and themes for differences in interpretation. The two reviewers will agree upon the final themes, with the input of a third reviewer if necessary. We will report on each theme narratively.

Key Question 5: Effectiveness and Comparative Effectiveness of Interventions to Increase Screening Rates
We will incorporate newly identified studies into the analyses previously reported in the Cochrane systematic review by Everett et al. (2011) [86]. Additional studies extracted from the review by [85] will be pooled via the same methods. In some cases, we may also deem it appropriate to combine intervention groups from multi-arm trials using standard methods to avoid unit of analysis issues [98]. We will transform the pooled RR for each outcome to the absolute values via standard methods [102]. We will calculate the number needed to treat for an additional beneficial outcome (i.e., participation) for outcomes with statistically significant results. We will report on studies that are not appropriate for statistical pooling narratively.

Dealing with Missing Data
When data required for statistical pooling are not reported by the individual studies, we will contact the corresponding author via e-mail to inquire about the availability of the data. We will contact authors twice, two weeks apart, before ceasing contact if we do not receive a response.
For randomized trials, we anticipate that many will report their findings based on a "number of individuals screened" denominator, rather than intention-to-screen calculations using all individuals randomized. Our primary analysis will use outcome data derived by analyzing all individuals randomized (i.e., intention-to-screen). We will extract data as reported in the individual studies using the number randomized as the denominator for each arm. We will also analyze based on the findings as reported in the individual studies, undertaking separate analyses for studies reporting only the number of individuals screened and those reporting on all individuals randomized.

Unit of Analysis Issues
In the event of the inclusion of cluster-randomized trials, we will take appropriate measures to avoid unit-of-analysis errors when reporting their findings and/or incorporating them into meta-analysis [107]. When available, we will use the intracluster correlation coefficient (ICC) reported in the trial to apply a design effect to the sample size and number of events in each of the treatment and control groups [108]. If not reported, we will use an external estimate from similar studies. We will clearly identify cluster-randomized trial data when they are included in meta-analysis with individually randomized trials. Decisions about whether it is reasonable to pool data from cluster-randomized and individually randomized trials will be undertaken on a case-by-case basis. We will investigate the robustness of the conclusions from any meta-analysis including cluster-randomized trials via sensitivity analysis.
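The design-effect adjustment described above can be sketched as follows: the design effect is DE = 1 + (m - 1) x ICC, where m is the mean cluster size, and both the arm's sample size and event count are divided by it before entering the meta-analysis. This is an illustrative helper following the cited Handbook approach, not the production analysis.

```python
def design_effect_adjust(n, events, icc, mean_cluster_size):
    """Shrink a cluster-randomized arm's sample size and event count by the
    design effect DE = 1 + (m - 1) * ICC, so the arm contributes an
    'effective' sample size to the meta-analysis."""
    de = 1 + (mean_cluster_size - 1) * icc
    return n / de, events / de
```

For example, an arm of 1,000 participants in clusters averaging 21 people, with an ICC of 0.05, has a design effect of 2.0 and contributes an effective sample size of 500.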

Assessment of Heterogeneity
We will explore heterogeneity via subgroup analyses. First, we will report within-study subgroup data from our pre-specified subgroups of interest (Tables 2 to 5). We will also stratify the meta-analyses by subgroups (between-study analysis), or use other relevant statistical techniques like meta-regression to investigate heterogeneity. For population subgroups, we will use a large majority (i.e., >80% of participants) to decide the relevant subgroup for each study. We will interpret the plausibility of subgroup differences cautiously using available guidance [109,110].

Small Study Bias
When meta-analyses of trials contain at least eight studies of varying size, we will test for small study bias visually, by inspecting funnel plots for asymmetry, and statistically, via Egger's test [111].
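Egger's test regresses each study's standardized effect (effect / SE) on its precision (1 / SE); an intercept far from zero suggests small-study asymmetry. A minimal sketch of the regression is below; real analyses would use an established package (e.g., `metafor` in R) that also reports the significance test on the intercept.

```python
def egger_intercept(effects, ses):
    """Intercept of Egger's regression: standardized effect (theta / SE)
    regressed on precision (1 / SE) by ordinary least squares. An intercept
    near zero is consistent with funnel-plot symmetry."""
    y = [t / s for t, s in zip(effects, ses)]  # standardized effects
    x = [1 / s for s in ses]                   # precisions
    n = len(x)
    xbar, ybar = sum(x) / n, sum(y) / n
    slope = (sum((xi - xbar) * (yi - ybar) for xi, yi in zip(x, y))
             / sum((xi - xbar) ** 2 for xi in x))
    return ybar - slope * xbar
```

With a common underlying effect and no relationship between effect size and precision, the intercept is approximately zero.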

Certainty in the Body of Evidence
We will use GRADE methods [112] to assess the certainty of evidence for all outcomes, without relying on the appraisals reported in earlier systematic reviews. In the event that we use one or multiple systematic reviews as is to answer a KQ (e.g., the Kyrgiou et al. [79,80] reviews for KQ 3), we will review the reported certainty of evidence appraisals and undertake amendments as necessary to ensure that the appraisals are appropriately contextualized. In cases where studies of interventions cannot be pooled in meta-analysis, we will use GRADE guidance for rating the certainty of evidence in the absence of a single estimate of effect [113]. Two reviewers will independently assess the certainty of evidence for each outcome and agree on the final assessments. A third reviewer will arbitrate if necessary.
We will assess the certainty of evidence (very low, low, moderate, or high) based on five considerations: study limitations (risk of bias), inconsistency of results, indirectness of evidence, imprecision, and publication (small study) bias [114][115][116][117][118][119]. We will assess the certainty of evidence from trials and observational studies separately, for each outcome. For KQs of intervention effects (KQs 1 and 5), data from RCTs will begin at high certainty, and be downgraded for flaws in each of the aforementioned domains (or, rarely, upgraded for strengths) [120], whereas observational studies will begin at low certainty. For KQ 2 on diagnostic accuracy, all studies will begin at high certainty [121,122]. For KQ 4, we will adhere to GRADE methods for assessing the certainty of evidence in the importance of outcomes or values and preferences [82,96]. We will report our appraisals comprehensively and transparently, including justification for downgrading on any of the considered domains. We will use a partially contextualized approach; thus we will express our certainty that the true estimate lies within a range of magnitudes for each outcome. We will not account for other outcomes when assessing the magnitude of effect for individual outcomes, nor consider the certainty of any one outcome versus another [123].
For each KQ we will create a separate GRADE summary of findings table [115]. Justifications for rating up or down in any of the considered domains will be explained. We will also note where differences were observed between the data from trials and that from observational studies, or when we have relied solely on either the trial or observational evidence. The certainty of evidence assessments for each outcome will be incorporated into the Task Force's evidence-to-decision framework [124]. The Task Force may choose to fully contextualize the range of possible effects on all outcomes (including benefits and harms). The Task Force will consider the net benefits and harms of screening and other elements (e.g., costs, feasibility, patient values and preferences) to develop updated recommendations for screening for cervical cancer [124].

Task Force Involvement
The Task Force and clinical experts will not be involved in the selection of studies, extraction of data, appraisal of risk of bias (or methodological quality), nor synthesis of data, but will contribute to the interpretation of the findings and comment on the draft report. Clinical experts and/or Task Force members may be called upon to contribute to the certainty of evidence appraisals, e.g., to interpret directness (applicability) of included studies to the population of interest for the recommendation.

Discussion
Since the publication of the 2013 Task Force guideline for cervical cancer screening, new studies have become available that may alter recommendations. The proposed systematic reviews will identify and synthesize newly available studies, which will inform an update of the guideline. We anticipate some challenges to integrating studies reported in earlier systematic reviews. To mitigate potential challenges, we have planned methods (e.g., searching reference lists, contacting experts, independent data extraction and/or quality checks) consistent with the highest standards for evidence synthesis. We are confident that the planned methods will identify and provide a rigorous evaluation of all studies critical to the update of the guideline.

This protocol and the subsequent reviews will be conducted for the Public Health Agency of Canada; however, they do not necessarily represent the views of the Government of Canada. Staff of the Global Health and Guidelines Division at the Public Health Agency of Canada provided input during the development of this protocol and have reviewed the protocol, but will not be taking part in the selection of studies, data extraction, analysis or interpretation of the findings. The funder will give approval to the final version of the review. For the conduct of the review, the funder will also be given opportunity to comment, but final decisions will be made by the review team.

False-positive rate for detecting CIN 2, CIN 3, and invasive cancer***

***The ability to report and analyze findings by CIN 2, CIN 3, and invasive cervical cancer will be determined after reviewing the outcomes used in the identified studies (e.g., CIN 2+ and CIN 3+ will be considered if necessary, and may be considered indirect).

Timing
No limitation on the duration of follow-up; results will be reported by screening round and longest follow-up.

c) Qualitative information indicating relative importance between outcomes
d) Rank-order of importance of outcomes, based on data from a) to c) above, as applicable

Data must relate to the outcomes considered critical to the Task Force. Outcome groupings a) to c) above will be included in a hierarchical manner for each critical screening outcome. Any quantitative or qualitative study design using the methods described below:
- Utility values/weights measured directly using time trade-off*, standard gamble**, visual analogue scales, conjoint analysis with choice experiments or probability trade-offs
- Utility values/weights measured or estimated indirectly, e.g., from transforming several health state domains from multi-attribute utility indexes such as EQ-5D to utilities using general population preferences, including mapping from generic or disease-specific health-related quality of life instruments
- Non-utility, quantitative information about relative importance of different outcomes, e.g., rating scales using ordinal or interval variables, ranking; preference for or against screening (screening attendance, intentions, or acceptance) or preferred screening strategy based on different outcome risk descriptions; strength of associations of outcome ratings with screening behaviours or intentions
- Qualitative information indicating relative importance between benefits and harms
- Rank-order of importance of outcomes

*Time trade-off measures the value placed on attributes of a commodity by requiring individuals to choose between different scenarios, where in each scenario the commodity in question has varying levels of different attributes.
**Standard gamble approaches require that respondents choose between a lifetime in a certain health state or a gamble between different health states, whereas time trade-off requires respondents to choose between living for a period in less than perfect health, as opposed to a shorter period in perfect health.

Intervention
- Mail-out or opt-in (invitation to request) self-sampling for hrHPV screening
- Other interventions aimed at individuals or primary care providers with the intent to increase acceptability of screening (e.g., screening reminders, education, counselling, provider recommendation, addressing cultural practices and beliefs, lay health workers, patient-provider communication)

Interventions not targeted to primary care providers or feasible for primary care to deliver to their patients