
Quantitative methods used to evaluate impact of health promotion interventions to prevent HIV infections: a methodological systematic review protocol

Abstract

Background

Combination prevention is currently considered the best approach to combating the HIV epidemic. It is based on the combination of structural, behavioral, and biomedical interventions. Such interventions are frequently implemented in a health-promoting manner because of their aims, the approaches they adopt, and their complexity. The impact evaluation of these interventions often relies on methods inherited from the biomedical field. However, these methods have limitations and should be adapted to remain relevant for such complex interventions. This systematic review aims to map the evidence-based methods used to quantify the impact of these interventions and to analyze how these methods are implemented.

Methods

Three databases (Web of Science, Scopus, PubMed) will be used to identify impact evaluation studies of health promotion interventions aimed at reducing the incidence or prevalence of HIV infection. Only studies based on quantitative designs that assess intervention impact on HIV prevalence or incidence will be included. Two reviewers will independently screen studies, first on titles and abstracts and then on the full text. Information about study characteristics will be extracted to describe the contexts in which the interventions are implemented. Information specific to the quantitative impact evaluation methods will be extracted using items from the Mixed Methods Appraisal Tool (MMAT), the guidelines for reporting Statistical Analyses and Methods in the Published Literature (SAMPL), and the guidelines for Strengthening The Reporting of Empirical Simulation Studies (STRESS). This review will be conducted according to the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) statement.

Discussion

The impact evaluation of HIV prevention interventions is of substantial importance given the growing need for evidence of their effectiveness, even as these interventions become increasingly complex. Such evaluations make it possible to identify the most effective strategies for fighting the epidemic. It is therefore relevant to map these methods in order to implement them better and to adapt them to the type of intervention being evaluated.

Systematic review registration

PROSPERO CRD42020210825


Background

The importance of health promotion interventions is acknowledged, owing to their impact on the social determinants of health and their role in the fight against health inequities [1, 2]. Despite this recognition, their effectiveness is often debated and criticized because of the lack of available evidence from impact evaluations using quantitative data, especially randomized controlled trials. However, these methods, inherited from the biomedical field, have several methodological and practical limitations for quantifying the impact of this type of intervention and should be adapted to remain relevant [3, 4].

In the context of the fight against the HIV epidemic, the lack of curative treatment has brought health promotion interventions to the forefront, particularly those aimed at preventing the transmission of the virus. Biomedical interventions alone have shown their limits, and combination prevention is currently considered the best approach to curbing the epidemic [5, 6]. It is based on the combination of structural, behavioral, and biomedical interventions that address specific prevention needs at the individual and community levels. The interventions that fall within this framework are frequently implemented in a health-promoting manner because of their aims, the approaches they adopt, and their complexity. Over the past several years, such interventions have been proposed and evaluated for effectiveness in a variety of ways. Among those evaluated using evidence-based quantitative methods, some have been shown to be effective, while others have not. The challenge of evaluating the impact of these interventions is paramount in helping to identify and decide which programs and policies should be implemented and supported to address the epidemic. It is essential that these evaluations be conducted in a rigorous manner appropriate to the nature and complexity of health promotion interventions. Hence, we decided to conduct a review to identify the evidence-based impact evaluation methods used to assess these interventions and to examine how they are implemented.

Objectives

This review systematically examines quantitative methods for evaluating the impact of interventions aimed at preventing HIV transmission during sexual exposure. The specific questions this review attempts to answer are as follows:

  • What quantitative methods (statistical or mathematical) are used to quantify intervention impact on HIV incidence or prevalence?

  • What are the designs of these studies?

  • Are quantitative methods implemented appropriately?

Answering these questions will allow a critical synthesis of the methods used to assess the impact of the interventions covered by this systematic review and, more broadly, will identify directions for methodological development so that these methods can be adapted when assessing the impact of interventions designed to address social and behavioral determinants of health.

Methods/design

This systematic review is developed in accordance with the Preferred Reporting Items for Systematic Reviews and Meta-Analyses Protocols (PRISMA-P) statement [7] (Additional file 1). The protocol has been registered in the International Prospective Register of Systematic Reviews (PROSPERO): CRD42020210825.

Eligibility criteria

This review focuses on impact evaluation studies of health promotion interventions designed to prevent HIV transmission during sexual risk exposure. Health promotion is defined as the process of enabling individuals to improve their health and to increase their control over it [8]. For this reason, the studies to be included involve behavioral and/or structural components that may or may not be supplemented by the use of biomedical prevention tools. More specifically, this review will include only studies based on existing interventions; studies based on hypothetical interventions and exclusively simulated data will be excluded. Hence, modeling studies may be included as long as the model inputs are based on data from an existing intervention to be evaluated. The methods studied in this review are quantitative methods based on strictly quantitative designs (randomized controlled intervention studies, nonrandomized controlled intervention studies, observational studies) that meet the classification proposed by Deeks et al. [9], in which controlled studies are those where intervention assignment is made by the researchers. Studies that do not meet these conditions will be excluded. Finally, the review focuses on interventions designed to reduce HIV transmission; eligible studies are those that assess intervention impact on HIV prevalence or incidence as a primary or secondary outcome.

Literature search

Five databases were initially considered to identify the studies to be included in the review: Web of Science, Scopus, PubMed, CINAHL, and PsycINFO. However, a sensitivity analysis of the search equation showed that three databases (Web of Science, Scopus, PubMed) were sufficient to retrieve the studies relevant to this review. Moreover, the joint use of these three databases has been shown to ensure adequate literature search performance [10]. The search strategy combines three groups of terms and two filters:

  i) Terms associated with HIV prevalence and incidence: to restrict the results to studies that aim to reduce HIV transmission

  ii) Terms associated with the notion of intervention impact: to focus the search on articles referring to intervention impact evaluation

  iii) Terms associated with prevention interventions and sexual risk exposure: to focus the search on interventions designed to reduce the occurrence of new HIV cases and on interventions targeting behavioral and structural barriers

  iv) A filter restricting the search to articles written in English and French

  v) A filter restricting the search to scientific articles that have not been retracted

The search strategy is reported in Table 1 and was developed and tailored to each database with the assistance of a trained librarian.

Table 1 Search strategy
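The full search strategy is the one reported in Table 1. Purely as a sketch of how the three term groups and two filters described above combine into a single boolean query, consider the following; all terms and field tags here are hypothetical examples, not the actual search equation:

```python
# Hypothetical sketch: OR within a term group, AND between groups and filters.
# The real terms and syntax are those reported in Table 1.

TERM_GROUPS = {
    "outcome": ['"HIV incidence"', '"HIV prevalence"'],
    "impact": ['"impact evaluation"', "effectiveness"],
    "intervention": ['"prevention intervention"', '"health promotion"'],
}

FILTERS = ["english[Language] OR french[Language]"]  # example filter only

def build_query(groups, filters):
    """Join terms within a group with OR, join groups with AND, append filters."""
    blocks = ["(" + " OR ".join(terms) + ")" for terms in groups.values()]
    query = " AND ".join(blocks)
    for f in filters:
        query += " AND (" + f + ")"
    return query

print(build_query(TERM_GROUPS, FILTERS))
```

The same structure — OR within a group, AND between groups and filters — is then tailored to each database's own syntax.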

Data collection

Study selection

Reference management will be done using bibliographic management software (Zotero 5.0.96.2). Screening and data extraction will be done using systematic review management software (Covidence). Duplicates will be removed both automatically (using Covidence) and manually. Study selection will proceed in two steps: (i) a selection based on titles and abstracts and (ii) a selection based on full texts. At each step, two reviewers will independently select studies against the eligibility criteria. At the end of each step, disagreements about which studies to include will be resolved with the help of a third, senior reviewer. Only peer-reviewed scientific articles will be considered.
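Duplicate removal itself will be handled by Covidence and by hand. Purely as an illustration of the kind of matching involved, a record could be keyed on its DOI when available and on a normalized title otherwise; the record fields and sample data below are hypothetical:

```python
# Hypothetical sketch of duplicate detection; actual deduplication is done
# in Covidence and manually, not with this code.

def normalise(record):
    """Key a record by lowercased DOI when present, else by its
    lowercased, alphanumeric-only title."""
    doi = (record.get("doi") or "").strip().lower()
    if doi:
        return ("doi", doi)
    title = "".join(c for c in record.get("title", "").lower() if c.isalnum())
    return ("title", title)

def deduplicate(records):
    """Keep the first record seen for each key."""
    seen, unique = set(), []
    for rec in records:
        key = normalise(rec)
        if key not in seen:
            seen.add(key)
            unique.append(rec)
    return unique

records = [
    {"title": "An HIV prevention trial", "doi": "10.1000/xyz"},
    {"title": "An HIV Prevention Trial.", "doi": "10.1000/XYZ"},  # same DOI
    {"title": "Another study", "doi": ""},
]
print(len(deduplicate(records)))  # → 2
```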

Data extraction

Two types of data will be extracted from the studies: general information about the included studies and specific information about the methods used to evaluate interventions.

Data concerning the characteristics of the studies

This information will help to describe the included studies and the contexts in which the interventions are implemented. It covers the authors, the title of the article, the date of publication, the place where the study was carried out, and the purpose and results of the study.

Data on impact evaluation methods

A data extraction grid was developed for the purposes of this review (Table 2). This grid has three sections that allow for the extraction of (i) information about the design of the studies, (ii) information about the statistical methods (when appropriate), and (iii) information about the mathematical methods (when appropriate). The items used in the extraction grid are based on tools and recommendations consistent with each of these sections and with the research question investigated in this review [11,12,13]. We also verified that the information to be extracted is consistent with the reporting standards for each study design: the CONSORT statement and its relevant extensions for randomized controlled studies [14,15,16,17], the TREND statement for nonrandomized controlled studies [18], and the STROBE statement for observational studies [19].

Table 2 Quality assessment grid

To test the applicability of the grid, a sample of 14 articles selected on the basis of their content was used (Additional file 2): 6 articles concerning randomized controlled studies (2 of which present mathematical impact evaluation), 3 articles concerning nonrandomized controlled studies, and 5 articles concerning observational studies (2 of which present mathematical impact evaluation). These articles were selected independently of the research questions addressed in the review to be conducted. The item development for each section of the extraction grid is presented below.

Study design information

Eligible studies are based on strictly quantitative designs, i.e., randomized controlled studies, nonrandomized controlled studies, or observational studies. Items for extracting design-specific information originate from the Mixed Methods Appraisal Tool (MMAT), which is used to assess the design quality of quantitative studies [12, 20, 21]. Because these items are specific to the study design, the information to be extracted differs according to the study type. Each item offers a choice of three possible answers (yes, no, unclear).

Information about statistical methods

Items concerning statistical methods are formulated using the guidelines for reporting Statistical Analyses and Methods in the Published Literature (SAMPL) [11]. The criteria identified in these guidelines relate to the implementation of statistical methods. The developed items allow the extraction of information common to different statistical methods so that included studies can be compared. These items fall into two types: (i) those that answer questions about the sample size and (ii) those that answer questions about the implementation of statistical methods in the analyses. The formulated items offer a choice of four possible answers (yes, no, imprecise, not concerned), except for one item that classifies the statistical methods into seven categories: correlation analysis, regression analysis, ANOVA/ANCOVA, Bayesian analyses, statistical tests, other, and not concerned.
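To illustrate how such closed response sets can be enforced during extraction, the sketch below checks recorded answers against the allowed options. The item names are hypothetical; the actual grid is the one presented in Table 2:

```python
# Hypothetical sketch: validating extraction-grid answers against the
# closed response sets described above. Item names are illustrative only.

ANSWERS = {"yes", "no", "imprecise", "not concerned"}

# The single classification item uses the seven method categories.
METHOD_CATEGORIES = {
    "correlation analysis", "regression analysis", "ANOVA/ANCOVA",
    "Bayesian analyses", "statistical tests", "other", "not concerned",
}

def validate_item(item, value):
    """Return True if `value` is an allowed answer for the given item."""
    allowed = METHOD_CATEGORIES if item == "statistical_method" else ANSWERS
    return value in allowed

print(validate_item("sample_size_justified", "imprecise"))     # → True
print(validate_item("statistical_method", "regression analysis"))  # → True
```

Such checks would flag typos or out-of-range answers at entry time rather than during synthesis.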

Information about mathematical methods

Items concerning mathematical methods are formulated using the guidelines for Strengthening The Reporting of Empirical Simulation Studies (STRESS) [13]. Several criteria identified in these guidelines allow the collection of information applicable to multiple types of models. The developed items are based on these criteria and fall into four types: (i) description of the objectives of the model, (ii) description of the assumptions made by the model, (iii) description of the data used in the model, and (iv) description of the implementation of the model. These items offer a choice of four possible answers (yes, no, unclear, not concerned).

Data analysis

This review will be conducted in accordance with the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) statement [22]. The data synthesis will report all applicable items from this checklist. The synthesis will comprise four parts, covering the general characteristics of the included studies and the three sections of the developed extraction grid. The first part, which deals with general information about the studies, provides a summary of the context, the scope of the studies, the assumptions made, and the presented results. The second part, concerning the study design, gives insights into the methodological quality of the considered studies. The third and fourth parts identify and assess the implementation of quantitative methods. The synthesis will be presented using descriptive tables and graphs and interpreted accordingly. All analyses will be presented by study design and, where relevant, across designs.
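As an illustration of the planned descriptive synthesis, the sketch below cross-tabulates extracted records by study design and overall; the (design, method) pairs are hypothetical examples, not extracted data:

```python
# Hypothetical sketch of the descriptive synthesis: counting which methods
# appear under each study design and regardless of design.
from collections import Counter

extracted = [
    ("randomized controlled", "regression analysis"),
    ("randomized controlled", "statistical tests"),
    ("observational", "regression analysis"),
    ("observational", "mathematical model"),
]

def tabulate(rows):
    """Count methods per study design, and overall across designs."""
    per_design = {}
    for design, method in rows:
        per_design.setdefault(design, Counter())[method] += 1
    overall = Counter(method for _, method in rows)
    return per_design, overall

per_design, overall = tabulate(extracted)
print(overall["regression analysis"])  # → 2
```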

Discussion

Intervention impact evaluation is one of many tools that support evidence-based program evaluation. It makes it possible to determine whether an intervention significantly improves the situation of those who benefit from it compared with those who do not, within the context of the study. Accordingly, this type of evaluation has become rooted in evaluation practices as a way of quantifying intervention effects and has legitimated a form of hierarchy among methods in terms of evidence [23]. Yet these methods (including randomized controlled trials) are not without limitations, and while useful, their implementation requires a better understanding of the kind of evidence they provide and of how they may be used to inform research and practice [24,25,26]. Beyond these general criticisms, the methods have specific limitations in the field of health promotion and should be tailored before being used [27,28,29].

In the context of the fight against the HIV epidemic, there is an urgent demand for evidence-based health promotion interventions so that enough prevention strategies are available to stem the epidemic [30]. This dual demand for health promotion interventions and for evidence warrants this review, which maps the methods commonly used to evaluate the impact of these interventions. The review will not only identify these methods but will also discuss their operationalization and the adaptations that have been or could be made. It is therefore not intended to question the relevance of the studies that attempt to evaluate and make available useful interventions. On the contrary, it responds to the need for knowledge about existing methods and considers how to use them, improve them, and even design complementary tools to better evaluate interventions, especially in the field of health promotion.

This protocol has limitations. The included studies are expected to quantify intervention impact on HIV transmission. Such studies rely on confirmatory methods based on quantitative data, thus excluding mixed-methods studies whose aims do not focus on hypothesis or theory testing [31]. Nonetheless, as long as the eligibility criteria are met and the data extraction grid remains applicable, such studies will be considered. Similarly, studies that do not use a measure of HIV prevalence or incidence as an outcome are not considered. We will therefore not identify all health promotion studies used to stem the epidemic. However, this choice is warranted given that the review focuses on quantitative methods, and the selected studies are expected to assess the extent to which these interventions curb the epidemic.

Despite its limitations, this review will be useful for informing impact evaluation practices for HIV programs. The current recommendations for evaluating HIV programs [32] acknowledge the relevance of numerous methods, depending on the questions to be answered and the context of these programs. This review will help to identify the gap between these recommendations and what is actually being done, by identifying which methods are implemented and how. In addition, analyzing the implementation of the methods according to the study characteristics will make it possible to discuss the circumstances in which each method applies. Thus, the results of this review will highlight methodological challenges in the impact evaluation of complex interventions and guide future methodological developments. Given the need for evidence of the effectiveness of complex interventions, reviewing impact evaluation methods is necessary to map and improve these methods and, consequently, to improve decision-making about the interventions concerned.

Availability of data and materials

All data relevant to the study are included in the article or uploaded as additional files.

Abbreviations

CONSORT:

Consolidated Standards of Reporting Trials

MMAT:

Mixed Methods Appraisal Tool

PRISMA:

Preferred Reporting Items for Systematic Reviews and Meta-Analyses

PRISMA-P:

Preferred Reporting Items for Systematic Reviews and Meta-Analyses Protocols

SAMPL:

Statistical Analyses and Methods in the Published Literature

STRESS:

Strengthening The Reporting of Empirical Simulation Studies

STROBE:

Strengthening the Reporting of Observational Studies in Epidemiology

TREND:

Transparent Reporting of Evaluations with Nonrandomized Designs

References

  1. World Health Organization. Resolution WHA 60.24: health promotion in a globalized world; 2007.

  2. World Health Organization. Resolution WHA 62.14: reducing health inequities through action on the social determinants of health; 2009.

  3. McQueen DV, Anderson LM. Utiliser des données probantes pour évaluer l’efficacité de la promotion de la santé: quelques enjeux fondamentaux. Promot Educ. 2004;11:11–6.

  4. O’Neill M. Le débat international sur l’efficacité de la promotion de la santé: d’où vient-il et pourquoi est-il si important? Promot Educ. 2004;11:6–10.

  5. Piot P, Bartos M, Larson H, Zewdie D, Mane P. Coming to terms with complexity: a call to action for HIV prevention. Lancet. 2008;372:845–59.

  6. Joint United Nations Programme on HIV/AIDS. Fast-tracking combination prevention. Geneva: UNAIDS; 2015.

  7. Moher D, Shamseer L, Clarke M, Ghersi D, Liberati A, Petticrew M, et al. Preferred reporting items for systematic review and meta-analysis protocols (PRISMA-P) 2015 statement. Syst Rev. 2015;4:1.

  8. World Health Organization. Ottawa Charter for Health Promotion; 1986.

  9. Deeks JJ, Dinnes J, D’Amico R, Sowden AJ, Sakarovitch C, Song F, et al. Evaluating non-randomised intervention studies. Health Technol Assess. 2003;7(iii–x):1–173.

  10. Bramer WM, Rethlefsen ML, Kleijnen J, Franco OH. Optimal database combinations for literature searches in systematic reviews: a prospective exploratory study. Syst Rev. 2017;6:245.

  11. Lang TA, Altman DG. Basic statistical reporting for articles published in biomedical journals: the “Statistical Analyses and Methods in the Published Literature” or the SAMPL guidelines. Int J Nurs Stud. 2015;52:5–9.

  12. Hong QN, Fàbregues S, Bartlett G, et al. The Mixed Methods Appraisal Tool (MMAT) version 2018 for information professionals and researchers. Educ Inf. 2018;34:285–91.

  13. Monks T, Currie CSM, Onggo BS, Robinson S, Kunc M, Taylor SJE. Strengthening the reporting of empirical simulation studies: introducing the STRESS guidelines. J Simul. 2019;13:55–67.

  14. Zwarenstein M, Treweek S, Gagnier JJ, Altman DG, Tunis S, Haynes B, et al. Improving the reporting of pragmatic trials: an extension of the CONSORT statement. BMJ. 2008. https://doi.org/10.1136/bmj.a2390.

  15. Schulz KF, Altman DG, Moher D. CONSORT 2010 statement: updated guidelines for reporting parallel group randomised trials. BMJ. 2010. https://doi.org/10.1136/bmj.c332.

  16. Campbell MK, Piaggio G, Elbourne DR, Altman DG. CONSORT 2010 statement: extension to cluster randomised trials. BMJ. 2012. https://doi.org/10.1136/bmj.e5661.

  17. Montgomery P, Grant S, Mayo-Wilson E, et al. Reporting randomised trials of social and psychological interventions: the CONSORT-SPI 2018 extension. Trials. 2018;19:407.

  18. Des Jarlais DC, Lyles C, Crepaz N. Improving the reporting quality of nonrandomized evaluations of behavioral and public health interventions: the TREND statement. Am J Public Health. 2004;94:361–6.

  19. von Elm E, Altman DG, Egger M, Pocock SJ, Gøtzsche PC, Vandenbroucke JP. The Strengthening the Reporting of Observational Studies in Epidemiology (STROBE) statement: guidelines for reporting observational studies. Int J Surg. 2014;12:1495–9.

  20. Pace R, Pluye P, Bartlett G, Macaulay AC, Salsberg J, Jagosh J, et al. Testing the reliability and efficiency of the pilot Mixed Methods Appraisal Tool (MMAT) for systematic mixed studies review. Int J Nurs Stud. 2012;49:47–53.

  21. Souto RQ, Khanassov V, Hong QN, Bush PL, Vedel I, Pluye P. Systematic mixed studies reviews: updating results on the reliability and efficiency of the Mixed Methods Appraisal Tool. Int J Nurs Stud. 2015;52:500–1.

  22. Page MJ, McKenzie JE, Bossuyt PM, et al. The PRISMA 2020 statement: an updated guideline for reporting systematic reviews. Syst Rev. 2021;10:89.

  23. Cartwright N, Hardie J. Evidence-ranking schemes, advice guides, and choosing effective policies. In: Evidence-based policy: a practical guide to doing it better. Oxford: Oxford University Press; 2012. p. 135–43.

  24. Gertler PJ, Martinez S, Premand P, Rawlings LB, Vermeersch CMJ. Impact evaluation in practice. 2nd ed. Washington, DC: World Bank Publications; 2016.

  25. Van Belle S, Wong G, Westhorp G, Pearson M, Emmel N, Manzano A, et al. Can “realist” randomised controlled trials be genuinely realist? Trials. 2016;17:313.

  26. Deaton A, Cartwright N. Understanding and misunderstanding randomized controlled trials. Soc Sci Med. 2018;210:2–21.

  27. McQueen DV, Anderson LM. What counts as evidence: issues and debates. In: Evaluation in health promotion: principles and perspectives. WHO Regional Publications. Copenhagen: WHO Regional Office for Europe; 2001. p. 63–81.

  28. Potvin L, McQueen DV. Practical dilemmas for health promotion evaluation. In: Potvin L, McQueen DV, Hall M, de Salazar L, Anderson LM, Hartz ZMA, editors. Health promotion evaluation practices in the Americas: values and research. New York, NY: Springer; 2009. p. 25–45.

  29. de Salazar L, Hall M. Developing evaluation questions: beyond the technical issues. In: Potvin L, McQueen DV, Hall M, de Salazar L, Anderson LM, Hartz ZMA, editors. Health promotion evaluation practices in the Americas: values and research. New York, NY: Springer; 2009. p. 49–62.

  30. Kurth AE, Celum C, Baeten JM, Vermund SH, Wasserheit JN. Combination HIV prevention: significance, challenges, and opportunities. Curr HIV/AIDS Rep. 2011;8:62–72.

  31. Johnson B, Christensen LB. Quantitative, qualitative, and mixed research. In: Educational research: quantitative, qualitative, and mixed approaches. 7th ed. Los Angeles: SAGE; 2020. p. 29–54.

  32. Joint United Nations Programme on HIV/AIDS. Strategic guidance for evaluating HIV prevention programmes. Geneva: UNAIDS; 2010.


Acknowledgements

We would like to thank Elisabeth Adjadj (DESP, Inserm, France) for her assistance in developing the search strategy.

Funding

The author(s) received no specific funding for this work.

Author information


Contributions

AR designed the study and first drafted the protocol. VR, LKS, and YY contributed to the study design and critically revised the manuscript. All authors have read and approved the manuscript.

Corresponding author

Correspondence to Andrainolo Ravalihasy.

Ethics declarations

Competing interests

The authors declare that they have no competing interests.

Additional information

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Supplementary information

Additional file 1. PRISMA-P 2015 Checklist.

Additional file 2. List of references used to test the data extraction grid.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/. The Creative Commons Public Domain Dedication waiver (http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated in a credit line to the data.


About this article


Cite this article

Ravalihasy, A., Kardaś-Słoma, L., Yazdanpanah, Y. et al. Quantitative methods used to evaluate impact of health promotion interventions to prevent HIV infections: a methodological systematic review protocol. Syst Rev 11, 87 (2022). https://doi.org/10.1186/s13643-022-01970-z


Keywords

  • Evidence-based impact evaluation
  • Combination HIV prevention
  • Health promotion
  • Study design
  • Statistical methods
  • Mathematical methods