
The impact of design elements on undergraduate nursing students’ educational outcomes in simulation education: protocol for a systematic review

Abstract

Background

Although simulation-based education (SBE) has become increasingly popular as a mode of teaching in undergraduate nursing courses, its effect on student learning outcomes remains ambiguous. Educational outcomes are influenced by SBE quality, which is governed by technology, training, resources and SBE design elements. This paper reports the protocol for a systematic review to identify, appraise and synthesise the best available evidence regarding the impact of SBE on undergraduate nurses’ learning outcomes.

Methods

Databases to be searched from 1 January 1990 include the Cumulative Index to Nursing and Allied Health Literature (CINAHL), the Medical Literature Analysis and Retrieval System Online (MEDLINE), American Psychological Association (APA) PsycInfo and the Education Resources Information Centre (ERIC) via the EBSCOhost platform. The Excerpta Medica database (EMBASE) will be searched via the Ovid platform. We will review the reference lists of relevant articles for additional citations. A combination of search terms including ‘nursing students’, ‘simulation training’, ‘patient simulation’ and ‘immersive simulation’ with common Boolean operators will be used. Specific search terms will be combined with either MeSH or Emtree terms and appropriate permutations for each database. Search findings will be imported into the reference management software EndNote© (Version X9) and then uploaded into Covidence, where two reviewers will independently screen titles, abstracts and retrieved full texts. A third reviewer will be available to resolve conflicts and moderate consensus discussions. Quantitative primary research studies evaluating the effect of SBE on undergraduate nursing students’ educational outcomes will be included. The Mixed Methods Appraisal Tool (MMAT) will be used for quality assessment of the core criteria, in addition to the Cochrane RoB 2 and ROBINS-I tools to assess the risk of bias for randomised and non-randomised studies, respectively. Primary outcomes are any measure of knowledge, skills or attitude.

Discussion

SBE has been widely adopted by healthcare disciplines in tertiary teaching settings. This systematic review will reveal (i) the effect of SBE on learning outcomes, (ii) SBE element variability and (iii) interplay between SBE elements and learning outcome. Findings will specify SBE design elements to inform the design and implementation of future strategies for simulation-based undergraduate nursing education.

Systematic review registration

PROSPERO CRD42021244530

Background

Simulation-based education (SBE) is a popular mode of teaching in undergraduate nursing education [1]. This pedagogical approach provides students with an opportunity to practise non-technical skills (e.g. communication, teamwork, problem solving) and clinical skills (e.g. venepuncture) in a controlled environment that poses no risk of harm to patients and allows direct feedback and guidance from teaching staff [2]. Educational outcomes are influenced by SBE quality, which is governed by technology, training, resources and SBE design elements [3]. As SBE has evolved, its design elements have been adjusted to overcome learning barriers such as excessive student anxiety [4]. SBE has also been tailored to suit a variety of settings, populations, time frames and session frequencies [5].

SBE designs include tag team simulation, simulated virtual and augmented environments, and game-based themes within traditional simulation, to accommodate ever-expanding university enrolments whilst maintaining student experience and satisfaction [6]. In the context of evolving SBE, the level of evidence for SBE effectiveness in nursing education is diverse, and educational outcomes remain ambiguous [7]. SBE can refer to a 5-min low-fidelity task-trainer exercise completed one-on-one with a facilitator, or to a complex high-fidelity immersive scenario involving a group of students performing complex tasks over a prolonged period of time [8]. The pedagogy of SBE generally fails to acknowledge the complex variability in design elements [9], which has implications for learning outcomes. As such, there is a need to evaluate the effect of SBE on learning and to describe SBE design elements to determine whether specific elements have a superior effect on learning outcomes. The aim of this systematic review is to identify, appraise and synthesise the best available evidence regarding the effectiveness of SBE and the impact of design elements on undergraduate nurses’ learning outcomes. To address this aim, there are three review objectives:

  1. (i) To determine the effect of SBE on learning outcomes

  2. (ii) To describe the variability in SBE elements

  3. (iii) To explore the interplay between SBE elements and learning outcomes

Methods

Design

This systematic review protocol was developed in accordance with the Preferred Reporting Items for Systematic Review and Meta-Analysis Protocols (PRISMA-P) [10] statement (Additional file 1) and the accompanying elaboration and explanation guide [11]. The planned systematic review will be reported in accordance with the PRISMA statement [12] and registered on the International Prospective Register of Systematic Reviews (PROSPERO) (CRD42021244530).

Eligibility criteria

Study selection will be guided by the eligibility criteria and the Population, Intervention, Comparison, Outcome (PICO) mnemonic [13].

Types of studies

Primary quantitative research studies published in English from 1 January 1990 will be considered. There has been a sustained increase in the volume of research focused on SBE since the 1990s related to the affordability of high-fidelity simulators [5]. Relevant quantitative designs including randomised controlled trials and quasi-experimental studies with a control group for comparison will be included.

Population

The population of interest is undergraduate nursing students, aged 18 years or over, engaged in SBE in an academic setting, such as a university.

Intervention

SBE is the intervention of interest. In general terms, SBE is a teaching technique that enhances a learning setting so that it resembles the real world. In healthcare, a variety of design elements can influence the effectiveness of SBE, which in undergraduate nursing education settings is usually delivered in a physical environment with face-to-face teaching. Elements include devices such as manikins or task trainers that allow students to interact in a manner that represents real clinical practice.

Fidelity refers to the realism of the simulation environment [14], and low-, medium- and high-fidelity environments are elements of simulation that can influence learning outcomes. Simulated patients, including trained actors, and/or integrated simulators that incorporate technology influence fidelity and therefore the realism of the interaction between the learner and the manikin. Common elements include adjusting the manikin’s physiological data in response to learner decision-making, or conversing with the learner through built-in speakers to simulate a patient conversation [15]. Task trainers are another element of simulation, designed to mimic part of the patient’s body, such as a manikin arm for intravenous insertion or something as simple as an orange for practising injections.

Comparator

Conventional education modalities such as didactic lectures, passive (one-way) classroom teaching or small group seminars will provide a comparative control cohort.

Outcome

The primary outcome will be the SBE effect. This effect will be measured by assessing at least one measure of knowledge, skills or attitude as an endpoint. For the purposes of this review, knowledge is defined as learnt information (e.g. theoretical knowledge relating to the intended learning outcome of the simulation activity) acquired within the simulation activity. The measurement of knowledge will be determined by an academic outcome assessment. Skills are defined as the ability to develop psychomotor function aligned with the successful performance of a particular procedure (e.g. change a wound dressing), and attitude is defined as how worthwhile the learner believes the activity is in relation to their learning [16].

Exclusion criteria

Given that the primary aim of the review is to assess the effect of SBE, which necessitates a comparator group, case-control, cohort, cross-sectional and single-group observational studies will be excluded. Literature, narrative, integrative, mixed methods and systematic reviews, abstracts, letters, commentaries, editorials, opinion pieces and grey literature will also be excluded. SBE can use modified realities such as augmented reality and virtual reality; these emerging digital modalities have a limited body of evidence and are a developing area of practice [17], so they are beyond the scope of this review. Studies using pre-simulation interventions, such as online learning modules or smart device technologies, will be excluded because these may influence the relationship between SBE and traditional learning. Similarly, studies that combine low-fidelity SBE elements with traditional education approaches and compare these to medium- or high-fidelity SBE will be excluded. Postgraduate nursing students, midwifery students and students receiving SBE in a clinical setting (e.g. a hospital) are excluded. Manuscripts published in languages other than English will be excluded; this is an unfunded study, so translation costs are beyond the investigators’ capacity.

Information sources and search strategy

Databases to be searched from 1 January 1990 include the Cumulative Index to Nursing and Allied Health Literature (CINAHL), the Medical Literature Analysis and Retrieval System Online (MEDLINE), American Psychological Association (APA) PsycInfo and the Education Resources Information Centre (ERIC) via the EBSCOhost platform. The Excerpta Medica database (EMBASE) will be searched via the Ovid platform. The MEDLINE search strategy is included in Table 1.

Table 1 EBSCO MEDLINE search strategy

Primary quantitative studies included in systematic reviews captured by the search strategy, and studies identified through secondary searching of relevant study reference lists, will also be eligible for inclusion. Specific search terms will be developed in MEDLINE with assistance from a senior librarian experienced in the conduct of systematic reviews, using text words and subject headings. Primary search terms include ‘nursing students’, ‘simulation training’, ‘patient simulation’ and ‘immersive simulation’, combined with common Boolean operators and symbols for exploding terms (*, +), as illustrated in Table 2. Each database will be searched using these broad terms with either MeSH or Emtree terms and appropriate permutations.

Table 2 Search concepts
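The full database-specific strategies appear in Tables 1 and 2. As a minimal sketch of how concept groups combine under Boolean logic (the term lists below echo the protocol’s primary search terms, but the groupings are illustrative assumptions, not the protocol’s actual strings):

```python
# Illustrative sketch only: ORs the synonyms within each search concept,
# then ANDs the concepts together, as database search builders do.
def build_query(*concepts):
    """OR the terms within each concept, then AND the concepts together."""
    return " AND ".join("(" + " OR ".join(terms) + ")" for terms in concepts)

population = ['"nursing students"']
intervention = ['"simulation training"', '"patient simulation"',
                '"immersive simulation"']

print(build_query(population, intervention))
# ("nursing students") AND ("simulation training" OR "patient simulation" OR "immersive simulation")
```

In practice each quoted phrase would also be exploded with MeSH or Emtree headings and truncation symbols per database, as Table 2 describes.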

Data management, selection and screening

Database search results will be imported into the reference management software EndNote© (Version X9) and uploaded into Covidence. Covidence is a tool for effective collaborative title and abstract screening, full-text screening, data abstraction and quality assessment [18]. Covidence automatically identifies, sorts and removes duplicate studies. Two reviewers (MJ and RW) will independently screen the title and abstract of each citation, with a third reviewer (LM) available to moderate disagreements with a view to reaching consensus. Articles that meet the eligibility criteria will be sourced for full-text review. Two reviewers (MJ and RW) will complete the full-text screening, with a third reviewer (LM) available to moderate disagreements to achieve consensus. Screening outcomes will be documented and reported in a PRISMA flow diagram [12].

Data extraction

Full-text data extraction will be undertaken in duplicate by two reviewers (MJ and LB), with conflicts resolved through discussion. Data extraction will use a modified version of the existing Covidence data extraction template. Study characteristics to be recorded include publication details (authors, publication year), demographic characteristics, methodology, intervention and comparator group details, and reported outcomes. To ensure consistency between reviewers, periodic meetings will be held during the screening process to resolve disagreements by discussion. A third reviewer (LM) will act as an adjudicator in the event of an unresolved disagreement.

Data items

The following data items will be extracted from the selected studies: study setting, sample, inclusion and exclusion criteria, aim, design element/s employed, unit of allocation, start and end dates of the study, duration of participation, baseline group differences, frequency and duration of the intervention, outcomes measured (e.g. knowledge acquisition, skill improvement, attitude and satisfaction), tools used to measure outcomes, validity of the tool/s, comparator group education method and sustainability of outcome/s.

Outcomes and prioritisation

The primary outcome is the SBE effect measured by assessing at least one measure of knowledge, skills or attitude as an endpoint. Outcomes will be compared between the groups who do and do not participate in a SBE intervention. Secondary outcomes include describing variability in SBE design elements and sub-group analyses to explore the interplay between SBE elements and learning outcomes. In the case of discrepancies in the outcomes reported, contact with corresponding authors will be attempted by email to obtain relevant data.

Risk of bias in individual studies

Each randomised trial will be assessed for possible risk of bias using the revised Cochrane Collaboration tool for assessing risk of bias (RoB 2). This tool focuses on trial design, conduct and reporting to obtain the information relevant to the risk of bias [19]. Based on the answers provided within the tool, trials will be judged as ‘low risk’, ‘some concerns’ or ‘high risk’ of bias. Where a lack of detail impedes the capacity to assess the risk of bias, authors will be contacted for more information. If there is disagreement, a third author will be consulted to act as an arbitrator. The Risk Of Bias In Non-randomized Studies – of Interventions (ROBINS-I) tool will be used for bias assessment in non-randomised studies. The ROBINS-I tool shares many features with the RoB 2 tool: it focuses on a specific result, is structured into a fixed set of bias domains, includes signalling questions that inform risk-of-bias judgements and leads to an overall risk-of-bias judgement [20].

Quality appraisal

Two authors (MJ and LB) will independently assess each article for quality using the Mixed Methods Appraisal Tool (MMAT) [21]. This critical appraisal tool allows the appraisal of five categories of studies including qualitative research, randomised controlled trials, non-randomised studies, quantitative descriptive studies and mixed methods studies. As recommended, each study will be reviewed by two authors by completing the appropriate study categories identified within the MMAT. An overall score will not be used to rate the quality of study, but instead, sensitivity analyses will be completed to determine the impact of study quality on outcomes by contrasting components as recommended [11].

Data synthesis

The primary unit of analysis will reflect each endpoint measure: knowledge acquisition, skill improvement or attitude. Dichotomous outcomes will be extracted and analysed using odds ratios (ORs) with 95% confidence intervals (CIs); continuous outcomes will be represented using the mean difference (MD), or the standardised mean difference (SMD) when outcomes are measured using different scales, with 95% CIs. In the event of missing data, we will attempt to contact the primary authors to obtain relevant information. Meta-analysis will be undertaken if two or more studies with comparable designs and outcome measures meet the eligibility criteria. Pooled data will be analysed using the DerSimonian and Laird method for random-effects models in RevMan [22]. This model is used when reported outcome effects differ amongst studies but follow a similar distribution [23]. Findings from meta-analyses will be illustrated using forest plots.
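RevMan performs this pooling internally; as a minimal sketch of the DerSimonian and Laird inverse-variance calculation (the effect sizes and variances in the example are hypothetical, not data from this review):

```python
import math

def dersimonian_laird(effects, variances):
    """Pool per-study effect sizes (e.g. SMDs) under the DerSimonian and Laird
    random-effects model; returns (pooled effect, (lo, hi) 95% CI, tau^2)."""
    w = [1.0 / v for v in variances]                  # inverse-variance weights
    fixed = sum(wi * y for wi, y in zip(w, effects)) / sum(w)
    q = sum(wi * (y - fixed) ** 2 for wi, y in zip(w, effects))   # Cochran's Q
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - (len(effects) - 1)) / c)     # between-study variance
    w_re = [1.0 / (v + tau2) for v in variances]      # random-effects weights
    pooled = sum(wi * y for wi, y in zip(w_re, effects)) / sum(w_re)
    se = math.sqrt(1.0 / sum(w_re))
    return pooled, (pooled - 1.96 * se, pooled + 1.96 * se), tau2

# Hypothetical SMDs and variances from three studies:
effect, ci, tau2 = dersimonian_laird([0.40, 0.65, 0.30], [0.04, 0.09, 0.06])
```

When the between-study variance estimate tau^2 is zero the result collapses to the fixed-effect pooled estimate, which is why the model suits outcomes that differ amongst studies but follow a similar distribution.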

The I2 statistic will be used to quantify the proportion of variation across studies attributable to heterogeneity rather than chance, rather than simply stating whether heterogeneity is present or not [24]. Low heterogeneity will be reflected by an I2 result between 0 and 40%, moderate heterogeneity between 30 and 60%, substantial heterogeneity between 50 and 90% and considerable heterogeneity between 75 and 100% [25]. If high heterogeneity (I2 > 50%) exists amongst trials, study designs and characteristics will be reported and sensitivity analyses conducted to reduce variability, with a view to being able to undertake meta-analysis. It is anticipated that specific design elements will underpin the need for subgroup analyses. If data are not suitable for meta-analysis, findings will be presented descriptively in the form of a narrative synthesis according to the categories outlined in the SWiM guideline [26]. Because a large amount of data could be conveyed textually, the content will be sequenced to follow the same structure to ensure consistency across results [27].
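The I2 calculation itself is simple. As a sketch of Higgins’ I2 from Cochran’s Q (the Q value in the example is hypothetical):

```python
def i_squared(q, df):
    """I^2: percentage of total variation across studies attributable to
    heterogeneity rather than chance, floored at zero."""
    if q <= 0 or df < 0:
        return 0.0
    return max(0.0, (q - df) * 100.0 / q)

# e.g. a Cochran's Q of 20 across 10 studies (df = 9):
print(i_squared(20, 9))   # 55.0 -> substantial heterogeneity per the Cochrane bands
```

Because I2 is floored at zero, a Q below its degrees of freedom simply reports 0% rather than a negative value.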

Meta-bias

Study selection for this review will be guided by the PICO mnemonic and the framework outlined in this protocol to reduce the risk of bias. Hand searching relevant systematic reviews will contribute to a reduction in publication bias [28]. To reduce bias associated with selecting studies, two independent reviewers will be used throughout the screening and data collection process [29]. To address bias when synthesising studies, this protocol has been registered prospectively to promote transparency and replicability [10]. Two reviewers will appraise the quality of studies, and low-quality studies will be excluded to avoid inappropriate influence on results [10].

Confidence in cumulative evidence

The overall effectiveness of simulation and the impact of design elements on undergraduate nurses will be assessed with the Grading of Recommendations Assessment, Development and Evaluation (GRADE) approach. The GRADE approach allows an assessment of the certainty of the body of evidence for each individual outcome [30]. Quality of evidence depends on the within-study risk of bias, directness of evidence, heterogeneity, precision of effect estimates and risk of publication bias [23]. Two authors will independently assess the quality of evidence regarding the effectiveness of simulation on each outcome. All disagreements will be resolved through discussion and consensus. The certainty of evidence from pooled analyses will be categorised as high, moderate, low or very low [30]. If data are not suitable for pooled meta-analysis and studies are grouped for synthesis, the SWiM guideline [26] will provide a framework for narrative analysis, and GRADE criteria will be used to examine the certainty of the evidence for each analytical grouping.

Discussion

SBE is often positioned as a solution to inadequate exposure to real-life clinical scenarios. It has become an increasingly popular mode of teaching, yet reports of its effectiveness remain diverse. There is a variety of simulation designs, and their embedded elements have implications for learning outcomes that are not well described or scrutinised in the literature. This systematic review will provide a detailed summary of the evidence for SBE and SBE design elements in the context of undergraduate nursing education. Findings will contribute to the body of literature examining specific design elements by establishing the effect of SBE on learning outcomes and demonstrating the degree of variability in these elements.

Availability of data and materials

Datasets generated by searches will be made available upon reasonable request.

Abbreviations

SBE:

Simulation-based education

CINAHL:

Cumulative Index to Nursing and Allied Health Literature

MEDLINE:

Medical Literature Analysis and Retrieval System Online

APA:

American Psychological Association

ERIC:

Education Resources Information Centre

EMBASE:

Excerpta Medica database

MMAT:

Mixed Methods Appraisal Tool

PROSPERO:

International Prospective Register of Systematic Reviews

PICO:

Population, Intervention, Comparison, Outcome

References

  1. Kim J, Park JH, Shin S. Effectiveness of simulation-based nursing education depending on fidelity: a meta-analysis. BMC Med Ed. 2016;16:152.

  2. Nielsen K, Norlyk A, Henriksen J. Nursing students’ learning experiences in clinical placements or simulation–a qualitative study. J Nurs Ed Prac. 2019;9(1):32–43.

  3. Al-Ghareeb AZ, Cooper SJ. Barriers and enablers to the use of high-fidelity patient simulation manikins in nurse education: an integrative review. Nurs Ed Today. 2016;36:281–6.

  4. Hayden JK, Smiley RA, Alexander M, Kardong-Edgren S, Jeffries PR. The NCSBN national simulation study: a longitudinal, randomized, controlled study replacing clinical hours with simulation in prelicensure nursing education. J Nurs Reg. 2014;5(Suppl 2):S3–S40.

  5. Aebersold M. Simulation-based learning: no longer a novelty in undergraduate education. Online J Issues Nurs. 2018;23(2):1–13.

  6. Kinio AE, Dufresne L, Brandys T, Jetty P. Break out of the classroom: the use of escape rooms as an alternative teaching strategy in surgical education. J Surg Ed. 2019;76(1):134–9.

  7. Cant RP, Cooper SJ. Use of simulation-based learning in undergraduate nurse education: an umbrella systematic review. Nurs Ed Today. 2017;49:63–71.

  8. Bucknall TK, Forbes H, Phillips NM, Hewitt NA, Cooper S, Bogossian F, et al. An analysis of nursing students’ decision-making in teams during simulations of acute patient deterioration. J Adv Nurs. 2016;72(10):2482–94.

  9. Raurell-Torreda M, Llaurado-Serra M, Lamoglia-Puig M, Rifa-Ros R, Diaz-Agea JL, García-Mayor S, et al. Standardized language systems for the design of high-fidelity simulation scenarios: a Delphi study. Nurs Ed Today. 2020. https://doi.org/10.1016/j.nedt.2019.104319.

  10. Moher D, Shamseer L, Clarke M, Ghersi D, Liberati A, Petticrew M, et al. Preferred Reporting Items for Systematic Review and Meta-Analysis Protocols (PRISMA-P) 2015 statement. Syst Reviews. 2015;4(1):1–9.

  11. Shamseer L, Moher D, Clarke M, Ghersi D, Liberati A, Petticrew M, et al. Preferred Reporting Items for Systematic Review and Meta-Analysis Protocols (PRISMA-P) 2015: elaboration and explanation. BMJ. 2015;349:g7647. https://doi.org/10.1136/bmj.g7647.

  12. Page MJ, McKenzie JE, Bossuyt PM, Boutron I, Hoffman TC, Mulrow CD, et al. The PRISMA 2020 statement: an updated guideline for reporting systematic reviews. BMJ. 2020;372(n71). https://doi.org/10.1136/bmj.n71.

  13. Richardson WS. The well built clinical question: a key to evidence-based decisions. ACP J Club. 1995;123(3):A12–3.

  14. Alconero-Camarero AR, Sarabia-Cobo CM, Catalán-Piris M, González-Gómez S, González-López JR. Nursing students’ satisfaction: a comparison between medium-and high-fidelity simulation training. Int J Environ Res Public Health. 2021;18(2):804.

  15. Massoth C, Röder H, Ohlenburg H, Hessler M, Zarbock A, Pöpping DM, et al. High-fidelity is not superior to low-fidelity simulation but leads to overconfidence in medical students. BMC Med Ed. 2019;19(1):1–8.

  16. Gentry SV, Gauthier A, Ehrstrom BLE, Wortley D, Lilienthal A, Car LT, et al. Serious gaming and gamification education in health professions: systematic review. J Med Internet Res. 2019;21(3):e12994.

  17. Rourke S. How does virtual reality simulation compare to simulated practice in the acquisition of clinical psychomotor skills for pre-registration student nurses? A systematic review. Int J Nurs Stud. 2020;102:103466.

  18. Babineau J. Product review: Covidence (systematic review software). J Canad Health Lib Assoc. 2014;35(2):68–71.

  19. Sterne JA, Savović J, Page MJ, Elbers RG, Blencowe NS, Boutron I, et al. RoB 2: a revised tool for assessing risk of bias in randomised trials. BMJ. 2019;366:l4898

  20. Sterne JA, Hernán MA, McAleenan A, Reeves BC, Higgins JP. Assessing risk of bias in a non-randomized study. In: Cochrane handbook for systematic reviews of interventions; 2019. p. 621–41.

  21. Hong QN, Fabregues S, Bartlett G, Boardman F, Cargo M, Dagenais P, et al. The Mixed Methods Appraisal Tool (MMAT) version 2018 for information professionals and researchers. Ed Inform. 2019;34(4):285–91. https://doi.org/10.3233/EFI-180221.

  22. Review Manager (RevMan) [Computer program]. Version 5.3. Copenhagen: the Nordic Cochrane Centre. The Cochrane Collaboration, 2014.

  23. Higgins JP, Thomas J, Chandler J, Cumpston M, Li T, Page MJ, et al. Cochrane handbook for systematic reviews of interventions: Wiley; 2019.

  24. Lin L. Comparison of four heterogeneity measures for meta-analysis. J Eval Clin Pract. 2020;26(1):376–84.

  25. Chandler J, Cumpston M, Li T, Page M, Welch V. Cochrane handbook for systematic reviews of interventions. Hoboken: Wiley; 2019.

  26. Campbell M, McKenzie JE, Sowden A, Katikireddi SV, Brennan SE, Ellis S, et al. Synthesis without meta-analysis (SWiM) in systematic reviews: reporting guideline. BMJ. 2020;368:l6890

  27. Munn Z, Tufanaru C, Aromataris E. JBI’s systematic reviews: data extraction and synthesis. Am J Nurs. 2014;114(7):49–54.

  28. Paez A. Gray literature: an important resource in systematic reviews. J Evid-Based Med. 2017;10(3):233–40.

  29. Siddaway AP, Wood AM, Hedges LV. How to do a systematic review: a best practice guide for conducting and reporting narrative reviews, meta-analyses, and meta-syntheses. An Rev Psych. 2019;70:747–70.

  30. Schunemann HJ, Higgins JPT, Vist GE, Glasziou P, Akl EA, Skoetz N, et al. Chapter 14: Completing ‘summary of findings’ tables and grading the certainty of the evidence. In: Higgins JPT, Thomas J, Chandler J, Cumpston M, Li T, Page MJ, Welch VA, editors. Cochrane Handbook for Systematic Reviews of Interventions version 6.3 (updated February 2022): Cochrane; 2022. Available from www.training.cochrane.org/handbook.

Acknowledgements

Not applicable.

Funding

No funding has been received for the conduct of this review.

Author information

Contributions

MJ, LM, LB and RW conceived the idea; structured the aim, objectives and PICO elements; and reviewed the search string and terms. MJ drafted the manuscript. MJ, LM, LB and RW revised the manuscript and read and approved the final draft.

Corresponding author

Correspondence to Rochelle Wynne.

Ethics declarations

Ethics approval and consent to participate

This review does not require ethics approval, and as there are no active participants, consent is not relevant.

Competing interests   

The authors declare that they have no competing interests.

Additional information

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Supplementary Information

Additional file 1.

PRISMA-P 2015 Checklist.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/. The Creative Commons Public Domain Dedication waiver (http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated in a credit line to the data.

Cite this article

Jackson, M., McTier, L., Brooks, L.A. et al. The impact of design elements on undergraduate nursing students’ educational outcomes in simulation education: protocol for a systematic review. Syst Rev 11, 52 (2022). https://doi.org/10.1186/s13643-022-01926-3

Keywords

  • Nursing
  • Simulation
  • Undergraduate
  • Student
  • Systematic review
  • Protocol