Assessment of interprofessional competence in undergraduate health professions education: protocol for a systematic review of self-report instruments

Abstract

Background

Health practitioners from different professions, with differing competencies, need to collaborate to provide quality care. Competencies for interprofessional working therefore need to be developed during undergraduate education. This paper reports the protocol for a systematic review of self-report instruments used to assess interprofessional learning in undergraduate health professions education.

Methods

We will search PubMed, Web of Science, CINAHL and ERIC from January 2010 onwards. A combination of search terms for interprofessional learning, health professions, psychometric properties, assessment of learning and assessment tools will be used. Two reviewers will independently screen all titles, abstracts and full texts. Potential conflicts will be resolved through discussion. Quantitative and mixed-methods studies evaluating interprofessional learning in undergraduate health professions education (e.g. medicine, nursing, occupational and physical therapy, pharmacy and psychology) will be included. The methodological quality of each reported instrument, its underpinning theoretical framework and the effects of reported interventions will be assessed. The overall outcome will be the effectiveness of instruments used to assess interprofessional competence. Primary outcomes will be the psychometric properties (e.g. reliability, discriminant and internal validity) of the instruments used. Secondary outcomes will include time from intervention to assessment, how items relate to specific performances/competencies (or general abstract constructs) and how scores are used (e.g. to grade students, to improve courses or for research purposes). Quantitative summaries in tabular format and a narrative synthesis will allow recommendations to be made on the use of self-report instruments in practice.

Discussion

Many studies use self-report questionnaires both as tools for developing meaningful interprofessional education activities and for assessing students’ interprofessional competence. This systematic review will evaluate both the benefits and limitations of reported instruments and help educators and researchers to (i) choose the most appropriate existing self-report instruments for assessing interprofessional competence and (ii) design and conduct interprofessional competency assessments using self-report instruments.

Systematic review registration

Open Science Framework [https://osf.io/vrfjn].

Background

Healthcare is increasingly complex, often involving the delivery of care and treatment to ageing populations with multiple comorbid conditions [1]. Thus, health practitioners from different professions, with differing competencies, need to collaborate to provide quality care. These interprofessional competencies (IPCs) need to be fostered and developed during undergraduate health professions education. Whilst theoretically straightforward, this has proven difficult to achieve within health professions programmes of preparation [2].

Interprofessional learning occurs when students from two or more professions learn about, from and with each other, to enable effective collaboration and improve health outcomes [3]. Interprofessional education (IPE) is implemented in different healthcare contexts, often focusing on, but not limited to, teamwork and communication in medicine and nursing practice [4]. Other reviews of research have identified barriers and facilitators to IPE [4], mechanisms underpinning outcomes of IPE [5], effective teaching methods [6], learner outcome levels [7] and assessment tools applicable to specific national contexts [8]. The psychometric properties of assessment tools measuring IPE have also been evaluated [9, 10]. Attempts have also been made to explain how, why and when IPE is successful [4, 11]. The effects of IPE on learning outcomes across all health professions are inconclusive [12]. There is a need for deeper knowledge of how principles of IPC can be expressed in learning activities and assessment practice [9].

IPC has several aspects, reflecting the complex interactions that can arise between professionals. The Interprofessional Education Collaborative (IPEC) framework describes IPCs using four dimensions: values/ethics for interprofessional practice, roles/responsibilities, interprofessional communication, and teams and teamwork [13]. Because of their multidimensional character and heterogeneous relationships to clinical situations, these competencies are challenging to assess systematically. Arguably, many aspects of IPE are best assessed in real-life clinical situations. However, although direct observation of actual interprofessional behaviour is preferable, it is hindered by limited opportunities for observing students and by a scarcity of trained observers [14]. Consequently, the majority of interprofessional developmental activities use self-report questionnaires to assess IPCs. Many such tools are also used in research studies of IPE, with some systematically derived estimates of their prevalence as high as 70% [15]. The psychometric quality of IPE assessment instruments has been questioned, and there is reason to believe that the interpretation of their outcomes could be improved [16]. Whilst it is hard to conceive that self-report instruments alone would ever provide valid and reliable measures of IPC, if used wisely they can be a valuable part of IPE assessment strategies. Thus, there is a need to identify variations in the characteristics of assessment tools, the ways they are used and their effects on the educational outcomes they are intended to foster. This systematic review aims to contribute to knowledge and assessment practice surrounding interprofessional learning in undergraduate health professions education.

To achieve this aim, we have four objectives:

  1. Determine the quality of self-report instruments used in the assessment of IPE.

  2. Describe the educational strategies utilized.

  3. Describe which aspects of IPC are being assessed.

  4. Explore the theoretical basis for instruments and assessment practice.

Methods and design

This protocol has been reported in accordance with the Preferred Reporting Items for Systematic Reviews and Meta-Analysis Protocols (PRISMA-P) statement [17] (see PRISMA-P checklist, provided as Additional file 1). The planned systematic review will be reported in accordance with the PRISMA statement [18]. This review protocol has been pre-registered within the Open Science Framework [https://osf.io/vrfjn].

Eligibility criteria

Studies will be selected according to the criteria outlined below.

Types of studies

We will include studies with a quantitative (e.g. experimental studies, observational studies, quasi-experimental studies) or mixed methods design.

Population

We will include studies that assess undergraduate students from two or more health professions (e.g. medicine, nursing, occupational and physical therapy, pharmacy, psychology) represented in the educational activity.

Intervention

We will include studies of educational interventions that assess one or more aspects of IPC (values/ethics for interprofessional practice, roles/responsibilities, interprofessional communication, and teams and teamwork). Furthermore, studies must have used a self-report instrument (e.g. a scale, evaluation form or survey) and evaluated the psychometric properties (e.g. validity, reliability) of that instrument.

Outcomes and prioritization

The overall outcome will be the effectiveness of instruments used to assess interprofessional competence. Primary outcomes will be the psychometric properties (e.g. reliability, discriminant and internal validity) of the instruments used. Secondary outcomes will include time from intervention to assessment, how items relate to specific performances/competencies (or general abstract constructs) and how scores are used (e.g. to grade students, to improve courses or for research purposes).
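
To make the primary outcome concrete: internal-consistency reliability is typically reported for self-report scales as Cronbach's alpha (a standard statistic, not one mandated by this protocol). For a scale of k items, where \sigma^2_{Y_i} is the variance of item i and \sigma^2_X is the variance of the total score,

    \alpha = \frac{k}{k - 1} \left( 1 - \frac{\sum_{i=1}^{k} \sigma^2_{Y_i}}{\sigma^2_X} \right)

Values approaching 1 indicate a more internally consistent item set, and 0.7 is often treated as the minimum acceptable value in educational research.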

Exclusion criteria

The exclusion criteria are:

  • Editorial letters, commentaries, review articles and qualitative studies.

  • Studies presenting results from both students and practitioners/faculty.

  • Studies reporting only on course satisfaction.

Information sources and search strategy

A literature search will be conducted to identify relevant studies in the following electronic databases: PubMed, Cumulative Index to Nursing and Allied Health Literature (CINAHL), Web of Science and ERIC. Peer-reviewed articles in the English language, published from January 2010 onwards, will be included in the review. The search strategy combines medical subject headings (MeSH), adapted for each database, with title/abstract terms, joined by the Boolean operators OR and AND. Search strings and synonyms will be adapted for each of the four databases using a combination of the following concepts: interprofessional learning, health professions, psychometric properties, assessment of learning and assessment tools. Our search strategy was developed by the research team in collaboration with an experienced information specialist (see Additional file 2: Table S1). To maximize retrieval, the reference lists of all included studies will be hand-searched to identify relevant articles missed in the electronic search.
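
As a purely hypothetical illustration of how these concept blocks could be combined in PubMed syntax (the actual database-specific strings are given in Additional file 2: Table S1; the MeSH and title/abstract terms shown here are our own illustrative choices):

    ("Interprofessional Education"[Mesh] OR "interprofessional learning"[tiab])
    AND ("Students, Health Occupations"[Mesh] OR undergraduate*[tiab])
    AND ("Psychometrics"[Mesh] OR validity[tiab] OR reliability[tiab])
    AND (questionnaire*[tiab] OR "self-report"[tiab] OR scale*[tiab])

Each parenthesized block gathers synonyms for one concept with OR; the blocks are then intersected with AND, mirroring the structure described above.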

Data selection and screening process

After the database searches are completed, results will be uploaded to Covidence™, an online systematic review platform that facilitates efficient collaborative study screening and selection [19]. Titles and abstracts will be screened against the inclusion and exclusion criteria outlined above, independently by two researchers (RA, SE). Full-text articles will then be examined in detail and further screened for eligibility. In cases of disagreement, a third reviewer (CT) will act as arbiter, and consensus will be reached through discussion. We will list excluded studies and the reasons for their exclusion.

Data extraction

A data extraction form will be developed and populated with data extracted from each study. Analyses will be conducted, and data presented, in accordance with the review questions. Data extraction will be managed using Covidence™. The extraction form will be piloted, modified and refined on a sample of studies, first independently by the two reviewers (RA, SE) and then by consensus.

Data extraction will be conducted independently by two reviewers (RA, SE), guided by relevance to the research questions. In cases of disagreement about data extraction choices, consensus will be reached by involving a third reviewer (CT). Data will be extracted on author, year, country of origin, study design, measurement properties (reliability, validity) and sampling; study participants; intervention activities; underpinning theories; outcomes of interventions; and approaches to data analysis.
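
A minimal sketch of what a single extraction record could look like is given below; the field names are our own illustration of the extraction items listed above, not the final piloted Covidence form.

    # Python sketch of one extraction record.
    # All field names are illustrative assumptions, not the final form.
    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class ExtractionRecord:
        author: str                 # first author
        year: int
        country: str                # country of origin
        study_design: str           # e.g. "quasi-experimental"
        reliability: str            # e.g. "Cronbach's alpha = 0.84"
        validity: str               # e.g. "confirmatory factor analysis"
        sampling: str
        participants: str           # professions and sample sizes
        intervention: str           # the IPE activity assessed
        theories: List[str] = field(default_factory=list)  # underpinning theories
        outcomes: str = ""          # outcomes of the intervention
        analysis: str = ""          # approach to data analysis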

Evaluation of study quality

The quality of identified studies will be assessed using the Medical Education Research Study Quality Instrument (MERSQI) guidelines [20] alongside the Newcastle-Ottawa Scale-Education (NOS-E) [21] and Best Evidence Medical Education (BEME) guidelines [22]. These guidelines, developed to appraise methodological quality in medical education research, will be adapted as necessary to suit our review aims, objectives and the retrieved studies. An adaptation of the Consensus-based Standards for the selection of health Measurement Instruments (COSMIN) checklist [23] will be used to evaluate measurement properties.

Data synthesis

The extracted data will be systematically recorded and analyzed using descriptive statistics and narrative synthesis. General information and instrument details will be summarized in tables. Preliminary searches of the literature suggest that instruments, populations, designs and outcomes are likely to be heterogeneous. Accordingly, we do not anticipate that statistical meta-analysis will be suitable or warranted. Underpinning theoretical frameworks, methodological quality, measurement properties and the impact of study interventions will be described and synthesized narratively. Possible subgroups, e.g. by context (simulation-based, clinical practice or theory-based settings) or by IPEC dimension, will be analyzed using descriptive statistics and narrative synthesis.
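
A minimal sketch of how such a descriptive subgroup summary could be produced is shown below, assuming extraction records are exported to a CSV file with the illustrative column names given in the comments (an assumed workflow, not a prescribed pipeline).

    # Python sketch: descriptive subgroup summary of extracted records.
    # Assumes a hypothetical CSV export with columns:
    # instrument, context, ipec_dimension, cronbach_alpha
    import pandas as pd

    records = pd.read_csv("extraction_records.csv")  # hypothetical export

    # Count instruments and summarize reported reliability per
    # context subgroup and IPEC dimension.
    summary = (
        records.groupby(["context", "ipec_dimension"])
               .agg(n_studies=("instrument", "count"),
                    median_alpha=("cronbach_alpha", "median"))
               .reset_index()
    )
    print(summary.to_string(index=False))

A table of this kind would then anchor the narrative synthesis for each subgroup.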

Discussion

Developing, implementing, improving and sustaining IPC for interprofessional practice is an educational challenge. Most interprofessional activities rely on self-report questionnaires, both for developing meaningful IPE activities and for assessing IPC in students. This systematic review will clarify both the benefits and limitations of commonly used instruments and serve as a guide for choosing the most appropriate existing self-report instrument to assess IPC, based on its psychometric and other properties. Furthermore, the results will inform educational practice on how to design and conduct IPC assessment using self-report instruments. Potential limitations at the study and review level include the variety of interprofessional learning activities, study conditions and outcomes in the included studies, i.e. intervention and study heterogeneity. This heterogeneity may render statistical synthesis inappropriate. Any protocol amendments will be documented in the registered protocol and in the final manuscript of the systematic review.

Availability of data and materials

Not applicable.

Abbreviations

IPC:

Interprofessional competence

IPE:

Interprofessional education

IPEC:

Interprofessional Education Collaborative

PRISMA-P:

Preferred Reporting Items for Systematic Reviews and Meta-Analysis Protocols

CINAHL:

Cumulative Index to Nursing and Allied Health Literature

MERSQI:

Medical Education Research Study Quality Instrument

NOS-E:

Newcastle-Ottawa Scale-Education

BEME:

Best Evidence Medical Education

COSMIN:

Consensus-based Standards for the selection of health Measurement Instruments

References

  1. Rotter T, Kinsman L, James E, Machotta A, Gothe H, Willis J, et al. Clinical pathways: effects on professional practice, patient outcomes, length of stay and hospital costs. Cochrane Database Syst Rev. 2010;(3):CD006632.

  2. Olson R, Bialocerkowski A. Interprofessional education in allied health: a systematic review. Med Educ. 2014;48(3):236–46.

  3. World Health Organization (WHO). Framework for action on interprofessional education and collaborative practice. http://www.who.int/hrh/resources/framework_action/en/. Accessed 24 April 2020.

  4. Visser CLF, Ket JCF, Croiset G, Kusurkar RA. Perceptions of residents, medical and nursing students about Interprofessional education: a systematic review of the quantitative and qualitative literature. BMC Med Educ. 2017;17(1):77.

  5. Hammick M, Freeth D, Koppel I, Reeves S, Barr H. A best evidence systematic review of interprofessional education: BEME Guide no. 9. Med Teach. 2007;29(8):735–51.

  6. Fox L, Onders R, Hermansen-Kobulnicky CJ, Nguyen TN, Myran L, Linn B, et al. Teaching interprofessional teamwork skills to health professional students: a scoping review. J Interprof Care. 2018;32(2):127–35.

  7. Gillan C, Lovrics E, Halpern E, Wiljer D, Harnett N. The evaluation of learner outcomes in interprofessional continuing education: a literature review and an analysis of survey instruments. Med Teach. 2011;33(9):e461–70.

  8. Ehlers JP, Kaap-Frohlich S, Mahler C, Scherer T, Huber M. Analysis of six reviews on the quality of instruments for the evaluation of interprofessional education in German-speaking countries. GMS J Med Educ. 2017;34(3):Doc36.

  9. Oates M, Davidson M. A critical appraisal of instruments to measure outcomes of interprofessional education. Med Educ. 2015;49(4):386–98.

  10. Havyer RD, Nelson DR, Wingo MT, Comfere NI, Halvorsen AJ, McDonald FS, Reed DA. Addressing the Interprofessional Collaboration Competencies of the Association of American Medical Colleges: a systematic review of assessment instruments in undergraduate medical education. Acad Med. 2016;91(6):865–88.

  11. Reeves S, Goldman J, Gilbert J, Tepper J, Silver I, Suter E, et al. A scoping review to improve conceptual clarity of interprofessional interventions. J Interprof Care. 2011;25(3):167–74.

  12. O'Keefe M, Henderson A, Chick R. Defining a set of common interprofessional learning competencies for health profession students. Med Teach. 2017;39(5):463–8.

  13. Interprofessional Education Collaborative (IPEC). Core competencies for interprofessional collaborative practice: 2016 update. Interprofessional Education Collaborative Expert Panel; 2016. https://hsc.unm.edu/ipe/resources/ipec-2016-core-competencies.pdf. Accessed 24 April 2020.

  14. Thistlethwaite J, Dallest K, Moran M, Dunston R, Roberts C, Eley D, et al. Introducing the individual Teamwork Observation and Feedback Tool (iTOFT): development and description of a new interprofessional teamwork measure. J Interprof Care. 2016;30(4):526–8.

  15. Freeth D, Hammick M, Koppel I, Reeves S, Barr H. A critical review of evaluations of interprofessional education. Higher Education Academy Health Sciences and Practice Network; 2002. ISBN 095424401X.

  16. Schmitz C, Brandt BF. The Readiness for Interprofessional Learning Scale: To RIPLS or not to RIPLS? That is only part of the question. J Interprof Care. 2015;29(6):525–6.

  17. Moher D, Shamseer L, Clarke M, Ghersi D, Liberati A, Petticrew M, et al. Preferred reporting items for systematic review and meta-analysis protocols (PRISMA-P) 2015 statement. Syst Rev. 2015;4:1.

  18. Moher D, Liberati A, Tetzlaff J, Altman DG. Preferred reporting items for systematic reviews and meta-analyses: the PRISMA statement. J Clin Epidemiol. 2009;62(10):1006–12.

  19. Veritas Health Innovation. Covidence systematic review software. Melbourne, Australia: Veritas Health Innovation; 2020. Available at www.covidence.org.

  20. Reed DA, Cook DA, Beckman TJ, Levine RB, Kern DE, Wright SM. Association between funding and quality of published medical education research. JAMA. 2007;298(9):1002–9.

  21. Cook DA, Reed DA. Appraising the quality of medical education research methods: the Medical Education Research Study Quality Instrument and the Newcastle-Ottawa Scale-Education. Acad Med. 2015;90(8):1067–76.

  22. Gordon M, Farnan J, Grafton-Clarke C, Ahmed R, Gurbutt D, McLachlan J, et al. Non-technical skills assessments in undergraduate medical education: a focused BEME systematic review: BEME Guide No. 54. Med Teach. 2019;41(7):732–45.

  23. Prinsen CAC, Mokkink LB, Bouter LM, Alonso J, Patrick DL, de Vet HCW, et al. COSMIN guideline for systematic reviews of patient-reported outcome measures. Qual Life Res. 2018;27(5):1147–57.

Acknowledgements

We would like to thank librarian Linda Bejerstrand from the University Library, Örebro University, for her assistance with development of the search strategy.

Funding

To date, there are no external funding sources. Open access funding was provided by Örebro University.

Author information

Contributions

SE and RA conceived the idea for the review. RA wrote the first draft of the manuscript supported by SE with contributions from CT. All authors contributed to the development of the selection criteria, the risk of bias assessment strategy and data extraction criteria. SE and RA developed the search strategy in collaboration with a librarian. CT provided methodological expertise and critical comment on the design. All authors read, provided feedback and approved the final manuscript. SE is the guarantor for the review.

Corresponding author

Correspondence to Renée Allvin.

Ethics declarations

Ethics approval and consent to participate

Not applicable.

Consent for publication

Not applicable.

Competing interests

The authors declare that they have no competing interests.

Additional information

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Supplementary information

Additional file 1:

PRISMA-P checklist.

Additional file 2:

Table S1. Search strategy.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/. The Creative Commons Public Domain Dedication waiver (http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated in a credit line to the data.

About this article

Cite this article

Allvin, R., Thompson, C. & Edelbring, S. Assessment of interprofessional competence in undergraduate health professions education: protocol for a systematic review of self-report instruments. Syst Rev 9, 142 (2020). https://doi.org/10.1186/s13643-020-01394-7
