Assessment of interprofessional competence in undergraduate health professions education: protocol for a systematic review of self-report instruments
Systematic Reviews volume 9, Article number: 142 (2020)
Health practitioners from different professions, and with differing competencies, need to collaborate to provide quality care. Competencies in interprofessional working need to be developed during undergraduate educational preparation. This paper reports the protocol for a systematic review of self-report instruments used to assess interprofessional learning in undergraduate health professions education.
We will search PubMed, Web of Science, CINAHL and ERIC from January 2010 onwards. A combination of search terms for interprofessional learning, health professions, psychometric properties, assessment of learning and assessment tools will be used. Two reviewers will independently screen all titles, abstracts and full texts. Potential conflicts will be resolved through discussion. Quantitative and mixed-methods studies evaluating interprofessional learning in undergraduate health professions education (e.g. medicine, nursing, occupational and physical therapy, pharmacy and psychology) will be included. The methodological quality of each reported instrument, its underpinning theoretical framework, and the effects of reported interventions will be assessed. The overall outcome will be the effectiveness of instruments used to assess interprofessional competence. Primary outcomes will be the psychometric properties (e.g. reliability, discriminant and internal validity) of the instruments used. Secondary outcomes will include time from intervention to assessment, how items relate to specific performance/competencies (or general abstract constructs) and how scores are used (e.g. to grade students, to improve courses or for research purposes). Quantitative summaries in tabular format and a narrative synthesis will allow recommendations to be made on the use of self-report instruments in practice.
Many studies use self-report questionnaires as tools for developing meaningful interprofessional education activities and assessing students’ interprofessional competence. This systematic review will evaluate both the benefits and limitations of reported instruments, helping educators and researchers (i) choose the most appropriate existing self-report instruments to assess interprofessional competence and (ii) design and conduct interprofessional competency assessment using self-report instruments.
Systematic review registration
Open Science Framework [https://osf.io/vrfjn].
Background
Healthcare is increasingly complex, often involving the delivery of care and treatment to ageing populations with multiple comorbid conditions. Thus, health practitioners from different professions, with differing competencies, need to collaborate to provide quality care. These interprofessional competencies (IPCs) need to be developed in undergraduate health professional education. Whilst theoretically straightforward, this has proven difficult within health educational programs of preparation.
Interprofessional learning occurs when students from two or more professions learn about, from and with each other, to enable effective collaboration and improve health outcomes. Interprofessional education (IPE) is implemented in different healthcare contexts, often focusing on, but not limited to, teamwork and communication in medicine and nursing practice. Other reviews of research have identified barriers and facilitators to IPE, mechanisms underpinning outcomes of IPE, effective teaching methods, learner outcome levels and assessment tools applicable to specific national contexts. The psychometric properties of assessment tools measuring IPE have also been evaluated [9, 10]. Attempts have also been made to explain how, why and when IPE is successful [4, 11]. The effects of IPE on learning outcomes across all health professions are inconclusive. There is a need for deeper knowledge of how principles of IPC can be expressed in learning activities and assessment practice.
IPC has several aspects, reflecting the complex interactions between professionals that can be involved. The Interprofessional Education Collaborative (IPEC) framework describes IPCs using four dimensions: values/ethics for interprofessional practice, roles/responsibilities, interprofessional communication, and teams and teamwork. Because of their multidimensional characteristics and heterogeneous relationships to clinical situations, these competencies present a challenge to systematic assessment. Arguably, many aspects of IPE are best assessed in real-life clinical situations. However, while direct observation of actual interprofessional behaviour is preferable, observation is hindered by limited opportunities for observing students and a scarcity of trained observers. Consequently, the majority of interprofessional developmental activities use self-report questionnaires to assess IPCs. Many such tools are also being used in research studies of IPE, with some systematically derived estimates as high as 70%. The psychometric quality of IPE assessment instruments has been questioned, and there are reasons to believe that there is room for improvement in how outcomes should be interpreted. Whilst it is hard to conceive that self-report instruments alone would ever provide valid and reliable measures of IPC, if used wisely, they can be a valuable part of IPE assessment strategies. Thus, there is a need to identify variations in the characteristics of assessment tools, the ways they are used and their effects on the educational outcomes they are intended to foster. This systematic review aims to contribute to knowledge and assessment practices surrounding interprofessional learning in undergraduate health professional education.
To achieve this aim, we have four objectives:
Determine the quality of self-report instruments used in assessment of IPE.
Describe the educational strategies utilized.
Describe which aspects of IPC are being assessed.
Explore the theoretical basis for instruments and assessment practice.
Methods and design
This protocol has been reported in accordance with the Preferred Reporting Items for Systematic Reviews and Meta-Analysis Protocols (PRISMA-P) statement (see PRISMA-P checklist, provided as Additional file 1). The planned systematic review will be reported in accordance with the PRISMA statement. This review protocol has been pre-registered with the Open Science Framework [https://osf.io/vrfjn].
Studies will be selected according to the criteria outlined below.
Types of studies
We will include studies with a quantitative (e.g. experimental studies, observational studies, quasi-experimental studies) or mixed methods design.
Types of participants
We will include studies that assess undergraduate students from two or more health professions (e.g. medicine, nursing, occupational and physical therapy, pharmacy, psychology) represented in the educational activity.
Types of interventions
We will include studies with educational interventions assessing one or more aspects of IPC (values/ethics for interprofessional practice, roles/responsibilities, interprofessional communication, and teams and teamwork). Furthermore, studies must have used a self-report instrument (e.g. scale, evaluation form, survey) and evaluated the psychometric properties (e.g. validity, reliability) of that instrument.
Outcomes and prioritization
The overall outcome will be the effectiveness of instruments used to assess interprofessional competence. Primary outcomes will be the psychometric properties (e.g. reliability, discriminant and internal validity) of instruments used. Secondary outcomes will include time from intervention to assessment, how items relate to specific performance/competencies (or general abstract constructs) and how scores are used (e.g. to grade students, to improve courses or research purposes).
The exclusion criteria are:
Editorial letters, commentaries, review articles and qualitative studies.
Studies presenting results from both students and practitioners/faculty.
Studies reporting only on course satisfaction.
Information sources and search strategy
A literature search will be conducted to identify relevant studies in the following electronic databases: PubMed, Cumulative Index of Nursing and Allied Health Literature (CINAHL), Web of Science and ERIC. Peer-reviewed articles in the English language, published from January 2010 onwards, will be included in the review. The search strategy combines Medical Subject Headings (MeSH), adapted for each database, with title/abstract terms using the Boolean operators OR and AND. Search strings and synonyms will be adapted for each of the four databases using a combination of the following aspects: interprofessional learning, health professions, psychometric properties, assessment of learning and assessment tools. Our search strategy was developed by the research team in collaboration with an experienced information specialist (see Additional file 2: Table S1). To maximize retrieval, the reference lists of all included studies will be hand-searched to identify relevant articles missed in the electronic search.
Data selection and screening process
After the database searches are completed, results will be uploaded to Covidence™, an online systematic review program that facilitates efficient collaborative study screening and selection. Titles and abstracts will be screened against the inclusion and exclusion criteria (see above) independently by two researchers (RA, SE). Full-text articles will then be examined in detail and further screened for eligibility. In case of disagreement, a third reviewer (CT) will act as arbiter and consensus will be reached through discussion. We will list excluded studies and the reasons for their exclusion.
Data extraction
A data extraction form will be developed and populated with data extracted from each study. Analyses will be conducted, and data presented, in accordance with the review questions. Data extraction will be managed using Covidence™. The extraction form will be piloted, modified and refined based on a sample of studies, first independently by two reviewers (RA, SE) then via consensus.
Data extraction will be conducted independently by two reviewers (RA, SE), guided by relevance to the research questions. In case of disagreement about data extraction choices, consensus will be reached by involving a third reviewer (CT). Data will be extracted on author, year, country of origin, study design, measurement properties (reliability, validity) and sampling; study participants; intervention activities; underpinning theories; outcomes of interventions; and approaches to data analysis.
Evaluation of study quality
The quality of identified studies will be assessed using the Medical Education Research Study Quality Instrument (MERSQI) guidelines alongside the Newcastle-Ottawa Scale-Education (NOS-E) and Best Evidence in Medical Education (BEME) guidelines. These guidelines, developed to appraise methodological quality in medical education research, will be adapted as necessary to suit our review aims, objectives and retrieved studies. An adaptation of the Consensus-based Standards for the selection of health Measurement Instruments (COSMIN) checklist will be used to evaluate measurement properties.
Data synthesis
The extracted data will be systematically recorded and analyzed using descriptive statistics and narrative synthesis. General information and instrument details will be summarized in tables. Preliminary searches of the literature suggest that instruments, populations, designs and outcomes are likely to be heterogeneous. Accordingly, we do not anticipate that statistical meta-analysis will be suitable or warranted. Underpinning theoretical frameworks, methodological quality, measurement properties and the impact of study interventions will be described and synthesized narratively. An analysis of possible subgroups, e.g. by context (simulation practice, clinical practice, theory-based) or by type of IPEC dimension, will be performed using descriptive statistics and narrative synthesis.
Discussion
Developing, implementing, improving and sustaining IPC for interprofessional practice is an educational challenge. Most interprofessional activities rely on self-report questionnaires, both for developing meaningful IPE activities and for assessing IPC in students. This systematic review will clarify both the benefits and limitations of commonly used instruments and serve as a guide for choosing the most appropriate existing self-report instrument to assess IPC, based on their psychometric and other properties. Furthermore, the results will inform educational practice on how to design and conduct IPC assessment using self-report instruments. Potential limitations at the study and review level include the variety of interprofessional learning activities, study conditions and outcomes in the included studies, i.e. intervention and study heterogeneity, which may preclude statistical synthesis. Any protocol amendments will be documented and reported in the final manuscript of the systematic review.
Availability of data and materials
Not applicable.
Abbreviations
IPEC: Interprofessional Education Collaborative
PRISMA-P: Preferred Reporting Items for Systematic Reviews and Meta-Analysis Protocols
CINAHL: Cumulative Index of Nursing and Allied Health Literature
MERSQI: Medical Education Research Study Quality Instrument
BEME: Best Evidence in Medical Education
COSMIN: Consensus-based Standards for the selection of health Measurement Instruments
References
Rotter T, Kinsman L, James E, Machotta A, Gothe H, Willis J, et al. Clinical pathways: effects on professional practice, patient outcomes, length of stay and hospital costs. Cochrane Database Syst Rev. 2010;3:CD006632.
Olson R, Bialocerkowski A. Interprofessional education in allied health: a systematic review. Med Educ. 2014;48(3):236–46.
World Health Organization (WHO). Framework for action on interprofessional education and collaborative practice. http://www.who.int/hrh/resources/framework_action/en/. Accessed 24 April 2020.
Visser CLF, Ket JCF, Croiset G, Kusurkar RA. Perceptions of residents, medical and nursing students about Interprofessional education: a systematic review of the quantitative and qualitative literature. BMC Med Educ. 2017;17(1):77.
Hammick M, Freeth D, Koppel I, Reeves S, Barr H. A best evidence systematic review of interprofessional education: BEME Guide no. 9. Med Teach. 2007;29(8):735–51.
Fox L, Onders R, Hermansen-Kobulnicky CJ, Nguyen TN, Myran L, Linn B, et al. Teaching interprofessional teamwork skills to health professional students: a scoping review. J Interprof Care. 2018;32(2):127–35.
Gillan C, Lovrics E, Halpern E, Wiljer D, Harnett N. The evaluation of learner outcomes in interprofessional continuing education: a literature review and an analysis of survey instruments. Med Teach. 2011;33(9):e461–70.
Ehlers JP, Kaap-Fröhlich S, Mahler C, Scherer T, Huber M. Analysis of six reviews on the quality of instruments for the evaluation of interprofessional education in German-speaking countries. GMS J Med Educ. 2017;34(3):Doc36.
Oates M, Davidson M. A critical appraisal of instruments to measure outcomes of interprofessional education. Med Educ. 2015;49(4):386–98.
Havyer RD, Nelson DR, Wingo MT, Comfere NI, Halvorsen AJ, McDonald FS, Reed DA. Addressing the Interprofessional Collaboration Competencies of the Association of American Medical Colleges: a systematic review of assessment instruments in undergraduate medical education. Acad Med. 2016;91(6):865–88.
Reeves S, Goldman J, Gilbert J, Tepper J, Silver I, Suter E, et al. A scoping review to improve conceptual clarity of interprofessional interventions. J Interprof Care. 2011;25(3):167–74.
O'Keefe M, Henderson A, Chick R. Defining a set of common interprofessional learning competencies for health profession students. Med Teach. 2017;39(5):463–8.
Interprofessional Education Collaborative (IPEC). Core competencies for interprofessional collaborative practice: 2016 update. Interprofessional Education Collaborative Expert Panel; 2016. https://hsc.unm.edu/ipe/resources/ipec-2016-core-competencies.pdf. Accessed 24 April 2020.
Thistlethwaite J, Dallest K, Moran M, Dunston R, Roberts C, Eley D, et al. Introducing the individual Teamwork Observation and Feedback Tool (iTOFT): development and description of a new interprofessional teamwork measure. J Interprof Care. 2016;30(4):526–8.
Freeth D, Hammick M, Koppel I, Reeves S, Barr H. A critical review of evaluations of interprofessional education. Higher Education Academy Health Sciences and Practice Network; 2002. ISBN 095424401X.
Schmitz C, Brandt BF. The Readiness for Interprofessional Learning Scale: To RIPLS or not to RIPLS? That is only part of the question. J Interprof Care. 2015;29(6):525–6.
Moher D, Shamseer L, Clarke M, Ghersi D, Liberati A, Petticrew M, et al. Preferred reporting items for systematic review and meta-analysis protocols (PRISMA-P) 2015 statement. Syst Rev. 2015;4:1.
Moher D, Liberati A, Tetzlaff J, Altman DG. Preferred reporting items for systematic reviews and meta-analyses: the PRISMA statement. J Clin Epidemiol. 2009;62(10):1006–12.
Veritas Health Innovation. Covidence systematic review software. Melbourne, Australia; 2020. Available at www.covidence.org.
Reed DA, Cook DA, Beckman TJ, Levine RB, Kern DE, Wright SM. Association between funding and quality of published medical education research. JAMA. 2007;298(9):1002–9.
Cook DA, Reed DA. Appraising the quality of medical education research methods: the Medical Education Research Study Quality Instrument and the Newcastle-Ottawa Scale-Education. Acad Med. 2015;90(8):1067–76.
Gordon M, Farnan J, Grafton-Clarke C, Ahmed R, Gurbutt D, McLachlan J, et al. Non-technical skills assessments in undergraduate medical education: a focused BEME systematic review: BEME Guide No. 54. Med Teach. 2019;41(7):732–45.
Prinsen CAC, Mokkink LB, Bouter LM, Alonso J, Patrick DL, de Vet HCW, et al. COSMIN guideline for systematic reviews of patient-reported outcome measures. Qual Life Res. 2018;27(5):1147–57.
Acknowledgements
We would like to thank librarian Linda Bejerstrand from the University Library, Örebro University, for her assistance with the development of the search strategy.
Funding
To date, there are no external funding sources. Open access funding provided by Örebro University.
Ethics approval and consent to participate
Not applicable.
Consent for publication
Not applicable.
Competing interests
The authors declare that they have no competing interests.
Publisher’s Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Cite this article
Allvin, R., Thompson, C. & Edelbring, S. Assessment of interprofessional competence in undergraduate health professions education: protocol for a systematic review of self-report instruments. Syst Rev 9, 142 (2020). https://doi.org/10.1186/s13643-020-01394-7
Keywords
- Interprofessional learning
- Systematic review protocol
- Undergraduate students