The characteristics of effective technology-enabled dementia education for health and social care practitioners: protocol for a mixed studies systematic review

Abstract

Background

The global prevalence of people living with dementia is expected to increase exponentially and yet evidence suggests gaps in dementia-specific knowledge amongst practitioners. Evidence-based learning approaches can support educators and learners who are transitioning into new educational paradigms resulting from technological advances. Technology-enabled learning is increasingly being used in health care education and may be a feasible approach to dementia education.

Methods

This protocol aims to describe the methodological and analytical approaches for undertaking a systematic review of the current evidence base on technology-enabled approaches to dementia education for health and social care practitioners. The design and methodology were informed by guidelines from the Preferred Reporting Items for Systematic Review and Meta-Analysis Protocols.

Discussion

The evidence generated from a systematic review of the current evidence is intended to inform the design and implementation of technology-enabled dementia education programmes and to advance the current academic literature at a time of unprecedented demographic and technological transition.

Trial registration

PROSPERO, CRD42018115378.


Background

Evidence-based practices are widely accepted across health care disciplines [1, 2]. Implementing evidence-based teaching practices in health care settings avoids reliance on traditional methods including expert opinions that have not been established in systematic research [1]. Educators who are transitioning into new teaching paradigms to meet the expanded needs and learning styles of students can be supported by evidence-based approaches, including new pedagogy and the use of technology for learning [3]. This protocol describes the planned methodological and analytical approaches for undertaking a systematic review of the current evidence on technology-enabled dementia education (TEDE) for health and social care practitioners (HSCPs). The features of effective dementia education for HSCPs were identified in a recent and comprehensive systematic review [4]. To date, there do not appear to be any published systematic reviews on the characteristics and effectiveness of TEDE.

Dementia is a chronic, progressive syndrome in which there is disturbance of multiple higher cortical functions. Alzheimer’s disease, vascular dementia, dementia with Lewy bodies, and frontotemporal dementia are common subtypes although boundaries are indistinct and mixed forms co-exist [5]. The global prevalence of people living with dementia is 47 million and is predicted to rise to 135 million by 2050 [6]. Within the UK context, 850,000 people (one in 14 adults over the age of 65) are estimated to be living with dementia with the future prevalence predicted to mirror global trends [7]. Concern about the quality of care for people living with dementia has intensified the need for an appropriately educated workforce [8] with evidence suggesting gaps in dementia-specific knowledge amongst practitioners [9]. Furthermore, the undergraduate dementia education agenda is variable and dependent on the curricular priorities of Higher Education Institutions [10]. There is, therefore, a growing need for the wide dissemination of dementia education for those involved in meeting the care needs of people living with dementia [11].

TEDE is a dementia-specific form of technology-enabled learning (TEL). TEL is increasingly being adopted in medical and health care education as an effective approach compared to traditional learning for knowledge and skills acquisition [12]. In this protocol, TEL is defined as ‘educational or learning activities that are mediated by information communication technology (ICT), or web-based applications, where learners or teachers engage with the technology for flexible learning, either exclusively, or in combination with face-to-face approaches’. In the absence of a traditional social presence, TEL facilitates interactive learning and is supported by Web 2.0 technology [13]. Web 2.0 characterises the transformation from the static ‘read-only’ capabilities of the original Web 1.0 into a dynamic ‘read-and-write’ participatory media. This has generated a new paradigm for teaching and learners’ participatory activity by offering interconnectivity, content creation and remixing, and interactivity that accommodates learners’ creative practices [14]. Web 2.0 tools include blogs, wikis, and social networking platforms, and they share a capacity for social cohesion and the social construction of new knowledge [15].

TEL in health care settings can diminish traditional, logistical barriers to learning and offers individualised, tailored, point-of-care learning to meet the multiple needs of professional learners from various practice disciplines [16]. Barriers include time lost to device functionality, inaptitude with a particular device, and a lack of social contact compared with face-to-face learning. Low-level digital literacy can also restrict learning with technology [17]. Digital literacy, in the health care context, refers to the ability to learn, work, and develop effectively in a digital workplace and society [18]. Indeterminate expressions of digital literacy can generate ambiguity and misconceptions for educators who are involved in the design of TEL [19]. Digital competence is often assumed despite there being varying levels of aptitude amongst learners, and disparities are compounded by an increasing diversity of new technologies [20]. An important distinction lies between a learner's technological skill in social and entertainment activity and their intellectual proficiency in using technology effectively for learning [21, 22].

The Technology Acceptance Model can help to explain technology use and acceptance [23]. It is based on the 'theory of reasoned action' [24] and, as a predictive model, posits that subjective perceived usefulness (PU) and perceived ease of use (PEU) mediate the relationship between external variables and the behavioural intention, or likelihood, of using technology [25]. Teaching with technology is dependent on the integration of traditional approaches and the dynamic interactions between educational content, pedagogy, and the technology. These constructs, and their interactions, are conceptualised in the holistic and context-specific framework of technological, pedagogical, and content knowledge (TPACK) [26]. TPACK establishes an ecological perspective on teaching with technology so that the technology is not regarded as merely supplementary. Barriers to teaching with technology include extrinsic factors such as equipment, time, training, and support. The intrinsic factors are less tangible and include an educator's pedagogical and technical beliefs [27].

TEL is optimised in its capacity to deliver participatory and activity-centred learning when it incorporates design features and strategies that promote purposeful virtual dialogue between learners, and with teachers [28]. The design processes and educational theories that underpin TEL are important determinants when evaluating the effectiveness of interventions [29]. Kirkpatrick's Four-Level Model is a widely cited framework for evaluating educational and training interventions [30]. The four levels consider learners' reactions to the training; learning gains in knowledge, skills, and attitudes; practice-based behaviour changes following training; and the wider results occurring because of the training. The model has been criticised for being simplistic and for its assumptions of hierarchy, causality, and inter-correlation between levels [31]. Despite this, the simplicity of the method is a strength [32]; however, it is considered sub-optimal for TEL evaluations [33].

A review of the effectiveness of TEDE requires the capacity to consider substantial functional and technological heterogeneity in the content and activities that support theory-based dementia education. One way to conceptualise this complexity is by employing logic models to unpack intervention characteristics for transparency across the intervention variables and multiple outcomes [34] (Fig. 1). In this way, the contribution of the educators' TPACK and the organisational digital capacity can be seen to support TEDE in conveying the appropriate content, pedagogy, and technical characteristics, ideally underpinned by educational theories and pedagogical frameworks. The functionality, PU, and PEU of the technology may be evaluated independently but also influence user satisfaction, affecting the knowledge, skills, and attitudes derived from the learners' overall experience. Learner gains can be viewed as determinants of improved practitioner, patient, and broader organisational outcomes (Fig. 2).

Fig. 1

System-based logic model. Adapted from Rohwer et al. (2017)

Fig. 2

Process-based logic model. Adapted from Rohwer et al. (2017)

The aim of the proposed review is to establish the characteristics and effectiveness of TEDE for HSCPs by critically appraising and synthesising the available evidence. The research questions are as follows: What are the pedagogical and technological characteristics of TEDE for HSCPs? Is TEDE an effective approach to dementia education for HSCPs?

Methods

The design and methodology of this review protocol were informed using the Preferred Reporting Items for Systematic Review and Meta-Analysis Protocols (PRISMA-P) guidelines and checklist [35] (see Additional file 1).

Population

We will incorporate papers that report on data sourced from research participants, including all qualified and unqualified HSCPs in either practice or education, as well as educators and instructors.

Intervention

We will consider all papers that report on interventions of technology-enabled approaches to dementia education. The interventions will include TEDE approaches including, but not limited to, online learning, e-learning, web-based learning, distance learning, blended learning, and mobile learning. Where studies report a hybrid approach, the effectiveness of the combined approach will be evaluated. All single interventions, modules, and online courses will be included. The learning approach will be considered whether it is passive (reading, watching videos) or active (interactive, multimedia), as will any intentional opportunities for reflection arising from engagement with TEDE resources. Interventions from studies published after 2005 will be included to capture studies that reflect the technological opportunities available since Web 2.0 [36, 37].

Comparator

Comparators will include usual practice, traditional learning methods, alternative pedagogical approaches, or differing communication and collaboration tools (e.g. synchronous versus asynchronous approaches, or social media versus traditional e-learning). Studies that do not involve a comparator will also be included.

Outcomes

We will report on the educational content, pedagogical approach, and technological specifications.

Primary outcomes:

  • The effectiveness of TEDE

    • Reaction/satisfaction

    • Knowledge, skills, attitude

    • Behaviours

    • Results

Secondary outcomes:

  • Experiences, reactions, or satisfaction of educators with TEDE

  • The usability or reusability of TEDE

    • Functionality

    • Technical support

    • PU

    • PEU

  • The cost-effectiveness of TEDE

  • Barriers and facilitators to TEDE

  • Attrition rates in TEDE

  • Educational theories or pedagogical frameworks that inform TEDE

Outcomes that describe the effectiveness of TEDE are likely to be derived from randomised controlled trials (RCTs), non-randomised studies, or surveys that use numerical data. The validity and reliability of the evaluation instruments used within quantitative studies will be considered. Outcomes such as stakeholders’ perspectives are likely to be derived from qualitative research designs that use narrative data.

Study design

We will include quantitative, qualitative, and mixed method studies that report on the effectiveness or user perceptions of TEDE.

Inclusion criteria

  • Studies of adult learners aged > 16 years

  • Studies published after 2005

  • Quantitative, qualitative, or mixed method studies evaluating TEDE involving HSCPs or health and social care students or educators

  • Studies of TEDE in workplace or higher educational settings (including studies where interventions originate from these settings but are completed externally, e.g. in distance learning)

  • Studies that do or do not include a description of educational theories or pedagogical frameworks that inform the interventions

Exclusion criteria

  • Systematic literature reviews and studies or articles with indeterminate or insubstantial research design (including concept papers, discussion papers, theoretical papers, proposals, protocols, editorials, letters, or comments)

  • Studies not in the English language

  • Pilot or feasibility studies that report on measures of suitability for implementation of a TEDE intervention and do not evaluate a TEDE process

  • Books/book chapters

  • Studies of educational interventions for informal carers of people living with dementia

  • Studies of interventions only indirectly related to dementia education (e.g. interventions for delirium education)

  • Studies that combine interventions for formal and informal carers unless formal carer (HSCPs) outcomes are explicitly reported

  • Studies of massive open online courses unless professional engagement is specifically evaluated (i.e. participants from higher education or HSCPs)

  • Decision support interventions

  • Studies involving DVD or video unless delivered in an online format

  • Studies of telephonic educational interventions

Search strategy

Literature searches will be carried out in MEDLINE (OVID interface), CINAHL Complete (EBSCO interface), ERIC (EBSCO interface), PsycINFO (EBSCO interface), PubMed, Web of Science Core Collection, OVID Nursing Database, and SCOPUS. An initial search was conducted in November 2018; however, the search will be updated before the preparation of the final report so as to identify any new studies published since the initial search. The International Prospective Register of Systematic Reviews (PROSPERO), Cochrane Database of Systematic Reviews, Campbell Collaboration Online Library, and EThOS database of doctoral theses will be searched to ensure that comparable works do not exist and are not in progress.

A combination of subject headings and keywords will be included in the search strategy. Whilst all attempts will be made to apply the keywords uniformly throughout all databases, subject headings will be dependent on database-specific thesauri and subject-term mapping. Other variances between databases will result from truncation rules and database-specific preferences. Subject headings will be included when they are available and closely match the keywords chosen to describe the concepts of dementia, education, or TEL. Keywords were developed a priori by considering relevant synonyms and concepts in consultation with the review team. To optimise the relevance of results, the 'explode' function will be used on subject headings only if all narrower terms are considered relevant or are included as keywords. Any narrower terms that match keywords will be included as independent subject headings.

A combination of subject terms and keywords will be used for dementia, based on common subtypes of dementia [5]. Multi-infarct dementia is included as the most common form of vascular dementia [38]. The keyword 'education' will be consistently applied with an unexploded 'education' subject heading, where the subject heading is available; this is considered an optimal approach to reducing irrelevant results arising from the diverse array of subheadings that exist within educational subject headings. Subject headings and keywords for TEL will reflect a technological approach to education, learning, or training. This will include absolute and partial approaches: for instance, online learning is an absolute approach, whereas blended learning may involve partial use of technology. Terms for specific technological devices or computer applications will not be included, as these are diverse, numerous, and continually evolving. Instead, terms for ICT that relate to learning or education will be included.

Population characteristics will be identified during title and abstract screening to enable comprehensive identification of various HSCPs, including students and relevant stakeholders. This strategy will reduce omissions through unnecessary database filtering. The full search strategy is presented in Additional file 2.
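The Boolean logic described above, with OR joining synonyms within each concept group and AND joining the three groups, can be sketched in a few lines of Python. The keyword lists below are illustrative placeholders only, not the review's actual strategy, which is presented in Additional file 2.

```python
def build_search(concept_groups):
    """Join keywords with OR inside each concept group and AND across groups."""
    clauses = []
    for keywords in concept_groups:
        clauses.append("(" + " OR ".join(keywords) + ")")
    return " AND ".join(clauses)

# Hypothetical keyword groups for the three concepts: dementia, education, TEL
dementia = ['dementia', '"alzheimer*"', '"lewy bod*"', '"multi-infarct"']
education = ['education', 'training', 'learning']
tel = ['"e-learning"', '"online learning"', '"blended learning"', '"web-based"']

query = build_search([dementia, education, tel])
print(query)
```

Truncation symbols and quoting conventions vary by database interface, which is why the protocol notes that keywords cannot be applied entirely uniformly.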

Reference management

The titles and abstracts of identified studies will be downloaded from bibliographic databases into RefWorks, and duplicate studies will be removed. One reviewer will then screen the titles and abstracts of the studies against the eligibility criteria. Two other reviewers will screen 10% of the total titles and abstracts, each screening 5%. If there is any disagreement on suitability for inclusion, a third reviewer will provide arbitration. The next stage will involve closer scrutiny of the full texts considered eligible for inclusion. One reviewer will review the full texts, and studies not meeting the eligibility criteria will be removed. A Microsoft Excel spreadsheet will be created for supplementary reference management and will include study eligibility status, rationale for inclusions/exclusions, and information on locating studies. The database will be made available on a file sharing platform, e.g. Microsoft SharePoint, only when all reviewers have completed the title and abstract screening. One member of the review team will screen the reference lists of all eligible studies for additional studies that satisfy the inclusion and exclusion criteria. The search process will be charted in a Preferred Reporting Items for Systematic Review and Meta-Analysis (PRISMA) flowchart, and the PRISMA checklist will be applied for optimal reporting in the broader review [39].
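The 10% dual-screening allocation described above, 5% each to two second reviewers, amounts to a simple random allocation. A minimal sketch, in which the record IDs and the fixed seed are illustrative assumptions:

```python
import random

def allocate_dual_screening(record_ids, fraction=0.10, seed=42):
    """Randomly select a fraction of records for second screening and
    split the sample equally between two additional reviewers
    (5% each, as in the protocol)."""
    rng = random.Random(seed)  # fixed seed so the allocation is reproducible
    n_check = round(len(record_ids) * fraction)
    sample = rng.sample(record_ids, n_check)
    half = n_check // 2
    return {"reviewer_B": sample[:half], "reviewer_C": sample[half:]}

ids = list(range(1, 501))  # e.g. 500 deduplicated records
allocation = allocate_dual_screening(ids)
print(len(allocation["reviewer_B"]) + len(allocation["reviewer_C"]))  # 50
```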

Data extraction

The data extraction stage will involve transcribing the relevant information reported in primary studies onto a standard form developed in a format specific to the review question [40]. Separate data extraction forms will be developed to capture quantitative or qualitative data. Quantitative forms will include study characteristics (citation, author details, study design, aims, country, ethics, participant characteristics, and participant demographics); methods (results of quality assessment, sampling approach, data collection, and data analysis methods); intervention characteristics (educational content, technical characteristics, pedagogical specifications, and comparator or control group characteristics); and outcome data (learner satisfaction, knowledge, skills, attitudes, behaviours, results, educator experience, functionality, technical support, usability, cost-effectiveness, and attrition data). Quantitative data will include sample sizes and values of statistical significance and/or confidence intervals.

Qualitative data extraction forms will include bibliographic information (citation, author details, country, ethics, participant characteristics, and demographics); methods (theoretical and epistemological perspectives, qualitative method, data analysis technique, sampling approach, and results of quality assessment); aims (including the research question); and intervention characteristics (educational content, technical characteristics, and pedagogical specifications). Qualitative data will also be included that relates to learner satisfaction, knowledge, skills, attitudes, behaviours, results, educator/instructor experiences, expressions of usability/reusability, and commentary on educational theories or pedagogical frameworks.

Two authors will pilot-test the data extraction tools on one randomly selected quantitative study and one qualitative study. Subsequently, one reviewer will extract the remaining data. The data extraction tool will be developed digitally, stored on a file sharing system, e.g. Microsoft SharePoint, and subject to 'spot checking' by the review team.
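As a minimal sketch, the field groups of the quantitative extraction form might be represented as a structured record; the field names below abbreviate the groups listed in the protocol, and the values shown are placeholders:

```python
from dataclasses import dataclass, field, asdict

@dataclass
class QuantitativeExtraction:
    """One row of the quantitative data extraction form; the field
    groups mirror those described in the protocol text."""
    # Study characteristics
    citation: str = ""
    study_design: str = ""
    country: str = ""
    # Methods
    quality_assessment: str = ""
    sampling_approach: str = ""
    # Intervention characteristics
    educational_content: str = ""
    technical_characteristics: str = ""
    pedagogical_specifications: str = ""
    comparator: str = ""
    # Outcome data, e.g. {"knowledge": "p < 0.05"}
    sample_size: int = 0
    outcomes: dict = field(default_factory=dict)

record = QuantitativeExtraction(citation="Author (2019)",
                                study_design="RCT", sample_size=120)
print(asdict(record)["study_design"])  # RCT
```

A structured record of this kind also makes the 'spot checking' step straightforward, since each field can be compared directly against the source paper.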

Dealing with missing data

Where data is missing, or discrepancies exist within the data, and the data is considered relevant, one reviewer will attempt to request it from study authors using a maximum of two emails. Where data appears ambiguous, or not obviously relevant for inclusion, this data will be flagged for discussion between two reviewers. Disagreement will be arbitrated by a third reviewer. Where data is not included and unavailable, a discussion on the impact of missing data will be provided.

Quality assessment

The Mixed Methods Appraisal Tool (MMAT) is designed for the appraisal of qualitative studies, RCTs, non-randomised quantitative studies, quantitative descriptive studies, and mixed method research studies. Due to the anticipated heterogeneity of study designs that will be included in the review, the MMAT was considered relevant. Further, because it was specifically designed for mixed studies appraisal, it encourages an inclusive approach to quality assessment. It has been content validated and pilot-tested for reliability [41]. The MMAT criteria are scored on a nominal scale (yes, no, can't tell), which allows for detailed presentation of studies of higher, lower, or indeterminate methodological quality. The MMAT comprises a checklist and detailed explanations of the methodological quality criteria specific to each study design [42]. One reviewer will assess the methodological quality of eligible studies using the current MMAT (2018 version), record the nominal values of quality assessment, and describe the justification for decisions. The quality appraisal documents will be maintained digitally and uploaded onto a file sharing system, e.g. Microsoft SharePoint, for accessibility to the review team, where they will be subject to spot checks.
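Because the MMAT criteria are scored on a nominal scale, one study's appraisal can be summarised as a simple tally. The ratings below are hypothetical:

```python
from collections import Counter

def summarise_mmat(ratings):
    """Tally MMAT criterion ratings given on the tool's nominal scale
    ('yes', 'no', "can't tell")."""
    allowed = {"yes", "no", "can't tell"}
    if not set(ratings) <= allowed:
        raise ValueError("ratings must be 'yes', 'no', or \"can't tell\"")
    return Counter(ratings)

# Five illustrative criterion ratings for one hypothetical study
counts = summarise_mmat(["yes", "yes", "can't tell", "yes", "no"])
print(counts["yes"])  # 3
```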

Data synthesis

Data synthesis involving quantitative and qualitative research requires that researchers consider ways to handle the methodological diversity within and between qualitative and quantitative studies. A segregated approach to data synthesis recognises the binary distinction between quantitative and qualitative research. This approach will allow quantitative data to be synthesised independently of qualitative data, but with capacity to complement, confirm, or refute evidence from the divergent paradigm [43]. This approach will be beneficial when considering quantifiable learner gains alongside qualitative expressions of learner satisfaction. A proposed method for combining quantitative and qualitative findings is illustrated in Fig. 3 [44].

Fig. 3

Proposed method for including quantitative and qualitative data. Adapted from Harden (2010)

Due to anticipated interventional and methodological heterogeneity between studies, it is unlikely that the metrics arising from diverse sources of quantitative data will be amenable to meta-analysis. Findings will therefore be reported in narrative synthesis by tabulating the outcome data. This will include effect sizes and reports on statistically and non-statistically significant results. The sample size and direction of effect will also be included so that precision and weighting can be considered when quantifying effectiveness at varying significance levels. Additionally, quality assessment ratings will be incorporated with the synthesis so that outcomes can be considered in relation to methodological quality and not merely on any evidence of effect.
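The tabulation described above, recording sample size, direction of effect, and significance for each study, might be sketched as follows; the study records are invented for illustration:

```python
def tabulate_outcomes(studies):
    """Build one narrative-synthesis table row per study, recording
    sample size, direction of effect, and significance at the
    conventional 0.05 level."""
    rows = []
    for s in studies:
        verdict = "significant" if s["p"] < 0.05 else "not significant"
        rows.append((s["id"], s["n"], s["direction"], verdict))
    return rows

# Two hypothetical studies with invented results
studies = [
    {"id": "Smith 2017", "n": 80, "direction": "favours TEDE", "p": 0.03},
    {"id": "Jones 2018", "n": 45, "direction": "no difference", "p": 0.40},
]
for row in tabulate_outcomes(studies):
    print(row)
```

Keeping sample size and direction alongside the significance verdict allows precision and weighting to be considered in the narrative, rather than counting 'significant' results alone.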

A thematic synthesis of qualitative data will be conducted should the data be amenable. Textual data, describing the views of participants, and key findings by the researcher will be identified, on an inductive basis, resulting in descriptive themes that will be further developed into analytical themes [45]. These analytical themes will then be interpreted to produce rich and deep understanding of the phenomena with capacity to complement, confirm, or refute any evidence from the quantitative data. A comprehensive and transparent account describing the conduct of the thematic synthesis will be provided.

A textual interpretation of all findings will be presented to form an overall picture of the current knowledge. To uphold internal validity and transparency in the methods of narrative synthesis, guidance will be integrated from the UK’s Economic and Social Research Council (ESRC) [46].

Analysis of subgroups

Subgroup analysis can help to explore the impact of potential effect modifiers on the effects of an intervention and may help to establish which TEDE interventions work best, and for whom [47]. Therefore, interventions with shared learner characteristics will be reported in subgroups so as to discern the effectiveness of TEDE within specific learner groups. Substantial interventional heterogeneity is anticipated; therefore, shared technological or pedagogical characteristics will be explored in subgroups should the data be amenable. The newest and emerging TEDE approaches will also be investigated in subgroups, with particular attention to their technological and pedagogical specifications, thereby considering effectiveness in the context of the most recent technological advances. The methods for implementing subgroups in the review will be informed by guidance from the ESRC [46].

Overall quality of the evidence

The outcomes from quantitative data will be defined in high-, moderate-, low-, and/or very low-quality categories using the GRADE approach [48, 49]. As the quantitative evidence will be presented in narrative synthesis, the certainty of the evidence will be presented in the absence of a single (combined) estimate of effect [50]. The GRADE-CERQual approach will inform the confidence of the findings from qualitative evidence [51, 52].

Discussion

This protocol aims to demonstrate the planned methodological and analytical approaches that will inform a systematic review of TEDE. The design is strengthened by adhering to PRISMA-P guidance for increased clarity, transparency, and future reproducibility [35]. Pre-registering the protocol with PROSPERO reduces the risk of reporting bias in a completed review [53]. The research questions have been clearly articulated and broken down into searchable keywords [54]. Therefore, a comprehensive search strategy can be applied to several scientific, health-related, biomedical, and educational databases. Whilst it may be inevitable that not all relevant studies can be sourced for a review, the study selection process is optimised by searching the reference lists of all eligible studies for additional studies also meeting the eligibility criteria.

Every attempt has been made to provide comprehensive inclusion and exclusion criteria, although this cannot guarantee against elements of subjectivity and human error in the study selection process [55]. Single-reviewer title and abstract screening is a limitation of the protocol design but is necessary given the available resources; it is mitigated by second reviewers screening a total of 10% of potentially relevant studies. A similar limitation results from single-reviewer quality assessment; the use of a reliability-tested and validated tool (MMAT) will therefore add rigour to this process and help to reduce bias in the appraisal of individual studies. A single reviewer will also complete data extraction; however, the data extraction tools will be piloted by two reviewers. A process of spot checking has been implemented for data extraction and quality assessment in order to further mitigate against error.

The participants include a wide range of HSCPs from various occupational and educational levels, which are considered to be important influences on learning outcomes. The effectiveness of TEDE will therefore be established in a heterogeneous population, and it is necessary that this diversity is acknowledged; specific learner characteristics are therefore intended to be investigated, in relation to their outcomes, in a subgroup analysis. This will be particularly relevant when reporting on the overall effectiveness of TEDE. Heterogeneity of interventional characteristics is expected in a review of TEDE, considering the various technological and pedagogical approaches. A narrative synthesis is therefore more appropriate than statistical methods of analysis, and the avoidance of bias is the main factor in a rigorous synthesis of data. Advance specification of the intended methods, including guidance from the ESRC, will promote rigour and transparency [46].

This protocol is intended to inform the development of a completed review which will aim to support educators, practitioners, and other stakeholders in the design and implementation of TEDE programmes for HSCPs. It is also intended to advance the current academic literature and potentiate further research into TEDE at a time of unprecedented demographic and technological transition.

Availability of data and materials

Not applicable.

Abbreviations

ESRC:

Economic and Social Research Council

HSCP:

Health and social care practitioner

ICT:

Information communication technology

MMAT:

Mixed Methods Appraisal Tool

PEU:

Perceived ease of use

PRISMA:

Preferred Reporting Items for Systematic Review and Meta-Analysis

PRISMA-P:

Preferred Reporting Items for Systematic Review and Meta-Analysis Protocols

PROSPERO:

International Prospective Register of Systematic Reviews

PU:

Perceived usefulness

RCT:

Randomised controlled trial

TEDE:

Technology-enabled dementia education

TEL:

Technology-enabled learning

TPACK:

Technological pedagogical and content knowledge

References

  1. Ferguson L, Day RA. Evidence-based nursing education: myth or reality? J Nurs Educ. 2005;44(3):107–15.

    PubMed  Google Scholar 

  2. Harden RM, Grant J, Buckley G, Hart IR. BEME Guide No. 1: Best Evidence Medical Education. Medical Teacher. 1999;21(6):553–62.

    Article  CAS  PubMed  Google Scholar 

  3. Lauver LS, West MM, Campbell TB, Herrold J, Wood GC. Toward evidence-based teaching: evaluating the effectiveness of two teaching strategies in an associate degree nursing program. Teach Learning Nurs. 2009;4:133–8.

    Article  Google Scholar 

  4. Surr C, Gates C, Irving D, Oyebode J, Smith S, Parveen S, et al. Effective Dementia Education and Training for the Health and Social Care Workforce: A Systematic Review of the Literature. Review of Educational Research. 2017;87(5):966–1002.

    Article  PubMed  PubMed Central  Google Scholar 

  5. World Health Organization. Dementia A Public Health Priority. 2012. https://www.who.int/mental_health/publications/dementia_report_2012/en/. Accessed 10 Oct 2018.

  6. World Health Organization. The epidemiology and impact of dementia current state and future trends. 2015. https://www.who.int/mental_health/neurology/dementia/dementia_thematicbrief_epidemiology.pdf. Accessed 28 Oct 2018.

  7. Alzheimer's Research UK. Dementia Statistics Hub. Number of People in the UK. 2018. https://www.dementiastatistics.org/statistics/numbers-of-people-in-the-uk/. Accessed 22 Oct 2018.

  8. Surr C, Baillie L, Qaugh A, Brown M. Position Paper: The importance of including dementia in pre and post-qualifying curricula for health and social care professionals. 2017. https://www.dementiauk.org/wp-content/uploads/2017/11/HEDN-Position-Paper-for-Professional-Bodies-Colleges-Nov-2017.pdf. Accessed: 10 Oct 2018.

  9. Cowdell F. The care of older people with dementia in acute hospitals. Int J Older People Nurs. 2010;5(2):83–92.

  10. Hvalič-Touzery S, Skela-Savič B, Macrae R, Jack-Waugh A, Tolson D, Hellström A, et al. The provision of accredited higher education on dementia in six European countries: an exploratory study. Nurse Educ Today. 2018;60:161–9.

  11. King C, Kedler J, Phillips R, McInerney F, Doherty K, Walls J, et al. Something for everyone: MOOC design for informing dementia education and research. Proceedings of the European Conference on E-Learning. 2013:191–8.

  12. George PP, Papachristou N, Belisario JM, Wang W, Wark PA, Cotic Z, et al. Online eLearning for undergraduates in health professions: A systematic review of the impact on knowledge, skills, attitudes and satisfaction. J Global Health. 2014;4(1):010406.

  13. Gupta S, Seth A. Web 2.0 Tools in Higher Education. Trends Inform Manag. 2014;10(1):1–11.

  14. Greenhow C, Robelia B, Hughes JE. Learning, teaching, and scholarship in a digital age: Web 2.0 and classroom research: What path should we take now? Educ Res. 2009;38(4):246–59.

  15. Duffy P. Engaging the YouTube Google-Eyed Generation: Strategies for Using Web 2.0 in Teaching and Learning. Electronic J E-Learning. 2008;6(2):119–29.

  16. Reeves S, Fletcher S, McLoughlin C, Yim A, Patel KD. Interprofessional online learning for primary healthcare: findings from a scoping review. BMJ Open. 2017;7(8):e016872.

  17. Rouleau G, Gagnon M, Cote J, Payne-Gagnon J, Hudson E, Bouix-Picasso J, et al. Effects of e-learning in a continuing education context on nursing care: a review of systematic qualitative, quantitative and mixed studies reviews (protocol). BMJ Open. 2017;7(10):e018441.

  18. NHS Health Education England, Royal College of Nursing. Improving Digital Literacy. 2017. https://www.rcn.org.uk/professional-development/publications/pub-006129. Accessed 22 Jan 2019.

  19. Eshet-Alkalai Y. Digital literacy: A conceptual framework for survival skills in the digital era. J Educ Multimedia Hypermedia. 2004;13(1):93–106.

  20. Prior DD, Mazanov J, Meacheam D, Heaslip G, Hanson J. Attitude, digital literacy and self efficacy: Flow-on effects for online learning behavior. Internet Higher Educ. 2016;29:91–7.

  21. Tang CM, Chaw LY. Digital Literacy: A Prerequisite for Effective Learning in a Blended Learning Environment? Electronic J E-Learning. 2016;14(1):54–65.

  22. Kirkwood A. Teaching and learning with technology in higher education: blended and distance education needs ‘joined-up thinking’ rather than technological determinism. Open Learning. 2014;29(3):206–21.

  23. Davis F, Bagozzi R, Warshaw P. User Acceptance of Computer Technology: A Comparison of Two Theoretical Models. Manag Sci. 1989;35(8):982–1003.

  24. Holden RJ, Karsh B. The Technology Acceptance Model: its past and its future in health care. J Biomed Inform. 2010;43(1):159–72.

  25. Abbad M. A Conceptual Model of eLearning Adoption. International Journal of Emerging Technologies in Learning (iJET). 2011;6(S2).

  26. Koehler MJ, Mishra P, Cain W. What is technological pedagogical content knowledge (TPACK)? J Educ. 2013;193(3):13–9.

  27. Ertmer PA. Addressing First- and Second-Order Barriers to Change: Strategies for Technology Integration. Educ Technol Res Dev. 1999;47(4):47–61.

  28. Care WD. The Transactional Approach to Distance Education. Adult Learning. 1996;7(6):11–2.

  29. Sandars J, Patel RS, Goh PS, Kokatailo PK, Lafferty N. The importance of educational theories for facilitating learning when using technology in medical education. Med Teach. 2015;37(11):1039–42.

  30. Kirkpatrick D. Great Ideas Revisited. Techniques for Evaluating Training Programs. Revisiting Kirkpatrick's Four-Level Model. Training Dev. 1996;50(1):54–9.

  31. Reio TG, Rocco TS, Smith DH, Elegance C. A Critique of Kirkpatrick's Evaluation Model. New Horizons Adult Educ Human Res Dev. 2017;29(2):35–53.

  32. Bisgaard C, Rubak S, Rodt S, Petersen J, Musaeus P. The effects of graduate competency-based education and mastery learning on patient care and return on investment: a narrative review of basic anesthetic procedures. BMC Med Educ. 2018;18(1):154.

  33. Nicoll P, MacRury S, van Woerden H, Smyth K, Eysenbach G. Evaluation of Technology-Enhanced Learning Programs for Health Care Professionals: Systematic Review. J Med Internet Res. 2018;20(4):e131.

  34. Rohwer A, Pfadenhauer L, Burns J, Brereton L, Gerhardus A, Booth A, et al. Series: Clinical Epidemiology in South Africa. Paper 3: logic models help make sense of complexity in systematic reviews and health technology assessments. J Clin Epidemiol. 2017;83:37–47.

  35. Shamseer L, Moher D, Clarke M, Ghersi D, Liberati A, Petticrew M, et al.; PRISMA-P Group. Preferred reporting items for systematic review and meta-analysis protocols (PRISMA-P) 2015: elaboration and explanation. BMJ. 2015;350:g7647.

  36. Maness J. Library 2.0 Theory: Web 2.0 and its Implication for Libraries. Webology. 2006;3(2).

  37. Seo D, Lee J. Web 2.0 and five years since: how the combination of technological and organizational initiatives influences an organization’s long-term Web 2.0 performance. Telematics Inform. 2016;33(1):232–46.

  38. McKay E, Counts SE. Multi-infarct dementia: a historical perspective. Dement Geriatr Cogn Dis Extra. 2017;7(1):160–71.

  39. Moher D, Liberati A, Tetzlaff J, Altman D. The PRISMA Group. Preferred Reporting Items for Systematic Reviews and Meta-Analyses: The PRISMA Statement. PLoS Med. 2009;6(7):e1000097.

  40. Buscemi N, Hartling L, Vandermeer B, Tjosvold L, Klassen TP. Single data extraction generated more errors than double data extraction in systematic reviews. J Clin Epidemiol. 2006;59(7):697–703.

  41. Pace R, Pluye P, Bartlett G, Macaulay AC, Salsberg J, Jagosh J, et al. Testing the reliability and efficiency of the pilot Mixed Methods Appraisal Tool (MMAT) for systematic mixed studies review. Int J Nurs Stud. 2012;49(1):47–53.

  42. Hong Q, Pluye P, Fàbregues S, Bartlett G, Boardman F, Cargo M, et al. Mixed Methods Appraisal Tool (MMAT), version 2018. 2018. http://mixedmethodsappraisaltoolpublic.pbworks.com/w/file/fetch/127916259/MMAT_2018_criteria-manual_2018-08-01_ENG.pdf. Accessed 13 Oct 2018.

  43. Sandelowski M, Voils C, Barroso J. Defining and Designing Mixed Research Synthesis Studies. Res Sch. 2006;13(1):29.

  44. Harden A. Mixed-methods systematic reviews: integrating quantitative and qualitative findings. FOCUS. A Publication of the National Center for the Dissemination of Disability Research (NCDDR). 2010;25:1–8.

  45. Thomas J, Harden A. Methods for the thematic synthesis of qualitative research in systematic reviews. BMC Med Res Methodol. 2008;8(1). https://doi.org/10.1186/1471-2288-8-45.

  46. Popay J, Roberts H, Sowden A, Petticrew M, Arai L, Rodgers M, et al. Guidance on the conduct of narrative synthesis in systematic reviews. A product from the ESRC methods programme. 2006;1:1–92.

  47. Ryan R; Cochrane Consumers and Communication Review Group. Data synthesis and analysis. 2013. http://cccrg.cochrane.org/. Accessed 21 Nov 2018.

  48. GRADE Working Group. 2018. http://www.gradeworkinggroup.org/. Accessed 10 Oct 2018.

  49. GRADE Working Group. Grading quality of evidence and strength of recommendations. BMJ. 2004;328:1490.

  50. Murad M, Mustafa R, Schünemann H, Sultan S, Santesso N. Rating the certainty in evidence in the absence of a single estimate of effect. Evid Based Med. 2017;22(3):85–7.

  51. Lewin S, Booth A, Glenton C, Munthe-Kaas H, Rashidian A, Wainwright M, et al. Applying GRADE-CERQual to qualitative evidence synthesis findings: introduction to the series. Implement Sci. 2018;13(S1):2.

  52. GRADE CERQual. Confidence in the evidence from reviews of qualitative research. 2018. https://www.cerqual.org/. Accessed 14 Oct 2018.

  53. Drucker AM, Fleming P, Chan A. Research techniques made simple: assessing risk of bias in systematic reviews. J Invest Dermatol. 2016;136(11):e109–14.

  54. Davies K. Formulating the evidence based practice question: a review of the frameworks. Evid Based Libr Inf Pract. 2011;6(2):75–80.

  55. McDonagh M, Peterson K, Raina P, Chang S, Shekelle P. Avoiding Bias in Selecting Studies. 2013 Feb 20. In: Methods Guide for Effectiveness and Comparative Effectiveness Reviews [Internet]. Rockville: Agency for Healthcare Research and Quality (US); 2013. https://www.ncbi.nlm.nih.gov/books/NBK126701/. Accessed 18 Jan 2019.

Funding

This systematic review protocol and subsequent systematic review form part of a PhD programme at the Department of Nursing at the University of the Highlands and Islands. The wider PhD programme is entitled ‘Technology Enabled Dementia Education and Support for Health Care Professionals in Rural Scotland’ and is directly funded by the European Social Fund and Scottish Funding Council as part of Developing Scotland’s Workforce in the Scotland 2014–2020 European Structural and Investment Fund Programme.

Author information

Contributions

KM is a PhD student at the University of the Highlands and Islands. He drafted the manuscript and developed the search strategy in collaboration with all protocol authors. All authors have read and approved the final manuscript.

Corresponding author

Correspondence to Kevin Muirhead.

Ethics declarations

Ethics approval and consent to participate

Not applicable.

Consent for publication

Not applicable.

Competing interests

The authors declare that they have no competing interests.

Additional information

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Supplementary information

Additional file 1.

PRISMA-P (Preferred Reporting Items for Systematic review and Meta-Analysis Protocols) 2015 checklist: recommended items to address in a systematic review protocol.

Additional file 2.

Search Strategy.

Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The Creative Commons Public Domain Dedication waiver (http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated.

About this article

Cite this article

Muirhead, K., Macaden, L., Clarke, C. et al. The characteristics of effective technology-enabled dementia education for health and social care practitioners: protocol for a mixed studies systematic review. Syst Rev 8, 316 (2019). https://doi.org/10.1186/s13643-019-1212-4
