
Overview of systematic reviews of the effectiveness of reminders in improving healthcare professional behavior



Background

The purpose of this project was to conduct an overview of existing systematic reviews to evaluate the effectiveness of reminders in changing professional behavior in clinical settings.

Materials and methods

Relevant systematic reviews of reminder interventions were identified through searches in MEDLINE, EMBASE, DARE and the Cochrane Library in conjunction with a larger project examining professional behavioral change interventions. Reviews were appraised using AMSTAR, a validated tool for assessing the quality of systematic reviews. As most reviews reported only vote counts, conclusions about effectiveness for each review were based on a count of positive studies; where available, we also report effect sizes. Conclusions were based on the findings from higher quality and more current systematic reviews.


Results

Thirty-five reviews were eligible for inclusion in this overview. Ten reviews examined the effectiveness of reminders generally, five reviews focused on specific health care settings, 14 reviews concentrated on specific behaviors and six reviews addressed specific patient populations. The quality of the reviews was variable (median = 3, range = 1 to 8). Seven reviews had AMSTAR scores >5 and were considered in detail. Five of these seven reviews demonstrated positive effects of reminders in changing provider behavior. Few reviews used quantitative pooling methods; in one high quality and current review, the overall observed effects were moderate, with an absolute median improvement in performance of 4.2% (IQR: 0.5% to 6.6%).


Discussion

The results indicate that modest improvements can occur with the use of reminders. The effect size is consistent with that of other interventions used to improve professional behavior.


Conclusions

Reminders appear effective in improving different clinical behaviors across a range of settings.



Background

Reminders are a common approach to prompting clinicians to perform critical tasks, such as monitoring chronic conditions. Reminders have taken many forms since their inception, evolving from simple paper reminders posted on medical charts to complex computerized reminders. While earlier versions were often labor-intensive to administer, the increased use of electronic medical records (EMRs) in clinical settings has made computerized reminders less expensive and more feasible to implement.

There has been a parallel increase in the number of studies examining the effectiveness of reminders to improve clinical care delivered in different settings. The first study that examined the use of reminders was published in 1976 by MacDonald and demonstrated improvements in quality of care[1]. The first systematic review of reminders was published in 1987 by Haynes and included 135 studies[2]. Since that time, a multitude of primary studies and systematic reviews using different methods and approaches to examine the effectiveness of reminders for different disorders in diverse clinical settings have been published. Therefore, this overview attempts to summarize the literature and provide useful information to guide health care providers and administrators to more effectively use reminders in different clinical settings[3].

Overviews are a new approach to summarizing evidence, synthesizing the results of multiple systematic reviews in a single, useful document[4]. This is particularly important in areas with overlapping reviews. Overviews identify high-quality, reliable systematic reviews and explore the consistency of findings across reviews.

There have been two previous overviews on changing professional behavior in health care settings[5, 6]; however, neither of them specifically explored the use of reminders. Therefore, this overview will examine the effectiveness of reminders in improving professional behavior in clinical settings using data from existing systematic reviews.

Materials and methods

This overview was carried out in conjunction with the Rx for Change database. The database consists of quality-appraised and summarized systematic reviews on the effects of professional and other interventions on changing professional behavior, and is regularly updated using sensitive searches of MEDLINE, EMBASE, DARE and the Cochrane Library[7]. For this overview, two individuals screened the titles and abstracts of systematic reviews in the Rx for Change database to identify relevant articles published before September 2009.

Ethics approval was not required for this overview. No formal protocol was drawn up for this review in advance.

According to the Cochrane Effective Practice and Organisation of Care (EPOC) group, reminders are defined as ‘patient or encounter specific information, provided verbally, on paper or on a computer screen, which is designed or intended to prompt a health professional to recall information.’ The population of interest was health professionals working in clinical settings. Included reviews had to compare the effectiveness of reminders with other interventions or a control. Only reviews that reported outcomes for professional performance (for example, prescribing, test ordering, patient education and so on) were included; reviews that examined only the knowledge of the professional as an outcome were excluded. Reviews primarily focused on reminders, or reviews in which studies assessing reminders could be clearly distinguished from studies of other interventions, were included.

Quality assessment

All eligible reviews were assessed independently by two individuals using the AMSTAR quality assessment tool (A Measurement Tool to Assess Systematic Reviews). AMSTAR is an 11-item tool to assess the methodological quality of systematic reviews that has been internally and externally validated and has been found to have good reliability[8, 9].
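As a concrete illustration of how the 11-item tool yields the scores used in this overview, a hypothetical tally might look like the sketch below; the item labels are abbreviated paraphrases, not the official AMSTAR wording, and the function is ours, not part of the tool:

```python
# Hypothetical illustration of how an AMSTAR score is tallied: the tool has
# 11 yes/no items, and the score is the count of items satisfied. The item
# labels below are abbreviated paraphrases, not the official AMSTAR wording.
AMSTAR_ITEMS = [
    "a priori design / protocol",
    "duplicate study selection and data extraction",
    "comprehensive literature search",
    "grey literature / publication status searched",
    "included and excluded studies listed",
    "characteristics of included studies provided",
    "scientific quality of studies assessed",
    "quality used appropriately in conclusions",
    "appropriate methods used to combine findings",
    "publication bias assessed",
    "conflicts of interest stated",
]

def amstar_score(answers: dict) -> int:
    """Count the AMSTAR items answered 'yes' (score range 0 to 11)."""
    return sum(1 for item in AMSTAR_ITEMS if answers.get(item) == "yes")

# A review meeting six items would score 6, above this overview's >5 cut-off.
example = {item: "yes" for item in AMSTAR_ITEMS[:6]}
print(amstar_score(example))
```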

Data analysis

We conducted dual, independent data extraction on populations, interventions, comparisons and outcomes using a standardized form. Disagreements were resolved by consensus or consultation with a third individual. Only the latest version of updated reviews was included. Systematic reviews that were published in more than one source were treated as duplicate reviews with data extracted from the most comprehensive paper.

Within a review, studies were included in the analysis if they addressed reminder interventions, either as one component of a broader intervention (such as an EMR) or as an intervention in their own right. Given the limited data presented in many reviews, we used a vote-counting method to assess the effectiveness of the interventions[10]. We re-analyzed the results of each review by counting the proportion of positive studies reported, regardless of statistical significance. We chose to focus on the direction of effect rather than statistical significance because many of the included studies were cluster randomized trials with unit of analysis errors, which do not reliably estimate the statistical significance of an intervention[11]. Unit of analysis errors are very common in cluster trials of professional behavior change interventions; Grimshaw et al. observed that approximately 50% of cluster randomized controlled trials (RCTs) on guideline dissemination and implementation strategies (including reminders) had unit of analysis errors[12]. Although it is theoretically possible to adjust for unit of analysis errors, cluster trials are rarely reported in sufficient detail to permit this. In addition, a number of these studies are small and may not be adequately powered for statistical significance.

Effectiveness in this overview was categorized as 1) generally effective (more than two-thirds of studies in a review demonstrated positive effects), 2) mixed effects (one-third to two-thirds of studies demonstrated positive effects) and 3) generally ineffective (fewer than one-third of studies demonstrated positive effects).
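These cut-offs can be expressed as a small function. This is an illustrative sketch of the classification rule stated above, not analysis code from the overview:

```python
# Illustrative sketch of the three-category effectiveness rule; the
# function name and interface are ours, not from the overview itself.
def classify_review(positive_studies: int, total_studies: int) -> str:
    """Categorize a review by its proportion of positive reminder studies."""
    if total_studies <= 0:
        raise ValueError("review must contain at least one reminder study")
    proportion = positive_studies / total_studies
    if proportion > 2 / 3:
        return "generally effective"
    if proportion >= 1 / 3:
        return "mixed effects"
    return "generally ineffective"

print(classify_review(8, 10))  # generally effective
print(classify_review(5, 10))  # mixed effects
print(classify_review(2, 10))  # generally ineffective
```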

Summaries of the included reviews are reported in Additional file 1, along with the proportion of included studies that assessed reminder interventions in each systematic review. We also report the overall findings of each review as provided by the review authors, as well as any quantitative analyses undertaken by the authors of the original reviews (see Additional file 1). The results of RCTs and of studies comparing multifaceted interventions with reminder interventions alone are also presented.

Although all reviews are summarized and reported, we focused our conclusions on reviews of higher quality (AMSTAR >5) that were more current (published 2003 or later).

Finally, we categorized the reviews into four groups for analysis based on their focus: 1) broad reviews (for example, on all types of reminders), 2) reviews of specific settings (for example, primary care), 3) reviews of specific behaviors (for example, prescribing), and 4) reviews of specific patient populations (for example, geriatric population).


Results

There were 313 reviews included in the Rx for Change database that examined professional behavior change interventions (Figure 1), including 41 reviews of reminder interventions. We excluded six reviews because they had been updated by a subsequent review[2, 13–17]. In total, 35 reviews published between 1993 and 2009 were eligible for inclusion in this overview. The majority of reviews were conducted from the late 1990s onward, and most broad reviews were published from the late 1990s to the mid-2000s, with fewer published in the last four years (Figure 2).

Figure 1

Flow diagram of selected reviews. Reviews included in the Rx for Change database that examined professional behavior change interventions.

Figure 2

Publication year of included reviews.

Ten of the reviews looked at reminders generally, five examined reminders in specific settings, 14 looked at reminders for specific behaviors, and six focused on reminders for specific patient populations.

The quality of the reviews was variable; the median AMSTAR score was 3 (range, 1 to 8) (Figure 3 and Additional file 1). Several AMSTAR items were rarely satisfied in the included reviews: 1) working from a protocol (reported by only two reviews), 2) disclosing conflicts of interest for individual studies (no reviews), 3) assessing publication bias (two reviews), 4) searching grey literature (three reviews), and 5) listing included and excluded studies (three reviews). Further details on AMSTAR items are provided in Table 1.

Figure 3

AMSTAR scores of included reviews.

Table 1 AMSTAR items

Data were extracted and analyzed from all 35 included reviews. However, only seven of the reviews had AMSTAR scores greater than 5. We focused our conclusions in the text below on these seven key reviews[17–23], but provide summaries of all included reviews in Additional file 1. We did not find any substantial discrepancies between the findings of the seven key reviews and those of the other identified reviews within each category.

There was considerable overlap in the studies included in the systematic reviews. In total, 655 studies were included across the reviews, of which 459 appeared in more than one review.

Results from broad reviews

Of the 10 reviews that broadly examined the effectiveness of reminders, covering any health professionals in any clinical setting, half demonstrated that reminders were generally effective and half showed mixed results (Additional file 1)[23–32].

Shojania et al. published the only high quality review (AMSTAR ≥8) in this category and demonstrated that reminders were effective[23]. The review included an analysis of 32 comparisons of on-screen computer reminders on process adherence, and was the only review that reported a quantitative summary for all included reminder studies based upon a description of the distribution (interquartile range and median) of the observed effects. The median effect size was an absolute risk difference of 4.2% (IQR: 0.5% to 6.6%) in the process of care measures. Shojania et al. also examined the impact of other effect modifiers on the effectiveness of computerized reminders and found that systems which required clinicians to provide a response were more likely to demonstrate a positive effect[23].
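For readers unfamiliar with this way of summarizing results, a distribution-based summary of study-level effects (median and interquartile range, as reported by Shojania et al.) can be reproduced in a few lines; the effect values below are invented for illustration and are not data from the review:

```python
import statistics

# Hypothetical study-level effects: absolute improvements (%) in process
# adherence, one value per comparison. These numbers are invented for
# illustration and are not data from the review.
effects = [0.2, 0.5, 1.0, 2.5, 4.2, 5.0, 6.6, 9.0, 12.0]

median_effect = statistics.median(effects)
q1, _, q3 = statistics.quantiles(effects, n=4)  # lower/upper quartiles

print(f"median = {median_effect}%, IQR = {q1}% to {round(q3, 2)}%")
```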

Reviews of specific settings

A total of five reviews[33–37] examined studies that focused on specific health care settings, such as primary care or emergency rooms. Four of the five reviews showed positive results (three of five: generally effective; one of five: mixed results) (Additional file 1). All of the settings evaluated were outpatient or ambulatory settings. These reviews were of lower quality: none had an AMSTAR score >5.

Reviews of specific behaviors

Fourteen of the included reviews focused on specific behaviors, with the majority of studies examining prescribing changes[3, 17, 19, 21, 22, 38–46]. All of the reviews showed positive results (10/14 reviews: generally effective; 4/14 reviews: mixed results). Four of the reviews had AMSTAR scores >5, and all of these showed that reminders had a positive effect on professional behavior. Durieux et al. examined studies on the effect of computer-assisted drug dosing in a meta-analysis: the standardized mean difference (SMD) of initial doses favored the intervention and was statistically significant (five comparisons; SMD 1.12; 95% confidence interval (CI), 0.33 to 1.92). There was a small, non-significant pooled difference favoring the intervention in both maintenance dose changes (eight comparisons; SMD 0.19; 95% CI, -0.10 to 0.48) and total amount of drug used (four comparisons; SMD 0.43; 95% CI, -0.29 to 1.16)[19].

Ammenwerth and colleagues also found that reminders had positive effects on prescribing behaviors[17]. A subgroup analysis indicated that locally developed systems (n = 12) achieved a median 63% reduction in medication errors (range, 13% to 99% reduction), compared with commercial systems (n = 11), which achieved a median 47% reduction (range, 26% increase to 96% reduction). Kaushal et al. did not restrict by profession and found that reminders were effective in improving prescribing[21]. Randell et al. evaluated the effect of reminders on nursing practice; the majority of studies eligible for that review demonstrated that the intervention was effective[22].

Reviews of specific patient populations

The six remaining reviews focused on specific patient populations[18, 21, 47–50]. Five of the six reviews demonstrated that reminders were effective (four of six: generally effective; one of six: mixed results). Only two reviews scored >5 on AMSTAR. Kastner et al. examined the effectiveness of reminders in improving the management of osteoporosis and found mixed results[20]. Bywood and colleagues examined the use of reminders to improve the management of drug and alcohol use disorders and found that reminders were generally ineffective in changing professional behavior[18].


Discussion

In our overview of systematic reviews examining the effectiveness of reminders in improving professional behavior, we identified 35 systematic reviews with AMSTAR scores ranging from 1 to 8 (out of a total score of 11), with a median of 3. The results of the reviews indicate positive effects when reminders are incorporated into a variety of clinical settings for different types of diseases. Furthermore, the results indicate that modest improvements can occur with the use of reminders, with one review estimating an overall median effect size of 4.2%. This effect size is consistent with that of other interventions used to improve professional behavior[12].

There are several strengths of this overview. First, it employed a comprehensive search strategy, developed and implemented by an information specialist as part of a larger project to examine interventions to change professional behavior. Second, duplicate screening, data extraction and quality assessments were conducted. Third, a validated instrument (AMSTAR) was used to assess the methodological quality of included reviews[8, 9].

There are also several limitations to this overview. First, we did not retrieve data from the primary studies; therefore, we were limited by the information reported by the review authors on aspects such as the description of the interventions and outcomes. However, by focusing on the results of the systematic reviews rather than each individual primary study, we were able to obtain a broad sense of the field. We also focused our conclusions on reviews with AMSTAR scores >5 to address concerns with the quality of the systematic reviews. Second, this overview could not examine differences in effectiveness that may exist between locally developed and commercially available reminder systems due to the limited data. Only three of the included reviews evaluated the effectiveness of locally developed versus commercially available reminder systems[17, 23, 27]. The subgroup analysis conducted by Ammenwerth and colleagues demonstrated a higher relative risk reduction for locally developed systems, which they suggested was likely because they are developed to meet local needs, and sites often receive additional resources and support when implementing these systems[17]. Garg and colleagues also found that authors who created the decision support system were more likely to report improved performance[27], but this was not supported by Shojania et al.[23].

There may be other factors that impact the effectiveness of reminders, such as the functionality of the decision support system. Functionality (how well a system can model the clinician decision making process) was examined by two of the included reviews[20, 24]. Both found that systems that actively engaged clinicians (either prompted them to use the system or required a response) improved performance to a greater degree compared with systems that required clinicians to initiate use.

Third, the 35 included systematic reviews are not independent given the significant overlap of the included studies. In total, 655 studies from the 35 reviews were included for analyses in this overview. One hundred and ninety-five studies were analyzed only once (found in only one included review) and 459 were “double-counted.” This overlap occurred most often with larger reviews that included more than 20 primary studies. In fact, 35% (160) of these studies were “double-counted” in the three largest systematic reviews in this overview[27, 28, 43]. Therefore, we would argue that these reviews should be considered as (partial) replications of systematic reviews of reminders undertaken by different authors with different inclusion criteria and methods. The convergence of findings across the reviews, therefore, is not surprising but reassuring that the findings are not due to the specific inclusion criteria or methods adopted by a group of authors.

Finally, there remains a lack of information on the long-term effect of reminders. None of the reviews restricted inclusion of studies based on length of follow-up, but the majority of studies were of relatively short durations. Whether the effectiveness of reminders diminishes over time has not been established to date.

An important methodological challenge in conducting overviews is how to interpret and summarize an area of research that includes reviews of variable quality. Another challenge is that inevitably, individual studies will be included in more than one systematic review, which leads to “double-counting.” To overcome both of these challenges, in this overview, we included all eligible reviews regardless of quality but we focused our conclusions on those of higher quality (AMSTAR >5). Therefore, we were able to provide a comprehensive picture of the state of the research literature on the effectiveness of reminders.

With 35 reviews included in this overview, and more still being published, the effectiveness of reminder interventions is clearly a topic of frequent publication[51–54]. However, the literature is disorganized, and reviews are often published in overlapping topic areas, which suggests unnecessary duplication of effort by review authors[55]. Furthermore, the publication of reviews that focus on specific populations, settings or diseases (that is, split reviews) rather than broadly based reviews that include all professionals in all settings (that is, lumped reviews) adds to this duplication, since the former are effectively subgroup analyses of the latter. We judge it unlikely that further systematic reviews of reminders would change the conclusions of this overview, which calls into question the need for substantial new reviews in this area. Instead, we would argue that the field would be best served by updating a limited number of high quality, broad reviews. Future reviews should focus on possible effect modifiers and moderators to explain the variation observed across primary studies of reminders. Given the poor quality of existing reviews, authors of future reviews must use more robust methods to conduct and report their reviews. This should lead to fewer but higher quality reviews, resulting in a more organized body of literature that is more interpretable by end-users.


Conclusions

Reminders address acts of omission that are bound to occur because of the sheer volume of information facing health care clinicians[1]. The results of this overview suggest that reminder systems are effective in changing health care professional behavior and improving processes of care. They may be more likely to succeed if they are designed to meet the specific needs of the clinical setting they serve. Systems that proactively prompted clinicians and/or required a response were also more likely to be effective in changing professional behavior. Recent studies have also demonstrated the potential of checklists, a form of reminder, to dramatically improve patient morbidity and mortality[56]. Our findings suggest that the effects of reminders are positive and that they may meaningfully impact clinical practice, since they are relatively inexpensive and easy to administer in many settings, particularly as EMRs become more common.



Abbreviations

AMSTAR: A Measurement Tool to Assess Systematic Reviews

EPOC: Cochrane Effective Practice and Organisation of Care group

EMR: electronic medical records

RCT: randomized controlled trials

SMD: standardized mean difference


  1. 1.

    MacDonald CJ: Protocol-based computer reminders, the quality of care and the non-perfectibility of man. N Engl J Med. 1976, 24: 1351-1355.

  2. 2.

    Haynes RB, Walker CJ: Computer-aided quality assurance. A critical appraisal. Arch Intern Med. 1987, 7: 1297-1301.

  3. 3.

    Weir CR, Staggers N, Phansalkar S: The state of the evidence for computerized provider order entry: a systematic review and analysis of the quality of the literature. Int J Med Inform. 2009, 78: 365-374. 10.1016/j.ijmedinf.2008.12.001.

  4. 4.

    Becker LA Oxman AD:Chapter 22: Overviews of reviews. Cochrane Handbook for Systematic Reviews of Interventions Version 501 . Edited by: Higgins JPT, Green S. 2008, Higgins JPT, Green S (editors). Cochrane Handbook for Systematic Reviews of Interventions Version 5.1.0 [updated March 2011]. The Cochrane Collaboration, 2011. Available from,

  5. 5.

    Bero LA, Grilli R, Grimshaw JM, Harvey E, Oxman AD: Getting research findings into practice. Closing the gap between research and practice: an overview of systematic reviews of interventions to promote the implementation of research findings. BMJ. 1998, 317 (7156): 465-468. 10.1136/bmj.317.7156.465.

  6. 6.

    Grimshaw JM, Shirran L, Thomas R, Mowatt G, Fraser C, Bero L, Grilli R, Harvey E, Oxman A, O’Brien MA: Changing provider behavior: an overview of systematic reviews of interventions. Med Care. 2001, 39 (Suppl 2): II2-II45.

  7. 7.

    Weir MC, Ryan R, Mayhew A, Worswick J, Santesso N, Lowe D, Leslie B, Stevens A, Hill S, Grimshaw JM: The Rx for Change database: a first-in-class tool for optimal prescribing and medicines use. Implementation Sci. 2010, 5: 89-10.1186/1748-5908-5-89.

  8. 8.

    Shea BJ, Bouter LM, Peterson J, Boers M, Andersson N, Ortiz Z, Ramsay T, Bai A, Shukla VK, Grimshaw JM: External validation of a measurement tool to assess systematic reviews (AMSTAR). PLoS One. 2007, 2: e1350-10.1371/journal.pone.0001350.

  9. 9.

    Shea BJ, Hamel C, Wells GA, Bouter LM, Kristjansson E, Grimshaw J, Henry DA, Boers M: AMSTAR is a reliable and valid measurement tool to assess the methodological quality of systematic reviews. J Clin Epidemiol. 2009, 62: 1013-1020. 10.1016/j.jclinepi.2008.10.009.

  10. 10.

    Bushman BJ, Wang MC:Vote counting procedures in meta-analysis. Handbook of Research Synthesis. Edited by: Hedges LV Cooper H, Valentine JC. 1994, New York: Russell Sage Foundation, 207-220. 2,

  11. 11.

    Campbell MK, Mollison J, Steen N, Grimshaw JM, Eccles M: Analysis of cluster randomized trials in primary care: a practical approach. Fam Pract. 2000, 17: 192-196. 10.1093/fampra/17.2.192.

  12. 12.

    Grimshaw JM, Thomas RE, MacLennan G, Fraser C, Ramsay CR, Vale L, Whitty P, Eccles MP, Matowe L, Shirran L, Wensing M, Dijkstra R, Donaldson C: Effectiveness and efficiency of guideline dissemination and implementation strategies. Health Technol Assess. 2004, 8: iii-iv. 1–72

  13. 13.

    Sullivan F, Mitchell E: Has general practitioner computing made a difference to patient care? A systematic review of published reports. BMJ. 1995, 311: 848-852. 10.1136/bmj.311.7009.848.

  14. 14.

    Johnston ME, Langton KB, Haynes RB, Mathieu A: Effects of computer-based clinical decision support systems on clinician performance and patient outcome. A critical appraisal of research. Ann Intern Med. 1994, 120: 135-142.

  15. 15.

    Hunt DL, Haynes RB, Hanna SE, Smith K: Effects of computer-based clinical decision support systems on physician performance and patient outcomes: a systematic review. JAMA. 1998, 280: 1339-1346. 10.1001/jama.280.15.1339.

  16. 16.

    Walton RT, Harvey E, Dovey S, Freemantle N: Computerised advice on drug dosage to improve prescribing practice. Cochrane Database Syst Rev. 2001, 1: CD002894-

  17. 17.

    Ammenwerth E, Schnell-Inderst P, Machan C, Siebert U: The effect of electronic prescribing on medication errors and adverse drug events: a systematic review. J Am Med Inform Assoc. 2008, 15: 585-600. 10.1197/jamia.M2667.

  18. 18.

    Bywood PT, Lunnay B, Roche AM: Strategies for facilitating change in alcohol and other drugs (AOD) professional practice: a systematic review of the effectiveness of reminders and feedback. Drug Alcohol Rev. 2008, 27: 548-558. 10.1080/09595230802245535.

  19. 19.

    Durieux P, Trinquart L, Colombet I, Nies J, Walton R, Rajeswaran A, Rege Walther M, Harvey E, Burnand B: Computerized advice on drug dosage to improve prescribing practice. Cochrane Database Syst Rev. 2008, 3: CD002894-

  20. 20.

    Kastner M, Straus SE: Clinical decision support tools for osteoporosis disease management: a systematic review of randomized controlled trials. J Gen Intern Med. 2008, 23: 2095-2105. 10.1007/s11606-008-0812-9.

  21. 21.

    Kaushal R, Shojania KG, Bates DW: Effects of computerized physician order entry and clinical decision support systems on medication safety: a systematic review. Arch Int Med. 2003, 163: 1409-1416. 10.1001/archinte.163.12.1409.

  22. 22.

    Randell R, Mitchell N, Dowding D, Cullum N, Thompson C: Effects of computerized decision support systems on nursing performance and patient outcomes: a systematic review. J Health Serv Res Policy. 2007, 12: 242-249. 10.1258/135581907782101543.

  23. 23.

    Shojania KG, Jennings A, Mayhew A, Ramsay CR, Eccles MP, Grimshaw J: The effects of on-screen, point of care computer reminders on processes and outcomes of care. Cochrane Database Syst Rev. 2009, 3: CD001096-

  24. 24.

    Balas EA, Austin SM, Mitchell JA, Ewigman BG, Bopp KD, Brown GD: The clinical value of computerized information services. A review of 98 randomized clinical trials. Arch Fam Med. 1996, 5: 271-278. 10.1001/archfami.5.5.271.

  25. 25.

    Buntinx F, Winkens R, Grol R, Knottnerus JA: Influencing diagnostic and preventive performance in ambulatory care by feedback and reminders. A review. Fam Pract. 1993, 10: 219-228. 10.1093/fampra/10.2.219.

  26. 26.

    Chaudhry B, Wang J, Wu S, Maglione M, Mojica W, Roth E, Morton SC, Shekelle PG: Systematic review: impact of health information technology on quality, efficiency, and costs of medical care. Ann Intern Med. 2006, 144: 742-752.

  27. 27.

    Garg AX, Adhikari NK, McDonald H, Rosas-Arellano MP, Devereaux PJ, Beyene J, Sam J, Haynes RB: Effects of computerized clinical decision support systems on practitioner performance and patient outcomes: a systematic review. JAMA. 2005, 293: 1223-1238. 10.1001/jama.293.10.1223.

  28. 28.

    Kawamoto K, Houlihan CA, Balas EA, Lobach DF: Improving clinical practice using clinical decision support systems: a systematic review of trials to identify features critical to success. BMJ. 2005, 330: 765-10.1136/bmj.38398.500764.8F.

  29. 29.

    Mitchell E, Sullivan F: A descriptive feast but an evaluative famine: systematic review of published articles on primary care computing during 1980–97. BMJ. 2001, 322: 279-282. 10.1136/bmj.322.7281.279.

  30. 30.

    Nies J, Colombet I, Degoulet P, Durieux P: Determinants of success for computerized clinical decision support systems integrated in CPOE systems: a systematic review. AMIA Annu Symp Proc. 2006, 594-598.

  31. 31.

    Shiffman RN, Liaw Y, Brandt CA, Corb GJ: Computer-based guideline implementation systems: a systematic review of functionality and effectiveness. J Am Med Inform Assoc. 1999, 6: 104-114. 10.1136/jamia.1999.0060104.

  32. 32.

    Sintchenko V, Magrabi F, Tipper S: Are we measuring the right end-points? Variables that affect the impact of computerised decision support on patient outcomes: a systematic review. Med Inform Internet Med. 2007, 32: 225-240. 10.1080/14639230701447701.

  33. 33.

    Bryan C, Boren SA: The use and effectiveness of electronic clinical decision support tools in the ambulatory/primary care setting: a systematic review of the literature. Inform Prim Care. 2008, 16 (2): 79-91.

  34. 34.

    Colombet I, Chatellier G, Jaulent MC, Degoulet P: Decision aids for triage of patients with chest pain: a systematic review of field evaluation studies. AMIA Annu Symp Proc. 1999, 721-725.

  35. 35.

    Georgiou A, Williamson M, Westbrook JI, Ray S: The impact of computerised physician order entry systems on pathology services: a systematic review. Int J Med Inform. 2007, 76: 514-529. 10.1016/j.ijmedinf.2006.02.004.

  36. 36.

    Jerant AF, Hill DB: Does the use of electronic medical records improve surrogate patient outcomes in outpatient settings?. J Fam Pract. 2000, 49: 349-357.

  37. 37.

    Shea S, DuMouchel W, Bahamonde L: A meta-analysis of 16 randomized controlled trials to evaluate computer-based clinical reminder systems for preventive care in the ambulatory setting. J Am Med Inform Assoc. 1996, 3: 399-409. 10.1136/jamia.1996.97084513.

  38. 38.

    Austin SM, Balas EA, Mitchell JA, Ewigman BG: Effect of physician reminders on preventive care: meta-analysis of randomized clinical trials. Proc Annu Symp Comput Appl Med Care. 1994, 121-124.

  39. 39.

    Bennett JW, Glasziou PP: Computerised reminders and feedback in medication management: a systematic review of randomised controlled trials. Med J Aust. 2003, 178: 217-222.

  40. 40.

    Dexheimer JW, Talbot TR, Sanders DL, Rosenbloom ST, Aronsky D: Prompting clinicians about preventive care measures: a systematic review of randomized controlled trials. J Am Med Inform Assoc. 2008, 15: 311-320. 10.1197/jamia.M2555.

  41.

    Eslami S, Abu-Hanna A, de Keizer NF: Evaluation of outpatient computerized physician medication order entry systems: a systematic review. J Am Med Inform Assoc. 2007, 14: 400-406. 10.1197/jamia.M2238.

  42.

    Jimbo M, Nease DE, Ruffin MT, Rana GK: Information technology and cancer prevention. CA Cancer J Clin. 2006, 56: 26-36. 10.3322/canjclin.56.1.26. quiz 48–29

  43.

    Pearson SA, Moxey A, Robertson J, Hains I, Williamson M, Reeve J, Newby D: Do computerised clinical decision support systems for prescribing change practice? A systematic review of the literature (1990–2007). BMC Health Serv Res. 2009, 9: 154-10.1186/1472-6963-9-154.

  44.

    Reckmann MH, Westbrook JI, Koh Y, Lo C, Day RO: Does computerized provider order entry reduce prescribing errors for hospital inpatients? A systematic review. J Am Med Inform Assoc. 2009, 16: 613-623. 10.1197/jamia.M3050.

  45.

    Schedlbauer A, Prasad V, Mulvaney C, Phansalkar S, Stanton W, Bates DW, Avery AJ: What evidence supports the use of computerized alerts and prompts to improve clinicians’ prescribing behavior?. J Am Med Inform Assoc. 2009, 16: 531-538. 10.1197/jamia.M2910.

  46.

    Yourman L, Concato J, Agostini JV: Use of computer decision support interventions to improve medication prescribing in older adults: a systematic review. Am J Geriatr Pharmacother. 2008, 6: 119-129. 10.1016/j.amjopharm.2008.06.001.

  47.

    Chatellier G, Colombet I, Degoulet P: An overview of the effect of computer-assisted management of anticoagulant therapy on the quality of anticoagulation. Int J Med Inform. 1998, 49: 311-320. 10.1016/S1386-5056(98)00087-2.

  48.

    Fitzmaurice DA, Hobbs FD, Delaney BC, Wilson S, McManus R: Review of computerized decision support systems for oral anticoagulation management. Br J Haematol. 1998, 102: 907-909. 10.1046/j.1365-2141.1998.00858.x.

  49.

    Montgomery AA, Fahey T: A systematic review of the use of computers in the management of hypertension. J Epidemiol Community Health. 1998, 52 (8): 520-525. 10.1136/jech.52.8.520.

  50.

    van Rosse F, Maat B, Rademaker CM, van Vught AJ, Egberts AC, Bollen CW: The effect of computerized physician order entry on medication prescription errors and clinical outcome in pediatric and intensive care: a systematic review. Pediatrics. 2009, 123: 1184-1190. 10.1542/peds.2008-1494.

  51.

    Heselmans A, Van de Velde S, Donceel P, Aertgeerts B, Ramaekers D: Effectiveness of electronic guideline-based implementation systems in ambulatory care settings - a systematic review. Implement Sci. 2009, 4: 82-10.1186/1748-5908-4-82.

  52.

    Damiani G, Pinnarelli L, Colosimo SC, Almiento R, Sicuro L, Galasso R, Sommella L, Ricciardi W: The effectiveness of computerized clinical guidelines in the process of care: a systematic review. BMC Health Serv Res. 2010, 10: 2-10.1186/1472-6963-10-2.

  53.

    Bright TJ, Wong A, Dhurjati R, Bristow E, Bastian L, Coeytaux RR, Samsa G, Hasselblad V, Williams JW, Musty MD, Wing L, Kendrick AS, Sanders GD, Lobach D: Effect of clinical decision-support systems: a systematic review. Ann Intern Med. 2012, 157: 29-43.

  54.

    Holt TA, Thorogood M, Griffiths F: Changing clinical practice through patient specific reminders available at the time of the clinical encounter: systematic review and meta-analysis. J Gen Intern Med. 2012, 27: 974-984. 10.1007/s11606-012-2025-5.

  55.

    Weir M, Mayhew A, Fergusson D, Grimshaw J: An exploratory analysis of lumping and splitting in systematic reviews. 17th Cochrane Colloquium. 2009, Singapore.

  56.

    Haynes AB, Weiser TG, Berry WR, Lipsitz SR, Breizat AS, Patchen Dellinger EP, Herbosa T, Joseph S, Kibatala PL, Lapitan MCM, Merry AF, Moorthy K, Reznick RK, Taylor B, Gawande AA, Safe Surgery Saves Lives Study Group: A surgical safety checklist to reduce morbidity and mortality in a global population. N Engl J Med. 2009, 360 (5): 491-499. 10.1056/NEJMsa0810119.



Acknowledgements

The authors thank Ms. Laura Nichol for her research support for this overview. Amy Cheung was supported by a Ministry of Health and Long-term Care, Ontario, Career Scientist Award and the CIHR RCT Mentoring Program. Alain Mayhew is supported by the Canadian Institutes of Health Research and the Canadian Agency for Drugs and Technologies in Health. Michelle Weir was supported by the Canadian Agency for Drugs and Technologies in Health. Jeremy Grimshaw holds a Tier 1 Canada Research Chair. The website which provided the data is hosted by the Canadian Agency for Drugs and Technologies in Health. Amy Cheung (PI) had full access to all of the data in the study and takes responsibility for the integrity of the data and the accuracy of the data analysis.

Author information



Corresponding author

Correspondence to Amy Cheung.

Additional information

Competing interests

Amy Cheung was supported by a Ministry of Health and Long-Term Care, Ontario, Career Scientist Award and the CIHR RCT Mentoring Program. Alain Mayhew is supported by the Canadian Institutes of Health Research and the Canadian Agency for Drugs and Technologies in Health. Michelle Weir was supported by the Canadian Agency for Drugs and Technologies in Health. Jeremy Grimshaw holds a Tier 1 Canada Research Chair. The website which provided the data is hosted by the Canadian Agency for Drugs and Technologies in Health. Amy Cheung (PI) had full access to all of the data in the study and takes responsibility for the integrity of the data and the accuracy of the data analysis. There are no other competing interests.

Authors’ contributions

JG, AM, MW and AC contributed to the design of the overview. AM, MW, AC, NK and KB conducted data extraction. All authors contributed to the analyses and completion of the manuscript. All authors read and approved the final manuscript.

Electronic supplementary material

Authors’ original submitted files for images

Below are the links to the authors’ original submitted files for images.

Authors’ original file for figure 1

Authors’ original file for figure 2

Authors’ original file for figure 3

Rights and permissions

This article is published under license to BioMed Central Ltd. This is an Open Access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.


About this article

Cite this article

Cheung, A., Weir, M., Mayhew, A. et al. Overview of systematic reviews of the effectiveness of reminders in improving healthcare professional behavior. Syst Rev 1, 36 (2012).



Keywords

  • Reminders
  • Professional behavior
  • Overview

