Syntheses of published research, such as meta-analyses and systematic reviews, are becoming increasingly important in providing relevant and valid research evidence for clinical and health policy decision making. However, published studies may represent a biased selection of all studies that have been conducted if statistically significant or ‘positive’ results are published preferentially, a phenomenon widely known as publication bias [1–4]. When searching the literature for meta-analyses, unpublished studies and studies published only in the so-called ‘grey literature’ (such as conference abstracts, dissertations, policy documents, and book chapters) may be missed. The effect estimates of meta-analyses based exclusively on the published literature may be exaggerated and overestimate the true effect size [2, 5]; as a consequence, patients may be exposed to an ineffective or even harmful treatment.
Unfortunately, publication bias can seldom be eliminated, since relevant ‘unpublished’ studies are frequently difficult to find or not accessible. There are essentially two types of ‘unpublished’ data. The first, described as grey literature above, can still be identified through extended search strategies in computerized databases. The second refers to data that have not been published at all and are therefore far more difficult to identify. To tackle bias related to non-publication or distortion in the publication process of study findings, there have been various calls for mandatory registration of clinical trials at inception. In 2004, major medical journals agreed to publish only trials that had been registered beforehand. However, some of the data fields requested in the registries are frequently incomplete. Thus, until registration of all trials at inception is well established and the results of all trials are publicly available, it remains important to improve methods for detecting, quantifying, and adjusting for publication bias in meta-analyses and systematic reviews.
Various methods to detect, quantify, and adjust for publication bias in meta-analyses have been described in the literature. There are graphical approaches, such as funnel plots; formal statistical tests for the presence of publication bias, such as the regression test proposed by Egger and colleagues; and statistical approaches that modify pooled effect estimates when publication bias is suspected, such as the trim-and-fill method. Still, statistical approaches to correct for missing studies remain precarious. For instance, some authors argue that the visual interpretation of a funnel plot depends too heavily on the subjective impression of the observer [11, 12]. Furthermore, the performance of many of these methods has been evaluated in simulation studies, but concerns remain as to whether the simulations reflect real-life situations.
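To illustrate the kind of method under review, Egger's regression test can be sketched in a few lines of Python: the standardized effect (effect divided by its standard error) is regressed on precision (the inverse standard error), and a non-zero intercept suggests funnel plot asymmetry. This is a minimal sketch for illustration only; the function name and the simulated data are ours, not part of any published implementation.

```python
import numpy as np
from scipy import stats

def egger_test(effects, ses):
    """Egger's regression test for funnel plot asymmetry (sketch).

    Regresses the standardized effect (effect / SE) on precision (1 / SE);
    the two-sided p-value for the intercept being zero is returned along
    with the intercept estimate.
    """
    effects = np.asarray(effects, dtype=float)
    ses = np.asarray(ses, dtype=float)
    z = effects / ses            # standardized effects
    precision = 1.0 / ses        # inverse standard errors
    slope, intercept, r, p_slope, se_slope = stats.linregress(precision, z)

    # Standard error of the intercept, computed from the OLS residuals
    n = len(effects)
    resid = z - (intercept + slope * precision)
    s2 = np.sum(resid ** 2) / (n - 2)
    sxx = np.sum((precision - precision.mean()) ** 2)
    se_int = np.sqrt(s2 * (1.0 / n + precision.mean() ** 2 / sxx))

    t_stat = intercept / se_int
    p_value = 2.0 * stats.t.sf(abs(t_stat), df=n - 2)
    return float(intercept), float(p_value)

# Simulated, symmetric meta-analysis data (hypothetical numbers)
rng = np.random.default_rng(0)
ses = rng.uniform(0.1, 0.5, 20)          # study standard errors
effects = rng.normal(0.3, ses)           # observed effect estimates
intercept, p = egger_test(effects, ses)
```

A large absolute intercept with a small p-value would be taken as evidence of small-study effects, one possible cause of which is publication bias; the test itself cannot distinguish between the causes.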
Currently, consensus on which method is best exists only for the special case of tests for funnel plot asymmetry in meta-analyses of randomized controlled trials. To inform the future development of policies and guidelines regarding the assessment of publication bias, we will conduct a systematic review of the methods described in the literature.
To systematically review methodological articles that focus on the non-publication of studies, and to describe methods for detecting, quantifying, and/or adjusting for publication bias in meta-analyses.
To appraise the strengths and weaknesses of these methods, the resources they require, and the conditions under which they can be applied, based on the findings of the included studies.
This systematic review will be part of the OPEN project (To Overcome Failure to Publish Negative Findings), which, among other objectives, aims to elucidate the non-publication of studies through a series of systematic reviews.