Overall, our evaluation identified a total of 102 companies that provide consultancy in evidence synthesis. Nineteen of these contracting companies replied to our first survey, disclosing that they had conducted many hundreds of network meta-analyses, of which only a small minority had been published. Among the contracting companies that replied to our second survey, there was an intent to publish about half of the meta-analyses in the peer-reviewed literature, and some others had been used for HTA submissions. Unwillingness of the industry sponsor to allow publication was the most commonly specified reason for the lack of a plan for publication. Registration of meta-analysis protocols was poor.
To our knowledge, this is the first effort to investigate the number of network meta-analyses performed by contracting companies commissioned by the industry. There are, however, some limitations to our study. First, the response rate among the addressees was 40%. This is probably acceptable given that the survey involved rather sensitive information [8]; moreover, many non-responders probably did not perform network meta-analyses at all. Second, since 4 of the 37 companies that responded were unable to provide information because of confidentiality agreements with industry, it is possible that some other companies did not respond at all for the same reason. Nevertheless, the 19 companies that did perform network meta-analyses were responsible for the majority of all published network meta-analyses affiliated with a contracting company, so we probably captured most of the main stakeholders operating in this field. The 74 published network meta-analyses for which no company replied to our first survey were affiliated with 48 different contracting companies, indicating that the non-responders had each published few network meta-analyses and were either less involved in this type of work, had higher non-publication rates than the responders, or both. Third, the accuracy of the information provided by survey responders cannot be fully verified. Comparison against verifiable information, such as the number of published papers, suggested mostly good concordance, but we cannot verify information on unpublished work or the reasons for its non-publication.
So far, evidence on non-publication in evidence-based medicine has focused primarily on randomized trials rather than meta-analyses. Low publication rates have been found for randomized clinical trials in many fields [9–13], and specifically in industry-sponsored trials [9, 10, 12]. Industry-sponsored trials that did get published have been shown to more often find a treatment effect favoring the sponsor’s treatment [14, 15]. Additionally, there are examples of trial reports being intentionally withheld from publication because the results did not favor the sponsor’s drug [16]. The same may apply to network meta-analyses. Sponsors may know the outcomes on which their drugs rank high and ask for a network meta-analysis restricted to those outcomes. Furthermore, contracting companies have no incentive to spend resources analyzing outcomes other than the ones requested. There is also a high risk of presenting results in a way that favors the sponsor’s drug. Further challenges may arise when publicly unavailable data are included in meta-analyses initiated by industry.
In our study, veto from the industry was the most commonly stated specific reason for not having a plan to publish a network meta-analysis. Moreover, even among the responders a substantial number of meta-analyses were left with unclear reasons for non-publication, so the proportion of these evidence syntheses that were not published because of the unwillingness of the industry sponsor may be even higher, especially considering that the six contracting companies responsible for two-thirds of all conducted network meta-analyses did not provide reasons for non-publication. It is unknown whether the decision not to publish was made before or after seeing the results, and thus whether non-publication reflects the presence of results unfavorable to the manufacturers, unwillingness to share with the public (and thus also with competitors) private information of strategic advantage, or low priority for publishing meta-analyses on topics already covered in other published papers [17]. In general, we think it is safe to state that publication of any type of analysis done for industry cannot be separated from the overall commercial strategy and is not usually done with the aim of disseminating generalizable knowledge.
Moreover, contracting companies commissioned to perform network meta-analyses do not necessarily have strong incentives to publish the results, even in the absence of a veto from the industry sponsor. Contractors may not face the “publish or perish” pressure that is typical of academic investigators. Manuscript preparation and publication also require resources that a profit-oriented business may not want to spend. Lack of peer-reviewed publication has also been observed in other areas of biomedical investigation where entrepreneurs are involved for profit, and has been termed “stealth research” [18]. In fact, we identified very few published network meta-analyses in which contracting companies were the only type of affiliation. The majority of those published also had academic affiliations, which would offer a traditional motivation for pursuing publication.
Some network meta-analyses that were not published in the peer-reviewed journal literature were part of an HTA submission to agencies such as NICE or the Canadian Agency for Drugs and Technologies in Health (CADTH). However, these submissions generally present only selected results, while specific details are sent to the agency separately, making it very difficult to assess the rigor and validity of the network meta-analysis and its results. Similarly, some pharmaceutical companies may post results on websites or present them at conferences, but again, judging the quality and inclusiveness of such analyses is difficult.
Non-transparency is further compounded by the lack of protocol registration. The percentage of registered protocols was low, especially considering that close to 400 network meta-analysis protocols and more than 12,000 systematic review and/or meta-analysis protocols have been registered in PROSPERO. Prospective registration of systematic reviews with written protocols [6, 7] is increasingly endorsed [19]. The Cochrane Comparing Multiple Interventions Methods Group (CMIMG) has developed protocol templates and made them available on its website [20]. Protocol registration will not eliminate the need for unanticipated deviations from the protocol, but it would make such deviations more visible and open to public judgment [21, 22].
We did not attempt to estimate the volume of traditional systematic reviews and pairwise meta-analyses conducted and/or published by contractors, but it is likely to be equally impressive. Our survey clearly shows that there is a huge market for evidence synthesis, even for the most sophisticated synthesis methods. A large segment of this work happens outside of peer-reviewed publications, with results known only to contractors and their sponsors.
Non-publication of network meta-analysis results seems particularly problematic because network meta-analysis is an evidence synthesis technique that can inform health policy and guidelines. Regardless of the exact reasons, non-publication results in a largely non-transparent corpus of evidence synthesis work that would otherwise have been of potential value to patients, clinicians, guideline developers, and even the industry itself. Reimbursement agencies could require industry to preregister a protocol, e.g., in PROSPERO, before accepting their HTA submissions. Another way to improve publication rates may come from regulatory bodies, which could require certain information from the HTA submission to be made publicly available, similar to the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) extension for network meta-analyses (PRISMA-NMA) [23].