Publication bias in clinical trials due to statistical significance or direction of trial results
Trials reporting positive findings are more likely to be published, and to be published faster, than those reporting negative findings. In order to obtain an unbiased overview of all relevant studies, authors of systematic reviews should make particular efforts to recover all unpublished findings. Also, all clinical trials should be registered at inception, although how that can be achieved in practice is still under discussion.
RHL Commentary by Butler PA
It is widely accepted that well conducted systematic reviews provide the most reliable evidence for decision-making in health care. However, for a systematic review to be valid, it must include all relevant studies, whether published or not. Clearly, results that remain unpublished will be more difficult to track down, and published studies are likely to have a major influence on the outcome of the systematic review. This may not be a problem if the unpublished studies represent an unbiased subset of all the studies. However, any systematic publication bias – e.g. a tendency for investigators to submit manuscripts and editors and reviewers to accept them based on the strength and direction of the research findings – will adversely affect the validity of the review.
This Cochrane review (1) therefore sought to determine the extent to which the publication of results of clinical trials is influenced by the statistical significance, perceived importance or direction of the trial results.
The primary outcomes studied were publication and time to publication. Studies were included in the review if they:
- assessed a cohort of trials registered at inception or before the main results were known;
- included a complete series of trials or an unbiased sample of trials in the cohort;
- compared the publication, or time to publication, of trials with positive findings and those with either negative or null findings.
Positive findings were defined as statistically significant results (P<0.05), or findings classified by the study investigators as important or showing a positive effect of the intervention studied.
The review also looked at a number of other factors potentially associated with failure to publish, including source of funding, sample size, number of centres involved in the trial, and rank and sex of the principal investigator.
The authors conducted a search using the index term “publication bias” in the Cochrane Methodology Register, Medline (1950 to March 2007), EMBASE (1980 to March 2007) and Ovid Medline In-process and Other Non-Indexed Citations (21 March 2007). They also checked the Science Citation Index for April 2007 to identify articles that cited the studies found. Finally, they checked the reference lists of the included studies and contacted authors of key studies on publication bias to try to identify other (non-indexed) studies.
Of 5000 studies identified as potentially relevant, only five, covering a total of 750 clinical trials, were found to meet all the inclusion criteria. Across the five studies, the percentage of clinical trials published ranged from 36% to 93%. Trials with positive findings were more likely to be published than those with negative or null findings [odds ratio (OR) 3.90; 95% confidence interval (CI) 2.68–5.68].
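To make the reported statistic concrete: an odds ratio of this kind can be computed from a 2×2 table of publication counts, with the 95% confidence interval obtained by the standard log-odds method (exponentiating log(OR) ± 1.96 standard errors). The sketch below uses hypothetical counts for illustration only; they are not the data from the review, and the pooled figure in a meta-analysis would additionally weight each study.

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Odds ratio and 95% CI for a 2x2 table of trial outcomes:

                    published   unpublished
    positive            a            b
    negative/null       c            d

    Uses the log-odds (Woolf) method for the confidence interval.
    """
    or_ = (a * d) / (b * c)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)  # SE of log(OR)
    lower = math.exp(math.log(or_) - z * se)
    upper = math.exp(math.log(or_) + z * se)
    return or_, lower, upper

# Hypothetical counts, chosen only to illustrate the calculation:
or_, lower, upper = odds_ratio_ci(120, 30, 60, 60)
print(f"OR {or_:.2f}; 95% CI {lower:.2f}-{upper:.2f}")
```

An OR above 1 with a confidence interval excluding 1, as in the review's pooled estimate, indicates that trials with positive findings had significantly higher odds of being published.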
Two of the studies also assessed time to publication: trials with positive findings tended to be published sooner than those with negative findings. These findings receive little discussion in the Cochrane review, and readers are referred to another methodology review with the same lead author.
As regards secondary outcomes, no statistically significant differences were found between publication and sample size (three studies), funding mechanism (one study), investigator rank (one study) or sex of the principal investigator (one study).
Only one study attempted to find out the reasons why investigators had not published their results. The most common reasons given were that the trial findings were not interesting enough or that the investigators did not have sufficient time.
4.1. APPLICABILITY OF THE RESULTS
The authors conclude that trials with positive findings are more likely to be published, and to be published faster, than trials with negative findings. This finding confirms the existence of a problem that has attracted growing attention over the past 30 years (2). The authors draw two major implications from their conclusion. First, those conducting systematic reviews need to make particular efforts to recover unpublished findings, in order to obtain an unbiased overview of all relevant studies. Second, to make this possible, all clinical trials should be registered at inception.
The studies included in this review all analysed cohorts of clinical trials registered in developed countries. Three of the studies looked at cohorts in the USA, one in Finland and one in Australia. There is no reason to suppose that the results would be different in other developed countries. Since no data were available from developing countries, it is less clear what the situation is in under-resourced settings with regard to registration and publication of trials. In the developing countries that have the research capacity and infrastructure to organize and carry out their own clinical trials, the situation is probably similar to that in developed countries. With the growth of clinical research (indigenous or sponsored by the international pharmaceutical industry) in an increasing number of developing countries, these issues will increase in importance and will need to be specifically addressed. A further difficulty in these countries is that authors often face greater hurdles in getting their research published or their publications indexed in international databases (3).
It has, nevertheless, to be recognized that the review may well have suffered from the very problem it is seeking to analyse. Only five studies could be found that met the review criteria, all of them published in journals indexed in MEDLINE. The authors’ attempts to retrieve relevant unpublished studies were unsuccessful.
4.2. IMPLEMENTATION OF THE INTERVENTION
The compulsory registration of all clinical trials is widely regarded as an essential step towards ensuring that health care decisions can be made on the basis of complete and unbiased data. Medical journals have taken a lead in efforts to achieve this, and since July 2005, journals that adhere to the guidelines of the International Committee of Medical Journal Editors (ICMJE) will consider reports of clinical trials only if the trials were registered before patient enrolment began (4).
The World Health Organization has also taken an active role in promoting and setting standards for trial registration, and now provides a search portal to the data in clinical trial registries that meet the WHO Registry Criteria (5). Indeed, WHO has gone a step further by proposing that the findings of all clinical trials must be made publicly available (6). How this can be achieved in practice is still under discussion.
The compulsory use of such trial registries, and the public availability of all results, will have implications beyond the preparation of systematic reviews. Researchers, ethics committees, institutional review boards and others with an interest in obtaining an overview of the clinical trials carried out and ongoing in a particular field will also benefit.
4.3. IMPLICATIONS FOR RESEARCH
In addition to the possible link between trial findings and publication, the review also looked at other potential risk factors affecting publication, including funding mechanism, investigator rank, and sex of the principal investigator. However, each of these factors was considered by only one of the studies included in the review. There is clearly scope here for more research into the various types of bias that can affect publication of clinical trial results (7). The review also highlighted the selective reporting of trial outcomes as a substantial problem that merits further research.
Only one study had asked the trial investigators the reasons for not publishing certain results. The majority of responses related to lack of time, operational problems, or the fact that the results were not interesting. There seemed to be no reference to rejection of papers by journals, but this could have been related to the way the question was asked. In any case, more work to identify the reasons for publishing or not publishing trial reports is warranted.
Finally, the situation in developing countries, particularly those with a rapidly expanding pharmaceutical sector and research community, warrants specific attention.
- Hopewell S, Loudon K, Clarke MJ, Oxman AD, Dickersin K. Publication bias in clinical trials due to statistical significance or direction of trial results. Cochrane Database of Systematic Reviews 2009;Issue 1. Art. No.: MR000006; DOI: 10.1002/14651858.MR000006.pub3.
- Chalmers I. Underreporting research is scientific misconduct. Journal of the American Medical Association 1990;263:1405-1408.
- Zielinski C. New equities of information in an electronic age. British Medical Journal 1995;310:1480-1481.
- De Angelis C, Drazen JM, Frizelle FA, Haug C, Hoey J, Horton R, et al. Clinical trial registration: a statement from the International Committee of Medical Journal Editors. The Lancet 2004;364:911-912.
- International Clinical Trials Registry Platform Search Portal. Geneva: World Health Organization. http://www.who.int/ictrp/search/en (accessed 10 June 2009).
- Ghersi D, Clarke M, Berlin J, Gülmezoglu AM, Kush R, Lumbiganon P, et al. Reporting the findings of clinical trials: a discussion paper. Bulletin of the World Health Organization 2008;86:492-493.
- Cochrane Bias Methods Group. Types of bias. www.chalmersresearch.com/bmg/types_bias.html (accessed 10 June 2009).
This document should be cited as: Butler P. Publication bias in clinical trials due to statistical significance or direction of trial results: RHL commentary (last revised: 1 July 2009). The WHO Reproductive Health Library; Geneva: World Health Organization.