Meta Analysis
Gerard E. Dallal, Ph.D.

It is not every question that deserves an answer.
-- Publilius Syrus
Sometimes there are mixed reports about a treatment's effectiveness. Some studies may show an effect while others do not. Meta-analysis is a set of statistical techniques for combining information from different studies to derive an overall estimate of a treatment's effect.

The underlying idea is attractive. Just as the response to a treatment varies among individuals, it will also vary among studies. Some studies will show a greater effect, some a lesser effect--perhaps not even a statistically significant one. There ought to be a way to combine data from different studies, just as we combine data from different individuals within a single study. That's meta-analysis.
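
To make the idea concrete, here is a minimal sketch of the most common combining method, the fixed-effect (inverse-variance) approach, in which each study's effect estimate is weighted by the inverse of its variance so that more precise studies count for more. The effects and standard errors below are invented solely for illustration.

    import math

    # Hypothetical effect estimates (say, mean differences) and their
    # standard errors from four studies; all numbers are made up.
    effects = [0.30, 0.10, 0.45, -0.05]
    ses = [0.12, 0.20, 0.25, 0.15]

    # Fixed-effect pooling: weight each study by the inverse of its variance.
    weights = [1.0 / se ** 2 for se in ses]
    pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
    pooled_se = math.sqrt(1.0 / sum(weights))

    print(f"pooled effect: {pooled:.3f}")
    print(f"approx. 95% CI: ({pooled - 1.96 * pooled_se:.3f}, "
          f"{pooled + 1.96 * pooled_se:.3f})")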

Meta-analysis always struggles with two issues:

  1. publication bias (also known as the file drawer problem) and
  2. the varying quality of the studies.

Publication bias is "the systematic error introduced in a statistical inference by conditioning on publication status." For example, studies showing an effect may be more likely to be published than studies showing no effect, and they may be written up and submitted for publication more promptly. (Studies showing no effect are often considered unpublishable and are simply filed away, hence the name file drawer problem.) Publication bias can lead to misleading results when a statistical analysis is performed after assembling all of the published literature on a subject.
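
The distortion is easy to demonstrate with a small simulation (the true effect, standard error, and significance cutoff below are assumptions chosen for illustration). Every simulated study estimates the same true effect, but only the studies that happen to reach statistical significance get "published"; the rest go into the file drawer.

    import random

    random.seed(1)
    TRUE_EFFECT = 0.10   # every study estimates this same underlying effect
    SE = 0.10            # each study's standard error
    N_STUDIES = 10_000

    all_results, published = [], []
    for _ in range(N_STUDIES):
        estimate = random.gauss(TRUE_EFFECT, SE)  # one study's observed effect
        all_results.append(estimate)
        if abs(estimate / SE) > 1.96:             # "significant" -> published
            published.append(estimate)

    print(f"true effect:             {TRUE_EFFECT:.3f}")
    print(f"mean of all studies:     {sum(all_results) / len(all_results):.3f}")
    print(f"mean of published only:  {sum(published) / len(published):.3f}")

Each individual study is conducted honestly, yet an analysis restricted to the published studies substantially overstates the effect.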

When assembling the available literature, it can be difficult to determine the amount of care that went into each study. Thus, poorly designed studies end up being given the same weight as well-designed studies. This, too, can lead to misleading results when the data are summarized.
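
A toy example (all numbers invented) shows how this plays out. Suppose three sound studies agree on a modest effect while a fourth, poorly designed study is biased upward; because inverse-variance weighting rewards precision, not quality, the flawed study can dominate the pooled estimate.

    # Study 4 is biased upward, but it is also the largest and most
    # precise, so quality-blind pooling lets it dominate.
    effects = [0.10, 0.12, 0.08, 0.50]
    ses = [0.10, 0.10, 0.10, 0.05]

    weights = [1.0 / se ** 2 for se in ses]
    pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
    print(f"pooled effect: {pooled:.3f}")  # about 0.33, not the ~0.10 of the sound studies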

When large, high-quality, randomized, double-blind, controlled trials are available, they are the gold-standard basis for action. Publication bias and the varying quality of other studies are not issues because there is no need to assemble all of the research in the area. So, to the two primary concerns about meta-analysis--publication bias and the varying quality of the studies--I have added a third:

  3. Meta-analysis is used only when problems 1 and 2 are all but certain to cause the most trouble! That is, meta-analysis is employed only when no large-scale, high-quality trials are available, and the publication bias and varying quality and outcomes of the available studies all but guarantee that it will be impossible to draw a clear conclusion!

Those who perform meta-analyses are aware of these problems and have proposed a number of guidelines to minimize their impact.

While the motivation is noble, I am not convinced that these procedures are sufficient to overcome the problems they seek to address.

If meta-analysis is to have a future, perhaps it will be due to registries that allow studies to be tracked from their inception, not just upon publication. This would make it impossible to publish the results of trials showing benefit while suppressing those that show none. Many medical journals now require that studies be registered at ClinicalTrials.gov before they start in order to be eligible for publication. This may eventually solve the problem of publication bias. It would not capture everything, but it would capture, prospectively, a (large enough?) cohort of studies. I am less confident that it will be able to address the differing quality of the studies or subtle differences in the way outcomes are measured.

At the moment, it is difficult to improve upon the remarks of John C. Bailar, III, from his letter to The New England Journal of Medicine, 338 (1998), 62, written in response to letters regarding LeLorier et al. (1997), "Discrepancies Between Meta-Analyses and Subsequent Large Randomized, Controlled Trials," NEJM, 337, 536-542, and Bailar's accompanying editorial, 559-561:

My objections to meta-analysis are purely pragmatic. It does not work nearly as well as we might want it to work. The problems are so deep and so numerous that the results are simply not reliable. The work of LeLorier et al. adds to the evidence that meta-analysis simply does not work very well in practice.

As it is practiced and as it is reported in our leading journals, meta-analysis is often deeply flawed. Many people cite high-sounding guidelines, and I am sure that all truly want to do a superior analysis, but meta-analysis often fails in ways that seem to be invisible to the analyst.

The advocates of meta-analysis and evidence-based medicine should undertake research that might demonstrate that meta-analyses in the real world--not just in theory--improve health outcomes in patients. Review of the long history of randomized, controlled trials, individually weak for this specific purpose, has led to overwhelming evidence of efficacy. I am not willing to abandon that history to join those now promoting meta analysis as the answer, no matter how pretty the underlying theory, until its defects are honestly exposed and corrected. The knowledgeable, thoughtful, traditional review of the original literature remains the closest thing we have to a gold standard for summarizing disparate evidence in medicine.

Those wishing to learn more might start with the British Medical Journal's series of expository articles on meta-analysis.

Copyright © 2003 Gerard E. Dallal