Which Variables Should We Adjust For?
Gerard E. Dallal, Ph.D.

I suppose that before talking about what we should adjust for, a few sentences are in order about what we mean by adjusting and why we might want to do it.

Adjustment is often nothing more than a linear adjustment achieved by adding another term to a regression model, as in

Yij = μ + αi + β WTij + εij

Within each group, we fit a linear relation between the response and the covariate(s). More complicated models can be fitted if the need arises, but unless the data are compelling, the linear term is commonly used as an approximation to whatever the relation might be.
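As a minimal sketch of this idea (simulated data and numpy only; the group sizes, body weights, and effect sizes are invented for illustration, not taken from the text), the adjusted model can be fit as an ordinary regression with a dummy variable for group alongside the covariate:

```python
import numpy as np

rng = np.random.default_rng(0)  # hypothetical simulated data

# Two groups (i = 0, 1), with body weight WT as the covariate
n = 50
group = np.repeat([0, 1], n)
wt = rng.normal(70, 10, size=2 * n)

# True model: Y = mu + alpha_i + beta*WT + error, with a group effect of 2
y = 5.0 + 2.0 * group + 0.3 * wt + rng.normal(0, 1, size=2 * n)

# Design matrix: intercept, group indicator, covariate
X = np.column_stack([np.ones(2 * n), group, wt])
coef, *_ = np.linalg.lstsq(X, y, rcond=None)

# coef[1] is the adjusted group difference (should be near 2);
# coef[2] is the common within-group slope for WT (near 0.3)
print(coef)
```

The group coefficient is the difference in intercepts after the linear adjustment for WT, which is exactly the "adjusted" treatment comparison described above.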

There are two chief reasons for adjusting for covariates. The one most people are familiar with is to adjust for imbalances in baseline variables that are related to the outcome. The adjustment helps correct for the groups' predisposition to behave differently from the outset. For example, if body weight was such a variable and one group was much heavier on average than the other, we might adjust for body weight.

The second, which is not fully appreciated, is to reduce the underlying variability in the data so that more precise comparisons can be made. Consider Student's t test for independent samples. There the difference in sample means is compared to the within-group standard deviation. Now, consider a simple analysis of covariance model

Yij = μ + αi + β Xij + εij

Here, the difference in intercepts is compared to the variability about the within-group regression lines. If Y and X are highly correlated, the variability about the regression line will be much less than the within-group standard deviation. The residual variance is very nearly (1-r²) times the within-group variance, so the residual standard deviation is about √(1-r²) times the within-group standard deviation. Thus, even if the groups are not imbalanced with respect to a covariate, it can still be a good idea to adjust for it to enhance the ability to recognize statistically significant effects.
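The variance reduction is easy to see in a small simulation (numpy only; the sample size, slope, and error variance are invented for illustration): the residual standard deviation about the fitted line comes out very close to √(1-r²) times the raw standard deviation.

```python
import numpy as np

rng = np.random.default_rng(1)  # hypothetical simulated data

n = 200
x = rng.normal(0, 1, n)             # covariate
y = 3.0 * x + rng.normal(0, 1, n)   # outcome highly correlated with x

r = np.corrcoef(x, y)[0, 1]
sd_raw = y.std(ddof=1)              # SD of the outcome, no adjustment

# Residual SD about the least-squares regression line
slope, intercept = np.polyfit(x, y, 1)
resid = y - (intercept + slope * x)
sd_resid = resid.std(ddof=2)

# sd_resid is very nearly sqrt(1 - r^2) * sd_raw
print(sd_raw, sd_resid, sd_raw * np.sqrt(1 - r**2))
```

With r around 0.95, the residual SD is roughly a third of the raw SD, so standard errors for the group comparison shrink by the same factor.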

It is always a good idea to make adjustments that will reduce variability inherent in treatment comparisons. The variables that will reduce variability will be known beforehand. These adjustments will be specified in the protocol before the data are collected. The design of the study--sample size calculations, in particular--will take these variance reductions into account.

Adjustments to correct imbalances are more controversial. We could adjust for everything imaginable. This may not do any harm other than cost us some error degrees of freedom; if there are enough data, it won't be of any real consequence. At the other extreme (for randomized trials), some argue that because of the randomization, it's not necessary to adjust for anything. While this is true from a theoretical perspective, let's not be stupid about it: I have yet to meet the statistician who in practice would fail to adjust once a large imbalance was detected in a baseline variable related to outcome. If no adjustment is made, it is impossible to tell whether any difference (or similarity!) in outcome is due to the treatments or the imbalance at baseline.

The sensible approach is an intermediate path that attempts to avoid adjustment but concedes the need for it when large imbalances are detected in variables that are known to be related to the outcome. Typical practice is to perform t tests or chi-square tests on the baseline variables and adjust for any where the observed significance level reaches a particular value (the ubiquitous 0.05, although some may choose a larger P value just to be safe).
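A sketch of that screening step (the Welch two-sample t statistic coded directly in numpy; the baseline variables, group sizes, and the rough cutoff |t| > 2, corresponding to P < 0.05, are illustrative assumptions, not a prescription):

```python
import numpy as np

def welch_t(a, b):
    """Welch two-sample t statistic: a rough screen for baseline imbalance."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    se = np.sqrt(a.var(ddof=1) / a.size + b.var(ddof=1) / b.size)
    return (a.mean() - b.mean()) / se

rng = np.random.default_rng(2)  # hypothetical baseline data

baseline = {
    "weight": (rng.normal(85, 12, 60), rng.normal(72, 12, 60)),  # imbalanced
    "age":    (rng.normal(55, 8, 60),  rng.normal(55, 8, 60)),   # balanced
}

# Flag covariates whose |t| exceeds about 2 (roughly P < 0.05) for adjustment
for name, (grp1, grp2) in baseline.items():
    t = welch_t(grp1, grp2)
    print(f"{name}: t = {t:.2f}{'  -> adjust' if abs(t) > 2 else ''}")
```

In practice one would use a standard routine (and possibly a looser cutoff, as the text notes); the point is only that the screen is run on baseline variables, and any flagged covariate enters the adjusted model.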

An excellent discussion of these issues can be found in Assmann SF, Pocock SJ, Enos LE, Kasten LE (2000), "Subgroup Analysis and Other (Mis)Uses of Baseline Data in Clinical Trials", Lancet, 355, 1064-1069. I recommend it highly and agree completely, especially with the first two paragraphs of their discussion section, which touch on all of the important topics.

"In general, simple unadjusted analyses that compare treatment groups should be shown. Indeed they should be emphasised, unless the baseline factors for covariate adjustment are predeclared on the basis of their known strong relation to outcome. One notable exception is the baseline value of a quantitative outcome, in which analysis of covariance adjustment is the recommended primary analysis since a strong correlation is expected.

"Many trials lack such prior knowledge, requiring any strong predictors of outcome to be identified from the trial data by use of an appropriate variable selection technique. Covariate adjustment should then be a secondary analysis. Adjustment for baseline factors with treatment imbalances is unimportant, unless such factors relate to outcome. Nevertheless, such secondary analyses help achieve peace of mind."

Never underappreciate the value of "peace of mind"!
