Cause & Effect

"Cause and Effect"! You almost never hear these words in an introductory statistics course. The subject is commonly ignored. Even on this site, all it gets is this one web page. If cause and effect is addressed at all, it is usually by giving the (proper) warning "Association does not imply causation!" along with a few illustrations. For example, in the early part of the twentieth century, it was noticed that, when viewed over time, the number of crimes increased with membership in the Church of England. This had nothing to do with criminals finding religion. Rather, both crimes and Church membership increased as the population increased. Association does not imply causation! Should opposition increase or decrease accuracy? During WWII it was noticed that bombers were more accurate when there was more opposition from enemy fighters. The reason was that fighter opposition was less when the weather was cloudy. The fighters couldn't see the bombers, but the bombers couldn't see their targets! Association does not imply causation, at least not necessarily in the way it appears on the surface!

We laugh at obvious mistakes but often forget how easy it is to make subtle errors any time an attempt is made to use statistics to prove causality. This could have disastrous consequences if the errors form the basis of public policy. This is nothing new. David Freedman ("From Association to Causation: Some Remarks on the History of Statistics," Statistical Science, 14 (1999), 243-258) describes one of the earliest attempts to use regression models in the social sciences. In 1899, the statistician G. Udny Yule investigated the causes of pauperism in England. Depending on local custom, paupers were supported inside local poor-houses or outside. Yule used a regression model to analyze his data and found that the change in pauperism was positively related to the change in the proportion treated outside of poor-houses. He then reported that welfare provided outside of poor-houses created paupers. A contemporary of Yule's suggested that what Yule was seeing was instead an example of confounding--those areas with more efficient administrations were better at both building poor-houses and reducing poverty. That is, if efficiency could be accounted for, there would be no association between pauperism and the way aid was provided. Freedman notes that after spending much of the paper assigning parts of the change in pauperism to various causes, Yule left himself an out with his footnote 25: "Strictly speaking, for 'due to' read 'associated with'."
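
Here is a hedged, purely illustrative version of the confounding explanation (simulated data, not Yule's): administrative efficiency lowers both the use of out-relief and pauperism. A regression of pauperism on out-relief alone shows a solidly positive coefficient, as Yule found; adding efficiency to the model shrinks that coefficient toward zero.

```python
# Simulated illustration of Yule's critic's point: an omitted common cause
# (administrative efficiency) makes out-relief look as if it creates paupers.
import numpy as np

rng = np.random.default_rng(1)
n = 200

efficiency = rng.normal(0, 1, n)                          # unobserved in the naive analysis
out_relief = -0.8 * efficiency + rng.normal(0, 0.6, n)    # efficient areas use less out-relief
pauperism = -1.0 * efficiency + rng.normal(0, 0.6, n)     # ...and have less pauperism

# Simple regression, efficiency omitted: the coefficient is clearly positive.
slope_naive = np.polyfit(out_relief, pauperism, 1)[0]

# Multiple regression with efficiency included: the coefficient falls toward zero.
X = np.column_stack([np.ones(n), out_relief, efficiency])
slope_adjusted = np.linalg.lstsq(X, pauperism, rcond=None)[0][1]

print(f"out-relief coefficient, efficiency omitted:  {slope_naive:+.2f}")
print(f"out-relief coefficient, efficiency included: {slope_adjusted:+.2f}")
```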

Discussions of cause & effect are typically left to courses in study design, while courses in statistics and data analysis focus on statistical techniques. There are valid historical reasons for this. A degree in statistics requires many courses, and as long as all of the principles were covered somewhere, it didn't matter (and was only natural) that some courses focused solely on the theory and application of the techniques. When the first introductory statistics courses were taught, they either focused on elementary mathematical theory or were "cookbook" courses that showed students how to perform by hand the calculations involved in the more commonly used techniques. There wasn't time for much else.

Today, the statistical community generally recognizes that these approaches are inappropriate in an era when anyone with a computer and a statistical software package can attempt to be his/her own statistician. "Cause & effect" must be among the first things that are addressed because this is what most people will use statistics for! Newspapers, radio, television, and the Internet are filled with claims based on some form of statistical analysis. Calcium is good for strong bones. Watching TV is a major cause of childhood and adolescent obesity. Food stamps and WIC improve nutritional status. Coffee consumption is responsible for heaven knows what! All because someone got hold of a dataset from somewhere and looked for associations. Which claims should be believed? Only by understanding what it takes to establish causality do we have any chance of being intelligent consumers of the "truths" the world throws at us.

Freedman points out that statistical demonstrations of causality are based on assumptions that often are not checked adequately. "If maintained hypotheses A,B,C,... hold, then H can be tested against the data. However, if A,B,C,... remain in doubt, so must inferences about H. Careful scrutiny of maintained hypotheses should therefore be a critical part of empirical work--a principle honored more often in the breach than in the observance." That is, an analysis may be exquisite and its logic flawless provided A, B, C hold, yet checking A, B, C rarely receives the same care as the analysis that assumes them.
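
A toy example of what that scrutiny might look like (Python, invented data): suppose the maintained hypothesis is that a relationship is linear when the truth is curved. The slope can be estimated as precisely as you like, but a minute spent with the residuals shows that the maintained hypothesis, and therefore any inference built on it, is in trouble.

```python
# Checking a maintained hypothesis (linearity) before trusting inferences about H.
import numpy as np

rng = np.random.default_rng(2)
x = np.linspace(0, 10, 100)
y = 0.5 * x**2 + rng.normal(0, 1, 100)      # the true relationship is curved

# Fit the model the maintained hypothesis asserts: a straight line.
slope, intercept = np.polyfit(x, y, 1)
residuals = y - (slope * x + intercept)

# Residuals from a correct linear model should show no pattern in x.
# Here they track the curvature, so the analysis rests on a false premise.
curvature = (x - x.mean()) ** 2
print("fitted slope:", round(slope, 2))
print("correlation of residuals with curvature:",
      round(np.corrcoef(residuals, curvature)[0, 1], 2))
```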

The rules for claiming causality vary from field to field. The physical sciences seem to have the easiest time of it because it is easy to design experiments in which a single component can be isolated and studied. Fields like history have the hardest time of it. Not only are experiments all but impossible, but observations often play out over generations, making it difficult to collect new data, while much of the existing data is suspect. In a workshop on causality that I attended, a historian stated that many outrageous claims were made because people often do not have the proper foundations in logic (as well as in the subject matter) for making defensible claims of causality. Two examples that were offered: (1) No two countries that have McDonald's restaurants have ever gone to war with each other. [except for England and Venezuela!] (2) Before television, two World Wars; after television, no World Wars. In similar fashion, one of my friends recently pointed out to his girlfriend that he didn't have any grey hairs until after he started going out with her...which is true, but he's in his late 30s and they've been seeing each other for three years. I suppose it could be the relationship...

Statisticians have it easy, which perhaps is why statistics courses don't dwell on causality. Cause and effect is established through the intervention trial, in which subjects are randomly assigned to two groups that undergo the same experience except for a single facet. Any difference in outcome is then attributed to that single facet.
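
The logic can be sketched in a few lines of Python (the numbers are invented). Randomization makes the two groups alike on average in everything but the treatment, so a difference in means that is too large to blame on the shuffling of labels is attributed to the treatment.

```python
# A minimal randomized intervention trial with a permutation test.
import numpy as np

rng = np.random.default_rng(3)
n = 100

# Randomly assign half of the subjects to treatment.
treated = rng.permutation(np.r_[np.ones(n // 2), np.zeros(n // 2)]).astype(bool)

baseline = rng.normal(50, 10, n)                            # what subjects bring with them
outcome = baseline + 5.0 * treated + rng.normal(0, 5, n)    # hypothetical treatment effect of 5

observed = outcome[treated].mean() - outcome[~treated].mean()

# Reshuffle the labels to see how large a difference chance alone would
# produce if the treatment did nothing.
diffs = []
for _ in range(5000):
    labels = rng.permutation(treated)
    diffs.append(outcome[labels].mean() - outcome[~labels].mean())
p_value = np.mean(np.abs(diffs) >= abs(observed))

print(f"observed difference in means: {observed:.1f}")
print(f"permutation p-value: {p_value:.4f}")
```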

In epidemiology, which relies heavily on observational studies (that is, taking people as you find them), cause and effect is established by observing the same thing in a wide variety of settings until all but the suspected cause can be ruled out. The traditional approach is that given by Bradford Hill in his Principles of Medical Statistics (first published in 1937; 8th edition 1966). He would have us consider the strength of the association, consistency (observed repeatedly by different persons, in different circumstances and times), specificity (limited to specific sets of characteristics), relationship in time, biological gradient (dose response), biological plausibility (which is the weak link because it depends on the current state of knowledge), and coherence of the evidence.

The classic example of an epidemiological investigation is John Snow's determination that cholera is a waterborne infectious disease. This is discussed in detail in every introductory epidemiology course as a standard against which all other investigations are measured. It is also described in Freedman's article.

A modern example is the link between smoking and lung cancer. Because it is impossible to conduct randomized smoking experiments in human populations, it took many decades to collect enough observational data (some free of one type of bias, others free of another) to establish the connection. Much of the observational evidence is compelling. Studies of death rates show lung cancer increasing and lagging behind smoking rates by 20-25 years while other forms of cancer stay flat. Smokers have lung cancer and heart disease at rates greater than the nonsmoking population even after adjusting for whatever potential confounder the tobacco industry might propose. However, when smoking was first suspected of causing lung cancer and heart disease, Sir Ronald Fisher, then the world's greatest living statistician and a smoker, offered the "constitution hypothesis" that people might be genetically disposed to develop the diseases and to smoke, that is, that genetics was confounding the association. This was not an easy claim to put to an experiment. However, the hypothesis was put to rest by a 1989 Finnish study of 22 smoking-discordant monozygotic twin pairs in which at least one twin died. There, the smoker died first in 17 cases. In the nine pairs where death was due to coronary heart disease, the smoker died first in every case.
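
Taking the reported counts at face value, a back-of-the-envelope sign test shows how hard those results are to explain away: if smoking were irrelevant, which twin of a pair dies first would be a coin flip.

```python
# Sign-test arithmetic for the twin study figures quoted above.
from math import comb

def upper_tail(k, n, p=0.5):
    """P(X >= k) for X ~ Binomial(n, p)."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

# Smoker died first in 17 of the 22 discordant pairs.
print(f"P(17 or more of 22 by chance) = {upper_tail(17, 22):.4f}")   # roughly 0.008

# Smoker died first in all 9 pairs where death was from coronary heart disease.
print(f"P(9 of 9 by chance)           = {upper_tail(9, 9):.4f}")     # roughly 0.002
```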

As connections become more subtle and entangled, researchers tend to rely on complicated models to sort them out. Freedman wrote, "Modern epidemiology has come to rely more heavily on statistical models, which seem to have spread from the physical to the social sciences and then to epidemiology." When I first picked up the article and glanced quickly at this sentence, I misread it as, "Modern epidemiology has come to rely more heavily on statistical models than on epidemiology!" I may have misread it, but I don't think I got it entirely wrong.

As a group, those trained in epidemiology are among the most scrupulous about guarding against false claims of causality. Perhaps I can be forgiven my mistake in an era when much epidemiology is practiced by people without proper training who focus on model fitting, ignore the quality of the data going into their models, and rely on computers and complex techniques to ennoble their results. When we use statistical models, it is essential to heed Freedman's warning about verifying assumptions. It is especially important that investigators become aware of the assumptions made by their analyses. Some approaches to causality are so elaborate that basic assumptions about the subject matter may be hidden from all but those intimately familiar with the underlying mathematics, but this is NEVER a valid excuse for assuming that what we don't understand is unimportant.

A good statistician will point out that causality can be proven only by demonstrating a mechanism. Statistics alone can never prove causality, but it can show you where to look. Perhaps no example better illustrates this than smoking and cancer/heart disease. Despite all of the statistical evidence, the causal relationship between smoking and disease will not be nailed down by the numbers but by the identification of the substances in tobacco that trigger the diseases.


Copyright © 2000 Gerard E. Dallal