What’s Healthy?

Don't ask scientists, or the press either

What do red wine, cell phones, and daycare have in common? All have ambiguous links to human health established by epidemiological, or observational, studies. The list of dietary, consumer, and lifestyle choices that can affect wellbeing is long; anybody who reads the morning paper or watches the evening news knows this. Journalists and the public have an insatiable appetite for usable science.


Scientifically speaking, however, the effects of dietary, consumer, and lifestyle choices are often poorly understood. This isn’t such a bad thing; uncertainty is a natural part of the scientific process. But journalists and the public often do not understand the general structure and process of medical research studies, let alone the role uncertainty plays in them.


One reason for this is the difficulty reporters have getting generalized, wide-angle stories about science into print. But last weekend, the cover story of The New York Times Magazine was an exposé on the state of epidemiology, under the headline “Do we really know what makes us healthy?” The answer, obviously, is that we usually do not. “Much of what we’re told about diet, lifestyle and disease is based on epidemiologic studies,” writes Gary Taubes, the author of the Times piece. “What if it is just bad science?”


Epidemiologists study disease patterns (think epidemics) and dietary, consumer, and lifestyle choices within or between large populations. Based on those studies, they try to make inferences about both causes of and ways to prevent chronic illnesses like cancer, heart disease, or asthma. The biggest limitation of these observational studies - as those familiar with them, but few others, well know - is that they can identify associations between activities or events and diseases, but they can never establish causation. “Testing these hypotheses in any definitive way requires a randomized-controlled trial,” writes Taubes. Such controlled experiments, also known as clinical trials, are the “gold standard” of medical research.


Often, clinical trials will refute the hypotheses of observational studies by reaching entirely different conclusions. Some critics say that “more than half” of observational studies are incorrect, according to a Monday article in the Los Angeles Times that was very similar to Taubes’ piece. Both pieces compare the weaknesses of epidemiology to the strengths of clinical trials, and leave the reader with the impression that the former is simply bad science.


The problem, Taubes writes, is that because observational studies “can generate an enormous number of speculations about causes or prevention of chronic diseases, they provide the fodder for much of the health news that appears in the media.” In other words, the health-news craze that has infected the press in recent decades demands definitive stories, and science is often much too complicated for such black-and-white treatment. The consequence, notes Andreas von Bubnoff, who wrote the Los Angeles Times article, is that “often, in response to them, members of the public will go out and dose themselves with this vitamin or that foodstuff.” This is the “dangerous game,” as Taubes describes it, of the “presumption of preventive medicine.”


This is not to say that Taubes and von Bubnoff see no value in epidemiology. They cite its ability to expose unexpected side effects of prescription drugs, track the progression of chronic illnesses within and between populations, and identify predictors of disease. Three major success stories in the field are the linking of dirty drinking water to cholera, smoking to lung cancer, and sun exposure to skin cancer. In addition to describing these merits of observational studies, Taubes and von Bubnoff also point out that clinical trials are not without their own deficiencies. For instance, there are some things that clinical trials simply cannot test. Large populations are difficult to assess, especially over long periods of time; and there are also ethical considerations. You cannot design a clinical trial to test the relationship between arsenic and leukemia, for example, because you cannot ask a control group to expose itself to potentially harmful levels of chemicals. That is where observational studies come in. As Taubes writes, “The appropriate question is not whether there are uncertainties about the epidemiologic data, rather, it is whether the uncertainties are so great that one cannot draw useful conclusions from the data.”


I called Dr. Julie Parsonnet, an epidemiologist at Stanford University whom I had met at a conference in California recently, and asked for her take on Taubes’ article. She agreed that despite a “veneer of negativity,” his work mostly treats the distinction between epidemiological and clinical-trial research fairly. Parsonnet emphasizes, though, that the latter can often be as problematic and unreliable as the former. “All research has to be looked at in the context of everything else that’s known about the subject,” she told me. Taubes, to his credit, predicts in his piece that epidemiologists like Parsonnet “will argue that they are never relying on any single study,” and that “this in turn leads to the argument that the fault is with the press, not the epidemiology.”


This is an astute and incredibly important observation. Epidemiological studies may be inferior to clinical trials at producing conclusive answers to some medical quandaries, but the real problem is that most people, including many journalists who write about this stuff, do not know the key differences between the two types of research.


Matthew Nisbet, an assistant professor of communication at American University who runs a blog about how journalists and others frame science, criticizes Taubes for not making enough of this angle and for leaving the impression that “science can’t be trusted.” Monday, on his blog, Nisbet wrote that readers need epidemiology articles that are more like “a detective story hung around just how amazingly complex it is to figure out the linkages between diet, drug therapies, and human health.” Indeed, there are a few excellent examples of such work, including one about a potential cancer cluster by Chris Bowman at The Sacramento Bee.


But while I agree that Taubes’ article tended to be couched in an unnecessarily negative tone, I also believe his approach was valid. As Nisbet himself asks, “Is it really ‘bad science’ or is it bad communication?” Regardless of whether a reporter seeks to discuss epidemiology generally, like Taubes and von Bubnoff, or specifically, like Bowman, there remains the original dilemma that most readers do not know the basic differences between observational studies and clinical trials. With this in mind, the generalized approach that Taubes took seems all the more useful.


“The fundamental problem is not necessarily reconcilable here,” Parsonnet told me, “because people have an innate desire to protect their health and the press has an innate desire to provide interesting information to sell newspapers.” As long as that is the case, there will continue to be three basic types of epidemiological journalism: that which typically finds its way into papers and magazines, heralding the latest research; that which, like Bowman’s piece in the Bee, relies on its own investigations; and that which, like Taubes’ and von Bubnoff’s work, takes the wide-angle, explanatory approach.


With all three, however, the challenge is the same: journalists must explain that epidemiology is probabilistic, rather than absolute; that it is about chance, not certainty. With every story, reporters must precisely describe the likely consequence of any action - doubling or halving the risk of heart disease, for example. They must describe any internal factors that affect confidence in the study - the bigger the population and the longer the period of time examined, the better. And they must describe any external factors that affect confidence in the study - that is to say, the number and strength of supporting or competing hypotheses.
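One way to make that precision concrete is to spell out the difference between relative and absolute risk. A quick worked example - the numbers here are purely hypothetical, not drawn from any study mentioned above - shows how a “doubling” of risk can sound dramatic while describing a small change in absolute terms:

\[
\text{relative risk} = \frac{p_{\text{exposed}}}{p_{\text{unexposed}}} = \frac{0.02}{0.01} = 2
\qquad
\text{absolute increase} = p_{\text{exposed}} - p_{\text{unexposed}} = 0.02 - 0.01 = 0.01
\]

In this hypothetical case the relative risk doubles, yet the absolute change amounts to one additional case per hundred people - a distinction readers need in order to judge whether a finding actually matters to them.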


Curtis Brainard writes on science and environment reporting. Follow him on Twitter @cbrainard.