Recently it was revealed in io9 that a study on the health effects of chocolate was B.S. It wasn’t faked, but it was B.S., as it was designed to be, because the people doing it wanted to see how much press such a bad study would get.
A couple blogs I sometimes read, Pharyngula and The Incidental Economist, have pointed out the questionable ethics of publishing a study that involved lying to the participants (but participants are often lied to, at least in psychological studies), their fellow scientists, and the media; I’m not sure if I’m equally concerned or not. (Pharyngula also points out, “it’s a spectacular way to illustrate p-hacking and the unreliability of peer review”.) Multiple people have also pointed out that most/all of the places publishing the chocolate study were not reputable (depending on your definition of reputable–I’d say Prevention is an edge case). (Daniel Engber notes that this may be because there have been multiple other studies showing benefits for chocolate, so this one lacks novelty, rather than solely because few were fooled.)
A couple sources quoted in the Pharyngula post included some surprising objections, though.
There’s real wrongdoing in both science and journalism (most infamously, see Stephen Glass, Jayson Blair, Janet Cooke, Jonah Lehrer, Brian Williams). But intentionally creating wrong to make a point is both bizarre and potentially very damaging.
“Our key resource as journalists is credibility,” Edmonds told me. “And a deceptive ploy like this could damage that.”
The end of the experiment is that millions of people all over the world were told that chocolate will help them lose weight. The consequence is that all those people who search (in vain) for fad diets—often to help them with their self-image—have been given yet another false data point and another failure to reflect upon.
In terms of ethical analysis, this is an experiment that did not tell us anything that wasn’t known already. On that score alone, the experiment fails to pass muster. Then there are the downsides. The reputation of science journals and science communicators just got a slight additional tarnish.
So, why is it a bad thing that people know that “science journals and science communicators” sometimes publish garbage? Why is it a bad thing for people to know that fad diets are often based on garbage science/”science”? I’m sure that these guys are worried that people will think ALL science is garbage based on this, but I’m not sure that that’s worse than people thinking that NO science is garbage. Best case scenario, people will be more skeptical of science journalism and wait for a strong consensus. Worst case scenario, they will retreat to “common sense”. (Well, some might get into the particularly bad kinds of alternative medicine, I guess, but I doubt that this will push many people into that who weren’t interested in it before.) Worst case scenario if they’re NOT skeptical of science reporting: they jump on every new bandwagon and waste their money on a bunch of questionably-effective supplements and such, and try potentially-dangerous new health advice that they think was proved by “a study”.
There are writers in FA who connect their religious views with their Fat Acceptance blogging in a way I find interesting and very poetic at times. For me, there are resonances between Fat Acceptance and atheism as well. One that’s relevant here: the end of belief in something isn’t always sad, at least once you’ve fully processed it. It can be liberating.
On to some more detailed discussion of the study.
Slate Star Codex also discussed the chocolate study, and how some people are taking unwarranted conclusions from it. I think some of his commentary about Conclusion 1 is kind of questionable–he’s combining findings of blood pressure reduction, flow-mediated dilation, insulin sensitivity, absolute BMI, and weight gain. Only the non-meta-analyses had findings related to BMI or weight gain, and even with meta-analyses we should be cautious, since the bias against publishing negative findings can affect them too, and from what I can tell from a quick pass, the effects were statistically significant but small in magnitude. (And similarly, in Conclusion 2, I think he overstates how much certainty we can have about chocolate’s health benefits.)
Anyway, the most important takeaway from this most recent chocolate study, besides, perhaps, Slate Star Codex’s Trust Science Journalism Less [with caveats about where it’s reported, etc.], is that many scientific studies will measure multiple variables to see if one of them is significant. In the words of John Bohannon, one of the chocolate study’s authors and the author of the io9 article:
Here’s a dirty little science secret: If you measure a large number of things about a small number of people, you are almost guaranteed to get a “statistically significant” result. Our study included 18 different measurements—weight, cholesterol, sodium, blood protein levels, sleep quality, well-being, etc.—from 15 people. (One subject was dropped.) That study design is a recipe for false positives.
With our 18 measurements, we had a 60% chance of getting some “significant” result with p < 0.05. (The measurements weren’t independent, so it could be even higher.) The game was stacked in our favor.
It’s called p-hacking—fiddling with your experimental design and data to push p under 0.05—and it’s a big problem. Most scientists are honest and do it unconsciously. They get negative results, convince themselves they goofed, and repeat the experiment until it “works.” Or they drop “outlier” data points.
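Bohannon’s 60% figure is easy to check yourself: if the 18 measurements were independent and every null hypothesis were true, the chance of at least one p-value landing under 0.05 is 1 − 0.95¹⁸ ≈ 0.60. Here’s a minimal sketch (my own illustration, not the study’s actual code) that confirms the arithmetic and backs it up with a simulation, using the fact that under a true null hypothesis a p-value is uniformly distributed on [0, 1]:

```python
import random

random.seed(0)
ALPHA = 0.05          # significance threshold
N_MEASUREMENTS = 18   # variables measured per "study"
N_STUDIES = 100_000   # simulated studies

# Closed form: P(at least one false positive) = 1 - (1 - alpha)^18
analytic = 1 - (1 - ALPHA) ** N_MEASUREMENTS

# Monte Carlo check: under a true null, each p-value is uniform on
# [0, 1], so a "study" is just 18 uniform draws. Count how often at
# least one draw crosses the 0.05 threshold.
hits = sum(
    any(random.random() < ALPHA for _ in range(N_MEASUREMENTS))
    for _ in range(N_STUDIES)
)
simulated = hits / N_STUDIES

print(f"analytic:  {analytic:.3f}")   # 0.603
print(f"simulated: {simulated:.3f}")  # close to 0.603
```

As Bohannon notes, the real number could be even higher, since correlated measurements and the freedom to slice subgroups (men vs. women, this week vs. that week) give you more chances than this independent-draws model does.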
Similarly, although the Look AHEAD study was nowhere near suffering from the low-sample-size problem that the chocolate study had, it reported no effect of weight loss on heart attacks, strokes, cardiovascular deaths, blood sugar, blood pressure, or cholesterol… Oh look! They took fewer medications! …Well, that’s why I’m skeptical of how meaningful the fewer-medications part is.
Bohannon also says:
People who are desperate for reliable information face a bewildering array of diet guidance—salt is bad, salt is good, protein is good, protein is bad, fat is bad, fat is good—that changes like the weather. But science will figure it out, right? Now that we’re calling obesity an epidemic, funding will flow to the best scientists and all of this noise will die down, leaving us with clear answers to the causes and treatments.
Short answer: no. One reason, which Bohannon doesn’t explicitly state, is that “now that we’re calling obesity an epidemic” is producing exactly the opposite–not an appetite for yearslong studies, but a big appetite for quick, headline-grabbing ones. Calling obesity an epidemic means that people’s emotions are running higher, and that they’re willing to try things that don’t actually work because of the “don’t just stand there, do something!” effect. (I’m more inclined to say “don’t just do something, stand there!”)