A Career Postmortem: Dr. Brian Wansink


Dr. Brian Wansink. Photo courtesy of Wikimedia Commons.

Being formally trained as a food scientist in my undergrad years, I had heard about Wansink’s 2006 book Mindless Eating, and became an admirer after reading it. Because I was a casual reader, I made no effort to “look under the hood” at any of the papers and studies he referred to, and took him at his word as a then-executive director of the Center for Nutrition Policy and Promotion at the US Department of Agriculture (USDA). He was responsible for overseeing the design of the 2010 Dietary Guidelines for Americans, as well as the government-run nutrition site MyPyramid.gov. He was also a long-time director of the Food and Brand Lab at Cornell University. With all that under his belt, why would I question what he wrote?

The book Mindless Eating has inspired many to be more active and deliberate in managing their nutritional cues, and to take a deeper look into how humans are hard-wired in their perceptions of food. The real strategy is to find ways to work with these hard-wired perceptions rather than against them.

The way he ran his experiments (mostly on college-aged subjects attending Cornell) was to offer free food (what college student wouldn’t be attracted by that?). Once the subjects were hooked by the free food (and sometimes a movie), the science kicked in. Plates and food packaging were weighed by difference without the subjects ever knowing it was being done, giving the researchers a fairly accurate calorie count. Then the subjects were asked about their own perceptions: How much did you think you ate? How many calories did you think you consumed? Depending on what was being investigated, the results, when fed back to the participants, were often remarkable and surprising. Some of the perceptual tricks in the design of the experiments even fooled graduate students in dietetics. He showed that these tricks could be as simple as changing the size of the plate.

Dr. Wansink seemed sly and clever. He had to be, because humans can be even more sly and clever in fooling themselves into thinking they ate less than they did. The world clearly needed someone like Wansink to expose our human frailties to ourselves, and to show us how we fool ourselves into eating more than we planned to, or more than we thought we did.

Two-Buck Chuck comes in many varieties, including red and white.

In Mindless Eating, among his many tales, he discusses people’s perceptions of a meal based on the perceived provenance of the wine they were served. The investigators purchased several cases of the cheapest wine possible, Charles Shaw wine, nicknamed “Two-Buck Chuck”, sold at the Trader Joe’s chain in the United States. At the time, Charles Shaw wine really could be purchased for two dollars (USD). All bottles had their labels removed and replaced with fictitious ones: some labels suggested the wine was from California, while others suggested it was from North Dakota, a state not known for making wine. The patrons given the various wines with their meals were asked to rate the food (not the wine) they were served, and whether they would come back. The reaction was far more favourable if the label on the bottle suggested California wine. It was a bit of a sly trick, but at least the 117 diners in the study had a prix fixe all-you-can-eat gourmet meal set at $21.00 (USD), with free wine.

There was another story Wansink liked to tell, about a bowl of tomato soup that was refilled from the bottom through a food-grade feeding tube invisible to the participant. The tubing led to a two-gallon pot containing the soup, and participants seemed oblivious to the fact that the bowl would never empty. The finding was that people will eat on average 73% more soup than a normal serving if there is no visual cue to tell them to stop eating. Our stomachs are indeed a very crude instrument for measuring how much we have eaten. We need visual cues, which can be interfered with by a bottomless bowl, but also by ordinary distractions, which is what this experiment set out to demonstrate. For this experimental design, Wansink received the Ig Nobel Prize in Nutrition in 2007.

Ig Nobel Prizes are awarded to scientists whose research first makes people laugh, then makes them think. The prizes are awarded by the publication Annals of Improbable Research (AIR) and handed out at an annual ceremony held at Harvard University in Cambridge, Massachusetts, with lectures from the prizewinners given across town at MIT.

Wansink showed how our perceptions of food quantity are vulnerable to lighting; the presence of company, entertainment, or other distractions; the size of our plates; the shape of our drinking glasses; the proximity of junk food to where we happen to be sitting; and so on. All of it was compelling and often headline-grabbing. He was interviewed about his findings by all three major American television networks over the years.

He was apparently able to back his findings quantitatively, but any graduate student using his findings would now be well advised to check his numbers. No one has accused him of fraudulent research, just sloppy research, with statistical calculations that didn’t match up with other reported numbers. It began with a now-deleted blog post in which, according to The Cut,

Wansink told the story of a Turkish Ph.D. student who came to work in his lab for free. “When she arrived,” he wrote, “I gave her a data set of a self-funded, failed study which had null results (it was a one month study in an all-you-can-eat Italian restaurant buffet where we had charged some people ½ as much as others). I said, ‘This cost us a lot of time and our own money to collect. There’s got to be something here we can salvage because it’s a cool (rich & unique) data set.’ I had three ideas for potential Plan B, C, & D directions (since Plan A had failed).”

Wansink wrote glowingly about the Ph.D. student, Ozge Sigirci, and about her ability to see the offer of data as an opportunity and get herself published. And that she did: five papers bylined by both Wansink and Sigirci came out of this “failed study”. To grad students reading the blog and wanting their own work published, this raised eyebrows. He was suggesting that it was just fine for a scientist to take a failed study, then massage the data against different null hypotheses until a correlation turns up that is significant at the 95% confidence level, which rejects the null hypothesis (Ho). This is science done backwards. You normally pose the hypotheses before the experiment is run, not after. In other words, a scientist doesn’t run an experiment without knowing what they are researching beforehand.
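To see why this backwards approach is dangerous, here is a minimal sketch in Python. It is my own illustration, not Wansink’s actual data or analysis, and the group sizes and number of outcomes are made-up assumptions. It shows what happens when one “failed” data set is sliced into many separate comparisons: even when every true effect is zero, some comparisons will clear the p < 0.05 bar by chance alone.

```python
# Illustration only: simulate how testing many hypotheses on pure noise
# tends to produce at least one "significant" result at p < 0.05.
# Hypothetical numbers; this is not Wansink's data or analysis.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

n_diners = 60      # hypothetical diners per price group
n_outcomes = 20    # hypothetical outcomes sliced out of one data set

false_positives = 0
for _ in range(n_outcomes):
    # Both "half-price" and "full-price" groups are drawn from the SAME
    # distribution, so every true effect is zero by construction.
    half_price = rng.normal(loc=0.0, scale=1.0, size=n_diners)
    full_price = rng.normal(loc=0.0, scale=1.0, size=n_diners)
    _, p = stats.ttest_ind(half_price, full_price)
    if p < 0.05:
        false_positives += 1

print(f"'Significant' findings out of {n_outcomes} null comparisons: {false_positives}")
# With 20 independent null tests, the chance of at least one p < 0.05
# is about 1 - 0.95**20, roughly 64%.
```

This is why a hypothesis invented after rummaging through the data needs a fresh, pre-planned experiment before it deserves to be believed.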

The kind of statistical error being committed in these papers is known as a “Type M error” (“M” stands for “magnitude”). Just because a correlation clears the 5% significance threshold does not mean its size can be trusted: in a noisy, underpowered study, the estimates that happen to reach significance tend to exaggerate the true effect. Remember, this result was stumbled upon as a side effect of slicing and dicing the data until a correlation with “anything” emerged. In that context, how much information is your data really giving you to reject the Ho, which came as more of an afterthought? It would be better to run a new, pre-planned experiment to see whether the same thing happens when you look for it deliberately.
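As a rough illustration of the point (again my own sketch, with assumed numbers rather than anything taken from Wansink’s studies), the simulation below gives a small true effect to an underpowered study and keeps only the runs that happen to reach significance. Those lucky estimates overstate the true effect several times over, which is exactly the magnitude problem.

```python
# Sketch of a Type M ("magnitude") error, after Gelman's framing: with a small
# true effect and a noisy, underpowered study, the estimates that happen to
# reach p < 0.05 exaggerate the true effect size. All numbers are assumptions.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

true_effect = 0.2     # small true difference between groups (in SD units)
n_per_group = 20      # small, underpowered sample
n_sims = 10_000

significant_estimates = []
for _ in range(n_sims):
    treated = rng.normal(true_effect, 1.0, n_per_group)
    control = rng.normal(0.0, 1.0, n_per_group)
    estimate = treated.mean() - control.mean()
    _, p = stats.ttest_ind(treated, control)
    if p < 0.05:
        significant_estimates.append(estimate)

exaggeration = np.mean(significant_estimates) / true_effect
print(f"Average 'significant' estimate is {exaggeration:.1f}x the true effect")
# For these settings the exaggeration factor typically comes out around 3-4x.
```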

In the blog, Wansink then listed the papers that were published and where they were published. This gave readers five key papers to be sceptical about. And there was a research team who did the checking: Tim van der Zee, Jordan Anaya, and Nicholas Brown looked into four of these five papers and found 150 statistical errors. The error findings were based on inconsistencies in the published tables alone, without looking at the raw data. To look at the raw data, a scientist normally needs to ask the scientist who ran the experiment, and it didn’t help that, after repeated requests, Wansink refused to share his data with van der Zee et al. to settle the matter.
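For a flavour of how errors can be caught from a published table alone, here is a small granularity check in the spirit of the GRIM test. It is a hypothetical example of the general approach, not necessarily the exact procedure van der Zee, Anaya, and Brown used: if an item is answered in whole numbers by n people, the reported mean has to be consistent with some integer total divided by n.

```python
# A GRIM-style granularity check: can a mean reported to a given number of
# decimal places actually arise from n integer-valued responses?
# Illustrative only; not claimed to be the checkers' exact method.

def mean_is_possible(reported_mean: float, n: int, decimals: int = 2) -> bool:
    """Return True if the reported mean could come from n integer scores."""
    # The candidate total is the nearest integer to mean * n; test it and
    # its neighbours against the reported (rounded) mean.
    for total in (round(reported_mean * n) + d for d in (-1, 0, 1)):
        if round(total / n, decimals) == round(reported_mean, decimals):
            return True
    return False

# Hypothetical example: with 25 whole-number answers, possible means move in
# steps of 1/25 = 0.04, so a reported mean of 3.47 cannot be right.
print(mean_is_possible(3.47, 25))   # False -> inconsistent with the stated n
print(mean_is_possible(3.48, 25))   # True  -> arithmetically possible
```

Checks like this work entirely from the numbers printed in a paper, which is why refusing to share raw data does not make inconsistencies go away.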

Now, there is no rule saying that he has to share his data. But to paraphrase Andrew Gelman on the blog Statistical Modeling, Causal Inference, and Social Science, there is also no rule saying that anyone in the scientific community has to take him seriously, either. The various journals have, since 2017, retracted at least 18 of his papers, according to Wikipedia. Another 15 have been formally corrected.

Cornell determined in September 2018 that he had committed academic misconduct. According to Science magazine from 21 September 2018:

In a statement issued [on the 20th of September], Cornell’s provost, Michael Kotlikoff, said the investigation had revealed “misreporting of research data, problematic statistical techniques, failure to properly document and preserve research results, and inappropriate authorship.”

Wansink was removed from research and teaching activities at Cornell, according to Science, and he resigned after the statement was issued.

 

What are your thoughts?