If the myriad anecdotes about kids nationwide complaining about how awful their school lunches were — due to federal guidelines — weren’t enough, now there’s evidence that so-called “smarter lunchrooms” are based on bogus science.
The pair of Cornell scientists behind the 2010 federal Smarter Lunchrooms program, Brian Wansink and David Just, put forth the idea that strategies like placing “fruit before chips in cafeteria lines,” using “pre-sliced rather than whole fruit,” and offering “non-fat white milk prominent in beverage displays” would encourage students to eat healthier meals.
Reason reports, however, that other scientists are now calling into question the research methods used by the Cornell duo:
In a new paper—which has not yet been peer-reviewed or published—Eric Robinson, a professor with the University of Liverpool’s Institute of Psychology, Health, & Society, points to problems ranging from simple sloppiness to errors that seriously call into question the integrity of all of Wansink’s work.
Earlier this year, Nicholas Brown, a PhD student at the University of Groningen, discovered that much of Wansink’s work was lifted directly from his previous work without citations or acknowledgement—a practice that’s at least frowned upon in academia. And in at least one instance, two Wansink papers that purportedly rely on vastly different data sets yielded almost identical end results, down to the decimal point, but with enough slight differences to rule out simple clerical error.
“If the two studies in question are the same, then it would appear that there has been duplicate publication of the same results, which is not normally considered to be a good ethical practice,” writes Brown. “On the other hand, if the two studies are not the same, then the exact match between the vast majority of their results represents quite a surprising coincidence.”
In a January paper titled “Statistical heartburn: An attempt to digest four pizza publications from the Cornell Food and Brand Lab,” researchers Tim van der Zee, Jordan Anaya, and Nicholas Brown analyze four articles from Wansink and his colleagues, finding “a remarkably high number of apparent errors and inconsistencies.” These include:
— many instances in which the mean or standard deviation given was impossible given the sample size stated in the same table. (“For example, with a sample size of 10 any mean reported to two decimal places must always have a zero in the second decimal place; yet, this table contains means of 2.25 and 3.92 for a sample size of 10.”)
— inconsistent sample sizes and other numbers within and across articles that purportedly use the same data
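The impossibility noted in the first bullet is a simple granularity argument: if responses are whole numbers, the sum of 10 of them is an integer, so the mean must end in a multiple of 0.1. Here is a minimal sketch of that check (in the spirit of the “GRIM test” developed by Brown and James Heathers), assuming integer-valued data; the function name and interface are illustrative, not from the paper:

```python
import math

def grim_consistent(reported_mean, n, decimals=2):
    """Return True if a mean reported to `decimals` places is arithmetically
    possible for integer-valued data with sample size `n` (GRIM-style check).

    The true sum of n integer responses must itself be an integer, so we test
    whether either integer adjacent to reported_mean * n reproduces the
    reported mean when divided by n and rounded to the same precision.
    """
    target = round(reported_mean, decimals)
    approx_total = reported_mean * n
    for possible_sum in (math.floor(approx_total), math.ceil(approx_total)):
        if round(possible_sum / n, decimals) == target:
            return True
    return False

# The examples quoted above: neither 2.25 nor 3.92 is achievable with n = 10.
print(grim_consistent(2.25, 10))  # False
print(grim_consistent(3.92, 10))  # False
print(grim_consistent(2.20, 10))  # True — 22/10 works
```

A check this simple is what makes the reported inconsistencies striking: no access to the raw data is needed, only the published means and sample sizes.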
In total, they found approximately 150 inconsistencies in reported statistics from the four papers.
Those four papers all used the same data, but “none of the articles mentions that they are based on the same data set as their predecessors, even though they were published over a period of many months,” van der Zee et al. note.
“We consider that this may constitute a breach of good publication ethics practice.”