Why the survey showing students endorse violence to stop speech is not ‘junk science’

When a UCLA researcher released survey results that found a startlingly high percentage of college students endorsing shoutdowns of controversial campus speakers and even violence to stop their events, one reporter found “polling experts” who called it “junk science.”

But that “junk” designation is itself junk, according to a Washington Post columnist who looked more deeply at the survey’s methodology in the context of common polling practice.

Lois Beckett of The Guardian had described John Villasenor as a professor of “electrical engineering” while leaving out that he also teaches public policy and is a senior fellow at the Brookings Institution, the left-leaning think tank that published his results.

She highlighted that Villasenor’s survey funding came from the “conservative” Charles Koch Foundation, which had no role in designing the survey, and faulted the survey for using an “online opt-in panel” rather than a “probability sample” of randomly selected college students.

Here’s the problem with Beckett’s dig at Villasenor: his survey uses the same methods as other surveys that Beckett uncritically reports on, writes Post columnist Catherine Rampell, who highlighted Villasenor’s findings originally:

While there could plausibly be other problems with this survey (as is true with any survey), these criticisms in and of themselves don’t render a poll “junk science.”

The critiques made in the Guardian article are either disingenuous, confused or both.

Beckett is implying that any survey that doesn’t cull randomly selected college students from a substantially complete list of all potential interview subjects is not trustworthy.


But precious little polling can meet these standards in an era where it’s harder to find willing survey participants, according to Rampell, which is why some of the biggest names in polling – including Nielsen, Harris Poll and YouGov – often use opt-in panels:

Such polls are often cited by The Post, FiveThirtyEight, the New York Times and yes, even the Guardian.

Including, in fact, multiple times by Beckett, the Guardian reporter who just wrote that article in which critics suggested such polls are “junk science.”

Rampell lays out exactly the process that Villasenor followed, involving two reputable polling companies, to find a large enough sample of college students who had previously indicated their willingness to be interviewed. Because “the gender ratio was off, he re-weighted the data. Which is normal.”
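The re-weighting Rampell describes is a standard survey adjustment known as post-stratification. Here is a minimal sketch of how it works, using hypothetical sample counts and population benchmarks (these numbers are illustrative, not Villasenor’s actual data or code):

```python
# Illustrative post-stratification by gender.
# All numbers are hypothetical, not Villasenor's data.

# Suppose the opt-in panel returned 700 women and 300 men,
# but the benchmark college population is roughly 56% / 44%.
sample_counts = {"women": 700, "men": 300}
population_shares = {"women": 0.56, "men": 0.44}

n = sum(sample_counts.values())

# Weight for each group = (population share) / (sample share),
# so the weighted totals match the benchmark.
weights = {
    g: population_shares[g] / (sample_counts[g] / n)
    for g in sample_counts
}

for g, w in weights.items():
    print(f"{g}: weight = {w:.3f}, weighted count = {w * sample_counts[g]:.0f}")
# women: weight = 0.800, weighted count = 560
# men:   weight = 1.467, weighted count = 440
```

Each respondent in an under-represented group simply counts a bit more, and each in an over-represented group a bit less, which is why Rampell calls the step “normal.”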

While the timing of the poll – during the Trump administration and a week after the white nationalist Charlottesville rally – could affect the results, any poll “tells you about people’s views at a specific moment in time,” Rampell writes:

That doesn’t mean we should ignore these findings — or even that whatever effects Charlottesville may have had are temporary. …

Villasenor’s process for surveying college students is not unusual. Consider a 2016 survey of college students released by the Panetta Institute, which was administered by Hart Research Associates [using a “multi-million-member respondent panel” from an online survey vendor]. Some critics have cited this poll favorably while condemning Villasenor’s survey.

Rampell notes that Beckett is inaccurately characterizing a Gallup-conducted survey that found results different from Villasenor’s. That survey obtained participation from only 32 colleges out of a “random sample” of 240, and the individual response rate at those few colleges was 6 percent:

I point this out not to pick on Gallup. It does excellent work, and I cite its stuff all the time. In fact I cited this very survey of college students last year and would not hesitate to do so again.

The point is that even this supposed gold-standard of polls doesn’t actually meet the impossible “probability sample” gold standard. You can call your poll a “probability-based sample,” but that’s really a theoretical concept.
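To put those Gallup figures in perspective: if only 32 of 240 sampled colleges took part, and roughly 6 percent of students at those colleges responded, the combined effective response rate lands well under 1 percent. A back-of-envelope sketch, assuming (simplistically, since the column does not spell this out) that the two rates simply multiply:

```python
# Back-of-envelope effective response rate for the Gallup survey
# described above. The input figures come from Rampell's column;
# multiplying the two rates is a simplifying assumption.
colleges_sampled = 240
colleges_participating = 32
student_response_rate = 0.06  # within the participating colleges

college_rate = colleges_participating / colleges_sampled  # ~0.133
effective_rate = college_rate * student_response_rate     # ~0.008

print(f"College participation rate: {college_rate:.1%}")   # 13.3%
print(f"Effective response rate:    {effective_rate:.1%}")  # 0.8%
```

That is the sense in which even a nominally “probability-based” poll falls far short of the textbook ideal once non-response is accounted for.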

There is no “magic phone book” from which a pollster can produce a “true probability survey,” contrary to what Beckett seems to believe, and there is much debate in the polling community on this subject, Rampell says:

If we are going to be choosy about methodology, we must be consistent about our methodological choices before we look at the results. Otherwise we aren’t doing science at all.

Read her column and Beckett’s original skeptical report.
