Last week we noted the hypocrisy of a reporter for The Guardian casting aspersions on a UCLA professor’s survey of student speech attitudes.
She faulted that survey’s use of an “online opt-in panel” of 1,500 undergraduates rather than a “probability sample,” implying that its core findings – that shockingly large percentages of students support shoutdowns and even violence against controversial speakers – are unreliable.
In fact, she had uncritically cited surveys that used the same methodology in her previous reporting. Mainstream media organizations frequently partner with reputable polling firms on nonprobability surveys.
Now Stephanie Slade at Reason has a much deeper look at this question of methodology in an age when it’s functionally impossible to get a true random sample for polling purposes:
For years, most good survey researchers eschewed nonprobability polling on the grounds that drawing a random sample (i.e., one where everyone has an equal chance of being interviewed) is how you know that the opinions of the relatively small number of people you actually hear from are reflective of the opinions of the population as a whole. …
Even the very best polling companies have seen response rates plummet into the single digits, meaning their raw numbers have to be adjusted (“weighted,” in pollster parlance) more aggressively to try to approximate a representative sample. And it’s becoming more and more expensive over time. …
“All forms of surveys today—whether they start with a probability sample or not—the completed sample is not truly random, and there has to be some sort of correction,” [Pollster.com founder and SurveyMonkey election polling chief Mark] Blumenthal says.
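The “weighting” Blumenthal describes can be illustrated with a toy post-stratification sketch. All numbers here are hypothetical, and real pollsters adjust across many demographic variables at once (often via raking), but the basic arithmetic looks like this:

```python
# Toy post-stratification example (hypothetical numbers).
# Suppose a survey of 1,000 respondents over-represents older adults:
# 700 respondents are 50+, but assume census data says only 40% of
# the adult population is 50+.

sample = {"under_50": 300, "50_plus": 700}        # hypothetical respondent counts
population = {"under_50": 0.60, "50_plus": 0.40}  # hypothetical census shares

n = sum(sample.values())

# Weight = (population share) / (sample share), so over-represented
# groups count for less and under-represented groups count for more.
weights = {g: population[g] / (sample[g] / n) for g in sample}

for group, w in weights.items():
    print(f"{group}: weight = {w:.2f}")
# under_50: 0.60 / 0.30 = 2.00
# 50_plus:  0.40 / 0.70 ≈ 0.57
```

The point of the quoted passage is that as response rates fall, the gap between the raw sample and the population widens, so these corrections have to do more and more of the work – whether the sample started out as a probability sample or not.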
No one intelligently disagrees that probability samples are better, but it’s disingenuous to say they are so much better than nonprobability surveys that the extra expense and time are worth it across the board.
The trade association for polling is building out a framework for nonprobability polling, and the firm that oversaw the UCLA survey says its methodology was “consistent” with those best practices.
Even the former head of the American Association of Public Opinion Research, who is quoted in the Guardian article, tells Slade he “might not feel the same way in two years” about publicly releasing surveys based on nonprobability samples.