General Discussion
This is awesome: the NYT gave the raw data from its FL poll to 4 different pollsters to show how interpretations change.
Results using the same data (C = Clinton, T = Trump):
NYT: C +1
Franklin: C +3
Ruffini: C +1
Omero: C +4
Corbett-Davies: T +1
Polling results rely as much on the judgments of pollsters as on the science of survey methodology. Two good pollsters, both looking at the same underlying data, could come up with two very different results.
How so? Because pollsters make a series of decisions when designing their survey, from determining likely voters to adjusting their respondents to match the demographics of the electorate. These decisions are hard. They usually take place behind the scenes, and they can make a huge difference.
To illustrate this, we decided to conduct a little experiment. On Monday, in partnership with Siena College, the Upshot published a poll of 867 likely Florida voters. Our poll showed Hillary Clinton leading Donald J. Trump by one percentage point.
We decided to share our raw data with four well-respected pollsters and asked them to estimate the result of the poll themselves.
http://www.nytimes.com/interactive/2016/09/20/upshot/the-error-the-polling-world-rarely-talks-about.html?_r=0
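The weighting decisions described in the excerpt can be illustrated with a toy calculation. This is a minimal sketch, not the Upshot's actual method: the ten respondents and the two turnout assumptions are invented, standing in for the 867 real interviews, just to show that the same raw sample can yield different margins under different demographic weights.

```python
from collections import Counter

# Invented raw sample: (candidate, age_group) pairs for 10 respondents.
sample = (
    [("Clinton", "young")] * 3 + [("Trump", "young")] * 1 +
    [("Clinton", "old")] * 2 + [("Trump", "old")] * 4
)

def weighted_margin(sample, turnout):
    """Clinton-minus-Trump margin after weighting each age group
    to an assumed share of the likely electorate."""
    counts = Counter(sample)                      # tallies per (candidate, group)
    group_sizes = Counter(age for _, age in sample)
    margin = 0.0
    for (cand, age), n in counts.items():
        weight = turnout[age] / group_sizes[age]  # per-respondent weight
        margin += n * weight if cand == "Clinton" else -n * weight
    return margin

# Pollster A assumes a younger electorate; Pollster B an older one.
print(weighted_margin(sample, {"young": 0.5, "old": 0.5}))  # Clinton ahead
print(weighted_margin(sample, {"young": 0.4, "old": 0.6}))  # dead tie
```

With a 50/50 age split the sample shows Clinton up about 8 points; shifting the assumed electorate to 40/60 wipes that lead out entirely, with no change to the underlying interviews.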
Nate Cohn's Upshot is doing some really awesome things in polling analysis and explanation. He's really improved on what Nate Silver started when 538 was hosted by the NYT.
Loki Liesmith
(4,602 posts)
imo
DarthDem
(5,255 posts)
Harry Enten at 538 interests me more than Silver too.
Loki Liesmith
(4,602 posts)
Sharp too, and less prone to pontificate than Nate (Silver).
Gormy Cuss
(30,884 posts)Trends are much better indicators, as are a meta analysis of polls done by different reputable pollsters at about the same time.