The Dim-Post

February 24, 2014

Tracking poll update

Filed under: Politics — danylmc @ 9:16 am

The bias-corrected poll (it compares polling before the election against the actual result and adjusts subsequent polls) is here; a non-interactive version is below. The non-bias-corrected version is here. It all speaks for itself. National definitely trending up. Labour trending down. New Zealand First probably trending up (National ruling them in?). The Greens probably down (Norman meeting Dotcom?). I’ve added the Conservative Party to illustrate the lack of traction this party has despite extensive media coverage and repeated stunts.

[Chart: nzpolls20140224 — bias-corrected tracking poll]
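In simple terms, a house-effect correction of the kind the chart applies can be sketched as below. This is a deliberately naive illustration of the idea, not the actual model behind the chart (as the comments below discuss, the real model estimates the offset jointly with a fitted trend rather than from a single poll); the poll figures are invented, and only the 47.3% is National's actual 2011 party vote.

```python
# Naive sketch: estimate one firm's bias as the gap between its final
# pre-election poll and the actual election result, then shift that
# firm's later polls by the same amount.

actual_result = 47.3            # National's 2011 party vote, %
final_pre_election_poll = 50.3  # hypothetical: the firm's last pre-election poll
new_poll = 48.0                 # hypothetical: a new poll by the same firm

house_bias = final_pre_election_poll - actual_result  # ~3.0 points
adjusted_poll = new_poll - house_bias                 # ~45.0
```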

37 Comments »

  1. Interesting. With Roy Morgan polling National at 48%, I would have thought the bias-corrected line would be a bit higher.

    Comment by Andrew — February 24, 2014 @ 10:21 am

  2. This graph assumes that the bias before the election would be the same today as it was then. I don’t think this can possibly hold true for all future polls up until the next election. It is possible that the election result was affected by a one-off swing to NZF after the “teapot tapes” saga, where a bunch of soft National voters were scared off by all the hoopla surrounding that beat-up. NZF did seem to be the big winner from that, looking at their result.

    Comment by Andrew S — February 24, 2014 @ 11:14 am

  3. @Andrew – I’m sure that’s what all pollsters would like to believe, but in reality we don’t know how big that swing really was. Some of the difference could be due to a swing, and some could be due to bias and random error. Also, some polls may have tweaked their methodology since the last election. It’ll be really interesting to see how the election result compares to this chart.

    Comment by Andrew — February 24, 2014 @ 11:45 am

  4. 48% is the latest RM result, but the two before that are 47% and 43.5%. The interactive shows the RM line in about the right place. RM appears to over-predict National support less than the other firms, but it still favours National by about 2-3 percentage points (compared to 4-5 for the others).

    Comment by pete — February 24, 2014 @ 11:55 am

  5. Ahhh, thanks Pete.

    Comment by Andrew — February 24, 2014 @ 11:59 am

  6. @Andrew – yes, totally agree. I was more trying to use that as an example of a one-off event (or more than one) affecting the election result, which we have now based a rolling graph on, expecting those same conditions to hold until the next election.

    Comment by Andrew S — February 24, 2014 @ 12:02 pm

  7. The Greens seem to semi-frequently have an outlying low poll result (as do Labour). Earlier in this cycle it seemed to be the Herald DigiPoll, but the last two have been Colmar Brunton. I’d want to see at least one more poll (preferably two) before I give much credence to the idea that Green support is dropping. It really is too much about the one poll, IMHO.

    Comment by James — February 24, 2014 @ 12:14 pm

  8. @Andrew (not S) The model estimates the bias using more than one election (the last 3, I think) rather than just the last one. So a downswing in support just before the last election would contribute only about 1/num_elections of the computed bias (everything else held equal). So yes, it could have an effect, but the effect would have to be consistent across elections to have much impact on biasing the bias.

    Comment by lefty — February 24, 2014 @ 12:53 pm
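lefty's 1/num_elections arithmetic can be made concrete with made-up per-election bias figures for one firm (every number below is hypothetical):

```python
# Hypothetical poll-minus-result gap for one firm at three elections,
# where the last election includes a one-off pre-election swing.
biases = {2005: 1.0, 2008: 1.2, 2011: 4.0}  # percentage points

avg_bias = sum(biases.values()) / len(biases)  # ~2.07 points

# Using 2011 alone would set the correction at 4.0 points; averaging
# over three elections dilutes the one-off swing to a third of its size.
```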

  9. @lefty – I think originally Pete calculated the bias using the last three elections. Now, as I understand it, he uses the last election only. I could be wrong though.

    Comment by Andrew — February 24, 2014 @ 12:55 pm

  10. While I think adjusting for bias is an excellent thing to do, it is very risky to do so on the basis of a dataset of one – the final pre-election poll. In the US they adjust for bias based on scores or hundreds of polls to work out the average amount by which a polling company over- or under-estimates a party.

    Saying that because a single poll with a 3% margin of error was 1.5% different from the election result, we will adjust all future polls by that company by 1.5%, seems pretty nutty to me. The concept is excellent, but adjusting on a dataset of one is very flawed in my opinion.

    Comment by dpf — February 24, 2014 @ 1:56 pm

  11. @dpf – I partly agree. That’s why I always provide the non-bias-adjusted poll, as above. I’ve talked to Peter (who wrote the code) about weighting it for other factors like population and recency, and hopefully he’ll find time to do that before the campaign kicks off.

    Comment by danylmc — February 24, 2014 @ 2:03 pm

  12. “recency”

    Comment by kalvarnsen — February 24, 2014 @ 2:04 pm

  13. it is very risky to do so on the basis of a dataset of one

    There’s only one poll that counts. Everything else is just noisy data.

    Comment by George D — February 24, 2014 @ 2:26 pm

  14. @George D – I’m not sure David Shearer would agree there’s only one poll that counts.

    Comment by Andrew — February 24, 2014 @ 2:33 pm

  15. @kalvarnsen

    “recency” is a perfectly good word. I raised my eyebrows too when I read it but when I looked it up in Chambers dictionary I discovered it is a real word.

    Comment by Alwyn — February 24, 2014 @ 3:40 pm

  16. @Lefty / @Andrew: The model only uses the data visible on the chart, so the bias estimates in this one only use the 2011 election.

    We make a simplifying assumption of a constant house effect for each firm. Over longer periods, the adjustments in methodology violate this assumption, so we run the model from Jan-1-2011. The trade-off is that we lose some ability to generalise to other elections.

    However, the results are qualitatively similar if we look at a longer time period, and (retrospectively) 2011 predictions were pretty good using only the 2008 election for estimating bias.

    @dpf: that’s not even remotely close to how the adjustment is made!

    Comment by pete — February 24, 2014 @ 4:08 pm

  17. So, the election result is far too close to call.

    Comment by Ross — February 24, 2014 @ 4:20 pm

  18. @pete thanks! So (in simple terms) you’re fitting splines to the data and then adjusting each by a constant so that they pass through the election data point. Do you happen to have a short description (or the R code) floating around somewhere – it would be fun to play around with it.

    Comment by lefty — February 24, 2014 @ 4:35 pm

  19. @lefty: https://gist.github.com/pitakakariki/2791866

    Comment by pete — February 24, 2014 @ 4:40 pm
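The approach pete confirms above can be sketched loosely in Python (the real code in the gist is R and fits splines; here a least-squares line stands in for the spline, and all the poll numbers are invented apart from the 47.3% result):

```python
# Sketch: fit a smooth trend to one firm's poll series, then shift it by
# a constant (the "house effect") so it passes through the known result.

days = [0, 30, 60, 90, 120, 150, 180]                # days since start
polls = [49.5, 50.1, 49.8, 50.5, 50.0, 50.3, 50.2]   # one firm's results, %

election_day = 180
actual_result = 47.3  # known result on election day, %

# Step 1: fit a trend by ordinary least squares (stand-in for the spline)
n = len(days)
mx, my = sum(days) / n, sum(polls) / n
slope = (sum((x - mx) * (y - my) for x, y in zip(days, polls))
         / sum((x - mx) ** 2 for x in days))
intercept = my - slope * mx
trend = lambda day: slope * day + intercept

# Step 2: the constant house effect is whatever offset makes the trend
# hit the actual result on election day
house_effect = trend(election_day) - actual_result

# Step 3: bias-corrected estimate of support on any day
def adjusted(day):
    return trend(day) - house_effect
```

By construction the adjusted trend passes through the election result; the assumption doing all the work is that the house effect stays constant over the whole period, which is exactly the simplification pete flags above.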

  20. @pete – awesome, thanks!

    Comment by lefty — February 24, 2014 @ 7:58 pm

  21. i think we all know what this means.

    this is very bad for phil goff.

    Comment by Che Tibby — February 24, 2014 @ 8:37 pm

  22. Another thing to consider is that in the run-up to 2011, Colmar Brunton predicted an outright majority for the Key Govt. Which, of course, never came to pass.

    Comment by DeepRed (@DeepRed6502) — February 25, 2014 @ 11:14 am

  23. Of course your comment was off the cuff and not a fleshed-out statement, but I don’t agree that it’s just the Dotcom meeting behind the Greens’ downturn. It seems to be a trend from late last year, which was Cunliffe taking over, and/or I think environmental voters may be getting turned off by the Greens’ other policies.

    With Labour and the Greens pushing more progressive politics, and National and NZF trending upwards, I wonder if NZ is a little more conservative than I would like it to be.

    Thanks for the graph – it’s great to see the information compiled. I always find it amusing/sad when political journos and radio hosts tell us you have to look at the trend, without citing an actual trend graph or looking at any material beyond the last 3 polls.

    Comment by Benjamin — February 25, 2014 @ 12:10 pm

  24. @Deepred Colmar Brunton predicted National at 50.3% pre-election. You really call that an ‘outright majority’?

    Comment by Andrew — February 25, 2014 @ 3:43 pm

  25. @24 Andrew: What do you think ‘outright majority’ means?

    Comment by RJL — February 26, 2014 @ 9:15 am

  26. @RJL – In a poll with a stated margin of sampling error of 3.1 percentage points, I would think a prediction that National will win an outright majority would be a poll result of somewhere around 53.1% or more.

    Comment by Andrew — February 26, 2014 @ 9:24 am

  27. The margin of error is the 95% confidence interval. Even assuming the poll is a representative random sample of the voting population, it doesn’t mean that National’s support is definitely within the margin of error.

    Comment by wtl — February 26, 2014 @ 10:23 am

  28. @26 Andrew: sure, Colmar Brunton might qualify their prediction with a comment on its precision, but a prediction of 50.3% is still a prediction of an outright majority. If the margin of error was 3.1%, as you say, then they apparently thought it as likely that the result would be as high as 53.4% as that it would be as low as 47.2% (the actual result being 47.3%).

    Comment by RJL — February 26, 2014 @ 11:00 am

  29. @RJL – Believe me – you appear to place *a lot* more certainty than Colmar Brunton does in a poll result of 50.3% being indicative of an outright majority.

    @wtl – Absolutely.

    Comment by Andrew — February 26, 2014 @ 11:37 am

  30. In one out of every two unbiased polls with a result of 50.3% (regardless of margin of error), the actual population proportion would be expected to be >50%.

    Comment by wtl — February 26, 2014 @ 12:07 pm
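The rough odds wtl describes can be checked under the same normal approximation Andrew and wtl are using (the 50.3% and 3.1-point figures are from the thread; the unbiased-poll assumption is doing the heavy lifting):

```python
from math import erf, sqrt

poll = 50.3        # Colmar Brunton's final pre-election figure, %
threshold = 50.0   # outright-majority line
moe = 3.1          # stated 95% margin of error, points

se = moe / 1.96    # implied standard error of the poll
z = (poll - threshold) / se

# P(true support > 50%) under a normal approximation
p_above = 0.5 * (1 + erf(z / sqrt(2)))  # ~0.57
```

So a 50.3% reading implies only slightly better than even odds of a genuine majority – close to wtl’s “one out of every two”, and a long way from a confident call.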

  31. @wtl Exactly.

    Comment by Andrew — February 26, 2014 @ 12:14 pm

  32. Hi wtl – Did you see my comment over at The Standard about the poll comparison you did? Not sure if you checked back.

    Comment by Andrew — February 26, 2014 @ 12:16 pm

  33. @29 Andrew, I don’t think Colmar Brunton’s polls are very credible, or certain, or useful information at all.

    Comment by RJL — February 26, 2014 @ 12:25 pm

  34. @RJL – Yeah, I gathered that.

    Comment by Andrew — February 26, 2014 @ 12:38 pm

  35. I did not see your comment on The Standard previously. My response is the same as Lanthanide’s:

    1) Yes, the simple significance test I did was only an approximation, since the polling companies use probability sampling. However, the p-value wasn’t marginal, and my feeling is that even a more exact test would give the same conclusion – that the change cannot be accounted for by sampling error. In any case, it is impossible to do a more exact test without access to the polling companies’ raw data and weighting methods, and they will never reveal these to us for confidentiality reasons.

    2) I don’t buy your argument about the change being accounted for by the non-identical time frames of the two polls. Nothing of any real importance happened over that time frame to account for such a large change in support.

    I wouldn’t say my simplistic analysis is absolute proof*, but it does add to the weight of evidence that the polls are not representative (e.g. this thread and Gavin White’s analysis).

    * It isn’t even a one-off: looking at the interactive version of Danyl’s figure, you can actually see jumps of >5% in support for National between polls conducted by different polling companies in the same month (e.g. March/April 2012, August 2012, September/October 2012, February/March 2013…)

    Comment by wtl — February 27, 2014 @ 3:17 pm

  36. @wtl

    Hi – I think you might have misunderstood. I was saying only two public polling companies use probability samples. Neither of the polls you compared does. Even if you had access to the companies’ raw data and weighting methods, was test is more exact on a non-probability sample?

    No survey sample is representative – you’ve got no argument from me there. I also completely agree with Gavin’s analysis.

    Comment by Andrew — February 27, 2014 @ 4:45 pm

    Oops…

    *what* test is more exact, etc…

    By the way – I’m not defending polls by saying they’re unbiased and representative. No good survey researcher would make that argument.

    It’s just that so many of the arguments against them point to single sources of error. The arguments usually don’t display an appreciation for the number of variables involved. Anyone can bang a survey together, but carrying out a good survey is actually very difficult to do (and very expensive!). In my view each pollster should constantly try to understand sources of error, and find practical ways to reduce them or cancel them out. They won’t always get it right – but that’s the nature of measurement in a context where there are so many variables.

    The pollsters I know in the industry put an enormous effort into trying to get things right.

    Comment by Andrew — February 27, 2014 @ 4:57 pm

