The Dim-Post

December 18, 2012

Ho ho ho. Here’s a poll chart

Filed under: polls — danylmc @ 8:38 am

Thanks again to Peter Green for the R code. Image is below, but you should really go here for the interactive SVG version.

[Chart: nzpolls18122012]

More poll geekiness over the jump:

Peter also supplied these charts to show what’s happened to National’s poll ratings since last year’s election. One of the end-of year refrains in political summaries is that National ‘held steady in the polls’ despite a terrible year. Well, they did hold pretty steady compared to the election result. But the polls all massively over-estimated National’s election result – if you compare current poll estimates with pre-election poll estimates then National is way down. (It looks like Roy Morgan may have corrected their methodology post-election, the other firms not so much.)

[Chart: december_national]

Peter also provided this summary of the statistical modeling used to generate the aggregated graph:

The smoothed curves are calculated using a generalised additive model (GAM). The smoothing parameter is estimated using cross-validation. This means that the curves are estimated based on subsets of the data, and the parameter that best predicts the hold-out data is chosen. Note that the error bounds are based on the assumption that the smoothing parameter is correct, i.e. uncertainty in the smoothing parameter estimate is not accounted for.
A separate smoothed curve is estimated for each combination of polling firm and party (the election is treated as a poll and given more weight than the opinion polls).  The shape of the curves for each party is constrained to be the same for every polling firm, although they can be vertically offset from each other.  The displayed curve is aligned so that it passes through the election result, but in the interactive version you can see the offset curves for each polling outfit.
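
For readers who want to tinker, here is a minimal sketch, not Peter’s actual code, of how a model along those lines might be fitted in R with the mgcv package. The data frame `polls`, its column names and the “Election” pseudo-firm are all assumptions for illustration: one row per poll for a single party, with the election included as an extra, heavily weighted observation.

    library(mgcv)   # gam() with built-in smoothness selection

    # Assumed columns in `polls` (one row per poll for a single party):
    #   date - midpoint of the polling period (class Date)
    #   firm - polling company (factor), with the election treated as a "firm"
    #   vote - reported party support (%)
    #   w    - weight; the election result gets a much larger weight than a poll
    polls$t <- as.numeric(polls$date)

    fit <- gam(vote ~ s(t) + firm,   # one common smooth shape, per-firm vertical offset
               weights = w,
               data    = polls,
               method  = "GCV.Cp")   # smoothing parameter chosen by generalised cross-validation

    # Trend and standard errors, aligned to the election "firm"
    newdat <- data.frame(t    = seq(min(polls$t), max(polls$t), length.out = 200),
                         firm = "Election")
    trend  <- predict(fit, newdata = newdat, se.fit = TRUE)

As in Peter’s summary, the standard errors from predict() treat the chosen smoothing parameter as known.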

25 Comments »

  1. I don’t really feel up to studying the maths of your results but the trend lines for National and, I assume, New Zealand First look odd.
    Both Labour and the Greens have about as many data points above as below the trend line, as I would expect. For National however almost every poll result is above the trend line, which doesn’t seem right. With New Zealand First it is the other way round. If their graphs are drawn correctly almost every poll is below the line you have fitted. This doesn’t seem to be sensible to me.
    Is there some simple explanation for how these graphs were produced? My days of studying Econometrics are long, long ago.

    Comment by Alwyn — December 18, 2012 @ 9:38 am

  2. The trend line is based on how well the polls predicted the outcome of the last election. They all overestimated National, underestimated NZFirst.

    Comment by danylmc — December 18, 2012 @ 9:54 am

  3. For National however almost every poll result is above the trend line, which doesn’t seem right. With New Zealand First it is the other way round. If their graphs are drawn correctly almost every poll is below the line you have fitted. This doesn’t seem to be sensible to me.

    Alwyn – that’s to take account of the “fact” that the polls have tended to overestimate National support and underestimate New Zealand First support. I do wonder whether that places too much of the result on too weak a foundation, but the overall trend is still useful.

    Comment by Graeme Edgeler — December 18, 2012 @ 9:57 am

  4. That’s where I don’t buy the line of “National is still where it was at the election”. Well, the polls before the election had them on over 50%, some up to 55% from memory. I think if you held an election tomorrow National would be in the low 40s, possibly out of government.

    Comment by max — December 18, 2012 @ 9:58 am

  5. So, the key takeout fact is that one in five who voted National in 2011 has abandoned them twelve months later. Such is the crisis of mediocrity in our political elite that they do not appear interested in voting for anyone else; my guess is they are just not going to bother. The next election may be the story of National’s stay-at-home vote being even bigger than Labour’s. Such are the fruits of the inspirational leadership provided to New Zealanders by the collection of political dilettantes, neo-liberal hacks, tired yesterday’s men and self-interested careerists who inhabit our parliament.

    Comment by Sanctuary — December 18, 2012 @ 10:09 am

  6. … clap clap clap.

    Comment by TransportationDevice A7-98.1 — December 18, 2012 @ 10:21 am

  7. The logical conclusion of this project is that the Centre Left won the last two elections.

    Comment by tinakori — December 18, 2012 @ 11:32 am

  8. Those pretty little blue balloons all above the line – particularly the cluster just before the election – and their comparison to the other-coloured balloons, tell us all we need to know about poll accuracy. Which, of course, begs the question: is it, actually, a science?
    Perhaps a survey of the political persuasion of polling company and media outlet owners, an examination of research on the “poll effect”, and data on the comparative reportage of poll results, might point to an answer.

    Comment by ak — December 18, 2012 @ 11:33 am

  9. Danyl, Graeme et al.
    Now I see what is going on. Thank you for the explanation.
    Like Graeme E. I do wonder whether it may be putting a bit much weight on a pretty thin foundation.
    The polls were already showing a pretty massive drop in the National figure and a pretty massive climb in the NZF values just before the election.
    Incidentally, are the dates against which the poll figures are plotted the date when the poll was published or, say, the middle date of the sampling period? If it is the published date this would accentuate the over (Nat) or under (NZF) reading for the parties, as the great change would actually have been happening a week or so before the date shown against the poll.

    Comment by Alwyn — December 18, 2012 @ 1:36 pm

  10. @AK – I think the problem with the polls is how they are reported in the media; they are almost always reported as ‘fact’ with no mention of bias or sampling error or any of the other complicated things that your average journalist has no understanding of.

    The politicians also play this game. John Key often says the polls show them at roughly the same result as election night; of course, most of the major polls had National around 53-55% just before the election and they got 47%, so if they are now polling at 45-47% they have dropped around 6%.

    So I’d say yes polling is a ‘science’ but the reporting of the significance of the poll results is mostly voodoo.

    Comment by Ieuan — December 18, 2012 @ 1:41 pm

  11. @Alwyn: the polls are plotted at the centre of the polling period, when that information is available, otherwise the publication date.

    Comment by pete — December 18, 2012 @ 2:19 pm
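
    A trivial sketch of that convention in R, with hypothetical column names (start_date, end_date, pub_date): use the midpoint of the fieldwork period when it is known, otherwise fall back to the publication date.

        # start_date, end_date and pub_date are assumed column names (class Date)
        mid   <- polls$start_date + (polls$end_date - polls$start_date) / 2
        known <- !is.na(polls$start_date) & !is.na(polls$end_date)
        polls$plot_date        <- polls$pub_date
        polls$plot_date[known] <- mid[known]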

  12. Apart from the usually predictable sampling errors, there are two main sources of systematic error (at the instant of polling):
    S: the polled sample lacks representative nature
    L: people lie about their voting intentions

    These sum to an error, but both have unknown values. If you take the differences at election time, all that tells you is S+L at an instant. The actual values of S & L are unknown (people’s lies could be counteracted by the sample error, nobody could be lying, the sample could be perfect, etc.) *and* can change with time. Even if you have a formula for S+L that worked in the last 20 elections, that’s no guarantee it’ll work next time.

    Comment by richdrich — December 18, 2012 @ 3:07 pm

  13. @richdrich: S & L are “non-identifiable” in stats-speak. But we can still estimate S+L, which is what we need for the adjustment. It’s also true that there is no guarantee that the next election won’t be a landslide for a Conservative / ACT coalition – this is the classic “problem of induction”. But just because we can’t get ironclad 100% certainty doesn’t mean we can’t make sensible inferences about how likely an outcome is.

    Comment by pete — December 18, 2012 @ 3:26 pm
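
    As a back-of-the-envelope illustration of that adjustment (a sketch only, not the model used in the post), here is the arithmetic in R using the rough figures quoted above: a firm that had National at about 53% just before a 47% election result, and that now has them at 47%. richdrich’s caveat applies: this assumes the combined offset S+L has stayed constant since election day.

        # Illustrative numbers only, taken from the discussion above
        pre_election_poll <- 53   # firm's final pre-election reading for National (%)
        election_result   <- 47   # actual election-night party vote (%)
        current_poll      <- 47   # the same firm's latest reading (%)

        house_effect <- pre_election_poll - election_result   # estimate of S + L for this firm
        adjusted_now <- current_poll - house_effect           # about 41%: "way down", not "holding steady"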

  14. Thanks for the explanations, it makes much more sense now and thanks to pete for the work put in

    Comment by Raymond A Francis — December 18, 2012 @ 4:00 pm

  15. That all four organisations showed National’s support plummeting just before the election suggests very strongly to me that the inaccuracy was because of voters ditching National late rather than systematic polling error, though this is unprovable with public data.

    Comment by bradluen — December 18, 2012 @ 5:03 pm

  16. This is good for Phil Goff David Shearer.

    Comment by George D — December 18, 2012 @ 5:29 pm

  17. The same comments work for the greens. They always poll 3-4% higher than their vote on election day. Note their polls have effectively gone nowhere despite all the media huffing and puffing. Their big moves were made in 2011, but not since then. Maybe Russell Norman should remember he’s a co-leader, and let Metiria get some limelight too….

    Comment by Luke C — December 18, 2012 @ 5:45 pm

  18. “The same comments work for the greens. They always poll 3-4% higher than their vote on election day. Note their polls have effectively gone nowhere despite all the media huffing and puffing.”

    In before some Green supporter laboriously explains why the Greens are the party of the future (just like they have been since 1999)

    Comment by Hugh — December 18, 2012 @ 6:04 pm

  19. (worth noting that Green polling varies greatly between organisations, Morgan have historically overpolled them badly but Digipoll hasn’t)

    Comment by bradluen — December 18, 2012 @ 7:11 pm

  20. The same comments work for the greens. They always poll 3-4% higher than their vote on election day. Note their polls have effectively gone nowhere despite all the media huffing and puffing.

    That’s pretty much true. It deserves some work; while it’s possible to put up a strong showing and thus impress the beltway (who then impress somewhat on the electorate via their poll-editorialising), it takes work on sections of the electorate, directly or through proxies, to make a large shift happen. The academic consensus at the moment seems to be that person-to-person conversation is the most durable vehicle for that change to occur, and at the moment there isn’t much that would create those conversations. If a Rena washes up on shore, or the economic management skills of this Government come under sustained and severe attack, then the elements are in place to see gains. However, it’s equally likely that nothing of that magnitude will occur, and gradual change will suffice.

    But at the same time, I’m happy if the Greens pick up 3-4% each election cycle. This allows organic growth, without the risk of overshoot. I’m also happy that the party appears to be making moves internally to minimise the gap between polling and poll-booth performance.

    Comment by George D — December 18, 2012 @ 7:34 pm

  21. The poll stats are all very nice, but if you’re looking for predictive value, you’d be better off reviewing voter volatility in MMP election campaigns.

    If there’s going to be a collapse under pressure (see Clark and English 2002, or to a lesser degree, Key and Goff 2011), Shearer has to be a prime candidate for the slide. Labour’s “plan” (scare quotes essential) seems to be to cross fingers and hope he has laryngitis for a month.

    Comment by sammy 2.0 — December 18, 2012 @ 7:54 pm

  22. Labour to date has offered nothing to the electorate (well, I suppose they are being honest).

    There is a general tendency to boredom and ennui with incumbents (warts tend to be more apparent) in second terms.

    The poll lines do not suggest anything different. This is typical second term doldrums. Almost inevitable in a system based on a three year electoral cycle.

    Danyl, methinks your ho ho ho might be a tad premature.

    Beware the ides of February

    Comment by peterlepaysan — December 18, 2012 @ 9:26 pm

  23. Meanwhile, the most incredible graph published today has to be this one, courtesy of Mr English. Such heroic growth assumptions. Given Treasury’s impressive track record, we know how likely they are to be correct.

    Comment by George D — December 18, 2012 @ 9:31 pm

  24. we can still estimate S+L

    You mean S plus L, right? You can, but the inherent assumption is that voters will behave similarly and the polling samples will reflect the electorate just as they did on the last election day. Which is reasonable in an unchanging world. If something radical happened, like a banking crash or a war, then these numbers might change. (For instance, a large group of unpolled, unvoting electors might decide that voting was suddenly important).

    Comment by richdrich — December 18, 2012 @ 9:35 pm

  25. In my view, the really insightful charts here are the four plots below the main chart. You can look at them for ages and reach all sorts of conclusions. :)

    Andrew

    Comment by Guess the pollster :) — December 19, 2012 @ 8:34 am

