The Dim-Post

July 1, 2014

Tracking poll and small parties

Filed under: Politics — danylmc @ 8:50 am

Peter updated the code to include the Internet/Mana Party, so I’ve generated graphs for the smaller parties. The trendline prior to their alliance is the Mana Party, then the Mana Party plus the Internet Party, then Internet/Mana. Here is the bias-corrected graph; the non-bias-corrected one is here.

Big difference for New Zealand First. What we don’t know is whether the polls just prior to the 2011 election underestimated the level of support for New Zealand First, or whether the teapot tapes saga swung voters from National to New Zealand First after the sample periods ended. Also, what impact will voter turnout have on that party’s chances of getting back in? Winston Peters could get the same number of votes but, if turnout is higher, still drop below 5%.
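
To illustrate the turnout arithmetic (the numbers below are made up, not projections): the same absolute vote count can land on either side of the 5% threshold depending on how many people vote overall.

    nzf_votes = 115_000  # hypothetical NZ First party vote count
    for total_votes in (2_200_000, 2_400_000):  # hypothetical total valid party votes
        share = nzf_votes / total_votes * 100
        verdict = "over" if share >= 5.0 else "under"
        print(f"turnout {total_votes:,}: {share:.1f}% -> {verdict} the 5% threshold")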

Internet/Mana are doing better than I thought they would: almost exactly level with the Conservatives in the non-bias-corrected graph. (I don’t think Colin Craig’s party will go much higher. I think that in 2011 he was a blank canvas that some Conservative voters could project their own values onto, and now he’s a weird, silly canvas.) I have no idea how high Internet/Mana will go. My wild guess about the voter behavior on the left is that the Greens are losing votes to Internet/Mana and Labour are losing votes to the Greens (also, Labour voters are probably switching to undecided). Ordinarily I’d expect Internet/Mana voters to act kind of like Green voters and not actually turn up and vote, but they say they’re pouring their resources into voter turnout, so who knows?

Here are the graphs for the large parties. Bias-corrected. Non-bias-corrected here, with a screenshot below.

[junepolls2 – screenshot of the large-party polling graph]

29 Comments »

  1. I seriously doubt the Greens are losing voters to Internet/Mana.

    Comment by kalvarnsen — July 1, 2014 @ 9:12 am

  2. I know people (ecologists) who were considering Mana last time. Their 1080 policy was the dealbreaker.

    Comment by pete — July 1, 2014 @ 9:50 am

  3. >I have no idea how high Internet/Mana will go. My wild guess about the voter behavior on the left is that the Greens are losing votes to Internet/Mana and Labour are losing votes to the Greens (also, Labour voters are probably switching to undecided).

    Except of course you have no evidence for this, and never, ever seem to consider the contra-evidence, that the correlation between Labour and Green polling is virtually non-existent.

    Because the undecided number is so high, the correlations drawn about people moving between parties are also weak. There is necessary correlation in overall polling because the results have to add up to 100%. So if people move from supporting a party to undecided, that actually registers as a win for all other parties on the overall percentage stakes even if they had no change in absolute support at all.

    In other words, it’s quite possible that Internet/Mana draw from the undecided pool, not from Green and Labour voters. Similarly, it’s possible, even probable, that Labour loses support to undecideds, not to the Greens. If Mana looks like a real force and starts to really gain momentum, there’s no big reason it should affect the Green and Labour support totals at all, although it will affect their percentage, obviously.
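
    A toy illustration of that percentage effect (the counts are made up, purely to show the arithmetic):

        # Party A keeps exactly the same 300 respondents in both polls,
        # but 100 Party B respondents move to "undecided" in poll 2.
        poll1 = {"A": 300, "B": 400, "C": 200, "undecided": 100}
        poll2 = {"A": 300, "B": 300, "C": 200, "undecided": 200}

        def decided_share(poll, party):
            decided = sum(n for p, n in poll.items() if p != "undecided")
            return poll[party] / decided * 100

        for label, poll in (("poll 1", poll1), ("poll 2", poll2)):
            print(f"{label}: Party A on {decided_share(poll, 'A'):.1f}% of decided voters")
        # Party A "rises" from 33.3% to 37.5% without gaining a single supporter.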

    Comment by Ben Wilson — July 1, 2014 @ 10:25 am

  4. >Also, what impact will voter turnout have on that party’s chances of getting back in? He could get the same number of voters, but if turnout is higher still drop below 5%.

    NZ First is strongly negatively correlated to undecided votes, so if the vote gets out, it’s good for Peters. It’s National that have to watch out. I’m not sure why they have focused on a “get out and vote” message…presumably they are targeting their own undecided pool. But if the message goes more general, it’s likely to benefit Labour and NZF more than them.

    Comment by Ben Wilson — July 1, 2014 @ 10:39 am

  5. “I’m not sure why they have focused on a ‘get out and vote’ message”: I *think* it’s aimed at strongly pro-National electorates, where there’s little doubt about who’ll win the seat – making sure these voters get out in numbers helps the party vote.

    Comment by Robinson Stowell — July 1, 2014 @ 10:57 am

  6. @Ben: Every party wants its own supporters to get out and vote!

    National probably knows that it will get less of the vote on election day than it gets in the polls, whether that’s because the polls overestimate their support or because increased exposure to their policies reduces their support. So they don’t want complacent National voters staying home while mobilised “far left” voters turn up at the polls. From their perspective, admitting it will be close is a hedge: reduces their expected winning margin, but reduces the risk that that margin turns negative.

    They also think that a lot of the undecided voters are National supporters, because being bad at maths is necessary for being right wing nowadays.

    Comment by pete — July 1, 2014 @ 11:02 am

  7. Isn’t all this consistent with undecideds voting (if they vote at all) disproportionately more for minor parties of any kind? If undecideds are undecided because the two main parties turn them off, but want to vote for someone and end up with one of the minor parties, then the polls will systematically underestimate the election share of minor parties, and overestimate the share of major parties. Which seems to be reasonably close to what happened last time.
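
    A rough sketch of that mechanism, with made-up numbers:

        # Poll shares come from decided respondents only; at the election the
        # (hypothetical) 150 undecideds break mostly to the minor party.
        decided = {"major1": 450, "major2": 300, "minor": 100}
        undecided_votes = {"major1": 30, "major2": 20, "minor": 100}

        poll_total = sum(decided.values())
        election = {p: decided[p] + undecided_votes[p] for p in decided}
        election_total = sum(election.values())

        for p in decided:
            print(f"{p}: poll {decided[p] / poll_total:.1%}, election {election[p] / election_total:.1%}")
        # The minor party polls ~11.8% but takes ~20% on the day, while both majors drop.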

    Comment by Dr Foster — July 1, 2014 @ 11:11 am

  8. >Isn’t all this consistent with undecideds voting (if they vote at all) disproportionately more for minor parties of any kind?

    I don’t think that’s entirely clear. It’s a much bigger thing to decide to vote for a different party than it is to decide not to support your usual party (if you have one). I think it’s a mistake we’ve made for quite a while to conflate “undecided” with “undecided between two parties”, when it can be much more about “undecided whether to vote at all”. And the nature of the people who have an element of tribalism seems to be different on that score between Labour and National at the moment. It would seem that Labour has a bigger proportion of tribal voters tussling with this dilemma than National. Genuine swing voters may actually be the smaller part of the undecided pool, compared with people confronting both apathy and disillusionment with their tribe.

    In the smaller parties, this seems to be much less of a phenomenon for the Greens, whose support is not highly correlated with undecideds. I’d say that this is because they are primarily a highly ideological party, so people who vote for them made a conscious choice about their views, and how much those matter. But NZF is not like that, and is highly negatively correlated with undecideds. I think they function as a conservative protest vote. I don’t know how Mana works and nobody knows how IP works, so how they will work together is a mystery. Need more data. But they’re both pretty new, so trends can’t really be seen clearly.

    I would think your point about people swinging on the left might be right, but I’d also think that people who committed to Mana would probably do so strongly on Mana’s actual merits, rather than as a protest against Labour. The policy platforms are too different for it to be a border easily crossed without lengthy navel gazing in undecided land first. So I’d expect Mana’s growth to be much slower and steadier, unlike the wild swings NZF can get.

    Comment by Ben Wilson — July 1, 2014 @ 11:32 am

  9. @Ben: “I’m not sure why they have focused on a “get out and vote” message…presumably they are targeting their own undecided pool.”

    I’m sure it’s simply because the National Party loves democracy more than anything else, and believe the most important thing is that all New Zealanders reach a fully inclusive decision regarding how they want parliament to be composed.

    On the Undecided topic, is the data available to generate these graphs in a way which shows the proportion of Undecided voters as part of the 100%?

    Comment by izogi — July 1, 2014 @ 1:03 pm

  10. So about the undecideds:

    A poll is commissioned to estimate party support if an election were held at the time of the poll. Given that this is the purpose, it doesn’t make sense to include those unlikely to vote in the results for party support. It’s also not possible to include the undecideds in that result, because they are, well, undecided.

    Yes, the results would look very different if unlikely voters and undecided voters were included. But those results would look nothing at all like the result of an election held at the time of the poll, and they would be misleading in this regard. It would not be possible to translate the result into parliamentary seats – which can help to show how close an election might be under MMP.
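
    For what it’s worth, once you do have decided-voter shares the translation into seats is mechanical. Here’s a minimal sketch of the Sainte-Laguë divisor method used under MMP, ignoring the threshold/electorate-seat rules and overhang, and using hypothetical parties and shares:

        def sainte_lague(votes, seats=120):
            """Allocate seats using the Sainte-Lague divisors 1, 3, 5, ..."""
            won = {party: 0 for party in votes}
            for _ in range(seats):
                # The next seat goes to the party with the highest current quotient.
                winner = max(votes, key=lambda p: votes[p] / (2 * won[p] + 1))
                won[winner] += 1
            return won

        # Hypothetical party-vote shares, all assumed to have qualified for seats.
        print(sainte_lague({"A": 48, "B": 28, "C": 12, "D": 7, "E": 5}))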

    Perhaps check out KiwiPollGuy’s blog about the undecided analysis at The Political Scientist: http://kiwipollguy.wordpress.com/2014/06/23/undecided/

    Comment by Andrew — July 1, 2014 @ 1:48 pm

  11. All that’s true, Andrew, but when you’re making inferences about changes in the likely results from one poll to the next, and attempting to account for where support is moving, then you should take the undecided into account. They should be reported just as the total vote count is reported in an actual election.

    Comment by Ben Wilson — July 1, 2014 @ 4:21 pm

  12. Hi Ben

    If you mean “…when you’re making inferences about changes in ‘the overall level of public support for a party’…” then I *sort of* agree with you.

    But if you *do* mean “changes in the likely result from one poll to the next”, then I disagree. The two are very different.

    When it comes to attempting to account for where support is moving, really neither of these two approaches is appropriate. For that you’d actually need a different set of questions (at the very least). Actually, if a client asked me to design a survey to do that, and they wanted it to be robust, I’d probably recommend a different methodology altogether.

    I’m not trying to be annoying – I just think inferring swings from standard poll data is stretching it way beyond its limits.

    Comment by Andrew — July 1, 2014 @ 4:38 pm

  13. >I’m not trying to be annoying – I just think inferring swings from standard poll data is stretching it way beyond its limits.

    I don’t think you’re being annoying; it’s a good discussion, and I agree with you. I know that a longitudinal study is the only real way to be sure about where people are moving to and from. But those aren’t available to us, so we make do with the data we have. My main point is also to cast doubt on stories about where support is moving that are based on polls which aren’t designed for that. If you’re going to do that, then you should at least do it from some kind of analysis of the correlations, and that analysis should include one of the largest groups: those who are undecided are at least the third-biggest group.
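
    (By “analysis of the correlations” I mean something as simple as this, run over the real poll series – the numbers here are invented:)

        import numpy as np

        # Hypothetical poll-by-poll percentages for one party and for "undecided".
        party = np.array([4.5, 5.0, 6.5, 4.0, 7.0, 5.5])
        undecided = np.array([14.0, 13.0, 10.0, 15.0, 9.0, 12.0])

        r = np.corrcoef(party, undecided)[0, 1]
        print(f"correlation between party support and undecided share: {r:.2f}")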

    >I’d probably recommend a different methodology altogether.

    I’m surprised no one seems to do it. Expensive, presumably?

    Comment by Ben Wilson — July 1, 2014 @ 4:59 pm

  14. Perhaps not so expensive, depending on the approach.

    The thing is, you’d need more than 50 polls from one company before trying to infer anything from the analysis. Otherwise it’s just reading tea leaves.

    It wouldn’t be possible to use poll averages, because there is no standard definition of undecided.

    On top of that, correlation still isn’t the correct statistic. As Thomas Lumley at Stats Chat noted about The Political Scientist’s conclusions:

    “You could fit the data just as well by saying that Labour voters have switched to National and National voters have switched to Undecided by the same amount — this produces the same counts, but has different political implications.”
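
    A quick sketch of that point – two very different flow stories producing exactly the same end counts (all numbers hypothetical):

        start = {"Labour": 300, "National": 450, "Undecided": 250}

        # Story 1: 20 Labour supporters go straight to undecided.
        story1 = [("Labour", "Undecided", 20)]
        # Story 2: 20 Labour supporters switch to National AND 20 National
        # supporters go to undecided.
        story2 = [("Labour", "National", 20), ("National", "Undecided", 20)]

        def apply_flows(counts, flows):
            counts = dict(counts)
            for src, dst, n in flows:
                counts[src] -= n
                counts[dst] += n
            return counts

        print(apply_flows(start, story1))  # identical counts...
        print(apply_flows(start, story2))  # ...but very different political implications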

    Comment by Andrew — July 1, 2014 @ 5:47 pm

  15. @ Ben / Andrew

    Two examples of longitudinal surveys I can think of off the top of my head are HILDA in Australia and the recently ceased SoFIE here in New Zealand. There are three major stumbling blocks that occur to me for implementing a longitudinal survey for political polling:
    1) in-sample demographic/socioeconomic changes as the circumstances of the sample evolve
    2) ‘decay’ in sample size as you lose track of who you polled last month/year (a rough sketch of this is below)
    3) processing time – a poll isn’t of value to anyone if it’s six months between the day you’re sampled and the results are announced.

    None of these are insurmountable problems, but the ways we’re best able to solve them are very costly.
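
    A rough sketch of why point 2 bites (the panel size and retention rate are made up):

        import math

        n0 = 1000         # hypothetical initial panel size
        retention = 0.85  # hypothetical wave-to-wave retention rate

        for wave in (0, 6, 12):
            n = n0 * retention ** wave
            moe = 1.96 * math.sqrt(0.25 / n) * 100  # worst-case margin of error, in points
            print(f"wave {wave:2d}: ~{n:4.0f} panellists left, margin of error about ±{moe:.1f} points")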

    Comment by Phil — July 1, 2014 @ 6:05 pm

  16. Hey Phil

    I think you might be able to do something less robust, but still useful if used only for analysing swings. Perhaps an online panel approach.

    Comment by Andrew — July 1, 2014 @ 6:07 pm

  17. My take on National’s “get the vote out” campaign.

    National is so far ahead in the polls that the election looks a foregone conclusion. Therefore soft Nat voters will feel their efforts won’t make a difference on polling day and opt to do something else with their time. This logic has been used to explain why ‘foregone conclusion’ elections in other countries didn’t work out as planned.

    Comment by Bill Bennett — July 1, 2014 @ 7:24 pm

  18. Interesting graphs, Danyl. But can you please put a key or legend on at least the minor parties graph? I don’t know who is who. Perhaps I haven’t followed things closely enough to know, but some of the curves cross, etc., and it would be nice to have a key to follow the players.

    Thanks. Cheers.

    Comment by David in Chch — July 1, 2014 @ 8:55 pm

  19. >I think you might be able to do something less robust, but still useful if used only for analysing swings. Perhaps an online panel approach.

    Another thought is that you could simply ask whether people have changed their vote, and if so, what from, during an ordinary poll. Then you wouldn’t need a longitudinal survey. You could possibly also ask the question which is really burning a hole in all of this anyway: Why? Well OK, that’s going to be a series of questions.

    Sure people can make mistakes or not know. But they can do that in current polls too.

    >On top of that, correlation still isn’t the correct statistic. As Thomas Lumley at Stats Chat noted about The Political Scientist’s conclusions:

    Sure, it’s not perfect. But I think it’s a damned sight better than just making up a story without even taking it into account. And yes, there are all sorts of possibilities that could explain what happened. Another one is that every single person in the country played musical chairs with their vote. It’s not impossible in theory.

    Comment by Ben Wilson — July 2, 2014 @ 1:42 am

  20. “Another thought is that you could simply ask whether people have changed their vote, and if so, what from, during an ordinary poll. Then you wouldn’t need a longitudinal survey. You could possibly also ask the question which is really burning a hole in all of this anyway: Why? Well OK, that’s going to be a series of questions.”

    Yeah that’s what I was kinda thinking of initially too. Probably some of the polls do already ask people who they voted for at the last election, and could compare that against pooled party support results for all of this year. The data are owned by the clients though, so pollsters can’t share anything without permission.

    The problem with presenting these kinds of results is that people will always want to see the ‘nett effect’ (ie, not just who has lost votes to who, and who has gained votes from who). I haven’t figured out a way to show that in an easily digestible form (that will still pass the ‘StatsChat test’). I tried to nut something out last week but I just didn’t have the time, and this week is even more crazy.
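
    (Purely as a sketch of the underlying computation – hypothetical counts, and definitely not the digestible presentation yet – each pair of parties could be collapsed to a single net flow:)

        # flows[(a, b)] = respondents who say they have moved from party a to party b.
        flows = {("Labour", "Greens"): 40, ("Greens", "Labour"): 15,
                 ("Labour", "Undecided"): 60, ("Undecided", "Labour"): 25,
                 ("National", "NZF"): 30, ("NZF", "National"): 20}

        reported = set()
        for (a, b), n in flows.items():
            if (a, b) in reported or (b, a) in reported:
                continue
            reported.update({(a, b), (b, a)})
            net = n - flows.get((b, a), 0)
            src, dst = (a, b) if net >= 0 else (b, a)
            print(f"{src} -> {dst}: net {abs(net)} respondents")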

    Once I sort that out I can show it to my client and say: ‘Check out how cool this is – okay if I pop it in the next report?’ I’m 99.9% sure they would say yes.

    If you have any ideas just let me know🙂. I think it needs to be something that a non-statistics-minded person can easily understand.

    Comment by Andrew — July 2, 2014 @ 5:59 am

  21. I’m 99.9% sure they would say yes.

    Plus or minus 3 percent.

    Comment by Phil — July 2, 2014 @ 8:37 am

  22. “whether the polls just prior to the 2011 election underestimated the level of support for New Zealand First”

    Few people want to feel the shame of admitting to a pollster that they’d vote for NZ First, so they say National. Then they go back to their cup of tea and Emmerdale Farm.

    Comment by rickrowling — July 2, 2014 @ 8:56 am

  23. “Few people want to feel the shame of admitting to a pollster that they’d vote for NZ First, so they say National.”

    Sometimes it’s crossed my mind to wonder how many people might say the opposite of what they think they’d do, just to screw with the process, or so that, when the results come out looking unfavourable, they can reassure themselves with a warped logic that they helped to skew the result towards something less correct. 🙂

    Comment by izogi — July 2, 2014 @ 8:58 am

    >The problem with presenting these kinds of results is that people will always want to see the ‘nett effect’ (ie, not just who has lost votes to who, and who has gained votes from who). I haven’t figured out a way to show that in an easily digestible form (that will still pass the ‘StatsChat test’). I tried to nut something out last week but I just didn’t have the time, and this week is even more crazy.

    I get that, but aren’t the big political polls something the polling companies do off their own bats as an excellent self-promotional exercise? I’d think any value one could add to that stuff would only make them look better. Lots of free press during election year, senior statisticians on TV explaining what it all means. I guess they do have to dumb it down a lot for soundbite length, but on the more specialized news shows it would really fly for them to be actually making *useful* discoveries.

    Comment by Ben Wilson — July 2, 2014 @ 9:34 am

  25. >If you have any ideas just let me know🙂. I think it needs to be something that a non-statistics-minded person can easily understand

    Absolutely…Although I am about to start stage 3 stats courses so I can’t claim to be totally non-stats minded.

    BTW, was that you on Media Take last night?

    Comment by Ben Wilson — July 2, 2014 @ 10:15 am

  26. Hey Ben – yeah that was me (I was pretty nervous and rambled on a bit I’m afraid)

    “I get that, but aren’t the big political polls something the polling companies do off their own bats as an excellent self promotional exercise? I’d think any value one could add to that stuff would only make them look better. Lots of free press during election year, senior statisticians on TV explaining what it all means. I guess they do have to dumb it down a lot for soundbite length, but on the more specialized news shows it would really fly for them to be actually making *useful* discoveries.”

    We don’t do it all off our own bat. Yeah, I totally agree with you. There’s lots of analysis that gets done behind the scenes – legally, though, our clients own their data. The potential number of analyses we could run is almost infinite, but there is usually not much time to get extra stuff into the report (which we have about 2-3 hours to prepare). There have been times when I’ve been itching to say certain things but I can’t, because the data are not mine.

    We released some extra cross-tabs about marriage equality once, because we had so many queries (from MPs as well as the public). Even then we needed to get approval to release it. (I’ve never been denied approval, by the way. It’s just the time factor, plus I don’t want to bother my client all the time with requests.)

    Comment by Andrew — July 2, 2014 @ 10:28 am

  27. >I was pretty nervous and rambled on a bit I’m afraid

    Nah, it was good. Just a pity the show’s only 30 mins, so you only got a few mins for each interviewee.

    Comment by Ben Wilson — July 2, 2014 @ 10:38 am

  28. I’ve just posted Fairfax-Ipsos Preferred PM breakdowns (by Party Support) here…http://sub-z-p.blogspot.co.nz/

    And, I’ll be following that up with a more detailed analysis of the Preferred PM measure (in the four Public Polls that ask this question) in my next post (in a week or so).

    Then, after that, I’ll be replying to Andrew re: the Undecideds.

    What more could you want?

    Comment by swordfish — July 4, 2014 @ 12:47 am

  29. Hi Swordfish

    Feel free to email or call me if I can be of any help with your analysis or if I can answer any questions you have – grumpollie@gmail.com (I can reply with my phone number).

    Comment by Andrew — July 4, 2014 @ 7:18 am

