The Dim-Post

October 29, 2013

Austromancy

Filed under: Politics — danylmc @ 8:53 am

I’ve updated the tracking poll. Bias corrected poll here; non-bias corrected poll here. I think it shows the absurdity of yesterday’s Fairfax-Ipsos poll, which has National on >50% and able to govern alone, a direct reversal of the trend of every other poll published in the last few months.

Media organisations spend a ton of money on polling, so when you get an odd result there must be an obvious temptation to make a big story about this very interesting finding that is exclusive to you. But political reporters are supposed to be experts on this stuff, and you’d think they’d look at a poll that shows a huge reversal on other recent polls for no real reason and wonder if their data is valid. Not so the Dom-Post, who stuck their poll on the front page of the paper (‘Poll a major blow to Labour!’) along with a pretty funny story trying to justify their findings:

National is also up two points and holds a huge 17 point lead over Labour, winning the backing of more than 50 per cent of committed voters.

That comes after several high profile overseas trips by Mr Key where he has rubbed shoulders with world leaders and chaired top level trade talks during the Asia-Pacific Economic Cooperation summit.

I guess if you cover these trips as a journalist and write lots of stories about them the idea of tens of thousands of voters switching support to National because Key met Vladimir Putin and Susilo Yudhoyono becomes plausible.

So no, I don’t think Labour will be worried about this poll; I do think they’ll be worried about the credibility of the Fairfax gallery pundits, who predicted Grant Robertson would win the leadership contest and now think that National are so popular they’d be governing alone after an election.

60 Comments »

  1. “Bugger the pollsters”
    The poor Meridian result was partially because the movers and shakers (ie people with money) think the Left may win, with all the downside that may involve for them

    Comment by rayinnz — October 29, 2013 @ 9:00 am

  2. John Key was out of the country for a week and National shot up in the polls!

    Comment by Moz in Newtown — October 29, 2013 @ 9:05 am

  3. you’d think they’d look at a poll that shows a huge reversal on other recent polls for no real reason and wonder if their data is valid

    I’m curious if they’re only looking at the recent results of the Fairfax Media Ipsos Poll which, if isolated in the graph (by hovering over a data point), appears to show a slowly increasing support trend for National since mid-2012, and thus is apparently less out of place.

    Comment by izogi — October 29, 2013 @ 9:12 am

  4. Bias corrected poll here; non-bias corrected poll here

    For the stats non-inclined like me…I see the ‘corrected’ graph tracking the lowest results for National with a stack of higher and much-higher results above, and the ‘non bias corrected’ one more in the middle of the results. Which one is more ‘valid’?

    Comment by StephenR — October 29, 2013 @ 9:17 am

  5. I’m sure this has been asked a hundred times, but who are all the data points scoring less than zero percent?

    Comment by Ben Wilson — October 29, 2013 @ 9:18 am

  6. Ah nvm, they’re milestones rather than data points…as you were.

    Comment by Ben Wilson — October 29, 2013 @ 9:22 am

  7. It doesn’t matter what polling company you use: if you tell them you’ve only got “X” dollars to spend on a poll then they’ll always give you “Y” result. My guess is Fairfax are too mean or too broke to run proper surveys, so they ring landlines and try to adjust for bias by magnifying the results of under-represented groups they get that way. Anyway, the main point isn’t to get an accurate result – it is to generate a story.

    Comment by Sanctuary — October 29, 2013 @ 9:25 am

    Ah, those Fairfax reporters Tracey Watkins and Vernon Small, well-known National and John Key idolaters!

    Comment by Tinakori — October 29, 2013 @ 10:05 am

  9. Why are the trendlines of the individual polling companies parallel?

    Is it fair to have trendlines that continue after the election (Research International, who haven’t polled since the election), or that go back past the last election (Ipsos, who didn’t poll before it)?

    Comment by Graeme Edgeler — October 29, 2013 @ 10:08 am

  10. StephenR wrote: “I see the ‘corrected’ graph tracking the lowest results for National with a stack of higher and much-higher results above, and the ‘non bias corrected’ one more in the middle of the results. Which one is more valid?”

    The ‘bias-correction’ generally puts National at the bottom end of its polling, and New Zealand First at the upper end of its polling, because National have a tendency to do worse in elections than the non-adjusted poll trend points to, and New Zealand First have a tendency to do better in elections than the non-adjusted poll trend points to.

    Comment by kahikatea — October 29, 2013 @ 10:17 am
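
For anyone wondering what the correction kahikatea describes looks like in practice, here is a minimal sketch (not the tracking poll’s actual method; the bias figures are invented): estimate each pollster’s average historical gap between its final polls and the election result, then subtract that gap from new readings.

    # Minimal sketch of a per-pollster bias ("house effect") correction.
    # Illustration only; the bias figures below are invented.

    # Average gap (poll minus election result) for a given party, estimated
    # from each pollster's final polls before past elections.
    historical_bias = {
        "Pollster A": 2.5,   # tends to overstate the party by 2.5 points
        "Pollster B": -1.0,  # tends to understate it by 1 point
    }

    def bias_correct(pollster, reading):
        """Subtract the pollster's estimated historical bias from a new reading."""
        return reading - historical_bias.get(pollster, 0.0)

    print(bias_correct("Pollster A", 50.2))  # 47.7
    print(bias_correct("Pollster B", 42.0))  # 43.0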

  11. kahikatea…thanks, that’s interesting. Certainly the case for the last election…I assume the others as well.

    Comment by Stephen Rowe — October 29, 2013 @ 10:27 am

  12. My guess is Fairfax … try to adjust for bias by magnifying the results of under-represented groups they get that way.

    This is standard international best practice for polling, and an entirely acceptable method to resolve the well-known polling problem of “how do I get a male in his 20s to answer a god-damned phone?”.

    The no-cellphone-calling-makes-you-a-bad-pollster meme is a distraction for the mathematically ignorant. In order for cellphone-specific issues to crop up, the sample population (i.e. landlines only) has to be very different in its political outlook from the out-of-sample (i.e. cellphone-only) population, AND the reasons for that difference have to be separate from existing attributes that are controlled for (e.g. age, sex, ethnicity, income). We’re nowhere near that being the case in NZ.

    Comment by Phil — October 29, 2013 @ 11:03 am
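
A bare-bones sketch of the weighting Phil describes, with invented population and sample shares: each respondent is weighted by their group’s population share divided by the group’s share of the achieved sample.

    # Bare-bones sketch of demographic weighting: each respondent's weight is
    # their group's population share divided by the group's share of the
    # achieved sample. All shares below are invented for illustration.
    population_share = {"male 18-29": 0.10, "male 30+": 0.38,
                        "female 18-29": 0.10, "female 30+": 0.42}
    sample_share = {"male 18-29": 0.04, "male 30+": 0.42,
                    "female 18-29": 0.08, "female 30+": 0.46}

    weights = {g: population_share[g] / sample_share[g] for g in population_share}
    for group, w in weights.items():
        # Under-represented groups (e.g. young men) get weights above 1, so their
        # answers count for more; over-represented groups get weights below 1.
        print(f"{group}: weight {w:.2f}")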

  13. National are losing momentum in voter support, although this hasn’t shown up yet in the polls. A lot of polls were actually conducted three to six months ago and the results are only being made public now so they are already out of date. I don’t know much about polls but I reckon that this is the case in this instance. Many people are scratching their heads wondering why National haven’t come down when they know personally that a lot of their mates that used to vote for National are no longer sure that they will in 2014. And what about the Conservative Party? They polled at around three percent in 2011 and reckon they can poll at five percent in 2014 but early indicators suggest to me that about the same number of people are going to vote for them.

    Comment by Daniel Lang — October 29, 2013 @ 11:21 am

  14. @Phil: The ‘this poll is unrepresentative because it doesn’t capture all those precious snowflakes who only use their mobiles’ line is one of three bog-standard excuses for those who want to ignore the implications of polls they don’t like. The other two are ‘the only poll that matters is election day’ and ‘well, this poll doesn’t reflect the opinions of my peer group, so it must be wrong’. I guess it’s more intellectually satisfying than just sticking one’s head in the sand.

    And speaking of sticking one’s head in the sand, wasn’t the author of this blog just last month claiming that a single poll showing an improvement in Labour’s fortunes after Cunliffe was elected was a clear sign that Cunliffe was the leader the party needed all along?

    Comment by Hugh — October 29, 2013 @ 11:48 am

  15. Danyl,

    The news media consistently make a big deal about the latest data point in almost any newsworthy field. Take house prices – the same paper will report them going up down up down up down, with apparently no institutional memory.

    The key thing about both of yesterday’s polls was that Labour failed to gain ground on the Nats relative to previous polls. Giving the lie to your idea of a Cunliffe bounce, which you were trumpeting a couple of weeks ago.

    Comment by Swan — October 29, 2013 @ 11:50 am

  16. A lot of polls were actually conducted three to six months ago and the results are only being made public now so they are already out of date.

    No, that is wrong. The survey of 1030 voters was taken between October 19 and 23 and had a margin of error of 3.1 per cent.

    I think there’s also reason to believe that National may have got a boost in the latest Fairfax-Ipsos poll, especially compared to their last poll in August. For instance, in September the June quarter GDP results came out and showed NZ performing really well. The positive news around that story will do more to sway voters than, say, whether or not a Mayor fucked someone that wasn’t his wife or who the PM sat down with for biscuits and a chat.

    Comment by Phil — October 29, 2013 @ 12:02 pm

  17. @Hugh,

    Agreed. That’s the thing about ‘bounces’ in polls – they tend to wash out after a few weeks and return to trend.

    Comment by Phil — October 29, 2013 @ 12:08 pm

  18. “Media organisations spend a ton of money on polling”

    Do they? I had always thought that political polls were loss leaders for commercial market research, and the media got them fairly cheap in return for boosting the polling organisation’s profile?

    Comment by richdrich — October 29, 2013 @ 12:32 pm

  19. “…The no-cellphone-calling-makes-you-a-bad-pollster meme is a distraction for the mathematically ignorant…”

    However, if, say, you need forty Maori men under thirty and you only get 20, then doubling their weighting to get to the required forty also increases the support for National if it just so happens that Maori men under thirty who are home in the evening and have a landline are more likely to vote National than Maori men who don’t have a landline. And so on and so on.

    To my mind, the whole idea of polling by calling landlines is already hopelessly obsolete. The only reason all the people I know have a landline is because their telco package makes them have one. They certainly never answer the phone, or rather will only answer the phone if they are reasonably sure it is someone they know, i.e. they know Mum usually calls at 7.30pm on a Wednesday. I haven’t got a landline, and have no plans for one ever again.

    Comment by Sanctuary — October 29, 2013 @ 12:49 pm

  20. Even if the reasons are unclear (cell phones or whatever), it is clear that the polls are not particularly accurate. Anyone saying otherwise is just an idiot and is ignoring the obvious facts:

    1) The polls never predict the outcome of the general election correctly, even those conducted a short time before the election. e.g. just look at the splattering of points for the % support for National in the polls leading up to the election. Pretty much ALL the polls had National a lot higher than their actual result. There is clearly a systematic bias in the opinion polls.

    2) Look at the variation in the polls conducted over a short period. There are sometimes huge differences in the results even within a matter of days, which would be very unlikely to occur given the expected margin of error of the polls. In other words, the ‘real’ margin of error must be a lot higher and is probably not easily quantifiable.

    In the end, the only poll that matters is the general election. The polls might give a general indication of trends but they are definitely not accurate enough to give any real idea of a likely election result. But of course that doesn’t stop media commentators and poll-junkies (e.g. this thread) jumping on an individual poll and making up all sorts of bulls**t as to why each party has gained or lost support (or complaining when other people point out the obvious flaw in doing so). Worse still, many of the changes/differences highlighted by said poll-junkies are within the given margin of error. In other words, they are not statistically significant and could be due to chance alone.

    Comment by wtl — October 29, 2013 @ 1:13 pm

  21. OK, here’s some basic logic and math, for the benefit of too many journos and those who listen to them:

    Imagine Party A has 50%. Party B has 50%.

    In a poll, Party A gets 55%, and B gets 45%.

    Headline (fair) says “Boost for Party A”.

    In the next poll, Party A gets 55% again, and B gets 45% again.

    Headline (stupid) says: “NO Boost for Party A”.

    A moment’s thought will tell you that if there are polls every week/month, and Party A has to keep getting the “Boost”, then Party A will eventually get up to 100%. An unlikely result outside North Korea.

    A more accurate headline would be “Labour plus Greens keep previous gains.” But news is – by definition – new, so change is a headline, and consolidation is not.

    I confidently predict that Labour and the Greens will only get a handful of “Poll Boosts!11!!1” between now and the election. There may even be “Poll Slumps!111!1”.

    They could win an election comfortably with, say, 38 + 12. But they can only “Boost!” to 38 + 12 in the polls a couple of times. Once they get there, they might not go higher. They would be “stuck” (according to the wisdom of the gallery hacks).

    But they would be stuck on the path to government.

    Comment by sammy 2.0 — October 29, 2013 @ 2:13 pm

  22. From the corrected poll, I make that roughly 64 seats for Labour + Green + NZF.

    Comment by Ethan Tucker — October 29, 2013 @ 3:53 pm
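
For anyone who wants to check that kind of seat arithmetic themselves, below is a rough sketch of the Sainte-Laguë allocation New Zealand uses for its 120 seats. The vote shares are placeholders rather than the tracking poll’s actual figures, and it ignores electorate-seat wrinkles such as overhangs and the one-electorate-seat exemption from the 5% threshold.

    # Rough sketch of a Sainte-Lague seat allocation over 120 seats.
    # Vote shares are placeholders, not the tracking poll's figures; electorate
    # seats, overhangs and the one-seat threshold exemption are ignored.
    import heapq

    def sainte_lague(votes, seats=120, threshold=0.05):
        eligible = {p: v for p, v in votes.items() if v >= threshold}
        # Highest-quotient allocation with divisors 1, 3, 5, ...
        heap = [(-v / 1, p, 1) for p, v in eligible.items()]
        heapq.heapify(heap)
        result = {p: 0 for p in eligible}
        for _ in range(seats):
            quotient, party, divisor = heapq.heappop(heap)
            result[party] += 1
            heapq.heappush(heap, (-eligible[party] / (divisor + 2), party, divisor + 2))
        return result

    shares = {"National": 0.44, "Labour": 0.34, "Greens": 0.12, "NZ First": 0.06,
              "Maori": 0.02, "ACT": 0.01, "United Future": 0.01}
    print(sainte_lague(shares))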

  23. How ludicrous could a poll result get before it was silently canned from the record? Media outlets love using the ‘governing alone’ narrative. It gives them a warm FPP feeling inside.

    Comment by Auto_Immune — October 29, 2013 @ 5:19 pm

  24. Re my point 2 above, note that a Colmar Brunton poll over EXACTLY the same period had National at 45% (vs 50.2% in the Fairfax).

    Comment by wtl — October 29, 2013 @ 5:31 pm

  25. if, say, you need forty Maori men under thirty and you only get 20, then doubling their weighting to get to the required forty also increases the support for National if it just so happens that Maori men under thirty who are home in the evening and have a landline are more likely to vote National than Maori men who don’t have a landline. And so on and so on.

    I’m sure we’ve had this debate before, somewhere? Anyway, in short: There are basically two forms of QA for polling:
    1 – minimum quotas for particular groups
    2 – weighting the sample to represent the population

    All the NZ pollsters do a bit of both. For #1 they probably go as deep as “Maori, male, under 30” but it doesn’t make a lot of statistical sense to be more granular than that with your phone calling, because of #2. It’s not the case that they’ll take all the Maori males under thirty they got hold of and weight them equally. What will happen is that each individual response will be put under the microscope and weighted individually depending on how it compares to the rest of the sample. This will include a regional weighting as well, I would guess. So, if 19 of your Maori males under thirty happened to earn more than $80k a year and live in Christchurch, while one is unemployed in Auckland, then the weighting will be very different between them. It will also raise some red flags with the polling firm, but that’s beside the point for this illustrative example.

    The polls never predict the outcome of the general election correctly, even those conducted a short time before the election. e.g. just look at the splattering of points for the % support for National in the polls leading up to the election. Pretty much ALL the polls had National a lot higher than their actual result. There is clearly a systematic bias in the opinion polls

    Well, of course a poll today isn’t going to “predict” the outcome of an election 12 months away. Life happens between now and then.

    The point I’ve made before is that if you look at the polling results in the lead-up to the last election, you can see National dip in the last couple of weeks quite sharply while NZF pick up. That’s the ‘tea-tapes’ clusterfuck, which clearly changed some people’s minds in the last week or so of the campaign, during and AFTER the pollsters had called people. It doesn’t surprise me that a poll one week out from the election overstated National’s outturn, but it also doesn’t mean they were wrong about National’s level of support one week out.

    Comment by Phil — October 29, 2013 @ 5:42 pm
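
One common way of doing the individual-level weighting Phil describes is ‘raking’ (iterative proportional fitting): weights are rescaled repeatedly so the weighted sample matches several known population margins at once. A toy sketch with invented respondents and targets:

    # Toy sketch of raking (iterative proportional fitting) over two margins,
    # sex and age band. Respondents and population targets are invented.
    respondents = [
        {"sex": "m", "age": "18-29"}, {"sex": "m", "age": "30+"},
        {"sex": "f", "age": "30+"},   {"sex": "f", "age": "30+"},
        {"sex": "f", "age": "18-29"}, {"sex": "m", "age": "30+"},
    ]
    targets = {"sex": {"m": 0.49, "f": 0.51}, "age": {"18-29": 0.20, "30+": 0.80}}

    weights = [1.0] * len(respondents)
    for _ in range(20):                      # a few passes usually converge
        for var, target in targets.items():
            total = sum(weights)
            for level, share in target.items():
                idx = [i for i, r in enumerate(respondents) if r[var] == level]
                factor = share * total / sum(weights[i] for i in idx)
                for i in idx:
                    weights[i] *= factor     # scale the group to its target share

    for r, w in zip(respondents, weights):
        print(r, round(w, 3))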

  26. @ phil, how dare you bat Sancy wancys jingoistic dribbling for six. you heartless right wing rich prick, I bet you hate animals too.

    Comment by toby — October 29, 2013 @ 5:48 pm

  27. Phil, you are wrong that all NZ pollsters do a bit of both. Unless you count area stratification.

    NZ public poll methods grid
    http://grumpollie.wordpress.com/nz-public-poll-methods-grid/

    Comment by Andrew — October 29, 2013 @ 5:55 pm

  28. Hi Danyl, can anyone explain why Pundit’s Poll of polls is so radically different to yours? Is it that they have added in the Herald poll also or that they haven’t bothered with the recent positive news ones for Labour or….?
    http://www.pundit.co.nz/content/poll-of-polls/

    Gee I wonder- Phil’s been very active on this one, must be earning his keep.

    Keep telling yourself that it is a ‘bounce’ in the polls and that most New Zealanders are happy with a clusterf*ck extreme right government (find one honest journo on that one – just cos things are done incrementally doesn’t make it centrist) doing this: http://www.nzherald.co.nz/business/news/article.cfm?c_id=3&objectid=11148074 or this: http://union.org.nz/giveusabreak.

    This is an extreme right wing government in sheep’s clothing and there is no natural majority for these kinds of policies. The polls will move around, but it seems in most polls there is a clear trend left.

    The Fairfax poll is a clear 5 points different to other recent polls for the Nats, and it is not wrong to observe that it has been that different from other polls before in recording the Nats’ support, however you want to argue nuts and bolts. I’m sure you want to argue nuts and bolts, because arguing with the trend would be worrying, and we can see that in some of the Parliamentary Nats’ behaviour. The majority is not a strong one now, relying on Banks, Dunne and the Maori Party – it is even less if the left pick up any support at all from the result of the last election.

    Comment by sheesh — October 29, 2013 @ 6:43 pm

  29. phil: My point wasn’t anything to do with polls 12 months away. It was about the cluster of polls just before the election. The actual election is the only REAL data point, so the polls should be judged according to how well they correspond to that point. Simply saying “people changed their minds and it doesn’t mean the polls were wrong” really is putting your head in the sand – you are refusing to evaluate the polls in light of the real data and are simply hand-waving away any discrepancy between the polls and the actual data. In other words, all you are saying is that the polls are correct because they are correct.

    Comment by wtl — October 29, 2013 @ 6:46 pm

  30. On the conservatives too- Key giving out signals for them today. Looks like ACT may be dumped as number one friend in their favour. Possibly see some tactical Nat voting. Be interesting to see if the Conservatives take off Winston or if they are too odd and religious to do that.

    Comment by sheesh — October 29, 2013 @ 6:50 pm

  31. Hi Danyl, can anyone explain why Pundit’s Poll of polls is so radically different to yours?

    I don’t think Rob updates it very often. That one is dated August 5th. I’m sure he aggregates polling for Labour on a much more frequent basis!

    Comment by danylmc — October 29, 2013 @ 7:48 pm

  32. Yeah I agree with Phil.

    How does the correction method account for variation in preferences over time? Theoretically every poll could be correct unless you had two with exactly the same polling period contradicting each other, so you must be making some assumptions about the inertia of movements in real support. Now if this inertia factor/ smoothing is constant with respect to “wall-clock” time (which I’m guessing it is), then I think that is an invalid assumption. People are far more actively engaged in politics in the period immediately before an election than they are at other times – it follows that they are more likely to change their minds or make up their minds in this period.

    The tea tapes were a step change at the last election, in a similar way to the Exclusive Brethren smear against Brash back in ’05.

    Comment by Swan — October 29, 2013 @ 7:49 pm
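
For what it’s worth, the ‘inertia’ Swan is talking about usually enters these models as a smoothing parameter over calendar time. A toy sketch of one such smoother (not the method behind the tracking poll, and the polls below are invented): each day’s estimate is a weighted average of all polls, with weights decaying as a poll gets older.

    # Toy smoother over calendar time. The half_life parameter is the "inertia"
    # being discussed: shorten it and the estimate chases new polls, lengthen it
    # and bounces get damped. Dates and figures are invented.
    from datetime import date

    polls = [  # (fieldwork midpoint, National %)
        (date(2013, 8, 15), 47.0),
        (date(2013, 9, 20), 44.5),
        (date(2013, 10, 21), 50.2),
        (date(2013, 10, 21), 45.0),
    ]

    def smoothed(target_day, polls, half_life=21.0):
        num = den = 0.0
        for day, value in polls:
            age = abs((target_day - day).days)
            weight = 0.5 ** (age / half_life)   # exponential decay with poll age
            num += weight * value
            den += weight
        return num / den

    print(round(smoothed(date(2013, 10, 28), polls), 1))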

  33. Saw the front page headline read copy.

    Bad day in the news room. They are making the “NEWS”.

    Yeah right.

    Comment by peterlepaysan — October 29, 2013 @ 8:01 pm

  34. “Media outlets love using the ‘governing alone’ narrative. It gives them a warm FPP feeling inside.”

    Exactly. How many silly old men need to die before that one goes away?

    Comment by Sacha — October 29, 2013 @ 10:46 pm

  35. The Vote has been chopped and Nightline (which to be fair has not done any long form journalism for years) is set to be replaced by a Paul Henry vehicle (I kid you not). So news “hours” padded with weather, interminable ads, sponsored stock market reports and with a reporting style reduced to the level of the Village idiot combined with Paul Henry, Seven Sharp and foreign sourced disease of the week/cute animal fluff is what passes as “news and current affairs” today on both our main TV channels. Campbell Live does some proper journalism, but that is about it. The weekend early morning talking head shows are not watched by anyone and anyway are very weak. They appear to operate a revolving door with Jim Mora’s panel, and the panels they have simply act as uncritical repeaters of received wisdom.

    I mention the death of journalism in New Zealand (and it is dead now, we live in a land where the red top is all we’ve got) because I heard John Key this morning dismissing questioning of our role in the Five Eyes network by saying “we do nothing illegal”. It was a comment so disingenuous and glib that it was screaming out to be questioned by even the most mildly critical journalist. Instead, it will be repeated without comment in the media, John Armstrong will write a piece about the latest polls, and we’ll move on with another unchallenged lie cemented into the civic mindscape. This government’s high polling is surely aided and abetted by it being the biggest bunch of exponents of the techniques and values of institutional corporate bullying being reported on by the weakest bunch of journalistic cowards in our history.

    Comment by Sanctuary — October 30, 2013 @ 8:44 am

  36. I’m not sure where you stand on this @Sanctuary, I think you mean that you hate everyone.

    Comment by TransportationDevice A7-98.1 — October 30, 2013 @ 9:34 am

  37. People are far more actively engaged in politics in the period immediately before an election than they are at other times – it follows that they are more likely to change their minds or make up their minds in this period.

    True, but only that subset of people who count themselves as swing voters.
    Partisan bias is also more likely to harden (and become more vocal) closer to an election as well.

    Comment by Gregor W — October 30, 2013 @ 9:47 am

  38. “…I’m not sure where you stand on this @Sanctuary…”

    Basically, I think the current government has successfully transposed the values of corporate governance to democratic governance, complete with the ambient culture of bullying that is explicit in the vertical hierarchies of non-democratic corporations. It is, in a sense, the most complete neo-liberal government we’ve ever had – the structures and values of business have now completely subsumed the accountabilities and values of civil service and civil society at the top. At the same time, the crisis in broadcast and traditional journalism has seen it reduced to a tabloid desert that is completely reactive and thus reactionary. The few journalists who remain now operate in tightly controlled organisational environments explicitly aligned with the same authoritarian corporate values as the National government.

    Since the whole point of modern corporate managerialism is to centralise power in the workplace and cow the workforce, the journalists who report on the bullies in the Beehive have already accepted the legitimacy of their own powerlessness and their reporting suffers from institutionalised “battered worker syndrome”. It is telling that the few critics of the government in the “mainstream media” are those who do not work for private corporate entities.

    So… I guess you might be right – I despair of the state of our fourth estate, and I can’t think of anyone in any political party of the left prepared to do what it takes to do something about it.

    Comment by Sanctuary — October 30, 2013 @ 10:18 am

  39. @Andrew – thanks for the link. I was working on the basis of what I knew AC-N used to do with their political polling and assumed the others had similar methodologies. In short, they all do engage in some kind of analytics over the raw data. I note though that the table is only a ‘best guess’ in many places…🙂

    @sheesh – I do not work for a polling company, but my background is Statistics and I’m an unashamed numbers-nerd. This kinda stuff spins my wheels, so I like debating it.

    Keep telling yourself that it is a ‘bounce’ in the polls
    I’m not saying Cunliffe’s elevation to leader is a bounce. What I’m saying is that it looks a lot like the kind of bounces that we regularly see in polling all across the democratic world when a leader of one party gets proportionally more media attention over a short time frame. Kevin Rudd earlier this year, and the Dem and Rep conventions in the lead-up to the US presidential election in 2012, are good examples of this.

    I’m sure you want to argue nuts and bolts…
    You must be new. Welcome to the internet. I hope you enjoy your stay.

    …because arguing with the trend would be worrying
    I’m quite happy to argue the trend, but that would probably be a waste of time. We’re all in agreement that 2013 has, on balance, not been a good year for National – I say that in terms of public perception and subsequent preferred-party choice rather than actual policy merits, which we all debate ad nauseam.
    What interests me is trying to work out what might be driving the trend, going forward. I personally think the Nats will be comfortable with where they’re polling right now, because an improving economy over 2014 should work in their favour (whether or not they deserve credit).

    Comment by Phil — October 30, 2013 @ 12:47 pm

  40. @wtl
    My point wasn’t anything to do with polls 12 months away. It was about the cluster of polls just before the election. The actual election is the only REAL data point, so the polls should be judged according to how well they correspond to that point… you are refusing to evaluate the polls in light of the real data and are simply hand-waving away any discrepancy between the polls and the actual data.

    Well, here’s the thing. Polling takes time. There are lags between when you get a telephone response and when you can publish a result. In a benign environment where the public have largely switched off (I’m thinking specifically of ’08, where Obama was elected on the Tuesday before our voting on the Saturday) the polls in the lead-up to the election didn’t move much and were, on balance, quite good predictors of the outcome. On the other hand, with no outside distractions to the campaign in ’11, there were strong reasons why someone might change their preferred party choice right up to the Friday night before voting. Pollsters did correctly pick up a clear nose-dive in fortunes for National, and I’m evaluating their work on the basis of both the election result and the media narratives the public were responding to.

    Comment by Phil — October 30, 2013 @ 12:59 pm

  41. @Phil

    Yeah it’s interesting (for me anyway) seeing the variety of approaches.

    Definitely Colmar Brunton and I *think* DigiPoll use stratified probability sampling, or close to it. In Australia Roy Morgan say they use probability sampling (no quotas), but they wouldn’t answer me when I asked them about their NZ poll.

    Reid Research definitely use quotas (the chap who runs it described the method in this presentation – http://www.mrsnz.org.nz/webfiles/MarketResearchSocietyNZ/files/Maintaining_connection_with_NZ_voters_Murray_Campbell_Preso.pdf), and my understanding is that they don’t weight their data.

    Ipsos say they use quotas in their methodology, and they also say they weight – so they use the combination of the two approaches that you describe.

    Comment by Andrew — October 30, 2013 @ 4:26 pm

  42. Anyone still convinced that the polls are accurate should reflect on this:

    Fairfax Ipsos 19-23 October National 50.2% (1030 people)
    One News 19-23 October National 45% (1014 people)
    Roy Morgan October 14-27, 2013 National 42% (847 people)

    The two-tailed p-value for the difference between the Fairfax and the Roy Morgan is 0.0004.

    Comment by wtl — October 31, 2013 @ 10:16 am
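
wtl’s 0.0004 can be reproduced with a standard two-proportion z-test on the figures quoted above; a quick sketch (using scipy):

    # Two-proportion z-test: Fairfax Ipsos (50.2% of 1030) vs Roy Morgan
    # (42% of 847) for National, reproducing the p-value quoted above.
    from math import sqrt
    from scipy.stats import norm

    n1, p1 = 1030, 0.502   # Fairfax Ipsos
    n2, p2 = 847, 0.42     # Roy Morgan

    pooled = (p1 * n1 + p2 * n2) / (n1 + n2)
    se = sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    p_value = 2 * norm.sf(abs(z))            # two-tailed

    print(round(z, 2), round(p_value, 4))    # roughly z = 3.5, p = 0.0004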

  43. Wtl, you must be making assumptions about the variability in the change in preferences with respect to time though.

    Comment by Swan — October 31, 2013 @ 12:15 pm

  44. Swan: The dates are largely coincident (Oct 14-27 vs 19-23). Are you seriously suggesting that a change of 8% is accounted for by the fact that the Roy Morgan poll was conducted over a slightly longer period? If that’s the case, then there is no point trying to convince you, since no amount of evidence will do so. Others can draw their own conclusions.

    Comment by wtl — October 31, 2013 @ 12:54 pm

  45. @wtl

    You’ve heard of the margin of error, right?
    It’s usually 3.1% or thereabouts. What the media stories don’t include (for no other reason than media mathematical illiteracy) is that the 3.1% is subject to a 95% confidence interval. Meaning that, even when you’re sampling the population in a perfectly unbiased manner, the dumb luck of statistics means that roughly 1 in 20 polls is going to be “rogue”.

    It’s probably the case that Ipsos overstated the Nats this time around. It’s also probably the case that MetService overstated the temperature in Wellington today – it’s fucking awful here. That doesn’t mean I’m not going to look at, and gain valuable information from, their forecast for tomorrow.

    Comment by Phil — October 31, 2013 @ 5:05 pm
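
The 3.1% Phil mentions is just the half-width of a 95% interval for a 50% result with a sample of about 1,000; a quick check:

    # Where the familiar 3.1% margin of error comes from: the half-width of a
    # 95% confidence interval for a 50% result with a sample of ~1000.
    from math import sqrt

    n, p, z95 = 1000, 0.5, 1.96
    moe = z95 * sqrt(p * (1 - p) / n)
    print(round(100 * moe, 1))   # 3.1 (percentage points)

    # The 95% level is also where the "1 in 20 rogue poll" rule of thumb comes
    # from: by construction, about 5% of such intervals miss the true value.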

  46. Phil – the 5% rogue factor only applies for population samples taken under the same (or near-same) conditions though, right?

    What I mean is you would have to see 20 sample polls, taken at a similar time across the same population and asking the same questions, before you see one outlier.

    Or am I misinterpreting confidence intervals? Never did really have the hang of stats…

    Comment by Gregor W — October 31, 2013 @ 5:22 pm

  47. To make things even more complicated, technically you shouldn’t really apply the MoE to non-probability surveys (quota surveys).

    The RM poll was carried out over two weeks, and we have no idea how the interviews were distributed over those two weeks, so I don’t think it’s fair to statistically test differences between RM and the other two polls and argue that a difference means they are inconsistent. I’m not arguing that they are consistent – just that the evidence you presented doesn’t tell us they are not.

    Comment by Andrew — October 31, 2013 @ 6:09 pm

  48. Phil: I know what a 95% confidence interval is. You, on the other hand, either know nothing about statistics or are being deliberately disingenuous. After all, I did give the p-value for the difference, which was very very small (0.0004). For reference, here are the confidence intervals for the Fairfax poll of 50.2% out of ~1000 people:

    95% confidence interval 47.1 to 53.3%
    99.9% confidence interval 45.0 to 55.4%
    99.9999% confidence interval 42.5 to 57.8%

    Which means that the Roy Morgan result isn’t just way outside the 95% confidence interval, it is also way outside the 99.9% confidence interval and is in fact outside the 99.9999% confidence interval.

    In other words, it is very very very unlikely that the Fairfax and Roy Morgan polls were sampling from the same population. So either:
    1) Both polls are accurate reflections of the NZ voting population but there was a massive change in support for National (of at least 8%) in the middle of the Roy Morgan polling period that was not seen in the Fairfax due to the slightly different polling periods. (Yeah right.)
    2) At least one of the polls was not a random sample of the NZ voting population, and therefore does correctly not reflect the level of support for National among New Zealand voters (i.e. at least one poll was not accurate).

    Comment by wtl — October 31, 2013 @ 6:51 pm
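
wtl’s intervals are straightforward to reproduce; a sketch, assuming a simple random sample of roughly 1,000 (which, as Andrew notes, a quota survey strictly isn’t):

    # Reproducing the confidence intervals quoted above for the Fairfax result
    # (50.2% of ~1000), assuming a simple random sample.
    from math import sqrt
    from scipy.stats import norm

    n, p = 1000, 0.502
    se = sqrt(p * (1 - p) / n)

    for level in (0.95, 0.999, 0.999999):
        z = norm.ppf(1 - (1 - level) / 2)
        lo, hi = p - z * se, p + z * se
        print(f"{100 * level:g}% CI: {100 * lo:.1f}% to {100 * hi:.1f}%")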

  49. Of course, “does correctly not reflect ” should be “does not correctly reflect”.

    Comment by wtl — October 31, 2013 @ 6:54 pm

  50. Gregor W: If you can be bothered reading this rather long explanation, it will hopefully explain what a margin of error and ‘rogue poll’ is:

    A poll is a method for determining something about a population using a small random sample of people from that population. For the election poll, the goal is estimate the percentage of voters who would vote for a certain party using a random sample of voters. Since we are only using a sample of people rather than whole population, the percentage determined from the sample will not exactly equal the percentage in the whole population. This is where the margin of error comes in.

    The margin of error (or confidence interval) tells you the expected range of the percentage in the whole population given the percentage obtained in the sample. The usual margin of error reported is a 95% confidence interval, which means the population percentage will fall in this range in 95 polls out of 100. For the 50.2% in the Fairfax poll, the 95% confidence interval is 47.1 to 53.3% (corresponding to a margin of error of about 3%), which means in 95 polls out of 100, the actual population percentage will be between 47.1 and 53.3%, even though the sample gave us 50.2%. In the other 5 times out of 100, you would get a ‘rogue poll’ in which the population percentage is outside the margin of error. However, a ‘rogue poll’ is not a free pass, as implied by Phil. That is, the true population percentage can’t just be any old number; it should still be reasonably close to the sample percentage. To see this effect, we can calculate different confidence intervals. For example:

    The 99.9% confidence interval for the 50.2% in the Fairfax poll is 45.0% to 55.4% which means that in 999 polls out of 1000, the true population percentage is between 45.0% to 55.4%.

    The 99.9999% confidence interval for the 50.2% in the Fairfax poll is 42.5% to 57.8%, which means that in 999999 polls out of 1000000, the true population percentage is between 42.5% to 57.8%.

    At this stage, it should already be apparent that the difference between the Fairfax and the Roy Morgan is unlikely to be explained by the ‘rogue poll’ explanation. However, to properly address this we must remember that both polls are estimates and have their own margins of error, so we should do a statistical test for the difference between the two polls to account for this.

    Here, we do a hypothesis test. First, we propose a null hypothesis that the population percentages for the two polls are the same (i.e. they are from the same population). We also propose an alternative hypothesis that the population percentages for the two polls are not the same (i.e. they are not from the same population). We do this test and we calculate a p-value, which tells us the probability that any differences seen in the data were due to chance alone (i.e. the random sampling process). The smaller the p-value, the less probable it is that the null hypothesis is true.

    For the data in question (i.e. 50.2% of ~1000 people vs 42% of ~850 people), the p-value is 0.0004. With this in hand, let’s look at the possible interpretations:
    1) The two polls are actually sampling from the same population (i.e. the null hypothesis is true), and the difference is due to chance alone. However, the p-value tells us that this would only occur 4 times out of 10000 (0.0004) so is very unlikely to be the case.
    2) The two polls were sampling from different populations (i.e. we reject the null hypothesis), and this is due to a change in support for National over time. The statistics cannot tell us if this or (3) is correct, but common sense tells us that this is unlikely to be the case, since the two polls were conducted over overlapping periods and there were no major events that would explain a sudden change in support for National over that period.
    3) The two polls were sampling from different populations (i.e. we reject the null hypothesis), because the polling companies were sampling from different subsets of voters, i.e. at least one sample was not actually a random sample of the NZ voting population. I think that this should be the most likely explanation for anyone who honestly looks at the data. It pretty much means that at least some polls are not accurate and there is some underlying bias at play, which is not properly corrected by the adjustment procedures used by the polling companies.

    Note that if (3) is the case, it means all bets are off. As the sampling procedures are in question, the only way we can really know the percentage support for any party among voters is to poll every voter, i.e. have an election. Which is why it is a good idea to evaluate the polls based on how well they correspond to actual election results, as done in danyl’s ‘corrected’ graphs. From the polls alone there is no way of knowing the true percentage support for any party; the best we can do is assume that it is reasonably close to the polls, but we really have no idea exactly how close.

    Comment by wtl — October 31, 2013 @ 9:29 pm
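
wtl’s interpretation (1) can also be checked by brute force: simulate many pairs of polls drawn from the same underlying level of support and count how often they disagree by 8 points or more. A quick sketch (the assumed true support of 46% is arbitrary; nearby values give similarly tiny answers):

    # Brute-force check of interpretation (1): if both polls really sampled the
    # same population, how often would samples of 1030 and 847 disagree by
    # 8 points or more? The assumed true support (46%) is arbitrary.
    import numpy as np

    rng = np.random.default_rng(0)
    true_support, n1, n2, trials = 0.46, 1030, 847, 200_000

    poll1 = rng.binomial(n1, true_support, trials) / n1
    poll2 = rng.binomial(n2, true_support, trials) / n2
    freq = np.mean(np.abs(poll1 - poll2) >= 0.08)

    print(freq)   # a few parts in ten thousand, the same order as the p-value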

  51. Hey Danyl, perhaps you could do a poll to see if (3) is the most likely scenario…

    Comment by Lee C — November 1, 2013 @ 6:08 am

  52. Thx for the detailed explanation, wtl.

    I had assumed 3, given that the result was so far beyond the upper limit of 95%.

    What I was trying to get my head around is whether the 1 in 20 ‘rogue’ applies to any result beyond the upper bound – in other words, is a discrete result irrespective of value – or, as you have explained, whether the likelihood of it being a valid result (close to insignificant in this instance) reduces the further you get away from the upper bound of the established 95% confidence interval.

    Comment by Gregor W — November 1, 2013 @ 9:26 am

  53. @ Phil

    Not this particular poll, but there have been many polls strangely skewed towards National where the information was typically collated three months (or more) before the results were released, and in the meantime National had done something negative against what the people want – such as forging ahead with the partial privatization of state assets, coming up with the idea of abolishing morning and afternoon tea breaks, drilling on conservation land, merging and closing schools, etc. – which would probably mean that if the information were up to date it wouldn’t favour National so much.

    Comment by Dan — November 1, 2013 @ 1:25 pm

  54. The Fairfax poll is a joke. How many people didn’t answer or wouldn’t say their voting intentions in the Fairfax poll? 21.9%, that’s how many. Which makes the poll a waste of time and effort. So National got 50% of the 78.1% of people polled who gave a preference, which equates to 39.05%.

    Comment by Mark — November 1, 2013 @ 2:34 pm
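
Mark’s 39.05% is just the standard rescaling from ‘decided voters’ to all respondents; for the record:

    # The arithmetic behind Mark's figure: the headline ~50% is a share of
    # decided voters only; rescaling to all respondents (including the 21.9%
    # who gave no preference) gives his lower number.
    undecided = 0.219
    national_among_decided = 0.50            # Mark's rounded figure

    national_among_all = national_among_decided * (1 - undecided)
    print(f"{100 * national_among_all:.2f}%")   # 39.05%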

  55. @Dan

    What poll are you talking about? I’ve never heard of that happening in NZ, ever.

    Comment by Andrew — November 1, 2013 @ 3:33 pm

  56. http://imperatorfish.com/2013/10/30/how-to-explain-that-poll/

    Stats nerds, get with the programme – Imperator Fish explains that words, not numbers, are what you need to understand numbers.

    Jolly good too. I thought for a moment we were about to prove that drinking milk was a strong factor in becoming a heroin addict…

    Comment by sheesh — November 1, 2013 @ 6:04 pm

  57. Hey wtl – We have no idea how the RM interviews were distributed over the two week fieldwork period. Most of the interviews could have been completed in the first week, and then the second week was spent just trying to reach hard-to-achieve quotas. For this reason I don’t think it’s fair to statistically test for a difference between RM and either of the other two polls, and to use p<.05 as an argument for polls being inconsistent. I'm not saying they're consistent – I just disagree that p<.05 is evidence they're inconsistent.

    Also – the confidence intervals you cite are for probability samples. If you're really wanting to get technical, you can't really apply them to non-probability samples (quota surveys).

    Comment by Andrew — November 2, 2013 @ 7:06 am

  58. Wellington voters have, by and large, consistently ignored the pro-National editorial line of the Dom-Post for the 23 years since Richard Long took over as chief National Party booster and Editor in Chief in 1990. It must drive the Dom-Post editors nuts.

    Their problem is that Wellington voters are generally quite well informed about matters policy and governmental….and the Dom-Post can’t claim much credit there….and less as the years have passed.

    Comment by Steve W — November 3, 2013 @ 9:27 pm

  59. @Steve: Well, they did elect Richard Prebble as an MP for a few years there…

    Comment by Hugh — November 4, 2013 @ 9:01 am

  60. Hey Danyl, I was just wondering if there was a poll that was bias-corrected for the minor parties?

    Comment by Matthew — November 8, 2013 @ 9:59 am

