I’m not sure if Graeme was suggesting that would be the only information used, just that it could be analyzed in a way that conveys answers to more questions than just how much support each party would get, according to the poll. The tricky bit would be in working out the horse trading based on the numbers. You could work out the probability of each possible coalition getting enough numbers, but working out whether the coalition would be viable, whether the parties would work with each other, involves a lot more analysis.
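To make the "probability of each possible coalition getting enough numbers" idea concrete, here's a rough Monte Carlo sketch in Python. Everything in it is hypothetical: the party names and poll numbers are made up, parties are drawn independently (real poll results are correlated), and it ignores MMP seat allocation and the 5% threshold entirely, so it's an illustration of the arithmetic rather than a forecast.

```python
import random

def coalition_probability(polls, coalition, n_sims=10_000, sample_size=1000, seed=42):
    """Estimate the chance a coalition's combined party vote exceeds 50%.

    `polls` maps party -> polled support (as a fraction). Each simulation
    draws every party's 'true' support from a normal approximation to the
    poll's sampling distribution, then checks whether the coalition's
    parties sum to a majority. Ignores seat allocation, thresholds, and
    correlations between parties -- a sketch, not a forecast.
    """
    rng = random.Random(seed)
    hits = 0
    for _ in range(n_sims):
        draws = {
            party: rng.gauss(p, (p * (1 - p) / sample_size) ** 0.5)
            for party, p in polls.items()
        }
        if sum(draws[party] for party in coalition) > 0.50:
            hits += 1
    return hits / n_sims

# Hypothetical numbers, not from any actual poll:
polls = {"A": 0.45, "B": 0.08, "C": 0.30, "D": 0.07}
prob = coalition_probability(polls, coalition=("A", "B"))
print(prob)
```

Even this toy version shows why the second half of the problem is harder: the simulation can tell you whether A+B clears 50% of the vote, but nothing in the numbers tells you whether A and B would actually sit at the same table.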

Right – that’s what I was attempting to say … if it really is different to how I expressed it, then I’m happy to adopt that formulation. I get that there’s a difference between bias and sampling error … my point is that (1) a given polling firm’s method of gaining information from the electorate is as likely to suffer from the former as a given poll is to suffer from the latter, no matter how carefully pollsters try to correct for it; and so (2) as different polling firms have different sampling techniques, there is greater uncertainty about the match-up between their approximations of the electorate’s voting intentions and the real-world voting intentions of the electorate than the “margin of error” suggests.

Which is why I think trying to mine even more information out of single polls (which is what Graeme wanted to see happen, which is where we started this discussion) actually isn’t very helpful at all … because single polls conducted using one sampling method aren’t a very good source of information about the real world, and so trying to make them appear even more authoritative (“this poll shows there is only a 12% chance that National will fall short of a majority government!”) is a bad thing. Now, of course, a responsible news media would carefully explain to its readers/listeners all the caveats that need to be placed on such a bald statement. But that would require that we have a responsible news media.

Which is why poll aggregations are gooder sources of information. And which is why Nate Silvers are gooder still.
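A quick sketch of why aggregation helps, in Python. Pooling several polls by sample size shrinks the *sampling* error, because the combined sample is bigger; it does nothing about shared house effects or bias, which is why the Nate Silver-style models go further and fit per-pollster adjustments on top. The poll numbers below are made up for illustration.

```python
def pooled_estimate(polls):
    """Combine (support, sample_size) pairs by sample-size weighting.

    Returns the pooled proportion and its standard error. Only reduces
    sampling error; shared bias across pollsters survives pooling.
    """
    total_n = sum(n for _, n in polls)
    p = sum(share * n for share, n in polls) / total_n
    se = (p * (1 - p) / total_n) ** 0.5
    return p, se

# Three hypothetical polls of the same party:
polls = [(0.47, 1000), (0.50, 750), (0.48, 850)]
p, se = pooled_estimate(polls)
```

With these numbers the pooled standard error (about 1 point) is well under the roughly 1.6 points you'd get from any single poll of 1000, which is the whole appeal.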

You know, I’m not sure that that’s what I want to see. As many votes as possible taken off New Zealand First, please.

“Apparently this is a ‘Level 8’ (postgrad!) concept in maths”

I’m pretty sure that “Level 8” in that context means Year 13. Personally I’d be happy to relegate p-values to postgrad and teach kids Bayesian credible intervals instead.
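For what it's worth, the credible-interval alternative is easy to sketch with nothing but the standard library. With a uniform Beta(1, 1) prior, the posterior for a proportion is Beta(1 + successes, 1 + failures), and unlike a confidence interval you *can* read the result as "the parameter is in this range with 95% probability". The poll numbers here are hypothetical, and the quantiles come from Monte Carlo draws to avoid needing scipy.

```python
import random

def beta_credible_interval(successes, n, level=0.95, draws=20_000, seed=1):
    """Equal-tailed Bayesian credible interval for a proportion.

    Uniform Beta(1, 1) prior, so the posterior is
    Beta(1 + successes, 1 + n - successes); quantiles are estimated
    from Monte Carlo samples using only the standard library.
    """
    rng = random.Random(seed)
    samples = sorted(
        rng.betavariate(1 + successes, 1 + n - successes) for _ in range(draws)
    )
    tail = (1 - level) / 2
    lo = samples[int(tail * draws)]
    hi = samples[int((1 - tail) * draws) - 1]
    return lo, hi

# Hypothetical poll: 320 of 1000 respondents back the party.
lo, hi = beta_credible_interval(320, 1000)
```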

“using THIS method of polling to determine that a party’s level of support is X, 95 out of 100 other polls using this method will produce a result within the range of X-m to X+m (where m is the margin of error for the 95% confidence level)”.

It’s more like “If we ran this poll 100 times, we expect the true value of this party’s support *in the population we are sampling* to fall in the interval [p-m, p+m] 95 times”. Apparently this is a “Level 8” (postgrad!) concept in maths.
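To pin down the m in that interval: under the usual normal approximation, the margin of error is z times the standard error of the proportion. The familiar "plus or minus about 3.1%" quoted for polls of 1000 is the worst case, p = 0.5; smaller parties have smaller margins. A minimal sketch (the sample size is illustrative):

```python
def margin_of_error(p, n, z=1.96):
    """Half-width of the normal-approximation 95% CI for a proportion p
    from a sample of size n."""
    return z * (p * (1 - p) / n) ** 0.5

# Worst case p = 0.5 with a sample of 1000 gives roughly 0.031, i.e.
# the "maximum margin of error" of about +/-3.1% usually reported:
m = margin_of_error(0.5, 1000)
```

Note the reported margin is often just this maximum; a party polling at 5% has a margin closer to ±1.4%, not ±3.1%.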

As you say, you might be sampling a population that is different from the one that turns up on election day.

I think the point Graeme is making is that we shouldn’t place undue attention on the edges of confidence intervals: if you are willing to treat polls as estimates of the population proportion, then the true values are more likely to be close to the point estimate than far from it. So, for instance, if you had a poll in which National was on 52%, you could say, “ah, well, the CI contains 50%, so maybe they don’t really have a stand-alone majority”, or you could say, “if the true support for National were 50%, then we’d expect a result this extreme or more so about ~12% of the time”. Though this is different from saying there is a 12% chance that National is below 50% – in the most common approach to statistics it’s not actually possible to talk about the probability that an unknown parameter takes a given value.
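A figure in the right neighbourhood of that ~12% can be reproduced with a one-sided tail calculation. To be clear about assumptions: the sample size below is mine, not anything stated in the thread, and whether "this extreme or more so" means one tail or two changes the answer, so treat this as a sketch of the arithmetic only.

```python
from math import erfc, sqrt

def tail_probability(observed, hypothesised, n):
    """P(sample proportion >= observed) when true support is `hypothesised`.

    One-sided tail under the normal approximation, i.e. the "result this
    extreme or more so" calculation in a single direction.
    """
    se = sqrt(hypothesised * (1 - hypothesised) / n)
    z = (observed - hypothesised) / se
    return 0.5 * erfc(z / sqrt(2))  # standard normal upper tail

# Hypothetical poll: National on 52%, hypothesised true support 50%.
# n is assumed; at a sample of ~870 the one-sided tail lands near 12%.
p = tail_probability(0.52, 0.50, 870)
```

And the distinction in the paragraph above holds either way: this p-value is a statement about how often data this extreme would occur *if* support were 50%, not the probability that support is below 50%.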

(Now Pete or Brad can tell me what I got wrong…)

But, of course, we don’t know the “real value of support”. That’s what the poll is approximating (through a randomised sample of respondents, contacted in a particular way). And the process of sampling and contacting can introduce distortions to the data, which pollsters can try to correct for … but may not be able to do so accurately.

Hence, the most we can say about a given poll is that “using THIS method of polling to determine that a party’s level of support is X, 95 out of 100 other polls using this method will produce a result within the range of X-m to X+m (where m is the margin of error for the 95% confidence level)”. Which is why, as Graeme himself noted, “The biggest polling outliers for both National and New Zealand First appear to be the election” … which is the only poll that gives us “the real value of support” of each party.
