What’s the Shy Tory effect?
The term “shy Tory” originates, as many wacky nicknames for political factions do (see: “Whig”), from the UK. It emerged in the aftermath of the 1992 UK general election, where a race which
polls suggested was a tie, with Labour narrowly ahead, instead turned out to be a sizeable win for the ruling Conservatives. An inquiry by the Market Research Society suggested Conservative voters may have misled pollsters about their voting intentions. Thus was born the idea that there is a segment of respondents who will not admit to voting for, or holding views in line with, politically-incorrect/conservative parties, but who will vote for those parties in the privacy of the ballot box.
The shy Tory effect is an extension of the more widely established
social desirability bias, where respondents are less likely to admit to behaviours considered socially undesirable (e.g. drug use) and more likely to claim they act in socially desirable ways (e.g. recycling, donating to charity). In the UK, there is some evidence that voting-intention polls have tended to underestimate the Conservatives, and that live-interview polling (an interviewer directly asking people who they intend to vote for, either face-to-face or over the phone) tends to be more skewed than automated polling (e.g. an online survey), suggesting a possible shy Tory effect there. However, the shy Tory effect has since been expanded well beyond its origins, and is now regularly trotted out whenever a conservative party/candidate/cause overperforms its polls (e.g. Donald Trump’s presidential bids), even when the error is fairly small and/or occurs in an electorate with little history of polling skew.
In particular, the shy Tory effect has been frequently cited as an explanation for the
2019 Australian polling failure, in which polls collectively underestimated the centre-right Liberal-National Coalition’s vote by 2.4–3.4% and overestimated the centre-left Labor Party’s vote by a similar margin. At first glance, this might seem somewhat persuasive, given the Association of Market and Social Research Organisations’ conclusion that “a skew (to Labor) has been evident in recent election cycles, with 17 of the 25 final poll results since 2010 (68%) overestimating 2PP support for Labor”.
However, I don’t find the above credible evidence of a polling skew towards Labor. Consider this: while “17 of 25 since 2010 (overestimated)…Labor” may sound like clear evidence, that period only covers four elections: one in which Labor was clearly overestimated (2019), one where Labor was slightly overestimated (2010), one where the polls were bang-on (2013), and one where the polls very slightly overestimated the Coalition (2016). Polling errors within an election tend to be at least somewhat correlated (e.g. polls all underestimated the Coalition at the 2019 federal election, but overestimated them at the 2018 Victorian and 2017 WA state elections), and ignoring this fact can make a figure like “17 of 25 polls” sound a lot more skewed than the election-level record really is: in 1 of 4 elections pollsters overestimated Labor, in another they got the 2pp to within 1%, and in the remaining 2 of 4 they got the 2pp right to within 0.3%.
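To see why poll-level counts can mislead, here is a toy sketch with hypothetical errors (loosely mirroring the four-election pattern described above, but not real polling data):

```python
from statistics import mean

# Hypothetical 2pp polling errors (poll minus result; positive means Labor
# was over-estimated), grouped by election. These are illustrative numbers,
# not the real 2010-2019 polls. Within an election, errors cluster because
# pollsters share methods and sampling frames.
errors_by_election = {
    2010: [0.8, 0.5, 0.7],       # Labor slightly over-estimated
    2013: [0.1, -0.2, 0.0],      # polls essentially bang-on
    2016: [-0.3, -0.2, -0.4],    # Coalition very slightly over-estimated
    2019: [2.9, 2.6, 3.1, 2.7],  # Labor clearly over-estimated
}

# Counting individual polls makes the skew look widespread...
all_polls = [e for errs in errors_by_election.values() for e in errs]
n_over = sum(1 for e in all_polls if e > 0)
print(f"{n_over} of {len(all_polls)} polls over-estimated Labor")

# ...but averaging within each election first shows the apparent skew
# comes from essentially one cycle.
per_election = {yr: mean(errs) for yr, errs in errors_by_election.items()}
for yr, avg in per_election.items():
    print(yr, f"{avg:+.2f}")
```

With these made-up numbers, 8 of 13 polls over-estimated Labor, yet three of the four election-level averages sit within a few tenths of a point of zero.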
Still, that doesn’t settle the question of whether Australian polling suffers from a shy Tory effect. To determine whether pollsters are being misled or avoided by conservative-leaning voters to an extent which skews their polls, let’s take a systematic look at Australian polling for elections and the same-sex marriage survey.
But first, let’s set out what we would expect to see if there was a shy Tory effect in Australian polling:
1. Over time, polls would systematically underestimate support for conservative and/or politically-incorrect parties and causes. They don’t have to underestimate the conservative side every time, of course, but across a bunch of elections, polls should underestimate conservatives on average if respondents were unwilling to admit to voting for them. Here, I’m going to use voting-intention polls for the Liberal-National Coalition (and its component parties) and One Nation (a nationalist-conservative, far-right party), as well as polling for the same-sex marriage survey, to determine if this is the case. Note that I use polls of first-preference voting intention (aka primary votes), not two-party-preferred (2pp). While 2pp can be a useful metric, most modern pollsters estimate 2pp by assuming minor-party voters’ preferences will flow to the major parties at the same rate as they did at prior elections (e.g. assuming 82% of Greens voters will place Labor above the Coalition candidate(s)), and therefore 2pp estimates can be off due to shifts in preference flows, which pollsters don’t usually poll about. This analysis is also limited to final polls (defined as the last poll released by a pollster for an election, taken within 7 days of that election), to minimise the effects of any late swing on pollster error.

2. Polls taken using live-interview methods (i.e. an actual person, either in person or over the phone, asking voters who they intend to vote for) should be more skewed against conservatives than polls taken using automated methods (i.e. robopolls or online surveys). Anonymised survey methods reduce social-desirability bias, and given that live-interview polls in 2016 apparently showed Clinton further ahead than online polls and robopolls did, automated polls should show less skew than live polls.
This is important because even if we find a skew against conservatives in polling, it does not necessarily follow that it is due to voters being unwilling to admit to intending to vote for conservatives – for example, it might be because pollsters are undersampling right-leaning groups, but the ones they do find respond with their genuine voting intention. The difference between live and automated polling is important for determining whether polls simply need to weight their samples differently, or whether a group of respondents are systematically misleading pollsters as to their voting intentions.
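To make the preference-flow point concrete, here is a minimal sketch of how a pollster might derive a 2pp estimate from primary votes using assumed historical flows. Apart from the ~82% Greens-to-Labor figure mentioned above, all numbers (primary votes, flow rates) are illustrative assumptions, not real polling data:

```python
# Hypothetical primary votes (per cent); chosen to sum to 100.
primaries = {"Coalition": 41.0, "Labor": 33.0, "Greens": 11.0,
             "One Nation": 4.0, "Others": 11.0}

# Assumed share of each minor party's preferences flowing to Labor.
# The 0.82 Greens figure echoes the example in the text; the rest are
# made-up placeholders.
flow_to_labor = {"Greens": 0.82, "One Nation": 0.35, "Others": 0.50}

# 2pp estimate: Labor's primary plus its assumed share of minor-party flows.
labor_2pp = primaries["Labor"] + sum(primaries[p] * f
                                     for p, f in flow_to_labor.items())
coalition_2pp = 100.0 - labor_2pp
print(f"Labor 2pp ~ {labor_2pp:.1f}, Coalition 2pp ~ {coalition_2pp:.1f}")
```

If the real preference flows shift (say, One Nation preferences splitting differently than last election), the 2pp estimate moves even when every primary vote in the poll is exactly right, which is why this analysis sticks to primaries.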
The voting-intention polls used in this analysis are available
here, while the polling for the same-sex marriage survey was sourced from Wikipedia. My thanks to Dr Kevin Bonham and William Bowe (of Poll Bludger) for having shared their archives of old Australian polling.
So, do polls underestimate conservatives?
On average, Australian polling does not under-estimate conservative parties
Election | Method | Coalition | One Nation
---------|--------|-----------|-----------
1988 NSW | Live | -3.5 |
1988 NSW | Live | 0 |
1991 NSW | Live | 0.4 |
1995 NSW | Live | -0.9 |
1999 NSW | Live | 4.3 | -2
1999 NSW | Live | 1.3 | -2
2003 NSW | Live | -1.3 |
2003 NSW | Live | -1.8 | 0.2
2007 NSW | Live | -2 |
2007 NSW | Live | -1 |
2011 NSW | Live | -1.1 |
2011 NSW | Automated | 3.9 |
2011 NSW | Live | -0.1 |
2015 NSW | Live | -1.6 |
2015 NSW | Automated | 3.4 |
2015 NSW | Automated | -0.1 |
2015 NSW | Live | -0.6 |
2015 NSW | Live | 1.4 |
2019 NSW | Automated | -0.5 | 3.9
2019 NSW | Automated | -0.5 | -0.1
1988 VIC | Live | -5.2 |
1992 VIC | Live | -0.5 |
1996 VIC | Live | 3.3 |
1999 VIC | Live | -1 |
2002 VIC | Live | -0.2 |
2002 VIC | Live | -1.7 |
2006 VIC | Live | -2.5 |
2006 VIC | Live | 3.5 |
2006 VIC | Live | -0.5 |
2010 VIC | Live | 0.3 |
2010 VIC | Live | -0.2 |
2010 VIC | Live | -0.7 |
2010 VIC | Live | 0.3 |
2014 VIC | Live | 0 |
2014 VIC | Live | -2 |
2014 VIC | Automated | -2 |
2014 VIC | Automated | 2 |
2014 VIC | Automated | -2 |
2018 VIC | Automated | 4.8 |
2018 VIC | Automated | 0.8 |
2018 VIC | Automated | 4.8 |
1986 QLD | Live | -1.6 |
1986 QLD | Live | -0.6 |
1989 QLD | Live | 0.4 |
1992 QLD | Live | -1.1 |
1995 QLD | Live | -3.4 |
1998 QLD | Live | 1.8 | -4.2
2001 QLD | Live | -2.4 | 3.3
2004 QLD | Live | -2.4 | -1.9
2006 QLD | Live | 0.1 | 0.4
2006 QLD | Live | 1.1 | 0.4
2006 QLD | Live | 0.1 | 0.4
2009 QLD | Live | 0.4 |
2009 QLD | Live | -1.6 |
2012 QLD | Live | 0.4 |
2012 QLD | Live | 1.4 |
2012 QLD | Live | -2.6 |
2015 QLD | Live | -0.3 |
2015 QLD | Automated | -2.3 |
2017 QLD | Automated | 0.3 | -0.7
2017 QLD | Automated | 1.3 | -1.7
2017 QLD | Automated | -3.7 | 3.3
2020 QLD | Automated | 0.1 | 2.9
1986 WA | Live | -5 |
1989 WA | Live | -2.4 |
1993 WA | Live | 1.6 |
1993 WA | Live | 3.6 |
1993 WA | Live | -0.4 |
1993 WA | Live | 3.1 |
1996 WA | Live | 2.3 |
2001 WA | Live | 3.7 | -2.5
2005 WA | Live | 4.7 | -0.6
2008 WA | Live | -0.7 |
2008 WA | Live | -0.2 |
2013 WA | Live | -3.2 |
2013 WA | Live | 0.8 |
2017 WA | Live | -0.6 | 4.1
2017 WA | Automated | 0.4 | 3.1
2017 WA | Automated | 3.4 | 1.9
2021 WA | Automated | 0.7 | 1.8
1989 SA | Live | 0.8 |
1993 SA | Live | 1.2 |
1997 SA | Live | -0.4 |
2002 SA | Live | 4 |
2002 SA | Live | 3 |
2006 SA | Live | 1 |
2010 SA | Live | 0.9 |
2010 SA | Live | 0.4 |
2014 SA | Live | -3.2 |
2018 SA | Automated | -3.9 |
2018 SA | Automated | -3.9 |
1987 Federal | | -1.9 |
1987 Federal | Live | -2.9 |
1990 Federal | Live | -3.9 |
1990 Federal | | -1.4 |
1993 Federal | | 0.7 |
1993 Federal | Live | 3.7 |
1993 Federal | Live | 0.7 |
1996 Federal | Live | 0.8 |
1996 Federal | | -4.2 |
1996 Federal | Live | -2.2 |
1998 Federal | Live | 0.5 | -1.4
1998 Federal | Live | 2.5 | -1.4
2001 Federal | Live | -0.4 | -1.8
2001 Federal | Live | 3.1 | -1.3
2001 Federal | Live | 3.1 | -1.3
2004 Federal | Automated | -0.7 | -0.7
2004 Federal | Live | 2.3 | -0.2
2004 Federal | Live | -1.7 | -0.2
2004 Federal | Live | 3.8 | -0.2
2007 Federal | Live | -2.1 |
2007 Federal | Live | 0.9 |
2007 Federal | | -0.6 |
2010 Federal | Live | -2.3 |
2010 Federal | Live | -1.8 |
2010 Federal | Live | 0.2 |
2010 Federal | Live | -0.8 |
2013 Federal | Automated | -2.5 |
2013 Federal | Live | -0.5 |
2013 Federal | Live | 0.5 |
2013 Federal | Live | 0.5 |
2013 Federal | Automated | -2 |
2013 Federal | | -1.5 |
2013 Federal | Automated | -3.5 |
2016 Federal | Live | -2 |
2016 Federal | Automated | 1 |
2016 Federal | Automated | 0.5 |
2016 Federal | Automated | 1 |
2016 Federal | Automated | 0 |
2019 Federal | Live | -2.9 | 0.9
2019 Federal | Automated | -2.9 | 3.5
2019 Federal | Live | -2.4 | 0.9
2019 Federal | Automated | -2.4 | -0.1
2019 Federal | Automated | -3.4 | -0.1
Average | | -0.2% (134) | +0.2% (35)

Negative values mean the poll underestimated the party's vote share, while positive values mean the poll overestimated it. The number of polls is given in brackets for the average error. Blank method cells indicate polls whose method was not recorded.
As the above table makes clear, there is practically no bias against conservative parties in Australian polling. On average, the overall skew on the Coalition and One Nation vote shares has been minimal, at just -0.2% and +0.2% respectively, with neither being statistically significant (p = 0.3 and p = 0.6). For every 2018 South Australia (-3.4%) or 2019 federal election (-2.8%) where the Coalition was underestimated, there’s a 2018 Victoria (+4.8%) or a 2001 WA (+3.6%) where it was instead overestimated.
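For readers who want to check this kind of claim themselves, here is a rough sketch of the significance test involved: a two-sided test of whether the mean poll error differs from zero. It uses a normal approximation rather than a t-distribution (a reasonable simplification for large samples like the 134 Coalition polls), and the sample errors below are made-up illustrative numbers, not the article’s dataset:

```python
from math import erf, sqrt
from statistics import mean, stdev

def two_sided_p(errors):
    """Two-sided z-test of whether the mean poll error differs from zero.
    Normal approximation in place of a t-distribution; fine for large n."""
    n = len(errors)
    z = mean(errors) / (stdev(errors) / sqrt(n))
    # Standard normal two-tailed p-value via the error function.
    return 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))

# Illustrative poll errors centred near zero (made-up, not the real data)
sample = [-2.5, 1.0, 0.3, -0.8, 2.1, -0.4, 0.9, -1.6, 0.2, 0.7]
print(round(two_sided_p(sample), 2))
```

Errors scattered roughly symmetrically around zero, like the sample above, give a large p-value; a set of errors consistently on one side of zero would drive the p-value towards zero.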
This is similarly reflected in the polling averages for each election:
Election | Coalition (polling average) | One Nation (polling average)
---------|-----------------------------|------------------------------
1988 NSW | -1.8 |
1991 NSW | 0.4 |
1995 NSW | -0.9 |
1999 NSW | 2.8 | -2
2003 NSW | -1.6 | 0.2
2007 NSW | -1.5 |
2011 NSW | 0.9 |
2015 NSW | 0.4 |
2019 NSW | -0.5 | 1.9
1988 VIC | -5.2 |
1992 VIC | -0.5 |
1996 VIC | 3.3 |
1999 VIC | -1 |
2002 VIC | -1 |
2006 VIC | 0.1 |
2010 VIC | -0.1 |
2014 VIC | -0.8 |
2018 VIC | 3.4 |
1986 QLD | -1.1 |
1989 QLD | 0.4 |
1992 QLD | -1.1 |
1995 QLD | -3.4 |
1998 QLD | 1.8 | -4.2
2001 QLD | -2.4 | 3.3
2004 QLD | -2.4 | -1.9
2006 QLD | 0.4 | 0.4
2009 QLD | -0.7 |
2012 QLD | -0.3 |
2015 QLD | -1.3 |
2017 QLD | -0.8 | 0.3
2020 QLD | 0.1 | 2.9
1986 WA | -5 |
1989 WA | -2.4 |
1993 WA | 1.9 |
1996 WA | 2.3 |
2001 WA | 3.7 | -2.5
2005 WA | 4.7 | -0.6
2008 WA | -0.5 |
2013 WA | -1.3 |
2017 WA | 1 | 3
2021 WA | 0.7 | 1.8
1989 SA | 0.8 |
1993 SA | 1.2 |
1997 SA | -0.4 |
2002 SA | 3.5 |
2006 SA | 1 |
2010 SA | 0.6 |
2014 SA | -3.2 |
2018 SA | -3.9 |
1987 Federal | -2.4 |
1990 Federal | -2.7 |
1993 Federal | 1.7 |
1996 Federal | -1.9 |
1998 Federal | 1.5 | -1.4
2001 Federal | 1.9 | -1.5
2004 Federal | 0.9 | -0.3
2007 Federal | -0.6 |
2010 Federal | -1.2 |
2013 Federal | -1.3 |
2016 Federal | 0.1 |
2019 Federal | -2.8 | 1
Average | -0.2% (61) | 0.02% (17)

Negative values mean that an average of final polls underestimated the party's vote share, while positive values mean that an average of final polls overestimated it. The number of poll averages is given in brackets for the average error.
Overall, there is no evidence that voting-intention polling in Australia systematically under-estimates voting intention for conservative or politically-incorrect parties (in fact, One Nation, by far the most “politically incorrect” party split out in polls, is actually slightly overestimated on average). Polls under-estimated the Coalition in just 34 of 61 elections (56%), which is pretty close to what might be expected from pure chance, while they under-estimated One Nation in just 8 of 17 elections (47%, again very close to pure chance). Even at elections where social issues have been prioritised by the right (e.g. “African gangs” in VIC 2018, or to a lesser extent abortion in QLD 2020), polls have not under-estimated support for the conservative parties (VIC 2018, Coalition over-estimated by 3.4%; QLD 2020, LNP slightly over-estimated by 0.1% and One Nation over-estimated by 2.9%) despite “politically incorrect” issues presumably being more salient to the electorate.
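The “close to pure chance” claim can be checked with an exact binomial sign test. The function below is a sketch (two-sided, against a fair 50/50 split):

```python
from math import comb

def binom_two_sided_p(k, n):
    """Two-sided exact binomial test against p = 0.5: the probability of a
    split at least as lopsided as k out of n under a fair coin."""
    tail = min(k, n - k)
    p_one_tail = sum(comb(n, i) for i in range(tail + 1)) / 2**n
    return min(1.0, 2 * p_one_tail)

# Coalition under-estimated in 34 of 61 elections; One Nation in 8 of 17
print(binom_two_sided_p(34, 61))
print(binom_two_sided_p(8, 17))
```

For 8 of 17 the two-sided p-value is exactly 1 (8 is on the more balanced side of an odd-n split), and for 34 of 61 it is roughly 0.44; neither split is anywhere near surprising under pure chance.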
I think it might be a tad hard to claim the shy Tory effect is an issue when Australian voting-intention polls aren’t even biased against conservatives in the first place (what’s the point of “shy Tories” if polls don’t under-estimate Tories?), but maybe there’s evidence for shy Tories in other aspects of Australian polling. How did Australian pollsters do in predicting the results of the same-sex marriage legalisation postal survey?
Polls somewhat over-estimated the Yes vote in the same-sex marriage survey
Pollster | Method | Yes | Error
---------|--------|-----|------
Newspoll | Automated | 63 | +1.4
Essential | Automated | 66.5 | +5.1
YouGov | Automated | 64 | +2.4
Galaxy | Automated | 64 | +2.4
Average | | 64.3 | +2.7

If a poll reported Undecideds, they were split by evenly dividing them between the Yes and No responses.
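The undecided-splitting rule from the table note looks like this in code (the figures are illustrative, not any specific poll):

```python
# The table note's treatment of Undecideds: split them evenly between
# Yes and No. Numbers below are made-up for illustration.
def split_undecided(yes, no, undecided):
    """Reallocate the undecided share evenly to Yes and No."""
    return yes + undecided / 2, no + undecided / 2

yes, no = split_undecided(60.0, 30.0, 10.0)
print(yes, no)  # 65.0 35.0
```

An even split is the simplest assumption; a pollster could instead split undecideds proportionally to the decided responses, which would change the headline numbers slightly.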
On average, polls taken within the final two weeks of the conclusion of the same-sex marriage survey somewhat over-estimated support for legalising same-sex marriage (+2.7%), with
most polls taken over the survey period somewhat under-estimating opposition to same-sex marriage legalisation by varying amounts. Does this demonstrate a shy Tory bias?
Well, not really. First off – keep in mind, this is just the results of a single survey, and as I mentioned above, polling errors for any one election tend to be at least somewhat correlated (so it’s hard to draw conclusions about how polling as a whole is doing from any one election, even if lots of polls were conducted for that election). Secondly, most of the polling for the same-sex marriage survey either used automated methods or a mix of automated and live-interview methods, with little evidence of any difference in the results obtained by varying methods. This makes it unlikely that the polling error was due to respondents systematically misleading pollsters; as mentioned above, at elections where shy-Tory effects occurred or are suspected, automated polling generally produced better results for conservatives than live polling.
Finally, and most importantly, the same-sex marriage survey differs significantly from normal elections in that it was a voluntary survey, conducted over a two-month timeframe. Given that people didn’t have to return their survey forms (and an estimated 20% of registered voters didn’t), it’s entirely possible that the underlying samples were accurate, but pollsters did not or could not weight their samples to be representative of the electorate which actually returned forms.
For example, looking at Dr Bonham’s summary table of the same-sex marriage polls, at least some of the polls simply polled voters who had already voted, which may explain part or all of the error; the table of polls from Wikipedia does suggest that, within the same poll, Yes usually had a bigger lead amongst voters who had already voted than amongst voters who had not yet voted.
In other words, there are various, more mundane explanations for why polls over-estimated the Yes vote in 2017 that don’t involve conservative voters misleading pollsters, or even pollsters under-sampling conservative voters. Furthermore, as I’ve stressed repeatedly, the same-sex marriage survey is just one ‘election’ (with its own set of complicating factors which do not occur at regular elections), and it has to be taken in the context of voting-intention polling more broadly getting the vote shares for conservative parties right.
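To illustrate the turnout/weighting point, here is a toy sketch (with entirely made-up group sizes and Yes shares) of how a poll weighted to the enrolled population can over-state Yes if high-Yes groups return their forms at lower rates than their enrolment share implies:

```python
# Toy sketch of the weighting problem in a voluntary survey. Suppose Yes
# support differs by age group, pollsters weight their sample to the
# enrolled population, but younger (higher-Yes) voters return forms at a
# lower rate. All shares below are illustrative assumptions.
yes_by_group = {"18-34": 75.0, "35+": 55.0}

enrolled_share = {"18-34": 0.3, "35+": 0.7}  # what pollsters weighted to
returned_share = {"18-34": 0.2, "35+": 0.8}  # who actually sent forms back

poll_estimate = sum(enrolled_share[g] * yes_by_group[g] for g in yes_by_group)
actual_result = sum(returned_share[g] * yes_by_group[g] for g in yes_by_group)
print(round(poll_estimate, 1), round(actual_result, 1))  # poll over-states Yes
```

With these numbers, every individual response is honest and the sample is perfectly accurate within each group, yet the weighted poll still over-states Yes by two points, purely because of differential return rates.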
How do automated polling methods compare to live-polling methods in Australia?
Another key aspect of the shy-Tory effect is that live-interview methods of polling (i.e. face-to-face, or live interview by telephone) generally show more skew to the left than automated methods (i.e. online surveys, or interactive voice response (IVR), aka “robopolls”). This makes sense if you think of the shy-Tory effect as an extension of social-desirability bias: if there are people afraid or too shy to admit their opinions and voting intentions to another person, some of their worries may be alleviated when they are surveyed through more impersonal methods.
Hence, one way to ‘rescue’ the shy-Tory hypothesis is to suggest that, maybe, in Australia, a left-bias by live-interview polling is cancelled out or reduced by a right-bias in automated polling. Is that the case?
 | Coalition | One Nation
--|-----------|-----------
Live polling | -0.1% (96) | -0.5% (22)
Automated polling | -0.2% (32) | +1.3% (13)
Difference | -0.1% | +1.8%

Negative values indicate the party was under-estimated, while positive values indicate the party was over-estimated. The number of polls in each average is indicated in brackets. Some polls were not included as their methodology was unclear or included a mix of live and automated methods.
In the case of the Coalition, both live and automated polling methods barely differ, whereas in the case of One Nation live polls very slightly under-estimate the One Nation vote while automated polls slightly over-estimate the One Nation vote. While it might be tempting to point at the One Nation numbers and go “Aha! Automated polls over-estimate One Nation compared to live polls!”, it’s worth noting that we’re working off a very small set of data-points here (just 13 automated polls of the One Nation vote, from just four elections), meaning that such an outcome may well be due to random chance. Furthermore, the result is not statistically significant (i.e. there’s a high probability of us getting an average bias in a random sample at least as extreme as the one we saw, even if there was no actual bias in the underlying polls).
There’s also arguably an issue with simply averaging all of the live/automated polls conducted to see if there’s a difference in whether they over/under-estimate voting intention; it might be that automated polls were only conducted at a few elections where polls of all stripes, live or automated, over-estimated One Nation (a real worry when you’re only working with four elections). If that was the case, then claiming that this definitively shows automated polls over-estimate One Nation would be like a pollster who only polled the 2013 and 2016 federal elections (where most pollsters got really accurate results) claiming to have a lower average error when in reality they simply happened to not poll elections where many pollsters got it wrong.
To examine this possibility, we can simply compare elections where we have both live and automated polls, to see if the pattern still holds:
When we compare live and automated polls conducted at the same election, there is no shy Tory effect
 | Coalition | One Nation
--|-----------|-----------
Live polling | -0.6% | +1.6%
Automated polling | -0.1% | +1.0%
Difference | +0.5% (10) | -0.6% (4)

Negative values indicate the party was under-estimated, while positive values indicate the party was over-estimated. The number of elections in each average is indicated in brackets.
With the Coalition polls, the pattern remains the same: the average bias of live and automated polling barely differs. For One Nation, however, the pattern reverses: now it’s live polls which over-estimate One Nation by more than automated polls (although, again, the difference isn’t statistically significant). This suggests a pretty simple explanation for why automated polling may have appeared to over-estimate One Nation: by luck of the draw, automated pollsters only polled elections where everyone, live or automated, was over-estimating the One Nation vote anyway, while live pollsters polled a wider range of elections, including some where they under-estimated One Nation support.
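The like-for-like comparison above can be sketched as follows: group polls by election, keep only elections with both methods, and average the within-election differences. The poll errors here are illustrative placeholders, not the article’s dataset:

```python
from collections import defaultdict
from statistics import mean

# (election, method, poll error); illustrative placeholder data.
polls = [
    ("A", "Live", 1.5), ("A", "Automated", 2.0),
    ("B", "Live", -0.5), ("B", "Live", 0.5), ("B", "Automated", 0.4),
    ("C", "Live", 2.0), ("C", "Automated", 1.0),
]

# Group errors by election, then by method.
by_election = defaultdict(lambda: defaultdict(list))
for election, method, err in polls:
    by_election[election][method].append(err)

# Within each election polled by BOTH methods, take the automated mean
# minus the live mean, so election-specific swings cancel out.
diffs = [
    mean(methods["Automated"]) - mean(methods["Live"])
    for methods in by_election.values()
    if methods["Live"] and methods["Automated"]
]
print(f"average within-election difference: {mean(diffs):+.2f}")
```

Restricting to elections with both methods is what stops a handful of unusually skewed cycles, polled by only one method, from dominating the comparison.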
Hence, it’s pretty clear that there is basically no shy-Tory effect in Australian polling. From the fact that polls don’t under-estimate conservative/politically-incorrect parties overall, to the fact that there is basically no difference between live and automated polling overall, there is very little evidence for the idea that there is a group of voters who vote for conservatives but are unwilling to tell pollsters about it. More broadly, there doesn’t even seem to be any systematic skew to the left in Australian polling; polls are about as likely to over-estimate the Coalition vote as they are to under-estimate it, while they slightly tend to over-estimate support for One Nation.
I mean, think about it this way. Let’s say you had someone whose job it is to count the number of chickens in a closed coop, and they usually got within +/- 2 chickens, with their errors averaging out to basically 0. Maybe you’d discuss whether they could improve their counts to get closer to the actual number, but it would be kind of ridiculous to claim that they were systematically under-estimating the number of chickens because the chickens were scared of them.

So why is the idea of shy Tories/conservative under-estimation so common?
(this section mixes in a bit more opinion than usual)
Here’s a piece from the 2021 WA state election, discussing famous previous polling errors after the first Newspoll of the campaign (Labor ahead 68-32) was released.
What are the examples used?

Trump’s 2016 presidential win.

The Brexit referendum.

Morrison’s 2019 re-election.

While the first two are a little more common overseas and the third a little more common here in Australia, they all share a remarkable similarity: they are all elections where polling showed the left slightly ahead but where the conservative side won a narrow victory instead.
Don’t get me wrong, it’s not like I think the polls did well in 2019. However, pretty much any time anyone brings up the possibility of big poll errors, the examples cited are almost always ones where the left was over-estimated, despite the fact that there are plenty of counter-examples where it was instead the right which was over-estimated.
You want a big polling error where the right under-performed its polls?
2018 Victorian state election, Labor outperformed its 2pp polling by 3.2%, slightly more than they would under-perform their polls by in 2019.
2017 French presidential election; Macron over-performed his polls against the far-right Marine Le Pen by 4.3% in the second round.
Oh, but maybe “the right lost, but by bigger than expected” isn’t surprising or sensational enough. Maybe the media only likes to bring up examples where the “wrong” side won. Did any of those happen on the left?
2015 Queensland state election; changes in preference flows meant that a small lead for the LNP in final polling became a deficit for them on election day, with Labor winning the 2pp vote and working together with crossbenchers to form government.
2001 Western Australian state election, where the final polls had the race at 50-50 but Labor won by 52.9% on the 2pp (same error size as the 2019 federal election, though in the other direction).
2017 UK general election, where the Labour Party over-performed its polling by 5 points (about 3% on a 2-party basis) and forced Theresa May’s Conservatives, (who had been expected to increase their majority) into minority government and coalition with a minor Northern Irish party.
And of course, federally, there’s always the classic 1993 federal election, where Keating’s Labor beat their polls by about 2% (on a 2pp basis) to unexpectedly retain government.
Despite this, the same examples where conservatives over-performed their polls tend
to get trotted out every time there’s a discussion about not trusting the polls (as a matter of fact, despite being written in 2019 and 2020 respectively, those last two linked pieces cite the Conservative over-performance at the 2015 UK general election instead of the more recent Labour over-performance at the 2017 UK general election as examples of polling failure). This is in spite of the fact that polling errors both in Australia as well as overseas don’t tend to favour right-wing parties, so it’s not like the Trump/Brexit/Morrison errors are representative of polling errors more broadly. Furthermore, in at least some of those cases (Trump 2016, Brexit), the polling errors were fairly small (about 1 – 2% in Trump 2016, about 2% in Brexit), especially when compared to the errors I’ve listed above.
So what gives?
My guess is that the Trump/Brexit/Morrison errors were much more surprising to the people who tend to be the ones writing articles/editorials/programs in which they report both the polls, and also caution us to not assume the polls are going to be exactly correct. However, just because one or two events are surprising and have significant consequences to the people involved doesn’t necessarily mean that they occur all the time or even regularly; it would be like extensive reporting on shark attacks or
nuclear plant meltdowns as if they were a common occurrence.
Continuously repeating examples of the right over-performing their polls, without stopping to consider examples of when polls under-estimated the left, can unduly introduce or reinforce the idea that polls under-estimate the right thanks to the
availability heuristic. This might still be fine if polls actually tended to under-estimate the right (though still problematic, as pollsters tend to change their methods in response to previous failures), but they don’t – not here, not in the USA, not in Europe. As I demonstrate above, there is no shy Tory effect, nor even a systematic under-estimate of conservatives, in Australian polling, and those of us who write or produce content to inform the public about polling should also discuss examples of the left out-performing its polling, in order to avoid creating a misleading impression that polling is biased against conservatives.