A model of the upcoming 2021 Western Australia state election, using inputs such as voting-intention polls and historical performance. We then run 50,000 simulations to forecast the election.
Full technical details of how the model works >>
When the election is over, we’ll update this page with the actual results compared to Meridiem’s predictions, as well as judging how well Meridiem and its various components performed.
See here for how that’s going to happen >>
If you like models and analyses like these, please consider
supporting us on Patreon or donating to us using the PayPal button on the sidebar.
Our WA 2021 forecast below is final. No further updates are expected.
Labor is clearly favoured to win a majority
This is our forecast for the 2-party-preferred (2pp) vote. The probability of Labor’s 2pp vote ending up in each of the above areas is described by the legend above the chart (so for example, there’s a 50% chance that Labor’s 2pp vote ends up in the grey area).
We currently estimate that Labor will win between 60.9% and 70.3% of the two-party-preferred (2pp) vote on election day. This range will likely narrow as election day approaches.
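A range like this can be read straight off the simulation draws as percentiles. A minimal sketch of the idea (the simulated values and the choice of a central 99% interval are illustrative assumptions, not the model's actual code or output):

```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical stand-in for the model's 50,000 simulated Labor 2pp values
labor_2pp = rng.normal(loc=65.6, scale=2.4, size=50_000)

# One way a "between X% and Y%" range could be derived: a central 99% interval
lo, hi = np.percentile(labor_2pp, [0.5, 99.5])
print(f"Labor 2pp estimated between {lo:.1f}% and {hi:.1f}%")
```

As the polls firm up, the spread of the simulated draws shrinks, which is why the quoted range narrows over time.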
Seat totals forecast
The forecasted distribution of seats for each party/grouping. Higher bars indicate outcomes which happened more frequently in our simulations.
Dashed lines drawn at the number of seats needed for a majority.
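The bar heights and headline probabilities are both simple counts over the simulation draws. A toy sketch with made-up seat counts (the distribution below is an illustrative assumption; the majority threshold of 30 seats in WA's 59-seat Legislative Assembly is real):

```python
import numpy as np

rng = np.random.default_rng(1)
MAJORITY = 30  # seats needed for a majority in WA's 59-seat Legislative Assembly

# Hypothetical stand-in for the model's 50,000 simulated Labor seat counts
labor_seats = rng.binomial(n=59, p=0.87, size=50_000)

# The bar heights are simply how often each seat total occurred
totals, counts = np.unique(labor_seats, return_counts=True)

# Headline probabilities are counts too, reported as "N in 100"
p_majority = (labor_seats >= MAJORITY).mean() * 100
print(f"Labor majority in about {p_majority:.0f} of every 100 simulations")
```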
The range of seat estimates for all of the above parties/groupings:

Party        Low  Median  High
Labor         46      51    55
Liberals       1       4     9
Nationals      2       4     5
Greens         0       0     0
One Nation     0       0     0
Others         0       0     0

Labor vs Liberals/Nationals seat margin forecast
The forecasted distribution of (Lib/Nat seats – Labor seats), which is usually a decent measure of the chance of either side forming government.
Higher bars show outcomes which occurred more often in our simulations.

Probability of various outcomes
Numbers may not sum to 100 due to rounding.
1. Labor wins an outright majority: >99 in 100
2. The Liberals win an outright majority: <1 in 100
3. Labor requires the Greens for a majority: <1 in 100
4. The Liberals require the Nationals for a majority: <1 in 100
5. No party or coalition has a majority: <1 in 100
6. Labor holds the most seats: >99 in 100
7. Liberals + Nationals hold the most seats: <1 in 100
8. Neither Labor nor the Liberals/Nationals has a plurality: <1 in 100

The figures above refer to how many times each outcome occurred in every 100 simulations.

How the forecast has changed over time
Seat-by-seat probabilistic forecast
This table shows how often each party/grouping won each electorate in every 100 simulations. Numbers may not sum to 100 due to rounding.
Note: this is not a forecast of the vote share each party is expected to win in each electorate.

Most likely median seats (on a two-party-preferred basis)
If you lined up all the districts from highest Labor 2pp to highest Liberal/National 2pp, the median district would be the district right in the middle of this lineup. If all districts are won by either Labor or the Liberals/Nationals, then the median district effectively decides which side forms government. This is very similar, though not identical, to the American concept of a
tipping-point state (our median district calculation only looks at which district is in the middle of the pack in 2pp terms, without accounting for wins by minor parties/independents).
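Mechanically, the median district is just the middle entry of that 2pp lineup. A toy sketch, with made-up seat names and 2pp values:

```python
# Minimal sketch of the median-district idea: sort districts by Labor 2pp
# and take the one in the middle. Names and values are illustrative only.
districts = {
    "Seat A": 68.2, "Seat B": 63.5, "Seat C": 59.1,
    "Seat D": 55.7, "Seat E": 48.9,
}
ordered = sorted(districts, key=districts.get, reverse=True)  # highest Labor 2pp first
median_district = ordered[len(ordered) // 2]
print(median_district)  # → Seat C
```

In the full model this lineup is recomputed in every simulation, which is how the frequencies in the table below arise.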
The table below shows the ten most likely median districts on a 2pp basis, and how frequently they end up as the median district in our simulations. Note that we estimate that there is about a 3 in 10 chance that the median district will be one of the 49 not listed below.
Burns Beach: 10 in 100
Landsdale: 9 in 100
Darling Range: 9 in 100
Southern River: 9 in 100
Wanneroo: 7 in 100
Balcatta: 7 in 100
Mount Lawley: 5 in 100
Albany: 5 in 100
Pilbara: 5 in 100
Forrestfield: 5 in 100
Download the simulations generated by the model here.

Update (18/Feb/2021): I've fixed a bug which resulted in the Others vote being too high and the "big five" (Labor, Liberal, National, Green, One Nation) vote being too low (as compared to what our vote models say they "should" be). While this doesn't affect the topline forecast by much (the changes are under 0.1%), there are fairly significant seat-by-seat shifts, especially in seats with unusually strong Others candidates.
I opted not to redo the forecast for dates prior to 18/Feb, as that felt like cheating (if I messed up the forecast, that's on me, and the change-in-forecast graph should reflect it). Just be aware, when interpreting the change-in-forecast graph, that the models before and after 18/Feb are slightly different.
I’ve also added the final-two candidates to the simulations output above.
Update (28/Feb/2021): Added the median district table above. I've also opted to round all probabilities written above (e.g. "7.6 in 100") to the nearest whole number (e.g. "8 in 100"); reading through Nate Silver's series on The Story of 2016 and Andrew Gelman's article on incentives for forecasters has convinced me that having the decimal places likely provides a false sense of precision to the forecast.
Also added some info in our simulations download on how many electorates we expect to “call correctly” – i.e. the number of electorates where the party we estimate has the highest chance of winning actually wins. For example, if we estimate Labor has a 40% chance of winning, the Liberals 35% and the Nationals 25%, and Labor wins the seat, that would be a “correct call” (even though the Liberals/Nationals had a combined 60% chance of winning). I personally don’t think that this is a very good way of looking at a probabilistic forecast, but it tends to be how the public and the media judge a forecast, so I’ve included it for informational purposes.
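As a sketch of that tally (the probabilities and winners below are made up for illustration, not the model's actual numbers): the "call" in each seat is the party given the highest forecast probability, checked against the actual winner.

```python
# Toy sketch of the "called correctly" count described above.
forecast = {
    "Seat A": {"ALP": 0.40, "LIB": 0.35, "NAT": 0.25},
    "Seat B": {"ALP": 0.90, "LIB": 0.08, "NAT": 0.02},
    "Seat C": {"ALP": 0.20, "LIB": 0.70, "NAT": 0.10},
}
actual = {"Seat A": "ALP", "Seat B": "ALP", "Seat C": "NAT"}

correct = sum(
    max(probs, key=probs.get) == actual[seat]  # highest-probability party is the "call"
    for seat, probs in forecast.items()
)
print(f"{correct} of {len(forecast)} seats called correctly")  # → 2 of 3
```

Note how Seat A counts as a correct call despite Labor being given only a 40% chance, which is exactly why this metric rewards overconfidence less than it should.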
In addition, I've added what I believe to be a better (if harder to interpret) way of judging forecasts to our simulations package – the Brier score our forecast would get in each simulation. This Brier score is calculated solely off the 2pp in each district – i.e. it uses Meridiem's forecast of which major party (Labor vs Lib/Nat) is likely to win the 2pp in each district, rather than the forecast of which party is likely to win the district overall. The reason is that the Brier score isn't very good at judging forecasts for events which happen very rarely (e.g. a minor party winning a randomly selected district), so calculating a multi-category Brier score which includes the minor parties would make our Brier score look a lot better than it really is.
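A sketch of that two-category Brier score (the forecast probabilities and outcomes below are illustrative assumptions, not the model's actual values):

```python
# Brier score over 2pp outcomes: p is the forecast probability that Labor
# wins the 2pp in a district, and the outcome is 1 if Labor actually does.
forecasts = [0.95, 0.70, 0.40]   # P(Labor wins the 2pp) in three districts
outcomes = [1, 1, 0]             # 1 = Labor won the 2pp, 0 = Lib/Nat won

brier = sum((p - o) ** 2 for p, o in zip(forecasts, outcomes)) / len(forecasts)
print(f"Brier score: {brier:.4f}")  # lower is better; 0 is a perfect forecast
```

Because every district has exactly two 2pp outcomes, each near 50/50, the score avoids the rare-event problem described above.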
(if you don’t understand any of that, that’s alright – I’ll be writing up a piece about judging probabilistic forecasts like ours, due approximately the week before the election)
Edit: That's now up, here.

Update (3/Mar/2021): Fixed a bug with the elasticity model which was affecting Geraldton and a few other seats.
Update (8/Mar/2021): The predicted total number of districts "called correctly" was incorrect (for those familiar with R, my code mixed up the rows and columns of the dataframe); I've fixed that in our simulations download package.