How a bad year of rugby led to even more sophisticated approaches at AP.

No matter how you look at it, 2018 was a bad year for predicting Rugby results at AP. Coming off a great 2017 season, confidence was high; however, our predictions for the year got off to a bad start that we never recovered from, despite a stronger performance over the last two-thirds of the season.

Consequently, we needed to understand why this happened and use some sophisticated science to answer that question confidently before the start of the 2019 season. Data is both AP’s greatest strength and its greatest weakness: we use large amounts of it to build a customised model for every team in the competition, updated every week. However, when we think we have found something that might improve one of these models, we then have to wait several weeks to collect enough new data to test whether it makes a practical, positive difference to the tips we provide our punters.

To test whether we could improve on the 2018 season, we needed a way to evaluate our models, and their potential improvements, quickly and over a lot of data. So we created our own ‘back-testing script’, which lets us make a change to a model and then re-apply it to every game in the 2018 season. This took some serious effort; for example, check out this bit of code:
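What follows is a minimal sketch of the idea rather than the production snippet: the team list, statistic names, DataFrame columns and the Poisson regression stand-in are all assumptions.

```python
from itertools import product

import pandas as pd
from sklearn.linear_model import PoissonRegressor

def fit_model(history: pd.DataFrame, target: str) -> PoissonRegressor:
    """Fit one count-statistic model (e.g. passes or kicks) from a team's prior matches."""
    features = history.drop(columns=[target, "team", "round"])
    return PoissonRegressor().fit(features, history[target])

def backtest_models(results: pd.DataFrame, teams, stats, rounds) -> dict:
    """Re-fit one model per (team, statistic, round), using only matches played before that round."""
    models = {}
    for team, stat, rnd in product(teams, stats, rounds):
        history = results[(results["team"] == team) & (results["round"] < rnd)]
        if len(history):
            models[(team, stat, rnd)] = fit_model(history, target=stat)
    return models
```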

In the production script, just eight lines of code create over 4000 models, which in this case are used to predict specific in-match statistics for each team, such as how many times they might pass or kick the ball.

Using this script, we were able to systematically test questions to improve each model (one of these is sketched just after this list), such as:

  • How many years of data should we be using to predict if a team will win their next match?
  • Does including specific players increase the model’s performance for every team?
  • Are there certain home grounds which are more difficult for traveling teams to win on?
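The first question, for instance, comes down to a simple sweep: re-run the back-test with one, two, three or more seasons of history and keep whichever setting predicts 2018 best. A minimal sketch, assuming a hypothetical run_backtest callable rather than our real pipeline:

```python
from typing import Callable

def best_history_length(run_backtest: Callable[[int], float], max_seasons: int = 5) -> int:
    """Hypothetical sweep: run_backtest(n) re-fits every model on the most recent
    n seasons and returns the back-tested 2018 win-prediction accuracy."""
    scores = {n: run_backtest(n) for n in range(1, max_seasons + 1)}
    return max(scores, key=scores.get)
```

The other questions follow the same pattern: toggle a factor (a specific player, a home-ground effect), re-run the back-test, and keep the change only if the results improve.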

Cycling through these questions allowed us to dramatically improve every team’s model. One of the 200 factors we consider for every team is the betting market’s odds of that team winning. This lets us compare the chance our model gives a team of winning with the chance implied by the betting market. As a result, we can pinpoint both favorites and underdogs who offer the best value for our subscribers.
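As a rough illustration of that comparison (the decimal odds and probabilities below are made-up numbers, not our model’s output):

```python
def implied_probability(decimal_odds: float) -> float:
    """The win probability a bookmaker's decimal price implies (ignoring their margin)."""
    return 1.0 / decimal_odds

def value_edge(model_prob: float, decimal_odds: float) -> float:
    """Positive edge means our model rates the team more likely to win than the market does."""
    return model_prob - implied_probability(decimal_odds)

# e.g. the model says 60% but the market offers $2.10 (about 47.6% implied)
print(round(value_edge(0.60, 2.10), 3))  # -> 0.124
```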

Whilst we can’t say we are grateful quite yet for the lessons of the 2018 Rugby season, we have definitely invested heavily in how we predict these types of sporting outcomes. And when the same techniques are applied to Rugby League (18.8% ROI) and Horse Racing (we correctly picked the winner of the Melbourne Cup), AP subscribers are set for a bumper year!