Applying analytics to bet on football games


I recently came across a paper by Lisandro Kaunitz, Shenjun Zhong and Javier Kreiner titled Beating the bookies with their own numbers – and how the online sports betting market is rigged. The paper describes a method the authors used to make profits on football games. It also exposes the practices bookmakers employ to restrict successful betting strategies. There are some interesting techniques used in the paper, and some problems with the strategy that I have identified. We will dig into both.

The first finding is that 1 divided by the average closing odds (1/avg closing odds) for a football game, drawn from a collection of at least 3 bookmakers, is a good proxy for the true probability of the game's outcome. This is important. In order to profit from betting we need to find a signal that gives us confidence our prediction (i.e. our ability to predict the outcome of the game) is better than 1/n, where n is the number of possible outcomes. In this case n is 3, since a game of football has 3 possible outcomes: home win, away win, or draw.

To confirm that 1/avg closing odds is a good proxy for game outcomes, the authors performed an analysis of 10 years of football games in which each game's result and the bookmakers' closing odds were recorded. The analysis begins by expressing the average closing odds for each outcome as a consensus probability between 0 and 1. This is calculated as follows:

\text{consensus probability} = \frac{1}{\text{average closing odds}}

They then took the consensus probabilities for each outcome and grouped them into bins of equal width 0.0125 between 0 and 1. For each bin they calculated the actual probability of each game outcome by taking the frequency of games that resulted in that outcome and dividing it by the total number of games in the bin. They then compared each bin's actual probability for each outcome to its consensus probability. To avoid sparse results, each bin had to contain a minimum of 100 games.
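To make this concrete, here is a minimal sketch of that calibration check in Python. The DataFrame layout (a `consensus_prob` and a `won` column, one row per game/outcome pair) is my assumption, not the paper's code:

```python
import numpy as np
import pandas as pd

def calibration_table(df, bin_width=0.0125, min_games=100):
    """Compare consensus probabilities to observed outcome frequencies.

    Assumes `df` has one row per (game, outcome) pair with:
      consensus_prob -- 1 / average closing odds for that outcome
      won            -- 1 if the outcome occurred, else 0
    """
    bins = np.arange(0, 1 + bin_width, bin_width)
    df = df.assign(bin=pd.cut(df["consensus_prob"], bins))
    grouped = df.groupby("bin", observed=True)["won"].agg(["mean", "count"])
    # Drop sparse bins, mirroring the paper's minimum of 100 games per bin.
    return grouped[grouped["count"] >= min_games].rename(
        columns={"mean": "actual_prob", "count": "n_games"}
    )
```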

The hope here is that the consensus probability for each bin and outcome agrees with the actual probability of that bin's game outcomes. That is indeed what they found: there is a strong linear relationship between the consensus probability and the actual probability of game outcomes. Here is a reproduction of that linear relationship.

This chart is convincing. It suggests the consensus probability is a good predictor of the actual probability of game outcomes. However, we shouldn't take it at face value. Let's put it to the test and see how we do predicting the outcome of each game when there is a clear winner. A clear winner is a game where one outcome has a higher consensus probability than the other two.
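Such a test is straightforward to express. The sketch below is my own, assuming the consensus probabilities are held in an (n_games, 3) array ordered home win, draw, away win, with results encoded as the index of the outcome that occurred:

```python
import numpy as np

def clear_winner_accuracy(consensus, results):
    """Predict each game as the outcome with the highest consensus probability.

    consensus -- array of shape (n_games, 3): consensus probabilities for
                 home win, draw, away win
    results   -- array of shape (n_games,): index (0, 1, 2) of the
                 outcome that actually occurred
    """
    top_two = np.sort(consensus, axis=1)[:, -2:]
    # A "clear winner" exists when the best outcome strictly beats the rest.
    clear = top_two[:, 1] > top_two[:, 0]
    predictions = consensus.argmax(axis=1)
    return np.mean(predictions[clear] == results[clear])
```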

I ran a simulation over the historic data and found that, of 479,388 games, 478,387 had a clear winner. For those games I was able to predict the outcome 52% of the time. Since a football game has 3 outcomes, the probability of predicting the winner with no information is 1/3, or 0.33. Our accuracy is 57% better than a random guess, which is good. But there is a catch: betting accuracy does not translate into betting profits. Profits depend on the odds available to us, and odds often return less than our initial stake. In other words, we risk more than we can make. Let's walk through an example to understand why this is a problem.

If the average decimal odds on the 52% of bets we win were 1.8, then for each $100 bet we would win $80. On the other 48% we lose $100 per bet. Over 100 games we would make $4,160 and lose $4,800, leaving us with a net loss of $640.
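We can check that arithmetic directly:

```python
stake, odds, win_rate, n_games = 100, 1.8, 0.52, 100

wins = win_rate * n_games              # 52 winning bets
profit_per_win = stake * (odds - 1)    # $80 profit at decimal odds of 1.8
total_won = wins * profit_per_win      # 52 * $80  = $4,160
total_lost = (n_games - wins) * stake  # 48 * $100 = $4,800
print(total_won - total_lost)          # -640.0: accurate, yet unprofitable
```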

So where do we go from here? The consensus probability is a good predictor of a game's outcome. However, recall that it is derived from the bookmakers' own odds (1/avg closing odds): the more certain the outcome, the shorter the odds, and the less we can profit.

This leads us to the second finding. To increase our ability to profit we need to find favourable conditions, where the maximum odds offered by any bookmaker imply a probability lower than the consensus probability. More specifically, we want to find the outcome whose payoff, a combination of the consensus probability and the maximum odds, is the greatest. The formula, taken from the paper, is as follows:

\max(\text{payoff}) = \text{consensus probability} \times \text{maximum odds} - 1

The authors introduce a margin term to control the minimum payoff. The revised formula is:

\max(\text{payoff}) = (\text{consensus probability} - \text{margin}) \times \text{maximum odds} - 1

The margin controls the minimum spread that must exist between the average odds and the maximum odds. As the margin increases, the number of games we can bet on decreases, because large spreads are outliers in the distribution of odds and by definition rare.
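Here is a minimal sketch of that selection rule. The function name and dict layout are mine, not the paper's:

```python
def best_bet(consensus, max_odds, margin=0.05):
    """Return the outcome with the highest expected payoff, or None.

    consensus -- dict of outcome -> consensus probability (1/avg odds)
    max_odds  -- dict of outcome -> best odds offered by any bookmaker
    """
    payoffs = {
        outcome: (consensus[outcome] - margin) * max_odds[outcome] - 1
        for outcome in consensus
    }
    outcome = max(payoffs, key=payoffs.get)
    # Only bet when the margin-adjusted payoff is positive.
    return outcome if payoffs[outcome] > 0 else None
```

With a margin of 0.05, this returns an outcome only when some bookmaker's odds exceed the consensus by enough to clear the 5% hurdle; otherwise no bet is placed on that game.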

The next step is to choose an appropriate margin. To do that the authors ran a series of simulations with margins between 0.01 and 0.1. After analysing the results they settled on 0.05, which produced the most profit with the largest number of games to bet on.

They back-tested the strategy by placing $50 bets on each game that delivered a payoff greater than 0. The back-test produced 44.4% accuracy and yielded an average return of 3.5% per bet! The strategy bet on 56,435 games over 10 years and returned an overall profit of $98,865. The results of the simulation have been reproduced below.
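Mechanically, the flat-stake back-test is just a running sum over the settled bets. A sketch, assuming we already have the chronological list of bets the strategy selected:

```python
def flat_stake_backtest(bets, stake=50):
    """Accumulate profit from a sequence of settled bets.

    bets -- iterable of (won, odds) tuples for each game the strategy
            selected, in chronological order
    Returns the running bank balance after each bet.
    """
    balance, history = 0.0, []
    for won, odds in bets:
        balance += stake * (odds - 1) if won else -stake
        history.append(balance)
    return history
```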

At first glance these results look compelling. However, there are some problems we need to explore.

The first problem I noticed is the accumulated losses at the beginning of the strategy. If we zoom in on the first 250 games we see a streak of losses. Under the original betting conditions our bank balance would at one point have been negative $800. And that's not the worst-case scenario.

If we had started betting 5,000 games into the period we would have been down around $10,000 on $50 bets. That's a very tough situation to be in. Would you have continued with the strategy? I'm confident I would have ceased betting.

The problem arises because the staking strategy (the amount we bet) assumes we can always cover our losses. This is unrealistic and over the long run leads to gambler's ruin! There are a number of staking strategies we could use to minimise this problem, but they won't help, because there is a more serious problem we explore later.

The second problem is that the strategy assumes we live in a perfect world. Do we really believe we can bet on 56,435 games over ten years and not miss any? That is on average ~16 bets a day, and it assumes games are uniformly distributed, which we know is not true. There will be peak periods of activity that could limit our capacity to place bets: holiday periods, weekends and while we are asleep, to name a few. In fact the authors experienced this problem first hand. When they started live betting the strategy, they found 30% of the bets identified by the strategy could not be placed. So they went back and simulated this uncertainty by randomly dropping 30% of the favourable games. In the paper they stated the strategy was still profitable but did not provide results. So let's simulate what this uncertainty would do to the results. We generated 1,000 sample runs placing $50 bets while randomly dropping 30% of the games the strategy identified.
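The dropout simulation itself is a simple Monte Carlo. A sketch, assuming `profits` is a NumPy array of the per-bet profit or loss from the full back-test:

```python
import numpy as np

def simulate_dropped_bets(profits, drop_rate=0.3, n_runs=1000, seed=0):
    """Monte Carlo over random unavailability of favourable games.

    profits -- array of per-bet profits/losses from the full back-test
    Returns the final profit of each simulated run.
    """
    rng = np.random.default_rng(seed)
    finals = []
    for _ in range(n_runs):
        # Keep each bet with probability 1 - drop_rate (70% here).
        kept = rng.random(len(profits)) > drop_rate
        finals.append(profits[kept].sum())
    return np.array(finals)
```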

The result is we now have 39,504 games to bet on, the average return is still 3.5% and our profit is on average $69,391. We shrunk our profit by roughly 30%, which is to be expected since we dropped 30% of the games the strategy told us to bet on. That's not the real problem here. The real problem is the wide band in the chart above, which represents how confident we should be in this result: the wider it is, the less confident we should be. Remember, the world is not perfect. We cannot expect to place every available bet, so we need to include this uncertainty in the back-test. We do that by running many simulations (in this case 1,000) that each randomly drop 30% of the games, representing the games we should have bet on but couldn't for reasons outside our control.

In this particular simulation the standard deviation of the final profit is $8,445. Since the final profit is close to normally distributed, we can use the empirical rule to approximate the 95% confidence interval, which is between $52,501 and $86,281. That means we can be 95% confident the final profit will fall between these figures, assuming all conditions stay the same.
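As a sanity check, the empirical rule places roughly 95% of a normal distribution within two standard deviations of the mean:

\$69{,}391 \pm 2 \times \$8{,}445 = [\$52{,}501,\ \$86{,}281]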

Now let's put this into real-life terms. Placing bets on 3,950 games every year amounts to a part-time job for many people. If we could earn $50,000 per year working part time, we would earn $500,000 after 10 years. To reach this figure using this strategy and be 95% confident in the result, we need to increase the size of our bets to no less than $290 and up to $476 per game. The problem, of course, is that larger bet sizes lead to larger losses, particularly in this data set where there are long losing streaks and the strategy needs time to recover. How bad could these losing streaks be while earning our part-time wage with 95% confidence? We would need to keep betting after incurring a loss of $11,089 within the first 100 games. That is down over one fifth of the wage we intend to make in the first year. Here is the simulation using $476 per bet.
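For reference, those stake sizes follow from scaling the $50-stake confidence interval up to the $500,000 target:

\$50 \times \frac{500{,}000}{52{,}501} \approx \$476, \qquad \$50 \times \frac{500{,}000}{86{,}281} \approx \$290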

The third problem, and the most serious, relates to the very first finding in the paper. If you look closely at the first graph you might notice a discrepancy. The home and away win consensus probabilities range between 0 and 1, meaning the distribution of average home and away win odds offered by bookmakers covers a broad range (remember, the consensus probability is derived from the average odds). The draw consensus probabilities (red dots) tell a very different story: they don't go past 0.4! We can see this more clearly with a CDF plot of consensus probabilities for each outcome.
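A sketch of how such a plot can be produced, assuming per-outcome arrays of consensus probabilities:

```python
import numpy as np
import matplotlib.pyplot as plt

def plot_consensus_cdfs(consensus):
    """Empirical CDF of consensus probabilities for each outcome.

    consensus -- dict mapping 'home win', 'draw', 'away win' to an
                 array of per-game consensus probabilities
    """
    for outcome, probs in consensus.items():
        xs = np.sort(probs)
        ys = np.arange(1, len(xs) + 1) / len(xs)  # cumulative fraction
        plt.step(xs, ys, label=outcome)
    plt.xlabel("consensus probability")
    plt.ylabel("cumulative fraction of games")
    plt.legend()
    plt.show()
```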

Almost 100% of draw consensus probabilities are < 0.4, while home and away probabilities are distributed between 0 and 1. Skew between classes of data can be very problematic when it is not accounted for in models. In this particular case the skew has a significant impact on the strategy's predictive power for games that end in draws. Recall that the criterion for betting is derived from the following formula:

\max(\text{payoff}) = (\text{consensus probability} - \text{margin}) \times \text{maximum odds} - 1

The problem is that the spread between the average draw odds and the maximum draw odds has to be significantly larger than the spread for home or away outcomes before the draw can deliver the highest payoff for a game, because the consensus probabilities for draws are almost always lower than those for home and away outcomes. An analysis of the data demonstrates this: only 633 of a total 479,388 games have a higher consensus probability for the draw than for the other outcomes, which means the strategy will rarely pick a draw. Yet we know empirically, and by looking at the data, that many games end in draws. We can show this by breaking down the prediction accuracy of the simulation we've recreated. The strategy does a terrible job of predicting games that end in draws.

|             | Home Win | Draw    | Away Win |
|-------------|----------|---------|----------|
| # of Games  | 216,878  | 119,793 | 142,717  |
| % Predicted | 78.47%   | 1.36%   | 43.28%   |
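The 633 figure above is cheap to verify. A sketch, again assuming an (n_games, 3) consensus array ordered home win, draw, away win:

```python
import numpy as np

def draw_dominant_count(consensus):
    """Count games where the draw has the highest consensus probability.

    consensus -- array of shape (n_games, 3) ordered as
                 (home win, draw, away win)
    """
    return int(np.sum(consensus.argmax(axis=1) == 1))
```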

We can further demonstrate the impact this has on the overall strategy by running a simulation on the games that end in draws only.

Ouch! So what can we conclude from these findings? Can we leave our day job and become professional tipsters? I don’t think that would be wise.

While the consensus probability provides a good signal for home and away outcomes, it fails for draws. Perhaps the distribution of draw odds simply reflects betting patterns: people prefer to bet on home and away outcomes, which results in very little movement in the distribution of draw odds.

I think a more plausible theory is that the imbalance in each outcome's distribution is manufactured intentionally by the bookmakers. They may keep the movement in draw odds to a narrow range in order to limit the signal draw odds provide. How can they do this? By adjusting the home and away odds only, as a means of balancing their overall risk. This would allow them to hide one third of the signal that the act of betting generates. Either way, to improve on this strategy the imbalance in predictive power needs to be addressed, to limit the risk of losing all of our capital in the event of a streak of draws. This is important even if it means giving up some of our overall predictive power.

PS – I am not a gambler and this article does not constitute betting advice or condone gambling 🙂
