World Cup Follow-Up: Update of Winning Probabilities and Betting Results
Find out Etienne’s initial predictions by visiting last week’s World Cup blog post.
The World Cup is halfway through: the group phase is over, and the knockout phase is beginning. Let’s update the winning probabilities for the remaining teams, and analyze how our classifier performed on the group-phase matches.
From the 32 initial teams, 16 are qualified for the knockout phase:
There have been some surprises: of our 10 favorite teams, 3 have been eliminated (Portugal, England, and, most surprisingly, Spain). But most of the main teams are still there.
Using our classifier, we again compute the winning probabilities of each team. To do so, we update the team features to include the latest matches (that is, we update the Elo ratings and the goal-average features), and then run 100,000 Monte Carlo simulations of the World Cup starting from the round of 16. Here are the probabilities we obtained:
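In outline, the simulation step can be sketched as follows in the Wolfram Language. The Elo ratings below are made-up placeholders, and winProbability is a simple Elo-style formula standing in for the actual classifier; only the round-of-16 bracket is the real draw:

    (* made-up illustrative ratings; the real model uses richer features *)
    elo = <|"Brazil" -> 2240, "Chile" -> 2060, "Colombia" -> 2080, "Uruguay" -> 2000,
            "France" -> 2080, "Nigeria" -> 1870, "Germany" -> 2200, "Algeria" -> 1850,
            "Netherlands" -> 2160, "Mexico" -> 2000, "Costa Rica" -> 1910, "Greece" -> 1870,
            "Argentina" -> 2150, "Switzerland" -> 1950, "Belgium" -> 2030, "USA" -> 1940|>;

    (* Elo-style stand-in for the classifier's probability that a beats b *)
    winProbability[a_, b_] := 1./(1. + 10.^((elo[b] - elo[a])/400.));

    (* play one knockout tie: pick a winner at random with the model's probabilities *)
    playMatch[{a_, b_}] := RandomChoice[{winProbability[a, b], winProbability[b, a]} -> {a, b}];

    (* simulate a full bracket: pair the remaining teams round by round *)
    simulateTournament[teams_] := First@NestWhile[Map[playMatch, Partition[#, 2]] &, teams, Length[#] > 1 &];

    (* the actual round-of-16 draw, in bracket order *)
    roundOf16 = {"Brazil", "Chile", "Colombia", "Uruguay", "France", "Nigeria", "Germany", "Algeria",
                 "Netherlands", "Mexico", "Costa Rica", "Greece", "Argentina", "Switzerland", "Belgium", "USA"};

    (* estimate each team's winning probability from 100,000 simulated World Cups *)
    winners = Table[simulateTournament[roundOf16], {100000}];
    Reverse@SortBy[Tally[winners], Last]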
Again, Brazil is the favorite, but now with a 32% chance to win. After its impressive victory against Spain, the Netherlands’ odds jumped to 23.5%: it is now the second favorite. Germany (21.6%) and Argentina (8.6%) follow. According to our model, there is thus an 86% chance that one of these four teams will be champion.
Let’s now look at the possible final matches:
The most probable finals are Brazil vs. Netherlands (21.5%) and Germany vs. Netherlands (16.7%). A Brazil vs. Germany final is, however, impossible, since these teams are on the same side of the tournament tree. Here is the most likely tournament tree:
In the knockout phase, the position in the tournament tree matters: teams on the same side as Brazil and Germany (such as France and Colombia) will have a hard time reaching the final. On the other hand, the United States, which is on the weaker side of the tree, has about a 6% chance of reaching its first World Cup final.
Finally, let’s see how far in the competition teams can hope to go. The following plots show, for the 9 favorite teams, the probability of reaching (in blue), and of being eliminated at (in orange), each stage of the competition:
We see that Germany has a 35% chance to be eliminated at the semi-finals stage (probably against Brazil), while France and Colombia will most likely be stopped at the quarter-finals stage (against Germany and Brazil, respectively).
Let’s now analyze how our classifier performed on the group-phase matches. Forty-eight matches were played, and it correctly predicted the outcome of about 62.5% of them:
This is close to the 59% accuracy obtained on the test set in the previous post. Accuracy is an interesting property to measure, but it does not reveal the full power of the classifier (we could have obtained a similar accuracy by always predicting a victory for the higher Elo-ranked team). It is more interesting to look at how reliable the probabilities computed by the classifier are. For example, let’s compute the likelihood of the classifier on past matches, that is, the probability it attributed to the sequence of actual match outcomes, P(outcome1) × P(outcome2) × … × P(outcome48):
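As a minimal sketch of this computation, with made-up probabilities for three matches (each association holds the probabilities the model assigned to the possible outcomes of one match):

    (* made-up model probabilities for three matches *)
    predictions = {
      <|"HomeWin" -> 0.55, "Draw" -> 0.25, "AwayWin" -> 0.20|>,
      <|"HomeWin" -> 0.40, "Draw" -> 0.30, "AwayWin" -> 0.30|>,
      <|"HomeWin" -> 0.25, "Draw" -> 0.30, "AwayWin" -> 0.45|>};
    outcomes = {"HomeWin", "AwayWin", "Draw"};  (* the outcomes that actually occurred *)

    (* likelihood: the product of the probabilities given to the actual outcomes *)
    likelihood = Times @@ MapThread[#1[#2] &, {predictions, outcomes}]

    (* with 48 matches the product becomes tiny, so the log-likelihood is numerically safer *)
    logLikelihood = Total[Log@MapThread[#1[#2] &, {predictions, outcomes}]]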
This value can be compared to the likelihood computed from commonly believed probabilities: bookmakers’ odds. Bookmakers tune their odds so that they always win money: if $3 has been bet on A, $2 on B, and $5 on C, they will set the odds (the amount you get back if you bet $1 on the corresponding outcome) for A, B, and C a bit under $10/$3 ≈ $3.33, $10/$2 = $5, and $10/$5 = $2, respectively (where $10 = $3 + $2 + $5 is the total amount bet). This way, whatever the outcome, they pay out slightly less than they collected.
Therefore, if we invert the odds, we can obtain the outcome probabilities believed by bettors. So, can our classifier compete with this “collective intelligence”?
We scraped the World Cup betting odds as they were right before each match from http://www.oddsportal.com/soccer/world/world-cup-2014/results and converted them to probabilities. We obtained a likelihood of 1.33209 × 10⁻²⁰, which is more than five times smaller than the likelihood of our classifier: there is thus about an 85% chance that our probabilities are “better” than the bookmakers’. The simple fact that our classifier’s probabilities compete with the bookmakers’ is remarkable, as we only used a few simple features to create the classifier. That it probably outperforms their odds is even more surprising: we might even be able to make money!
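For a single match, the conversion can be sketched as follows (the decimal odds here are made up): inverting the odds gives numbers summing to slightly more than 1 because of the bookmaker’s margin, so we normalize them:

    odds = {2.10, 3.40, 3.80};      (* made-up decimal odds: home win, draw, away win *)
    raw = 1/odds;                   (* {0.476, 0.294, 0.263}: sums to about 1.03 *)
    margin = Total[raw] - 1         (* the bookmaker's cut, about 3% here *)
    probabilities = raw/Total[raw]  (* normalized outcome probabilities *)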
To test this, let’s imagine that we bet $1 on every match using the classifier (setting the value of UtilityFunction as explained in the previous post). Here are the matches we would have gotten right, and their corresponding gains:
The classifier only got 38% of its bets right. However, it often chose to bet on the underdog in order to increase its expected gain. In the end, we made a profit of $16, which is about 33% of our stake! Have we just been lucky? To answer this, we compute the probability distribution of gains (through Monte Carlo simulations) according to both our probabilities and the bookmakers’:
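Both steps can be sketched as follows with made-up numbers (not the actual World Cup data): the betting rule puts $1 on the outcome with the highest expected payoff p × odds, and the gain distribution is then estimated by sampling each match from whichever probabilities are assumed to be the true ones:

    (* made-up data for three matches: model probabilities and bookmakers' decimal
       odds for the outcomes {home win, draw, away win} *)
    modelProbabilities = {{0.50, 0.30, 0.20}, {0.25, 0.30, 0.45}, {0.40, 0.35, 0.25}};
    bookmakerOdds = {{1.80, 3.50, 4.50}, {3.20, 3.10, 2.30}, {2.60, 3.20, 2.90}};

    (* betting rule: put $1 on the outcome maximizing the expected payoff p*odds *)
    chosenBets = MapThread[First@Ordering[#1 #2, -1] &, {modelProbabilities, bookmakerOdds}];

    (* one simulated set of matches: sample each outcome from the assumed true
       probabilities; a winning bet gains odds - 1 dollars, a losing bet loses $1 *)
    simulateGain[trueProbabilities_] := Total@MapThread[
       Function[{p, odds, bet},
        If[RandomChoice[p -> {1, 2, 3}] == bet, odds[[bet]] - 1, -1]],
       {trueProbabilities, bookmakerOdds, chosenBets}];

    (* distribution of gains if our model probabilities are the true ones *)
    gains = Table[simulateGain[modelProbabilities], {10000}];
    Mean[gains]
    Histogram[gains]

Running the same simulation with the probabilities implied by the bookmakers’ odds in place of modelProbabilities gives the second distribution to compare against.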
The average profit, according to our model, is
Thanks for the update! It would be interesting to update once more after the end of the competition, to see how the classifier performed overall. ;)
I guess this could be extended to other tournaments as well, e.g. the NBA or the NFL?
Amazing prediction model! How long did this take you? I noticed that you said you ran 100k simulations to predict this. Also, what are the hardware specs?
Hi Alex,
Thanks! All the computations were done on a recent laptop (4 cores with hyperthreading).
Making the classifier took 20 seconds (the time for Classify[trainingset] to output its result!).
The 100,000 simulations (that is, 1.6 million matches) took about 5 minutes to run (although this has not really been optimized, since it was not necessary).
The “guts” parameter is missing!!!! Argentina will win by a landslide.
I really hope the model and the statistics don’t work this time. Honestly, I think the best team so far has been the Netherlands, but Colombia is a great team with higher chances (in my estimation) than this model predicts.
… and Chile!
Could you compare, in a future article, your model vs. the one from FiveThirtyEight? They have nailed 100% of the results using ESPN SPI numbers:
Here is the link: http://fivethirtyeight.com/interactives/world-cup/
I don’t think they nailed 100%; they updated after every stage. Do you have a copy of their first predictions? That would be interesting.
Wondering how much the model could improve with a “comfortable in heat/humidity” variable; maybe use the latitude of the country as a surrogate?
Historically, the host team has won almost 1/3 of the World Cups… thus increasing the odds for Brazil.
Please don’t use history as a parameter. It may be something you’d add at first glance, but it does nothing to clarify the picture.
The thing to look at is how each team has been playing lately (the parameter would be the point from which we start considering the games); maybe also a little of the coach’s experience, giving preference to his most recent games; and, as an extra, the time spent practicing as a team.
Umm… Colombia and Chile are playing better than Brazil…
I think a lot of us could predict a Brazil win without any model :)
I don’t completely understand the plots with the blue/orange lines. Can someone explain? Thanks in advance.
The blue lines show the probability for a team to reach a given stage. For example, France had a probability of 0.75 of reaching the quarter-finals. The orange lines show the probability for a team to reach a given stage and be eliminated there: they show when teams are expected to leave the tournament. France was expected to leave at the quarter-finals, while for Germany it is the semi-finals.
This has been right about the first two games so far (Brazil beat Chile, Colombia beat Uruguay).
Keep in mind that bookmakers are NOT trying to make accurate predictions. They try to optimize their INCOME. This means getting bettors to invest about equal amounts on the two teams and then taking a small cut from each bet. Bookmakers don’t, on average, make money by actually betting against “people”.
That is why, when you compare your predictions to the bookmakers’, some bets appear cheap and some expensive.
Looking at the pie chart, your model is amazingly accurate. Only the Brazil vs. Mexico final (3.1%) is impossible after the first knockout games.
Have you updated your model now that the first knockout round is complete?
Is there going to be an update of the winning probabilities after the latest matches?
Man, this is working like magic. It basically guessed the semifinals. Impressive indeed.
Are you planning to come up with any more updates before the semis?
Looks like statistics wins! Now that we are in the semis, all 4 top teams your model predicted are in.
What a detailed statistical analysis! I made a simpler analysis and predicted the Germans to be the winners on my site.
Ah, you had better consider a model improvement after Brazil 1 x 7 Germany: statistically unpredictable, but within the realm of the possible.
Could we see the source code of your app? I’m especially interested to see how you scraped the internet for data and how you used Classify.
Hi, I was wondering if you could explain how you managed to compute the probability distributions of gains for your model and for the bookmakers’ model when you don’t know the true probabilities of the match outcomes. What probabilities do you assume to be “correct” when running the Monte Carlo simulations? Thanks!
Hi Jack,
The probabilities for our model are directly given by the classifier using ClassifierFunction[…][data, “Probabilities”].
For the bookmakers’ model, we used the bookmakers’ odds (they effectively represent probabilities, as explained above).
Then we ran two Monte Carlo simulations (we just randomly sampled each match many times in order to get statistics and plot the histograms).
Thanks,
Etienne