# World Cup Follow-Up: Update of Winning Probabilities and Betting Results

June 26, 2014 — Etienne Bernard, Lead Architect, Machine Learning

You can find Etienne's initial predictions in last week's World Cup blog post.

The World Cup is halfway through: the group phase is over, and the knockout phase is beginning. Let's update the winning probabilities for the remaining teams and analyze how our classifier performed on the group-phase matches.

Of the 32 initial teams, 16 have qualified for the knockout phase:

There have been some surprises: of our 10 favorite teams, 3 have been eliminated (Portugal, England, and, most surprisingly, Spain). But most of the top teams are still there.

Using our classifier, let's recompute the winning probabilities of each team. To do so, we update the team features to include the latest matches (that is, we update the Elo ratings and the goal-average features), and then we run 100,000 Monte Carlo simulations of the World Cup starting from the round of 16. Here are the probabilities we obtained:
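The simulation step can be sketched as follows. This is a minimal Python sketch, not the actual Wolfram Language code: the bracket is the real 2014 round of 16, but the ratings and the Elo-style win-probability formula are illustrative stand-ins for the classifier's pairwise probabilities.

```python
import random

random.seed(1)

# The actual round-of-16 bracket, listed so adjacent teams play each other.
bracket = ["Brazil", "Chile", "Colombia", "Uruguay",
           "France", "Nigeria", "Germany", "Algeria",
           "Netherlands", "Mexico", "Costa Rica", "Greece",
           "Argentina", "Switzerland", "Belgium", "USA"]

# Illustrative ratings standing in for the classifier's output.
rating = {"Brazil": 2200, "Germany": 2150, "Netherlands": 2100,
          "Argentina": 2050, "France": 2000, "Colombia": 1980,
          "Belgium": 1950, "Chile": 1950, "Uruguay": 1900,
          "Mexico": 1890, "USA": 1850, "Switzerland": 1840,
          "Greece": 1800, "Nigeria": 1790, "Algeria": 1780,
          "Costa Rica": 1770}

def p_win(a, b):
    """Elo-style probability that team a beats team b (stand-in model)."""
    return 1.0 / (1.0 + 10 ** ((rating[b] - rating[a]) / 400.0))

def simulate_cup(teams):
    """Play out one knockout bracket round by round; return the champion."""
    while len(teams) > 1:
        teams = [a if random.random() < p_win(a, b) else b
                 for a, b in zip(teams[::2], teams[1::2])]
    return teams[0]

n = 100_000
counts = {}
for _ in range(n):
    champ = simulate_cup(bracket)
    counts[champ] = counts.get(champ, 0) + 1

for team, c in sorted(counts.items(), key=lambda kv: -kv[1])[:4]:
    print(f"{team}: {c / n:.1%}")
```

Dividing each champion count by the number of simulations gives the winning probabilities; the same runs also yield the stage-by-stage and final-pairing probabilities discussed below.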

Again, Brazil is the favorite, but now with a 32% chance to win. After its impressive victory against Spain, the Netherlands' odds jumped to 23.5%: it is now the second favorite. Germany (21.6%) and Argentina (8.6%) follow. According to our model, there is thus an 86% chance that one of these four teams will be champion.

Let’s now look at the possible final matches:

The most probable finals are Brazil vs. Netherlands (21.5%) and Germany vs. Netherlands (16.7%). A Brazil vs. Germany final is, however, impossible, since these teams are on the same side of the tournament tree. Here is the most likely tournament tree:

In the knockout phase, the position in the tournament tree matters: teams on the same side as Brazil and Germany (such as France and Colombia) will have a hard time reaching the final. On the other hand, the United States, which is on the weaker side of the tree, has about a 6% chance to reach its first World Cup final.

Finally, let's see how far in the competition teams can hope to go. The following plots show, for the 9 favorite teams, the probabilities of reaching (in blue), and of being eliminated at (in orange), a given stage of the competition:

We see that Germany has a 35% chance of being eliminated at the semi-finals stage (probably against Brazil), while France and Colombia will probably be stopped at the quarter-finals stage (probably by Germany and Brazil, respectively).

Let's now analyze how our classifier performed on the group-phase matches. Forty-eight matches have been played, and the classifier correctly predicted about 62.5% of them:

This is close to the 59% accuracy obtained on the test set in the previous post. Accuracy is an interesting property to measure, but it does not reveal the full power of the classifier (we could have obtained a similar accuracy by always predicting a victory for the higher Elo-ranked team). It is more interesting to look at how reliable the probabilities computed by the classifier are. For example, let's compute the likelihood of the classifier on the past matches, that is, the probability it attributed to the sequence of actual match outcomes, P(outcome1) × P(outcome2) × … × P(outcome48):
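In code, the likelihood is just a product over matches. A small Python illustration with made-up probabilities (the real values come from the trained classifier):

```python
import math

# Probability the classifier assigned to each match's actual outcome
# (illustrative values; the real ones come from the trained classifier).
p_actual = [0.55, 0.30, 0.62, 0.48, 0.71]

# The likelihood is the product of these terms; over 48 matches it is
# numerically safer to sum logarithms and exponentiate at the end.
log_likelihood = sum(math.log(p) for p in p_actual)
likelihood = math.exp(log_likelihood)
print(likelihood)
```

A classifier that assigns well-calibrated probabilities to the outcomes that actually occurred yields a higher likelihood than one that is merely accurate.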

This value can be compared to the likelihood computed from commonly believed probabilities: bookmakers’ odds. Bookmakers tune their odds in order to always win money: if \$3 has been bet on A, \$2 on B, and \$5 on C, they will set the odds (the amount you get if you bet \$1 on the corresponding outcome) for A, B, and C a bit under:

Therefore, if we invert the odds, we can obtain the outcome probabilities believed by bettors. So, can our classifier compete with this “collective intelligence”?
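The inversion can be sketched in a few lines of Python, with made-up decimal odds: the inverses of the quotes sum to slightly more than 1 (that is the bookmaker's margin from the example above), so we normalize to get a proper probability distribution.

```python
# Illustrative decimal odds quoted for a match's three outcomes
# (home win / draw / away win); not real bookmaker data.
odds = {"home": 3.20, "draw": 4.80, "away": 1.95}

# Inverting each quote gives an "implied" probability; the inverses sum
# to slightly more than 1 (the bookmaker's margin, or "overround"),
# so we normalize them.
implied = {k: 1.0 / v for k, v in odds.items()}
margin = sum(implied.values())
probs = {k: p / margin for k, p in implied.items()}

print(f"margin: {margin:.3f}")
print(probs)
```

The margin above 1 is exactly the cut the bookmaker keeps regardless of the outcome.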

We scraped the World Cup betting odds as they were right before each match from http://www.oddsportal.com/soccer/world/world-cup-2014/results and converted them to probabilities. We obtained a likelihood of 1.33209 × 10⁻²⁰, which is more than five times smaller than the likelihood of our classifier: there is thus about an 85% chance that our probabilities are “better” than the bookmakers’. The simple fact that our classifier’s probabilities compete with bookmakers’ is remarkable, as we only used a few simple features to create the classifier. It is thus surprising to see that our classifier probably outperforms bookmakers’ odds: we might even be able to make money!

To test this, let’s imagine that we bet \$1 on every match using the classifier (setting the value of UtilityFunction as explained in the previous post). Here are the matches we would have gotten right, and the corresponding gains:

The classifier only got 38% of its bets right. However, it often chose to bet on the underdog in order to increase its expected gain. In the end, we obtained \$16 of profit, which is about 33% of our stake! Have we been lucky? To answer this, we compute the probability distribution of gains (through Monte Carlo simulations) according to our probabilities and to the bookmakers’:

The average profit according to our model is \$14, so we have indeed been a bit lucky with our \$16 of profit. Can we at least conclude that our probabilities outperform the bookmakers’? Again, we can’t be sure, but by comparing the probability of obtaining a \$16 profit under both models, we find that there is a 65% chance that our model would actually allow us to make money in the future… To be tested at the next international competition!
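The gain-distribution simulation can be sketched like this (Python, with made-up odds and model probabilities; the real computation used the classifier's probabilities and the scraped odds):

```python
import random

random.seed(0)

# For each bet: the bookmaker's decimal odds on the outcome we backed, and
# the probability our model assigns to that outcome (made-up numbers).
bets = [(2.8, 0.42), (3.5, 0.33), (1.9, 0.58), (4.2, 0.27), (2.2, 0.50)]

def simulate_profit(bets):
    """One simulated set of matches: $1 on each, settled at the quoted odds."""
    profit = 0.0
    for odds, p in bets:
        # We win odds - 1 with probability p, and lose our $1 stake otherwise.
        profit += (odds - 1.0) if random.random() < p else -1.0
    return profit

n = 50_000
profits = [simulate_profit(bets) for _ in range(n)]
mean = sum(profits) / n
print(f"average profit: ${mean:.2f}")
```

Running the same simulation with the bookmakers’ implied probabilities in place of the model’s gives the second distribution; comparing the two at the observed profit is what yields the 65% figure quoted above.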

Posted in: Wolfram Language

 Thanks for the update! It would be interesting yet again to update after the end of the competition to see how the classifier has performed overall. ;) I guess this could also be extended to other tournaments as well, e.g. the NBA or the NFL? Posted by Monte Carlo    June 27, 2014 at 3:47 am
 Amazing prediction model, how long did this take you? I noticed that you said you ran 100k simulations to predict this. Also, what are the hardware specs? Posted by Alex    June 27, 2014 at 7:54 am
 Hi Alex, Thanks, all the computations were done on a recent laptop (4 cores with hyperthreading). Making the classifier took 20 seconds (the time for Classify[trainingset] to output its result!). The 100,000 simulations (that is, 1.6 million matches) took about 5 minutes to run (although it has not really been optimized, since that was not necessary). Posted by Etienne Bernard    June 27, 2014 at 12:36 pm
 The guts parameter is missing!!!! Argentina wins by a landslide Posted by Chango    June 27, 2014 at 11:09 am
 I really hope this time the model and statistics don’t work. Honestly, I think the best team so far has been the Netherlands, but Colombia is a great team with higher chances (in my estimation) than this model predicts. Posted by Maria    June 27, 2014 at 11:32 am
 … and Chile! Posted by Jonas Petong    June 28, 2014 at 3:17 am
 Could you compare in a future article your model vs the one from Fivethirtyeight? They have nailed 100% of the results using ESPN SPI numbers: Here is the link: http://fivethirtyeight.com/interactives/world-cup/ Posted by Nestor    June 27, 2014 at 12:28 pm
 I don’t think they nailed 100%, they updated after every stage. Do you have a copy of their first predictions? Would be interesting. Posted by geieraffe    July 1, 2014 at 6:41 am
 Wondering how much the model could improve with a “comfortable in heat/humidity” variable; maybe use the latitude of each country as a surrogate? Posted by chris    June 27, 2014 at 1:08 pm
 Historically, the host team has won almost 1/3 of the WCs… thus increasing the odds for Brazil. Posted by Wayne Cochran    June 27, 2014 at 5:09 pm
 Please don’t use history as a parameter. Maybe it’s something you add at first glance, but it does nothing to clarify the view. The thing to look at is how each team is playing lately (the parameter would be from what point we must consider the games); maybe a little of the coach’s experience, with preference for his latest games; and an extra would be the time spent practicing as a team. Posted by Fabdsa    June 27, 2014 at 9:02 pm
 Umm… Colombia, Chile are playing better than Brazil…. Posted by Alonso    June 28, 2014 at 12:30 am
 I think a lot of us can predict a Brazil win without any model :) Posted by Sparnas    June 28, 2014 at 1:28 am
 I don’t completely understand the plots with the blue/orange lines. Can someone explain? Thanks in advance Posted by Thijs    June 28, 2014 at 3:04 pm
 The blue lines show the probability for a team to reach a given stage. For example, France had a probability of 0.75 of reaching the quarter-finals. The orange lines show the probability for a team to reach and be stopped at a given stage: they show when teams are expected to leave the tournament. France was expected to leave the tournament at the quarter-finals, while for Germany it is the semi-finals. Posted by Etienne Bernard    July 7, 2014 at 11:34 am
 This has been right about the first two games so far (Brazil beat Chile, Colombia beat Uruguay). Posted by Zack    June 28, 2014 at 4:54 pm
 Keep in mind that the bookmakers are NOT trying to make accurate predictions. They try to optimize their INCOME. This means getting bettors to invest about equal amounts on the two teams and then taking a small cut from each bet. Bookmakers don’t make money, on average, by actually betting against “people.” That is why, when you compare your predictions to the bookmakers’, some bets appear cheap and some expensive. Posted by Lee    June 29, 2014 at 2:05 pm
 Looking at the pie chart, your model is amazingly accurate. Only the Brazil–Mexico option (3.1%) in the finals is impossible after the first knockout games. Have you updated your model now that the first knockout phase is completed? Posted by herbert    July 3, 2014 at 5:30 am
 Is there going to be any update on the winning probabilities after the last matches? Posted by Ali Ghaderi    July 4, 2014 at 5:49 am
 Man, this is working like magic. It basically guessed the semifinals. Impressive indeed. Posted by JJJ    July 5, 2014 at 10:31 pm
 Are you planning to come up with any more updates before the semis? Posted by Ankur    July 6, 2014 at 2:54 pm
 Looks like statistics wins! Now that we are in the semis, all 4 top teams your model predicted are in. Posted by Kay Herbert    July 7, 2014 at 9:48 am
 What a detailed statistical analysis! I made a simple analysis and predicted the Germans to be the winners on my site. Posted by Shiva kumar    July 8, 2014 at 8:03 am
 Ah, you’d better reconsider a model improvement after a Brazil 1 x 7 Germany: statistically unpredictable, but within the realm of the possible. Posted by Gerson Faria    July 9, 2014 at 11:47 am
 Could we see the source code of your app? I’m especially interested to see how you scraped the internet for data and the use of classify. Posted by Kay Herbert    July 21, 2014 at 8:03 am
 Hi, I was wondering if you could explain how you managed to compute the probability distributions of gains for your model and the bookmakers model when you don’t know the true probabilities of the match outcomes. What probabilities do you assume to be “correct” when running the Monte Carlo simulation? Thanks! Posted by Jack Davis    March 4, 2015 at 2:43 pm
 Hi Jack, The probabilities for our model are directly given by the classifier using ClassifierFunction[...][data, "Probabilities"]. For the bookmakers’ model, we used the bookmakers’ odds (they actually represent probabilities, as explained above). Then we ran two Monte Carlo simulations (we just randomly sample each match many times in order to get statistics and plot the histograms). Thanks, Etienne Posted by Etienne    September 2, 2015 at 8:59 am