
World Cup Follow-Up: Update of Winning Probabilities and Betting Results

For Etienne’s initial predictions, see last week’s World Cup blog post.

The World Cup is halfway through: the group phase is over, and the knockout phase is beginning. Let’s update the winning probabilities for the remaining teams and analyze how our classifier performed on the group-phase matches.

Of the 32 initial teams, 16 have qualified for the knockout phase:

16 teams are qualified for the knockout phase

There have been some surprises: of our 10 favorite teams, 3 have been eliminated (Portugal, England, and, most surprisingly, Spain). But most of the top teams are still there.

Using our classifier, we again compute the winning probabilities of each team. To do so, we update the team features to include the latest matches (that is, we update the Elo ratings and the goal-average features), and then we run 100,000 Monte Carlo simulations of the World Cup starting from the round of 16. Here are the probabilities we obtained:

Winning probabilities
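For illustration, here is a minimal sketch of how such a simulation can be run in the Wolfram Language. The names matchProbability (a hypothetical function built on the classifier, giving the probability that the first team beats the second; knockout matches always produce a winner) and round16Teams are our assumptions, not code from the post:

(* play one round: each pair of teams produces a winner at random *)
playRound[teams_List] :=
  If[RandomReal[] < matchProbability[#[[1]], #[[2]]], #[[1]], #[[2]]] & /@
    Partition[teams, 2]

(* one random playout of the 16-team bracket: 4 rounds down to a champion *)
simulateCup[teams_List] := First@Nest[playRound, teams, 4]

(* estimate winning probabilities from 100,000 playouts *)
winners = Table[simulateCup[round16Teams], {100000}];
Sort[Counts[winners]/100000., Greater]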

Again, Brazil is the favorite, but now with a 32% chance of winning. After its impressive victory against Spain, the Netherlands’ odds jumped to 23.5%: it is now the second favorite. Germany (21.6%) and Argentina (8.6%) follow. There is thus, according to our model, an 86% chance that one of these four teams will be champion.

Let’s now look at the possible final matches:

Possible final matches

The most probable finals are Brazil vs. Netherlands (21.5%) and Germany vs. Netherlands (16.7%). A Brazil vs. Germany final is, however, impossible, since these teams are on the same side of the tournament tree. Here is the most likely tournament tree:

Tournament tree

In the knockout phase, the position in the tournament tree matters: teams on the same side as Brazil and Germany (such as France and Colombia) will have a hard time reaching the final. On the other hand, the United States, which is on the weaker side of the tree, has about a 6% chance to reach its first World Cup final.

Finally, let’s see how far in the competition teams can hope to go. The following plots show, for the nine favorite teams, the probabilities of reaching (in blue) and of being eliminated at (in orange) a given stage of the competition:

How far can nine favorite teams make it?

We see that Germany has a 35% chance of being eliminated at the semi-final stage (probably against Brazil), while France and Colombia will probably be stopped at the quarter-final stage (probably against Germany and Brazil, respectively).

Let’s now analyze how our classifier performed on the group-phase matches. Forty-eight matches have been played, and it correctly predicted about 62.5% of them:

62.5% prediction accuracy
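In the Wolfram Language, this kind of evaluation is conveniently done with ClassifierMeasurements. A sketch, where classifier (the trained ClassifierFunction) and groupPhaseMatches (the 48 played matches as input -> outcome rules) are assumed names:

measurer = ClassifierMeasurements[classifier, groupPhaseMatches];
measurer["Accuracy"]  (* fraction of matches predicted correctly *)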

This is close to the 59% accuracy obtained on the test set of the previous post. Accuracy is an interesting property to measure, but it does not reveal the full power of the classifier (we could have obtained a similar accuracy by always predicting a victory for the higher Elo-ranked team). It is more interesting to look at how reliable the probabilities computed by the classifier are. For example, let’s compute the likelihood of the classifier on past matches, that is, the probability attributed by the classifier to the sequence of actual match outcomes, P(outcome 1) × P(outcome 2) × … × P(outcome 48):

measurer["Likelihood"]
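Equivalently, the likelihood can be computed by hand from the classifier’s probability estimates. A sketch, assuming hypothetical lists inputs (the match features) and outcomes (the actual results):

(* probability the classifier assigned to each actual outcome *)
probs = MapThread[classifier[#1, "Probabilities"][#2] &, {inputs, outcomes}];
likelihood = Times @@ probs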

This value can be compared to the likelihood computed from commonly believed probabilities: bookmakers’ odds. Bookmakers tune their odds in order to always win money: if $3 has been bet on A, $2 on B, and $5 on C, they will set the odds (the amount you get if you bet $1 on the corresponding outcome) for A, B, and C a bit under:

$10/$3 ≈ 3.33 for A, $10/$2 = 5 for B, and $10/$5 = 2 for C (the total pool divided by the amount bet on each outcome), so that they pay out less than they collect whatever the result.

Therefore, if we invert the odds, we can obtain the outcome probabilities believed by bettors. So, can our classifier compete with this “collective intelligence”?
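A sketch of this conversion (oddsToProbabilities is our name, not from the post): the inverse odds sum to slightly more than 1 because of the bookmaker’s margin, so we normalize them:

oddsToProbabilities[odds_List] := Normalize[1/odds, Total]

oddsToProbabilities[{3.2, 4.8, 1.9}]  (* hypothetical win/draw/loss odds *)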

We scraped the World Cup betting odds as they were right before each match from http://www.oddsportal.com/soccer/world/world-cup-2014/results and converted them to probabilities. We obtained a likelihood of 1.33209 × 10⁻²⁰, which is more than five times smaller than the likelihood of our classifier: there is thus about an 85% chance that our probabilities are “better” than the bookmakers’. The simple fact that our classifier’s probabilities compete with the bookmakers’ is remarkable, as we only used a few simple features to create the classifier. It is thus surprising to see that our classifier probably outperforms the bookmakers’ odds: we might even be able to make money!
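The 85% figure is consistent with reading the two likelihoods as relative odds between the two models under equal priors (our interpretation; the post does not spell out this computation):

r = 5.7;   (* assumed likelihood ratio of our classifier to the bookmakers *)
r/(1 + r)  (* ≈ 0.85: probability that our model is the better one *)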

To test this, let’s imagine that we bet $1 on every match using the classifier (setting the value of UtilityFunction as explained in the previous post). Here are the matches that we would have gotten right, and their corresponding gains:

Betting with UtilityFunction

The classifier only got 38% of its bets right. However, it often chose to bet on the underdog in order to increase its expected gain. In the end, we obtained a $16 profit, which is about 33% of our stake! Have we been lucky? To answer this, we compute the probability distribution of gains (through Monte Carlo simulations) according to our probabilities and to the bookmakers’:

Distribution of profits: bookmaker vs. our model
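A sketch of how such a distribution can be obtained, where odds (the payout of each of the 48 bets we placed) and winProbs (each bet’s probability of winning under the model being tested) are assumed lists:

(* total profit of 48 one-dollar bets: each pays odds - 1 if it wins, -1 otherwise *)
simulateProfit[odds_List, winProbs_List] :=
  Total@MapThread[If[RandomReal[] < #2, #1 - 1, -1] &, {odds, winProbs}]

profits = Table[simulateProfit[odds, winProbs], {100000}];
Histogram[profits]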

The average profit, according to our model, is $14. We have thus been a bit lucky with this $16 profit. Can we at least conclude that our probabilities outperform the bookmakers’? Again, we can’t be sure, but by computing the probability density of obtaining a $16 profit under both models, we see that there is a 65% chance that our model will actually allow us to make money in the future… to be tested at the next international competition!

Comments



  1. Thanks for the update! It would be interesting to update yet again after the end of the competition to see how the classifier performed overall. ;)

    I guess this could also be extended to other tournaments, e.g., the NBA or the NFL?

  2. Amazing prediction model! How long did this take you? I noticed that you said you ran 100k simulations to predict this. Also, what are the hardware specs?

    • Hi Alex,

      Thanks, all the computations were done on a recent laptop (4 cores with hyperthreading).
      Making the classifier took 20 seconds (the time for Classify[trainingset] to output its result!).
      The 100,000 simulations (that is, 1.6 million matches) took about 5 minutes to run (although they have not really been optimised, since it was not necessary).

  3. The “balls” parameter is missing!!!! Argentina wins by a landslide

  4. I really hope this time the model and statistics don’t work. Honestly, I think the best team so far has been the Netherlands, but Colombia is a great team with higher chances (in my estimation) than this model predicts.

  5. Could you compare, in a future article, your model vs. the one from FiveThirtyEight? They have nailed 100% of the results using ESPN SPI numbers:
    Here is the link: http://fivethirtyeight.com/interactives/world-cup/

  6. Wondering how much the model could improve with a “comfortable in heat/humidity” variable; maybe use the latitude of the country as a surrogate?

  7. Historically, the host team has won almost 1/3 of the World Cups… thus increasing the odds for Brazil.

  8. Please don’t use history as a parameter. Maybe it’s something you add at first glance, but it does nothing to clarify the view.
    The thing to look at is how each team has been playing lately (the parameter would be from what point we must consider the games); maybe also a little of the coach’s experience, giving preference to his latest games; and an extra would be the time spent practicing as a team.

  9. Umm… Colombia and Chile are playing better than Brazil….

  10. I think a lot of us can predict a Brazil win without any model :)

  11. I don’t completely understand the plots with the blue/orange lines. Can someone explain? Thanks in advance.

    • The blue lines show the probability for a team to reach a given stage. For example, France had a probability of 0.75 of reaching the quarter-finals. The orange lines show the probability for a team to reach, and be eliminated at, a given stage: they show when teams are expected to leave the tournament. France was expected to leave the tournament at the quarter-finals, while for Germany it is the semi-finals.

  12. This has been right about the first two games so far (Brazil beat Chile, Colombia beat Uruguay).

  13. Keep in mind that the bookmakers are NOT trying to make accurate predictions. They try to optimize their INCOME. This means getting bettors to invest about equal amounts on the two teams and then taking a small cut from each bet. Bookmakers don’t, on average, make money from actually betting against “people”.
    That is why, when you compare your predictions to the bookmakers’, some bets appear cheap and some expensive.

  14. Looking at the pie chart, your model is amazingly accurate. Only the Brazil-Mexico final option (3.1%) is impossible after the first knockout games.
    Have you updated your model since the first knockout round was completed?

  15. Is there going to be any update on the winning probabilities after the last matches?

  16. Man, this is working like magic. It basically guessed the semifinals. Impressive indeed.

  17. Are you planning to come up with any more updates before the semis?

  18. Looks like statistics wins! Now that we are in the semis, all 4 top teams your model predicted are in.

  19. What a detailed statistical analysis! I made a simple analysis and predicted the Germans to be the winners on my site.

  20. Ah, you’d better consider a model improvement after Brazil 1 x 7 Germany: statistically unpredictable, but present in the realm of the possible.

  21. Could we see the source code of your app? I’m especially interested to see how you scraped the internet for data and how you used Classify.

    • Hi, I was wondering if you could explain how you managed to compute the probability distributions of gains for your model and the bookmakers’ model when you don’t know the true probabilities of the match outcomes. What probabilities do you assume to be “correct” when running the Monte Carlo simulation? Thanks!

      • Hi Jack,

        The probabilities for our model are directly given by the classifier, using ClassifierFunction[…][data, “Probabilities”].

        For the bookmakers’ model, we used the bookmakers’ odds (they actually represent probabilities, as explained above).

        Then we ran two Monte Carlo simulations (we just randomly sample each match many times in order to get statistics and plot the histograms).

        Thanks,
        Etienne
