Though a well-known term in the data science industry, Log Loss may be unfamiliar to the average reader of this site. Let this blog post serve as a quick introduction to the term, its meaning, and the ways it can be applied to our data.

## An Introduction to Log Loss

Log Loss has increasingly become the go-to metric for judging the accuracy of binary outcomes. One could probably tie this to the increasing popularity of Kaggle, a web site that hosts competitions for data scientists. Regardless, we use this metric quite often on DRatings, so it is important for us to give a little bit of an explainer of the metric here.

## Breaking Down Log Loss

If one reads nothing else, then it's important to know that the analyst is always shooting to have a Log Loss as CLOSE TO ZERO as possible. The best Log Loss that one can have is zero, and the worst Log Loss runs to negative infinity. (Many references define Log Loss with a leading minus sign, in which case the goal is the lowest value; we keep the natural log's own sign here.) This is how the breakdown for Log Loss looks as a formula.

Consider two teams, Team A and Team B, playing each other in a contest.

*x* = probability of “Team A” to win.

If “Team A” wins, *Log Loss = ln(x)*.

If “Team B” wins, *Log Loss = ln(1-x)*.
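The two cases above amount to a tiny function. Here is a minimal sketch in Python (the function name is ours, and we use the article's sign convention of ln rather than -ln):

```python
import math

def log_loss(x, team_a_won):
    """Per-game Log Loss under this article's convention (ln, not -ln).

    x: the probability we gave Team A to win.
    team_a_won: True if Team A actually won the game.
    """
    return math.log(x) if team_a_won else math.log(1 - x)

print(round(log_loss(0.9, True), 4))   # Team A at 90% and wins  -> -0.1054
print(round(log_loss(0.9, False), 4))  # Team A at 90% and loses -> -2.3026
```

Note how asymmetric the penalty is: a confident pick that hits costs almost nothing, while the same pick missing costs more than twenty times as much.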

The table below shows what Log Loss looks like at a variety of probabilities.

| x (prob. of Team A) | 0.001 | 0.01 | 0.05 | 0.1 | 0.3 | 0.5 | 0.7 | 0.9 | 0.95 | 0.99 | 0.999 |
|---|---|---|---|---|---|---|---|---|---|---|---|
| Team B wins: ln(1-x) | -0.0010 | -0.0101 | -0.0513 | -0.1054 | -0.3567 | -0.6931 | -1.2040 | -2.3026 | -2.9957 | -4.6052 | -6.9078 |
| Team A wins: ln(x) | -6.9078 | -4.6052 | -2.9957 | -2.3026 | -1.2040 | -0.6931 | -0.3567 | -0.1054 | -0.0513 | -0.0101 | -0.0010 |
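The table is easy to regenerate yourself. A quick sketch:

```python
import math

# The probabilities for Team A shown in the table above.
probs = [0.001, 0.01, 0.05, 0.1, 0.3, 0.5, 0.7, 0.9, 0.95, 0.99, 0.999]

# One row per outcome: Team B winning scores ln(1-x), Team A winning scores ln(x).
row_b = [round(math.log(1 - x), 4) for x in probs]
row_a = [round(math.log(x), 4) for x in probs]

print(row_b)
print(row_a)
```

The two rows mirror each other, and both pass through -0.6931 at x = 0.5.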

We can further explain with the following two examples. Let us assume that we give Team A a 70% chance to win, and they do win. Looking back at the table, this would mean that our Log Loss for this observation is -0.3567. Now, let's assume that, under the same projection, Team A loses. Our Log Loss for this observation is now -1.2040.

## What is a Good Log Loss?

It completely depends! Evaluating Log Loss can be very tricky. At the end of the day, Log Loss is used to judge which model is the best. The very worst that a model should do over the long term is -0.6931. Why? Because an analyst who is simply guessing, picking Team A to win at 50%, ends up with a result of -0.6931 whether Team A wins or loses.
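That -0.6931 baseline is just the natural log of one half, which a one-liner confirms:

```python
import math

# A coin-flip forecaster assigns 0.5 to every game, so either
# outcome scores ln(0.5) and the long-run average is the same.
print(round(math.log(0.5), 4))  # -> -0.6931
```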

In sports or analyses where the true probabilities are closer to 50%, it's much harder to get a Log Loss near zero. As of this writing, our MLB Baseball Predictions have a Log Loss of -0.6755 over 3,000+ games. Compare this to our -0.622 Log Loss over 267 games in the NFL, which has much less parity.
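A season-long figure like those above is just the average of the per-game values. A minimal sketch, with made-up probabilities rather than our actual projections:

```python
import math

# Hypothetical five-game stretch: the probability our model gave
# to the eventual winner of each game (illustrative numbers only).
winner_probs = [0.55, 0.62, 0.48, 0.71, 0.60]

# Each game scores ln(prob given to the winner); average them.
season_log_loss = sum(math.log(p) for p in winner_probs) / len(winner_probs)
print(round(season_log_loss, 4))  # -> -0.5326
```

Notice the 0.48 game: the model's favorite lost, yet the penalty is modest because the projection was nearly a coin flip.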

The end goal for any sports projection formula is to beat the Log Loss of the sportsbook's closing line on a consistent basis. Any analytic model that could do this could justifiably be expected to be profitable betting sports over time.

## Pros/Cons

A few benefits and drawbacks of the Log Loss function as it relates to judging accuracy.

#### Pros

**Very strong with a lot of observations**: When the number of observations gets into the thousands and more, this tool works very well.

**It's the industry standard**: Everyone in data science uses or is familiar with Log Loss.

#### Cons

**Doesn't work well with a small number of observations**: Generally, this can be said about any method, but it's worth noting.

**Beware of correlations and the ability to game**: This is one of my frustrations with a lot of Kaggle's competitions. For instance, in the NCAA ML Competition, one is allowed two entries. What is stopping someone from creating an entry that "picks a winner" by giving one team a 100% chance across the board? Clearly, no team has a 100% chance to win any game, and if this team loses, then the entry is out. But if the entry hits, then that's six out of 64 observations with a Log Loss of 0. This entry is bound to place high on the leader board if the rest of the analytic model is sound.
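The size of that edge is easy to see with a quick sketch. Assume, purely for illustration, a sound model that gives every eventual winner a 65% chance:

```python
import math

# Hypothetical 64-game bracket. The "gamed" entry picks one team at 100%
# across its six-game championship run and gets lucky (ln(1.0) = 0 each),
# scoring the other 58 games with the sound model's 0.65 on each winner.
gamed = [math.log(1.0)] * 6 + [math.log(0.65)] * 58
honest = [math.log(0.65)] * 64

print(round(sum(gamed) / 64, 4))   # -> -0.3904
print(round(sum(honest) / 64, 4))  # -> -0.4308
```

Six free zeros pull the gamed entry's average meaningfully closer to zero than the honest one, which is exactly the leaderboard distortion described above.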

## More Reading

For more information on Log Loss, *Making Sense of Log Loss* is a wonderful reference. Those who really want to get down in the weeds will enjoy *Understanding binary cross-entropy / log loss: a visual explanation*. Happy reading!