# Log Loss vs. Brier Score

When evaluating a predictive model, it's important to choose a performance metric that measures the model's ability to make correct predictions. Two commonly used metrics for assessing probabilistic predictions are log loss and the Brier score. In this blog post, we'll explore the differences between these two metrics and discuss when each one may be more appropriate to use.

### Log Loss

Log loss, also known as cross-entropy loss, is a measure of the difference between the predicted probability and the true probability of an event. It is widely used in classification problems where the goal is to predict the probability of each class. The log loss formula penalizes models that are confident but incorrect, and it’s defined as follows:

Log Loss = -1/N * ∑[y*log(p) + (1-y)*log(1-p)]

Where N is the number of instances in the dataset, y is the true label (either 0 or 1), p is the predicted probability of the positive class, and log is the natural logarithm. Log loss ranges from 0 to infinity, where a lower score indicates better performance. A perfect model would have a log loss of 0, while a model that always predicts 0.5 would score ln(2) ≈ 0.693.
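Given that definition, log loss is easy to compute by hand. Here's a minimal sketch in plain Python (the probability clipping is a common practical safeguard against log(0), not part of the formula itself):

```python
import math

def log_loss(y_true, p_pred, eps=1e-15):
    """Mean negative log-likelihood for binary labels (0/1)."""
    total = 0.0
    for y, p in zip(y_true, p_pred):
        p = min(max(p, eps), 1 - eps)  # clip to avoid log(0)
        total += y * math.log(p) + (1 - y) * math.log(1 - p)
    return -total / len(y_true)

# A model that always predicts 0.5 on balanced data scores ln(2) ≈ 0.6931
print(round(log_loss([1, 0, 1, 0], [0.5, 0.5, 0.5, 0.5]), 4))  # 0.6931
```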

### Brier Score

The Brier score measures the mean squared difference between the predicted probability and the true label. It is also a popular metric for evaluating the accuracy of binary classification models. The Brier score formula is as follows:

Brier Score = 1/N * ∑(y - p)^2

Where N is the number of instances in the dataset, y is the true label (either 0 or 1), and p is the predicted probability of the positive class. The Brier score ranges from 0 to 1, where a lower score indicates better performance. A perfect model would have a Brier score of 0, while a model that always predicts 0.5 would score 0.25.
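The Brier score is even simpler to compute, since it's just a mean squared error over probabilities; a minimal sketch:

```python
def brier_score(y_true, p_pred):
    """Mean squared difference between predicted probability and outcome."""
    return sum((y - p) ** 2 for y, p in zip(y_true, p_pred)) / len(y_true)

# A model that always predicts 0.5 scores 0.25 regardless of the outcomes
print(brier_score([1, 0, 1, 0], [0.5, 0.5, 0.5, 0.5]))  # 0.25
```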

The table below shows the per-prediction penalty for being right or wrong at various confidence levels. "Win %" is the predicted probability of winning; the "(Win)" and "(Loss)" columns show the loss incurred on that prediction if the team wins or loses, respectively.

| Win % | Log Loss (Win) | Log Loss (Loss) | Brier Score (Win) | Brier Score (Loss) |
|-------|----------------|-----------------|-------------------|--------------------|
| 0.1%  | 6.9078 | 0.0010 | 0.998001 | 0.000001 |
| 1%    | 4.6052 | 0.0101 | 0.9801   | 0.0001   |
| 5%    | 2.9957 | 0.0513 | 0.9025   | 0.0025   |
| 10%   | 2.3026 | 0.1054 | 0.81     | 0.01     |
| 30%   | 1.2040 | 0.3567 | 0.49     | 0.09     |
| 50%   | 0.6931 | 0.6931 | 0.25     | 0.25     |
| 70%   | 0.3567 | 1.2040 | 0.09     | 0.49     |
| 90%   | 0.1054 | 2.3026 | 0.01     | 0.81     |
| 95%   | 0.0513 | 2.9957 | 0.0025   | 0.9025   |
| 99%   | 0.0101 | 4.6052 | 0.0001   | 0.9801   |
| 99.9% | 0.0010 | 6.9078 | 0.000001 | 0.998001 |
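These per-prediction values fall straight out of the two formulas (-log(p) or -log(1-p) for log loss, (1-p)² or p² for the Brier score). A short sketch that reproduces the table:

```python
import math

# Predicted win probabilities from the table
probs = [0.001, 0.01, 0.05, 0.10, 0.30, 0.50, 0.70, 0.90, 0.95, 0.99, 0.999]

print(f"{'Win %':>6} {'LL(win)':>8} {'LL(loss)':>9} {'BS(win)':>9} {'BS(loss)':>9}")
for p in probs:
    ll_win, ll_loss = -math.log(p), -math.log(1 - p)  # penalty if the team wins / loses
    bs_win, bs_loss = (1 - p) ** 2, p ** 2
    print(f"{p:>6.1%} {ll_win:8.4f} {ll_loss:9.4f} {bs_win:9.6f} {bs_loss:9.6f}")
```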

One advantage of log loss over the Brier score is that log loss is more sensitive to differences in predicted probabilities, especially near 0 and 1: moving a prediction from 99% to 99.9% confidence matters a great deal under log loss but barely registers under the Brier score. This makes log loss better at distinguishing models by the quality of their probability estimates. The Brier score, with its bounded quadratic penalty, treats errors more uniformly and may be less effective at separating well-calibrated models from poorly calibrated ones.

However, one disadvantage of log loss is its sensitivity to extreme miscalibration. Because the penalty is unbounded, a single confident-but-wrong prediction can dominate the average and make log loss scores unstable. In practice, it can be difficult to achieve well-calibrated probabilities, and a poorly calibrated model may produce unreliable log loss scores. The Brier score, whose per-prediction penalty is capped at 1, is more robust to such outliers and is a good metric to use when the predicted probabilities may not be well calibrated.
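This robustness difference is easy to see numerically. In the self-contained sketch below, one very confident miss among 100 predictions contributes about 6.9 to the log loss sum but at most 1 to the Brier sum:

```python
import math

def log_loss(y_true, p_pred):
    return -sum(y * math.log(p) + (1 - y) * math.log(1 - p)
                for y, p in zip(y_true, p_pred)) / len(y_true)

def brier_score(y_true, p_pred):
    return sum((y - p) ** 2 for y, p in zip(y_true, p_pred)) / len(y_true)

# 99 reasonable predictions plus one very confident miss:
# p=0.999 on an event that did not happen
y = [1] * 99 + [0]
p = [0.9] * 99 + [0.999]

# The single miss contributes -log(1 - 0.999) ≈ 6.91 to the log loss sum,
# but only 0.999² ≈ 0.998 to the Brier sum
print(round(log_loss(y, p), 4))
print(round(brier_score(y, p), 4))
```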

### Conclusion

In conclusion, both log loss and the Brier score are useful metrics for evaluating probabilistic predictions. Log loss is more sensitive to differences in predicted probabilities and is commonly used as an objective function when training machine learning models; the Brier score's bounded penalty makes it more forgiving when the predicted probabilities may not be well calibrated. When evaluating a binary classification model, it's worth considering both and choosing the one most appropriate for the application.

In sports, I'm of the opinion that log loss greatly outperforms the Brier score. The limitation of the Brier score is that it caps the penalty for an incorrect prediction at 1 per contest, no matter how confident the prediction was. Log loss, by contrast, grows without bound as confidence approaches certainty, which imposes a more realistic penalty for overconfidence in predictions.