Choosing Classification Model Evaluation Criteria | by Viyaleta Apgar | Jan, 2025


Is Recall / Precision better than Sensitivity / Specificity?

Towards Data Science

The simplest way to assess the quality of a classification model is to pair the values we expected with the values the model predicted and count all the cases in which we were right or wrong; that is — construct a confusion matrix.
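The pairing-and-counting idea can be sketched in a few lines of Python (the function and variable names here are illustrative, not from the article):

```python
# Minimal sketch: build a binary confusion matrix by pairing
# expected labels with predicted labels and counting outcomes.
def confusion_counts(y_true, y_pred):
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    return {"TP": tp, "TN": tn, "FP": fp, "FN": fn}

y_true = [1, 0, 1, 1, 0, 0]
y_pred = [1, 0, 0, 1, 0, 1]
print(confusion_counts(y_true, y_pred))  # -> {'TP': 2, 'TN': 2, 'FP': 1, 'FN': 1}
```

In practice a library routine such as scikit-learn's `confusion_matrix` does the same counting, but the hand-rolled version makes the four cells explicit.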

For anyone who has come across classification problems in machine learning, a confusion matrix is a fairly familiar concept. It plays a vital role in helping us evaluate classification models and provides clues on how we can improve their performance.

Although classification tasks produce discrete outputs, the underlying models carry some degree of uncertainty.

Most model outputs can be expressed as class-membership probabilities. Typically, a decision threshold that maps the output probability to a discrete class is set at the prediction step. Most frequently, this probability threshold is set to 0.5.

However, depending on the use-case and on how well the model is able to capture the right information, this threshold can be adjusted. We can analyze how the model performs at various thresholds to achieve the desired results.
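The threshold step described above can be sketched as follows (the probability values and the 0.8 alternative threshold are illustrative assumptions):

```python
# Sketch: map output probabilities to discrete classes at a
# chosen decision threshold; adjusting the threshold changes
# which borderline cases become positive predictions.
def classify(probs, threshold=0.5):
    return [1 if p >= threshold else 0 for p in probs]

probs = [0.2, 0.55, 0.7, 0.45, 0.9]
print(classify(probs))       # default 0.5 -> [0, 1, 1, 0, 1]
print(classify(probs, 0.8))  # stricter    -> [0, 0, 0, 0, 1]
```

Raising the threshold trades recall for precision: fewer examples are labeled positive, so false positives drop while false negatives rise.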
