F1 score

The F1 score is the harmonic mean of precision and recall. It is calculated by the following formula:

F1 score = (2 x Precision x Recall) / (Precision + Recall)

The more precision and recall deviate from each other, the lower their harmonic mean, and thus the lower the F1 score.
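The formula above can be sketched in a few lines of Python. The confusion-matrix counts (tp, fp, fn) are hypothetical inputs, not from any particular dataset:

```python
def f1_score(tp: int, fp: int, fn: int) -> float:
    """F1 as the harmonic mean of precision and recall,
    computed from raw confusion-matrix counts."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

# Example: precision = 0.8, recall ~= 0.667 -> F1 ~= 0.727
print(f1_score(tp=8, fp=2, fn=4))
```

Note how the result (about 0.727) sits below the arithmetic mean of 0.8 and 0.667, illustrating how the harmonic mean penalizes the gap between the two metrics.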


PRC stands for precision–recall curve. It is a method of visualizing the tradeoff between precision and recall as the decision threshold varies. Precision and recall are both model performance metrics for classification systems.
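One way to build the points of a precision–recall curve is to sweep a decision threshold over predicted scores and compute (recall, precision) at each step. A minimal sketch, assuming hypothetical `scores` and binary `labels` lists:

```python
def pr_curve(scores, labels):
    """Return (recall, precision) points by sweeping a decision
    threshold from the highest score downward."""
    points = []
    for t in sorted(set(scores), reverse=True):
        preds = [s >= t for s in scores]
        tp = sum(p and y for p, y in zip(preds, labels))
        fp = sum(p and not y for p, y in zip(preds, labels))
        fn = sum((not p) and y for p, y in zip(preds, labels))
        precision = tp / (tp + fp) if tp + fp else 1.0
        recall = tp / (tp + fn) if tp + fn else 0.0
        points.append((recall, precision))
    return points

# Toy data: 4 predicted scores with their true labels
print(pr_curve([0.9, 0.7, 0.4, 0.2], [1, 1, 0, 1]))
```

Plotting these points (recall on the x-axis, precision on the y-axis) gives the PRC; in practice a library routine such as scikit-learn's `precision_recall_curve` would be used instead.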


Recall (also known as sensitivity) is the ratio of true positives (based on the confusion matrix) to all actual positives (true positives + false negatives). It is commonly used in conjunction with precision, and it is the metric to optimize when we must minimize false negatives. Recall can be considered the counterpart of specificity.
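As a quick sketch, recall follows directly from two confusion-matrix counts (the tp and fn values here are hypothetical):

```python
def recall(tp: int, fn: int) -> float:
    """Recall (sensitivity): true positives over all actual positives."""
    return tp / (tp + fn)

# 90 positives caught, 10 missed -> recall of 0.9
print(recall(tp=90, fn=10))
```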


The specificity metric in classification problems is defined by the following formula:

Specificity = (True Negatives) / (True Negatives + False Positives)

Specificity is the metric to optimize when we need to minimize false positives. It acts as the counterpart of the recall metric.
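Mirroring the recall sketch above, specificity uses the negative-class counts instead (tn and fp are hypothetical counts):

```python
def specificity(tn: int, fp: int) -> float:
    """Specificity: true negatives over all actual negatives."""
    return tn / (tn + fp)

# 85 negatives correctly rejected, 15 false alarms -> specificity of 0.85
print(specificity(tn=85, fp=15))
```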