In statistics and machine learning, precision measures how often the positives identified by a model are true positives. It is computed from the confusion matrix as the number of true positives divided by all predicted positives (true positives + false positives). Precision is commonly used together with recall, which captures the contribution of false negatives, to evaluate a statistical experiment or machine learning model.
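The two formulas can be sketched in a few lines of Python; the confusion-matrix counts below are hypothetical values chosen for illustration:

```python
def precision(tp, fp):
    # Precision = TP / (TP + FP): the fraction of predicted
    # positives that are actually positive.
    return tp / (tp + fp) if (tp + fp) else 0.0

def recall(tp, fn):
    # Recall = TP / (TP + FN): the fraction of actual positives
    # the model managed to identify.
    return tp / (tp + fn) if (tp + fn) else 0.0

# Hypothetical counts: 80 true positives, 20 false positives,
# 40 false negatives.
tp, fp, fn = 80, 20, 40
print(precision(tp, fp))  # 0.8
print(recall(tp, fn))     # 0.666...
```

A model that predicts "positive" for everything has perfect recall but poor precision, which is why the two metrics are reported together.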
