...
Binary Classification Metrics refer to the following two formulas used to calculate the reliability of a binary classification model.
Name | Formula
---|---
True Positive Rate (Sensitivity) | TPR = TP / P = TP / (TP + FN)
True Negative Rate (Specificity) | SPC = TN / N = TN / (TN + FP)
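As a minimal sketch, both rates can be computed directly from confusion-matrix counts; the function names and example counts below are illustrative, not part of the original document:

```python
def true_positive_rate(tp, fn):
    # Sensitivity: TP / (TP + FN) -- fraction of actual positives found
    return tp / (tp + fn)

def true_negative_rate(tn, fp):
    # Specificity: TN / (TN + FP) -- fraction of actual negatives found
    return tn / (tn + fp)

# Hypothetical counts: 80 true positives, 20 false negatives,
# 90 true negatives, 10 false positives
print(true_positive_rate(80, 20))  # 0.8
print(true_negative_rate(90, 10))  # 0.9
```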
...
Measure | Available for
---|---
Confusion Matrix |
Accuracy |
ROC Curve |
AUC |
Feature Importance |
Predicted vs Actual |
MSE |
Residual Plot |
Precision and Recall |
F1 Score |
Confusion Matrix
...
If the above conditions are not satisfied, there may be missing or hidden factors/predictor variables that have not been taken into account. The residual plot is available for numerical prediction models; you can select a dataset feature to be plotted against its residuals.
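A residual is simply the difference between an actual value and the model's prediction. As a small illustration (the values below are made up), the residuals that such a plot would display can be computed like this:

```python
# Hypothetical actual target values and model predictions
actual    = [3.0, 5.0, 7.5, 9.0]
predicted = [2.8, 5.4, 7.1, 9.3]

# Residual = actual - predicted, one per record
residuals = [a - p for a, p in zip(actual, predicted)]
print(residuals)
```

Each residual would then be scatter-plotted against the selected dataset feature; a pattern in that scatter (rather than random noise around zero) suggests a missing predictor.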
Precision and Recall
Precision and Recall are performance measures used to evaluate search strategies, typically in document retrieval scenarios.
When a search is carried out on a set of records in a database, some of the records are relevant to the search and the rest are irrelevant. However, the set of records actually retrieved may not perfectly match the set of records that are relevant. Based on this, Precision and Recall can be described as follows.
Measure | Definition | Formula
---|---|---
Precision | The number of relevant records that are retrieved, as a percentage of the total number of records retrieved. | TP / (TP + FP)
Recall | The number of relevant records that are retrieved, as a percentage of the total number of records that are relevant to the search. | TP / (TP + FN)
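In the retrieval setting, TP counts relevant records that were retrieved, FP counts irrelevant records that were retrieved, and FN counts relevant records that were missed. A minimal sketch with hypothetical counts:

```python
def precision(tp, fp):
    # Fraction of retrieved records that are relevant: TP / (TP + FP)
    return tp / (tp + fp)

def recall(tp, fn):
    # Fraction of relevant records that were retrieved: TP / (TP + FN)
    return tp / (tp + fn)

# Hypothetical search: 30 relevant records retrieved,
# 10 irrelevant records retrieved, 20 relevant records missed
print(precision(30, 10))  # 0.75
print(recall(30, 20))     # 0.6
```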
F1 Score
The F1 Score is the harmonic mean of Precision and Recall. It is expressed as a value between 0 and 1, where 0 indicates the worst performance and 1 indicates the best performance. In terms of confusion-matrix counts:

F1 = 2TP / (2TP + FP + FN)
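A short sketch, reusing the hypothetical counts above, showing that the count-based formula agrees with the harmonic mean of precision and recall:

```python
def f1_score(tp, fp, fn):
    # 2TP / (2TP + FP + FN), equivalent to the harmonic mean
    # of precision and recall
    return 2 * tp / (2 * tp + fp + fn)

tp, fp, fn = 30, 10, 20
p = tp / (tp + fp)  # precision = 0.75
r = tp / (tp + fn)  # recall    = 0.6
print(f1_score(tp, fp, fn))   # ~0.667
print(2 * p * r / (p + r))    # same value via harmonic mean
```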