Would it make sense to add False Positive Rate as a metric? #10391
Comments
I think we could have some documentation on the relationship between fpr and recall on the negative class (= specificity). We could even consider a specificity scorer. But I don't think another metric function is worthwhile.
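For concreteness, here is a minimal sketch of that relationship using the existing confusion_matrix and recall_score functions; the toy labels below are made up purely for illustration:

```python
import numpy as np
from sklearn.metrics import confusion_matrix, recall_score

y_true = np.array([0, 0, 0, 1, 1, 1, 1, 0])
y_pred = np.array([0, 1, 0, 1, 0, 1, 1, 0])

# Specificity is simply recall computed on the negative class.
specificity = recall_score(y_true, y_pred, pos_label=0)

# The false positive rate follows directly: FPR = FP / (FP + TN) = 1 - specificity.
tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
fpr = fp / (fp + tn)

assert np.isclose(fpr, 1 - specificity)
print(specificity, fpr)  # 0.75 0.25
```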
Sorry if I got this wrong, but I thought introducing a metric simply means adding a scorer in sklearn/metrics/classification.py. To your comment about specificity: adding such a specificity scorer would be fine too, as long as computing FPR stays straightforward. In the context of network-based anomaly detection, FPR is a very relevant and commonly used metric. Please let me know how we can move forward on this.
Scorers are slightly different from metric functions; they have a different and more consistent interface. Yes, I'd consider a specificity_score(y_true, y_pred, pos_label) metric that reuses either recall_score or confusion_matrix.
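Purely as a sketch of that idea (the specificity_score name and signature below are hypothetical, not an existing scikit-learn API), such a metric could delegate to recall_score and then be wrapped with make_scorer to obtain the scorer interface:

```python
from sklearn.metrics import make_scorer, recall_score


def specificity_score(y_true, y_pred, pos_label=1):
    """Recall of the negative class, i.e. TN / (TN + FP).

    Hypothetical helper: treats the single label other than ``pos_label``
    as the negative class and reuses recall_score on it.
    """
    labels = sorted(set(y_true) | set(y_pred))
    neg_labels = [label for label in labels if label != pos_label]
    if len(neg_labels) != 1:
        raise ValueError("specificity is only defined for binary problems")
    return recall_score(y_true, y_pred, pos_label=neg_labels[0])


# Unlike the metric function, a scorer takes an estimator, X and y;
# make_scorer provides that wrapping.
specificity_scorer = make_scorer(specificity_score)
```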
Sorry for taking so long. I finally got around to adding the specificity score. Cheers!
If that makes sense, I can create a pull request.