[MRG] Bugfix for precision_recall_curve when all labels are negative by varunagrawal · Pull Request #14621 · scikit-learn/scikit-learn


Closed

Conversation

varunagrawal
Contributor

Reference Issue

Fixes #8245

What does this implement/fix? Explain your changes.

When all the y_true labels are negative, precision_recall_curve returns nan for recall instead of 1. This happens because recall is computed by directly dividing the tps vector by tps[-1], which is 0 in that case.
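
For reference, a minimal reproduction of the reported behaviour (the inputs here are illustrative; per the linked issue, the returned recall array contains nan):

```python
import numpy as np
from sklearn.metrics import precision_recall_curve

# All ground-truth labels are negative, so there are no true positives.
y_true = np.array([0, 0, 0, 0])
y_score = np.array([0.1, 0.4, 0.35, 0.8])

precision, recall, thresholds = precision_recall_curve(y_true, y_score)
print(recall)  # per the issue report, this contains nan values
```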

This fix checks whether tps[-1] is 0 and, if so, sets recall to 1 directly, since there are no true positives or false negatives; otherwise recall is calculated as usual.
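
A minimal sketch of the guarded recall computation, assuming a cumulative true-positive count vector tps like the one built inside precision_recall_curve (the actual variable names and code path in sklearn/metrics/_ranking.py may differ):

```python
import numpy as np

def _recall_from_tps(tps):
    """Hypothetical helper illustrating the guard described above."""
    # tps[i] is the cumulative number of true positives at threshold i.
    # When there are no positive labels at all, tps[-1] == 0 and the
    # plain division tps / tps[-1] would yield nan; with no true
    # positives and no false negatives, recall is defined as 1 instead.
    if tps[-1] == 0:
        return np.ones_like(tps, dtype=float)
    return tps / tps[-1]
```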

Any other comments?

I had to update test_precision_recall_curve_toydata since this test was expecting the TrueDivide exception to be raised, which is no longer the case as a result of this fix. I added two test cases: one checking the output when all ground-truth labels are negative, and another when all ground-truth labels are positive, to ensure the precision calculation is accurate.
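
A hedged sketch of what the all-negative test case could look like (the actual assertions and expected values in test_precision_recall_curve_toydata may differ):

```python
import numpy as np
from sklearn.metrics import precision_recall_curve

def test_precision_recall_curve_all_negative_labels():
    # No positive labels: there are no true positives or false negatives,
    # so recall should be well defined (no nan) after the fix.
    y_true = np.array([0, 0, 0, 0])
    y_score = np.array([0.1, 0.4, 0.35, 0.8])
    precision, recall, thresholds = precision_recall_curve(y_true, y_score)
    assert not np.isnan(recall).any()
    assert not np.isnan(precision).any()
```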

@varunagrawal varunagrawal force-pushed the pr-curve-bugfix branch 2 times, most recently from de354dd to fa9c35a Compare August 9, 2019 23:35
@cmarmo
Contributor
cmarmo commented Jun 9, 2020

Hi @varunagrawal, are you still interested in finishing this PR? If yes, do you mind resolving conflicts? Thanks for your work!

@varunagrawal
Contributor Author

There are a couple of things that need to be addressed as per #8280.
I will need to re-verify all the tests and ensure the behavior is correct; after that I can go ahead and update this PR.

Thanks for your patience.


Successfully merging this pull request may close these issues.

average_precision_score does not return correct AP when all negative ground truth labels