FIX/ENH CheckingClassifier support parameters and sparse matrices #17259
Conversation
Is it possible to add a default?
Yes, I was doing it :). I just pushed. Now I have to check what the related failures are.
@adrinjalali @NicolasHug @thomasjpfan @jeremiedbb It would be cool if you could have a quick look at this. It will unlock #17233. It does not really change any behaviour; this is just an internal utility for the moment.
Thanks @glemaitre, only nits, LGTM
    y_pred = clf.predict(X)
    assert_array_equal(y_pred, np.zeros(y_pred.size, dtype=np.int))

    assert clf.score(X) == pytest.approx(0)
we should be able to check for strict equality here
Since they are floating-point, why would you use strict equality?
Because they're hard-coded ones and zeros, there's not going to be any floating-point issue there.
I still consider this a bad pattern to have in the code.
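As an aside, a minimal sketch of the trade-off being discussed here (my illustration, not code from the PR): strict equality happens to hold for hard-coded integer-valued floats, while the `pytest.approx` pattern also covers values produced by real floating-point arithmetic.

```python
import pytest

# Hard-coded zeros survive exactly, so strict equality would pass here:
score = 0.0
assert score == 0
assert score == pytest.approx(0)

# But once a value comes from floating-point arithmetic, strict equality
# can fail while the approx pattern keeps working:
assert 0.1 + 0.2 != 0.3
assert 0.1 + 0.2 == pytest.approx(0.3)
```

Both sides of the review thread are consistent with this: equality is safe for the hard-coded case, but `pytest.approx` is the more robust habit.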
    y_proba = clf.predict_proba(X)
    assert y_proba.shape == (150, 3)
    assert_allclose(y_proba[:, 0], 1)
Same here, and for most other checks.
The probabilities are also floating point.
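A small illustration (not from the PR) of why a tolerance-based check such as `assert_allclose` is the safer default for probabilities, even when the expected values look exact:

```python
from numpy.testing import assert_allclose

# Accumulated rounding error: ten 0.1s do not sum to exactly 1.0.
total = sum([0.1] * 10)

assert total != 1.0          # strict equality fails
assert_allclose(total, 1.0)  # tolerance-based check passes
```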
Co-authored-by: Nicolas Hug <contact@nicolas-hug.com>
LGTM. @NicolasHug feel free to merge if you think all your comments have been addressed.
Thanks @glemaitre
Reference Issues/PRs
Required for #17233
What does this implement/fix? Explain your changes.
- support sparse matrices (the code was using `len` instead of `_num_samples`)
- support parameters passed to the `check_*` functions
- add `predict_proba` and `decision_function` to be compliant with the classifier API
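A short sketch (my illustration, not code from the PR) of why `len` had to go for sparse input: `len()` raises on scipy sparse matrices, whereas reading `shape[0]`, which is effectively what `sklearn.utils.validation._num_samples` does for array-likes with a shape, works for both dense and sparse data.

```python
import numpy as np
from scipy.sparse import csr_matrix

X_dense = np.zeros((150, 4))
X_sparse = csr_matrix(X_dense)

# len() works for dense arrays...
assert len(X_dense) == 150

# ...but raises TypeError for sparse matrices
# ("sparse matrix length is ambiguous; use getnnz() or shape[0]").
try:
    len(X_sparse)
except TypeError:
    pass
else:
    raise AssertionError("expected TypeError for sparse input")

# shape[0] works uniformly for both representations.
assert X_dense.shape[0] == X_sparse.shape[0] == 150
```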