Apart from the hard-coded estimators in the common tests, we also set some parameters, ostensibly for faster tests.
But some estimators don't pass the tests with default parameters. I just ran into BaggingClassifier, whose predict_log_proba is inconsistent (its output is mostly -inf).
We should either make the tests more robust or the estimators.
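To make the symptom concrete, here is a minimal sketch (dataset and parameters are arbitrary, not taken from the common tests): with fully grown trees as base estimators, individual ensemble members assign probability 0 to some classes, so many entries of the aggregated log-probability come out as -inf.

```python
import numpy as np
from sklearn.datasets import load_iris
from sklearn.ensemble import BaggingClassifier

X, y = load_iris(return_X_y=True)

# Default base estimator is a fully grown decision tree, which outputs
# hard 0/1 class probabilities on most samples.
clf = BaggingClassifier(random_state=0).fit(X, y)

log_proba = clf.predict_log_proba(X)

# Fraction of entries that are -inf (log of a zero probability).
print(np.isneginf(log_proba).mean())
```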
On the other hand, not all tests pass when set_checking_parameters (or set_testing_parameters, or whatever it is called now) is run... I'm not sure I understand how the tests currently pass.
This is not currently possible since the default parameters make assumptions on the number of features, in particular in SelectKBest and GaussianRandomProjection. I'm not entirely sure if this is something we should fix or not.
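One concrete instance of that assumption (a sketch, not taken from the test suite; the data shape is arbitrary): GaussianRandomProjection defaults to n_components='auto', which derives a target dimension from the Johnson-Lindenstrauss bound, and fitting fails outright when that target exceeds the number of input features. SelectKBest's default k=10 similarly presumes at least 10 features.

```python
import numpy as np
from sklearn.random_projection import GaussianRandomProjection

rng = np.random.RandomState(0)
X = rng.rand(20, 4)  # far fewer features than the JL bound asks for

try:
    # Default n_components='auto' computes a target dimension much larger
    # than n_features=4, so fit rejects the data.
    GaussianRandomProjection().fit(X)
except ValueError as exc:
    print("default parameters fail:", exc)
```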
At this point, `_set_checking_parameters` is used to:

- Run tests faster.
- Adapt the parameters to fit the data that is generated for `check_estimator`. When we change a default, `_set_checking_parameters` has comments describing why.
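For reference, this is roughly how the common checks are invoked on a single estimator instance (a sketch; the exact set of checks that runs depends on the scikit-learn version and the estimator's tags, and the estimator shown here is just an arbitrary fast one):

```python
from sklearn.naive_bayes import GaussianNB
from sklearn.utils.estimator_checks import check_estimator

# Runs the battery of common checks against this instance's parameters;
# raises if any check fails.
check_estimator(GaussianNB())
print("all common checks passed")
```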
The estimator tags greatly expanded the test coverage in the common tests. With that in mind, I am closing this issue.