E521 ENH update code to check response values of an estimator by qbarthelemy · Pull Request #33126 · scikit-learn/scikit-learn · GitHub

Conversation

@qbarthelemy (Contributor)

Reference Issues/PRs

Fixes #33093. See also #33084.

What does this implement/fix? Explain your changes.

It implements the suggestions made in #33093 (comment).

AI usage disclosure

I did not use AI assistance.

Any other comments?

@github-actions github-actions bot removed the CI:Linter failure The linter CI is failing on this PR label Jan 24, 2026
@github-project-automation github-project-automation bot moved this to Todo in Labs Jan 26, 2026
@StefanieSenger StefanieSenger moved this from Todo to In progress in Labs Jan 26, 2026
@AnneBeyer (Contributor) left a comment:

Thank you for the PR @qbarthelemy!

A few remarks:

  1. To avoid confusion, please change the title to start with "ENH" or "FIX" instead of "DOC" (as this PR does not change the documentation).
  2. Please add a changelog entry like this.
  3. Your tests are currently failing. In general, we need two new tests here:
    • one that checks that no error is thrown for all types of estimators with their supported response_methods, and
    • one that checks that the expected error is thrown for unsupported response_methods.
      Note that response_method can be a string or a list of strings, both cases should be part of the tests.
      All existing tests should not be changed (or only adapted according to the new error message), to make sure nothing else is affected by your change.

Let me know if you need more help with designing the tests.

Co-authored-by: Anne Beyer <anne.beyer@mailbox.org>
@qbarthelemy qbarthelemy changed the title DOC update code to check response values of an estimator ENH update code to check response values of an estimator Jan 26, 2026
@github-actions github-actions bot added the CI:Linter failure The linter CI is failing on this PR label Jan 27, 2026
@github-actions github-actions bot removed the CI:Linter failure The linter CI is failing on this PR label Jan 27, 2026
@AnneBeyer (Contributor) left a comment:

Thank you for addressing the comments so fast! Here is another round of feedback:

- for multilabel classification, it is a 2d array of shape `(n_samples, n_outputs)`;
- for outlier detection, it is a 1d array of shape `(n_samples,)`;
- for regression, it is a 1d array of shape `(n_samples,)`.
- for outlier detection, a regressor or a clusterer, it is a 1d array of shape
Contributor:

Could you please also add corresponding test cases (for binary and multiclass) for a clusterer here:

```python
def test_response_values_output_shape_(
```

```python
["predict_proba", "decision_function", ["predict_proba", "predict"]],
["predict_proba", "decision_function", ["predict_proba", "decision_function"]],
)
def test_regressor_unsupported_response(pyplot, response_method):
```
Contributor:

Could you please extend this test (and rename it to `test_estimator_unsupported_response`) by adding a parametrization with a regressor, an outlier detector and a clusterer?

Contributor:

I would also be in favor of moving this test to `sklearn/utils/tests/test_response.py`, to have everything related to testing response values in the same file.


```python
@pytest.mark.parametrize(
"response_method",
["predict", "score", ["predict", "score"]],
```
Contributor:

Could you please extend the list with `["not-a-valid-method", "predict"]` (which should also result in `predict` being selected)? The logic below would also have to be adapted slightly, or you can change the parametrization into a tuple to explicitly specify the expected method as well.

```python
_get_response_values(my_estimator, X, response_method=response_method)
```


```python
@pytest.mark.parametrize("response_method", ["predict"])
```
Contributor:

Same comment as below

Co-authored-by: Anne Beyer <anne.beyer@mailbox.org>
@github-actions github-actions bot added the CI:Linter failure The linter CI is failing on this PR label Jan 28, 2026
@github-actions github-actions bot removed the CI:Linter failure The linter CI is failing on this PR label Jan 28, 2026


Development

Successfully merging this pull request may close these issues.

DOC error message when checking response values of an estimator not clear
