MNT Remove ellipsis from doctests by lesteve · Pull Request #31332 · scikit-learn/scikit-learn · GitHub

MNT Remove ellipsis from doctests #31332


Merged
merged 4 commits into from
May 7, 2025

Conversation

lesteve (Member) commented May 7, 2025

Since we started using scipy-doctest (#30496), most ellipses (i.e. ...) can be removed from doctests: scipy-doctest compares floating point outputs numerically, with a default rtol=1e-3.

A few cases where it is not so convenient to remove the ellipsis:

  • print statements, as seen in sklearn/linear_model/_huber.py
  • showing only the first few values of a very long array, as in doc/modules/partial_dependence.rst
  • a brittle doctest, see the comment below
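For illustration, here is a minimal sketch (not taken from the PR) of the kind of doctest this change targets. Under the standard library's `doctest` module, a truncated float needs a trailing ellipsis plus the `ELLIPSIS` flag; with scipy-doctest collecting the doctests, the expected value can instead be written as a plain rounded float, since outputs are compared numerically with the default rtol of 1e-3. The `mean` function below is hypothetical, invented just for this example.

```python
import doctest

def mean(values):
    """Arithmetic mean of a sequence (hypothetical example).

    Classic stdlib-doctest style: truncate the float and rely on ELLIPSIS.

    >>> mean([0.1, 0.2, 0.4])  # doctest: +ELLIPSIS
    0.23...
    """
    return sum(values) / len(values)

# With scipy-doctest the expected output could instead be written as a
# plain rounded float such as 0.2333, with no ellipsis, because outputs
# are compared numerically rather than textually.

# Run just this function's doctests with the stdlib runner:
runner = doctest.DocTestRunner()
for test in doctest.DocTestFinder().find(mean, "mean"):
    runner.run(test)
print(runner.failures)  # → 0
```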

github-actions bot commented May 7, 2025

✔️ Linting Passed

All linting checks passed. Your pull request is in excellent shape! ☀️

Generated for commit: c610526.

lesteve (Member, Author) commented May 7, 2025

It looks like the GradientBoostingRegressor doctest is quite brittle: it varies across machines and package versions by more than the default rtol=1e-3 used by scipy-doctest, so I am going back to an ellipsis. I got 0.445 locally, and the two CI builds don't agree either: one gives 0.434 and the other 0.440.

From the build logs:

Linux_Runs pylatest_conda_forge_mkl

___________ [doctest] sklearn.ensemble._gb.GradientBoostingRegressor ___________
2046 >>> from sklearn.model_selection import train_test_split
2047 >>> X, y = make_regression(random_state=0)
2048 >>> X_train, X_test, y_train, y_test = train_test_split(
2049 ...     X, y, random_state=0)
2050 >>> reg = GradientBoostingRegressor(random_state=0)
2051 >>> reg.fit(X_train, y_train)
2052 GradientBoostingRegressor(random_state=0)
2053 >>> reg.predict(X_test[1:2])
2054 array([-61.1])
2055 >>> reg.score(X_test, y_test)
Expected:
    0.445
Got:
    0.43437063706910894

Linux pylatest_pip_openblas_pandas

___________ [doctest] sklearn.ensemble._gb.GradientBoostingRegressor ___________
2046 >>> from sklearn.model_selection import train_test_split
2047 >>> X, y = make_regression(random_state=0)
2048 >>> X_train, X_test, y_train, y_test = train_test_split(
2049 ...     X, y, random_state=0)
2050 >>> reg = GradientBoostingRegressor(random_state=0)
2051 >>> reg.fit(X_train, y_train)
2052 GradientBoostingRegressor(random_state=0)
2053 >>> reg.predict(X_test[1:2])
2054 array([-61.1])
2055 >>> reg.score(X_test, y_test)
Expected:
    0.445
Got:
    0.440031029624667
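As a quick sanity check (not from the PR itself), `doctest`'s own `OutputChecker` confirms that a loose ellipsis expectation would match all three scores reported above, which is why falling back to an ellipsis makes the doctest robust. The expected string `0.4...` below is hypothetical; the actual value used in the PR may differ.

```python
import doctest

checker = doctest.OutputChecker()

# The three scores reported above: the local run and the two CI builds.
observed = ["0.445\n", "0.43437063706910894\n", "0.440031029624667\n"]

# A hypothetical ellipsis expectation loose enough to cover all of them:
expected = "0.4...\n"

# With the ELLIPSIS flag, "..." in the expected output matches any text.
ok = all(checker.check_output(expected, got, doctest.ELLIPSIS)
         for got in observed)
print(ok)  # → True
```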

Member
@adrinjalali left a comment


Oh my. Didn't expect you (or anybody 😅) to fix them all. Impressive! Thanks.

@adrinjalali merged commit f44350d into scikit-learn:main on May 7, 2025
36 checks passed

2 participants