Modify linked examples as per suggestions · scikit-learn/scikit-learn@d2e04e0 · GitHub

Commit d2e04e0

Modify linked examples as per suggestions
1 parent c3f7f60 commit d2e04e0

2 files changed: +11 −13 lines

sklearn/ensemble/_gb.py

Lines changed: 7 additions & 10 deletions
@@ -1133,13 +1133,6 @@ class GradientBoostingClassifier(ClassifierMixin, BaseGradientBoosting):
     classification is a special case where only a single regression tree is
     induced.
 
-    See :ref:`sphx_glr_auto_examples_ensemble_plot_gradient_boosting_oob.py` for
-    an example on using Out-of-Bag estimates to estimate the optimal number of
-    iterations for Gradient Boosting.
-    See
-    :ref:`sphx_glr_auto_examples_ensemble_plot_gradient_boosting_regularization.py`
-    for an example on using regularization with Gradient Boosting.
-
     :class:`~sklearn.ensemble.HistGradientBoostingClassifier` is a much faster variant
     of this algorithm for intermediate and large datasets (`n_samples >= 10_000`) and
     supports monotonic constraints.
@@ -1458,6 +1451,13 @@ class GradientBoostingClassifier(ClassifierMixin, BaseGradientBoosting):
     ...     max_depth=1, random_state=0).fit(X_train, y_train)
     >>> clf.score(X_test, y_test)
     0.913...
+
+    See :ref:`sphx_glr_auto_examples_ensemble_plot_gradient_boosting_oob.py` for
+    an example on using Out-of-Bag estimates to estimate the optimal number of
+    iterations for Gradient Boosting. For a detailed example of utilizing
+    regularization with
+    :class:`~sklearn.ensemble.GradientBoostingClassifier`, please refer to
+    :ref:`sphx_glr_auto_examples_ensemble_plot_gradient_boosting_regularization.py`.
     """
 
     _parameter_constraints: dict = {
@@ -1746,9 +1746,6 @@ class GradientBoostingRegressor(RegressorMixin, BaseGradientBoosting):
     each stage a regression tree is fit on the negative gradient of the given
     loss function.
 
-    See :ref:`sphx_glr_auto_examples_ensemble_plot_gradient_boosting_regularization.py`
-    for an example on using regularization with Gradient Boosting.
-
     :class:`~sklearn.ensemble.HistGradientBoostingRegressor` is a much faster variant
     of this algorithm for intermediate and large datasets (`n_samples >= 10_000`) and
     supports monotonic constraints.
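
The docstring text moved in this file references the OOB example; a minimal sketch of the idea it links to, assuming a toy `make_classification` dataset (the linked example uses a larger dataset and plots the full curves): with `subsample < 1.0` the estimator records per-iteration out-of-bag improvements in `oob_improvement_`, and the argmax of their cumulative sum gives a cheap estimate of the optimal number of iterations.

import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier

X, y = make_classification(n_samples=1000, random_state=0)

# subsample < 1.0 switches on stochastic gradient boosting, which is what
# makes the out-of-bag estimates (oob_improvement_) available; learning_rate
# and max_depth are the usual regularization knobs from the other example.
clf = GradientBoostingClassifier(
    n_estimators=200,
    learning_rate=0.1,
    max_depth=1,
    subsample=0.5,
    random_state=0,
).fit(X, y)

# The cumulative OOB improvement peaks near the optimal iteration count.
cumulative_oob = np.cumsum(clf.oob_improvement_)
best_n_iter = int(np.argmax(cumulative_oob)) + 1
print(f"estimated optimal number of iterations: {best_n_iter}")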

sklearn/ensemble/_hist_gradient_boosting/gradient_boosting.py

Lines changed: 4 additions & 3 deletions
@@ -1421,9 +1421,7 @@ class HistGradientBoostingRegressor(RegressorMixin, BaseHistGradientBoosting):
     were encountered for a given feature during training, then samples with
     missing values are mapped to whichever child has the most samples.
     See :ref:`sphx_glr_auto_examples_ensemble_plot_hgbt_regression.py` for a
-    usecase example of this feature. See
-    :ref:`sphx_glr_auto_examples_ensemble_plot_gradient_boosting_categorical.py`
-    for an example using histogram-based gradient boosting on categorical features.
+    usecase example of this feature.
 
     This implementation is inspired by
     `LightGBM <https://github.com/Microsoft/LightGBM>`_.
@@ -1666,6 +1664,9 @@ class HistGradientBoostingRegressor(RegressorMixin, BaseHistGradientBoosting):
     >>> est = HistGradientBoostingRegressor().fit(X, y)
     >>> est.score(X, y)
     0.92...
+
+    See :ref:`sphx_glr_auto_examples_ensemble_plot_gradient_boosting_categorical.py`
+    for an example using histogram-based gradient boosting on categorical features.
     """
 
     _parameter_constraints: dict = {
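
The reference moved here points at native categorical support; a hedged sketch of that feature, where the toy data, the column index passed to `categorical_features`, and the `monotonic_cst` values are all illustrative assumptions (the docstring itself only states that the estimator supports categorical features and monotonic constraints):

import numpy as np
from sklearn.ensemble import HistGradientBoostingRegressor

rng = np.random.RandomState(0)
n_samples = 500
# Column 0: a categorical feature encoded as small integers;
# column 1: an ordinary numeric feature.
X = np.column_stack([
    rng.randint(0, 4, size=n_samples),
    rng.normal(size=n_samples),
])
y = 3.0 * (X[:, 0] == 2) + X[:, 1] + rng.normal(scale=0.1, size=n_samples)

# categorical_features=[0] tells the trees to treat column 0 as unordered
# categories rather than ordered numbers; monotonic_cst=[0, 1] additionally
# constrains predictions to be non-decreasing in the numeric column
# (0 means no constraint on that feature).
est = HistGradientBoostingRegressor(
    categorical_features=[0],
    monotonic_cst=[0, 1],
    random_state=0,
).fit(X, y)
print(f"training R^2: {est.score(X, y):.3f}")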

0 commit comments