DOC replace deviance by loss in docstring of GradientBoosting (#25968) · scikit-learn/scikit-learn@c6e0b84 · GitHub

Commit c6e0b84

DOC replace deviance by loss in docstring of GradientBoosting (#25968)
1 parent 10dbc14 commit c6e0b84

File tree

1 file changed: +11 −11 lines changed

sklearn/ensemble/_gb.py

Lines changed: 11 additions & 11 deletions
@@ -622,7 +622,7 @@ def _fit_stages(
                 X_csr,
             )
 
-            # track deviance (= loss)
+            # track loss
             if do_oob:
                 self.train_score_[i] = loss_(
                     y[sample_mask],
@@ -1056,28 +1056,28 @@ class GradientBoostingClassifier(ClassifierMixin, BaseGradientBoosting):
         :func:`sklearn.inspection.permutation_importance` as an alternative.
 
     oob_improvement_ : ndarray of shape (n_estimators,)
-        The improvement in loss (= deviance) on the out-of-bag samples
+        The improvement in loss on the out-of-bag samples
         relative to the previous iteration.
         ``oob_improvement_[0]`` is the improvement in
         loss of the first stage over the ``init`` estimator.
         Only available if ``subsample < 1.0``.
 
     oob_scores_ : ndarray of shape (n_estimators,)
-        The full history of the loss (= deviance) values on the out-of-bag
+        The full history of the loss values on the out-of-bag
        samples. Only available if `subsample < 1.0`.
 
         .. versionadded:: 1.3
 
     oob_score_ : float
-        The last value of the loss (= deviance) on the out-of-bag samples. It is
+        The last value of the loss on the out-of-bag samples. It is
        the same as `oob_scores_[-1]`. Only available if `subsample < 1.0`.
 
         .. versionadded:: 1.3
 
     train_score_ : ndarray of shape (n_estimators,)
-        The i-th score ``train_score_[i]`` is the deviance (= loss) of the
+        The i-th score ``train_score_[i]`` is the loss of the
        model at iteration ``i`` on the in-bag sample.
-        If ``subsample == 1`` this is the deviance on the training data.
+        If ``subsample == 1`` this is the loss on the training data.
 
     init_ : estimator
         The estimator that provides the initial predictions.
@@ -1619,28 +1619,28 @@ class GradientBoostingRegressor(RegressorMixin, BaseGradientBoosting):
         :func:`sklearn.inspection.permutation_importance` as an alternative.
 
     oob_improvement_ : ndarray of shape (n_estimators,)
-        The improvement in loss (= deviance) on the out-of-bag samples
+        The improvement in loss on the out-of-bag samples
         relative to the previous iteration.
         ``oob_improvement_[0]`` is the improvement in
         loss of the first stage over the ``init`` estimator.
         Only available if ``subsample < 1.0``.
 
     oob_scores_ : ndarray of shape (n_estimators,)
-        The full history of the loss (= deviance) values on the out-of-bag
+        The full history of the loss values on the out-of-bag
        samples. Only available if `subsample < 1.0`.
 
         .. versionadded:: 1.3
 
     oob_score_ : float
-        The last value of the loss (= deviance) on the out-of-bag samples. It is
+        The last value of the loss on the out-of-bag samples. It is
        the same as `oob_scores_[-1]`. Only available if `subsample < 1.0`.
 
         .. versionadded:: 1.3
 
     train_score_ : ndarray of shape (n_estimators,)
-        The i-th score ``train_score_[i]`` is the deviance (= loss) of the
+        The i-th score ``train_score_[i]`` is the loss of the
        model at iteration ``i`` on the in-bag sample.
-        If ``subsample == 1`` this is the deviance on the training data.
+        If ``subsample == 1`` this is the loss on the training data.
 
     init_ : estimator
         The estimator that provides the initial predictions.
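For reference, a minimal usage sketch (not part of this commit) illustrating the attributes whose docstrings are updated above. It assumes scikit-learn >= 1.3, since `oob_scores_` and `oob_score_` carry a `versionadded:: 1.3` note; the dataset and hyperparameter values are arbitrary illustrative choices.

from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier

# Arbitrary toy data; any classification dataset would do.
X, y = make_classification(n_samples=500, random_state=0)

# subsample < 1.0 enables the out-of-bag attributes (oob_improvement_,
# oob_scores_, oob_score_); train_score_ is tracked in every case.
clf = GradientBoostingClassifier(n_estimators=50, subsample=0.8, random_state=0)
clf.fit(X, y)

print(clf.train_score_.shape)      # (50,): per-iteration loss on the in-bag sample
print(clf.oob_improvement_.shape)  # (50,): per-iteration improvement of the OOB loss
print(clf.oob_score_ == clf.oob_scores_[-1])  # True: oob_score_ is the last OOB loss value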

0 commit comments