DOC Remove deprecated loss function names from docstrings (#21314) · scikit-learn/scikit-learn@c23ff7e
Commit c23ff7e (1 parent: 7b489b5)

4 files changed: +9 −11 lines

sklearn/ensemble/_forest.py (+1 −2)

@@ -2061,8 +2061,7 @@ class ExtraTreesRegressor(ForestRegressor):
            The default value of ``n_estimators`` changed from 10 to 100
            in 0.22.
 
-    criterion : {"squared_error", "mse", "absolute_error", "mae"}, \
-        default="squared_error"
+    criterion : {"squared_error", "absolute_error"}, default="squared_error"
        The function to measure the quality of a split. Supported criteria
        are "squared_error" for the mean squared error, which is equal to
        variance reduction as feature selection criterion, and "absolute_error"

sklearn/ensemble/_gb.py (+2 −2)

@@ -1474,8 +1474,8 @@ class GradientBoostingRegressor(RegressorMixin, BaseGradientBoosting):
 
    Parameters
    ----------
-    loss : {'squared_error', 'ls', 'absolute_error', 'lad', 'huber', \
-        'quantile'}, default='squared_error'
+    loss : {'squared_error', 'absolute_error', 'huber', 'quantile'}, \
+        default='squared_error'
        Loss function to be optimized. 'squared_error' refers to the squared
        error for regression. 'absolute_error' refers to the absolute error of
        regression and is a robust loss function. 'huber' is a

sklearn/ensemble/_hist_gradient_boosting/gradient_boosting.py (+3 −3)

@@ -1021,10 +1021,10 @@ class HistGradientBoostingRegressor(RegressorMixin, BaseHistGradientBoosting):
 
    Parameters
    ----------
-    loss : {'squared_error', 'least_squares', 'absolute_error', \
-        'least_absolute_deviation', 'poisson'}, default='squared_error'
+    loss : {'squared_error', 'absolute_error', 'poisson'}, \
+        default='squared_error'
        The loss function to use in the boosting process. Note that the
-        "least squares" and "poisson" losses actually implement
+        "squared error" and "poisson" losses actually implement
        "half least squares loss" and "half poisson deviance" to simplify the
        computation of the gradient. Furthermore, "poisson" loss internally
        uses a log-link and requires ``y >= 0``.

sklearn/tree/_classes.py (+3 −4)

@@ -1038,8 +1038,8 @@ class DecisionTreeRegressor(RegressorMixin, BaseDecisionTree):
 
    Parameters
    ----------
-    criterion : {"squared_error", "mse", "friedman_mse", "absolute_error", \
-        "mae", "poisson"}, default="squared_error"
+    criterion : {"squared_error", "friedman_mse", "absolute_error", \
+        "poisson"}, default="squared_error"
        The function to measure the quality of a split. Supported criteria
        are "squared_error" for the mean squared error, which is equal to
        variance reduction as feature selection criterion and minimizes the L2

@@ -1630,8 +1630,7 @@ class ExtraTreeRegressor(DecisionTreeRegressor):
 
    Parameters
    ----------
-    criterion : {"squared_error", "mse", "friedman_mse", "mae"}, \
-        default="squared_error"
+    criterion : {"squared_error", "friedman_mse"}, default="squared_error"
        The function to measure the quality of a split. Supported criteria
        are "squared_error" for the mean squared error, which is equal to
        variance reduction as feature selection criterion and "mae" for the

0 commit comments