
Commit 405a5a0
DOC Fixed typo, added missing comma in plot_forest_hist_grad_boosting_comparison example (#26954)
1 parent dcf0510 commit 405a5a0

1 file changed (+2, -2 lines)

examples/ensemble/plot_forest_hist_grad_boosting_comparison.py

Lines changed: 2 additions & 2 deletions
@@ -12,7 +12,7 @@
 trees according to each estimator:
 
 - `n_estimators` controls the number of trees in the forest. It's a fixed number.
-- `max_iter` is the the maximum number of iterations in a gradient boosting
+- `max_iter` is the maximum number of iterations in a gradient boosting
   based model. The number of iterations corresponds to the number of trees for
   regression and binary classification problems. Furthermore, the actual number
   of trees required by the model depends on the stopping criteria.
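For readers comparing the two parameters described in this hunk, here is a minimal sketch (not part of the commit; the synthetic dataset and parameter values are illustrative) of how `n_estimators` is a fixed tree count while `max_iter` is only an upper bound when early stopping is enabled:

from sklearn.datasets import make_regression
from sklearn.ensemble import HistGradientBoostingRegressor, RandomForestRegressor

X, y = make_regression(n_samples=1000, n_features=10, random_state=0)

# `n_estimators`: the forest always builds exactly this many trees.
rf = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)
print(len(rf.estimators_))  # exactly 100 trees

# `max_iter`: an upper bound on boosting iterations; with early stopping,
# the fitted number of trees (n_iter_) can be smaller.
hgbt = HistGradientBoostingRegressor(
    max_iter=100, early_stopping=True, random_state=0
).fit(X, y)
print(hgbt.n_iter_)  # may be < 100 if the stopping criterion triggers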
@@ -210,7 +210,7 @@
 # models uniformly dominate the Random Forest models in the "test score vs
 # training speed trade-off" (the HGBDT curve should be on the top left of the RF
 # curve, without ever crossing). The "test score vs prediction speed" trade-off
-# can also be more disputed but it's most often favorable to HGBDT. It's always
+# can also be more disputed, but it's most often favorable to HGBDT. It's always
 # a good idea to check both kinds of model (with hyper-parameter tuning) and
 # compare their performance on your specific problem to determine which model is
 # the best fit but **HGBT almost always offers a more favorable speed-accuracy
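The "check both kinds of model" advice in this hunk can be probed with a quick timing sketch (illustrative only, assuming a synthetic regression dataset; not part of the commit and much simpler than the example's own benchmark):

from time import perf_counter

from sklearn.datasets import make_regression
from sklearn.ensemble import HistGradientBoostingRegressor, RandomForestRegressor
from sklearn.model_selection import train_test_split

X, y = make_regression(n_samples=10_000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

for model in (
    RandomForestRegressor(random_state=0),
    HistGradientBoostingRegressor(random_state=0),
):
    # Measure the "test score vs training speed" and
    # "test score vs prediction speed" trade-offs for each model.
    tic = perf_counter()
    model.fit(X_train, y_train)
    fit_time = perf_counter() - tic
    tic = perf_counter()
    score = model.score(X_test, y_test)  # R^2 on held-out data
    predict_time = perf_counter() - tic
    print(
        f"{model.__class__.__name__}: R^2={score:.3f}, "
        f"fit={fit_time:.2f}s, predict={predict_time:.2f}s"
    )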

Comments (0)