DOC fixes sphinx warning due to rendering issue (#27951) · punndcoder28/scikit-learn@0f8a777 · GitHub

Commit 0f8a777

DOC fixes sphinx warning due to rendering issue (scikit-learn#27951)
1 parent e430f6e commit 0f8a777

File tree

2 files changed: +12, -10 lines

examples/ensemble/plot_gradient_boosting_early_stopping.py

Lines changed: 11 additions & 10 deletions
@@ -112,13 +112,15 @@
     val_errors_with.append(mean_squared_error(y_val, val_pred))

 # %%
-# Visualize Comparision
-# ---------------------
+# Visualize Comparison
+# --------------------
 # It includes three subplots:
+#
 # 1. Plotting training errors of both models over boosting iterations.
 # 2. Plotting validation errors of both models over boosting iterations.
 # 3. Creating a bar chart to compare the training times and the estimator used
-# of the models with and without early stopping.
+#    of the models with and without early stopping.
+#

 fig, axes = plt.subplots(ncols=3, figsize=(12, 4))

@@ -170,11 +172,10 @@
 # practical benefits of early stopping:
 #
 # - **Preventing Overfitting:** We showed how the validation error stabilizes
-# or starts to increase after a certain point, indicating that the model
-# generalizes better to unseen data. This is achieved by stopping the training
-# process before overfitting occurs.
-#
+#   or starts to increase after a certain point, indicating that the model
+#   generalizes better to unseen data. This is achieved by stopping the training
+#   process before overfitting occurs.
 # - **Improving Training Efficiency:** We compared training times between
-# models with and without early stopping. The model with early stopping
-# achieved comparable accuracy while requiring significantly fewer
-# estimators, resulting in faster training.
+#   models with and without early stopping. The model with early stopping
+#   achieved comparable accuracy while requiring significantly fewer
+#   estimators, resulting in faster training.
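The warnings come from how sphinx-gallery converts these "# " comment blocks into reStructuredText: a section underline shorter than its title, a list that is not set off by blank comment lines, or a wrapped list item whose continuation is not indented can each produce a Sphinx warning. Below is a minimal sketch of the pattern the fix moves towards; the section title and list text are placeholders, not the example's actual wording.

# Hypothetical sphinx-gallery text block; the title and list items below are
# placeholders used only to illustrate the reST conventions.
import matplotlib.pyplot as plt

# %%
# Plot the results
# ----------------
# The underline is at least as long as the title (here, exactly matching), so
# Sphinx does not warn about a short section underline.
#
# 1. A bare "#" line before and after the list becomes a blank line in the
#    generated reST, which the list needs in order to render correctly.
# 2. Continuation lines of a numbered item are indented by three extra spaces
#    so they stay attached to the item above.
#

fig, ax = plt.subplots(figsize=(4, 3))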

examples/ensemble/plot_gradient_boosting_quantile.py

Lines changed: 1 addition & 0 deletions
@@ -191,6 +191,7 @@ def highlight_min(x):
 # outliers and overfits less.
 #
 # .. _calibration-section:
+#
 # Calibration of the confidence interval
 # --------------------------------------
 #
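In the second file, the fix inserts a blank comment line after the `.. _calibration-section:` target so the explicit markup block is closed before the section title that follows; without that blank line docutils typically reports something like "Explicit markup ends without a blank line". A minimal sketch of the pattern, using a hypothetical label and section name:

# %%
# .. _hypothetical-section-label:
#
# Hypothetical section title
# --------------------------
# The blank "#" line after the target ends the explicit markup block, and the
# section can then be cross-referenced with :ref:`hypothetical-section-label`.

Rebuilding the documentation with sphinx-build's -W option, which turns warnings into errors, is one way to confirm that this class of warning is gone.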

0 commit comments