112  112      val_errors_with.append(mean_squared_error(y_val, val_pred))
113  113
114  114  # %%
115       -# Visualize Comparision
116       -# ---------------------
     115  +# Visualize Comparison
     116  +# --------------------
117  117  # It includes three subplots:
     118  +#
118  119  # 1. Plotting training errors of both models over boosting iterations.
119  120  # 2. Plotting validation errors of both models over boosting iterations.
120  121  # 3. Creating a bar chart to compare the training times and the estimator used
121       -# of the models with and without early stopping.
     122  +#    of the models with and without early stopping.
     123  +#
122  124
123  125  fig, axes = plt.subplots(ncols=3, figsize=(12, 4))
124  126

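The three subplots listed in the hunk above can be sketched as follows. This is not the example's actual plotting code; the error lists and timing/estimator numbers are hypothetical stand-ins for the values the example computes during training.

```python
import matplotlib

matplotlib.use("Agg")  # non-interactive backend so the sketch runs headless
import matplotlib.pyplot as plt

# Hypothetical per-iteration MSE values (placeholders for the example's data).
train_errors_without = [1.0, 0.80, 0.70, 0.65, 0.63]
train_errors_with = [1.0, 0.80, 0.70, 0.66]
val_errors_without = [1.1, 0.95, 0.92, 0.93, 0.95]
val_errors_with = [1.1, 0.95, 0.92, 0.93]

fig, axes = plt.subplots(ncols=3, figsize=(12, 4))

# 1. Training errors of both models over boosting iterations.
axes[0].plot(train_errors_without, label="without early stopping")
axes[0].plot(train_errors_with, label="with early stopping")
axes[0].set(xlabel="Boosting iterations", ylabel="MSE (training)")
axes[0].legend()

# 2. Validation errors of both models over boosting iterations.
axes[1].plot(val_errors_without, label="without early stopping")
axes[1].plot(val_errors_with, label="with early stopping")
axes[1].set(xlabel="Boosting iterations", ylabel="MSE (validation)")
axes[1].legend()

# 3. Bar chart comparing training time and number of estimators used
#    (illustrative numbers only).
axes[2].bar(["time (s)", "estimators / 100"], [2.0, 1.0], label="without")
axes[2].bar(["time (s)", "estimators / 100"], [0.5, 0.4], label="with")
axes[2].legend()

fig.tight_layout()
```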
170  172  # practical benefits of early stopping:
171  173  #
172  174  # - **Preventing Overfitting:** We showed how the validation error stabilizes
173       -# or starts to increase after a certain point, indicating that the model
174       -# generalizes better to unseen data. This is achieved by stopping the training
175       -# process before overfitting occurs.
176       -#
     175  +#   or starts to increase after a certain point, indicating that the model
     176  +#   generalizes better to unseen data. This is achieved by stopping the training
     177  +#   process before overfitting occurs.
177  178  # - **Improving Training Efficiency:** We compared training times between
178       -# models with and without early stopping. The model with early stopping
179       -# achieved comparable accuracy while requiring significantly fewer
180       -# estimators, resulting in faster training.
     179  +#   models with and without early stopping. The model with early stopping
     180  +#   achieved comparable accuracy while requiring significantly fewer
     181  +#   estimators, resulting in faster training.
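The comparison summarized in the bullets above can be sketched with scikit-learn's built-in early stopping. This is a minimal illustration on a synthetic dataset, not the example's own setup; the dataset and hyperparameter values here are assumptions chosen only to make the sketch self-contained.

```python
from sklearn.datasets import make_regression
from sklearn.ensemble import GradientBoostingRegressor

X, y = make_regression(n_samples=500, n_features=10, noise=10.0, random_state=0)

# Without early stopping: always fits the full budget of 500 estimators.
full = GradientBoostingRegressor(n_estimators=500, random_state=0).fit(X, y)

# With early stopping: hold out 10% of the training data internally and stop
# once the validation score fails to improve for 10 consecutive iterations.
early = GradientBoostingRegressor(
    n_estimators=500,
    validation_fraction=0.1,
    n_iter_no_change=10,
    random_state=0,
).fit(X, y)

# n_estimators_ reports how many stages were actually fitted; with early
# stopping it is typically well below the n_estimators budget.
print(full.n_estimators_, early.n_estimators_)
```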