DOC fix deprecated log loss argument in user guide (#24753) · scikit-learn/scikit-learn@55b55af
1 parent ff33ffb commit 55b55af

File tree: 2 files changed, +5 −5 lines

doc/glossary.rst

Lines changed: 1 addition & 1 deletion

@@ -284,7 +284,7 @@ General Concepts
     >>> from sklearn.model_selection import GridSearchCV
     >>> from sklearn.linear_model import SGDClassifier
     >>> clf = GridSearchCV(SGDClassifier(),
-    ...                    param_grid={'loss': ['log', 'hinge']})
+    ...                    param_grid={'loss': ['log_loss', 'hinge']})
 
     This means that we can only check for duck-typed attributes after
     fitting, and that we must be careful to make :term:`meta-estimators`

doc/modules/linear_model.rst

Lines changed: 4 additions & 4 deletions

@@ -126,9 +126,9 @@ its ``coef_`` member::
     >>> reg.intercept_
     0.13636...
 
-Note that the class :class:`Ridge` allows for the user to specify that the
-solver be automatically chosen by setting `solver="auto"`. When this option
-is specified, :class:`Ridge` will choose between the `"lbfgs"`, `"cholesky"`,
+Note that the class :class:`Ridge` allows for the user to specify that the
+solver be automatically chosen by setting `solver="auto"`. When this option
+is specified, :class:`Ridge` will choose between the `"lbfgs"`, `"cholesky"`,
 and `"sparse_cg"` solvers. :class:`Ridge` will begin checking the conditions
 shown in the following table from top to bottom. If the condition is true,
 the corresponding solver is chosen.

@@ -1020,7 +1020,7 @@ The following table summarizes the penalties supported by each solver:
 The "lbfgs" solver is used by default for its robustness. For large datasets
 the "saga" solver is usually faster.
 For large dataset, you may also consider using :class:`SGDClassifier`
-with 'log' loss, which might be even faster but requires more tuning.
+with `loss="log_loss"`, which might be even faster but requires more tuning.
 
 .. topic:: Examples:

0 commit comments
