Revert "Update linear_model.rst (#12735)" · xhluca/scikit-learn@99c93a3 · GitHub
Author: Xing

Revert "Update linear_model.rst (scikit-learn#12735)"

This reverts commit 9a751f3.

1 parent 79ad33d · commit 99c93a3

File tree: 1 file changed (+3 additions, -13 deletions)


doc/modules/linear_model.rst

Lines changed: 3 additions & 13 deletions
@@ -786,12 +786,8 @@ non-smooth `penalty="l1"`. This is therefore the solver of choice for sparse
 multinomial logistic regression. It is also the only solver that supports
 `penalty="elasticnet"`.
 
-The "lbfgs" is an optimization algorithm that approximates the
-Broyden–Fletcher–Goldfarb–Shanno algorithm [8]_, which belongs to
-quasi-Newton methods. The "lbfgs" solver is recommended for use for
-small data-sets but for larger datasets its performance suffers. [9]_
-
-The following table summarizes the penalties supported by each solver:
+In a nutshell, the following table summarizes the penalties supported by
+each solver:
 
 +------------------------------+-----------------+-------------+-----------------+-----------+------------+
 |                              |                          **Solvers**                                     |
@@ -818,7 +814,7 @@ The following table summarizes the penalties supported by each solver:
 +------------------------------+-----------------+-------------+-----------------+-----------+------------+
 
 The "saga" solver is often the best choice but requires scaling. The
-"lbfgs" solver is used by default for historical reasons.
+"liblinear" solver is used by default for historical reasons.
 
 For large dataset, you may also consider using :class:`SGDClassifier`
 with 'log' loss.
@@ -870,12 +866,6 @@ to warm-starting (see :term:`Glossary <warm_start>`).
 
 .. [7] Aaron Defazio, Francis Bach, Simon Lacoste-Julien: `SAGA: A Fast Incremental Gradient Method With Support for Non-Strongly Convex Composite Objectives. <https://arxiv.org/abs/1407.0202>`_
 
-.. [8] https://en.wikipedia.org/wiki/Broyden%E2%80%93Fletcher%E2%80%93Goldfarb%E2%80%93Shanno_algorithm
-
-.. [9] `"Performance Evaluation of Lbfgs vs other solvers"
-       <http://www.fuzihao.org/blog/2016/01/16/Comparison-of-Gradient-Descent-Stochastic-Gradient-Descent-and-L-BFGS/>`_
-
 Stochastic Gradient Descent - SGD
 =================================
 