DOC : fix rho=1 is L1 penalty #1139 · seckcoder/scikit-learn@121cb20 · GitHub

Commit 121cb20

agramfort authored and GaelVaroquaux committed
DOC : fix rho=1 is L1 penalty scikit-learn#1139
1 parent 9e16ea7 commit 121cb20

File tree

1 file changed: +3 −3 lines

sklearn/linear_model/coordinate_descent.py

Lines changed: 3 additions & 3 deletions
@@ -60,7 +60,7 @@ class ElasticNet(LinearModel, RegressorMixin):
 
     rho : float
         The ElasticNet mixing parameter, with 0 < rho <= 1. For rho = 0
-        the penalty is an L1 penalty. For rho = 1 it is an L2 penalty.
+        the penalty is an L2 penalty. For rho = 1 it is an L1 penalty.
         For 0 < rho < 1, the penalty is a combination of L1 and L2
 
     fit_intercept: bool
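
To make the corrected convention concrete, here is a minimal sketch (not part of the commit; the helper name and example vector are invented for illustration) of the penalty term the docstring describes, showing that rho = 1 leaves only the L1 term and rho = 0 only the L2 term:

```python
import numpy as np

def elastic_net_penalty(w, alpha=1.0, rho=0.5):
    """Penalty term of the ElasticNet objective for a coefficient vector w.

    Follows the mixing convention in the docstring above:
    rho = 1 -> pure L1 penalty, rho = 0 -> pure L2 penalty.
    """
    l1 = np.sum(np.abs(w))    # L1 norm of w
    l2 = 0.5 * np.dot(w, w)   # (1/2) * squared L2 norm of w
    return alpha * (rho * l1 + (1 - rho) * l2)

w = np.array([0.5, -2.0, 0.0, 1.5])
print(elastic_net_penalty(w, rho=1.0))  # L1 term only: 4.0
print(elastic_net_penalty(w, rho=0.0))  # L2 term only: 3.25
print(elastic_net_penalty(w, rho=0.5))  # equal blend of the two
```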
@@ -860,7 +860,7 @@ class ElasticNetCV(LinearModelCV, RegressorMixin):
     rho : float, optional
         float between 0 and 1 passed to ElasticNet (scaling between
         l1 and l2 penalties). For rho = 0
-        the penalty is an L1 penalty. For rho = 1 it is an L2 penalty.
+        the penalty is an L2 penalty. For rho = 1 it is an L1 penalty.
         For 0 < rho < 1, the penalty is a combination of L1 and L2
         This parameter can be a list, in which case the different
         values are tested by cross-validation and the one giving the best
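
The ElasticNetCV docstring above also notes that rho can be a list of candidate values tested by cross-validation. A small usage sketch under that reading (synthetic data; the rho keyword matches the API at the time of this commit and was later renamed l1_ratio):

```python
import numpy as np
from sklearn.linear_model import ElasticNetCV

# Synthetic data, only to exercise the API.
rng = np.random.RandomState(0)
X = rng.randn(50, 10)
y = X[:, 0] - 2 * X[:, 1] + 0.1 * rng.randn(50)

# A list of mixing values: each is tried and the one giving the best
# cross-validation score is kept, as the docstring explains.
model = ElasticNetCV(rho=[0.1, 0.5, 0.9, 1.0], cv=3)
model.fit(X, y)
print(model.alpha_)  # regularization strength selected by CV
print(model.coef_)   # coefficients refit with the selected parameters
```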
@@ -1001,7 +1001,7 @@ class MultiTaskElasticNet(Lasso):
 
     rho : float
         The ElasticNet mixing parameter, with 0 < rho <= 1. For rho = 0
-        the penalty is an L1/L2 penalty. For rho = 1 it is an L2 penalty.
+        the penalty is an L2 penalty. For rho = 1 it is an L1/L2 penalty.
         For 0 < rho < 1, the penalty is a combination of L1/L2 and L2
 
     fit_intercept : boolean
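
For the multi-task variant, the L1/L2 term is a sum of per-feature L2 norms taken across tasks, which is what encourages whole features to be zeroed out jointly. A rough sketch of that penalty (the function name, matrix orientation, and example values are chosen for illustration only):

```python
import numpy as np

def multi_task_enet_penalty(W, alpha=1.0, rho=0.5):
    """Penalty of the MultiTaskElasticNet objective for a coefficient
    matrix W assumed here to have shape (n_features, n_tasks).

    rho = 1 -> pure L1/L2 penalty (sum of per-feature L2 norms),
    rho = 0 -> pure squared-Frobenius (L2) penalty.
    """
    l1_l2 = np.sum(np.sqrt(np.sum(W ** 2, axis=1)))  # sum over features of ||W[i, :]||_2
    frob2 = 0.5 * np.sum(W ** 2)                     # (1/2) * ||W||_Fro^2
    return alpha * (rho * l1_l2 + (1 - rho) * frob2)

W = np.array([[1.0, -1.0],   # feature used by both tasks
              [0.0, 0.0],    # feature unused by either task
              [2.0, 0.5]])
print(multi_task_enet_penalty(W, rho=1.0))  # L1/L2 term only
print(multi_task_enet_penalty(W, rho=0.0))  # Frobenius term only
```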
