DOC remove default parameter values for private function in logistic … · scikit-learn/scikit-learn@c36ab3a · GitHub
Commit c36ab3a

DOC remove default parameter values for private function in logistic module (#27787)
1 parent a672d6d commit c36ab3a

File tree: 1 file changed, +18 −21 lines


sklearn/linear_model/_logistic.py

18 additions, 21 deletions

@@ -624,34 +624,32 @@ def _log_reg_scoring_path(
     test : list of indices
         The indices of the test set.
 
-    pos_class : int, default=None
+    pos_class : int
         The class with respect to which we perform a one-vs-all fit.
         If None, then it is assumed that the given problem is binary.
 
-    Cs : int or list of floats, default=10
+    Cs : int or list of floats
         Each of the values in Cs describes the inverse of
         regularization strength. If Cs is as an int, then a grid of Cs
         values are chosen in a logarithmic scale between 1e-4 and 1e4.
-        If not provided, then a fixed set of values for Cs are used.
 
-    scoring : callable, default=None
+    scoring : callable
         A string (see model evaluation documentation) or
         a scorer callable object / function with signature
         ``scorer(estimator, X, y)``. For a list of scoring functions
-        that can be used, look at :mod:`sklearn.metrics`. The
-        default scoring option used is accuracy_score.
+        that can be used, look at :mod:`sklearn.metrics`.
 
-    fit_intercept : bool, default=False
+    fit_intercept : bool
         If False, then the bias term is set to zero. Else the last
         term of each coef_ gives us the intercept.
 
-    max_iter : int, default=100
+    max_iter : int
         Maximum number of iterations for the solver.
 
-    tol : float, default=1e-4
+    tol : float
         Tolerance for stopping criteria.
 
-    class_weight : dict or 'balanced', default=None
+    class_weight : dict or 'balanced'
         Weights associated with classes in the form ``{class_label: weight}``.
         If not given, all classes are supposed to have weight one.
 
@@ -662,25 +660,24 @@ def _log_reg_scoring_path(
         Note that these weights will be multiplied with sample_weight (passed
         through the fit method) if sample_weight is specified.
 
-    verbose : int, default=0
+    verbose : int
         For the liblinear and lbfgs solvers set verbose to any positive
         number for verbosity.
 
-    solver : {'lbfgs', 'liblinear', 'newton-cg', 'newton-cholesky', 'sag', 'saga'}, \
-            default='lbfgs'
+    solver : {'lbfgs', 'liblinear', 'newton-cg', 'newton-cholesky', 'sag', 'saga'}
         Decides which solver to use.
 
-    penalty : {'l1', 'l2', 'elasticnet'}, default='l2'
+    penalty : {'l1', 'l2', 'elasticnet'}
         Used to specify the norm used in the penalization. The 'newton-cg',
         'sag' and 'lbfgs' solvers support only l2 penalties. 'elasticnet' is
         only supported by the 'saga' solver.
 
-    dual : bool, default=False
+    dual : bool
         Dual or primal formulation. Dual formulation is only implemented for
         l2 penalty with liblinear solver. Prefer dual=False when
         n_samples > n_features.
 
-    intercept_scaling : float, default=1.
+    intercept_scaling : float
         Useful only when the solver 'liblinear' is used
         and self.fit_intercept is set to True. In this case, x becomes
         [x, self.intercept_scaling],
 
@@ -692,26 +689,26 @@ def _log_reg_scoring_path(
         To lessen the effect of regularization on synthetic feature weight
         (and therefore on the intercept) intercept_scaling has to be increased.
 
-    multi_class : {'auto', 'ovr', 'multinomial'}, default='auto'
+    multi_class : {'auto', 'ovr', 'multinomial'}
         If the option chosen is 'ovr', then a binary problem is fit for each
         label. For 'multinomial' the loss minimised is the multinomial loss fit
         across the entire probability distribution, *even when the data is
         binary*. 'multinomial' is unavailable when solver='liblinear'.
 
-    random_state : int, RandomState instance, default=None
+    random_state : int, RandomState instance
         Used when ``solver`` == 'sag', 'saga' or 'liblinear' to shuffle the
         data. See :term:`Glossary <random_state>` for details.
 
-    max_squared_sum : float, default=None
+    max_squared_sum : float
         Maximum squared sum of X over samples. Used only in SAG solver.
         If None, it will be computed, going through all the samples.
         The value should be precomputed to speed up cross validation.
 
-    sample_weight : array-like of shape(n_samples,), default=None
+    sample_weight : array-like of shape(n_samples,)
         Array of weights that are assigned to individual samples.
         If not provided, then each sample is given unit weight.
 
-    l1_ratio : float, default=None
+    l1_ratio : float
         The Elastic-Net mixing parameter, with ``0 <= l1_ratio <= 1``. Only
         used if ``penalty='elasticnet'``. Setting ``l1_ratio=0`` is equivalent
         to using ``penalty='l2'``, while setting ``l1_ratio=1`` is equivalent

0 commit comments