DOC Minor fixes for parameter documentation in ridge by alexitkes · Pull Request #14453 · scikit-learn/scikit-learn · GitHub

Merged
54 changes: 27 additions & 27 deletions sklearn/linear_model/ridge.py
@@ -262,7 +262,7 @@ def ridge_regression(X, y, alpha, sample_weight=None, solver='auto',
assumed to be specific to the targets. Hence they must correspond in
number.

-sample_weight : float or numpy array of shape [n_samples]
+sample_weight : float or numpy array of shape (n_samples,), default=None
Individual weights for each sample. If sample_weight is not None and
solver='auto', the solver will be set to 'cholesky'.
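
As a sketch of the behavior above (hypothetical data; the weight array `w` is made up), per-sample weights can be passed straight to `ridge_regression`:

```python
import numpy as np
from sklearn.linear_model import ridge_regression

rng = np.random.RandomState(0)
X = rng.randn(20, 3)
y = X @ np.array([1.0, 2.0, 0.5])

# Per-sample weights: with sample_weight set and solver='auto',
# the docstring says the solver falls back to 'cholesky'.
w = np.linspace(0.5, 1.5, 20)
coef = ridge_regression(X, y, alpha=1.0, sample_weight=w)
print(coef.shape)
```

One coefficient per feature is returned, shape `(n_features,)`.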

@@ -314,10 +314,10 @@ def ridge_regression(X, y, alpha, sample_weight=None, solver='auto',
by scipy.sparse.linalg. For 'sag' and 'saga' solvers, the default value is
1000.

-tol : float
+tol : float, default=1e-3
Precision of the solution.

-verbose : int
+verbose : int, default=0
Verbosity level. Setting verbose > 0 will display additional
information depending on the solver used.

@@ -328,21 +328,21 @@ def ridge_regression(X, y, alpha, sample_weight=None, solver='auto',
generator; If None, the random number generator is the RandomState
instance used by `np.random`. Used when ``solver`` == 'sag'.

-return_n_iter : boolean, default False
+return_n_iter : bool, default=False
If True, the method also returns `n_iter`, the actual number of
iterations performed by the solver.

.. versionadded:: 0.17

-return_intercept : boolean, default False
+return_intercept : bool, default=False
If True and if X is sparse, the method also returns the intercept,
and the solver is automatically changed to 'sag'. This is only a
temporary fix for fitting the intercept with sparse data. For dense
data, use sklearn.linear_model._preprocess_data before your regression.

.. versionadded:: 0.17

-check_input : boolean, default True
+check_input : bool, default=True
If False, the input arrays X and y will not be checked.

.. versionadded:: 0.21
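
The sparse-intercept path described above can be sketched as follows (random sparse data; assumes the automatic switch to the 'sag' solver that the docstring describes):

```python
import numpy as np
from scipy import sparse
from sklearn.linear_model import ridge_regression

rng = np.random.RandomState(0)
X = sparse.csr_matrix(rng.randn(30, 4))
y = rng.randn(30)

# With sparse X and return_intercept=True, the solver is changed
# so the intercept can still be fitted.
coef, intercept = ridge_regression(X, y, alpha=1.0, return_intercept=True)
print(coef.shape)
```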
@@ -619,7 +619,7 @@ class Ridge(MultiOutputMixin, RegressorMixin, _BaseRidge):

Parameters
----------
-alpha : {float, array-like}, shape (n_targets)
+alpha : {float, array-like of shape (n_targets,)}, default=1.0
Regularization strength; must be a positive float. Regularization
improves the conditioning of the problem and reduces the variance of
the estimates. Larger values specify stronger regularization.
@@ -628,28 +628,28 @@ class Ridge(MultiOutputMixin, RegressorMixin, _BaseRidge):
assumed to be specific to the targets. Hence they must correspond in
number.
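
A quick sketch of the per-target form (synthetic two-target data, all names made up):

```python
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.RandomState(0)
X = rng.randn(50, 4)
Y = X @ rng.randn(4, 2)  # two regression targets

# One penalty per target: the second target is shrunk harder.
model = Ridge(alpha=[0.5, 2.0]).fit(X, Y)
print(model.coef_.shape)  # one row of coefficients per target
```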

-fit_intercept : bool, default True
+fit_intercept : bool, default=True
Whether to calculate the intercept for this model. If set
to false, no intercept will be used in calculations
(i.e. data is expected to be centered).

-normalize : boolean, optional, default False
+normalize : bool, default=False
This parameter is ignored when ``fit_intercept`` is set to False.
If True, the regressors X will be normalized before regression by
subtracting the mean and dividing by the l2-norm.
If you wish to standardize, please use
:class:`sklearn.preprocessing.StandardScaler` before calling ``fit``
on an estimator with ``normalize=False``.
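
The recommended standardization can be sketched with a pipeline (synthetic data with deliberately mismatched feature scales; `normalize` is simply left at its default):

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import Ridge

rng = np.random.RandomState(0)
X = rng.randn(40, 3) * np.array([1.0, 10.0, 100.0])
y = rng.randn(40)

# Standardize (zero mean, unit variance) before the ridge fit
# instead of relying on the normalize flag.
pipe = make_pipeline(StandardScaler(), Ridge(alpha=1.0)).fit(X, y)
print(pipe.predict(X).shape)
```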

-copy_X : boolean, optional, default True
+copy_X : bool, default=True
If True, X will be copied; else, it may be overwritten.

max_iter : int, optional
Maximum number of iterations for conjugate gradient solver.
For 'sparse_cg' and 'lsqr' solvers, the default value is determined
by scipy.sparse.linalg. For 'sag' solver, the default value is 1000.

-tol : float
+tol : float, default=1e-3
Precision of the solution.

solver : {'auto', 'svd', 'cholesky', 'lsqr', 'sparse_cg', 'sag', 'saga'}
@@ -771,34 +771,34 @@ class RidgeClassifier(LinearClassifierMixin, _BaseRidge):

Parameters
----------
-alpha : float
+alpha : float, default=1.0
Regularization strength; must be a positive float. Regularization
improves the conditioning of the problem and reduces the variance of
the estimates. Larger values specify stronger regularization.
Alpha corresponds to ``C^-1`` in other linear models such as
LogisticRegression or LinearSVC.
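
The direction of the penalty can be sketched as follows (synthetic data; the names `weak` and `strong` are made up). Larger `alpha` means stronger regularization, the opposite direction of `C`:

```python
import numpy as np
from sklearn.linear_model import RidgeClassifier

rng = np.random.RandomState(0)
X = rng.randn(60, 3)
y = (X[:, 0] - X[:, 1] > 0).astype(int)

# alpha ~ C^-1: alpha=100.0 regularizes far more than alpha=0.01.
weak = RidgeClassifier(alpha=0.01).fit(X, y)
strong = RidgeClassifier(alpha=100.0).fit(X, y)
print(weak.score(X, y), strong.score(X, y))
```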

-fit_intercept : boolean
+fit_intercept : bool, default=True
Whether to calculate the intercept for this model. If set to false, no
intercept will be used in calculations (e.g. data is expected to be
already centered).

-normalize : boolean, optional, default False
+normalize : bool, default=False
This parameter is ignored when ``fit_intercept`` is set to False.
If True, the regressors X will be normalized before regression by
subtracting the mean and dividing by the l2-norm.
If you wish to standardize, please use
:class:`sklearn.preprocessing.StandardScaler` before calling ``fit``
on an estimator with ``normalize=False``.

-copy_X : boolean, optional, default True
+copy_X : bool, default=True
If True, X will be copied; else, it may be overwritten.

max_iter : int, optional
Maximum number of iterations for conjugate gradient solver.
The default value is determined by scipy.sparse.linalg.

-tol : float
+tol : float, default=1e-3
Precision of the solution.

class_weight : dict or 'balanced', optional
@@ -843,7 +843,7 @@ class RidgeClassifier(LinearClassifierMixin, _BaseRidge):
.. versionadded:: 0.19
SAGA solver.

-random_state : int, RandomState instance or None, optional, default None
+random_state : int, RandomState instance or None, default=None
The seed of the pseudo random number generator to use when shuffling
the data. If int, random_state is the seed used by the random number
generator; If RandomState instance, random_state is the random number
@@ -909,7 +909,7 @@ def fit(self, X, y, sample_weight=None):
y : array-like of shape (n_samples,)
Target values

-sample_weight : float or numpy array of shape (n_samples,)
+sample_weight : {float, array-like of shape (n_samples,)}, default=None
Sample weight.

.. versionadded:: 0.17
@@ -1590,7 +1590,7 @@ class RidgeCV(MultiOutputMixin, RegressorMixin, _BaseRidgeCV):

Parameters
----------
-alphas : numpy array of shape [n_alphas]
+alphas : numpy array of shape (n_alphas,), default=(0.1, 1.0, 10.0)
Array of alpha values to try.
Regularization strength; must be a positive float. Regularization
improves the conditioning of the problem and reduces the variance of
@@ -1599,20 +1599,20 @@ class RidgeCV(MultiOutputMixin, RegressorMixin, _BaseRidgeCV):
LogisticRegression or LinearSVC.
If using generalized cross-validation, alphas must be positive.
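
A minimal sketch of the cross-validated search over those candidates (synthetic data):

```python
import numpy as np
from sklearn.linear_model import RidgeCV

rng = np.random.RandomState(0)
X = rng.randn(60, 3)
y = X @ np.array([1.5, -2.0, 0.0]) + 0.1 * rng.randn(60)

# Efficient leave-one-out (generalized) CV over the candidate alphas;
# the winner is stored in alpha_ after fit.
reg = RidgeCV(alphas=(0.1, 1.0, 10.0)).fit(X, y)
print(reg.alpha_)
```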

-fit_intercept : bool, default True
+fit_intercept : bool, default=True
Whether to calculate the intercept for this model. If set
to false, no intercept will be used in calculations
(i.e. data is expected to be centered).

-normalize : boolean, optional, default False
+normalize : bool, default=False
This parameter is ignored when ``fit_intercept`` is set to False.
If True, the regressors X will be normalized before regression by
subtracting the mean and dividing by the l2-norm.
If you wish to standardize, please use
:class:`sklearn.preprocessing.StandardScaler` before calling ``fit``
on an estimator with ``normalize=False``.

-scoring : string, callable or None, optional, default: None
+scoring : string, callable or None, default=None
A string (see model evaluation documentation) or
a scorer callable object / function with signature
``scorer(estimator, X, y)``.
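
A callable with that signature can be sketched like this (the `neg_mae` helper and the data are made up, not part of scikit-learn):

```python
import numpy as np
from sklearn.linear_model import RidgeCV
from sklearn.metrics import mean_absolute_error

def neg_mae(estimator, X, y):
    # scorer(estimator, X, y): higher is better, hence the minus sign
    return -mean_absolute_error(y, estimator.predict(X))

rng = np.random.RandomState(0)
X = rng.randn(50, 2)
y = X[:, 0] + 0.05 * rng.randn(50)

reg = RidgeCV(alphas=(0.01, 0.1, 1.0), scoring=neg_mae, cv=5).fit(X, y)
print(reg.alpha_)
```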
@@ -1704,28 +1704,28 @@ class RidgeClassifierCV(LinearClassifierMixin, _BaseRidgeCV):

Parameters
----------
-alphas : numpy array of shape [n_alphas]
+alphas : numpy array of shape (n_alphas,), default=(0.1, 1.0, 10.0)
Array of alpha values to try.
Regularization strength; must be a positive float. Regularization
improves the conditioning of the problem and reduces the variance of
the estimates. Larger values specify stronger regularization.
Alpha corresponds to ``C^-1`` in other linear models such as
LogisticRegression or LinearSVC.
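
As a sketch (synthetic binary problem; data made up):

```python
import numpy as np
from sklearn.linear_model import RidgeClassifierCV

rng = np.random.RandomState(0)
X = rng.randn(60, 3)
y = (X[:, 0] + X[:, 1] > 0).astype(int)

# The best-scoring candidate alpha is stored in alpha_ after fit.
clf = RidgeClassifierCV(alphas=np.array([0.1, 1.0, 10.0])).fit(X, y)
print(clf.alpha_)
```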

-fit_intercept : boolean
+fit_intercept : bool, default=True
Whether to calculate the intercept for this model. If set
to false, no intercept will be used in calculations
(i.e. data is expected to be centered).

-normalize : boolean, optional, default False
+normalize : bool, default=False
This parameter is ignored when ``fit_intercept`` is set to False.
If True, the regressors X will be normalized before regression by
subtracting the mean and dividing by the l2-norm.
If you wish to standardize, please use
:class:`sklearn.preprocessing.StandardScaler` before calling ``fit``
on an estimator with ``normalize=False``.

-scoring : string, callable or None, optional, default: None
+scoring : string, callable or None, default=None
A string (see model evaluation documentation) or
a scorer callable object / function with signature
``scorer(estimator, X, y)``.
@@ -1823,7 +1823,7 @@ def fit(self, X, y, sample_weight=None):
y : array-like, shape (n_samples,)
Target values. Will be cast to X's dtype if necessary

-sample_weight : float or numpy array of shape (n_samples,)
+sample_weight : {float, array-like of shape (n_samples,)}, default=None
Sample weight.

Returns