Fix attribute mismatches in documentation strings. by alexitkes · Pull Request #14320 · scikit-learn/scikit-learn · GitHub

Fix attribute mismatches in documentation strings. #14320

Merged · 13 commits · Jul 14, 2019
7 changes: 7 additions & 0 deletions sklearn/decomposition/fastica_.py
@@ -436,11 +436,18 @@ def my_g(x):
mixing_ : array, shape (n_features, n_components)
The mixing matrix.

mean_ : array, shape (n_features,)
The mean over features. Only set if `self.whiten` is True.

n_iter_ : int
If the algorithm is "deflation", n_iter is the maximum number of
iterations run across all components. Otherwise it is just the
number of iterations taken to converge.

whitening_ : array, shape (n_components, n_features)
Only set if `whiten` is True. This is the pre-whitening matrix
that projects data onto the first `n_components` principal components.

Examples
--------
>>> from sklearn.datasets import load_digits
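As a quick, separate illustration of the FastICA attributes documented above (not part of the diff), fitting on a small random matrix shows the shapes of `mean_` and `whitening_` and the `n_iter_` counter; the data below is made up purely for illustration:

```python
# Illustrative sketch only -- tiny random dataset, not taken from the PR or its tests.
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.RandomState(0)
X = rng.normal(size=(100, 4))            # 100 samples, 4 features

ica = FastICA(n_components=2, random_state=0).fit(X)   # whitening is on by default

print(ica.mean_.shape)        # (4,)   mean over features, set because whitening is enabled
print(ica.whitening_.shape)   # (2, 4) the pre-whitening matrix described in the docstring
print(ica.n_iter_)            # iterations to converge (max across components for "deflation")
```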
5 changes: 5 additions & 0 deletions sklearn/decomposition/nmf.py
@@ -1192,6 +1192,11 @@ class NMF(BaseEstimator, TransformerMixin):
components_ : array, [n_components, n_features]
Factorization matrix, sometimes called 'dictionary'.

n_components_ : integer
The number of components. It is the same as the `n_components` parameter
if it was given. Otherwise, it will be the same as the number of
features.

reconstruction_err_ : number
Frobenius norm of the matrix difference, or beta-divergence, between
the training data ``X`` and the reconstructed data ``WH`` from
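The new `n_components_` attribute is easy to sanity-check on a throwaway non-negative matrix (a sketch, not taken from the PR):

```python
# Minimal sketch: n_components_ equals the n_components parameter when it is given;
# when it is left unset, it falls back to the number of features (per the docstring above).
# Random non-negative data, for illustration only.
import numpy as np
from sklearn.decomposition import NMF

X = np.abs(np.random.RandomState(0).normal(size=(10, 6)))   # 10 samples, 6 features

print(NMF(n_components=3, init='random', random_state=0).fit(X).n_components_)  # 3
print(NMF(init='random', random_state=0).fit(X).n_components_)                  # 6 (n_features)
```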
6 changes: 6 additions & 0 deletions sklearn/decomposition/pca.py
@@ -237,6 +237,12 @@ class PCA(_BasePCA):
n_components, or the lesser value of n_features and n_samples
if n_components is None.

n_features_ : int
Number of features in the training data.

n_samples_ : int
Number of samples in the training data.

noise_variance_ : float
The estimated noise covariance following the Probabilistic PCA model
from Tipping and Bishop 1999. See "Pattern Recognition and
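For the two attributes added here, a toy fit makes the meaning concrete (illustrative only, using the attribute names exactly as this docstring spells them):

```python
# Sketch: n_samples_ and n_features_ record the shape of the training data.
# Random data, for illustration only.
import numpy as np
from sklearn.decomposition import PCA

X = np.random.RandomState(0).normal(size=(50, 8))   # 50 samples, 8 features
pca = PCA(n_components=3).fit(X)

print(pca.n_samples_)    # 50
print(pca.n_features_)   # 8
```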
4 changes: 2 additions & 2 deletions sklearn/random_projection.py
@@ -453,7 +453,7 @@ class GaussianRandomProjection(BaseRandomProjection):

Attributes
----------
n_component_ : int
n_components_ : int
Concrete number of components computed when n_components="auto".

components_ : numpy array of shape [n_components, n_features]
@@ -573,7 +573,7 @@ class SparseRandomProjection(BaseRandomProjection):

Attributes
----------
n_component_ : int
n_components_ : int
Concrete number of components computed when n_components="auto".

components_ : CSR matrix with shape [n_components, n_features]
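Since `n_components_` only differs from the constructor argument when `n_components="auto"`, a small sketch (toy data, not from the PR) shows the concrete value chosen via the Johnson-Lindenstrauss bound:

```python
# Illustrative sketch: with n_components="auto", the fitted n_components_ is the concrete
# target dimensionality picked for the given number of samples and eps.
import numpy as np
from sklearn.random_projection import GaussianRandomProjection

X = np.random.RandomState(0).normal(size=(25, 3000))   # 25 samples, 3000 features

rp = GaussianRandomProjection(n_components='auto', eps=0.5).fit(X)
print(rp.n_components_)       # concrete number of components chosen automatically
print(rp.components_.shape)   # (n_components_, 3000)
```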