added versionadded 0.21 and versionchanged 0.21 where applicable, aft… by alhewpl · Pull Request #16214 · scikit-learn/scikit-learn · GitHub
Closed
wants to merge 1 commit into from
1 change: 1 addition & 0 deletions .gitignore
Expand Up @@ -6,6 +6,7 @@
*.lprof
*.swp
*.swo
env/
Member:
Please revert this change

.DS_Store
build
sklearn/datasets/__config__.py
Expand Down
2 changes: 2 additions & 0 deletions sklearn/cluster/_agglomerative.py
Expand Up @@ -755,6 +755,8 @@ class AgglomerativeClustering(ClusterMixin, BaseEstimator):
n_connected_components_ : int
The estimated number of connected components in the graph.

.. versionchanged:: 0.21
Member:
versionchanged really needs some description of how the behaviour has changed. And I think it should be used only rarely, when it would really help the user to understand a substantial change across versions.
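
To illustrate the reviewer's point, a `versionchanged` directive normally carries an indented body explaining what changed. A hypothetical sketch of what the docstring entry above could look like (the description text here is illustrative, not taken from the PR):

```rst
n_connected_components_ : int
    The estimated number of connected components in the graph.

    .. versionchanged:: 0.21
        ``n_connected_components_`` replaced the earlier attribute name;
        describe here what the user-visible difference is.
```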


children_ : array-like of shape (n_samples-1, 2)
The children of each non-leaf node. Values less than `n_samples`
correspond to leaves of the tree which are the original samples.
Expand Down
4 changes: 4 additions & 0 deletions sklearn/cluster/_optics.py
Expand Up @@ -51,6 +51,8 @@ class OPTICS(ClusterMixin, BaseEstimator):
number or a fraction of the number of samples (rounded to be at least
2).

.. versionadded:: 0.21

max_eps : float, optional (default=np.inf)
The maximum distance between two samples for one to be considered as
in the neighborhood of the other. Default value of ``np.inf`` will
Expand Down Expand Up @@ -119,6 +121,8 @@ class OPTICS(ClusterMixin, BaseEstimator):
at least 2). If ``None``, the value of ``min_samples`` is used instead.
Used only when ``cluster_method='xi'``.

.. versionadded:: 0.21

algorithm : {'auto', 'ball_tree', 'kd_tree', 'brute'}, optional
Algorithm used to compute the nearest neighbors:

Expand Down
4 changes: 4 additions & 0 deletions sklearn/compose/_column_transformer.py
Expand Up @@ -110,6 +110,8 @@ class ColumnTransformer(TransformerMixin, _BaseComposition):
If True, the time elapsed while fitting each transformer will be
printed as it is completed.

.. versionadded:: 0.21

Attributes
----------
transformers_ : list
Expand Down Expand Up @@ -717,6 +719,8 @@ def make_column_transformer(*transformers, **kwargs):
If True, the time elapsed while fitting each transformer will be
printed as it is completed.

.. versionadded:: 0.21

Returns
-------
ct : ColumnTransformer
Expand Down
2 changes: 2 additions & 0 deletions sklearn/externals/_scipy_linalg.py
Expand Up @@ -51,6 +51,8 @@ def pinvh(a, cond=None, rcond=None, lower=True, return_rank=False,
using its eigenvalue decomposition and including all eigenvalues with
'large' absolute value.

.. versionadded:: 0.21

Parameters
----------
a : (N, N) array_like
Expand Down
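
For context, the `pinvh` docstring above describes computing a symmetric pseudo-inverse "using its eigenvalue decomposition and including all eigenvalues with 'large' absolute value". A minimal NumPy sketch of that idea (not the vendored implementation, and `rcond` handling here is simplified):

```python
import numpy as np

def pinvh_sketch(a, rcond=1e-10):
    """Pseudo-inverse of a symmetric matrix via its eigendecomposition.

    Eigenvalues whose absolute value falls below ``rcond`` times the
    largest one are treated as zero and excluded from the inverse.
    """
    s, u = np.linalg.eigh(a)                       # a = u @ diag(s) @ u.T
    large = np.abs(s) > rcond * np.max(np.abs(s))  # keep 'large' eigenvalues
    psigma = np.zeros_like(s)
    psigma[large] = 1.0 / s[large]                 # invert only the kept ones
    return (u * psigma) @ u.T

a = np.array([[2.0, 0.0], [0.0, 4.0]])
print(pinvh_sketch(a))  # diag(0.5, 0.25)
```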
11 changes: 11 additions & 0 deletions sklearn/pipeline.py
Expand Up @@ -70,6 +70,8 @@ class Pipeline(_BaseComposition):
If True, the time elapsed while fitting each step will be printed as it
is completed.

.. versionadded:: 0.21

Attributes
----------
named_steps : bunch object, a dictionary with attribute access
Expand Down Expand Up @@ -211,6 +213,9 @@ def _iter(self, with_final=True, filter_passthrough=True):
def __len__(self):
"""
Returns the length of the Pipeline

.. versionadded:: 0.21

"""
return len(self.steps)
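
The behaviour documented by this hunk is just Python's `len` protocol over the pipeline's step list; a self-contained stand-in class (not scikit-learn's actual `Pipeline`) shows the effect:

```python
class PipelineSketch:
    """Stand-in illustrating the ``__len__`` support added in 0.21."""

    def __init__(self, steps):
        self.steps = steps  # list of (name, estimator) pairs

    def __len__(self):
        # len(pipe) returns the number of steps in the pipeline.
        return len(self.steps)

pipe = PipelineSketch([("scale", object()), ("clf", object())])
print(len(pipe))  # 2
```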

Expand Down Expand Up @@ -678,6 +683,8 @@ def make_pipeline(*steps, **kwargs):
If True, the time elapsed while fitting each step will be printed as it
is completed.

.. versionadded:: 0.21

See Also
--------
sklearn.pipeline.Pipeline : Class for creating a pipeline of
Expand Down Expand Up @@ -787,6 +794,8 @@ class FeatureUnion(TransformerMixin, _BaseComposition):
If True, the time elapsed while fitting each transformer will be
printed as it is completed.

.. versionadded:: 0.21

See Also
--------
sklearn.pipeline.make_union : Convenience function for simplified
Expand Down Expand Up @@ -1020,6 +1029,8 @@ def make_union(*transformers, **kwargs):
If True, the time elapsed while fitting each transformer will be
printed as it is completed.

.. versionadded:: 0.21

Returns
-------
f : FeatureUnion
Expand Down
2 changes: 2 additions & 0 deletions sklearn/preprocessing/_csr_polynomial_expansion.pyx
Expand Up @@ -83,6 +83,8 @@ def _csr_polynomial_expansion(ndarray[DATA_T, ndim=1] data,
----------
"Leveraging Sparsity to Speed Up Polynomial Feature Expansions of CSR
Matrices Using K-Simplex Numbers" by Andrew Nystrom and John Hughes.

.. versionadded:: 0.21
"""

assert degree in (2, 3)
Expand Down
2 changes: 2 additions & 0 deletions sklearn/preprocessing/_data.py
Expand Up @@ -2161,6 +2161,8 @@ class QuantileTransformer(TransformerMixin, BaseEstimator):
a better approximation of the cumulative distribution function
estimator.

.. versionchanged:: 0.21

output_distribution : str, optional (default='uniform')
Marginal distribution for the transformed data. The choices are
'uniform' (default) or 'normal'.
Expand Down
4 changes: 4 additions & 0 deletions sklearn/preprocessing/_encoders.py
Expand Up @@ -37,6 +37,8 @@ def _check_X(self, X):
of pandas DataFrame columns, as otherwise information is lost
and cannot be used, eg for the `categories_` attribute.

.. versionchanged:: 0.21

"""
if not (hasattr(X, 'iloc') and getattr(X, 'ndim', 0) == 2):
# if not a dataframe, do normal check_array validation
Expand Down Expand Up @@ -198,6 +200,8 @@ class OneHotEncoder(_BaseEncoder):
- array : ``drop[i]`` is the category in feature ``X[:, i]`` that
should be dropped.

.. versionadded:: 0.21

sparse : bool, default=True
Will return sparse matrix if set True else will return an array.

Expand Down
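
The `drop` parameter documented in this hunk (new in 0.21) removes one column per feature from the one-hot output, which avoids collinearity with a model intercept. A hand-rolled NumPy illustration of the `drop='first'` semantics (not the estimator itself; category order here is an assumption):

```python
import numpy as np

categories = ["blue", "green", "red"]      # sorted categories of one feature
X = ["green", "red", "blue", "green"]

# Full one-hot encoding: one column per category.
full = np.array([[1.0 if x == c else 0.0 for c in categories] for x in X])

# drop='first' keeps k-1 columns by dropping the first category's column,
# so "blue" is encoded as the all-zeros row.
drop_first = full[:, 1:]
print(full.shape, drop_first.shape)  # (4, 3) (4, 2)
```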
2 changes: 2 additions & 0 deletions sklearn/preprocessing/_label.py
Expand Up @@ -950,6 +950,8 @@ def transform(self, y):
return yt

def _build_cache(self):
""" .. versionadded:: 0.21"""

if self._cached_dict is None:
self._cached_dict = dict(zip(self.classes_,
range(len(self.classes_))))
Expand Down
4 changes: 4 additions & 0 deletions sklearn/tree/_classes.py
Expand Up @@ -118,6 +118,8 @@ def get_depth(self):
The depth of a tree is the maximum distance between the root
and any leaf.

.. versionadded:: 0.21

Returns
-------
self.tree_.max_depth : int
Expand All @@ -129,6 +131,8 @@ def get_depth(self):
def get_n_leaves(self):
"""Return the number of leaves of the decision tree.

.. versionadded:: 0.21

Returns
-------
self.tree_.n_leaves : int
Expand Down
2 changes: 2 additions & 0 deletions sklearn/tree/_export.py
Expand Up @@ -809,6 +809,8 @@ def export_text(decision_tree, feature_names=None, max_depth=10,

Note that backwards compatibility may not be supported.

.. versionadded:: 0.21

Parameters
----------
decision_tree : object
Expand Down
2 changes: 2 additions & 0 deletions sklearn/utils/__init__.py
Expand Up @@ -486,6 +486,8 @@ def resample(*arrays, **options):
If not None, data is split in a stratified fashion, using this as
the class labels.

.. versionadded:: 0.21

Returns
-------
resampled_arrays : sequence of indexable data-structures
Expand Down
17 changes: 11 additions & 6 deletions sklearn/utils/estimator_checks.py
Expand Up @@ -2787,12 +2787,17 @@ def check_fit_non_negative(name, estimator_orig):


def check_fit_idempotent(name, estimator_orig):
# Check that est.fit(X) is the same as est.fit(X).fit(X). Ideally we would
# check that the estimated parameters during training (e.g. coefs_) are
# the same, but having a universal comparison function for those
# attributes is difficult and full of edge cases. So instead we check that
# predict(), predict_proba(), decision_function() and transform() return
# the same results.
"""
Check that est.fit(X) is the same as est.fit(X).fit(X). Ideally we would
check that the estimated parameters during training (e.g. coefs_) are
the same, but having a universal comparison function for those
attributes is difficult and full of edge cases. So instead we check that
predict(), predict_proba(), decision_function() and transform() return
the same results.

.. versionadded:: 0.21

"""

check_methods = ["predict", "transform", "decision_function",
"predict_proba"]
Expand Down
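
The idempotency property this docstring describes can be demonstrated with a toy estimator (a deliberately trivial stand-in, not part of scikit-learn): fit twice, and the predictions must match a single fit.

```python
import numpy as np

class MeanEstimator:
    """Toy estimator: predicts the mean of its training data."""

    def fit(self, X):
        self.mean_ = float(np.mean(X))
        return self

    def predict(self, X):
        return np.full(len(X), self.mean_)

X = np.array([1.0, 2.0, 3.0])
pred_once = MeanEstimator().fit(X).predict(X)
# fit(X).fit(X) should behave exactly like fit(X):
pred_twice = MeanEstimator().fit(X).fit(X).predict(X)
print(np.allclose(pred_once, pred_twice))  # True
```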
3 changes: 3 additions & 0 deletions sklearn/utils/fixes.py
Expand Up @@ -190,6 +190,9 @@ def _astype_copy_false(X):
"""Returns the copy=False parameter for
{ndarray, csr_matrix, csc_matrix}.astype when possible,
otherwise don't specify

.. versionadded:: 0.21

"""
if sp_version >= (1, 1) or not sp.issparse(X):
return {'copy': False}
Expand Down
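
The helper in this hunk returns `{'copy': False}` only when it is safe to pass, falling back to no kwargs otherwise (consistent with sparse `.astype` not accepting `copy` on older SciPy). A parameterised stand-in — not the actual helper's signature, which reads the installed SciPy version — makes the gate explicit:

```python
def astype_copy_false_sketch(sp_version, is_sparse):
    """Return kwargs for ``.astype``, mirroring the version gate above.

    ``sp_version`` is a (major, minor) SciPy version tuple and
    ``is_sparse`` says whether the input is a scipy sparse matrix.
    When ``copy=False`` cannot be passed, return no kwargs and
    accept the extra copy.
    """
    if sp_version >= (1, 1) or not is_sparse:
        return {"copy": False}
    return {}

print(astype_copy_false_sketch((1, 3), True))   # {'copy': False}
print(astype_copy_false_sketch((0, 19), True))  # {}
```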