8000 DOC Unify usage of 'w.r.t.' abbreviations in docstrings by ChVeen · Pull Request #25683 · scikit-learn/scikit-learn · GitHub

Merged

merged 1 commit into from Feb 24, 2023
2 changes: 1 addition & 1 deletion doc/common_pitfalls.rst
@@ -243,7 +243,7 @@ Some scikit-learn objects are inherently random. These are usually estimators
splitters (e.g. :class:`~sklearn.model_selection.KFold`). The randomness of
these objects is controlled via their `random_state` parameter, as described
in the :term:`Glossary <random_state>`. This section expands on the glossary
- entry, and describes good practices and common pitfalls w.r.t. to this
+ entry, and describes good practices and common pitfalls w.r.t. this
subtle parameter.

.. note:: Recommendation summary
4 changes: 2 additions & 2 deletions sklearn/base.py
@@ -681,7 +681,7 @@ def score(self, X, y, sample_weight=None):
Returns
-------
score : float
- Mean accuracy of ``self.predict(X)`` wrt. `y`.
+ Mean accuracy of ``self.predict(X)`` w.r.t. `y`.
"""
from .metrics import accuracy_score

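For context on the hunk above: `ClassifierMixin.score` returns the mean accuracy of `self.predict(X)` w.r.t. `y`, delegating to `accuracy_score`. A minimal self-contained sketch of that quantity (`mean_accuracy` is a hypothetical helper, not scikit-learn's implementation):

```python
import numpy as np

def mean_accuracy(y_true, y_pred, sample_weight=None):
    # Fraction of predictions matching y_true, optionally sample-weighted,
    # mirroring what accuracy_score reports (illustrative sketch only).
    correct = (np.asarray(y_true) == np.asarray(y_pred)).astype(float)
    return float(np.average(correct, weights=sample_weight))

print(mean_accuracy([0, 1, 1, 0], [0, 1, 0, 0]))  # 0.75
```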
@@ -725,7 +725,7 @@ def score(self, X, y, sample_weight=None):
Returns
-------
score : float
- :math:`R^2` of ``self.predict(X)`` wrt. `y`.
+ :math:`R^2` of ``self.predict(X)`` w.r.t. `y`.

Notes
-----
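The `RegressorMixin.score` docstring above refers to the coefficient of determination. A minimal sketch of :math:`R^2 = 1 - SS_{res}/SS_{tot}` (`r2` is a hypothetical helper mirroring the idea behind `sklearn.metrics.r2_score`, not the library code):

```python
import numpy as np

def r2(y_true, y_pred):
    # R^2 = 1 - SS_res / SS_tot: 1.0 is a perfect fit, 0.0 matches
    # always predicting the mean of y_true (illustrative sketch only).
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    ss_res = np.sum((y_true - y_pred) ** 2)
    ss_tot = np.sum((y_true - y_true.mean()) ** 2)
    return 1.0 - ss_res / ss_tot

print(r2([1.0, 2.0, 3.0], [2.0, 2.0, 2.0]))  # 0.0
```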
4 changes: 2 additions & 2 deletions sklearn/dummy.py
@@ -435,7 +435,7 @@ def score(self, X, y, sample_weight=None):
Returns
-------
score : float
- Mean accuracy of self.predict(X) wrt. y.
+ Mean accuracy of self.predict(X) w.r.t. y.
"""
if X is None:
X = np.zeros(shape=(len(y), 1))
@@ -667,7 +667,7 @@ def score(self, X, y, sample_weight=None):
Returns
-------
score : float
- R^2 of `self.predict(X)` wrt. y.
+ R^2 of `self.predict(X)` w.r.t. y.
"""
if X is None:
X = np.zeros(shape=(len(y), 1))
2 changes: 1 addition & 1 deletion sklearn/linear_model/_glm/_newton_solver.py
@@ -88,7 +88,7 @@ class NewtonSolver(ABC):
Newton step.

gradient : ndarray of shape coef.shape
- Gradient of the loss wrt. the coefficients.
+ Gradient of the loss w.r.t. the coefficients.

gradient_old : ndarray of shape coef.shape
Gradient of previous iteration.
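The `NewtonSolver` attribute documented above stores the gradient of the loss w.r.t. the coefficients. As a hedged illustration only (not the solver's code), for a mean squared-error objective with a linear model that gradient is `X.T @ (X @ w - y) / n`:

```python
import numpy as np

def loss_gradient(coef, X, y):
    # Gradient of the mean squared-error loss w.r.t. the coefficients of a
    # linear model: d/dw [ ||X w - y||^2 / (2 n) ] = X^T (X w - y) / n.
    n = X.shape[0]
    return X.T @ (X @ coef - y) / n

X = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
w_true = np.array([2.0, 3.0])
y = X @ w_true
print(loss_gradient(w_true, X, y))  # zero gradient at the optimum
```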
2 changes: 1 addition & 1 deletion sklearn/linear_model/_logistic.py
@@ -2087,7 +2087,7 @@ def score(self, X, y, sample_weight=None):
Returns
-------
score : float
- Score of self.predict(X) wrt. y.
+ Score of self.predict(X) w.r.t. y.
"""
scoring = self.scoring or "accuracy"
scoring = get_scorer(scoring)
4 changes: 2 additions & 2 deletions sklearn/model_selection/_split.py
@@ -2164,8 +2164,8 @@ def split(self, X, y, groups=None):

def _validate_shuffle_split(n_samples, test_size, train_size, default_test_size=None):
"""
- Validation helper to check if the test/test sizes are meaningful wrt to the
- size of the data (n_samples)
+ Validation helper to check if the test/test sizes are meaningful w.r.t. the
+ size of the data (n_samples).
"""
if test_size is None and train_size is None:
test_size = default_test_size
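The helper touched above validates that the requested sizes are meaningful w.r.t. the size of the data. A simplified, hypothetical sketch of that validation idea (`check_shuffle_split_sizes` is not scikit-learn's `_validate_shuffle_split`, which handles more input types and edge cases):

```python
def check_shuffle_split_sizes(n_samples, test_size, train_size, default_test_size=0.1):
    # Resolve defaults, turn float fractions into sample counts, and
    # check that the requested sizes fit within n_samples.
    if test_size is None and train_size is None:
        test_size = default_test_size
    n_test = round(n_samples * test_size) if isinstance(test_size, float) else (test_size or 0)
    if train_size is None:
        n_train = n_samples - n_test
    elif isinstance(train_size, float):
        n_train = round(n_samples * train_size)
    else:
        n_train = train_size
    if n_train + n_test > n_samples:
        raise ValueError(
            f"train={n_train} + test={n_test} exceeds n_samples={n_samples}"
        )
    return n_train, n_test

print(check_shuffle_split_sizes(10, 0.2, None))  # (8, 2)
```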
6 changes: 3 additions & 3 deletions sklearn/neighbors/_lof.py
@@ -240,7 +240,7 @@ def fit_predict(self, X, y=None):
----------
X : {array-like, sparse matrix} of shape (n_samples, n_features), default=None
The query sample or samples to compute the Local Outlier Factor
- w.r.t. to the training samples.
+ w.r.t. the training samples.

y : Ignored
Not used, present for API consistency by convention.
@@ -343,7 +343,7 @@ def predict(self, X=None):
----------
X : {array-like, sparse matrix} of shape (n_samples, n_features)
The query sample or samples to compute the Local Outlier Factor
- w.r.t. to the training samples.
+ w.r.t. the training samples.

Returns
-------
@@ -361,7 +361,7 @@ def _predict(self, X=None):
----------
X : {array-like, sparse matrix} of shape (n_samples, n_features), default=None
The query sample or samples to compute the Local Outlier Factor
- w.r.t. to the training samples. If None, makes prediction on the
+ w.r.t. the training samples. If None, makes prediction on the
training data without considering them as their own neighbors.

Returns
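The `LocalOutlierFactor` docstrings above stress that query samples are scored w.r.t. the training samples. A rough stand-in for that idea, using mean distance to the k nearest training neighbors instead of the full LOF computation (`knn_score` is hypothetical, not the estimator's code):

```python
import numpy as np

def knn_score(X_train, X_query, k=2):
    # Score each query point w.r.t. the training samples: mean distance to
    # its k nearest training neighbors (a crude stand-in for an LOF score,
    # where larger values suggest the query is more outlying).
    X_train = np.asarray(X_train, dtype=float)
    X_query = np.asarray(X_query, dtype=float)
    d = np.linalg.norm(X_query[:, None, :] - X_train[None, :, :], axis=-1)
    return np.sort(d, axis=1)[:, :k].mean(axis=1)

X_train = [[0, 0], [0, 1], [1, 0], [1, 1]]
scores = knn_score(X_train, [[0.5, 0.5], [10.0, 10.0]])
print(scores[1] > scores[0])  # True: the far-away query scores as more outlying
```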
2 changes: 1 addition & 1 deletion sklearn/neighbors/_quad_tree.pyx
@@ -426,7 +426,7 @@ cdef class _QuadTree:

# Check whether we can use this node as a summary
# It's a summary node if the angular size as measured from the point
- # is relatively small (w.r.t. to theta) or if it is a leaf node.
+ # is relatively small (w.r.t. theta) or if it is a leaf node.
# If it can be summarized, we use the cell center of mass
# Otherwise, we go a higher level of resolution and into the leaves.
if cell.is_leaf or (
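The comment fixed above describes the Barnes-Hut style summarization test: a cell can stand in for its points when its angular size, as seen from the query point, is small w.r.t. theta. A hedged sketch of that criterion (`can_summarize` is illustrative, not the `_QuadTree` implementation):

```python
def can_summarize(cell_width, dist_to_center, theta=0.5):
    # Barnes-Hut criterion: treat the cell as a single summary point (its
    # center of mass) when width / distance, the approximate angular size
    # seen from the query point, is below the accuracy parameter theta.
    return dist_to_center > 0 and (cell_width / dist_to_center) < theta

print(can_summarize(1.0, 10.0))  # True: small, distant cell
print(can_summarize(5.0, 6.0))   # False: wide, nearby cell
```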