MAINT Fix several typos in src and doc files · scikit-learn/scikit-learn@31ca34e · GitHub
Commit 31ca34e

MAINT Fix several typos in src and doc files

1 parent 66a4d96
File tree: 12 files changed (+15 −15 lines)


doc/computing/computational_performance.rst

Lines changed: 1 addition & 1 deletion

@@ -195,7 +195,7 @@ support vectors.
 .. centered:: |nusvr_model_complexity|

 For :mod:`sklearn.ensemble` of trees (e.g. RandomForest, GBT,
-ExtraTrees etc) the number of trees and their depth play the most
+ExtraTrees, etc.) the number of trees and their depth play the most
 important role. Latency and throughput should scale linearly with the number
 of trees. In this case we used directly the ``n_estimators`` parameter of
 :class:`~ensemble.GradientBoostingRegressor`.

doc/developers/contributing.rst

Lines changed: 2 additions & 2 deletions

@@ -548,8 +548,8 @@ message, the following actions are taken.
 [cd build gh] CD is run only for GitHub Actions
 [cd build cirrus] CD is run only for Cirrus CI
 [lint skip] Azure pipeline skips linting
-[scipy-dev] Build & test with our dependencies (numpy, scipy, etc ...) development builds
-[nogil] Build & test with the nogil experimental branches of CPython, Cython, NumPy, SciPy...
+[scipy-dev] Build & test with our dependencies (numpy, scipy, etc.) development builds
+[nogil] Build & test with the nogil experimental branches of CPython, Cython, NumPy, SciPy, ...
 [pypy] Build & test with PyPy
 [azure parallel] Run Azure CI jobs in parallel
 [float32] Run float32 tests by setting `SKLEARN_RUN_FLOAT32_TESTS=1`. See :ref:`environment_variable` for more details

doc/getting_started.rst

Lines changed: 2 additions & 2 deletions

@@ -37,8 +37,8 @@ The :term:`fit` method generally accepts 2 inputs:
   represented as rows and features are represented as columns.
 - The target values :term:`y` which are real numbers for regression tasks, or
   integers for classification (or any other discrete set of values). For
-  unsupervized learning tasks, ``y`` does not need to be specified. ``y`` is
-  usually 1d array where the ``i`` th entry corresponds to the target of the
+  unsupervised learning tasks, ``y`` does not need to be specified. ``y`` is
+  usually a 1d array where the ``i`` th entry corresponds to the target of the
   ``i`` th sample (row) of ``X``.

 Both ``X`` and ``y`` are usually expected to be numpy arrays or equivalent
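The convention this hunk describes (2d ``X``, optional 1d ``y``) can be sketched with a minimal, hypothetical example; the data and estimator choice are illustrative and not part of the commit:

```python
from sklearn.linear_model import LogisticRegression

# X: 2 samples (rows) x 3 features (columns)
X = [[1, 2, 3], [11, 12, 13]]
# y: a 1d array; the i-th entry is the target of the i-th row of X
y = [0, 1]

clf = LogisticRegression()
clf.fit(X, y)          # supervised: both X and y are passed
print(clf.predict(X))  # one prediction per sample
```

For unsupervised estimators (e.g. ``KMeans``), ``fit(X)`` is called without ``y``.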

doc/modules/cross_decomposition.rst

Lines changed: 1 addition & 1 deletion

@@ -28,7 +28,7 @@ PLS draws similarities with `Principal Component Regression
 <https://en.wikipedia.org/wiki/Principal_component_regression>`_ (PCR), where
 the samples are first projected into a lower-dimensional subspace, and the
 targets `y` are predicted using `transformed(X)`. One issue with PCR is that
-the dimensionality reduction is unsupervized, and may lose some important
+the dimensionality reduction is unsupervised, and may lose some important
 variables: PCR would keep the features with the most variance, but it's
 possible that features with a small variances are relevant from predicting
 the target. In a way, PLS allows for the same kind of dimensionality

doc/modules/feature_extraction.rst

Lines changed: 1 addition & 1 deletion

@@ -846,7 +846,7 @@ Note that the dimensionality does not affect the CPU training time of
 algorithms which operate on CSR matrices (``LinearSVC(dual=True)``,
 ``Perceptron``, ``SGDClassifier``, ``PassiveAggressive``) but it does for
 algorithms that work with CSC matrices (``LinearSVC(dual=False)``, ``Lasso()``,
-etc).
+etc.).

 Let's try again with the default setting::

doc/modules/lda_qda.rst

Lines changed: 1 addition & 1 deletion

@@ -137,7 +137,7 @@ Mathematical formulation of LDA dimensionality reduction
 First note that the K means :math:`\mu_k` are vectors in
 :math:`\mathcal{R}^d`, and they lie in an affine subspace :math:`H` of
 dimension at most :math:`K - 1` (2 points lie on a line, 3 points lie on a
-plane, etc).
+plane, etc.).

 As mentioned above, we can interpret LDA as assigning :math:`x` to the class
 whose mean :math:`\mu_k` is the closest in terms of Mahalanobis distance,
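The ``K - 1`` bound in this hunk's context is what caps ``n_components`` in scikit-learn's LDA; a quick check on iris (3 classes, hence at most 2 discriminant axes) — an illustrative sketch, not part of the commit:

```python
from sklearn.datasets import load_iris
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

X, y = load_iris(return_X_y=True)  # K = 3 classes, 150 samples
lda = LinearDiscriminantAnalysis(n_components=2).fit(X, y)  # at most K - 1 = 2
print(lda.transform(X).shape)  # (150, 2)
```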

sklearn/ensemble/_hist_gradient_boosting/splitting.pyx

Lines changed: 2 additions & 2 deletions

@@ -499,9 +499,9 @@ cdef class Splitter:
 split_infos[split_info_idx].feature_idx = feature_idx

 # For each feature, find best bin to split on
-# Start with a gain of -1 (if no better split is found, that
+# Start with a gain of -1 if no better split is found, that
 # means one of the constraints isn't respected
-# (min_samples_leaf, etc) and the grower will later turn the
+# (min_samples_leaf, etc.) and the grower will later turn the
 # node into a leaf.
 split_infos[split_info_idx].gain = -1
 split_infos[split_info_idx].is_categorical = is_categorical[feature_idx]

sklearn/metrics/_classification.py

Lines changed: 1 addition & 1 deletion

@@ -316,7 +316,7 @@ def confusion_matrix(
 [0, 0, 1],
 [1, 0, 2]])

-In the binary case, we can extract true positives, etc as follows:
+In the binary case, we can extract true positives, etc. as follows:

 >>> tn, fp, fn, tp = confusion_matrix([0, 1, 0, 1], [1, 1, 1, 0]).ravel()
 >>> (tn, fp, fn, tp)
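The docstring example touched by this hunk runs as-is; for reference, the binary-case extraction gives:

```python
from sklearn.metrics import confusion_matrix

# Same inputs as the docstring example: y_true = [0, 1, 0, 1], y_pred = [1, 1, 1, 0]
# ravel() flattens the 2x2 matrix [[tn, fp], [fn, tp]] into four scalars
tn, fp, fn, tp = confusion_matrix([0, 1, 0, 1], [1, 1, 1, 0]).ravel()
print(tn, fp, fn, tp)  # 0 2 1 1
```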

sklearn/model_selection/tests/test_search.py

Lines changed: 1 addition & 1 deletion

@@ -379,7 +379,7 @@ def test_no_refit():
 and hasattr(grid_search, "best_params_")
 )

-# Make sure the functions predict/transform etc raise meaningful
+# Make sure the functions predict/transform etc. raise meaningful
 # error messages
 for fn_name in (
 "predict",

sklearn/neural_network/_multilayer_perceptron.py

Lines changed: 1 addition & 1 deletion

@@ -360,7 +360,7 @@ def _backprop(self, X, y, activations, deltas, coef_grads, intercept_grads):
 return loss, coef_grads, intercept_grads

 def _initialize(self, y, layer_units, dtype):
-# set all attributes, allocate weights etc for first call
+# set all attributes, allocate weights etc. for first call
 # Initialize parameters
 self.n_iter_ = 0
 self.t_ = 0

0 commit comments
