Typos found by codespell · scikit-learn/scikit-learn@e409451 · GitHub

Commit e409451: Typos found by codespell
1 parent d69e759 commit e409451

124 files changed: +177 -177 lines changed
benchmarks/bench_mnist.py (1 addition & 1 deletion)

@@ -6,7 +6,7 @@
 Benchmark on the MNIST dataset. The dataset comprises 70,000 samples
 and 784 features. Here, we consider the task of predicting
 10 classes - digits from 0 to 9 from their raw images. By contrast to the
-covertype dataset, the feature space is homogenous.
+covertype dataset, the feature space is homogeneous.
 
 Example of output :
 [..]

benchmarks/bench_random_projections.py (3 additions & 3 deletions)

@@ -43,10 +43,10 @@ def compute_time(t_start, delta):
     return delta.seconds + delta.microseconds / mu_second
 
 
-def bench_scikit_transformer(X, transfomer):
+def bench_scikit_transformer(X, transformer):
     gc.collect()
 
-    clf = clone(transfomer)
+    clf = clone(transformer)
 
     # start time
     t_start = datetime.now()

@@ -195,7 +195,7 @@ def print_row(clf_type, time_fit, time_transform):
 ###########################################################################
 n_nonzeros = int(opts.ratio_nonzeros * opts.n_features)
 
-print("Dataset statics")
+print("Dataset statistics")
 print("===========================")
 print("n_samples \t= %s" % opts.n_samples)
 print("n_features \t= %s" % opts.n_features)

build_tools/azure/posix-docker.yml (1 addition & 1 deletion)

@@ -39,7 +39,7 @@ jobs:
       ${{ insert }}: ${{ parameters.matrix }}
 
     steps:
-      # Container is detached and sleeping, allowing steps to run commmands
+      # Container is detached and sleeping, allowing steps to run commands
       # in the container. The TEST_DIR is mapped allowing the host to access
       # the JUNITXML file
      - script: >

build_tools/circle/list_versions.py (1 addition & 1 deletion)

@@ -34,7 +34,7 @@ def human_readable_data_quantity(quantity, multiple=1024):
 
 def get_file_extension(version):
     if "dev" in version:
-        # The 'dev' branch should be explictly handled
+        # The 'dev' branch should be explicitly handled
        return "zip"
 
    current_version = LooseVersion(version)

build_tools/shared.sh (1 addition & 1 deletion)

@@ -5,7 +5,7 @@ get_dep() {
        # do not install with none
        echo
    elif [[ "${version%%[^0-9.]*}" ]]; then
-        # version number is explicity passed
+        # version number is explicitly passed
        echo "$package==$version"
    elif [[ "$version" == "latest" ]]; then
        # use latest

doc/common_pitfalls.rst (1 addition & 1 deletion)

@@ -560,7 +560,7 @@ bad performance. Similarly, we want a random forest to be robust w.r.t the
 set of randomly selected features that each tree will be using.
 
 For these reasons, it is preferable to evaluate the cross-validation
-preformance by letting the estimator use a different RNG on each fold. This
+performance by letting the estimator use a different RNG on each fold. This
 is done by passing a `RandomState` instance (or `None`) to the estimator
 initialization.
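
As a concrete, hedged illustration of the advice in this hunk (the estimator and dataset are arbitrary placeholders, not from the docs):

    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import cross_val_score

    X, y = make_classification(random_state=0)
    # Passing a RandomState instance (rather than an int) lets the forest
    # draw different random features/bootstraps on each CV fold, so the
    # scores also reflect the estimator's own randomness.
    rng = np.random.RandomState(0)
    clf = RandomForestClassifier(n_estimators=50, random_state=rng)
    print(cross_val_score(clf, X, y, cv=5))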

doc/conf.py (1 addition & 1 deletion)

@@ -240,7 +240,7 @@
     "release_highlights"
 ] = f"auto_examples/release_highlights/{latest_highlights}"
 
-# get version from higlight name assuming highlights have the form
+# get version from highlight name assuming highlights have the form
 # plot_release_highlights_0_22_0
 highlight_version = ".".join(latest_highlights.split("_")[-3:-1])
 html_context["release_highlights_version"] = highlight_version
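
To make the comment concrete, here is a quick standalone check of that parsing logic (not part of the commit):

    # split("_") on the example name yields
    # ['plot', 'release', 'highlights', '0', '22', '0'];
    # the slice [-3:-1] keeps ['0', '22'], which joins to "0.22".
    latest_highlights = "plot_release_highlights_0_22_0"
    highlight_version = ".".join(latest_highlights.split("_")[-3:-1])
    assert highlight_version == "0.22"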

doc/developers/advanced_installation.rst (1 addition & 1 deletion)

@@ -377,7 +377,7 @@ isolation from the Python packages installed via the system packager. When
 using an isolated environment, ``pip3`` should be replaced by ``pip`` in the
 above commands.
 
-When precompiled wheels of the runtime dependencies are not avalaible for your
+When precompiled wheels of the runtime dependencies are not available for your
 architecture (e.g. ARM), you can install the system versions:
 
 .. prompt:: bash $

doc/developers/contributing.rst (1 addition & 1 deletion)

@@ -1004,7 +1004,7 @@ installed in your current Python environment:
 
     asv run --python=same
 
-It's particulary useful when you installed scikit-learn in editable mode to
+It's particularly useful when you installed scikit-learn in editable mode to
 avoid creating a new environment each time you run the benchmarks. By default
 the results are not saved when using an existing installation. To save the
 results you must specify a commit hash:

doc/developers/maintainer.rst (2 additions & 2 deletions)

@@ -33,7 +33,7 @@ Before a release
 
 - ``maint_tools/sort_whats_new.py`` can put what's new entries into
   sections. It's not perfect, and requires manual checking of the changes.
-  If the whats new list is well curated, it may not be necessary.
+  If the what's new list is well curated, it may not be necessary.
 
 - The ``maint_tools/whats_missing.sh`` script may be used to identify pull
   requests that were merged but likely missing from What's New.

@@ -198,7 +198,7 @@ Making a release
   `Continuous Integration
   <https://en.wikipedia.org/wiki/Continuous_integration>`_. The CD workflow on
   GitHub Actions is also used to automatically create nightly builds and
-  publish packages for the developement branch of scikit-learn. See
+  publish packages for the development branch of scikit-learn. See
   :ref:`install_nightly_builds`.
 
 4. Once all the CD jobs have completed successfully in the PR, merge it,

doc/install.rst (2 additions & 2 deletions)

@@ -158,7 +158,7 @@ Installing on Apple Silicon M1 hardware
 
 The recently introduced `macos/arm64` platform (sometimes also known as
 `macos/aarch64`) requires the open source community to upgrade the build
-configuation and automation to properly support it.
+configuration and automation to properly support it.
 
 At the time of writing (January 2021), the only way to get a working
 installation of scikit-learn on this hardware is to install scikit-learn and its

@@ -204,7 +204,7 @@ It can be installed by typing the following command:
 Debian/Ubuntu
 -------------
 
-The Debian/Ubuntu package is splitted in three different packages called
+The Debian/Ubuntu package is split in three different packages called
 ``python3-sklearn`` (python modules), ``python3-sklearn-lib`` (low-level
 implementations and bindings), ``python3-sklearn-doc`` (documentation).
 Only the Python 3 version is available in the Debian Buster (the more recent

doc/modules/compose.rst (1 addition & 1 deletion)

@@ -573,7 +573,7 @@ many estimators. This visualization is activated by setting the
 
     >>> from sklearn import set_config
     >>> set_config(display='diagram')   # doctest: +SKIP
-    >>> # diplays HTML representation in a jupyter context
+    >>> # displays HTML representation in a jupyter context
     >>> column_trans  # doctest: +SKIP
 
 An example of the HTML output can be seen in the
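
For reference, the display mode this doctest toggles can be exercised as follows (the pipeline is an arbitrary example, not from the docs):

    from sklearn import set_config
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler

    # After this call, estimators render as an interactive HTML diagram
    # in a Jupyter cell instead of the plain-text repr.
    set_config(display='diagram')
    pipe = make_pipeline(StandardScaler(), LogisticRegression())
    pipe  # displays the HTML representation in a notebook context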

doc/modules/cross_decomposition.rst (1 addition & 1 deletion)

@@ -64,7 +64,7 @@ Set :math:`X_1` to :math:`X` and :math:`Y_1` to :math:`Y`. Then, for each
   :math:`C = X_k^T Y_k`.
   :math:`u_k` and :math:`v_k` are called the *weights*.
   By definition, :math:`u_k` and :math:`v_k` are
-  choosen so that they maximize the covariance between the projected
+  chosen so that they maximize the covariance between the projected
   :math:`X_k` and the projected target, that is :math:`\text{Cov}(X_k u_k,
   Y_k v_k)`.
 - b) Project :math:`X_k` and :math:`Y_k` on the singular vectors to obtain

doc/modules/cross_validation.rst (1 addition & 1 deletion)

@@ -974,7 +974,7 @@ test is therefore only able to show when the model reliably outperforms
 random guessing.
 
 Finally, :func:`~sklearn.model_selection.permutation_test_score` is computed
-using brute force and interally fits ``(n_permutations + 1) * n_cv`` models.
+using brute force and internally fits ``(n_permutations + 1) * n_cv`` models.
 It is therefore only tractable with small datasets for which fitting an
 individual model is very fast.
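
A small, hedged sketch of the call being described (the dataset and estimator are illustrative):

    from sklearn.datasets import load_iris
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import permutation_test_score

    X, y = load_iris(return_X_y=True)
    # With n_permutations=100 and 5-fold CV this fits
    # (100 + 1) * 5 = 505 models internally, hence the cost warning above.
    score, perm_scores, pvalue = permutation_test_score(
        LogisticRegression(max_iter=1000), X, y, cv=5, n_permutations=100
    )
    print(f"score={score:.3f}, p-value={pvalue:.3f}")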

doc/modules/decomposition.rst (1 addition & 1 deletion)

@@ -829,7 +829,7 @@ and the intensity of the regularization with the :attr:`alpha_W` and :attr:`alph
 (:math:`\alpha_W` and :math:`\alpha_H`) parameters. The priors are scaled by the number
 of samples (:math:`n\_samples`) for `H` and the number of features (:math:`n\_features`)
 for `W` to keep their impact balanced with respect to one another and to the data fit
-term as independant as possible of the size of the training set. Then the priors terms
+term as independent as possible of the size of the training set. Then the priors terms
 are:
 
 .. math::

doc/modules/lda_qda.rst (2 additions & 2 deletions)

@@ -187,7 +187,7 @@ an estimate for the covariance matrix). Setting this parameter to a value
 between these two extrema will estimate a shrunk version of the covariance
 matrix.
 
-The shrinked Ledoit and Wolf estimator of covariance may not always be the
+The shrunk Ledoit and Wolf estimator of covariance may not always be the
 best choice. For example if the distribution of the data
 is normally distributed, the
 Oracle Shrinkage Approximating estimator :class:`sklearn.covariance.OAS`

@@ -234,7 +234,7 @@ For QDA, the use of the SVD solver relies on the fact that the covariance
 matrix :math:`\Sigma_k` is, by definition, equal to :math:`\frac{1}{n - 1}
 X_k^tX_k = \frac{1}{n - 1} V S^2 V^t` where :math:`V` comes from the SVD of the (centered)
 matrix: :math:`X_k = U S V^t`. It turns out that we can compute the
-log-posterior above without having to explictly compute :math:`\Sigma`:
+log-posterior above without having to explicitly compute :math:`\Sigma`:
 computing :math:`S` and :math:`V` via the SVD of :math:`X` is enough. For
 LDA, two SVDs are computed: the SVD of the centered input matrix :math:`X`
 and the SVD of the class-wise mean vectors.
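
Tying the first hunk to code, a hedged sketch of the estimator choice it discusses (the data is synthetic; `covariance_estimator` is the documented hook for plugging in :class:`sklearn.covariance.OAS`):

    from sklearn.covariance import OAS
    from sklearn.datasets import make_classification
    from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

    X, y = make_classification(n_samples=50, n_features=20, random_state=0)
    # shrinkage='auto' uses the Ledoit-Wolf shrunk covariance estimate...
    lda_lw = LinearDiscriminantAnalysis(solver='lsqr', shrinkage='auto').fit(X, y)
    # ...while OAS can be substituted when the data is close to Gaussian,
    # as the surrounding docs suggest.
    lda_oas = LinearDiscriminantAnalysis(
        solver='lsqr', covariance_estimator=OAS()
    ).fit(X, y)
    print(lda_lw.score(X, y), lda_oas.score(X, y))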

doc/modules/model_evaluation.rst (1 addition & 1 deletion)

@@ -2381,7 +2381,7 @@ of 0.0.
 A scorer object with a specific choice of ``power`` can be built by::
 
     >>> from sklearn.metrics import d2_tweedie_score, make_scorer
-    >>> d2_tweedie_score_15 = make_scorer(d2_tweedie_score, pwoer=1.5)
+    >>> d2_tweedie_score_15 = make_scorer(d2_tweedie_score, power=1.5)
 
 .. _pinball_loss:
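
A brief, hedged usage sketch of the corrected scorer (the regressor and data are placeholders; `power=1.5` requires strictly positive targets and predictions):

    import numpy as np
    from sklearn.datasets import make_regression
    from sklearn.linear_model import PoissonRegressor
    from sklearn.metrics import d2_tweedie_score, make_scorer
    from sklearn.model_selection import cross_val_score

    X, y = make_regression(n_samples=200, n_features=5, random_state=0)
    y = np.abs(y) + 1.0  # keep targets strictly positive for power=1.5
    # The keyword fixed in this commit is forwarded to d2_tweedie_score.
    scorer = make_scorer(d2_tweedie_score, power=1.5)
    print(cross_val_score(PoissonRegressor(max_iter=300), X, y,
                          scoring=scorer, cv=3))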

doc/modules/outlier_detection.rst (1 addition & 1 deletion)

@@ -332,7 +332,7 @@ chosen 1) greater than the minimum number of objects a cluster has to contain,
 so that other objects can be local outliers relative to this cluster, and 2)
 smaller than the maximum number of close by objects that can potentially be
 local outliers.
-In practice, such informations are generally not available, and taking
+In practice, such information are generally not available, and taking
 n_neighbors=20 appears to work well in general.
 When the proportion of outliers is high (i.e. greater than 10 \%, as in the
 example below), n_neighbors should be greater (n_neighbors=35 in the example
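
As a hedged illustration of that default (the synthetic data is not from the docs):

    import numpy as np
    from sklearn.neighbors import LocalOutlierFactor

    rng = np.random.RandomState(42)
    X = np.vstack([rng.normal(0, 1, size=(95, 2)),
                   rng.uniform(-8, 8, size=(5, 2))])  # roughly 5% outliers
    # n_neighbors=20 is the rule of thumb discussed above; fit_predict
    # returns -1 for outliers and 1 for inliers.
    labels = LocalOutlierFactor(n_neighbors=20).fit_predict(X)
    print("flagged outliers:", int((labels == -1).sum()))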

doc/modules/sgd.rst (2 additions & 2 deletions)

@@ -123,7 +123,7 @@ Please refer to the :ref:`mathematical section below
 The first two loss functions are lazy, they only update the model
 parameters if an example violates the margin constraint, which makes
 training very efficient and may result in sparser models (i.e. with more zero
-coefficents), even when L2 penalty is used.
+coefficients), even when L2 penalty is used.
 
 Using ``loss="log"`` or ``loss="modified_huber"`` enables the
 ``predict_proba`` method, which gives a vector of probability estimates

@@ -408,7 +408,7 @@ parameters, we minimize the regularized training error given by
 where :math:`L` is a loss function that measures model (mis)fit and
 :math:`R` is a regularization term (aka penalty) that penalizes model
 complexity; :math:`\alpha > 0` is a non-negative hyperparameter that controls
-the regularization stength.
+the regularization strength.
 
 Different choices for :math:`L` entail different classifiers or regressors:
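
Connecting the two hunks, a hedged sketch (``loss="log"`` matches the spelling in this era of scikit-learn; later releases renamed it ``"log_loss"``):

    from sklearn.datasets import make_classification
    from sklearn.linear_model import SGDClassifier

    X, y = make_classification(random_state=0)
    # loss="log" fits logistic regression via SGD and enables predict_proba;
    # alpha is the regularization strength discussed in the second hunk.
    clf = SGDClassifier(loss="log", alpha=1e-4, random_state=0).fit(X, y)
    print(clf.predict_proba(X[:3]))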

doc/modules/svm.rst (1 addition & 1 deletion)

@@ -623,7 +623,7 @@ misclassified or within the margin boundary. Ideally, the value :math:`y_i
 (w^T \phi (x_i) + b)` would be :math:`\geq 1` for all samples, which
 indicates a perfect prediction. But problems are usually not always perfectly
 separable with a hyperplane, so we allow some samples to be at a distance :math:`\zeta_i` from
-their correct margin boundary. The penalty term `C` controls the strengh of
+their correct margin boundary. The penalty term `C` controls the strength of
 this penalty, and as a result, acts as an inverse regularization parameter
 (see note below).

doc/roadmap.rst (1 addition & 1 deletion)

@@ -51,7 +51,7 @@ external to the core library.
   (i.e. rectangular data largely invariant to column and row order;
   predicting targets with simple structure)
 * improve the ease for users to develop and publish external components
-* improve inter-operability with modern data science tools (e.g. Pandas, Dask)
+* improve interoperability with modern data science tools (e.g. Pandas, Dask)
   and infrastructures (e.g. distributed processing)
 
 Many of the more fine-grained goals can be found under the `API tag

doc/themes/scikit-learn-modern/static/css/theme.css (1 addition & 1 deletion)

@@ -1237,7 +1237,7 @@ table.sk-sponsor-table td {
   text-align: center
 }
 
-/* pygments - highlightning */
+/* pygments - highlighting */
 
 .highlight .hll { background-color: #ffffcc }
 .highlight { background: #f8f8f8; }

doc/tutorial/machine_learning_map/ML_MAPS_README.txt (1 addition & 1 deletion)

@@ -7,7 +7,7 @@ by Andreas Mueller:
 
 (https://peekaboo-vision.blogspot.de/2013/01/machine-learning-cheat-sheet-for-scikit.html)
 
-The image is made interactive using an imagemap, and uses the jQuery Map Hilight plugin module
+The image is made interactive using an imagemap, and uses the jQuery Map Highlight plugin module
 by David Lynch (https://davidlynch.org/projects/maphilight/docs/) to highlight
 the different items on the image upon mouseover.

doc/tutorial/machine_learning_map/pyparsing.py (1 addition & 1 deletion)

@@ -2836,7 +2836,7 @@ class QuotedString(Token):
     def __init__( self, quoteChar, escChar=None, escQuote=None, multiline=False, unquoteResults=True, endQuoteChar=None, convertWhitespaceEscapes=True):
         super(QuotedString,self).__init__()
 
-        # remove white space from quote chars - wont work anyway
+        # remove white space from quote chars - won't work anyway
         quoteChar = quoteChar.strip()
         if not quoteChar:
             warnings.warn("quoteChar cannot be the empty string",SyntaxWarning,stacklevel=2)

doc/whats_new/v0.16.rst (1 addition & 1 deletion)

@@ -54,7 +54,7 @@ Highlights
 
 - Out-of core learning of PCA via :class:`decomposition.IncrementalPCA`.
 
-- Probability callibration of classifiers using
+- Probability calibration of classifiers using
   :class:`calibration.CalibratedClassifierCV`.
 
 - :class:`cluster.Birch` clustering method for large-scale datasets.

doc/whats_new/v0.20.rst (1 addition & 1 deletion)

@@ -1286,7 +1286,7 @@ Support for Python 3.3 has been officially dropped.
   be used for novelty detection, i.e. predict on new unseen data. Available
   prediction methods are ``predict``, ``decision_function`` and
   ``score_samples``. By default, ``novelty`` is set to ``False``, and only
-  the ``fit_predict`` method is avaiable.
+  the ``fit_predict`` method is available.
   By :user:`Albert Thomas <albertcthomas>`.
 
 - |Fix| Fixed a bug in :class:`neighbors.NearestNeighbors` where fitting a
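
The feature this entry describes can be sketched as follows (a hedged example with synthetic data):

    import numpy as np
    from sklearn.neighbors import LocalOutlierFactor

    rng = np.random.RandomState(0)
    X_train = rng.normal(size=(100, 2))
    X_new = np.array([[0.0, 0.0], [6.0, 6.0]])
    # With novelty=True, LOF exposes predict/decision_function/score_samples
    # for unseen data; fit_predict is then unavailable.
    lof = LocalOutlierFactor(novelty=True).fit(X_train)
    print(lof.predict(X_new))  # 1 = inlier, -1 = outlier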

doc/whats_new/v0.21.rst (1 addition & 1 deletion)

@@ -1060,7 +1060,7 @@ These changes mostly affect library developers.
 
 - Add ``check_fit_idempotent`` to
   :func:`~utils.estimator_checks.check_estimator`, which checks that
-  when `fit` is called twice with the same data, the ouput of
+  when `fit` is called twice with the same data, the output of
   `predict`, `predict_proba`, `transform`, and `decision_function` does not
   change. :pr:`12328` by :user:`Nicolas Hug <NicolasHug>`

doc/whats_new/v0.23.rst (1 addition & 1 deletion)

@@ -341,7 +341,7 @@ Changelog
   :pr:`16006` by :user:`Rushabh Vasani <rushabh-v>`.
 
 - |API| The `StreamHandler` was removed from `sklearn.logger` to avoid
-  double logging of messages in common cases where a hander is attached
+  double logging of messages in common cases where a handler is attached
   to the root logger, and to follow the Python logging documentation
   recommendation for libraries to leave the log message handling to
   users and application code. :pr:`16451` by :user:`Christoph Deil <cdeil>`.

doc/whats_new/v0.24.rst (1 addition & 1 deletion)

@@ -713,7 +713,7 @@ Changelog
   :user:`Joseph Willard <josephwillard>`
 
 - |Fix| bug in :func:`metrics.hinge_loss` where error occurs when
-  ``y_true`` is missing some labels that are provided explictly in the
+  ``y_true`` is missing some labels that are provided explicitly in the
   ``labels`` parameter.
   :pr:`17935` by :user:`Cary Goltermann <Ultramann>`.

examples/applications/plot_cyclical_feature_engineering.py (1 addition & 1 deletion)

@@ -215,7 +215,7 @@
 # %%
 #
 # Lets evaluate our gradient boosting model with the mean absolute error of the
-# relative demand averaged accross our 5 time-based cross-validation splits:
+# relative demand averaged across our 5 time-based cross-validation splits:
 
 
 def evaluate(model, X, y, cv):

examples/calibration/plot_calibration_multiclass.py (1 addition & 1 deletion)

@@ -178,7 +178,7 @@ class of an instance (red: class 1, green: class 2, blue: class 3).
 print(f" * calibrated classifier: {cal_score:.3f}")
 
 # %%
-# Finally we generate a grid of possibile uncalibrated probabilities over
+# Finally we generate a grid of possible uncalibrated probabilities over
 # the 2-simplex, compute the corresponding calibrated probabilities and
 # plot arrows for each. The arrows are colored according the highest
 # uncalibrated probability. This illustrates the learned calibration map:

examples/covariance/plot_mahalanobis_distances.py (1 addition & 1 deletion)

@@ -70,7 +70,7 @@
 # are Gaussian distributed with mean of 0 but feature 1 has a standard
 # deviation equal to 2 and feature 2 has a standard deviation equal to 1. Next,
 # 25 samples are replaced with Gaussian outlier samples where feature 1 has
-# a standard devation equal to 1 and feature 2 has a standard deviation equal
+# a standard deviation equal to 1 and feature 2 has a standard deviation equal
 # to 7.
 
 import numpy as np

examples/cross_decomposition/plot_pcr_vs_pls.py (1 addition & 1 deletion)

@@ -134,7 +134,7 @@
 #
 # On the other hand, the PLS regressor manages to capture the effect of the
 # direction with the lowest variance, thanks to its use of target information
-# during the transformation: it can recogize that this direction is actually
+# during the transformation: it can recognize that this direction is actually
 # the most predictive. We note that the first PLS component is negatively
 # correlated with the target, which comes from the fact that the signs of
 # eigenvectors are arbitrary.

examples/ensemble/plot_gradient_boosting_early_stopping.py (2 additions & 2 deletions)

@@ -17,7 +17,7 @@
 model is trained using the training set and evaluated using the validation set.
 When each additional stage of regression tree is added, the validation set is
 used to score the model. This is continued until the scores of the model in
-the last ``n_iter_no_change`` stages do not improve by atleast `tol`. After
+the last ``n_iter_no_change`` stages do not improve by at least `tol`. After
 that the model is considered to have converged and further addition of stages
 is "stopped early".
 

@@ -64,7 +64,7 @@
 X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2,
                                                     random_state=0)
 
-# We specify that if the scores don't improve by atleast 0.01 for the last
+# We specify that if the scores don't improve by at least 0.01 for the last
 # 10 stages, stop fitting additional stages
 gbes = ensemble.GradientBoostingClassifier(n_estimators=n_estimators,
                                            validation_fraction=0.2,
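
A compact, hedged sketch of the early-stopping configuration both hunks describe (the dataset and exact hyperparameters are illustrative):

    from sklearn.datasets import make_classification
    from sklearn.ensemble import GradientBoostingClassifier

    X, y = make_classification(n_samples=500, random_state=0)
    # Stop adding stages once the validation score has failed to improve
    # by at least tol over the last n_iter_no_change stages.
    gbes = GradientBoostingClassifier(
        n_estimators=1000,
        validation_fraction=0.2,
        n_iter_no_change=10,
        tol=0.01,
        random_state=0,
    ).fit(X, y)
    print("stages actually fitted:", gbes.n_estimators_)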

examples/ensemble/plot_gradient_boosting_quantile.py (1 addition & 1 deletion)

@@ -184,7 +184,7 @@ def highlight_min(x):
 # the fact the squared error estimator is very sensitive to large outliers
 # which can cause significant overfitting. This can be seen on the right hand
 # side of the previous plot. The conditional median estimator is biased
-# (underestimation for this asymetric noise) but is also naturally robust to
+# (underestimation for this asymmetric noise) but is also naturally robust to
 # outliers and overfits less.
 #
 # Calibration of the confidence interval

examples/inspection/plot_linear_model_coefficient_interpretation.py (2 additions & 2 deletions)

@@ -354,7 +354,7 @@
 
 # %%
 # Two regions are populated: when the EXPERIENCE coefficient is
-# positive the AGE one is negative and viceversa.
+# positive the AGE one is negative and vice-versa.
 #
 # To go further we remove one of the 2 features and check what is the impact
 # on the model stability.

@@ -664,7 +664,7 @@
 # It is important to keep in mind that the coefficients that have been
 # dropped may still be related to the outcome by themselves: the model
 # chose to suppress them because they bring little or no additional
-# information on top of the other features. Additionnaly, this selection
+# information on top of the other features. Additionally, this selection
 # is unstable for correlated features, and should be interpreted with
 # caution.
 #

0 commit comments