[MRG] DOC Replacing "the scikit" with "scikit-learn" (#10126) · maskani-moh/scikit-learn@2cafde9

Commit 2cafde9

FarahSaeed authored and maskani-moh committed
1 parent 1eb2e8a · commit 2cafde9

File tree

14 files changed: +19 −19 lines changed

doc/datasets/index.rst

Lines changed: 2 additions & 2 deletions

@@ -64,15 +64,15 @@ require to download any file from some external website.
     load_breast_cancer
 
 These datasets are useful to quickly illustrate the behavior of the
-various algorithms implemented in the scikit. They are however often too
+various algorithms implemented in scikit-learn. They are however often too
 small to be representative of real world machine learning tasks.
 
 .. _sample_images:
 
 Sample images
 =============
 
-The scikit also embed a couple of sample JPEG images published under Creative
+Scikit-learn also embed a couple of sample JPEG images published under Creative
 Commons license by their authors. Those image can be useful to test algorithms
 and pipeline on 2D data.
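A quick illustration of the bundled datasets this hunk refers to — a sketch, assuming only that `load_breast_cancer` is available (it ships with scikit-learn and needs no download):

```python
# Sketch: one of the small bundled datasets mentioned in the hunk above.
# load_breast_cancer returns the data directly, no external download needed.
from sklearn.datasets import load_breast_cancer

data = load_breast_cancer()
print(data.data.shape)           # (569, 30): 569 samples, 30 features
print(list(data.target_names))   # ['malignant', 'benign']
```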

doc/developers/performance.rst

Lines changed: 1 addition & 1 deletion

@@ -94,7 +94,7 @@ loads and prepare you data and then use the IPython integrated profiler
 for interactively exploring the relevant part for the code.
 
 Suppose we want to profile the Non Negative Matrix Factorization module
-of the scikit. Let us setup a new IPython session and load the digits
+of scikit-learn. Let us setup a new IPython session and load the digits
 dataset and as in the :ref:`sphx_glr_auto_examples_classification_plot_digits_classification.py` example::
 
     In [1]: from sklearn.decomposition import NMF
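A minimal script-level sketch in the spirit of that snippet: instead of an IPython session, the standard-library `cProfile`/`pstats` pair profiles `NMF.fit`. Synthetic non-negative data stands in for the digits dataset, so this is an assumption-laden illustration rather than the guide's exact workflow:

```python
# Sketch: profile NMF.fit with cProfile (stand-in for the IPython profiler).
import cProfile
import pstats

import numpy as np
from sklearn.decomposition import NMF

rng = np.random.RandomState(0)
X = np.abs(rng.randn(200, 64))  # NMF requires a non-negative input matrix

model = NMF(n_components=16, init="random", random_state=0, max_iter=200)

profiler = cProfile.Profile()
profiler.enable()
model.fit(X)
profiler.disable()

# Show the 5 most expensive calls by cumulative time.
pstats.Stats(profiler).sort_stats("cumulative").print_stats(5)
```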

doc/modules/dp-derivation.rst

Lines changed: 1 addition & 1 deletion

@@ -23,7 +23,7 @@ complex, or even more. For this reason we present here a full
 derivation of the inference algorithm and all the update and
 lower-bound equations. If you're not interested in learning how to
 derive similar algorithms yourself and you're not interested in
-changing/debugging the implementation in the scikit this document is
+changing/debugging the implementation in scikit-learn this document is
 not for you.
 
 The complexity of this implementation is linear in the number of

doc/modules/model_persistence.rst

Lines changed: 2 additions & 2 deletions

@@ -13,7 +13,7 @@ security and maintainability issues when working with pickle serialization.
 Persistence example
 -------------------
 
-It is possible to save a model in the scikit by using Python's built-in
+It is possible to save a model in scikit-learn by using Python's built-in
 persistence model, namely `pickle <https://docs.python.org/2/library/pickle.html>`_::
 
     >>> from sklearn import svm
@@ -35,7 +35,7 @@ persistence model, namely `pickle <https://docs.python.org/2/library/pickle.html
     >>> y[0]
     0
 
-In the specific case of the scikit, it may be more interesting to use
+In the specific case of scikit-learn, it may be more interesting to use
 joblib's replacement of pickle (``joblib.dump`` & ``joblib.load``),
 which is more efficient on objects that carry large numpy arrays internally as
 is often the case for fitted scikit-learn estimators, but can only pickle to the
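A sketch of both persistence paths the hunk describes: pickle round-trips a fitted estimator in memory, while `joblib.dump`/`joblib.load` go through the disk. This assumes the standalone `joblib` package (older scikit-learn releases exposed it as `sklearn.externals.joblib`):

```python
# Sketch: the two persistence mechanisms described above.
import os
import pickle
import tempfile

import joblib  # assumed available; older releases: sklearn.externals.joblib
from sklearn import svm, datasets

X, y = datasets.load_iris(return_X_y=True)
clf = svm.SVC().fit(X, y)

# pickle: serialize to an in-memory byte string and restore.
clf_pickled = pickle.loads(pickle.dumps(clf))
print(clf_pickled.predict(X[:1]))

# joblib: serialize to a file on disk and restore; more efficient for
# estimators carrying large numpy arrays, but disk-only.
with tempfile.TemporaryDirectory() as tmp:
    path = os.path.join(tmp, "model.joblib")
    joblib.dump(clf, path)
    clf_joblib = joblib.load(path)
    print(clf_joblib.predict(X[:1]))
```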

doc/presentations.rst

Lines changed: 1 addition & 1 deletion

@@ -37,7 +37,7 @@ Videos
   <http://videolectures.net/icml2010_varaquaux_scik/>`_ by `Gael Varoquaux`_ at
   ICML 2010
 
-  A three minute video from a very early stage of the scikit, explaining the
+  A three minute video from a very early stage of scikit-learn, explaining the
   basic idea and approach we are following.
 
 - `Introduction to statistical learning with scikit-learn <http://archive.org/search.php?query=scikit-learn>`_

doc/tutorial/basic/tutorial.rst

Lines changed: 2 additions & 2 deletions

@@ -209,7 +209,7 @@ example that you can run and study:
 Model persistence
 -----------------
 
-It is possible to save a model in the scikit by using Python's built-in
+It is possible to save a model in scikit-learn by using Python's built-in
 persistence model, namely `pickle <https://docs.python.org/2/library/pickle.html>`_::
 
     >>> from sklearn import svm
@@ -231,7 +231,7 @@ persistence model, namely `pickle <https://docs.python.org/2/library/pickle.html
     >>> y[0]
     0
 
-In the specific case of the scikit, it may be more interesting to use
+In the specific case of scikit-learn, it may be more interesting to use
 joblib's replacement of pickle (``joblib.dump`` & ``joblib.load``),
 which is more efficient on big data, but can only pickle to the disk
 and not to a string::

doc/tutorial/statistical_inference/settings.rst

Lines changed: 2 additions & 2 deletions

@@ -12,7 +12,7 @@ list of multi-dimensional observations. We say that the first axis of
 these arrays is the **samples** axis, while the second is the
 **features** axis.
 
-.. topic:: A simple example shipped with the scikit: iris dataset
+.. topic:: A simple example shipped with scikit-learn: iris dataset
 
     ::
 
@@ -46,7 +46,7 @@ needs to be preprocessed in order to be used by scikit-learn.
     >>> plt.imshow(digits.images[-1], cmap=plt.cm.gray_r) #doctest: +SKIP
     <matplotlib.image.AxesImage object at ...>
 
-To use this dataset with the scikit, we transform each 8x8 image into a
+To use this dataset with scikit-learn, we transform each 8x8 image into a
 feature vector of length 64 ::
 
     >>> data = digits.images.reshape((digits.images.shape[0], -1))
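The reshape in that hunk can be sketched with numpy alone; a synthetic array stands in for `digits.images` (which has shape `(1797, 8, 8)`), so the snippet needs no dataset loading:

```python
# Sketch: flatten each 8x8 image into a length-64 feature vector,
# as the reshape line in the hunk above does for digits.images.
import numpy as np

images = np.zeros((1797, 8, 8))            # same shape as digits.images
data = images.reshape((images.shape[0], -1))
print(data.shape)  # (1797, 64): samples axis first, features axis second
```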

doc/tutorial/statistical_inference/unsupervised_learning.rst

Lines changed: 1 addition & 1 deletion

@@ -171,7 +171,7 @@ Connectivity-constrained clustering
 .....................................
 
 With agglomerative clustering, it is possible to specify which samples can be
-clustered together by giving a connectivity graph. Graphs in the scikit
+clustered together by giving a connectivity graph. Graphs in scikit-learn
 are represented by their adjacency matrix. Often, a sparse matrix is used.
 This can be useful, for instance, to retrieve connected regions (sometimes
 also referred to as connected components) when
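A hedged sketch of the connectivity graph idea in that hunk: `grid_to_graph` builds the sparse adjacency matrix of a small 2D grid, and `AgglomerativeClustering` then only merges samples that are connected in it. The 4x4 grid and random features are illustrative choices, not from the tutorial:

```python
# Sketch: connectivity-constrained agglomerative clustering on a 4x4 grid.
import numpy as np
from sklearn.cluster import AgglomerativeClustering
from sklearn.feature_extraction.image import grid_to_graph

rng = np.random.RandomState(0)
X = rng.rand(4 * 4, 1)              # one feature per grid cell (16 samples)
connectivity = grid_to_graph(4, 4)  # sparse adjacency matrix of the grid

model = AgglomerativeClustering(n_clusters=3, connectivity=connectivity)
labels = model.fit_predict(X)
print(labels.shape)  # (16,): one cluster label per grid cell
```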

examples/README.txt

Lines changed: 1 addition & 1 deletion

@@ -6,4 +6,4 @@ Examples
 General examples
 ----------------
 
-General-purpose and introductory examples for the scikit.
+General-purpose and introductory examples for scikit-learn.

examples/applications/wikipedia_principal_eigenvector.py

Lines changed: 1 addition & 1 deletion

@@ -23,7 +23,7 @@
 https://en.wikipedia.org/wiki/Power_iteration
 
 Here the computation is achieved thanks to Martinsson's Randomized SVD
-algorithm implemented in the scikit.
+algorithm implemented in scikit-learn.
 
 The graph data is fetched from the DBpedia dumps. DBpedia is an extraction
 of the latent structured data of the Wikipedia content.
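The randomized SVD that docstring mentions is exposed as `sklearn.utils.extmath.randomized_svd`; a sketch on a small random matrix (rather than the Wikipedia adjacency graph the example actually processes):

```python
# Sketch: the randomized SVD routine referenced in the docstring above,
# on a small random matrix instead of the Wikipedia link graph.
import numpy as np
from sklearn.utils.extmath import randomized_svd

rng = np.random.RandomState(0)
M = rng.rand(100, 50)

# Truncated SVD keeping the 5 leading singular triplets.
U, s, Vt = randomized_svd(M, n_components=5, random_state=0)
print(U.shape, s.shape, Vt.shape)  # (100, 5) (5,) (5, 50)
```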

0 commit comments
