docstring fixed · scikit-learn/scikit-learn@bf6416a · GitHub

Commit bf6416a
Author: giorgiop (committed)

docstring fixed

1 parent 3125d33 · commit bf6416a

File tree: 3 files changed, +124 −129 lines changed

doc/tutorial/statistical_inference/unsupervised_learning.rst

Lines changed: 18 additions & 18 deletions
@@ -12,16 +12,16 @@ Clustering: grouping observations together
 **clustering task**: split the observations into well-separated group
 called *clusters*.
 
-..
-    >>> # Set the PRNG
+..
+    >>> # Set the PRNG
     >>> import numpy as np
     >>> np.random.seed(1)
 
 K-means clustering
 -------------------
 
 Note that there exist a lot of different clustering criteria and associated
-algorithms. The simplest clustering algorithm is
+algorithms. The simplest clustering algorithm is
 :ref:`k_means`.
 
 .. image:: ../../auto_examples/cluster/images/plot_cluster_iris_002.png
@@ -30,7 +30,7 @@ algorithms. The simplest clustering algorithm is
     :align: right
 
 
-::
+::
 
     >>> from sklearn import cluster, datasets
     >>> iris = datasets.load_iris()
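The doctest this hunk reformats continues past the two lines shown. A rough sketch of the full example; everything after `load_iris()` is assumed from the tutorial's flow, not taken from this diff:

    # Sketch only: lines after load_iris() are assumptions, not commit content.
    from sklearn import cluster, datasets

    iris = datasets.load_iris()
    X_iris = iris.data
    y_iris = iris.target

    k_means = cluster.KMeans(n_clusters=3)  # assumed cluster count (3 species)
    k_means.fit(X_iris)
    print(k_means.labels_[::10])  # cluster label of every 10th sample
    print(y_iris[::10])           # ground-truth species, for comparison only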
@@ -57,30 +57,30 @@ algorithms. The simplest clustering algorithm is
     :target: ../../auto_examples/cluster/plot_cluster_iris.html
     :scale: 63
 
-.. warning::
-
+.. warning::
+
     There is absolutely no guarantee of recovering a ground truth. First,
     choosing the right number of clusters is hard. Second, the algorithm
     is sensitive to initialization, and can fall into local minima,
     although scikit-learn employs several tricks to mitigate this issue.
 
 .. list-table::
     :class: centered
-
-    *
-
+
+    *
+
      - |k_means_iris_bad_init|
 
      - |k_means_iris_8|
 
      - |cluster_iris_truth|
 
-    *
-
+    *
+
      - **Bad initialization**
-
+
      - **8 clusters**
-
+
      - **Ground truth**
 
 **Don't over-interpret clustering results**
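The initialization sensitivity that the warning above describes is easy to demonstrate. A minimal sketch, reusing the iris data from the earlier hunk; the `random_state` values are arbitrary choices for illustration only:

    # Sketch only: demonstrates the warning, not part of the commit.
    from sklearn import cluster, datasets

    X = datasets.load_iris().data

    # A single initialization can get stuck in a poor local minimum ...
    single = cluster.KMeans(n_clusters=3, n_init=1, random_state=0).fit(X)
    # ... while restarting from several random seeds keeps the best run.
    multi = cluster.KMeans(n_clusters=3, n_init=10, random_state=0).fit(X)

    # inertia_ is the within-cluster sum of squares; lower is better.
    print(single.inertia_, multi.inertia_)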
@@ -105,8 +105,8 @@ algorithms. The simplest clustering algorithm is
 
 Clustering in general and KMeans, in particular, can be seen as a way
 of choosing a small number of exemplars to compress the information.
-The problem is sometimes known as
-`vector quantization <http://en.wikipedia.org/wiki/Vector_quantization>`_.
+The problem is sometimes known as
+`vector quantization <http://en.wikipedia.org/wiki/Vector_quantization>`_.
 For instance, this can be used to posterize an image::
 
     >>> import scipy as sp
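The posterization doctest is cut off here at its first line. A self-contained sketch of the same vector-quantization idea, using a synthetic gradient in place of the tutorial's lena image; the stand-in image and parameter choices are assumptions:

    # Sketch only: a synthetic gradient stands in for the tutorial's image.
    import numpy as np
    from sklearn import cluster

    image = np.linspace(0, 255, 64 * 64).reshape(64, 64)  # stand-in image
    X = image.reshape((-1, 1))                            # one sample per pixel

    k_means = cluster.KMeans(n_clusters=5, n_init=1).fit(X)
    values = k_means.cluster_centers_.squeeze()  # 5 representative grey levels
    labels = k_means.labels_

    posterized = np.choose(labels, values)  # snap each pixel to its center
    posterized.shape = image.shape          # back to 2-D; 5 distinct values left
    print(np.unique(posterized).size)       # -> 5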
@@ -125,7 +125,7 @@ algorithms. The simplest clustering algorithm is
     >>> lena_compressed.shape = lena.shape
 
 .. list-table::
-    :class: centered
+    :class: centered
 
     *
      - |lena|
@@ -275,8 +275,7 @@ data by projecting on a principal subspace.
     >>> from sklearn import decomposition
     >>> pca = decomposition.PCA()
     >>> pca.fit(X)
-    PCA(copy=True, iterated_power=3, n_components=None, random_state=None,
-      svd_solver='auto', tol=0.0, whiten=False)
+    PCA(copy=True, n_components=None, whiten=False)
     >>> print(pca.explained_variance_)  # doctest: +SKIP
     [  2.18565811e+00   1.19346747e+00   8.43026679e-32]
 
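The near-zero third value in the output above comes from how the tutorial builds `X` outside this hunk. A hedged reconstruction of that setup, assuming the tutorial's two-signals-plus-their-sum construction:

    # Sketch only: the construction of X is assumed, not shown in this diff.
    import numpy as np
    from sklearn import decomposition

    rng = np.random.RandomState(1)
    x1 = rng.normal(size=100)
    x2 = rng.normal(size=100)
    x3 = x1 + x2              # third column is exactly linearly dependent
    X = np.c_[x1, x2, x3]     # so the data spans only a 2-D subspace

    pca = decomposition.PCA().fit(X)
    print(pca.explained_variance_)  # two sizeable values, third near machine zero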

@@ -321,3 +320,4 @@ a maximum amount of independent information. It is able to recover
     >>> A_ = ica.mixing_.T
     >>> np.allclose(X, np.dot(S_, A_) + ica.mean_)
     True
+
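Only the tail of the ICA doctest appears in this hunk. A minimal end-to-end sketch of what it verifies; the source signals and mixing matrix are assumed from the surrounding tutorial text, not this diff:

    # Sketch only: sources and mixing matrix are assumptions.
    import numpy as np
    from sklearn import decomposition

    rng = np.random.RandomState(0)
    time = np.linspace(0, 10, 2000)
    s1 = np.sin(2 * time)                     # source 1: sinusoid
    s2 = np.sign(np.sin(3 * time))            # source 2: square wave
    S = np.c_[s1, s2] + 0.2 * rng.normal(size=(2000, 2))  # noisy sources

    A = np.array([[1.0, 1.0], [0.5, 2.0]])    # assumed mixing matrix
    X = np.dot(S, A.T)                        # observed mixtures

    ica = decomposition.FastICA(random_state=0)
    S_ = ica.fit_transform(X)                 # estimated sources
    A_ = ica.mixing_.T
    print(np.allclose(X, np.dot(S_, A_) + ica.mean_))  # True: mixtures recovered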
