DOC Correct links and typos in 6.6 Random Projections (#30848) · scikit-learn/scikit-learn@6d14a58 · GitHub


Commit 6d14a58

DOC Correct links and typos in 6.6 Random Projections (#30848)
1 parent 3c0e722 commit 6d14a58

File tree

1 file changed: +5 -5 lines changed


doc/modules/random_projection.rst

Lines changed: 5 additions & 5 deletions
@@ -28,7 +28,7 @@ technique for distance based method.
   Kaufmann Publishers Inc., San Francisco, CA, USA, 143-151.
 
 * Ella Bingham and Heikki Mannila. 2001.
-  `Random projection in dimensionality reduction: applications to image and text data. <https://citeseerx.ist.psu.edu/doc_view/pid/aed77346f737b0ed5890b61ad02e5eb4ab2f3dc6>`_
+  `Random projection in dimensionality reduction: applications to image and text data. <https://cs-people.bu.edu/evimaria/cs565/kdd-rp.pdf>`_
   In Proceedings of the seventh ACM SIGKDD international conference on
   Knowledge discovery and data mining (KDD '01). ACM, New York, NY, USA,
   245-250.
@@ -84,7 +84,7 @@ bounded distortion introduced by the random projection::
 
 * Sanjoy Dasgupta and Anupam Gupta, 1999.
   `An elementary proof of the Johnson-Lindenstrauss Lemma.
-  <https://citeseerx.ist.psu.edu/doc_view/pid/95cd464d27c25c9c8690b378b894d337cdf021f9>`_
+  <https://cseweb.ucsd.edu/~dasgupta/papers/jl.pdf>`_
 
 .. _gaussian_random_matrix:
 
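The hunk above sits just after the documentation's discussion of the bounded distortion (``eps``) guaranteed by the Johnson-Lindenstrauss lemma. As a quick illustration of the lemma's practical side, scikit-learn's public API exposes `johnson_lindenstrauss_min_dim`, which computes a conservative minimal number of components for a given distortion; a minimal sketch:

```python
from sklearn.random_projection import johnson_lindenstrauss_min_dim

# Minimal safe number of random components needed to embed one
# million samples while distorting pairwise distances by at most
# a factor of (1 +/- 0.5).
n_min = johnson_lindenstrauss_min_dim(n_samples=1_000_000, eps=0.5)
print(n_min)  # 663
```

Note how the bound depends only on the number of samples and the tolerated distortion, not on the original dimensionality.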
@@ -95,7 +95,7 @@ dimensionality by projecting the original input space on a randomly generated
 matrix where components are drawn from the following distribution
 :math:`N(0, \frac{1}{n_{components}})`.
 
-Here a small excerpt which illustrates how to use the Gaussian random
+Here is a small excerpt which illustrates how to use the Gaussian random
 projection transformer::
 
   >>> import numpy as np
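The hunk above edits the section on the Gaussian random projection transformer, whose projection matrix has i.i.d. entries drawn from :math:`N(0, \frac{1}{n_{components}})`. A minimal sketch of that usage, assuming scikit-learn's `GaussianRandomProjection` with an explicit `n_components` (the doc's own excerpt is truncated in this diff):

```python
import numpy as np
from sklearn.random_projection import GaussianRandomProjection

rng = np.random.RandomState(42)
X = rng.rand(25, 3000)  # 25 samples in a 3000-dimensional space

# Project onto 100 components; the entries of the projection matrix
# are drawn i.i.d. from N(0, 1/n_components).
transformer = GaussianRandomProjection(n_components=100, random_state=42)
X_new = transformer.fit_transform(X)
print(X_new.shape)  # (25, 100)
```

The fitted projection matrix is stored in `components_` with shape `(n_components, n_features)`.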
@@ -136,7 +136,7 @@ where :math:`n_{\text{components}}` is the size of the projected subspace.
 By default the density of non zero elements is set to the minimum density as
 recommended by Ping Li et al.: :math:`1 / \sqrt{n_{\text{features}}}`.
 
-Here a small excerpt which illustrates how to use the sparse random
+Here is a small excerpt which illustrates how to use the sparse random
 projection transformer::
 
   >>> import numpy as np
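This hunk edits the sparse random projection section, where the default density of non-zero entries is :math:`1 / \sqrt{n_{\text{features}}}`. A minimal sketch of the corresponding usage, assuming scikit-learn's `SparseRandomProjection` (the doc's own excerpt is truncated in this diff):

```python
import numpy as np
from scipy.sparse import issparse
from sklearn.random_projection import SparseRandomProjection

rng = np.random.RandomState(42)
X = rng.rand(25, 3000)

# density defaults to 1/sqrt(n_features), as recommended by Ping Li et al.
transformer = SparseRandomProjection(n_components=100, random_state=42)
X_new = transformer.fit_transform(X)
print(X_new.shape)  # (25, 100)
print(issparse(transformer.components_))  # the projection matrix is stored sparse
```

Compared to the Gaussian variant, the sparse projection matrix is cheaper both to generate and to multiply by, since most of its entries are exactly zero.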
@@ -179,7 +179,7 @@ been computed during fit, they are reused at each call to ``inverse_transform``.
 Otherwise they are recomputed each time, which can be costly. The result is always
 dense, even if ``X`` is sparse.
 
-Here a small code example which illustrates how to use the inverse transform
+Here is a small code example which illustrates how to use the inverse transform
 feature::
 
   >>> import numpy as np

0 commit comments
