From 8fdebc83b82d102414d4e7f7ac3bf94438ee0fdc Mon Sep 17 00:00:00 2001
From: "Christine P. Chai"
Date: Mon, 17 Feb 2025 13:48:39 -0800
Subject: [PATCH] DOC: Correct links and typos in 6.6 Random Projections

---
 doc/modules/random_projection.rst | 10 +++++-----
 1 file changed, 5 insertions(+), 5 deletions(-)

diff --git a/doc/modules/random_projection.rst b/doc/modules/random_projection.rst
index 079773e286841..ec437c60c7d4c 100644
--- a/doc/modules/random_projection.rst
+++ b/doc/modules/random_projection.rst
@@ -28,7 +28,7 @@ technique for distance based method.
     Kaufmann Publishers Inc., San Francisco, CA, USA, 143-151.
 
   * Ella Bingham and Heikki Mannila. 2001.
-    `Random projection in dimensionality reduction: applications to image and text data. `_
+    `Random projection in dimensionality reduction: applications to image and text data. `_
     In Proceedings of the seventh ACM SIGKDD international conference on
     Knowledge discovery and data mining (KDD '01). ACM, New York, NY, USA,
     245-250.
@@ -84,7 +84,7 @@ bounded distortion introduced by the random projection::
 
   * Sanjoy Dasgupta and Anupam Gupta, 1999.
     `An elementary proof of the Johnson-Lindenstrauss Lemma.
-    `_
+    `_
 
 .. _gaussian_random_matrix:
 
@@ -95,7 +95,7 @@ dimensionality by projecting the original input space on a randomly generated
 matrix where components are drawn from the following distribution
 :math:`N(0, \frac{1}{n_{components}})`.
 
-Here a small excerpt which illustrates how to use the Gaussian random
+Here is a small excerpt which illustrates how to use the Gaussian random
 projection transformer::
 
   >>> import numpy as np
@@ -136,7 +136,7 @@ where :math:`n_{\text{components}}` is the size of the projected subspace.
 By default the density of non zero elements is set to the minimum density as
 recommended by Ping Li et al.: :math:`1 / \sqrt{n_{\text{features}}}`.
 
-Here a small excerpt which illustrates how to use the sparse random
+Here is a small excerpt which illustrates how to use the sparse random
 projection transformer::
 
   >>> import numpy as np
@@ -179,7 +179,7 @@ been computed during fit, they are reused at each call to
 ``inverse_transform``. Otherwise they are recomputed each time, which can be
 costly. The result is always dense, even if ``X`` is sparse.
 
-Here a small code example which illustrates how to use the inverse transform
+Here is a small code example which illustrates how to use the inverse transform
 feature::
 
   >>> import numpy as np