DOC Adding dropdown for module 2.2 Manifold Learning by punndcoder28 · Pull Request #26720 · scikit-learn/scikit-learn

Merged
70 changes: 50 additions & 20 deletions doc/modules/manifold.rst
@@ -130,8 +130,10 @@ distances between all points. Isomap can be performed with the object
:align: center
:scale: 50

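For orientation, a minimal usage sketch of :class:`Isomap` (presumably the object the clipped sentence above refers to); the dataset and parameter values are illustrative assumptions, not taken from the documentation::

    from sklearn.datasets import make_s_curve
    from sklearn.manifold import Isomap

    # 1000 points sampled from a 3D S-curve (illustrative data)
    X, _ = make_s_curve(n_samples=1000, random_state=0)

    # n_neighbors controls the neighborhood graph, n_components the output dimension
    isomap = Isomap(n_neighbors=10, n_components=2)
    X_2d = isomap.fit_transform(X)  # shape (1000, 2)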
Complexity
----------
|details-start|
**Complexity**
|details-split|

The Isomap algorithm comprises three stages:

1. **Nearest neighbor search.** Isomap uses
@@ -162,6 +164,8 @@ The overall complexity of Isomap is
* :math:`k` : number of nearest neighbors
* :math:`d` : output dimension

|details-end|

.. topic:: References:

* `"A global geometric framework for nonlinear dimensionality reduction"
@@ -187,8 +191,9 @@ Locally linear embedding can be performed with function
:align: center
:scale: 50

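As a hedged illustration, a sketch using :func:`locally_linear_embedding` (presumably the function the truncated sentence above refers to); the function form returns both the embedding and the reconstruction error::

    from sklearn.datasets import make_s_curve
    from sklearn.manifold import locally_linear_embedding

    X, _ = make_s_curve(n_samples=1000, random_state=0)

    # standard LLE; returns the embedding and the squared reconstruction error
    X_2d, err = locally_linear_embedding(X, n_neighbors=10, n_components=2)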
Complexity
----------
|details-start|
**Complexity**
|details-split|

The standard LLE algorithm comprises three stages:

@@ -209,6 +214,8 @@ The overall complexity of standard LLE is
* :math:`k` : number of nearest neighbors
* :math:`d` : output dimension

|details-end|

.. topic:: References:

* `"Nonlinear dimensionality reduction by locally linear embedding"
@@ -241,8 +248,9 @@ It requires ``n_neighbors > n_components``.
:align: center
:scale: 50

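A minimal sketch of MLLE through :class:`LocallyLinearEmbedding` with ``method="modified"``, honoring the ``n_neighbors > n_components`` requirement quoted above (the values themselves are arbitrary)::

    from sklearn.datasets import make_swiss_roll
    from sklearn.manifold import LocallyLinearEmbedding

    X, _ = make_swiss_roll(n_samples=1000, random_state=0)

    # MLLE is selected with method="modified"; here n_neighbors=12 > n_components=2
    mlle = LocallyLinearEmbedding(n_neighbors=12, n_components=2, method="modified")
    X_2d = mlle.fit_transform(X)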
Complexity
----------
|details-start|
**Complexity**
|details-split|

The MLLE algorithm comprises three stages:

@@ -265,6 +273,8 @@ The overall complexity of MLLE is
* :math:`k` : number of nearest neighbors
* :math:`d` : output dimension

|details-end|

.. topic:: References:

* `"MLLE: Modified Locally Linear Embedding Using Multiple Weights"
@@ -291,8 +301,9 @@ It requires ``n_neighbors > n_components * (n_components + 3) / 2``.
:align: center
:scale: 50

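To make the requirement concrete: with ``n_components=2`` the bound is ``2 * (2 + 3) / 2 = 5``, so at least 6 neighbors are needed. A hedged sketch with a comfortable margin::

    from sklearn.datasets import make_swiss_roll
    from sklearn.manifold import LocallyLinearEmbedding

    X, _ = make_swiss_roll(n_samples=1000, random_state=0)

    # Hessian LLE via method="hessian"; n_neighbors=10 satisfies n_neighbors > 5
    hlle = LocallyLinearEmbedding(n_neighbors=10, n_components=2, method="hessian")
    X_2d = hlle.fit_transform(X)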
Complexity
----------
|details-start|
**Complexity**
|details-split|

The HLLE algorithm comprises three stages:

@@ -313,6 +324,8 @@ The overall complexity of standard HLLE is
* :math:`k` : number of nearest neighbors
* :math:`d` : output dimension

|details-end|

.. topic:: References:

* `"Hessian Eigenmaps: Locally linear embedding techniques for
@@ -335,8 +348,9 @@ preserving local distances. Spectral embedding can be performed with the
function :func:`spectral_embedding` or its object-oriented counterpart
:class:`SpectralEmbedding`.

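A brief, illustrative sketch of :class:`SpectralEmbedding` as named above; the affinity and neighborhood settings are assumptions for the example, not recommendations::

    from sklearn.datasets import make_swiss_roll
    from sklearn.manifold import SpectralEmbedding

    X, _ = make_swiss_roll(n_samples=1000, random_state=0)

    # build a k-nearest-neighbors affinity graph and embed its graph Laplacian
    se = SpectralEmbedding(n_components=2, affinity="nearest_neighbors", n_neighbors=10)
    X_2d = se.fit_transform(X)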
Complexity
----------
|details-start|
**Complexity**
|details-split|

The Spectral Embedding (Laplacian Eigenmaps) algorithm comprises three stages:

@@ -358,6 +372,8 @@ The overall complexity of spectral embedding is
* :math:`k` : number of nearest neighbors
* :math:`d` : output dimension

|details-end|

.. topic:: References:

* `"Laplacian Eigenmaps for Dimensionality Reduction
@@ -383,8 +399,9 @@ tangent spaces to learn the embedding. LTSA can be performed with function
:align: center
:scale: 50

Complexity
----------
|details-start|
**Complexity**
|details-split|

The LTSA algorithm comprises three stages:

@@ -404,6 +421,8 @@ The overall complexity of standard LTSA is
* :math:`k` : number of nearest neighbors
* :math:`d` : output dimension

|details-end|

.. topic:: References:

* :arxiv:`"Principal manifolds and nonlinear dimensionality reduction via
@@ -448,8 +467,9 @@ the similarities chosen in some optimal ways. The objective, called the
stress, is then defined by :math:`\sum_{i < j} (d_{ij}(X) - \hat{d}_{ij}(X))^2`


Metric MDS
----------
|details-start|
**Metric MDS**
|details-split|

In the simplest metric :class:`MDS` model, called *absolute MDS*, disparities are defined by
:math:`\hat{d}_{ij} = S_{ij}`. With absolute MDS, the value :math:`S_{ij}`
@@ -458,8 +478,11 @@ should then correspond exactly to the distance between point :math:`i` and

Most commonly, disparities are set to :math:`\hat{d}_{ij} = b S_{ij}`.

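An illustrative sketch of metric :class:`MDS` fitted on Euclidean dissimilarities; ``metric=False`` would switch to the non-metric variant described next (the data and settings are placeholders)::

    from sklearn.datasets import make_s_curve
    from sklearn.manifold import MDS

    X, _ = make_s_curve(n_samples=500, random_state=0)

    # metric=True fits the disparities described above; metric=False gives non-metric MDS
    mds = MDS(n_components=2, metric=True, random_state=0)
    X_2d = mds.fit_transform(X)
    print(mds.stress_)  # final value of the objective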
Nonmetric MDS
-------------
|details-end|

|details-start|
**Nonmetric MDS**
|details-split|

Non-metric :class:`MDS` focuses on the ordination of the data. If
:math:`S_{ij} > S_{jk}`, then the embedding should enforce :math:`d_{ij} <
@@ -490,6 +513,7 @@ in the metric case.
:align: center
:scale: 60

|details-end|

.. topic:: References:

@@ -551,8 +575,10 @@ The disadvantages to using t-SNE are roughly:
:align: center
:scale: 50

Optimizing t-SNE
----------------
|details-start|
**Optimizing t-SNE**
|details-split|

The main purpose of t-SNE is visualization of high-dimensional data. Hence,
it works best when the data will be embedded on two or three dimensions.

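A minimal, hypothetical :class:`TSNE` sketch touching some of the parameters this section discusses (perplexity, learning rate, initialization); the values are only examples::

    from sklearn.datasets import load_digits
    from sklearn.manifold import TSNE

    X, _ = load_digits(return_X_y=True)

    # method="barnes_hut" (the default) is the approximate variant discussed below;
    # method="exact" computes the full gradient and scales quadratically
    tsne = TSNE(n_components=2, perplexity=30, learning_rate="auto",
                init="pca", random_state=0)
    X_2d = tsne.fit_transform(X)  # shape (1797, 2)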
@@ -601,8 +627,11 @@ but less accurate results.
provides a good discussion of the effects of the various parameters, as well
as interactive plots to explore the effects of different parameters.

Barnes-Hut t-SNE
----------------
|details-end|

|details-start|
**Barnes-Hut t-SNE**
|details-split|

The Barnes-Hut t-SNE that has been implemented here is usually much slower than
other manifold learning algorithms. The optimization is quite difficult
@@ -638,6 +667,7 @@ imply that the data cannot be correctly classified by a supervised model. It
might be the case that 2 dimensions are not high enough to accurately represent
the internal structure of the data.

|details-end|

.. topic:: References:
