doc: adding dropdown for gaussian mixtures · scikit-learn/scikit-learn@4288734 · GitHub

Commit 4288734

doc: adding dropdown for gaussian mixtures
Signed-off-by: punndcoder28 <puneethk.2899@gmail.com>
1 parent 2b0eef8 commit 4288734

File tree: 1 file changed (+31, -15 lines)

1 file changed

+31
-15
lines changed

doc/modules/mixture.rst

Lines changed: 31 additions & 15 deletions
@@ -68,8 +69,9 @@ full covariance.
 * See :ref:`sphx_glr_auto_examples_mixture_plot_gmm_pdf.py` for an example on plotting the
   density estimation.

-Pros and cons of class :class:`GaussianMixture`
------------------------------------------------
+|details-start|
+**Pros and cons of class GaussianMixture**
+|details-split|

 Pros
 ....
@@ -93,8 +94,12 @@ Cons
   or information theoretical criteria to decide how many components to use
   in the absence of external cues.

-Selecting the number of components in a classical Gaussian Mixture Model
-------------------------------------------------------------------------
+|details-end|
+
+
+|details-start|
+**Selecting the number of components in a classical Gaussian Mixture model**
+|details-split|

 The BIC criterion can be used to select the number of components in a Gaussian
 Mixture in an efficient way. In theory, it recovers the true number of
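The BIC-based selection that this hunk's context describes can be sketched roughly as follows; the toy data X and the candidate range 1-6 are illustrative assumptions, not part of the changed file:

    # Sketch: pick the number of components by minimizing BIC (toy data assumed).
    import numpy as np
    from sklearn.mixture import GaussianMixture

    rng = np.random.RandomState(0)
    X = np.vstack([rng.normal(0, 1, size=(100, 2)),
                   rng.normal(5, 1, size=(100, 2))])  # two separated blobs

    bic_scores = []
    for n in range(1, 7):
        gmm = GaussianMixture(n_components=n, random_state=0).fit(X)
        bic_scores.append(gmm.bic(X))  # lower BIC indicates a better trade-off

    best_n = int(np.argmin(bic_scores)) + 1  # expected to be 2 for this toy data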
@@ -116,8 +121,11 @@ model.

 .. _expectation_maximization:

-Estimation algorithm Expectation-maximization
------------------------------------------------
+|details-end|
+
+|details-start|
+**Estimation algorithm expectation-maximization**
+|details-split|

 The main difficulty in learning Gaussian mixture models from unlabeled
 data is that one usually doesn't know which points came from
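As a rough sketch of the EM fitting procedure discussed in this hunk (the data X below is an assumption for illustration):

    # Sketch: fit a two-component GaussianMixture by EM and inspect convergence.
    import numpy as np
    from sklearn.mixture import GaussianMixture

    rng = np.random.RandomState(0)
    X = np.vstack([rng.normal(-3, 1, size=(150, 2)),
                   rng.normal(3, 1, size=(150, 2))])

    gmm = GaussianMixture(n_components=2, covariance_type="full",
                          max_iter=100, random_state=0).fit(X)

    labels = gmm.predict(X)                  # hard assignment per sample
    responsibilities = gmm.predict_proba(X)  # soft assignments, as in the E step
    print(gmm.converged_, gmm.n_iter_)       # convergence flag and EM iterations used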
@@ -135,8 +143,11 @@ parameters to maximize the likelihood of the data given those
 assignments. Repeating this process is guaranteed to always converge
 to a local optimum.

-Choice of the Initialization Method
------------------------------------
+|details-end|
+
+|details-start|
+**Choice of the Initialization method**
+|details-split|

 There is a choice of four initialization methods (as well as inputting user defined
 initial means) to generate the initial centers for the model components:
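The list of methods itself falls outside this hunk; as a small sketch, the init_params values below assume a scikit-learn version that supports all four of them:

    # Sketch: the same toy data fitted with each assumed init_params value.
    import numpy as np
    from sklearn.mixture import GaussianMixture

    rng = np.random.RandomState(0)
    X = rng.normal(size=(300, 2))

    for method in ("kmeans", "k-means++", "random_from_data", "random"):
        gmm = GaussianMixture(n_components=3, init_params=method,
                              random_state=0).fit(X)
        print(method, gmm.n_iter_)  # iteration counts typically differ by initialization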
@@ -174,6 +185,8 @@ random

 .. _bgmm:

+|details-end|
+
 Variational Bayesian Gaussian Mixture
 =====================================

@@ -183,8 +196,7 @@ similar to the one defined by :class:`GaussianMixture`.

 .. _variational_inference:

-Estimation algorithm: variational inference
---------------------------------------------
+**Estimation algorithm: variational inference**

 Variational inference is an extension of expectation-maximization that
 maximizes a lower bound on model evidence (including
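A minimal sketch of the variational fit described here, using BayesianGaussianMixture with more components than the data needs (toy data assumed):

    # Sketch: variational inference drives the weights of unneeded components
    # toward zero rather than using every component available.
    import numpy as np
    from sklearn.mixture import BayesianGaussianMixture

    rng = np.random.RandomState(0)
    X = np.vstack([rng.normal(0, 1, size=(200, 2)),
                   rng.normal(6, 1, size=(200, 2))])

    bgmm = BayesianGaussianMixture(n_components=5, max_iter=200,
                                   random_state=0).fit(X)
    print(bgmm.weights_.round(3))  # most weight typically concentrates on ~2 components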
@@ -282,9 +294,9 @@ from the two resulting mixtures.
 ``weight_concentration_prior_type`` for different values of the parameter
 ``weight_concentration_prior``.

-
-Pros and cons of variational inference with :class:`BayesianGaussianMixture`
-----------------------------------------------------------------------------
+|details-start|
+**Pros and cons of variational inference with BayesianGaussianMixture**
+|details-split|

 Pros
 .....
@@ -324,11 +336,13 @@ Cons
 these biases and the data it might be possible to fit better models using a
 finite mixture.

+|details-end|

 .. _dirichlet_process:

-The Dirichlet Process
----------------------
+|details-start|
+**The Dirichlet Process**
+|details-split|

 Here we describe variational inference algorithms on Dirichlet process
 mixture. The Dirichlet process is a prior probability distribution on
@@ -361,3 +375,5 @@ use, one just specifies the concentration parameter and an upper bound
 on the number of mixture components (this upper bound, assuming it is
 higher than the "true" number of components, affects only algorithmic
 complexity, not the actual number of components used).
+
+|details-end|
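To make that last point concrete, a sketch of a Dirichlet-process prior where n_components acts only as an upper bound (the toy data and the concrete prior value are assumptions for illustration):

    # Sketch: Dirichlet process prior; the concentration parameter controls how
    # many of the (upper-bounded) components end up with non-negligible weight.
    import numpy as np
    from sklearn.mixture import BayesianGaussianMixture

    rng = np.random.RandomState(0)
    X = np.vstack([rng.normal(-4, 1, size=(150, 2)),
                   rng.normal(4, 1, size=(150, 2))])

    dpgmm = BayesianGaussianMixture(
        n_components=10,  # upper bound only, not the number actually used
        weight_concentration_prior_type="dirichlet_process",
        weight_concentration_prior=0.1,
        random_state=0,
    ).fit(X)
    print(dpgmm.weights_.round(3))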
