doc: adding dropdown for gaussian mixtures · scikit-learn/scikit-learn@07acad3 · GitHub

Commit 07acad3

doc: adding dropdown for gaussian mixtures
Signed-off-by: punndcoder28 <puneethk.2899@gmail.com>
1 parent 9cbcc1f commit 07acad3


doc/modules/mixture.rst

Lines changed: 33 additions & 14 deletions
@@ -68,8 +68,9 @@ full covariance.
 * See :ref:`sphx_glr_auto_examples_mixture_plot_gmm_pdf.py` for an example on plotting the
   density estimation.

-Pros and cons of class :class:`GaussianMixture`
------------------------------------------------
+|details-start|
+**Pros and cons of class :class:`GaussianMixture`**
+|details-split|

 Pros
 ....
@@ -93,8 +94,11 @@ Cons
   or information theoretical criteria to decide how many components to use
   in the absence of external cues.

-Selecting the number of components in a classical Gaussian Mixture Model
-------------------------------------------------------------------------
+|details-end|
+
+|details-start|
+**Selecting the number of components in a classical Gaussian Mixture model**
+|details-split|

 The BIC criterion can be used to select the number of components in a Gaussian
 Mixture in an efficient way. In theory, it recovers the true number of
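
The BIC-based selection described by the section wrapped in this hunk can be sketched roughly as follows; this is only an illustration, the toy data and candidate range below are invented and not part of the documentation::

    import numpy as np
    from sklearn.mixture import GaussianMixture

    # Toy data: two well-separated blobs (invented for this sketch).
    rng = np.random.RandomState(0)
    X = np.vstack([rng.normal(-3, 1, size=(200, 2)),
                   rng.normal(3, 1, size=(200, 2))])

    # Fit one model per candidate component count and keep the lowest BIC.
    candidates = range(1, 7)
    bics = [GaussianMixture(n_components=k, random_state=0).fit(X).bic(X)
            for k in candidates]
    print("BIC-selected number of components:",
          list(candidates)[int(np.argmin(bics))])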
@@ -116,8 +120,11 @@ model.

 .. _expectation_maximization:

-Estimation algorithm Expectation-maximization
------------------------------------------------
+|details-end|
+
+|details-start|
+**Estimation algorithm expectation-maximization**
+|details-split|

 The main difficulty in learning Gaussian mixture models from unlabeled
 data is that one usually doesn't know which points came from
@@ -135,8 +142,11 @@ parameters to maximize the likelihood of the data given those
 assignments. Repeating this process is guaranteed to always converge
 to a local optimum.

-Choice of the Initialization Method
------------------------------------
+|details-end|
+
+|details-start|
+**Choice of the Initialization method**
+|details-split|

 There is a choice of four initialization methods (as well as inputting user defined
 initial means) to generate the initial centers for the model components:
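
The expectation-maximization estimation and the initialization choice covered by the two sections wrapped above map onto the estimator roughly like this; a minimal sketch with synthetic data, using only the public ``fit`` attributes and the ``init_params`` parameter::

    import numpy as np
    from sklearn.mixture import GaussianMixture

    rng = np.random.RandomState(0)
    X = np.vstack([rng.normal(-4, 1, size=(150, 2)),
                   rng.normal(4, 1, size=(150, 2))])

    # fit() runs expectation-maximization; convergence can be inspected afterwards.
    gm = GaussianMixture(n_components=2, init_params="kmeans",
                         max_iter=100, random_state=0)
    gm.fit(X)
    print(gm.converged_, gm.n_iter_, gm.lower_bound_)

    # The alternative initialization strategies are chosen through the same
    # parameter, e.g. init_params="k-means++", "random" or "random_from_data".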
@@ -174,6 +184,8 @@ random

 .. _bgmm:

+|details-end|
+
 Variational Bayesian Gaussian Mixture
 =====================================

@@ -183,8 +195,9 @@ similar to the one defined by :class:`GaussianMixture`.

 .. _variational_inference:

-Estimation algorithm: variational inference
----------------------------------------------
+|details-start|
+**Estimation algorithm: variational inference**
+|details-split|

 Variational inference is an extension of expectation-maximization that
 maximizes a lower bound on model evidence (including
@@ -282,9 +295,11 @@ from the two resulting mixtures.
 ``weight_concentration_prior_type`` for different values of the parameter
 ``weight_concentration_prior``.

+|details-end|

-Pros and cons of variational inference with :class:`BayesianGaussianMixture`
-----------------------------------------------------------------------------
+|details-start|
+**Pros and cons of variational inference with :class:`BayesianGaussianMixture`**
+|details-split|

 Pros
 .....
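
The interplay of ``weight_concentration_prior_type`` and ``weight_concentration_prior`` referenced in the context lines of this hunk can be sketched as below; the data and prior values are illustrative only::

    import numpy as np
    from sklearn.mixture import BayesianGaussianMixture

    rng = np.random.RandomState(0)
    X = np.vstack([rng.normal(-3, 1, size=(200, 2)),
                   rng.normal(3, 1, size=(200, 2))])

    # A small concentration prior drives the weights of unneeded components
    # towards zero; a large one spreads the weight more evenly.
    for prior in (0.01, 1.0, 1000.0):
        bgm = BayesianGaussianMixture(
            n_components=5,
            weight_concentration_prior_type="dirichlet_distribution",
            weight_concentration_prior=prior,
            random_state=0,
        ).fit(X)
        print(prior, np.round(bgm.weights_, 3))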
@@ -324,11 +339,13 @@ Cons
 these biases and the data it might be possible to fit better models using a
 finite mixture.

+|details-end|

 .. _dirichlet_process:

-The Dirichlet Process
----------------------
+|details-start|
+**The Dirichlet Process**
+|details-split|

 Here we describe variational inference algorithms on Dirichlet process
 mixture. The Dirichlet process is a prior probability distribution on
@@ -361,3 +378,5 @@ use, one just specifies the concentration parameter and an upper bound
 on the number of mixture components (this upper bound, assuming it is
 higher than the "true" number of components, affects only algorithmic
 complexity, not the actual number of components used).
+
+|details-end|
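
The concentration parameter and upper bound discussed in the context lines of this last hunk correspond to the Dirichlet-process variant of :class:`BayesianGaussianMixture`; a minimal sketch with invented data and prior value::

    import numpy as np
    from sklearn.mixture import BayesianGaussianMixture

    rng = np.random.RandomState(0)
    X = np.vstack([rng.normal(-3, 1, size=(200, 2)),
                   rng.normal(3, 1, size=(200, 2))])

    # n_components is only the truncation level (the upper bound mentioned above);
    # components the data does not support end up with a weight close to zero.
    dpgmm = BayesianGaussianMixture(
        n_components=10,
        weight_concentration_prior_type="dirichlet_process",
        weight_concentration_prior=0.1,  # the concentration parameter
        random_state=0,
    ).fit(X)
    print("effective components:", int(np.sum(dpgmm.weights_ > 1e-2)))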
