doc: adding dropdown for gaussian mixtures · scikit-learn/scikit-learn@faaaae0 · GitHub

Commit faaaae0

doc: adding dropdown for gaussian mixtures
1 parent 9cbcc1f commit faaaae0

File tree: 1 file changed (+27, -10 lines)


doc/modules/mixture.rst

Lines changed: 27 additions & 10 deletions
@@ -68,8 +68,9 @@ full covariance.
 * See :ref:`sphx_glr_auto_examples_mixture_plot_gmm_pdf.py` for an example on plotting the
   density estimation.
 
+|details-start|
 Pros and cons of class :class:`GaussianMixture`
------------------------------------------------
+|details-split|
 
 Pros
 ....
@@ -92,9 +93,12 @@ Cons
 components it has access to, needing held-out data
 or information theoretical criteria to decide how many components to use
 in the absence of external cues.
+|details-end|
 
-Selecting the number of components in a classical Gaussian Mixture Model
-------------------------------------------------------------------------
+
+|details-start|
+Selecting the number of components in a classical Gaussian Mixture model
+|details-split|
 
 The BIC criterion can be used to select the number of components in a Gaussian
 Mixture in an efficient way. In theory, it recovers the true number of
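
The hunk above ends on the BIC-based selection of the number of components. As a hedged aside (not part of this commit's diff), a minimal sketch of that workflow with scikit-learn's GaussianMixture.bic could look as follows; the toy data and the candidate range are invented for illustration:

# Minimal sketch (not part of this commit): pick n_components by BIC.
# Uses scikit-learn's GaussianMixture; the toy data below is made up.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.RandomState(0)
X = np.vstack([rng.normal(loc, 0.5, size=(100, 2)) for loc in (0.0, 3.0, 6.0)])

bic_scores = {}
for k in range(1, 7):
    gmm = GaussianMixture(n_components=k, random_state=0).fit(X)
    bic_scores[k] = gmm.bic(X)  # lower BIC is better

best_k = min(bic_scores, key=bic_scores.get)
print(best_k, bic_scores[best_k])
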
@@ -115,9 +119,12 @@ model.
 of model selection performed with classical Gaussian mixture.
 
 .. _expectation_maximization:
+|details-end|
+
 
-Estimation algorithm Expectation-maximization
------------------------------------------------
+|details-start|
+Estimation algorithm expectation-maximization
+|details-split|
 
 The main difficulty in learning Gaussian mixture models from unlabeled
 data is that one usually doesn't know which points came from
@@ -134,9 +141,11 @@ each component of the model. Then, one tweaks the
 parameters to maximize the likelihood of the data given those
 assignments. Repeating this process is guaranteed to always converge
 to a local optimum.
+|details-end|
 
-Choice of the Initialization Method
------------------------------------
+|details-start|
+Choice of the Initialization method
+|details-split|
 
 There is a choice of four initialization methods (as well as inputting user defined
 initial means) to generate the initial centers for the model components:
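
The hunk above covers both the EM fitting loop and the renamed initialization-method section. As a hedged illustration (not part of the diff), a minimal sketch of fitting by EM with an explicit init_params choice might look like this; the option names ('kmeans', 'k-means++', 'random', 'random_from_data') are assumed from recent scikit-learn releases, and the toy data is invented:

# Minimal sketch (not part of this commit): fit via EM with an explicit
# initialization method.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.RandomState(0)
X = np.vstack([rng.normal(-2, 1, size=(150, 2)), rng.normal(2, 1, size=(150, 2))])

gmm = GaussianMixture(n_components=2, init_params="k-means++", n_init=3,
                      random_state=0)
gmm.fit(X)                        # EM: E-step assignments, M-step parameter updates
print(gmm.means_)                 # converges to a local optimum of the likelihood
print(gmm.converged_, gmm.n_iter_)
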
@@ -173,6 +182,8 @@ random
   using different initializations in Gaussian Mixture.
 
 .. _bgmm:
+|details-end|
+
 
 Variational Bayesian Gaussian Mixture
 =====================================
@@ -183,8 +194,9 @@ similar to the one defined by :class:`GaussianMixture`.
 
 .. _variational_inference:
 
+|details-start|
 Estimation algorithm: variational inference
---------------------------------------------
+|details-split|
 
 Variational inference is an extension of expectation-maximization that
 maximizes a lower bound on model evidence (including
@@ -281,10 +293,12 @@ from the two resulting mixtures.
   :class:`BayesianGaussianMixture` with different
   ``weight_concentration_prior_type`` for different values of the parameter
   ``weight_concentration_prior``.
+|details-end|
 
 
+|details-start|
 Pros and cons of variational inference with :class:`BayesianGaussianMixture`
-----------------------------------------------------------------------------
+|details-split|
 
 Pros
 .....
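
The hunk above references the ``weight_concentration_prior_type`` and ``weight_concentration_prior`` parameters of :class:`BayesianGaussianMixture`. As a hedged sketch (not part of the diff), comparing the two prior types on invented toy data could look like this:

# Minimal sketch (not part of this commit): compare the two
# weight_concentration_prior_type options on the same toy data.
import numpy as np
from sklearn.mixture import BayesianGaussianMixture

rng = np.random.RandomState(0)
X = np.vstack([rng.normal(0, 1, size=(200, 2)), rng.normal(5, 1, size=(200, 2))])

for prior_type in ("dirichlet_process", "dirichlet_distribution"):
    bgmm = BayesianGaussianMixture(
        n_components=5,
        weight_concentration_prior_type=prior_type,
        weight_concentration_prior=0.01,  # small value favors fewer active components
        random_state=0,
    ).fit(X)
    print(prior_type, np.round(bgmm.weights_, 3))
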
@@ -323,12 +337,14 @@ Cons
 the Dirichlet process if used), and whenever there is a mismatch between
 these biases and the data it might be possible to fit better models using a
 finite mixture.
+|details-end|
 
 
 .. _dirichlet_process:
 
+|details-start|
 The Dirichlet Process
----------------------
+|details-split|
 
 Here we describe variational inference algorithms on Dirichlet process
 mixture. The Dirichlet process is a prior probability distribution on
@@ -361,3 +377,4 @@ use, one just specifies the concentration parameter and an upper bound
 on the number of mixture components (this upper bound, assuming it is
 higher than the "true" number of components, affects only algorithmic
 complexity, not the actual number of components used).
+|details-end|
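
The final hunk closes the Dirichlet process discussion, where one only specifies a concentration parameter and an upper bound on the number of components. A hedged sketch of that usage (not part of the diff; toy data and parameter values invented):

# Minimal sketch (not part of this commit): the Dirichlet process variant needs
# only a concentration parameter and an upper bound on the component count;
# unused components end up with near-zero weights.
import numpy as np
from sklearn.mixture import BayesianGaussianMixture

rng = np.random.RandomState(0)
X = np.vstack([rng.normal(loc, 0.3, size=(100, 1)) for loc in (-3.0, 0.0, 3.0)])

dpgmm = BayesianGaussianMixture(
    n_components=10,                          # upper bound, larger than the "true" 3
    weight_concentration_prior_type="dirichlet_process",
    weight_concentration_prior=0.1,           # concentration parameter
    random_state=0,
).fit(X)

# Components with non-negligible weight are the ones actually used.
print(np.sum(dpgmm.weights_ > 1e-2))
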
