@@ -200,7 +200,7 @@ section.
* :ref:`example_feature_selection_plot_rfe_with_cross_validation.py`,
* :ref:`example_model_selection_grid_search_digits.py`,
* :ref:`example_model_selection_grid_search_text_feature_extraction.py`,
- * :ref:`example_plot_cv_predict.py`,
+ * :ref:`example_plot_cv_predict.py`.

Cross validation iterators
==========================
@@ -316,7 +316,7 @@ Potential users of LOO for model selection should weigh a few known caveats.
When compared with :math:`k`-fold cross validation, one builds :math:`n` models
from :math:`n` samples instead of :math:`k` models, where :math:`n > k`.
Moreover, each is trained on :math:`n - 1` samples rather than
- :math:`(k-1)n / k`. In both ways, assuming :math:`k` is not too large
+ :math:`(k-1) n / k`. In both ways, assuming :math:`k` is not too large
and :math:`k < n`, LOO is more computationally expensive than :math:`k`-fold
cross validation.

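To make this cost comparison concrete, here is a minimal sketch (not part of
the patch above, and assuming the pre-0.18 ``sklearn.cross_validation`` API
that this page documents)::

    from sklearn.cross_validation import KFold, LeaveOneOut

    n = 100                        # number of samples
    loo = LeaveOneOut(n)           # one split per sample
    kf = KFold(n, n_folds=5)       # k = 5 splits in total

    # LOO trains n = 100 models, each on n - 1 = 99 samples;
    # 5-fold trains only 5 models, each on (k-1)n/k = 80 samples.
    print(sum(1 for _ in loo))     # 100
    print(sum(1 for _ in kf))      # 5
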
@@ -335,17 +335,17 @@ fold cross validation should be preferred to LOO.

.. topic:: References:

- * http://www.faqs.org/faqs/ai-faq/neural-nets/part3/section-12.html
+ * `<http://www.faqs.org/faqs/ai-faq/neural-nets/part3/section-12.html>`_;
* T. Hastie, R. Tibshirani, J. Friedman, `The Elements of Statistical Learning
- <http://www-stat.stanford.edu/~tibs/ElemStatLearn>`_, Springer 2009
+ <http://www-stat.stanford.edu/~tibs/ElemStatLearn>`_, Springer 2009;
* L. Breiman, P. Spector, `Submodel selection and evaluation in regression: The X-random case
- <http://digitalassets.lib.berkeley.edu/sdtr/ucb/text/197.pdf>`_, International Statistical Review 1992
+ <http://digitalassets.lib.berkeley.edu/sdtr/ucb/text/197.pdf>`_, International Statistical Review 1992;
* R. Kohavi, `A Study of Cross-Validation and Bootstrap for Accuracy Estimation and Model Selection
- <http://www.cs.iastate.edu/~jtian/cs573/Papers/Kohavi-IJCAI-95.pdf>`_, Intl. Jnt. Conf. AI
+ <http://www.cs.iastate.edu/~jtian/cs573/Papers/Kohavi-IJCAI-95.pdf>`_, Intl. Jnt. Conf. AI;
* R. Bharat Rao, G. Fung, R. Rosales, `On the Dangers of Cross-Validation. An Experimental Evaluation
- <http://www.siam.org/proceedings/datamining/2008/dm08_54_Rao.pdf>`_, SIAM 2008
+ <http://www.siam.org/proceedings/datamining/2008/dm08_54_Rao.pdf>`_, SIAM 2008;
* G. James, D. Witten, T. Hastie, R. Tibshirani, `An Introduction to
- Statistical Learning <http://www-bcf.usc.edu/~gareth/ISL>`_, Springer 2013
+ Statistical Learning <http://www-bcf.usc.edu/~gareth/ISL>`_, Springer 2013.

Leave-P-Out - LPO
@@ -384,7 +384,7 @@ cross-validation folds.
Each training set is thus constituted by all the samples except the ones
related to a specific label.

- For example, in the cases of multiple experiments, *LOLO* can be used to
+ For example, in the cases of multiple experiments, LOLO can be used to
create a cross-validation based on the different experiments: we create
a training set using the samples of all the experiments except one::

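The literal block that follows ``::`` in the source file falls outside this
hunk; as a stand-in, here is a minimal sketch of the idea (again assuming the
pre-0.18 ``sklearn.cross_validation`` API)::

    import numpy as np
    from sklearn.cross_validation import LeaveOneLabelOut

    # Four samples coming from two experiments, identified by ``labels``.
    labels = np.array([1, 1, 2, 2])

    for train_index, test_index in LeaveOneLabelOut(labels):
        # Each iteration holds out every sample of one experiment.
        print(train_index, test_index)
    # Expected output, one split per distinct label:
    # [2 3] [0 1]
    # [0 1] [2 3]
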
@@ -405,9 +405,10 @@ for cross-validation against time-based splits.

.. warning::

- Contrary to :class:`StratifiedKFold`, the ``labels`` of
- :class:`LeaveOneLabelOut` should not encode the target class to predict:
- the goal of :class:`StratifiedKFold` is to rebalance dataset classes across
+ Contrary to :class:`StratifiedKFold`,
+ the ``labels`` of :class:`LeaveOneLabelOut` should not encode
+ the target class to predict: the goal of :class:`StratifiedKFold`
+ is to rebalance dataset classes across
the train / test split to ensure that the train and test folds have
approximately the same percentage of samples of each class while
:class:`LeaveOneLabelOut` will do the opposite by ensuring that the samples
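To illustrate the warning's point, a minimal sketch (not part of the patch,
and assuming the same pre-0.18 API): :class:`StratifiedKFold` consumes the
target ``y`` and keeps class proportions similar across folds, while
:class:`LeaveOneLabelOut` consumes an independent ``labels`` array and sends
all samples sharing a label into the same test fold::

    import numpy as np
    from sklearn.cross_validation import LeaveOneLabelOut, StratifiedKFold

    y = np.array([0, 0, 1, 1, 0, 0, 1, 1])       # target classes
    labels = np.array([1, 1, 1, 1, 2, 2, 2, 2])  # e.g. experiment ids

    # Splits on the target: each test fold mixes both classes
    # in roughly the dataset's proportions.
    for train, test in StratifiedKFold(y, n_folds=2):
        print("stratified test classes:", y[test])

    # Splits on the label: each test fold is exactly one experiment.
    for train, test in LeaveOneLabelOut(labels):
        print("held-out experiment:", labels[test])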