Merge pull request #6412 from Naereen/patch-3 · rhiever/scikit-learn@e631f2c · GitHub


Commit e631f2c

Merge pull request scikit-learn#6412 from Naereen/patch-3
Update svm.rst
2 parents c9d66db + 9f4b53e commit e631f2c

File tree

2 files changed: +27, -26 lines


doc/modules/cross_validation.rst

Lines changed: 13 additions & 12 deletions
@@ -200,7 +200,7 @@ section.
 * :ref:`example_feature_selection_plot_rfe_with_cross_validation.py`,
 * :ref:`example_model_selection_grid_search_digits.py`,
 * :ref:`example_model_selection_grid_search_text_feature_extraction.py`,
-* :ref:`example_plot_cv_predict.py`,
+* :ref:`example_plot_cv_predict.py`.

 Cross validation iterators
 ==========================
@@ -316,7 +316,7 @@ Potential users of LOO for model selection should weigh a few known caveats.
 When compared with :math:`k`-fold cross validation, one builds :math:`n` models
 from :math:`n` samples instead of :math:`k` models, where :math:`n > k`.
 Moreover, each is trained on :math:`n - 1` samples rather than
-:math:`(k-1)n / k`. In both ways, assuming :math:`k` is not too large
+:math:`(k-1) n / k`. In both ways, assuming :math:`k` is not too large
 and :math:`k < n`, LOO is more computationally expensive than :math:`k`-fold
 cross validation.
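The cost argument in the hunk above can be made concrete with a minimal sketch
(not from the patch itself), assuming the pre-0.18 ``sklearn.cross_validation``
iterators that these docs describe::

    from sklearn.cross_validation import KFold, LeaveOneOut

    n, k = 100, 5

    loo = LeaveOneOut(n)          # n splits: each model trains on n - 1 = 99 samples
    kf = KFold(n, n_folds=k)      # k splits: each model trains on (k - 1) * n / k = 80 samples

    print(len(list(loo)), len(list(kf)))   # 100 vs. 5 train/test splits
    train_idx, test_idx = next(iter(kf))
    print(len(train_idx))                  # 80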

@@ -335,17 +335,17 @@ fold cross validation should be preferred to LOO.

 .. topic:: References:

-    * http://www.faqs.org/faqs/ai-faq/neural-nets/part3/section-12.html
+    * `<http://www.faqs.org/faqs/ai-faq/neural-nets/part3/section-12.html>`_;
     * T. Hastie, R. Tibshirani, J. Friedman, `The Elements of Statistical Learning
-      <http://www-stat.stanford.edu/~tibs/ElemStatLearn>`_, Springer 2009
+      <http://www-stat.stanford.edu/~tibs/ElemStatLearn>`_, Springer 2009;
     * L. Breiman, P. Spector `Submodel selection and evaluation in regression: The X-random case
-      <http://digitalassets.lib.berkeley.edu/sdtr/ucb/text/197.pdf>`_, International Statistical Review 1992
+      <http://digitalassets.lib.berkeley.edu/sdtr/ucb/text/197.pdf>`_, International Statistical Review 1992;
     * R. Kohavi, `A Study of Cross-Validation and Bootstrap for Accuracy Estimation and Model Selection
-      <http://www.cs.iastate.edu/~jtian/cs573/Papers/Kohavi-IJCAI-95.pdf>`_, Intl. Jnt. Conf. AI
+      <http://www.cs.iastate.edu/~jtian/cs573/Papers/Kohavi-IJCAI-95.pdf>`_, Intl. Jnt. Conf. AI;
     * R. Bharat Rao, G. Fung, R. Rosales, `On the Dangers of Cross-Validation. An Experimental Evaluation
-      <http://www.siam.org/proceedings/datamining/2008/dm08_54_Rao.pdf>`_, SIAM 2008
+      <http://www.siam.org/proceedings/datamining/2008/dm08_54_Rao.pdf>`_, SIAM 2008;
     * G. James, D. Witten, T. Hastie, R Tibshirani, `An Introduction to
-      Statistical Learning <http://www-bcf.usc.edu/~gareth/ISL>`_, Springer 2013
+      Statistical Learning <http://www-bcf.usc.edu/~gareth/ISL>`_, Springer 2013.


 Leave-P-Out - LPO
@@ -384,7 +384,7 @@ cross-validation folds.
 Each training set is thus constituted by all the samples except the ones
 related to a specific label.

-For example, in the cases of multiple experiments, *LOLO* can be used to
+For example, in the cases of multiple experiments, LOLO can be used to
 create a cross-validation based on the different experiments: we create
 a training set using the samples of all the experiments except one::
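A minimal usage sketch of this idea (not from the patch itself, assuming the
pre-0.18 ``sklearn.cross_validation.LeaveOneLabelOut`` described here)::

    import numpy as np
    from sklearn.cross_validation import LeaveOneLabelOut

    # One label per sample, identifying the experiment it came from.
    labels = np.array([1, 1, 2, 2, 3, 3])

    for train_index, test_index in LeaveOneLabelOut(labels):
        # The test fold is exactly one experiment; the training fold is
        # made of the samples from all the other experiments.
        print("train:", train_index, "test:", test_index)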

@@ -405,9 +405,10 @@ for cross-validation against time-based splits.

 .. warning::

-    Contrary to :class:`StratifiedKFold`, the ``labels`` of
-    :class:`LeaveOneLabelOut` should not encode the target class to predict:
-    the goal of :class:`StratifiedKFold` is to rebalance dataset classes across
+    Contrary to :class:`StratifiedKFold`,
+    the ``labels`` of :class:`LeaveOneLabelOut` should not encode
+    the target class to predict: the goal of :class:`StratifiedKFold`
+    is to rebalance dataset classes across
     the train / test split to ensure that the train and test folds have
     approximately the same percentage of samples of each class while
     :class:`LeaveOneLabelOut` will do the opposite by ensuring that the samples
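To illustrate the contrast drawn in this warning, a small sketch (not from the
patch): with :class:`StratifiedKFold` the array passed in is the target ``y``
itself, and every fold keeps roughly the same class proportions::

    import numpy as np
    from sklearn.cross_validation import StratifiedKFold

    y = np.array([0, 0, 0, 0, 1, 1, 1, 1])   # target classes, not group labels

    for train_index, test_index in StratifiedKFold(y, n_folds=2):
        print(y[test_index])   # each test fold contains both classes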

doc/modules/svm.rst

Lines changed: 14 additions & 14 deletions
@@ -238,7 +238,7 @@ and use ``decision_function`` instead of ``predict_proba``.

 * Wu, Lin and Weng,
 `"Probability estimates for multi-class classification by pairwise coupling"
-<http://www.csie.ntu.edu.tw/~cjlin/papers/svmprob/svmprob.pdf>`_.
+<http://www.csie.ntu.edu.tw/~cjlin/papers/svmprob/svmprob.pdf>`_,
 JMLR 5:975-1005, 2004.

@@ -380,7 +380,7 @@ Tips on Practical Use
 * **Avoiding data copy**: For :class:`SVC`, :class:`SVR`, :class:`NuSVC` and
 :class:`NuSVR`, if the data passed to certain methods is not C-ordered
 contiguous, and double precision, it will be copied before calling the
-underlying C implementation. You can check whether a give numpy array is
+underlying C implementation. You can check whether a given numpy array is
 C-contiguous by inspecting its ``flags`` attribute.

 For :class:`LinearSVC` (and :class:`LogisticRegression
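The ``flags`` check mentioned in this hunk is plain NumPy; a minimal sketch
(not from the patch itself)::

    import numpy as np

    X = np.random.rand(200, 10)          # C-ordered, double precision by default

    X_view = X[:, :5]                    # column slice: a non C-contiguous view
    print(X_view.flags['C_CONTIGUOUS'])  # False -> the SVM code would copy it

    X_ready = np.ascontiguousarray(X_view, dtype=np.float64)
    print(X_ready.flags['C_CONTIGUOUS']) # True -> passed through without another copy
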
@@ -588,7 +588,7 @@ Its dual is

 where :math:`e` is the vector of all ones, :math:`C > 0` is the upper bound,
 :math:`Q` is an :math:`n` by :math:`n` positive semidefinite matrix,
-:math:`Q_{ij} \equiv y_i y_j K(x_i, x_j)` Where :math:`K(x_i, x_j) = \phi (x_i)^T \phi (x_j)`
+:math:`Q_{ij} \equiv y_i y_j K(x_i, x_j)`, where :math:`K(x_i, x_j) = \phi (x_i)^T \phi (x_j)`
 is the kernel. Here training vectors are implicitly mapped into a higher
 (maybe infinite) dimensional space by the function :math:`\phi`.
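As a rough illustration of how :math:`Q` is formed (a sketch, not from the
patch; the RBF kernel is an arbitrary choice here)::

    import numpy as np
    from sklearn.metrics.pairwise import rbf_kernel

    X = np.random.rand(6, 3)             # training vectors x_i
    y = np.array([1, -1, 1, -1, 1, -1])  # labels y_i in {-1, +1}

    K = rbf_kernel(X, gamma=0.5)         # K(x_i, x_j) = phi(x_i)^T phi(x_j)
    Q = np.outer(y, y) * K               # Q_ij = y_i * y_j * K(x_i, x_j)

    # Q inherits positive semidefiniteness from K.
    print(np.linalg.eigvalsh(Q).min() > -1e-10)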

@@ -613,14 +613,14 @@ term :math:`\rho` :
 .. topic:: References:

 * `"Automatic Capacity Tuning of Very Large VC-dimension Classifiers"
-<http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.17.7215>`_
-I Guyon, B Boser, V Vapnik - Advances in neural information
-processing 1993,
+<http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.17.7215>`_,
+I. Guyon, B. Boser, V. Vapnik - Advances in neural information
+processing 1993.


 * `"Support-vector networks"
-<http://www.springerlink.com/content/k238jx04hm87j80g/>`_
-C. Cortes, V. Vapnik, Machine Leaming, 20, 273-297 (1995)
+<http://www.springerlink.com/content/k238jx04hm87j80g/>`_,
+C. Cortes, V. Vapnik - Machine Learning, 20, 273-297 (1995).


@@ -681,9 +681,9 @@ term :math:`\rho`
 .. topic:: References:

 * `"A Tutorial on Support Vector Regression"
-<http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.114.4288>`_
-Alex J. Smola, Bernhard Schölkopf -Statistics and Computing archive
-Volume 14 Issue 3, August 2004, p. 199-222
+<http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.114.4288>`_,
+Alex J. Smola, Bernhard Schölkopf - Statistics and Computing archive
+Volume 14 Issue 3, August 2004, p. 199-222.


 .. _svm_implementation_details:
@@ -703,9 +703,9 @@ computations. These libraries are wrapped using C and Cython.
 used, please refer to

 - `LIBSVM: a library for Support Vector Machines
-<http://www.csie.ntu.edu.tw/~cjlin/papers/libsvm.pdf>`_
+<http://www.csie.ntu.edu.tw/~cjlin/papers/libsvm.pdf>`_.

-- `LIBLINEAR -- A Library for Large Linear Classification
-<http://www.csie.ntu.edu.tw/~cjlin/liblinear/>`_
+- `LIBLINEAR -- a library for Large Linear Classification
+<http://www.csie.ntu.edu.tw/~cjlin/liblinear/>`_.

0 commit comments
