DOC Add dropdowns to module 1.4 SVM (#26641) · scikit-learn/scikit-learn@7b94d6b · GitHub

Commit 7b94d6b

Tech-Netiums authored and GaelVaroquaux committed

DOC Add dropdowns to module 1.4 SVM (#26641)

Co-authored-by: Gael Varoquaux <gael.varoquaux@normalesup.org>

1 parent bdf36ea · commit 7b94d6b

File tree

1 file changed: +24 −9 lines changed


doc/modules/svm.rst

Lines changed: 24 additions & 9 deletions
@@ -149,6 +149,10 @@ multi-class strategy, thus training `n_classes` models.
 See :ref:`svm_mathematical_formulation` for a complete description of
 the decision function.
 
+|details-start|
+**Details on multi-class strategies**
+|details-split|
+
 Note that the :class:`LinearSVC` also implements an alternative multi-class
 strategy, the so-called multi-class SVM formulated by Crammer and Singer
 [#8]_, by using the option ``multi_class='crammer_singer'``. In practice,
@@ -199,6 +203,8 @@ Then ``dual_coef_`` looks like this:
 |for SVs of class 0 |for SVs of class 1 |for SVs of class 2 |
 +--------------------------------------------------------------------------+-------------------------------------------------+-------------------------------------------------+
 
+|details-end|
+
 .. topic:: Examples:
 
 * :ref:`sphx_glr_auto_examples_svm_plot_iris_svc.py`,
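The ``dual_coef_`` layout the hunk above tables out (columns grouped by the class of each support vector, one row per one-vs-one pairing) can be inspected directly. A minimal sketch on a hypothetical three-class dataset; the data and names below are illustrative:

```python
# Sketch: shape of dual_coef_ for a 3-class SVC, matching the table above.
from sklearn.datasets import make_blobs
from sklearn.svm import SVC

X, y = make_blobs(n_samples=90, centers=3, random_state=0)
clf = SVC(kernel="linear").fit(X, y)

# For n_classes classes, dual_coef_ has n_classes - 1 rows; its columns
# are grouped by the class of each support vector.
print(clf.dual_coef_.shape)  # (2, n_SV)
print(clf.n_support_)        # number of support vectors per class
```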
@@ -505,9 +511,9 @@ is advised to use :class:`~sklearn.model_selection.GridSearchCV` with
 * :ref:`sphx_glr_auto_examples_svm_plot_rbf_parameters.py`
 * :ref:`sphx_glr_auto_examples_svm_plot_svm_nonlinear.py`
 
-
-Custom Kernels
---------------
+|details-start|
+**Custom Kernels**
+|details-split|
 
 You can define your own kernels by either giving the kernel as a
 python function or by precomputing the Gram matrix.
@@ -571,6 +577,7 @@ test vectors must be provided:
 >>> clf.predict(gram_test)
 array([0, 1, 0])
 
+|details-end|
 
 .. _svm_mathematical_formulation:
 
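The precomputed-Gram workflow shown in the hunk above (fit on a train-by-train Gram matrix, predict with a test-by-train one) can be sketched end to end. The tiny dataset below is made up for illustration; only the `kernel="precomputed"` usage comes from the documented section:

```python
# Sketch: SVC with a precomputed linear-kernel Gram matrix.
import numpy as np
from sklearn.svm import SVC

# Hypothetical toy data (not from the diff).
X_train = np.array([[0., 0.], [1., 1.], [1., 0.], [0., 1.]])
y_train = np.array([0, 1, 1, 0])
X_test = np.array([[0., 0.5]])

clf = SVC(kernel="precomputed")
gram_train = X_train @ X_train.T   # train x train dot products
clf.fit(gram_train, y_train)

# At predict time the Gram matrix between test and training vectors
# must be provided, as the documentation states.
gram_test = X_test @ X_train.T     # test x train
print(clf.predict(gram_test))
```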
@@ -667,8 +674,9 @@ term :math:`b`
 estimator used is :class:`~sklearn.linear_model.Ridge` regression,
 the relation between them is given as :math:`C = \frac{1}{alpha}`.
 
-LinearSVC
----------
+|details-start|
+**LinearSVC**
+|details-split|
 
 The primal problem can be equivalently formulated as
 
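The LinearSVC section wrapped in a dropdown above discusses the primal formulation. A minimal sketch, assuming a made-up binary dataset: with `dual=False`, :class:`LinearSVC` optimizes that primal problem directly, and the fitted model exposes the weight vector and intercept of the linear decision function:

```python
# Sketch: solving the LinearSVC primal problem directly with dual=False.
from sklearn.datasets import make_classification
from sklearn.svm import LinearSVC

# Hypothetical binary toy problem (not from the diff).
X, y = make_classification(n_samples=100, n_features=5, random_state=0)

# dual=False optimizes the primal formulation; only the linear kernel
# is available, so the model is fully described by coef_ and intercept_.
primal = LinearSVC(dual=False, C=1.0, random_state=0).fit(X, y)
print(primal.coef_.shape, primal.intercept_.shape)  # (1, 5) (1,)
```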
@@ -683,10 +691,13 @@ does not involve inner products between samples, so the famous kernel trick
 cannot be applied. This is why only the linear kernel is supported by
 :class:`LinearSVC` (:math:`\phi` is the identity function).
 
+|details-end|
+
 .. _nu_svc:
 
-NuSVC
------
+|details-start|
+**NuSVC**
+|details-split|
 
 The :math:`\nu`-SVC formulation [#7]_ is a reparameterization of the
 :math:`C`-SVC and therefore mathematically equivalent.
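The :math:`\nu`-SVC reparameterization described above replaces ``C`` with ``nu``, which lower-bounds the fraction of support vectors (and upper-bounds the fraction of margin errors). A sketch on a hypothetical dataset; the data and the 0.3 value are illustrative:

```python
# Sketch: nu in NuSVC lower-bounds the support-vector fraction.
from sklearn.datasets import make_classification
from sklearn.svm import NuSVC

# Hypothetical balanced binary problem (not from the diff).
X, y = make_classification(n_samples=100, n_features=4, random_state=0)
clf = NuSVC(nu=0.3).fit(X, y)

# At least roughly nu * n_samples points become support vectors.
frac_sv = clf.support_vectors_.shape[0] / X.shape[0]
print(frac_sv)
```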
@@ -699,6 +710,7 @@ to a sample that lies on the wrong side of its margin boundary: it is either
 misclassified, or it is correctly classified but does not lie beyond the
 margin.
 
+|details-end|
 
 SVR
 ---
@@ -747,8 +759,9 @@ which holds the difference :math:`\alpha_i - \alpha_i^*`, ``support_vectors_`` w
 holds the support vectors, and ``intercept_`` which holds the independent
 term :math:`b`
 
-LinearSVR
----------
+|details-start|
+**LinearSVR**
+|details-split|
 
 The primal problem can be equivalently formulated as
 
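The epsilon-insensitive loss that the LinearSVR section above says is "directly optimized" can be written out explicitly: residuals smaller than :math:`\varepsilon` contribute nothing. A sketch on synthetic data; the dataset, ``epsilon=0.1``, and ``C=10.0`` are illustrative choices, not from the commit:

```python
# Sketch: LinearSVR and the epsilon-insensitive loss, computed by hand.
import numpy as np
from sklearn.svm import LinearSVR

# Hypothetical noisy linear data, y ~ 3x (not from the diff).
rng = np.random.RandomState(0)
X = rng.uniform(-1, 1, size=(200, 1))
y = 3 * X.ravel() + rng.normal(scale=0.05, size=200)

reg = LinearSVR(epsilon=0.1, C=10.0, max_iter=10000).fit(X, y)
pred = reg.predict(X)

# Epsilon-insensitive loss: errors below epsilon are ignored entirely.
eps_loss = np.maximum(np.abs(y - pred) - 0.1, 0.0).mean()
print(float(eps_loss))  # small: most residuals fall inside the epsilon tube
```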
@@ -760,6 +773,8 @@ where we make use of the epsilon-insensitive loss, i.e. errors of less than
 :math:`\varepsilon` are ignored. This is the form that is directly optimized
 by :class:`LinearSVR`.
 
+|details-end|
+
 .. _svm_implementation_details:
 
 Implementation details

0 commit comments
