@@ -149,6 +149,10 @@ multi-class strategy, thus training `n_classes` models.
See :ref:`svm_mathematical_formulation` for a complete description of
the decision function.

+|details-start|
+**Details on multi-class strategies**
+|details-split|
+
Note that the :class:`LinearSVC` also implements an alternative multi-class
strategy, the so-called multi-class SVM formulated by Crammer and Singer
[#8]_, by using the option ``multi_class='crammer_singer'``. In practice,
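For context on the ``multi_class='crammer_singer'`` option referenced in this hunk, a minimal sketch (the iris data and variable names are illustrative, not part of the patch)::

    from sklearn.datasets import load_iris
    from sklearn.svm import LinearSVC

    X, y = load_iris(return_X_y=True)
    # One joint optimization over all classes, rather than one-vs-rest
    clf = LinearSVC(multi_class='crammer_singer').fit(X, y)
    clf.coef_.shape  # one weight vector per class: (3, 4)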
@@ -199,6 +203,8 @@ Then ``dual_coef_`` looks like this:
|for SVs of class 0 |for SVs of class 1 |for SVs of class 2 |
+--------------------------------------------------------------------------+-------------------------------------------------+-------------------------------------------------+

+|details-end|
+
.. topic:: Examples:

* :ref:`sphx_glr_auto_examples_svm_plot_iris_svc.py`,
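As an illustrative check of the ``dual_coef_`` layout summarized in the table above, a sketch on a toy three-class problem (the data is assumed, not from the patch)::

    import numpy as np
    from sklearn.svm import SVC

    X = np.array([[0, 0], [1, 1], [2, 2], [3, 3], [4, 4], [5, 5]])
    y = np.array([0, 0, 1, 1, 2, 2])
    clf = SVC(kernel='linear').fit(X, y)
    # One row per "other" class, one column per support vector
    clf.dual_coef_.shape  # (n_classes - 1, n_support_vectors)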
@@ -505,9 +511,9 @@ is advised to use :class:`~sklearn.model_selection.GridSearchCV` with
* :ref:`sphx_glr_auto_examples_svm_plot_rbf_parameters.py`
* :ref:`sphx_glr_auto_examples_svm_plot_svm_nonlinear.py`

-
-Custom Kernels
---------------
+|details-start|
+**Custom Kernels**
+|details-split|

You can define your own kernels by either giving the kernel as a
python function or by precomputing the Gram matrix.
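For the first route, a minimal sketch of passing a Python function as the kernel (``my_kernel`` and the toy data are illustrative)::

    import numpy as np
    from sklearn import svm

    def my_kernel(X, Y):
        # A kernel must return the matrix of similarities K(X[i], Y[j])
        return np.dot(X, Y.T)

    X = np.array([[0., 0.], [1., 1.], [1., 0.]])
    y = [0, 1, 0]
    clf = svm.SVC(kernel=my_kernel).fit(X, y)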
@@ -571,6 +577,7 @@ test vectors must be provided:
>>> clf.predict(gram_test)
array([0, 1, 0])

+|details-end|

.. _svm_mathematical_formulation:

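For the precomputed route exercised by the doctest above, a self-contained sketch (the training data and the ``gram_train`` name are assumptions; ``gram_test`` mirrors the doctest)::

    import numpy as np
    from sklearn.svm import SVC

    X_train = np.array([[0., 0.], [1., 1.], [2., 0.]])
    y_train = [0, 1, 0]

    # Gram matrix of the training set: K[i, j] = k(x_i, x_j)
    gram_train = np.dot(X_train, X_train.T)
    clf = SVC(kernel='precomputed').fit(gram_train, y_train)

    # At predict time, the kernel between test and training vectors is needed
    X_test = np.array([[1., 0.]])
    gram_test = np.dot(X_test, X_train.T)
    clf.predict(gram_test)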
@@ -667,8 +674,9 @@ term :math:`b`
estimator used is :class:`~sklearn.linear_model.Ridge` regression,
the relation between them is given as :math:`C = \frac{1}{\alpha}`.

-LinearSVC
----------
+|details-start|
+**LinearSVC**
+|details-split|

The primal problem can be equivalently formulated as

@@ -683,10 +691,13 @@ does not involve inner products between samples, so the famous kernel trick
cannot be applied. This is why only the linear kernel is supported by
:class:`LinearSVC` (:math:`\phi` is the identity function).

+|details-end|
+
.. _nu_svc:

-NuSVC
------
+|details-start|
+**NuSVC**
+|details-split|

The :math:`\nu`-SVC formulation [#7]_ is a reparameterization of the
:math:`C`-SVC and therefore mathematically equivalent.
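As a sketch of the :math:`\nu` parameterization in code (the dataset and ``nu`` value are illustrative)::

    from sklearn.datasets import load_iris
    from sklearn.svm import NuSVC

    X, y = load_iris(return_X_y=True)
    # nu in (0, 1]: an upper bound on the fraction of margin errors
    # and a lower bound on the fraction of support vectors
    clf = NuSVC(nu=0.5).fit(X, y)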
@@ -699,6 +710,7 @@ to a sample that lies on the wrong side of its margin boundary: it is either
misclassified, or it is correctly classified but does not lie beyond the
margin.

+|details-end|

SVR
---
@@ -747,8 +759,9 @@ which holds the difference :math:`\alpha_i - \alpha_i^*`, ``support_vectors_`` w
holds the support vectors, and ``intercept_`` which holds the independent
term :math:`b`

-LinearSVR
----------
+|details-start|
+**LinearSVR**
+|details-split|

The primal problem can be equivalently formulated as

@@ -760,6 +773,8 @@ where we make use of the epsilon-insensitive loss, i.e. errors of less than
:math:`\varepsilon` are ignored. This is the form that is directly optimized
by :class:`LinearSVR`.

+|details-end|
+
.. _svm_implementation_details:

Implementation details
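To make the epsilon-insensitive loss in the last hunk concrete, a minimal sketch (the synthetic data and ``epsilon`` value are illustrative)::

    import numpy as np
    from sklearn.svm import LinearSVR

    rng = np.random.RandomState(0)
    X = rng.uniform(-1, 1, size=(50, 1))
    y = 3 * X.ravel() + rng.normal(scale=0.1, size=50)

    # Residuals smaller than epsilon contribute no loss at all
    reg = LinearSVR(epsilon=0.1).fit(X, y)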