@@ -243,8 +243,8 @@ that it comes with a computational cost.
   `"Probability estimates for multi-class classification by pairwise coupling"
   <https://www.csie.ntu.edu.tw/~cjlin/papers/svmprob/svmprob.pdf>`_,
   JMLR 5:975-1005, 2004.
-
-
+
+
 * Platt
   `"Probabilistic outputs for SVMs and comparisons to regularized likelihood methods"
   <https://www.cs.colorado.edu/~mozer/Teaching/syllabi/6622/papers/Platt1999.pdf>`_.
@@ -267,10 +267,11 @@ that sets the parameter ``C`` of class ``class_label`` to ``C * value``.
    :scale: 75


-:class:`SVC`, :class:`NuSVC`, :class:`SVR`, :class:`NuSVR`, :class:`LinearSVC`, :class:`LinearSVR` and
-:class:`OneClassSVM` implement also weights for individual samples in method
-``fit`` through keyword ``sample_weight``. Similar to ``class_weight``, these
-set the parameter ``C`` for the i-th example to ``C * sample_weight[i]``.
+:class:`SVC`, :class:`NuSVC`, :class:`SVR`, :class:`NuSVR`, :class:`LinearSVC`,
+:class:`LinearSVR` and :class:`OneClassSVM` also implement weights for
+individual samples in method ``fit`` through keyword ``sample_weight``. Similar
+to ``class_weight``, these set the parameter ``C`` for the i-th example to
+``C * sample_weight[i]``.


 .. figure:: ../auto_examples/svm/images/sphx_glr_plot_weighted_samples_001.png
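To make the ``sample_weight`` behaviour described in the hunk above concrete, here is a minimal sketch; the toy arrays and the choice of a 10x weight are invented for illustration, while ``sample_weight`` is the ``fit`` keyword discussed above::

    import numpy as np
    from sklearn.svm import SVC

    # Invented toy data: four one-dimensional samples, two per class.
    X = np.array([[0.0], [1.0], [2.0], [3.0]])
    y = np.array([0, 0, 1, 1])

    # The last sample gets weight 10, so its margin violations are
    # penalised with C * sample_weight[3] = 1.0 * 10 instead of C.
    clf = SVC(kernel="linear", C=1.0)
    clf.fit(X, y, sample_weight=[1.0, 1.0, 1.0, 10.0])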
@@ -392,10 +393,10 @@ Tips on Practical Use
 * **Setting C**: ``C`` is ``1`` by default and it's a reasonable default
   choice. If you have a lot of noisy observations you should decrease it.
   Decreasing ``C`` corresponds to more regularization.
-
+
   :class:`LinearSVC` and :class:`LinearSVR` are less sensitive to ``C`` when
-  it becomes large, and prediction results stop improving after a certain
-  threshold. Meanwhile, larger ``C`` values will take more time to train,
+  it becomes large, and prediction results stop improving after a certain
+  threshold. Meanwhile, larger ``C`` values will take more time to train,
   sometimes up to 10 times longer, as shown by Fan et al. (2008)

 * Support Vector Machine algorithms are not scale invariant, so **it
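Returning to the ``C`` tip at the top of the hunk above, a minimal sketch of decreasing ``C`` on noisy data; the synthetic dataset and variable names are invented for illustration, while ``C`` is the real constructor parameter::

    from sklearn.datasets import make_classification
    from sklearn.svm import LinearSVC

    # Synthetic, deliberately noisy data (10% of labels flipped).
    X, y = make_classification(n_samples=200, flip_y=0.1, random_state=0)

    # C=1 is the default; a smaller C regularizes more, which tends to
    # help with noisy observations.  Very large C mostly adds training time.
    more_regularized = LinearSVC(C=0.01).fit(X, y)
    default_model = LinearSVC(C=1.0).fit(X, y)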
@@ -412,7 +413,7 @@ Tips on Practical Use
   positive and few negative), set ``class_weight='balanced'`` and/or try
   different penalty parameters ``C``.

-* **Randomness of the underlying implementations**: The underlying
+* **Randomness of the underlying implementations**: The underlying
   implementations of :class:`SVC` and :class:`NuSVC` use a random number
   generator only to shuffle the data for probability estimation (when
   ``probability`` is set to ``True``). This randomness can be controlled
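A minimal sketch combining the two tips in the hunk above; the toy data and class ratio are invented, while ``class_weight``, ``probability`` and ``random_state`` are real ``SVC`` parameters::

    import numpy as np
    from sklearn.svm import SVC

    # Invented toy data with a 3:1 class imbalance.
    rng = np.random.RandomState(0)
    X = rng.randn(40, 2)
    y = np.array([0] * 30 + [1] * 10)

    # class_weight='balanced' rescales C inversely to the class frequencies,
    # so errors on the rare class are penalised more heavily.
    clf = SVC(kernel="rbf", class_weight="balanced").fit(X, y)

    # probability=True fits the probability model on internally shuffled
    # data; fixing random_state makes predict_proba reproducible.
    prob_clf = SVC(probability=True, random_state=0).fit(X, y)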
@@ -547,7 +548,7 @@ correctly. ``gamma`` defines how much influence a single training example has.
 The larger ``gamma`` is, the closer other examples must be to be affected.

 Proper choice of ``C`` and ``gamma`` is critical to the SVM's performance. One
-is advised to use :class:`sklearn.model_selection.GridSearchCV` with
+is advised to use :class:`sklearn.model_selection.GridSearchCV` with
 ``C`` and ``gamma`` spaced exponentially far apart to choose good values.

 .. topic:: Examples:
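As a hedged illustration of the exponentially spaced grid recommended in the hunk above; the iris dataset and the particular grid bounds are arbitrary choices for the sketch, not a prescription::

    import numpy as np
    from sklearn.datasets import load_iris
    from sklearn.model_selection import GridSearchCV
    from sklearn.svm import SVC

    X, y = load_iris(return_X_y=True)

    # C and gamma spaced exponentially far apart, as suggested above.
    param_grid = {
        "C": np.logspace(-2, 3, 6),      # 0.01 ... 1000
        "gamma": np.logspace(-4, 1, 6),  # 0.0001 ... 10
    }
    search = GridSearchCV(SVC(kernel="rbf"), param_grid, cv=5)
    search.fit(X, y)
    print(search.best_params_)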
@@ -698,7 +699,7 @@ term :math:`\rho`
 * `"A Tutorial on Support Vector Regression"
   <http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.114.4288>`_,
   Alex J. Smola, Bernhard Schölkopf - Statistics and Computing archive
-  Volume 14 Issue 3, August 2004, p. 199-222.
+  Volume 14 Issue 3, August 2004, p. 199-222.


 .. _svm_implementation_details:
@@ -722,5 +723,3 @@ computations. These libraries are wrapped using C and Cython.

 - `LIBLINEAR -- A Library for Large Linear Classification
   <https://www.csie.ntu.edu.tw/~cjlin/liblinear/>`_.
-
-