@@ -297,7 +297,7 @@ In this context, we can define the notions of precision, recall and F-measure:
F_\beta = (1 + \beta^2) \frac{\text{precision} \times \text{recall}}{\beta^2 \text{precision} + \text{recall}}.
- Here some small examples in binary classification:
+ Here are some small examples in binary classification::
>>> from sklearn import metrics
>>> y_pred = [0, 1, 0, 0]
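As a quick cross-check of the :math:`F_\beta` formula above, the score can be computed directly from precision and recall. The hunk truncates the doctest, so ``y_true`` below is an assumed value for illustration only::

    >>> from sklearn import metrics
    >>> y_pred = [0, 1, 0, 0]
    >>> y_true = [0, 1, 0, 1]   # assumed labels, not shown in the hunk above
    >>> p = float(metrics.precision_score(y_true, y_pred))
    >>> r = float(metrics.recall_score(y_true, y_pred))
    >>> (1 + 1**2) * p * r / (1**2 * p + r)  # F_1 by hand  # doctest: +ELLIPSIS
    0.66...
    >>> float(metrics.f1_score(y_true, y_pred))  # doctest: +ELLIPSIS
    0.66...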
@@ -411,7 +411,7 @@ their support
\texttt{weighted\_{}F\_{}beta}(y,\hat{y}) &= \frac{1}{n_\text{samples}} \sum_{i=0}^{n_\text{samples} - 1} (1 + \beta^2)\frac{|y_i \cap \hat{y}_i|}{\beta^2 |\hat{y}_i| + |y_i|}.
- Here an example where ``average`` is set to ``average`` to ``macro``:
+ Here is an example where ``average`` is set to ``macro``::
>>> from sklearn import metrics
>>> y_true = [0, 1, 2, 0, 1, 2]
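Macro averaging is simply the unweighted mean of the per-class scores, which can be verified directly. Since the hunk truncates the doctest, ``y_pred`` below is an assumed value for illustration only::

    >>> import numpy as np
    >>> from sklearn import metrics
    >>> y_true = [0, 1, 2, 0, 1, 2]
    >>> y_pred = [0, 2, 1, 0, 0, 1]   # assumed predictions, not shown in the hunk above
    >>> per_class = metrics.precision_score(y_true, y_pred, average=None)
    >>> float(np.mean(per_class))  # doctest: +ELLIPSIS
    0.22...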
@@ -427,7 +427,7 @@ Here an example where ``average`` is set to ``average`` to ``macro``:
>>> metrics.precision_recall_fscore_support(y_true, y_pred, average='macro') # doctest: +ELLIPSIS
(0.22..., 0.33..., 0.26..., None)
- Here an example where ``average`` is set to to ``micro``:
+ Here is an example where ``average`` is set to ``micro``::
>>> from sklearn import metrics
>>> y_true = [0, 1, 2, 0, 1, 2]
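For single-label multiclass problems such as this one, micro averaging counts every (sample, class) decision globally, so micro-averaged precision, recall and F-score all coincide with accuracy. A minimal check, with the same assumed predictions as above::

    >>> from sklearn import metrics
    >>> y_true = [0, 1, 2, 0, 1, 2]
    >>> y_pred = [0, 2, 1, 0, 0, 1]   # assumed predictions, not shown in the hunk above
    >>> float(metrics.accuracy_score(y_true, y_pred))  # doctest: +ELLIPSIS
    0.33...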
@@ -443,7 +443,7 @@ Here an example where ``average`` is set to to ``micro``:
>>> metrics.precision_recall_fscore_support(y_true, y_pred, average='micro') # doctest: +ELLIPSIS
(0.33..., 0.33..., 0.33..., None)
- Here an example where ``average`` is set to to ``weighted``:
+ Here is an example where ``average`` is set to ``weighted``::
>>> from sklearn import metrics
>>> y_true = [0, 1, 2, 0, 1, 2]
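Weighted averaging weights each class's score by its support, i.e. the number of true instances of that class. A minimal check, with the same assumed predictions as above (here all supports are equal, so the weighted and macro results coincide)::

    >>> import numpy as np
    >>> from sklearn import metrics
    >>> y_true = [0, 1, 2, 0, 1, 2]
    >>> y_pred = [0, 2, 1, 0, 0, 1]   # assumed predictions, not shown in the hunk above
    >>> p, r, f, support = metrics.precision_recall_fscore_support(y_true, y_pred, average=None)
    >>> float(np.average(p, weights=support))  # doctest: +ELLIPSIS
    0.22...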
@@ -459,7 +459,7 @@ Here an example where ``average`` is set to to ``weighted``:
>>> metrics.precision_recall_fscore_support(y_true, y_pred, average='weighted') # doctest: +ELLIPSIS
(0.22..., 0.33..., 0.26..., None)
- Here an example where ``average`` is set to ``None``:
+ Here is an example where ``average`` is set to ``None``::
>>> from sklearn import metrics
>>> y_true = [0, 1, 2, 0, 1, 2]
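With ``average=None`` the scores come back per class, together with each class's support. The hunk truncates the doctest, so the sketch below uses the same assumed predictions as above::

    >>> from sklearn import metrics
    >>> y_true = [0, 1, 2, 0, 1, 2]
    >>> y_pred = [0, 2, 1, 0, 0, 1]   # assumed predictions, not shown in the hunk above
    >>> p, r, f, support = metrics.precision_recall_fscore_support(y_true, y_pred, average=None)
    >>> float(p[0])  # precision for class 0  # doctest: +ELLIPSIS
    0.66...
    >>> int(support[0])  # two true instances of class 0
    2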
@@ -492,7 +492,7 @@ value and :math:`w` is the predicted decisions as output by
L_\text{Hinge}(y, w) = \max\left\{1 - wy, 0\right\} = \left|1 - wy\right|_+
Here is a small example demonstrating the use of the :func:`hinge_loss` function
- with a svm classifier:
+ with an SVM classifier::
>>> from sklearn import svm
>>> from sklearn.metrics import hinge_loss
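The hinge-loss definition above can also be checked by hand. The labels and decision values below are assumed for illustration, since the hunk truncates the doctest::

    >>> import numpy as np
    >>> labels = np.array([-1, 1, 1])            # assumed true labels
    >>> decisions = np.array([-2.0, 2.5, 0.25])  # assumed decision-function outputs
    >>> # mean of max(1 - w*y, 0) over the samples
    >>> float(np.mean(np.maximum(1 - labels * decisions, 0)))
    0.25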
@@ -613,6 +613,6 @@ where :math:`1(x)` is the `indicator function
>>> from sklearn.metrics import zero_one_loss
>>> y_pred = [1, 2, 3, 4]
>>> y_true = [2, 2, 3, 4]
>>> zero_one_loss(y_true, y_pred)
0.25
>>> zero_one_loss(y_true, y_pred, normalize=False)
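With ``normalize=False`` the function returns the number of misclassified samples rather than the fraction; a plain-Python cross-check::

    >>> y_pred = [1, 2, 3, 4]
    >>> y_true = [2, 2, 3, 4]
    >>> sum(p != t for p, t in zip(y_pred, y_true))   # one of four samples is wrong
    1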
@@ -653,7 +653,7 @@ variance is estimated as follow:
The best possible score is 1.0; lower values are worse.
- Here a small example of usage of the :func:`explained_variance_scoreé`
+ Here is a small example of usage of the :func:`explained_variance_score`
function::
>>> from sklearn.metrics import explained_variance_score
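The score follows directly from the variance formula above, so it can be reproduced by hand. The targets and predictions below are assumed for illustration, since the hunk truncates the doctest::

    >>> import numpy as np
    >>> from sklearn.metrics import explained_variance_score
    >>> y_true = np.array([3.0, -0.5, 2.0, 7.0])   # assumed targets
    >>> y_pred = np.array([2.5, 0.0, 2.0, 8.0])    # assumed predictions
    >>> # 1 - Var(y_true - y_pred) / Var(y_true), per the definition above
    >>> float(1 - np.var(y_true - y_pred) / np.var(y_true))  # doctest: +ELLIPSIS
    0.95...
    >>> float(explained_variance_score(y_true, y_pred))  # doctest: +ELLIPSIS
    0.95...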