DOC some copyediting · deepatdotnet/scikit-learn@f4bc9a2 · GitHub

Commit f4bc9a2

DOC some copyediting
1 parent 6f293a2 commit f4bc9a2


doc/modules/model_evaluation.rst

Lines changed: 10 additions & 7 deletions
@@ -297,7 +297,7 @@ In this context, we can define the notions of precision, recall and F-measure:

     F_\beta = (1 + \beta^2) \frac{\text{precision} \times \text{recall}}{\beta^2 \text{precision} + \text{recall}}.

-Here some small examples in binary classification:
+Here some small examples in binary classification::

   >>> from sklearn import metrics
   >>> y_pred = [0, 1, 0, 0]
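
The doctest in this hunk is cut off after ``y_pred`` by the diff context; as a minimal self-contained sketch of the binary precision/recall/F-measure calls this section documents (the ``y_true`` value is an assumption, not taken from the diff), it might continue along these lines:

  >>> from sklearn import metrics
  >>> y_pred = [0, 1, 0, 0]
  >>> y_true = [0, 1, 0, 1]  # assumed ground truth; not visible in the diff context
  >>> metrics.precision_score(y_true, y_pred)  # TP=1, FP=0 for the positive class
  1.0
  >>> metrics.recall_score(y_true, y_pred)     # TP=1, FN=1 for the positive class
  0.5
  >>> metrics.f1_score(y_true, y_pred)  # harmonic mean of the two  # doctest: +ELLIPSIS
  0.66...
  >>> metrics.fbeta_score(y_true, y_pred, beta=0.5)  # weighs precision more  # doctest: +ELLIPSIS
  0.83...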
@@ -411,7 +411,7 @@ their support

     \texttt{weighted\_{}F\_{}beta}(y,\hat{y}) &= \frac{1}{n_\text{samples}} \sum_{i=0}^{n_\text{samples} - 1} (1 + \beta^2)\frac{|y_i \cap \hat{y}_i|}{\beta^2 |\hat{y}_i| + |y_i|}.

-Here an example where ``average`` is set to ``average`` to ``macro``:
+Here an example where ``average`` is set to ``average`` to ``macro``::

   >>> from sklearn import metrics
   >>> y_true = [0, 1, 2, 0, 1, 2]
@@ -427,7 +427,7 @@ Here an example where ``average`` is set to ``average`` to ``macro``:
   >>> metrics.precision_recall_fscore_support(y_true, y_pred, average='macro') # doctest: +ELLIPSIS
   (0.22..., 0.33..., 0.26..., None)

-Here an example where ``average`` is set to to ``micro``:
+Here an example where ``average`` is set to to ``micro``::

   >>> from sklearn import metrics
   >>> y_true = [0, 1, 2, 0, 1, 2]
@@ -443,7 +443,7 @@ Here an example where ``average`` is set to to ``micro``:
   >>> metrics.precision_recall_fscore_support(y_true, y_pred, average='micro') # doctest: +ELLIPSIS
   (0.33..., 0.33..., 0.33..., None)

-Here an example where ``average`` is set to to ``weighted``:
+Here an example where ``average`` is set to to ``weighted``::

   >>> from sklearn import metrics
   >>> y_true = [0, 1, 2, 0, 1, 2]
@@ -459,7 +459,7 @@ Here an example where ``average`` is set to to ``weighted``:
   >>> metrics.precision_recall_fscore_support(y_true, y_pred, average='weighted') # doctest: +ELLIPSIS
   (0.22..., 0.33..., 0.26..., None)

-Here an example where ``average`` is set to ``None``:
+Here an example where ``average`` is set to ``None``::

   >>> from sklearn import metrics
   >>> y_true = [0, 1, 2, 0, 1, 2]
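
The four ``average`` hunks above all truncate the same doctest; a single consolidated sketch covering the four settings follows (the ``y_pred`` below is an assumption, chosen because it reproduces the tuples visible in the diff context):

  >>> from sklearn import metrics
  >>> y_true = [0, 1, 2, 0, 1, 2]
  >>> y_pred = [0, 2, 1, 0, 0, 1]  # assumed predictions; not visible in the diff context
  >>> metrics.precision_recall_fscore_support(y_true, y_pred, average='macro')  # doctest: +ELLIPSIS
  (0.22..., 0.33..., 0.26..., None)
  >>> metrics.precision_recall_fscore_support(y_true, y_pred, average='micro')  # doctest: +ELLIPSIS
  (0.33..., 0.33..., 0.33..., None)
  >>> metrics.precision_recall_fscore_support(y_true, y_pred, average='weighted')  # doctest: +ELLIPSIS
  (0.22..., 0.33..., 0.26..., None)
  >>> p, r, f, s = metrics.precision_recall_fscore_support(y_true, y_pred, average=None)
  >>> s  # with average=None, per-class arrays are returned instead of scalars
  array([2, 2, 2])

With ``average='weighted'`` the per-class scores are weighted by support; the classes here are perfectly balanced (support 2 each), so the result coincides with ``'macro'``.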
@@ -492,7 +492,7 @@ value and :math:`w` is the predicted decisions as output by
     L_\text{Hinge}(y, w) = \max\left\{1 - wy, 0\right\} = \left|1 - wy\right|_+

 Here a small example demonstrating the use of the :func:`hinge_loss` function
-with a svm classifier:
+with a svm classifier::

   >>> from sklearn import svm
   >>> from sklearn.metrics import hinge_loss
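
The hinge-loss doctest is likewise truncated after the imports; a minimal sketch of the idea (the decision values below are hypothetical rather than taken from a fitted SVM, so the arithmetic is easy to check against the formula above):

  >>> import numpy as np
  >>> from sklearn.metrics import hinge_loss
  >>> y_true = np.array([-1, 1, 1])
  >>> pred_decision = np.array([-2.18, 2.36, 0.09])  # hypothetical decision_function outputs
  >>> hinge_loss(y_true, pred_decision)  # doctest: +ELLIPSIS
  0.30...
  >>> np.mean(np.maximum(1 - y_true * pred_decision, 0))  # the formula, by hand  # doctest: +ELLIPSIS
  0.30...

Only the third sample is inside the margin (wy = 0.09 < 1), contributing max(1 - 0.09, 0) = 0.91, so the mean loss is 0.91 / 3 ≈ 0.303.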
@@ -613,6 +613,9 @@ where :math:`1(x)` is the `indicator function
   >>> from sklearn.metrics import zero_one_loss
   >>> y_pred = [1, 2, 3, 4]
   >>> y_true = [2, 2, 3, 4]
+
+G
+
   >>> zero_one_loss(y_true, y_pred)
   0.25
   >>> zero_one_loss(y_true, y_pred, normalize=False)
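
This hunk's doctest is complete as shown; one detail worth sketching is its relation to accuracy (only ``accuracy_score`` is brought in beyond what the hunk already imports):

  >>> from sklearn.metrics import zero_one_loss, accuracy_score
  >>> y_pred = [1, 2, 3, 4]
  >>> y_true = [2, 2, 3, 4]
  >>> zero_one_loss(y_true, y_pred)  # one mismatch out of four samples
  0.25
  >>> accuracy_score(y_true, y_pred)  # zero-one loss is exactly 1 minus this
  0.75
  >>> zero_one_loss(y_true, y_pred, normalize=False)  # raw count of misclassified samples
  1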
@@ -653,7 +656,7 @@ variance is estimated as follow:

 The best possible score is 1.0, lower values are worse.

-Here a small example of usage of the :func:`explained_variance_scoreé`
+Here a small example of usage of the :func:`explained_variance_score`
 function::

   >>> from sklearn.metrics import explained_variance_score
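
This last doctest is also cut off after the import; a minimal sketch of :func:`explained_variance_score` (the ``y_true``/``y_pred`` values are assumed for illustration), including a by-hand check of the variance ratio the section defines:

  >>> import numpy as np
  >>> from sklearn.metrics import explained_variance_score
  >>> y_true = [3, -0.5, 2, 7]
  >>> y_pred = [2.5, 0.0, 2, 8]  # assumed values; not visible in the diff context
  >>> explained_variance_score(y_true, y_pred)  # doctest: +ELLIPSIS
  0.957...
  >>> 1 - np.var(np.array(y_true) - np.array(y_pred)) / np.var(y_true)  # same ratio, by hand  # doctest: +ELLIPSIS
  0.957...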
