@@ -54,51 +54,52 @@ the model and the data, like :func:`metrics.mean_squared_error`, are
available as neg_mean_squared_error which return the negated value
of the metric.

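+ For instance, the sign convention can be verified directly. The snippet below
+ is a minimal sketch (the data and the :class:`dummy.DummyRegressor`, which
+ simply predicts the training mean, are chosen here purely for illustration)::
+
+   >>> from sklearn.dummy import DummyRegressor
+   >>> from sklearn.metrics import get_scorer
+   >>> X, y = [[1], [2], [3]], [1, 2, 4]
+   >>> est = DummyRegressor(strategy="mean").fit(X, y)
+   >>> scorer = get_scorer("neg_mean_squared_error")
+   >>> scorer(est, X, y)  # the MSE of 1.55... is reported negated
+   -1.55...
+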
- ============================== ============================================= ==================================
- Scoring                        Function                                      Comment
- ============================== ============================================= ==================================
+ ==================================== ============================================== ==================================
+ Scoring                              Function                                       Comment
+ ==================================== ============================================== ==================================
**Classification**
- 'accuracy'                     :func:`metrics.accuracy_score`
- 'balanced_accuracy'            :func:`metrics.balanced_accuracy_score`
- 'average_precision'            :func:`metrics.average_precision_score`
- 'neg_brier_score'              :func:`metrics.brier_score_loss`
- 'f1'                           :func:`metrics.f1_score`                      for binary targets
- 'f1_micro'                     :func:`metrics.f1_score`                      micro-averaged
- 'f1_macro'                     :func:`metrics.f1_score`                      macro-averaged
- 'f1_weighted'                  :func:`metrics.f1_score`                      weighted average
- 'f1_samples'                   :func:`metrics.f1_score`                      by multilabel sample
- 'neg_log_loss'                 :func:`metrics.log_loss`                      requires ``predict_proba`` support
- 'precision' etc.               :func:`metrics.precision_score`               suffixes apply as with 'f1'
- 'recall' etc.                  :func:`metrics.recall_score`                  suffixes apply as with 'f1'
- 'jaccard' etc.                 :func:`metrics.jaccard_score`                 suffixes apply as with 'f1'
- 'roc_auc'                      :func:`metrics.roc_auc_score`
- 'roc_auc_ovr'                  :func:`metrics.roc_auc_score`
- 'roc_auc_ovo'                  :func:`metrics.roc_auc_score`
- 'roc_auc_ovr_weighted'         :func:`metrics.roc_auc_score`
- 'roc_auc_ovo_weighted'         :func:`metrics.roc_auc_score`
+ 'accuracy'                           :func:`metrics.accuracy_score`
+ 'balanced_accuracy'                  :func:`metrics.balanced_accuracy_score`
+ 'average_precision'                  :func:`metrics.average_precision_score`
+ 'neg_brier_score'                    :func:`metrics.brier_score_loss`
+ 'f1'                                 :func:`metrics.f1_score`                       for binary targets
+ 'f1_micro'                           :func:`metrics.f1_score`                       micro-averaged
+ 'f1_macro'                           :func:`metrics.f1_score`                       macro-averaged
+ 'f1_weighted'                        :func:`metrics.f1_score`                       weighted average
+ 'f1_samples'                         :func:`metrics.f1_score`                       by multilabel sample
+ 'neg_log_loss'                       :func:`metrics.log_loss`                       requires ``predict_proba`` support
+ 'precision' etc.                     :func:`metrics.precision_score`                suffixes apply as with 'f1'
+ 'recall' etc.                        :func:`metrics.recall_score`                   suffixes apply as with 'f1'
+ 'jaccard' etc.                       :func:`metrics.jaccard_score`                  suffixes apply as with 'f1'
+ 'roc_auc'                            :func:`metrics.roc_auc_score`
+ 'roc_auc_ovr'                        :func:`metrics.roc_auc_score`
+ 'roc_auc_ovo'                        :func:`metrics.roc_auc_score`
+ 'roc_auc_ovr_weighted'               :func:`metrics.roc_auc_score`
+ 'roc_auc_ovo_weighted'               :func:`metrics.roc_auc_score`


**Clustering**
- 'adjusted_mutual_info_score'   :func:`metrics.adjusted_mutual_info_score`
- 'adjusted_rand_score'          :func:`metrics.adjusted_rand_score`
- 'completeness_score'           :func:`metrics.completeness_score`
- 'fowlkes_mallows_score'        :func:`metrics.fowlkes_mallows_score`
- 'homogeneity_score'            :func:`metrics.homogeneity_score`
- 'mutual_info_score'            :func:`metrics.mutual_info_score`
- 'normalized_mutual_info_score' :func:`metrics.normalized_mutual_info_score`
- 'v_measure_score'              :func:`metrics.v_measure_score`
+ 'adjusted_mutual_info_score'         :func:`metrics.adjusted_mutual_info_score`
+ 'adjusted_rand_score'                :func:`metrics.adjusted_rand_score`
+ 'completeness_score'                 :func:`metrics.completeness_score`
+ 'fowlkes_mallows_score'              :func:`metrics.fowlkes_mallows_score`
+ 'homogeneity_score'                  :func:`metrics.homogeneity_score`
+ 'mutual_info_score'                  :func:`metrics.mutual_info_score`
+ 'normalized_mutual_info_score'       :func:`metrics.normalized_mutual_info_score`
+ 'v_measure_score'                    :func:`metrics.v_measure_score`

**Regression**
- 'explained_variance'           :func:`metrics.explained_variance_score`
- 'max_error'                    :func:`metrics.max_error`
- 'neg_mean_absolute_error'      :func:`metrics.mean_absolute_error`
- 'neg_mean_squared_error'       :func:`metrics.mean_squared_error`
- 'neg_root_mean_squared_error'  :func:`metrics.mean_squared_error`
- 'neg_mean_squared_log_error'   :func:`metrics.mean_squared_log_error`
- 'neg_median_absolute_error'    :func:`metrics.median_absolute_error`
- 'r2'                           :func:`metrics.r2_score`
- 'neg_mean_poisson_deviance'    :func:`metrics.mean_poisson_deviance`
- 'neg_mean_gamma_deviance'      :func:`metrics.mean_gamma_deviance`
- ============================== ============================================= ==================================
+ 'explained_variance'                 :func:`metrics.explained_variance_score`
+ 'max_error'                          :func:`metrics.max_error`
+ 'neg_mean_absolute_error'            :func:`metrics.mean_absolute_error`
+ 'neg_mean_squared_error'             :func:`metrics.mean_squared_error`
+ 'neg_root_mean_squared_error'        :func:`metrics.mean_squared_error`
+ 'neg_mean_squared_log_error'         :func:`metrics.mean_squared_log_error`
+ 'neg_median_absolute_error'          :func:`metrics.median_absolute_error`
+ 'r2'                                 :func:`metrics.r2_score`
+ 'neg_mean_poisson_deviance'          :func:`metrics.mean_poisson_deviance`
+ 'neg_mean_gamma_deviance'            :func:`metrics.mean_gamma_deviance`
+ 'neg_mean_absolute_percentage_error' :func:`metrics.mean_absolute_percentage_error`
+ ==================================== ============================================== ==================================


Usage examples:
@@ -1963,6 +1964,42 @@ function::
  >>> mean_squared_log_error(y_true, y_pred)
  0.044...

+ .. _mean_absolute_percentage_error:
+
+ Mean absolute percentage error
+ ------------------------------
+
+ The :func:`mean_absolute_percentage_error` (MAPE), also known as mean absolute
+ percentage deviation (MAPD), is an evaluation metric for regression problems.
+ The idea of this metric is to be sensitive to relative errors; it is, for
+ example, not changed by a global scaling of the target variable.
+
+ If :math:`\hat{y}_i` is the predicted value of the :math:`i`-th sample
+ and :math:`y_i` is the corresponding true value, then the mean absolute percentage
+ error (MAPE) estimated over :math:`n_{\text{samples}}` is defined as
+
+ .. math::
+
+   \text{MAPE}(y, \hat{y}) = \frac{1}{n_{\text{samples}}} \sum_{i=0}^{n_{\text{samples}}-1} \frac{\left| y_i - \hat{y}_i \right|}{\max(\epsilon, \left| y_i \right|)}
+
+ where :math:`\epsilon` is an arbitrarily small yet strictly positive number to
+ avoid undefined results when :math:`y` is zero.
+
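+ Expressed directly in NumPy, the definition above amounts to the following
+ sketch; the choice of the float64 machine epsilon for ``eps`` is an assumption
+ here, not necessarily the value used inside scikit-learn::
+
+   >>> import numpy as np
+   >>> def mape(y_true, y_pred, eps=np.finfo(np.float64).eps):
+   ...     y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
+   ...     # per-sample relative error, with |y_i| kept away from zero by eps
+   ...     return np.mean(np.abs(y_pred - y_true) / np.maximum(np.abs(y_true), eps))
+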
+ The :func:`mean_absolute_percentage_error` function supports multioutput;
+ a sketch of this follows the usage example below.
+
+ Here is a small example of usage of the :func:`mean_absolute_percentage_error`
+ function::
+
+   >>> from sklearn.metrics import mean_absolute_percentage_error
+   >>> y_true = [1, 10, 1e6]
+   >>> y_pred = [0.9, 15, 1.2e6]
+   >>> mean_absolute_percentage_error(y_true, y_pred)
+   0.2666...
+
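+ As a sketch of the multioutput behavior, ``multioutput='raw_values'`` returns
+ one error per output column, following the convention of the other regression
+ metrics (the data here are made up for illustration)::
+
+   >>> y_true = [[1, 10], [2, 20]]
+   >>> y_pred = [[0.9, 15], [2.2, 18]]
+   >>> mean_absolute_percentage_error(y_true, y_pred, multioutput='raw_values')
+   array([0.1, 0.3])
+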
+ In the first example above, if we had used :func:`mean_absolute_error`, it
+ would have ignored the small-magnitude values and only reflected the error in
+ predicting the highest-magnitude value. MAPE does not have that problem because
+ it computes the error relative to the actual output, as the comparison below
+ makes concrete.
+
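+ Scoring the same data with :func:`mean_absolute_error` gives a result
+ dominated entirely by the third, large-magnitude target::
+
+   >>> from sklearn.metrics import mean_absolute_error
+   >>> mean_absolute_error([1, 10, 1e6], [0.9, 15, 1.2e6])
+   66668.36...
+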
.. _median_absolute_error:

Median absolute error