[MRG+1] Add new regression metric - Mean Squared Log Error #7655
Conversation
Please fix test failures.
    ----------------------

    The :func:`mean_squared_log_error` function computes a risk metric corresponding
    to the expected value of the logarithmic squared (quadratic) error loss or loss.
error loss or loss? Do you mean "error or loss"?
There was a problem hiding this comment.
Choose a reason for hiding this comment
The reason will be displayed to describe this comment to others. Learn more.
Oops, minor typo. Fixing it
@amueller I fixed this one! There was the same typo above it as well; I fixed it on the fly.
    y_type, y_true, y_pred, multioutput = _check_reg_targets(
        y_true, y_pred, multioutput)

    if not (y_true >= 0).all() and not (y_pred >= 0).all():
It can be used with anything > -1, right?
@amueller It can be, but log(1 + x) gives huge negative values that change erratically for small changes of x between (-1, 0). That would not make the score look sensible. Mathematically it is possible, but in practical usage this metric is applied to non-negative targets. Although if you suggest it, I'd change it.
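A quick numeric illustration of that blow-up (not from the original thread; just a sketch using NumPy's `log1p`, which computes log(1 + x)):

```python
import numpy as np

# log(1 + x) diverges to -inf as x approaches -1 from above,
# so scores become huge and unstable on (-1, 0).
for x in [-0.9, -0.99, -0.999, -0.9999]:
    print(x, np.log1p(x))
# -0.9    -2.302...
# -0.99   -4.605...
# -0.999  -6.907...
# -0.9999 -9.210...
```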
Additionally, I just recalled that I read somewhere that this metric is used for positive values only; the log(1 + x) term is there to make everything inside the log greater than one, so the result of the log is positive, i.e. greater than zero. Making it allowable down to -1 would nullify this 😄
alright.
Yes, my reading of the equation agrees that it's designed for non-negative values with an exponential trend.
RPGOne is spam, as far as I know.
The code is cleanly written. Thanks!
    Array-like value defines weights used to average errors.

    'raw_values' :
        Returns a full set of errors in case of multioutput input.
I'd phrase it as "when the input is of multioutput format".
    Sample weights.

    multioutput : string in ['raw_values', 'uniform_average']
        or array-like of shape (n_outputs)
Hmm, how does this render in the documentation? Could you maybe leave a blank line after this, to visually separate the type from the description?
    if not (y_true >= 0).all() and not (y_pred >= 0).all():
        raise ValueError("Mean Log Squared Error cannot be used when targets "
                         "contain negative values.")
After this validation I think we can reuse mean_squared_error by passing the log values? (There will be an additional check on y, but it will save us 10 lines of code)... @amueller WDYT?
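A minimal sketch of the suggested reuse (the final merged code may differ; `_check_reg_targets` and `mean_squared_error` are the scikit-learn internals quoted above, and `np.log1p(x)` computes log(1 + x)):

```python
import numpy as np
from sklearn.metrics import mean_squared_error
from sklearn.metrics.regression import _check_reg_targets

def mean_squared_log_error(y_true, y_pred, sample_weight=None,
                           multioutput='uniform_average'):
    y_type, y_true, y_pred, multioutput = _check_reg_targets(
        y_true, y_pred, multioutput)
    if (y_true < 0).any() or (y_pred < 0).any():
        raise ValueError("Mean Squared Logarithmic Error cannot be used "
                         "when targets contain negative values.")
    # Delegate to MSE on log(1 + x)-transformed targets.
    return mean_squared_error(np.log1p(y_true), np.log1p(y_pred),
                              sample_weight=sample_weight,
                              multioutput=multioutput)
```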
@raghavrv it will break the tests of this method if mean_squared_error ever gets broken in the future. But then I think your suggestion is more appropriate because:
- It satisfies the DRY principle.
- As this metric is essentially adapted from mean_squared_error, its behavior can be expected to match that method, so there is no real issue if one test fails due to a broken underlying method.

I'm tentatively choosing the path that is consistent with Don't Repeat Yourself and saves some lines of code. I'll amend my commit accordingly if @amueller thinks otherwise.
There are some inconsistencies in the documentation; my build after the changes you suggested looks like this (screenshot in the original thread). To keep the diffs in this PR specific to only one metric, I am leaving the other docstrings untouched for a while; I'll take them up in a separate documentation cleanup issue. I have rephrased the line you reviewed and reused mean_squared_error internally.
Thanks for the screenshot of the doc! Much appreciated.
I think it should also be added to the scorer so users can readily refer to it by name.
@raghavrv It looks like there is a renaming scheduled for regression metrics similar to this one. For the sake of uniformity, I have added a deprecated version of the scorer name as well.
No, don't add a deprecated version. That's only there for people who were using the old scorer names.
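For reference, a hedged sketch of how the scorer string would be used once registered (the name `neg_mean_squared_log_error` matches the commit messages at the end of this thread; the dataset and model here are just illustrative):

```python
import numpy as np
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import cross_val_score

X, y = make_regression(n_samples=200, n_features=5, random_state=0)
y = np.abs(y)  # MSLE is only defined for non-negative targets

# Like the other 'neg_*' scorers, the score is negated MSLE so that
# greater is always better for model selection.
scores = cross_val_score(RandomForestRegressor(random_state=0), X, y,
                         scoring='neg_mean_squared_log_error', cv=3)
print(scores)
```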
    assert_almost_equal(mean_absolute_error([0.], [0.]), 0.00, 2)
    assert_almost_equal(median_absolute_error([0.], [0.]), 0.00, 2)
    assert_almost_equal(explained_variance_score([0.], [0.]), 1.00, 2)
    assert_almost_equal(r2_score([0., 1], [0., 1]), 1.00, 2)
    assert_raises(ValueError, mean_squared_log_error, [-1.], [-1.])
Can you also check for the error message to be sure...
@raghavrv ✅ Done!
Kaggle calls this "[root] mean squared logarithmic error", not "[root] mean squared log error", which sounds like it's a function of the log of the error. I think this is an important distinction. I'm not sure if you need to rename the function and scorer to reflect this, but at least the documentation needs to be absolutely clear.
    \text{MSLE}(y, \hat{y}) = \frac{1}{n_\text{samples}} \sum_{i=0}^{n_\text{samples} - 1} (\log (1 + y_i) - \log (1 + \hat{y}_i))^2.

    Here is a small example of usage of the :func:`mean_squared_log_error`
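A doctest-style sketch of what such an example could look like (the numbers here are illustrative, computed as the mean of squared differences of log(1 + x)):

```python
>>> from sklearn.metrics import mean_squared_log_error
>>> y_true = [3, 5, 2.5, 7]
>>> y_pred = [2.5, 5, 4, 8]
>>> mean_squared_log_error(y_true, y_pred)  # doctest: +ELLIPSIS
0.039...
```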
Kaggle's note that "RMSLE penalizes an under-predicted estimate greater than an over-predicted estimate" may be valuable here.
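To illustrate that note, a small sketch (not from the PR; `msle` here is a hand-rolled helper for the formula above):

```python
import numpy as np

def msle(y_true, y_pred):
    return np.mean((np.log1p(y_true) - np.log1p(y_pred)) ** 2)

# Under-predicting by 50 is penalized harder than over-predicting by 50:
print(msle(np.array([100.]), np.array([50.])))   # ~0.467 (under-prediction)
print(msle(np.array([100.]), np.array([150.])))  # ~0.162 (over-prediction)
```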
    ----------------------

    The :func:`mean_squared_log_error` function computes a risk metric corresponding
    to the expected value of the logarithmic squared (quadratic) error or loss.
I think you want "squared logarithmic" rather than "logarithmic squared".
Oops, thanks for pointing this out. I would have missed it completely! Changing it.
    .. math::

      \text{MSLE}(y, \hat{y}) = \frac{1}{n_\text{samples}} \sum_{i=0}^{n_\text{samples} - 1} (\log (1 + y_i) - \log (1 + \hat{y}_i))^2.
I presume this is meant to be applicable for non-negative regression targets? This should be stated. I think you should also give some sense of when this measure should be used, presumably for regressions over population counts and similar (i.e. targets with exponential growth).
Yes, this is information worth including in our user guide.
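One way to convey when MSLE is appropriate (a sketch, not from the thread): it measures relative rather than absolute error, which suits targets with exponential growth such as population counts.

```python
import numpy as np

def msle(y_true, y_pred):
    return np.mean((np.log1p(y_true) - np.log1p(y_pred)) ** 2)

# Being off by 10% costs roughly the same at any scale:
print(msle(np.array([100.]), np.array([110.])))      # ~0.0089
print(msle(np.array([10000.]), np.array([11000.])))  # ~0.0091
```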
Also, it would be good to be clear about what base we use for the log.
@jnothman "Logarithmic" made the name too long, but if needed, I'll change the names. But yes, at the least I should be clear about it in the docstrings and user guide; I'll push the required changes soon. Also, MSE and MAE have their square roots used quite frequently, but those are not included in the scorer, so I dropped "[root]". Is it a good choice to provide RMSLE in the scorer, or is it fine this way?
    def mean_squared_log_error(y_true, y_pred,
                               sample_weight=None,
                               multioutput='uniform_average'):
        """Mean squared log error regression loss
For instance, here "log" -> "logarithmic"
    \text{MSLE}(y, \hat{y}) = \frac{1}{n_\text{samples}} \sum_{i=0}^{n_\text{samples} - 1} (\log_e (1 + y_i) - \log_e (1 + \hat{y}_i))^2.

    Where :math:`\log_e (x)` means the natural logarithm of :math:`x`. This metric is best to
some of these lines are much longer than we usually try to keep to (80 chars)
FWIW, cleaning up commit history is superfluous.
Otherwise LGTM
        y_true, y_pred, multioutput)

    if not (y_true >= 0).all() and not (y_pred >= 0).all():
        raise ValueError("Mean Squared Log Error cannot be used when targets "
Either "logarithmic" or "mean_squared_log_error"
    @@ -23,6 +24,7 @@ def test_regression_metrics(n_samples=50):
        y_pred = y_true + 1

        assert_almost_equal(mean_squared_error(y_true, y_pred), 1.)
        assert_almost_equal(mean_squared_log_error(y_true, y_pred), 0.01915163)
I'd rather have tests that explicitly check msle(x, y) == mse(ln(x), ln(y)) than tests that check against a hand-calculated number.
Great, although I guess you mean ln(1 + x).
yes, that
Coming up in 5 minutes 😄
@jnothman Done! I was skeptical about the fact that if mean_squared_error ever actually gets faulty, these tests will still pass, since we are doing the same thing internally.
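A minimal sketch of such an invariance test (assuming the function and test-helper names quoted elsewhere in this thread):

```python
import numpy as np
from numpy.testing import assert_almost_equal
from sklearn.metrics import mean_squared_error, mean_squared_log_error

def test_msle_matches_mse_on_log1p_targets():
    rng = np.random.RandomState(0)
    y_true = rng.uniform(0, 100, size=50)
    y_pred = rng.uniform(0, 100, size=50)
    # The invariance: MSLE(a, b) == MSE(log(1 + a), log(1 + b)).
    assert_almost_equal(
        mean_squared_log_error(y_true, y_pred),
        mean_squared_error(np.log1p(y_true), np.log1p(y_pred)))
```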
Often we go about testing by asserting invariances between two functions,
and then testing the more general of them. A main advantage of that
approach is legibility, but also that it should often be easy to pinpoint
which part is broken.
Oh yes, if mean_squared_error itself gets broken, its own tests should catch it.
LGTM
LGTM apart from a nitpick. Would you mind fixing that?
    Parameters
    ----------
    y_true : array-like of shape = (n_samples) or (n_samples, n_outputs)
Nitpick: you should write (n_samples,) because it's a tuple (also everywhere below where there's a tuple with one element).
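The distinction, as a quick sketch in a Python shell:

```python
>>> (5)    # parentheses alone do not make a tuple; this is just an int
5
>>> (5,)   # the trailing comma makes it a one-element tuple
(5,)
```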
Thanks, @karandesai-96
@amueller I read your comment after the PR was merged, but I think I will handle this in a larger round of documentation consistency work; @raghavrv already directed me to a matching issue for the same. I will work on one more PR, which is already open, before starting that.

Sorry I missed your comment before merging, @amueller :/
@jnothman no worries, it was a nitpick of the highest order ;)
Hi, I was wondering whether this should go in the CHANGELOG for the next release.
With apologies, we forgot to ask you to add a changelog entry here. Please submit a new PR with it. Thanks.
@jnothman: Sure, I'll do that in a moment. Thanks for the heads-up.
scikit-learn#7655:
* ENH Implement mean squared log error in sklearn.metrics.regression
* TST Add tests for mean squared log error.
* DOC Write user guide and docstring about mean squared log error.
* ENH Add neg_mean_squared_log_error in metrics.scorer
Reference Issue: None

What does this implement/fix? Explain your changes.
This adds a new regression metric, mean_squared_log_error. I have added the method along with the other regression metrics in the sklearn.metrics.regression module.

Any other comments?
mean_squared_error can be used to calculate MSLE by cleverly passing arguments, but it always required external manual work.
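For illustration, a sketch of the "external manual work" that was needed before this PR (the log(1 + x) transform via NumPy's `log1p` is an assumption about how one would do it by hand):

```python
import numpy as np
from sklearn.metrics import mean_squared_error

y_true = np.array([3.0, 5.0, 2.5, 7.0])
y_pred = np.array([2.5, 5.0, 4.0, 8.0])

# MSLE computed manually as MSE over log(1 + x)-transformed values.
msle = mean_squared_error(np.log1p(y_true), np.log1p(y_pred))
print(msle)  # ~0.0397
```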