[MRG+1] Add new regression metric - Mean Squared Log Error by kdexd · Pull Request #7655 · scikit-learn/scikit-learn · GitHub
[MRG+1] Add new regression metric - Mean Squared Log Error #7655


Merged
merged 4 commits into scikit-learn:master
Nov 30, 2016

Conversation

kdexd
@kdexd kdexd commented Oct 12, 2016

Reference Issue: None

What does this implement/fix? Explain your changes.

  • This PR implements a new metric, "Mean Squared Logarithmic Error" (name shortened to mean_squared_log_error). I have added the function along with the other regression metrics in the sklearn.metrics.regression module.
  • Accompanying the implementation, this PR includes User Guide documentation and an API docstring.

Any other comments?

  • The metric is similar to mean_squared_error, and the MSE function can be used to compute MSLE by transforming its arguments, but that always required external manual work.
  • I felt it would be a nice metric to have, given how frequently it is needed.
  • Many regression problems in competitions, especially on Kaggle, evaluate submissions on this error metric or its square root. A Kaggle wiki page can be found here.
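For context, the "external manual work" mentioned above can be sketched in a few lines of NumPy (a hand-rolled illustration, not the PR's code):

```python
import numpy as np

def msle_by_hand(y_true, y_pred):
    # Apply log1p to both arrays, then take the mean squared difference.
    # This is the manual transform the MSE route required before this PR.
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    return float(np.mean((np.log1p(y_true) - np.log1p(y_pred)) ** 2))

print(msle_by_hand([3, 5, 2.5, 7], [2.5, 5, 4, 8]))  # ≈ 0.0397
```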

@jnothman
Member

Please fix test failures


@kdexd kdexd changed the title [MRG] Add new regression metric - Mean Squared Log Error [WIP] Add new regression metric - Mean Squared Log Error Oct 13, 2016
@kdexd kdexd changed the title [WIP] Add new regression metric - Mean Squared Log Error [MRG] Add new regression metric - Mean Squared Log Error Oct 13, 2016
----------------------

The :func:`mean_squared_log_error` function computes a risk metric corresponding
to the expected value of the logarithmic squared (quadratic) error loss or loss.
Member

error loss or loss? Do you mean "error or loss"?

Author

Oops, minor typo. Fixing it

Author

@amueller I fixed this one! There was the same typo above it as well; I fixed that on the fly.

    y_type, y_true, y_pred, multioutput = _check_reg_targets(
        y_true, y_pred, multioutput)

    if not (y_true >= 0).all() and not (y_pred >= 0).all():
Member

It can be used with anything > -1, right?

Author

@amueller It can be, but log(1 + x) gives huge negative values that change erratically with small changes of x between (-1, 0), which would not make the score look sensible. Mathematically it is possible, but in practical usage this metric is applied to non-negative targets. If you suggest it, though, I'll change it.

Author

Additionally, I just recalled reading somewhere that this metric is meant for non-negative values only; the log(1 + x) form keeps everything inside the log at least one, and so the value of the log non-negative. Allowing values down to -1 would nullify this 😄

Member

alright.

Member

Yes, my reading of the equation agrees that it's designed for non-negative values with an exponential trend.
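The blow-up near -1 that the thread is worried about is easy to see numerically (a quick standalone check, not part of the PR):

```python
import numpy as np

# log1p(x) = log(1 + x) diverges to -infinity as x approaches -1, so a
# metric built on it would swing wildly for targets just above -1; this
# is why the thread settles on requiring non-negative targets.
for x in (-0.9, -0.99, -0.999):
    print(f"log1p({x}) = {np.log1p(x):.3f}")
# log1p(-0.9)   ≈ -2.303
# log1p(-0.99)  ≈ -4.605
# log1p(-0.999) ≈ -6.908
```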

@kdexd
Author
kdexd commented Oct 18, 2016

Hi @amueller and @jnothman, what more shall I do in this PR? Also, is @RPGOne a bot or a service?

@jnothman
Member
jnothman commented Oct 18, 2016

RPGOne is spam, as far as I know


Member
@raghavrv raghavrv left a comment

The code is cleanly written. Thanks!

Array-like value defines weights used to average errors.

'raw_values' :
Returns a full set of errors in case of multioutput input.
Member

I'd phrase it as when the input is of multioutput format.

Sample weights.

multioutput : string in ['raw_values', 'uniform_average']
or array-like of shape (n_outputs)
Member

Hmm, how does this render in the documentation?

Could you maybe leave a blank line after this, to visually separate the type from the description?


    if not (y_true >= 0).all() and not (y_pred >= 0).all():
        raise ValueError("Mean Log Squared Error cannot be used when targets "
                         "contain negative values.")
Member

After this validation I think we can reuse mean_squared_error by passing the log values?

(There will be an additional check on y, but it will save us 10 lines of code)...

@amueller WDYT?

Author

@raghavrv it will break this method's test if mean_squared_error ever gets broken in the future. But then I think your suggestion is more appropriate because:

  1. It satisfies the DRY principle.
  2. As this metric is essentially adapted from mean_squared_error, its behavior can mirror that method, so there is no issue if one test fails due to the broken underlying method.

I'm tentatively choosing the path that is consistent with Don't Repeat Yourself and saves some lines of code. I'll amend my commit accordingly if @amueller thinks otherwise.
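The delegation approach discussed here can be sketched as follows. This is a simplified standalone sketch with a local stand-in for sklearn's mean_squared_error, not the merged code (which also runs _check_reg_targets and handles sample_weight and multioutput):

```python
import numpy as np

def mean_squared_error(y_true, y_pred):
    # Local stand-in for sklearn.metrics.mean_squared_error.
    return float(np.mean((np.asarray(y_true) - np.asarray(y_pred)) ** 2))

def mean_squared_log_error(y_true, y_pred):
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    if (y_true < 0).any() or (y_pred < 0).any():
        raise ValueError("Mean Squared Logarithmic Error cannot be used when "
                         "targets contain negative values.")
    # DRY: reuse MSE on log1p-transformed targets instead of repeating it.
    return mean_squared_error(np.log1p(y_true), np.log1p(y_pred))
```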

@raghavrv raghavrv added this to the 0.19 milestone Oct 30, 2016
@kdexd
Author
kdexd commented Oct 31, 2016

Documentation of mean_squared_error in current master renders like this:

[screenshot: rendered docs for mean_squared_error]

There are some inconsistencies; my build after the changes you suggested looks like this (mean_squared_log_error):

[screenshot: rendered docs for mean_squared_log_error]

To keep the diffs in this PR specific to only one metric, I am leaving the other docstrings untouched for a while; I'll take them up in a separate documentation cleanup issue. I have rephrased the line you reviewed and reused mean_squared_error as well. Thanks!

@raghavrv
Member
raghavrv commented Nov 1, 2016

Thanks for the screenshot of the doc!

I'll be taking them up in a separate documentation cleanup issue.

Much appreciated.

@raghavrv
Member
raghavrv commented Nov 1, 2016

I think it should also be added to the scorer so users can readily refer to it by neg_mean_squared_log_error...
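For context, sklearn scorers follow a "greater is better" convention, so error metrics are registered negated (hence the neg_ prefix). A minimal sketch of the callable shape such a scorer takes, simplified; the real registration goes through sklearn's make_scorer machinery:

```python
import numpy as np

def msle(y_true, y_pred):
    return float(np.mean((np.log1p(y_true) - np.log1p(y_pred)) ** 2))

def neg_mean_squared_log_error(estimator, X, y):
    # Negate the loss so that model selection can always maximize scores.
    y_pred = np.asarray(estimator.predict(X), dtype=float)
    return -msle(np.asarray(y, dtype=float), y_pred)
```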

@kdexd
Author
kdexd commented Nov 2, 2016

@raghavrv It looks like there is a renaming scheduled for regression metrics similar to this one. For the sake of uniformity, I have added a deprecation message to mean_squared_log_error_scorer just like mean_squared_error_scorer and the others. Let me know if I should not include that, and I will amend the commit accordingly, thanks!

@jnothman
Member
jnothman commented Nov 2, 2016

No, don't add a deprecated version. That's only there for people using
features in older versions.


@kdexd
Author
kdexd commented Nov 2, 2016

Hello @jnothman, @raghavrv:
I have made the needed additions and modifications in the PR. I'm up for anything else that should go in here; please have a look 😄

assert_almost_equal(mean_absolute_error([0.], [0.]), 0.00, 2)
assert_almost_equal(median_absolute_error([0.], [0.]), 0.00, 2)
assert_almost_equal(explained_variance_score([0.], [0.]), 1.00, 2)
assert_almost_equal(r2_score([0., 1], [0., 1]), 1.00, 2)
assert_raises(ValueError, mean_squared_log_error, [-1.], [-1.])
Member

Can you also check for the error message to be sure...

Author

@raghavrv ✅ Done!

@jnothman
Member
jnothman commented Nov 6, 2016

Kaggle calls this "[root] mean squared logarithmic error", not "[root] mean squared log error" which sounds like it's a function of the log of the error. I think this is an important distinction. I'm not sure if you need to rename the function and scorer to reflect this, but at least the documentation needs to be absolutely clear.

\text{MSLE}(y, \hat{y}) = \frac{1}{n_\text{samples}} \sum_{i=0}^{n_\text{samples} - 1} (\log (1 + y_i) - \log (1 +
\hat{y}_i) )^2.

Here is a small example of usage of the :func:`mean_squared_log_error`
Member

Kaggle's note that "RMSLE penalizes an under-predicted estimate greater than an over-predicted estimate" may be valuable here.
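That asymmetry is easy to verify numerically: under-predicting a target of 100 by 50 costs more than over-predicting it by 50 (a quick standalone check, not part of the PR):

```python
import numpy as np

def msle(y_true, y_pred):
    return float(np.mean((np.log1p(np.asarray(y_true, dtype=float))
                          - np.log1p(np.asarray(y_pred, dtype=float))) ** 2))

under = msle([100.0], [50.0])   # under-prediction by 50, ≈ 0.467
over = msle([100.0], [150.0])   # over-prediction by 50,  ≈ 0.162
print(under, over)              # the under-prediction is penalized more
```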

----------------------

The :func:`mean_squared_log_error` function computes a risk metric corresponding
to the expected value of the logarithmic squared (quadratic) error or loss.
Member

I think you want "squared logarithmic" rather than "logarithmic squared".

Author

Oops, thanks for pointing this out. I would have missed it completely! Changing it.


.. math::

\text{MSLE}(y, \hat{y}) = \frac{1}{n_\text{samples}} \sum_{i=0}^{n_\text{samples} - 1} (\log (1 + y_i) - \log (1 +
Member

I presume this is meant to be applicable for non-negative regression targets? This should be stated. I think you should also give some sense of when this measure should be used, presumably for regressions over population counts and similar (i.e. targets with exponential growth).

Author

Yes, this would be good information to include in our user guide.

Member

Also would be good to be clear what base we use for the log.

    y_type, y_true, y_pred, multioutput = _check_reg_targets(
        y_true, y_pred, multioutput)

    if not (y_true >= 0).all() and not (y_pred >= 0).all():
Member

Yes, my reading of the equation agrees that it's designed for non-negative values with an exponential trend.

@kdexd
Author
kdexd commented Nov 6, 2016

@jnothman "Logarithmic" made the name too long, but I'll change the names if needed. At the least, I should be clear about it in the docstrings and User Guide; I'll push the required changes soon. Also, MSE and MAE have their square roots used quite frequently, but those are not included in the scorer, so I dropped the [root]. Is it a good choice to provide RMSLE in the scorer, or is it fine this way?

def mean_squared_log_error(y_true, y_pred,
                           sample_weight=None,
                           multioutput='uniform_average'):
    """Mean squared log error regression loss
Member

For instance, here "log" -> "logarithmic"

\hat{y}_i) )^2.
\text{MSLE}(y, \hat{y}) = \frac{1}{n_\text{samples}} \sum_{i=0}^{n_\text{samples} - 1} (\log_e (1 + y_i) - \log_e (1 + \hat{y}_i) )^2.

Where :math:`\log_e (x)` means the natural logarithm of :math:`x`. This metric is best to
Member

some of these lines are much longer than we usually try to keep to (80 chars)

@kdexd
Author
kdexd commented Nov 28, 2016

Hi @raghavrv, @jnothman:

I have addressed all of your review comments and cleaned up my commit history, reducing the work to isolated sequential commits containing the implementation, tests, and documentation, one in each! Please let me know if there's anything else I should do.

@jnothman
Member

FWIW, cleaning up commit history is superfluous.

Member
@jnothman jnothman left a comment

Otherwise LGTM

y_true, y_pred, multioutput)

if not (y_true >= 0).all() and not (y_pred >= 0).all():
raise ValueError("Mean Squared Log Error cannot be used when targets "
Member

Either "logarithmic" or "mean_squared_log_error"

@@ -23,6 +24,7 @@ def test_regression_metrics(n_samples=50):
y_pred = y_true + 1

assert_almost_equal(mean_squared_error(y_true, y_pred), 1.)
assert_almost_equal(mean_squared_log_error(y_true, y_pred), 0.01915163)
Member

I'd rather have tests that explicitly check msle(x, y) = mse(ln(x), ln(y)) than tests that check against a hand-calculated number.

Author

Great, although I guess you mean ln(1+x)

Member

yes, that

Author

Coming up in 5 minutes 😄

Author

@jnothman Done! I was skeptical because, if mean_squared_error actually became faulty, these tests would still pass, since we are doing the same thing internally.
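The suggested property can be checked directly on random non-negative data; a standalone sketch under local definitions (not the PR's actual test file, which compares sklearn's two metric functions):

```python
import numpy as np

def msle(y_true, y_pred):
    # Direct definition: mean of squared differences of log1p values.
    return float(np.mean((np.log1p(y_true) - np.log1p(y_pred)) ** 2))

def mse(y_true, y_pred):
    return float(np.mean((y_true - y_pred) ** 2))

rng = np.random.RandomState(0)
y_true = rng.exponential(size=100)  # non-negative, exponentially distributed targets
y_pred = rng.exponential(size=100)

# The identity under test: msle(x, y) == mse(log(1 + x), log(1 + y)).
assert np.isclose(msle(y_true, y_pred), mse(np.log1p(y_true), np.log1p(y_pred)))
```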

@jnothman
Member
jnothman commented Nov 29, 2016 via email

@kdexd
Author
kdexd commented Nov 29, 2016

Oh yes, if mean_squared_error were broken then its own test would fail!

@jnothman
Member

LGTM

@jnothman jnothman changed the title [MRG] Add new regression metric - Mean Squared Log Error [MRG+1] Add new regression metric - Mean Squared Log Error Nov 29, 2016
Member
@amueller amueller left a comment

LGTM apart from nitpick. Would you mind fixing that?


Parameters
----------
y_true : array-like of shape = (n_samples) or (n_samples, n_outputs)
Member

nitpick: you should write (n_samples,) because it's a tuple. (also everywhere below where there's a tuple with one element)

Author

@amueller I read your comment after the PR was merged; I think I will handle this in a larger documentation-consistency effort, though. @raghavrv already directed me to a matching issue for that. I will work on one more PR that is already open before starting it.

@jnothman jnothman merged commit cb6a366 into scikit-learn:master Nov 30, 2016
@jnothman
Member

Thanks, @karandesai-96

@jnothman
Member
jnothman commented Nov 30, 2016 via email

@kdexd
Author
kdexd commented Nov 30, 2016

Feels good to contribute to the community; thanks @jnothman @amueller @raghavrv for the review! 😄

@amueller
Member

@jnothman no worries, it was a nitpick of the highest order ;)

@kdexd kdexd deleted the msle-metric branch December 2, 2016 18:02
@kdexd
Author
kdexd commented Dec 22, 2016

Hi, I was wondering whether this should go in CHANGELOG for next release.

@jnothman
Member

With apologies, we forgot to ask you to add a changelog entry here. Please submit a new PR with it. Thanks.

@kdexd
Author
kdexd commented Dec 22, 2016

@jnothman: Sure, I'll do that in a moment. Thanks for the heads-up.

sergeyf pushed a commit to sergeyf/scikit-learn that referenced this pull request Feb 28, 2017
…arn#7655)

* ENH Implement mean squared log error in sklearn.metrics.regression
* TST Add tests for mean squared log error.
* DOC Write user guide and docstring about mean squared log error.
* ENH Add neg_mean_squared_log_error in metrics.scorer
@Przemo10 Przemo10 mentioned this pull request Mar 17, 2017
Sundrique pushed a commit to Sundrique/scikit-learn that referenced this pull request Jun 14, 2017
NelleV pushed a commit to NelleV/scikit-learn that referenced this pull request Aug 11, 2017
paulha pushed a commit to paulha/scikit-learn that referenced this pull request Aug 19, 2017
maskani-moh pushed a commit to maskani-moh/scikit-learn that referenced this pull request Nov 15, 2017