Cleanup scales #7144
Conversation
return InvertedLog10Transform()

with np.errstate(invalid="ignore"):
    a = np.where(a <= 0, self._fill_value, a)
np.log(a, a)
nitpick: a = np.log(a) is much more common and readable IMO
There was a comment about returning a itself from _mask_non_positives, i.e. saving a copy. I make an extra copy in where but get it back here. I don't mind just writing this as return np.log(a) / np.log(self.base) + 1 either -- it's probably a premature optimization anyways...
Recommendation: np.log(a, out=a) keeps the memory and speed efficiency and improves the readability.
I personally tend to favor readability over memory and speed efficiency, but OK…
I think the new version is not too bad:
return np.divide(np.log(a, out=a), np.log(self.base), out=a)
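As a quick sanity check, here is a minimal sketch (with a hypothetical input array and base 10) showing that the in-place form suggested above computes the same values as the plain, readable expression; the only difference is that it reuses the input buffer instead of allocating new arrays:

```python
import numpy as np

a = np.array([1.0, 10.0, 100.0])
base = 10.0

# Plain, readable form: allocates a new array for the result.
plain = np.log(a.copy()) / np.log(base)

# In-place form: both the log and the division write back into a's buffer.
inplace = np.divide(np.log(a, out=a), np.log(base), out=a)

print(np.allclose(plain, inplace))
```

Note that after this runs, `inplace` is the same object as `a`, which is exactly the copy-saving behavior being discussed.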
a = np.where(a <= 0, self._fill_value, a)
np.log(a, a)
a /= np.log(self.base)
a += 1
Why do you add one here?
... because the previous version did so (by multiplying by base before taking the log, which comes down to the same thing)... I didn't figure out why, so I just left it, but apparently it's not needed. Removed.
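The equivalence mentioned here follows from log(base * x) = log(base) + log(x), so premultiplying by the base before taking the log shifts the transformed values by exactly 1. A small sketch with hypothetical inputs:

```python
import numpy as np

x = np.array([0.5, 1.0, 2.0, 10.0])
base = 10.0

# Previous approach: multiply by the base before taking the log.
old = np.log(base * x) / np.log(base)

# Equivalent approach: add 1 after dividing by log(base).
new = np.log(x) / np.log(base) + 1

print(np.allclose(old, new))
```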
Eh… That was an odd choice by previous developers, and somewhat confusing considering the names of the functions. Thanks for clarifying.
It looks like this practice of multiplying by the base in the forward transform and dividing by it in the inverse dates back to 8327bd4. I don't understand it, but it seems like it would have a substantial effect on log plots. @mdboom, do you remember why this was being done? Is there something that depends on it?
def transform_non_affine(self, a):
-   return ma.power(10.0, a) / 10.0
+   return ma.power(self.base, a - 1)
Why do you remove 1 here? (It is obviously linked to the +1 up there…)
It looks like we need tests to ensure that the inverse actually reverses the action of the transform for any transform with an inverse.
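A round-trip test of the kind proposed here could be sketched as follows. The standalone forward/inverse functions below are hypothetical simplifications of the transform classes under discussion (without masking or the +1/-1 offsets), just to show the shape of such a test:

```python
import numpy as np

def forward(a, base=10.0):
    # Simplified forward log transform (no masking of non-positives).
    return np.log(a) / np.log(base)

def inverse(a, base=10.0):
    # Simplified inverse transform.
    return np.power(base, a)

# The inverse should undo the forward transform over the positive reals.
x = np.logspace(-3, 3, 7)
roundtrip = inverse(forward(x))
print(np.allclose(roundtrip, x))
```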
Force-pushed 63af7a7 to f8b1a9a
@NelleV, can you revoke your "approval"? It is misleading, given that you pointed out a major bug.
I'm not sure which "major bug" you're talking about. I think as long as we either keep the +/-1 on both sides or get rid of it on both sides, we're good?
Tests of the inverses sound like a good idea. I must say I am lost with the +1/-1 issue; I can't see it in the diff any more.
Sorry, the problem is that the review comments on this page are showing an outdated diff. The current proposed modification looks fine to me--but I still think we need to understand the consequences, if any, of the change in output that these modifications make. Doesn't it make a substantive difference somewhere in mpl when the log transform returns log10(x) instead of 1 + log10(x)?
@Kojoley I don't see how this is related?
I'm wondering whether the +1 -1 business is actually a bug that has been present for nearly a decade, but that has mostly subtle effects. I'm also wondering whether it is related to #6915. |
@anntzer Nevermind. @efiring https://en.wikipedia.org/wiki/Logit |
I am attempting to cancel my approval :(
Could this be milestoned to 2.0? This gets rid of a spurious warning while maintaining full backward compatibility.
Milestoned as 2.0.1 so it will be backported, but it will not block 2.0.
@anntzer I'm inclined to just merge this ASAP to be sure it gets into 2.0. It looks like the test failure is a very slight difference in one image. Would you replace the test image, please, so we can get the tests passing and proceed?
Ping me when #7410 is merged. I'm pretty sure the diff is due to some floating point imprecision. For example, if I subtract 1 from the return value of LogTransformBase.transform_non_affine and add 1 back to the return value of InvertedLogTransformBase.transform_non_affine (what the previous version used to do), the RMS error slightly increases to 0.010.
Done.
base = 10.0

def inverted(self):
    return InvertedLog10Transform()
You could rewrite this like:

class Log10Transform(LogTransformBase):
    base = 10.0
    inverted = InvertedLog10Transform
No, because InvertedLog10Transform is not defined yet at that point. (InvertedLog10Transform could define inverted as a reference to the Log10Transform class, but I decided to keep the definitions symmetric.)
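The definition-order issue can be illustrated with a minimal sketch (stub classes standing in for the real transforms): a class attribute is evaluated at class-definition time, so it cannot reference a class defined later, whereas a method body defers the name lookup until it is called:

```python
class Log10Transform:
    base = 10.0
    # inverted = InvertedLog10Transform   # would raise NameError here:
    #                                     # the name is not yet defined.
    def inverted(self):
        # The name is only looked up when the method is called,
        # by which point InvertedLog10Transform exists.
        return InvertedLog10Transform()

class InvertedLog10Transform:
    base = 10.0
    def inverted(self):
        return Log10Transform()

# Round-tripping through both classes works once both are defined.
print(type(Log10Transform().inverted()).__name__)
```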
Ah, yes. You are right.
Thanks @anntzer
Much improved! Thanks for this.
Backported to v2.x as dc9e7ec
I have broken
Fixes #7143, together with some cleanups (in a separate commit).
There's a test failure (RMS=0.010) I can reproduce locally but I don't visually see any differences in the plots...
By the way, the 1-1e-300 in the clipping of logit transforms is not very useful...