ENH/WIP: add a max_abs ufunc #5509
Conversation
Thanks for writing this!

```c
int cmp;

absi1 = PyNumber_Absolute(i1);
absi2 = PyNumber_Absolute(i2);
```
these can probably raise exceptions and return NULL which needs handling (and a test)
Shouldn't it be named abs_max? The absolute is applied first, similar to other functions of this type, like square sum.
I'd thought that maxabs is a thing, kind of like how logsumexp is a thing. One example is https://software.intel.com/en-us/node/502168. But I may be wrong.
Hm, maximum of absolute, sum of squares: I guess it makes sense. If other libraries use this ordering it's probably best not to deviate.
Instead of adding lots of new functions, it would be kind of nice to have a sort of preprocess argument for ufuncs that is another ufunc. Then you could do a kind of explicit, unreadable lazy evaluation. On second thought, probably not such a great idea.
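The idea above can be emulated today, though without the fusion benefit the PR is after. A rough sketch: the `preprocess=` keyword shown in the comment is hypothetical and does not exist in NumPy, and `fused_reduce` is an illustrative helper, not part of this PR.

```python
import numpy as np

# Hypothetical spelling the comment hints at:
#   np.maximum.reduce(a, preprocess=np.absolute)   # NOT a real NumPy API
# A rough emulation that at least bounds the temporary to one chunk:
def fused_reduce(reducer, preprocess, a, chunk=8192):
    """Apply `preprocess` elementwise, then reduce with `reducer`, chunk by chunk."""
    out = None
    for start in range(0, a.size, chunk):
        part = reducer.reduce(preprocess(a[start:start + chunk]))
        out = part if out is None else reducer(out, part)
    return out

a = np.array([1.0, -5.0, 3.0])
print(fused_reduce(np.maximum, np.absolute, a))  # 5.0
```

This still evaluates eagerly per chunk; real fusion would happen inside the inner loop, which is what the dedicated ufunc in this PR does.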
This reminds me of the discussion near #5218 (comment).
I think you're about 20% of the way to reinventing numexpr? Which is not a bad thing; a version of numexpr that worked with the regular ...

Nathaniel J. Smith
Basically we'd need two things:
numpy_ufunc gets us part of the way there... Of course one could get a minimum-viable-product by starting with ...
This function computes maximum(absolute(x), absolute(y)), which, when used as a reduction, is the inf-norm of a vector. Providing a single-call ufunc for this operation is 2x-3x faster than the composite operation. For large array sizes, this also eliminates a full-sized temporary. On x86_64 this operation is similar in speed to maximum for contiguous reductions.
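To make the semantics concrete, here is the composite operation the proposed ufunc fuses, written with existing NumPy calls (an illustrative sketch, not code from the PR):

```python
import numpy as np

x = np.array([3.0, -7.0, 2.0])
y = np.array([-4.0, 1.0, 9.0])

# Elementwise: what max_abs(x, y) would compute in a single pass.
composite = np.maximum(np.absolute(x), np.absolute(y))

# As a reduction over one vector, the same pattern is the inf-norm.
v = np.array([1.0, -5.0, 3.0])
inf_norm = np.maximum.reduce(np.absolute(v))
assert inf_norm == np.linalg.norm(v, np.inf)
```

Note that `np.absolute(v)` here materializes the full-size temporary that the fused ufunc avoids.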
I've removed the code that depends on the float representation from loops.c.src. However, this is no longer faster than just calling ... This version is about a factor of 2 slower for the non-contiguous case. I'm not sure what kind of code changes, if any, can regain that speed in a fully portable way. The script I've been using to benchmark things is here: https://gist.github.com/ewmoore/5f34c148b00ed461057d

Although I think allowing arbitrary combinations of ufuncs would be nice and might make a good GSoC project, I'm not going to have time to do that. So if we'd rather build that instead, let's close this. It's also possible to only expose this function (and a hypothetical ...)
Rejigger tests by closing and opening. |
I think it's reasonable not to do everything in the maximum amount of generality possible. For example, there's a short list of functions in https://software.intel.com/en-us/node/502164. Personally I am most interested in the inf-norm and the 1-norm, which could be implemented as ...
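Those two norms, spelled with existing NumPy reductions, look as follows. This is a hedged sketch of what the fused ufuncs would replace; `max_abs` and `sum_abs` are the proposed names from this discussion, not existing functions.

```python
import numpy as np

v = np.array([1.0, -5.0, 3.0])

# inf-norm: what max_abs.reduce(v) would compute in one pass.
inf_norm = np.maximum.reduce(np.absolute(v))  # composite version, with a temporary

# 1-norm: what a hypothetical sum_abs.reduce(v) would compute.
one_norm = np.add.reduce(np.absolute(v))

assert inf_norm == np.linalg.norm(v, np.inf)
assert one_norm == np.linalg.norm(v, 1)
```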
In my opinion there could be value in not waiting for arbitrary combinations of ufuncs.
This seems like a good idea to me.
@ewmoore How come? |
Mostly because I don't intend to work on this further right now. I think ...
I'm very sympathetic to the idea that having all of these ...
This is a follow-on to our discussion in #5420 about adding sum_abs and max_abs ufuncs. Testing here shows this version to be almost as fast as a raw maximum.reduce for float64 and float32, in both the SIMD loop and the BINARY_REDUCE_LOOP paths.
Still needs tests.
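The gist linked earlier is the benchmark actually used here; as a minimal stand-in, a timing comparison along these lines shows the cost of the composite version's temporary. The array size and repeat count below are arbitrary choices, not from the PR.

```python
import timeit

import numpy as np

a = np.random.rand(1_000_000) - 0.5

# Composite: allocates a full-size temporary for np.absolute(a).
t_composite = timeit.timeit(lambda: np.maximum.reduce(np.absolute(a)), number=20)

# Baseline: a raw maximum.reduce, the speed target quoted above.
t_baseline = timeit.timeit(lambda: np.maximum.reduce(a), number=20)

print(f"composite: {t_composite:.4f}s  baseline: {t_baseline:.4f}s")
```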