torch.finfo doesn't work for complex #35954
Comments
I'd like to work on this issue.
@Justin-Huber thank you :) You can start by looking at this file: https://github.com/pytorch/pytorch/blob/master/torch/csrc/TypeInfo.cpp. Let me know if you have any questions or need any help, and add me as a reviewer when you have a working PR :D
Sounds good, thanks!
@Justin-Huber Did you get a crack at this? I was wondering if maybe we could both work on it?
@jdklee No, I haven't; that sounds good to me! Let me know when you fork it and we can work on it together :)
On further thought, even though NumPy supports finfo (floating point info) for complex dtypes, I think we should not support torch.finfo for complex dtypes, since complex_tensor.is_floating_point() returns false. Perhaps we could add torch.cinfo for complex dtypes instead? As an FYI, this is NumPy's current behavior:
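A minimal reproduction (NumPy resolves finfo for a complex dtype to its underlying real dtype):

```python
import numpy as np

# NumPy accepts complex dtypes and reports the limits of the
# corresponding real dtype:
np.finfo(np.complex64).dtype   # dtype('float32')
np.finfo(np.complex128).dtype  # dtype('float64')
np.finfo(np.complex64).eps     # 1.1920929e-07, i.e. float32's eps
```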
I agree that conceptually it would be more elegant for PyTorch (in a bubble) to have both finfo and cinfo, with finfo not supporting complex types. In practice, though, it seems fine to have finfo return values for complex dtypes, since the user is explicitly requesting them (making the risk of user confusion very low), and we greatly prefer being compatible with NumPy where possible. We could also add a cinfo, too, if we wanted, even though its functionality would be redundant.
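For illustration, a minimal sketch of what that NumPy-compatible behavior would look like on the torch side (assuming finfo resolves a complex dtype to its real counterpart; these calls would error before such support lands):

```python
import torch

# Assumed NumPy-compatible behavior: finfo on a complex dtype
# reports the limits of the underlying real dtype.
assert torch.finfo(torch.complex64).eps == torch.finfo(torch.float32).eps
assert torch.finfo(torch.complex128).max == torch.finfo(torch.float64).max
```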
I'm finally getting around to this and have a few questions. Are there any tests or documentation for finfo? Is the expected behavior the same as NumPy's finfo, and are the only complex types np.complex_, np.complex128, and np.complex64?
@Justin-Huber There are existing tests for finfo in test/test_type_info.py, and it's documented here: https://pytorch.org/docs/master/type_info.html?highlight=finfo#torch.torch.finfo. I would focus on complex64 and complex128, and yes, the expected behavior is the same as NumPy's. It'd be great to see a PR for this.
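For reference, this is the existing finfo API that those tests and docs cover (floating point dtypes only at this point):

```python
import torch

fi = torch.finfo(torch.float32)
fi.bits  # 32, number of bits in the type
fi.eps   # 1.1920928955078125e-07, smallest x with 1.0 + x != 1.0
fi.max   # 3.4028234663852886e+38, largest representable float32
```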
In https://github.com/pytorch/pytorch/blob/master/test/test_type_info.py#L40-L43, what's the reasoning behind constructing a tensor only to get the dtype back from it? Is it to get the corresponding NumPy dtype? I was looking at #33152 and saw that complex tensors were disabled for the current release, so I was wondering whether there's another build I should be working from, or if just skipping the tensor construction would be fine.
There's a lot of historic behavior in this test, and I think you're right: they're constructing a NumPy array as a way of accessing NumPy's finfo. In torch/testing/_internal/common_utils.py we actually have torch_to_numpy_dtype_dict, which will map torch dtypes to NumPy dtypes for you. You should work on the master branch, which lets you construct complex tensors.
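A minimal sketch of the kind of test this enables, assuming torch.finfo gains complex support (the exact assertions are illustrative):

```python
import numpy as np
import torch
from torch.testing._internal.common_utils import torch_to_numpy_dtype_dict

# Compare torch.finfo with NumPy's finfo via the dtype mapping,
# with no tensor construction required.
for torch_dtype in (torch.complex64, torch.complex128):
    np_dtype = torch_to_numpy_dtype_dict[torch_dtype]
    torch_info, np_info = torch.finfo(torch_dtype), np.finfo(np_dtype)
    assert torch_info.eps == np_info.eps
    assert torch_info.max == np_info.max
    assert torch_info.min == np_info.min
```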
Here's what I have so far: master...Justin-Huber:master. Should the default dtype support complex too?
Hey @Justin-Huber, hope you had a relaxing weekend. No, the default dtype already supports too many dtypes; we probably should have restricted it to float32 and float64. By the way, you can also open a draft PR with a [WIP] tag on it. That'll make it easy to track the conversation and look at the code together. Add me as a reviewer when you do.
It has been a while since the last update on this issue. I self-assigned to grab this for bootcamp. Let me know if anyone already has a PR for this.
I think we fixed this in #40488 and failed to close the issue.
cc @ezyang @anjali411 @dylanbespalko